Category: llm

  • The illusion of homo rationalis: Large language models assume humans are rational decision-makers

    LARGE LANGUAGE MODELS (LLMs) have shown a remarkable ability to mimic human communication, making them increasingly valuable tools for everything from content creation to customer service. However, recent research reveals something unexpected about these AI systems: they consistently overestimate human rationality. When predicting human choices, they expect people to be far more logical than they actually are, a bias that mirrors our own tendency to overestimate the rationality of others.

  • The consciousness conundrum: From science fiction to silicon minds

    In the climactic scene of the 1982 film Blade Runner, a dying artificial being delivers a soliloquy about the memories he will lose: “All those moments will be lost in time, like tears in rain.” The scene poses a provocative question: Can a machine truly experience loss? Four decades later, as artificial intelligence (AI) systems become increasingly sophisticated, this philosophical puzzle has evolved from science fiction into a pressing technological and ethical challenge.

    The quest to determine machine consciousness has moved from Hollywood to Silicon Valley. As large language models (LLMs) engage in increasingly human-like conversations, they force a reckoning with questions first posed by Philip K. Dick’s “Do Androids Dream of Electric Sheep?”, the novel that inspired “Blade Runner.” The challenge lies not merely in creating machines that can simulate consciousness but in developing reliable methods to detect genuine synthetic sentience.

  • The Fourth Estate’s Feedback Loop – When Reporting Shapes Reality

    IN HIS SEMINAL work “The Open Society and Its Enemies,” Karl Popper championed societies open to criticism and peaceful change. Such openness, he argued, allows democratic institutions to identify and address their shortcomings. That capacity is being tested as never before. The twin pillars of democratic knowledge creation—journalism and science—are undergoing profound structural changes that threaten their ability to perform their essential functions.

    The transformation of scientific research offers a telling example. The cutting edge of fields like artificial intelligence, quantum computing, and biotechnology has shifted from academia to industrial laboratories. Companies like Google, IBM, and their ilk now drive progress in these domains, backed by resources that dwarf those of traditional universities. This shift raises questions about preserving the open, falsification-based scientific process that Popper identified as crucial to knowledge creation.

  • Are we still sure that language makes us human?

    Understanding LLMs as Tools, Not Agents

    In recent years, Large Language Models (LLMs) like ChatGPT have captured the public imagination with their ability to generate human-like text, leading to bold claims about machine consciousness and artificial linguistic capabilities. These claims often suggest that with enough data and computational power, machines might achieve genuine language understanding comparable to that of humans. However, examining what human language entails reveals fundamental limitations in this perspective. Are LLMs really on the path to becoming conscious linguistic agents, or do we misunderstand both the nature of language and the capabilities of these sophisticated pattern-matching systems?

  • From Blade Runner to Large Language Models: Testing for Machine Consciousness

    In 1982, the film Blade Runner presented a world where artificial beings, called replicants, were virtually indistinguishable from humans. The film’s central tension revolves around a profound question: How can we tell if an artificial mind is truly conscious? The replicants in the film display emotion, reasoning, and even empathy, yet they’re dismissed as mere machines – much as we might view today’s artificial intelligence systems. When one replicant, facing his final moments, says “I’ve seen things you people wouldn’t believe,” we’re forced to confront the possibility that these artificial beings might have genuine inner experiences, real consciousness.

    Philip K. Dick’s 1968 novel “Do Androids Dream of Electric Sheep?” raised profound questions about consciousness and what makes us human. The book, which later inspired the film Blade Runner, follows bounty hunter Rick Deckard as he pursues androids so sophisticated they’re nearly indistinguishable from humans. The title itself poses one of the central questions we still grapple with today: Can artificial beings have genuine inner experiences?

    This question has moved from science fiction into reality. As we interact with increasingly sophisticated large language models (LLMs), we face similar challenges: How can we tell if these systems possess genuine consciousness or merely convincingly simulate it?
