Category: philosophy

  • Statistical Thinking as Philosophy: Essential Readings – Part I.

    “Philosophy of science without history of science is empty; history of science without philosophy of science is blind.” — Imre Lakatos

    Statistics isn’t just a collection of mathematical techniques—it’s a way of thinking about the world, addressing uncertainty, and drawing conclusions from incomplete information. As data scientists, machine learning engineers, and AI practitioners, we often apply statistical methods without reflecting on their theoretical foundations. Yet our work implicitly embodies philosophical stances about knowledge, evidence, and inference.

    This series presents foundational readings that shed light on the philosophical aspects of statistics. They are not intended to turn data practitioners into philosophers, but to offer accessible ways to reflect on the assumptions that underlie our daily work.

    (more…)
  • Building Resilient Tech Teams: The Theory-Building Approach

    In this article, I argue that tech teams’ success depends not only on their technical skills but also on their ability to collectively build and refine theories about their work. Drawing from Peter Naur’s Programming as Theory Building, I will explore how shared understanding, tacit knowledge, and cognitive diversity contribute to resilient teams capable of delivering sustainable solutions.

    Many tech teams struggle despite having talented individuals. They deliver initial solutions efficiently but falter when requirements change, key team members leave, or when they need to pivot. These failures often stem not from technical incompetence but from insufficient theory-building—the lack of a shared, evolving understanding that transcends documentation and enables adaptation.

    (more…)
  • The Living Word: Why AI Can’t Walk the Talk

    Three years ago, in a bland office at Google’s headquarters, an engineer became convinced that the company’s chatbot had developed consciousness. Blake Lemoine’s widely publicized claims were swiftly dismissed, yet they exemplify a persistent muddle in society’s thinking about artificial intelligence. As language models like ChatGPT churn out increasingly convincing prose, a crucial question emerges: Have machines finally cracked the code of human language?

    The evidence seems compelling at first glance. Modern AI systems engage in witty banter, write passable poetry, and help craft legal briefs. This has led some tech evangelists to revive a bold claim first made by Chris Anderson, former editor of Wired magazine, in 2008: that with enough data, theory becomes unnecessary. “Correlation supersedes causation,” he declared, suggesting that patterns alone could reveal all there is to know about the world. Applied to language, this thinking suggests that by ingesting enough text, machines could master human communication.

    (more…)
  • The consciousness conundrum: From science fiction to silicon minds

    In the climactic scene of the 1982 film Blade Runner, a dying artificial being delivers a soliloquy about the memories he will lose: “All those moments will be lost in time, like tears in rain.” The scene poses a provocative question: Can a machine truly experience loss? Four decades later, as artificial intelligence (AI) systems become increasingly sophisticated, this philosophical puzzle has evolved from science fiction into a pressing technological and ethical challenge.

    The quest to determine machine consciousness has moved from Hollywood to Silicon Valley. As large language models (LLMs) engage in increasingly human-like conversations, they force a reckoning with questions first posed by Philip K. Dick’s “Do Androids Dream of Electric Sheep?”, the novel that inspired “Blade Runner.” The challenge lies not merely in creating machines that can simulate consciousness but in developing reliable methods to detect genuine synthetic sentience.

    (more…)
  • The Fourth Estate’s Feedback Loop – When Reporting Shapes Reality

    In his seminal work “The Open Society and Its Enemies,” Karl Popper championed societies open to criticism and peaceful change. Such openness, he argued, allows democratic institutions to identify and address their shortcomings. That capacity is being tested as never before. The twin pillars of democratic knowledge creation—journalism and science—are undergoing profound structural changes that threaten their ability to perform their essential functions.

    The transformation of scientific research offers a telling example. The cutting edge of fields like artificial intelligence, quantum computing, and biotechnology has shifted from academia to industrial laboratories. Companies like Google, IBM, and their ilk now drive progress in these domains, backed by resources that dwarf those of traditional universities. This shift raises questions about preserving the open, falsification-based scientific process that Popper identified as crucial to knowledge creation.

    (more…)