Crow Intelligence

AI is just a tool. To use it effectively, you must understand how humans think and communicate. We know the strengths of both natural and artificial intelligence and how to combine them for optimal results. By bridging cognitive science and AI, we create solutions that enhance human capabilities and ensure seamless interaction.

Our Approach

Just as a well-designed tool feels like an extension of your hand, AI should feel like an extension of human intelligence. The best AI systems are built on two key principles:

Human Cognition

Understanding human thought and language ensures AI integrates seamlessly with natural cognitive processes.

Advanced AI Engineering

Cutting-edge AI technology, designed with cognitive awareness, creates powerful and intuitive systems.

Are you interested?

✉️ hello@crowintelligence.org

Blog

  • Rust: Python’s New Best Friend – A Data Scientist’s Journey

    As Python continues to dominate data science, a quiet revolution is happening beneath the surface. Increasingly, Rust is powering our most critical Python tools—bringing unprecedented performance while maintaining the Python interface we know and love. This hybrid approach transforms our work as data scientists, enabling rapid development and production-grade performance.

    My journey with Rust began six years ago as a distant curiosity. I heard the name in conference talks and saw it climbing GitHub’s language popularity charts, but it remained just another programming language on my “maybe someday” list.

  • Why Probabilistic Programming? A Journey Through the Monty Hall Problem

    Even brilliant minds can be led astray by probability puzzles. When presented with the Monty Hall Problem, renowned mathematician Paul Erdős initially rejected the correct solution – and he wasn’t alone. Thousands of readers, including PhDs in mathematics and statistics, wrote angry letters to Marilyn vos Savant when she published the correct solution in Parade magazine. Their passionate resistance reveals something fascinating about how humans reason about uncertainty.

    To explore these ideas hands-on, we’ve created a Jupyter notebook that implements both traditional and probabilistic programming approaches to the Monty Hall Problem. The notebook includes code for simulating the game, modeling player behavior, and analyzing how people learn from experience.

  • The illusion of homo rationalis: Large language models assume humans are rational decision-makers

    Large language models (LLMs) have shown a remarkable ability to mimic human communication, making them increasingly valuable tools for everything from content creation to customer service. However, recent research reveals something unexpected about these AI systems: they consistently overestimate human rationality. When it comes to predicting human choices, they expect people to be far more logical than we are—a bias that mirrors our tendency to overestimate the rationality of others.

  • Building Resilient Tech Teams: The Theory-Building Approach

    In this article, I argue that tech teams’ success depends not only on their technical skills but also on their ability to collectively build and refine theories about their work. Drawing from Peter Naur’s Programming as Theory Building, I will explore how shared understanding, tacit knowledge, and cognitive diversity contribute to resilient teams capable of delivering sustainable solutions.

    Many tech teams struggle despite having talented individuals. They deliver initial solutions efficiently but falter when requirements change, key team members leave, or when they need to pivot. These failures often stem not from technical incompetence but from insufficient theory-building—the lack of a shared, evolving understanding that transcends documentation and enables adaptation.

  • The Living Word: Why AI Can’t Walk the Talk

    Three years ago, in a bland office at Google’s headquarters, an engineer became convinced that the company’s chatbot had developed consciousness. Blake Lemoine’s widely publicized claims were swiftly dismissed, yet they exemplify a persistent muddle in society’s thinking about artificial intelligence. As language models like ChatGPT churn out increasingly convincing prose, a crucial question emerges: Have machines finally cracked the code of human language?

    The evidence seems compelling at first glance. Modern AI systems engage in witty banter, write passable poetry, and help craft legal briefs. This has led some tech evangelists to revive a bold claim first made by Chris Anderson, former editor of Wired magazine, in 2008: that with enough data, theory becomes unnecessary. “Correlation supersedes causation,” he declared, suggesting that patterns alone could reveal all there is to know about the world. Applied to language, this thinking suggests that by ingesting enough text, machines could master human communication.

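The game simulation mentioned in the Monty Hall post above can be sketched in a few lines of plain Python (a minimal illustration of the traditional simulation approach — not the notebook's actual code, and all function names here are our own):

```python
import random

def play(switch: bool, doors: int = 3) -> bool:
    """Simulate one round of the Monty Hall game; return True if the player wins."""
    car = random.randrange(doors)
    choice = random.randrange(doors)
    # The host opens a door that hides a goat and is not the player's pick.
    opened = random.choice([d for d in range(doors) if d not in (choice, car)])
    if switch:
        # The player switches to the one remaining unopened door.
        choice = next(d for d in range(doors) if d not in (choice, opened))
    return choice == car

def win_rate(switch: bool, trials: int = 100_000) -> float:
    """Estimate the probability of winning under a fixed strategy."""
    return sum(play(switch) for _ in range(trials)) / trials

random.seed(0)
stay = win_rate(switch=False)
swap = win_rate(switch=True)
print(f"stay: {stay:.3f}, switch: {swap:.3f}")  # switching wins about twice as often
```

Run enough trials and the stay strategy converges to roughly 1/3 while switching converges to roughly 2/3 — the counterintuitive result that Erdős and vos Savant's readers resisted.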