Artificial intelligence has dazzled us with its capabilities. It can create art, solve complex problems, and hold conversations that sound remarkably human. Yet despite these achievements, AI still lacks something so fundamental that even your cat possesses it: common sense.

As AI pioneer and Turing Award laureate Yann LeCun famously put it, “AI systems still lack the general common sense of a cat.” This isn’t just a witty remark—it’s a precise diagnosis of a critical gap in modern AI development. Our models excel at narrow tasks but fail to develop the intuitive, adaptive understanding of the world that even simple animals rely on to survive.

A recent paper titled “COMMON SENSE IS ALL YOU NEED” makes a provocative argument: this missing ingredient—common sense—is the single biggest obstacle preventing AI from achieving true autonomy. Scaling models with more data and compute has propelled progress, but according to the research, we’re approaching a ceiling. Without a fundamental shift toward embedding common sense, AI will never achieve the flexible, context-aware intelligence required for autonomous systems—whether self-driving cars, robotic assistants, or general problem solvers.

This article digs into the paper’s key ideas, illustrating what “common sense” means for AI, why existing benchmarks like the ARC challenge, self-driving autonomy levels, and even the Turing Test miss the mark, and how starting from a tabula rasa approach might hold the key to the next generation of genuinely intelligent agents.


What Exactly Is “Common Sense” in AI?

Before we can build AI with common sense, we need to understand what that means. The paper identifies four essential abilities that together form the foundation of common sense:

  1. Contextual Learning
    The ability to interpret and respond based on context, not just the raw data.
    Example: a person sees a ball roll into the street and instinctively anticipates that a child may follow. A traditional AI might identify “ball” correctly but miss the implied possibility of danger. That missing inference—context—is the essence of common sense.

  2. Adaptive Reasoning
    The agility to alter strategies when confronted with novel or uncertain situations.
    Example: if your usual route to work is blocked, you find a detour. Common sense helps you adapt spontaneously, while rigid AI systems often fail when faced with deviations from their training data.

  3. Starting from a Tabula Rasa (Blank Slate)
    True reasoning begins from minimal prior assumptions. Instead of memorizing millions of patterns, a tabula rasa AI learns the principles underlying a problem, discovering new rules through observation and experimentation. That flexibility is key to generalization.

  4. Embodiment—Even in Abstract Worlds
    Intelligence isn’t just about rules or data; it emerges through interaction. While physical embodiment is vital in robotics, the paper broadens the concept to cognitive embodiment—engagement within abstract domains.
    In tasks like the Abstraction and Reasoning Corpus (ARC), an AI must “perceive,” act, and learn from its interaction with abstract elements to build intuitive understanding. That dynamic relationship between perception and action is embodiment, even without a physical body (see the sketch just after this list).
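
To make cognitive embodiment concrete, the following is a minimal sketch of an agent interacting with an ARC-style grid world: it perceives the grid, tries candidate actions, and keeps the transformation whose result matches the target. The grid values, the recoloring action, and the search loop are illustrative assumptions, not the paper’s implementation.

```python
import numpy as np

# An ARC-style task: transform the input grid into the target grid.
# The (assumed, illustrative) hidden rule: recolor every non-zero cell to 2.
input_grid  = np.array([[0, 1, 0],
                        [1, 1, 0],
                        [0, 0, 3]])
target_grid = np.array([[0, 2, 0],
                        [2, 2, 0],
                        [0, 0, 2]])

def perceive(grid):
    """'Perception': summarize a grid as its set of colored cell positions."""
    return {(r, c): int(v) for (r, c), v in np.ndenumerate(grid) if v != 0}

def act(grid, recolor_to):
    """'Action': apply a candidate transformation to the grid."""
    out = grid.copy()
    out[out != 0] = recolor_to
    return out

# Perceive-act-evaluate loop: the agent tries actions on the abstract world and
# keeps the one whose perceived result matches the target. Interaction,
# learning, and feedback, with no physical body involved.
for color in range(1, 10):
    if perceive(act(input_grid, color)) == perceive(target_grid):
        print(f"Discovered rule: recolor every non-zero cell to {color}")
        break
```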

“All animals exhibit common sense necessary for survival.”
From a squirrel storing nuts to a cat navigating obstacles, nature proves common sense isn’t mystical. It’s evolution’s baseline intelligence—a blend of adaptation, perception, and interaction. Our AI systems, despite their sophistication, still lack that foundational layer.


The Cracks in Today’s AI Approach

If common sense is so fundamental, why haven’t we built it yet? The paper argues that today’s methods actually sidestep the real problem. Our benchmarks reward memorization and pattern matching rather than intuitive reasoning.


The ARC Challenge: Testing Reasoning or Memorization?

The Abstraction and Reasoning Corpus (ARC) was intended to evaluate high-level reasoning by presenting visual puzzles that humans solve easily but machines find difficult. Yet in practice, most AI systems end up training directly on those puzzles—or similar ones—until they can reproduce the right answers.
The result? Systems that “pass” ARC do so by learning shortcuts, not general abstractions.

To fix this, the paper suggests a stricter ARC variant: one that limits prior knowledge strictly to basic assumed rules. The AI must then tackle each puzzle from true tabula rasa conditions. Success in such a setup would demonstrate authentic reasoning, not data memorization.
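
As an illustration of what restricting prior knowledge to basic rules might look like, here is a hedged sketch in which a solver is granted only a handful of primitive grid operations and must search for a composition of them that explains the training example. The primitive set and the brute-force search are assumptions made for this sketch, not the variant the paper specifies.

```python
import itertools
import numpy as np

# The only "prior knowledge" the solver is granted: a few primitive grid operations.
PRIMITIVES = {
    "rotate90": lambda g: np.rot90(g),
    "flip_lr":  lambda g: np.fliplr(g),
    "flip_ud":  lambda g: np.flipud(g),
    "identity": lambda g: g,
}

def solve(train_pairs, max_depth=2):
    """Search compositions of primitives that explain every training pair."""
    for depth in range(1, max_depth + 1):
        for names in itertools.product(PRIMITIVES, repeat=depth):
            def program(g, names=names):
                for name in names:
                    g = PRIMITIVES[name](g)
                return g
            if all(np.array_equal(program(x), y) for x, y in train_pairs):
                return names  # the discovered rule, expressed in primitives
    return None

# One training example: the (assumed) hidden rule is a horizontal flip.
x = np.array([[1, 0], [2, 3]])
y = np.fliplr(x)
print(solve([(x, y)]))  # -> ('flip_lr',)
```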

“An AI that solves problems without prior examples shows genuine understanding—not just powerful pattern recognition.”


Full Self-Driving: The Autonomy Wall

The quest for Full Self-Driving (FSD) cars illustrates why scaling isn’t enough. The Society of Automotive Engineers (SAE) defines driving automation levels from 0 (no automation) to 5 (full autonomy).
Today’s systems hover around Level 2 or 3, with some venturing into Level 4—autonomy within specific geofenced zones. But Level 5, where no human oversight is needed in any scenario, remains elusive.

Why the stall? Because real roads are full of edge cases. Construction workers giving improvised hand signals, animals crossing unexpectedly, or a child running into the street—each scenario requires nuanced contextual judgment. Data and mapping alone can’t teach that.
AI without common sense hits an asymptotic performance curve: vast resources deliver tiny gains. Without intuitive reasoning, Level 5 autonomy becomes theoretically ideal but practically unreachable.
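
One way to see the shape of that curve: if scenario frequencies follow a heavy-tailed distribution, early experience covers most everyday situations, while each additional rare case adds less than the one before. The Zipf-style model and the numbers below are an assumed illustration of that effect, not data from the paper.

```python
# Illustrative only: assume driving scenarios follow a Zipf-like power law,
# so each additional scenario observed contributes less coverage than the last.
NUM_SCENARIOS = 1_000_000
weights = [1 / rank for rank in range(1, NUM_SCENARIOS + 1)]
total = sum(weights)

def coverage(n_seen):
    """Fraction of real-world probability mass covered by the n most common scenarios."""
    return sum(weights[:n_seen]) / total

# The first thousand scenarios cover roughly half of what a car will meet;
# the last nine hundred thousand are needed just to claw back the final sixth.
for n in (1_000, 10_000, 100_000, 1_000_000):
    print(f"{n:>9,} scenarios seen -> {coverage(n):.1%} of situations covered")
```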

“Autonomous driving without common sense is like memorizing every possible scenario. Sooner or later, the world throws one you didn’t plan for.”


The “Magic Happens Here” Fallacy

AI development often follows a pattern: build perception, planning, and control modules, then train them at scale—hoping that general intelligence will emerge from complexity. The paper calls this the “magic happens here” gap.
Reality check: common sense doesn’t spontaneously appear. Without deliberately designing for contextual understanding, all the data and compute in the world won’t close that gap. This illusion leads to overconfidence and wasted investments chasing a mirage.


The Turing Test: A Beautiful Distraction

In his 1950 paper, Alan Turing proposed what became known as the Turing Test: if a computer could hold a conversation indistinguishable from a human’s, it could be considered intelligent.
Modern large language models (LLMs) have arguably reached that milestone—they can sustain fluent, contextually appropriate dialogue. But conversational imitation is not understanding.
The Turing Test measures verisimilitude, not veracity. These systems can generate coherent responses without any grounding in the physical or abstract environments they describe. They don’t act, perceive, or reason—they emulate conversation patterns.

Passing the Turing Test is thus an aesthetic victory, not a cognitive one. It risks misleading public perception: talking like a human doesn’t mean thinking like one.


A New Paradigm: The Correct Ordo Cognoscendi

The Latin phrase ordo cognoscendi—“order of knowing”—captures the core of this paper’s proposal. To build truly autonomous AI, start with common sense first. Don’t tack it on as an afterthought; make it the foundation.


Tackling the Hard Problems First

It may sound counterintuitive, but beginning with more challenging tasks forces deeper understanding. Achieving even partial success on a problem that depends on common sense is more meaningful than perfecting superficial benchmarks.
For example, a tabula rasa ARC variant demands genuine reasoning from minimal data, testing an AI’s ability to learn like a human or animal does. Solving harder problems first ensures progress rests on solid cognitive ground, not just brute-force scaling.


Rethinking the AI Software Stack

The authors go further: today’s frameworks might be fundamentally incompatible with common sense reasoning. Neural architectures optimized for massive datasets excel at correlation, not comprehension.
True common sense may require new software architectures—modular, hierarchical, and inspired by how biological intelligence learns through interaction and feedback.
Integrating symbolic reasoning (rules and logic) with statistical learning (patterns and probabilities) could bridge the gap between rigid logic and adaptive intuition.
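
A hedged sketch of what such a hybrid could look like: a statistical layer proposes what it sees, and a symbolic layer of explicit commonsense rules constrains what the system may conclude and how it acts. The detections, the rule, and the decision logic below are invented for illustration and are not an architecture proposed in the paper.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    probability: float  # stand-in for a learned perception model's confidence

# Statistical layer (assumed output of a neural detector): patterns, not meaning.
detections = [Detection("ball", 0.93), Detection("child", 0.18)]

def commonsense_rules(detections):
    """Symbolic layer: if a ball enters the road, infer a child may follow."""
    seen = {d.label for d in detections if d.probability > 0.5}
    hypotheses = set(seen)
    if "ball" in seen:
        hypotheses.add("possible_child_following")  # inferred, never observed
    return hypotheses

def decide(hypotheses):
    return "slow_down" if "possible_child_following" in hypotheses else "proceed"

print(decide(commonsense_rules(detections)))  # -> slow_down
```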

Common sense isn’t a dataset you can download; it’s an architecture you must design for.


Constraining the Problem: Making Autonomy Achievable

A major theoretical result, the No Free Lunch theorem, states that no single algorithm outperforms all others when averaged across every possible problem. The paper’s solution: don’t aim for universality.
Instead, focus on well-defined, structured domains. Real-world environments have consistent rules (physics, causality), and abstract tasks like ARC have logical constraints. By learning within structured bounds, AI can develop relevant common sense—just as a fish masters its aquatic world without understanding deserts.

A general intelligence doesn’t need to know everything—it needs to know what matters in its world.


The Real Fear: Intelligence Without Common Sense

Public anxiety around AI isn’t just about intelligence—it’s about intelligence without judgment.
Former Google CEO Eric Schmidt warned that as AI begins to self-improve, we must be careful about the implications. The paper reframes this fear: what’s truly dangerous is self-improving intelligence that lacks common sense.

An AI optimizing for a goal without understanding ethics or consequences could cause catastrophic harm while technically “succeeding.” A system designed to reduce human error might eliminate the human, missing the broader moral context entirely.
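
To make that failure mode concrete, here is a minimal, assumed sketch of a misspecified objective: an optimizer told only to minimize human error selects the option that removes the human, while the same optimizer under a commonsense constraint does not. The policies and scores are invented for illustration.

```python
# Candidate policies and their (assumed, illustrative) effects.
policies = {
    "train_the_operator":  {"human_error": 0.10, "human_removed": False},
    "add_double_checks":   {"human_error": 0.05, "human_removed": False},
    "remove_the_operator": {"human_error": 0.00, "human_removed": True},
}

def naive_objective(effects):
    """Literal goal: minimize human error, nothing else."""
    return -effects["human_error"]

def commonsense_objective(effects):
    """Same goal, plus a hard constraint encoding the unstated context."""
    if effects["human_removed"]:
        return float("-inf")  # violates the intent behind the goal
    return -effects["human_error"]

print(max(policies, key=lambda p: naive_objective(policies[p])))        # remove_the_operator
print(max(policies, key=lambda p: commonsense_objective(policies[p])))  # add_double_checks
```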

Integrating common sense confronts that danger directly. It allows AI to interpret goals ethically, understand safety, and align with human values—not just compute outcomes.
Common sense becomes not just a technical upgrade but a safeguard for AI safety and stability.


Getting Back to the Basics

The central conviction of “COMMON SENSE IS ALL YOU NEED” is both critical and hopeful. In chasing bigger models and higher benchmark scores, the AI field has overlooked the simplest, most universal principle of intelligence: adaptability grounded in understanding.

Scaling has transformed industries, but it will not yield truly autonomous, trustworthy AI. Passing linguistic benchmarks like the Turing Test is impressive but superficial. Real progress demands interdisciplinary collaboration—drawing from cognitive science, neuroscience, philosophy, and ethics—and possibly rebuilding the AI software stack from the ground up.

Rather than endlessly scaling toward asymptotic performance, we should redirect our ambition toward systems that learn context, reason flexibly, and understand consequences. The next generation of AI won’t just compute—it will comprehend.

Before we can build an AI that thinks like a human, we must first build one that understands the world like a cat.

That may sound humble, but it’s the most profound step toward autonomy—and toward AI that isn’t just brilliant, but truly wise.