The pursuit of artificial intelligence has been hampered for decades by a surprisingly fundamental problem: common sense. While AI systems have achieved remarkable feats in narrow domains – mastering games like Go, recognizing images and even generating text – they consistently stumble on tasks that require the kind of everyday reasoning that comes naturally to humans. This challenge isn’t new; it was identified as a core obstacle as early as 1959 by John McCarthy, the computer scientist who coined the term “artificial intelligence.”
As recent discussions have highlighted, McCarthy recognized that the difficulty wasn’t simply about processing power or algorithmic complexity. It was about replicating the vast, often unspoken, body of knowledge about how the world works – knowledge that humans acquire through experience. This “commonsense reasoning,” as it’s become known, involves understanding physical properties, social norms, and the likely consequences of actions. It’s the ability to know, for example, that elephants generally don’t fit in Olympic swimming pools – a question that recently tripped up a Google AI Overview, as noted by Gur Kimchi.
The problem, as researchers like Pat Hayes and Doug Lenat discovered, lies in the difficulty of explicitly codifying this knowledge. Hayes’s work on “naïve physics” and “ontology for liquids” attempted to formalize our understanding of the physical world, while Lenat’s ambitious Cyc project aimed to build a massive, machine-interpretable database of commonsense knowledge. Both efforts, while groundbreaking, faced immense challenges in capturing the nuance and complexity of human understanding.
The core issue isn’t a lack of data, but the *nature* of the data and how it’s used. Today’s AI systems, particularly those based on large language models, are trained on massive datasets scraped from the internet. However, this data is often biased, incomplete, and lacks the grounding in real-world experience that humans possess. As a recent article points out, current AI is built on “outdated, biased and often unlicensed data,” which means that while these systems can generate fluent and seemingly intelligent text, they often lack a true understanding of the concepts they are manipulating.
The limitations extend beyond simple factual errors. Commonsense reasoning is crucial for tasks like planning, problem-solving, and adapting to unexpected situations. An AI system without this ability can easily make illogical or dangerous decisions. Consider a self-driving car that doesn’t understand that a ball rolling into the street might be followed by a child. Or a medical diagnosis system that fails to recognize the subtle cues that a human doctor would pick up on.
The difficulty of achieving true AI isn’t merely a technical hurdle; it’s a philosophical one. It forces us to confront what we mean by “intelligence” itself. Is intelligence simply the ability to process information and generate outputs, or does it require something more – a capacity for understanding, reasoning, and adapting to the world in a meaningful way? The ongoing struggle to imbue AI with commonsense suggests that the latter may be essential.
Researchers like Yejin Choi continue to work on commonsense benchmarks, attempting to create standardized tests that can measure an AI system’s ability to reason about the world. However, even these benchmarks struggle to capture the full scope of human commonsense. The problem is further complicated by the fact that commonsense is often context-dependent and culturally specific.
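To make concrete why such benchmarks are hard to design, here is a minimal sketch of what a multiple-choice commonsense benchmark might look like. The items and the baseline here are invented for illustration (real benchmarks in this vein include thousands of crowd-sourced items); the point it demonstrates is that a shallow surface heuristic, with no understanding at all, can still score well when the wording of the choices leaks the answer.

```python
# Hypothetical benchmark items, loosely in the spirit of multiple-choice
# commonsense tests. Each item has a question, candidate answers, and the
# index of the correct answer.
BENCHMARK = [
    {
        "question": "An elephant approaches an Olympic swimming pool. What happens?",
        "choices": [
            "It fits neatly into a single lane",
            "It wades in, but it is far too large to swim laps like a person",
        ],
        "answer": 1,
    },
    {
        "question": "A ball rolls into the street ahead of a car. What should the driver expect?",
        "choices": [
            "Nothing unusual",
            "A child may run into the street after the ball",
        ],
        "answer": 1,
    },
]

def longest_choice_baseline(item):
    """A trivial baseline with zero commonsense: always pick the longest
    answer. Benchmark designers compare models against exactly this kind
    of heuristic, because surface cues often give the answer away."""
    choices = item["choices"]
    return max(range(len(choices)), key=lambda i: len(choices[i]))

def accuracy(predict, benchmark):
    """Fraction of items where the predictor picks the labeled answer."""
    correct = sum(predict(item) == item["answer"] for item in benchmark)
    return correct / len(benchmark)

print(accuracy(longest_choice_baseline, BENCHMARK))  # prints 1.0
```

On these two toy items the no-understanding baseline scores a perfect 1.0, because the correct choices also happen to be the longer ones. This is one reason even carefully built benchmarks struggle to measure genuine commonsense rather than statistical artifacts of how the items were written.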
The implications of this ongoing challenge are significant. While AI continues to advance in specific areas, the dream of creating truly intelligent machines remains elusive. As one analysis notes, there’s a tendency to overestimate the near-term prospects of “truly” intelligent machines, a fallacy that has persisted since the early days of the field. The focus now is shifting towards more pragmatic approaches, such as developing AI systems that are better at recognizing their own limitations and deferring to human judgment when necessary.
The lack of progress on commonsense reasoning also has implications for cognitive psychology. While commonsense is clearly vital for human intelligence, it has been relatively understudied in the field, partly because of the difficulty of applying traditional experimental techniques to investigate it. Understanding how humans acquire and use commonsense knowledge could provide valuable insights for building more intelligent AI systems.
The fundamental issue, as identified decades ago and still relevant today, is that replicating human-level intelligence requires more than just processing power and data. It requires a deep understanding of the world, a capacity for reasoning, and a healthy dose of common sense – qualities that remain stubbornly difficult to engineer into machines. The pursuit continues, but the path forward is clearly more complex than many initially believed.
