Should AI Mimic the Mind or Rebuild the Brain?

By Yumari | Insights & Opinion

For decades, the quest to build truly intelligent machines has been a central drama in technology. But behind the scenes, this drama has been split between two fundamentally different philosophies. The core question isn't just how to build artificial intelligence, but what we should be trying to copy. Should we replicate the logical, symbol-based thinking of the human mind, or should we try to recreate the biological machinery of the brain itself? This question marks a major fork in the road in the evolution of AI.

At the heart of the matter is the incredible complexity of our own intelligence. The human brain, a roughly three-pound mass of tissue, contains around 100 billion neurons connected by 1,000 trillion synapses. In computational terms, it’s estimated to perform about 10^18 floating-point operations per second—orders of magnitude faster than today's most powerful personal computers. But raw processing power isn't the whole story. The real mystery is how this biological hardware gives rise to the mind. Early AI pioneers had to decide: do we focus on the hardware or the software?

The First Path: Recreating the Mind with Symbols

In the mid-20th century, the fields of artificial intelligence and cognitive science emerged together, pushing back against the idea that we could only study observable behavior. Researchers wanted to understand the computations producing that behavior—things like reasoning and representation. This shift culminated in the legendary 1956 Dartmouth Conference, widely regarded as the birthplace of AI as a field, which championed an approach now known as symbolic AI.

Two attendees, Allen Newell and Herbert Simon, proposed a powerful idea called the physical symbol system hypothesis. They argued that human intelligence is essentially just symbol manipulation. If we could teach a computer to manipulate symbols in the right way, they believed, we could make it intelligent.

Their early programs were surprisingly effective. Logic Theorist could prove mathematical theorems, sometimes finding more elegant proofs than humans had. Its successor, the General Problem Solver (GPS), was designed to solve logical problems in a human-like way using a method called means-ends analysis. Given a starting point and a goal, GPS would break the problem down into smaller steps, searching for operators that could close the gap between its current state and the desired outcome. It successfully tackled classic puzzles like the Towers of Hanoi, demonstrating that a machine could "reason" its way to a solution.
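The operator-search idea can be made concrete with the Towers of Hanoi itself. The sketch below is not Newell and Simon's program: it substitutes plain breadth-first search over the puzzle's state space for true means-ends analysis, but it illustrates the same core loop of applying operators until the gap between the current state and the goal closes.

```python
from collections import deque

def hanoi_moves(state):
    """Legal operators: move the top (smallest) disk of one peg onto a
    peg that is empty or whose top disk is larger."""
    moves = []
    for src in range(3):
        if not state[src]:
            continue
        disk = state[src][-1]
        for dst in range(3):
            if dst != src and (not state[dst] or state[dst][-1] > disk):
                moves.append((src, dst))
    return moves

def apply(state, move):
    src, dst = move
    pegs = [list(p) for p in state]
    pegs[dst].append(pegs[src].pop())
    return tuple(tuple(p) for p in pegs)

def solve(start, goal):
    """Breadth-first search: expand operators until a state matches the goal."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path
        for move in hanoi_moves(state):
            nxt = apply(state, move)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [move]))

# Three disks (3 = largest), all starting on the first peg.
start = ((3, 2, 1), (), ())
goal = ((), (), (3, 2, 1))
plan = solve(start, goal)
print(len(plan))  # the optimal 3-disk solution takes 7 moves
```

Because breadth-first search explores shallow plans first, the path it returns is the shortest one; GPS's means-ends heuristics instead picked operators by how much they reduced the current difference from the goal.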

This symbolic approach peaked in the 1970s with the rise of "expert systems." These programs aimed to capture the knowledge of human experts in a specific domain. A famous example was MYCIN, a system developed at Stanford to diagnose blood infections. It contained a knowledge base of around 600 rules and would ask a physician a series of questions before providing a list of possible diagnoses. Studies showed that MYCIN's recommendations were as good as, or even better than, those of human specialists.
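The rule-base idea is easy to sketch. MYCIN itself used backward chaining with certainty factors; the toy below forward-chains instead, and its rules are invented for illustration, not taken from MYCIN's knowledge base.

```python
# A toy expert-system rule engine. Each rule pairs a set of required
# facts with a conclusion. The rules here are hypothetical examples.
RULES = [
    ({"fever", "stiff_neck"}, "suspect_meningitis"),
    ({"gram_negative", "rod_shaped"}, "suspect_e_coli"),
    ({"suspect_meningitis", "suspect_e_coli"}, "recommend_treatment"),
]

def forward_chain(facts):
    """Fire every rule whose conditions are all satisfied, adding its
    conclusion to the fact base, until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain({"fever", "stiff_neck", "gram_negative", "rod_shaped"})
print("recommend_treatment" in derived)  # True
```

Note how the third rule fires only after the first two have added their conclusions: knowledge chains together, which is what let systems with hundreds of rules reach non-obvious diagnoses.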

However, MYCIN was never actually used in medical practice. Its non-deployment highlighted some of the early ethical and societal implications of AI, particularly around issues of liability and reliability in a medical setting—concerns that remain highly relevant today.

The Second Path: Rebuilding the Brain with Networks

For all its success in logic and reasoning, symbolic AI hit a wall. It turned out that the problems humans find most difficult, like chess and algebra, were relatively easy for computers. But the things we do effortlessly—recognizing a face, walking across a room, having a conversation—were incredibly hard for symbol-based systems. This phenomenon, known as Moravec's paradox, suggested a different approach was needed.

This led to the rise of connectionism, a school of thought built around Artificial Neural Networks (ANNs). Instead of representing knowledge with explicit symbols, ANNs are inspired by the brain's physical structure. They consist of interconnected nodes, or "neurons," that pass signals to one another. Information isn't stored in one place but is distributed across the network as a pattern of connection strengths, or "weights," much like synapses in a biological brain.

While the mathematical foundations for ANNs existed as far back as the 1940s, it wasn't until the 1980s that they really took off. Researchers showed that these networks could learn to recognize patterns and account for a wide range of psychological phenomena. To get an ANN to work, you have to train it. In a common method called supervised learning, you show the network thousands of labeled examples—say, images labeled "cat" or "dog"—and an algorithm adjusts the network's weights until it can correctly classify new, unseen images.

The real breakthrough came in the 2010s with the "deep learning revolution." By building ANNs with many layers—"deep" networks—researchers matched and even surpassed human performance on certain tasks. For instance, a deep neural network achieved 99.46% accuracy in recognizing German traffic signs, while humans scored 98.84%. Models were developed that could predict breast cancer from mammograms more accurately than certified radiologists. This marked a huge milestone in the evolution of AI.

Brain-Inspired AI: Similar but Not the Same

The parallels between deep neural networks and the brain are fascinating. The hierarchical structure of deep networks, where lower layers learn simple features like edges and higher layers learn abstract concepts like faces, closely resembles how the primate visual cortex processes information. Specific architectures are directly inspired by biology:

  • Convolutional Neural Networks (CNNs), which are excellent at image recognition, are inspired by the animal visual cortex, where neurons respond to specific regions in the visual field.
  • Residual Neural Networks (ResNets) allow for extremely deep and powerful models by using "skip connections" that mimic the architecture of pyramidal cells in the cerebral cortex.
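The "local receptive field" idea behind CNNs comes down to one operation: convolution, where each output value depends only on a small patch of the input rather than the whole image. A minimal sketch, using an invented 4x4 image and a simple edge-detecting filter:

```python
# Minimal 2-D convolution (no padding, stride 1): each output cell is a
# weighted sum over a small local patch, mirroring visual-cortex neurons
# that respond only to a limited region of the visual field.
def conv2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    oh = len(image) - kh + 1
    ow = len(image[0]) - kw + 1
    out = [[0.0] * ow for _ in range(oh)]
    for i in range(oh):
        for j in range(ow):
            out[i][j] = sum(
                image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh) for dj in range(kw)
            )
    return out

image = [[0, 0, 1, 1]] * 4   # dark left half, bright right half
kernel = [[-1, 1]]           # horizontal-difference filter: detects vertical edges
print(conv2d(image, kernel))  # strongest response exactly at the edge
```

In a trained CNN the kernel values are not hand-written like this one; they are learned, and stacking many such layers is what produces the edge-to-face feature hierarchy described above.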

Yet, for all the similarities, ANNs are not perfect replicas of the brain. The primary algorithm used for training, called backpropagation, is considered biologically implausible; there's no known mechanism for neurons to send error signals backward through the network. Furthermore, the reliance on huge, perfectly labeled datasets for supervised learning is nothing like the messy, trial-and-error way humans learn.

Perhaps the most telling difference is the phenomenon of "adversarial examples." Researchers found that you can take an image that an ANN correctly classifies with high confidence—like a picture of an elephant—and change it so slightly that a human can't see the difference, yet the network will now classify it as something completely different, like a baseball. This vulnerability suggests that while these networks are powerful pattern-matchers, their "understanding" of the world is fundamentally different and more brittle than our own.
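The mechanics of an adversarial attack can be shown even on a toy linear classifier: nudge every input feature a small step in the direction that most hurts the model (the sign of the gradient), as in the fast gradient sign method. The weights and input below are invented for illustration, and this 4-feature toy exaggerates the perturbation; in a high-dimensional image the per-pixel change can be imperceptibly small.

```python
# Toy linear classifier: positive score means class "elephant".
w = [0.5, -0.8, 0.3, 0.9]    # hypothetical trained weights
x = [1.0, 0.2, 0.7, 0.5]     # an input the model classifies as positive

def score(x):
    return sum(wi * xi for wi, xi in zip(w, x))

# For a linear model the gradient of the score w.r.t. the input is just w,
# so stepping each feature against the sign of w lowers the score fastest.
eps = 0.5  # perturbation budget per feature
x_adv = [xi - eps * (1 if wi > 0 else -1) for wi, xi in zip(w, x)]

print(score(x) > 0, score(x_adv) > 0)  # True False -- the label flips
```

A deep network's decision surface is far more complex, but the attack is the same in spirit: the gradient reveals exactly which tiny, targeted changes push an input across the decision boundary, even when no human can see the difference.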

The Road Ahead

The history of AI has been a tale of two competing ideas: one focused on symbolic reasoning and the other on biologically inspired networks. Both have led to incredible advances, yet neither has produced a machine with the flexible, general-purpose intelligence that humans possess.

Today's most advanced systems are masters of narrow domains but lack the common sense to operate outside of them. The long road in the evolution of AI continues, with many researchers believing that true artificial general intelligence will require a hybrid approach that combines the strengths of both philosophies. For now, the quest to build an intelligent machine remains deeply intertwined with the quest to understand ourselves. New discoveries about the brain will almost certainly continue to inspire the next generation of artificial minds.
