Researchers have identified a new learning principle in the human brain that differs from the one used in AI

TL;DR:

  • Research reveals a novel learning principle distinguishing the human brain from AI systems.
  • The brain’s ‘prospective configuration’ approach optimizes neuron activity before adjusting synaptic connections.
  • Humans outshine AI in rapid, efficient learning while retaining existing knowledge.
  • Prospective configuration enables faster and more effective learning in computer simulations.
  • Implications for the development of specialized brain-inspired hardware for machine learning.

Main AI News:

A recent study has uncovered a stark contrast between how the human brain and artificial intelligence systems learn. Conducted by researchers from the MRC Brain Network Dynamics Unit and Oxford University’s Department of Computer Science, the research introduces a new principle that explains how the brain fine-tunes connections between neurons during learning. This insight may not only steer future investigations into learning in brain networks but also inspire more efficient and robust learning algorithms for artificial intelligence.

At the core of learning lies the ability to pinpoint which components of the information-processing pipeline are responsible for errors in the output. In artificial intelligence, this is achieved through a technique known as backpropagation, which propagates the output error backwards through the network to determine how each of the model’s parameters should be adjusted to reduce that error. Many researchers have long conjectured that the brain employs a similar principle for learning.
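For readers who want the mechanics, here is a minimal sketch of backpropagation on a toy two-layer network (the architecture, data, and learning rate are illustrative assumptions, not code from the study):

```python
import numpy as np

# Minimal backpropagation sketch: a toy two-layer network takes one
# gradient step to reduce its output error. Illustrative only.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 2))        # input -> hidden weights
W2 = rng.normal(size=(2, 1))        # hidden -> output weights

x = np.array([[1.0, 0.5, -0.3]])    # one input example
target = np.array([[0.7]])          # desired output

h = np.tanh(x @ W1)                 # hidden activity
y = h @ W2                          # network output
error = y - target                  # output error

# Chain rule: assign each weight its share of the blame for the error.
grad_W2 = h.T @ error
grad_h = error @ W2.T
grad_W1 = x.T @ (grad_h * (1 - h**2))   # tanh'(a) = 1 - tanh(a)^2

lr = 0.1                            # learning rate
W2 -= lr * grad_W2                  # nudge weights against the gradient
W1 -= lr * grad_W1
```

One step like this barely changes the network; artificial systems typically repeat it hundreds or thousands of times per piece of knowledge.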

Nevertheless, the biological brain remains markedly superior to current machine learning systems. Notably, humans can acquire new information after a single exposure, whereas artificial systems often need hundreds of repetitions to assimilate the same knowledge. Furthermore, humans can integrate new information while retaining what they already know, whereas in artificial neural networks new learning often interferes with existing knowledge and rapidly degrades it, a failure known as catastrophic forgetting.

These observations spurred the researchers to seek the fundamental principle underpinning the brain’s learning capabilities. They examined established sets of mathematical equations describing changes in the behavior of neurons and in the synaptic connections between them. Analyzing and simulating these information-processing models led them to a striking conclusion: the brain employs a fundamentally different learning principle from that of artificial neural networks.

In artificial neural networks, an external algorithm modifies synaptic connections in order to reduce errors. The researchers propose that the human brain instead first settles the activity of its neurons into an optimal, balanced configuration, and only then adjusts the synaptic connections. They term this approach ‘prospective configuration’ and argue that it is an efficient feature of human learning: it reduces interference, preserving existing knowledge and thereby speeding up learning.
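The contrast can be sketched in code. Below is a schematic ‘settle first, then learn’ illustration in the style of an energy-based network; the linear layers, relaxation loop, and constants are simplifying assumptions, not the researchers’ implementation:

```python
import numpy as np

# Schematic 'prospective configuration' sketch: neuron activities are
# first relaxed toward a configuration consistent with the desired
# output, and only then are the weights updated to make that settled
# configuration the network's natural response. Illustrative only.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 4))            # input -> hidden
W2 = rng.normal(size=(4, 2))            # hidden -> output

x0 = np.array([[1.0, 0.5, -0.3]])       # input (clamped)
target = np.array([[0.2, -0.4]])        # desired output (clamped)

x1 = x0 @ W1                            # start hidden activity at the
                                        # feed-forward prediction

# Phase 1: relax hidden activity to reduce the prediction-error energy
#   E = ||x1 - x0 W1||^2 + ||target - x1 W2||^2
for _ in range(50):
    e1 = x1 - x0 @ W1                   # error at the hidden layer
    e2 = target - x1 @ W2               # error at the clamped output
    x1 -= 0.1 * (e1 - e2 @ W2.T)        # gradient step on E w.r.t. x1

# Phase 2: only now change the weights, pulling each prediction toward
# the settled activity rather than chasing the raw output error.
e1 = x1 - x0 @ W1
e2 = target - x1 @ W2
W1 += 0.05 * x0.T @ e1
W2 += 0.05 * x1.T @ e2
```

Because the weight updates target activities that are already mutually consistent, they disturb what the network previously learned far less than a direct error-driven update would.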

Writing in Nature Neuroscience, the researchers set out this principle in detail. Their computer simulations showed that models using prospective configuration learn faster and more effectively than artificial neural networks on tasks typically faced by animals and humans in natural environments.

To illustrate this concept, the authors describe a bear fishing for salmon. The bear relies on sensory cues from the environment, its sight, hearing, and smell, to catch its prey. On a day when the bear’s hearing is impaired, an artificial-neural-network model of this information processing would erroneously conclude that the absence of sound also implies the absence of smell, and the bear could go hungry. The animal brain, in contrast, preserves the knowledge that the salmon still smells, so the bear remains likely to find its prey.
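A toy calculation makes the interference concrete. In the assumed setup below (not the paper’s simulation), a single shared feature drives both a ‘sound’ and a ‘smell’ prediction, and a plain gradient step that corrects the sound prediction also drags the smell prediction down:

```python
# Toy illustration of interference; the network and numbers are
# assumptions for illustration, not the paper's simulation.
lr = 0.2

w_in = 1.0                      # sight -> shared internal feature
w_sound, w_smell = 1.0, 1.0     # shared feature -> each prediction

sight = 1.0
feature = w_in * sight
print(feature * w_sound, feature * w_smell)   # sound = 1.0, smell = 1.0

# The bear hears nothing, so the target for 'sound' becomes 0. A plain
# gradient step blames the shared pathway as well as the sound weight.
sound_error = feature * w_sound - 0.0         # prediction minus target
grad_w_sound = sound_error * feature          # dL/dw_sound, L = error^2/2
grad_w_in = sound_error * w_sound * sight     # dL/dw_in via shared path
w_sound -= lr * grad_w_sound
w_in -= lr * grad_w_in

feature = w_in * sight
print(feature * w_smell)        # 0.8: the smell prediction degraded even
                                # though nothing about smell changed
```

Settling neural activity first, as in prospective configuration, would keep the smell expectation intact while the sound pathway alone is corrected.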

The researchers also developed a mathematical theory showing that letting neurons settle into a prospective configuration reduces interference between pieces of information during learning. They further show that prospective configuration explains neural activity and behavior in a range of learning experiments better than artificial neural networks do.

Lead researcher Professor Rafal Bogacz, of the MRC Brain Network Dynamics Unit and Oxford’s Nuffield Department of Clinical Neurosciences, emphasized the significance of bridging the gap between abstract models and the anatomical structure of brain networks. The team’s future research aims to establish how the algorithm of prospective configuration is implemented in anatomically identified cortical networks.

Dr. Yuhang Song, the first author of the study, pointed out that simulating prospective configuration on existing computers is slow and cumbersome, because they operate in fundamentally different ways from the biological brain. A new type of computer, or dedicated brain-inspired hardware, will be needed to implement prospective configuration rapidly and with little energy use, an exciting prospect that could revolutionize the field of machine learning.

Conclusion:

The discovery of ‘prospective configuration’ in the human brain points to a paradigm shift in how machines could learn. It could reshape the market by paving the way for specialized brain-inspired hardware that lets AI systems learn faster and more efficiently, as the human brain does, while preserving existing knowledge, a game-changer for industries reliant on machine learning applications.

Source