May 17, 2024
New Insight into Brain’s Learning Process Could Improve AI Algorithms

Researchers from the MRC Brain Network Dynamics Unit and Oxford University’s Department of Computer Science have discovered a new principle that explains how the brain adjusts connections between neurons during the learning process. This breakthrough could lead to more efficient and robust learning algorithms in artificial intelligence (AI) systems.

The key to learning is identifying which components in the information-processing pipeline are responsible for errors in the output. In AI, this is achieved through backpropagation: the output error is propagated backwards through the network, and each parameter is adjusted in proportion to its contribution to that error. Scientists believe that the brain uses a similar principle for learning.
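To make this concrete, here is a minimal backpropagation loop on a toy two-layer network in plain NumPy. The architecture, data, and learning rate are illustrative assumptions, not details from the study:

```python
import numpy as np

# A minimal sketch of backpropagation on a tiny two-layer network.
# The network size, data, and learning rate are illustrative only.

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))          # 4 samples, 3 input features
y = rng.normal(size=(4, 1))          # 4 target values

W1 = rng.normal(size=(3, 5)) * 0.1   # input -> hidden weights
W2 = rng.normal(size=(5, 1)) * 0.1   # hidden -> output weights
lr = 0.1

for step in range(100):
    # Forward pass: compute the network's prediction.
    h = np.tanh(X @ W1)              # hidden activations
    y_hat = h @ W2                   # output prediction
    error = y_hat - y                # output error

    # Backward pass: propagate the error back through the network,
    # computing each weight's contribution to the loss.
    grad_W2 = h.T @ error
    grad_h = error @ W2.T
    grad_W1 = X.T @ (grad_h * (1 - h**2))   # tanh derivative

    # Adjust the parameters to reduce the output error.
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1
```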

However, the brain’s learning capabilities still surpass those of current machine learning systems. Humans can learn new information after a single exposure, whereas AI systems must be trained hundreds of times on the same material. Moreover, humans can acquire new knowledge while retaining what they already know, whereas learning new information in AI neural networks often interferes with existing knowledge and rapidly degrades it, a problem known as catastrophic interference.
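This interference shows up even in a toy model. The sketch below trains a single linear model on a synthetic task A, then only on a conflicting task B, and the error on task A rises; the tasks and numbers are assumptions chosen purely for illustration:

```python
import numpy as np

# A toy illustration of catastrophic interference: a linear model
# trained on task A, then retrained only on task B, degrades on
# task A. Both tasks are synthetic and chosen just to show the effect.

rng = np.random.default_rng(1)
XA, XB = rng.normal(size=(20, 4)), rng.normal(size=(20, 4))
yA = XA @ np.array([1.0, -1.0, 0.5, 0.0])    # task A's rule
yB = XB @ np.array([-0.5, 0.0, 1.0, 1.0])    # task B's conflicting rule

w = np.zeros(4)
loss = lambda X, y: np.mean((X @ w - y) ** 2)

for _ in range(500):                          # learn task A
    w -= 0.05 * XA.T @ (XA @ w - yA) / len(XA)
print("task A loss after learning A:", round(loss(XA, yA), 4))

for _ in range(500):                          # now learn only task B
    w -= 0.05 * XB.T @ (XB @ w - yB) / len(XB)
print("task A loss after learning B:", round(loss(XA, yA), 4))  # worse
```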

Motivated by these differences, the researchers set out to uncover the fundamental principle employed by the brain during learning. They analyzed mathematical equations describing changes in neuron behavior and synaptic connections, which led them to discover that the brain employs a different learning principle than AI neural networks.

While AI networks rely on external algorithms to modify synaptic connections and reduce errors, the researchers propose that the brain first settles the activity of neurons into an optimal balanced configuration before adjusting connections. They believe this is an efficient learning feature as it reduces interference, preserves existing knowledge, and accelerates the learning process.
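The settle-then-learn idea can be sketched with an energy-based network in the spirit of the predictive coding models used in this line of work. In the schematic below, hidden activity is first relaxed into a configuration consistent with both the input and the target, and only then are the weights nudged toward that settled state; the linear network, rates, and data are illustrative assumptions:

```python
import numpy as np

# Schematic contrast with backpropagation: settle the activities
# first (the "prospective configuration"), then update the weights
# toward the settled activities. Linear layers for simplicity;
# all sizes, rates, and data are illustrative assumptions.

rng = np.random.default_rng(0)
x0 = rng.normal(size=(1, 3))         # input layer (clamped)
y  = rng.normal(size=(1, 1))         # target (output clamped during learning)

W1 = rng.normal(size=(3, 5)) * 0.1
W2 = rng.normal(size=(5, 1)) * 0.1
infer_rate, learn_rate = 0.1, 0.05

# Phase 1: settle the hidden activity x1 by relaxing the network's
# energy (sum of squared prediction errors) with input and target fixed.
x1 = x0 @ W1                          # start from the feedforward prediction
for _ in range(50):
    e1 = x1 - x0 @ W1                 # prediction error at the hidden layer
    e2 = y - x1 @ W2                  # prediction error at the output
    x1 -= infer_rate * (e1 - e2 @ W2.T)

# Phase 2: only now adjust the connections, moving each weight toward
# the settled, balanced activity configuration.
e1 = x1 - x0 @ W1
e2 = y - x1 @ W2
W1 += learn_rate * x0.T @ e1
W2 += learn_rate * x1.T @ e2
```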

Described in a new study published in Nature Neuroscience, this principle is called “prospective configuration.” Through computer simulations, the researchers demonstrated that models based on prospective configuration learn faster and more effectively than conventional AI neural networks on tasks resembling those that animals and humans face in nature.

To illustrate this concept, the researchers use the example of a bear fishing for salmon. The bear can see the river, and it has learned that when it can also hear the river and smell the salmon, it is likely to catch one. But one day the bear arrives at the river with a damaged ear and cannot hear it. In an AI neural network, this missing sound would be treated as an error to be corrected, and backpropagation would alter many connections, including those linking the river to the smell of salmon, leading the bear to conclude there are no salmon. In the animal brain, however, the lack of sound does not interfere with the knowledge that the smell of salmon is still there, so the bear can still anticipate the presence of fish.

The researchers also developed a mathematical theory demonstrating that letting neurons settle into a prospective configuration reduces interference during learning. They showed that this principle explains neural activity and behavior in a range of learning experiments better than AI neural networks do.

Lead researcher Professor Rafal Bogacz of the MRC Brain Network Dynamics Unit and Oxford’s Nuffield Department of Clinical Neurosciences states, “There is currently a big gap between abstract models performing prospective configuration and our detailed knowledge of the brain’s anatomy. Future research aims to bridge this gap and understand how the algorithm of prospective configuration is implemented in real brain networks.”

Dr. Yuhang Song, the study’s first author, adds, “In the case of machine learning, simulating prospective configuration on current computers is slow because they operate differently from the biological brain. We need to develop a new type of computer or dedicated brain-inspired hardware that can rapidly and efficiently implement prospective configuration with minimal energy consumption.”
