June 6 (UPI) -- Brains apply "data compression" to maximize performance and minimize cost when making decisions, according to a study published Monday that could affect future research into artificial intelligence.
The study in the journal Nature Neuroscience used an experiment with mice to investigate adaptive behavior in decision-making and "how reward expectations are affected by differences in internal representations."
The mice were challenged to estimate whether two tones were separated by an interval longer than 1.5 seconds while researchers recorded the activity of dopamine neurons, which are known to play a key role in learning the value of actions.
"If the animal wrongly estimated the duration of the interval on a given trial, then the activity of these neurons would produce a 'prediction error' that should help improve performance on future trials," Christian Machens, one of the study's senior authors, said in a news release.
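The "prediction error" Machens describes is the core quantity in temporal-difference learning: the gap between the reward an agent receives and the reward it expected. A minimal sketch of that update, with illustrative names and parameters that are not taken from the study's actual model:

```python
# Hypothetical sketch of a temporal-difference (TD) update. The TD error
# ("delta") is the prediction error that dopamine neurons are thought to
# signal; alpha and gamma are illustrative learning-rate/discount values.

def td_update(value, reward, next_value, alpha=0.1, gamma=0.9):
    """One TD step: delta is the reward prediction error."""
    delta = reward + gamma * next_value - value  # received vs. expected
    return value + alpha * delta, delta

# On trials where reward arrives but little was expected, delta is large
# and positive, nudging the value estimate upward on future trials.
v = 0.0
for _ in range(20):
    v, delta = td_update(v, reward=1.0, next_value=0.0)
print(round(v, 3))  # estimate approaches the true reward of 1.0
```

Over repeated rewarded trials the error shrinks as the estimate converges, mirroring how a wrong interval judgment produces a corrective signal that improves later performance.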
The researchers noted in a preprint of the study that the mice almost always made the correct choice, but that their choices became more variable the closer the interval was to the 1.5-second boundary. Previous research has shown that animals estimate their own ability to correctly classify different stimuli.
Researchers created models using the concepts of reinforcement learning and temporal-difference learning, areas of machine learning associated with artificial intelligence, and compared the models' predictions with the recorded dopamine activity and behavior of the mice.
The study noted that by comparing such models against the recorded responses, researchers "were able to infer the nature of internal representations animals might be using during a task."
The "data compression" refers to the finding that the brain discards just enough information, narrowing its representation to what the animal's own actions and the task demand, to still reach the correct answer without being led to the wrong one.
"Compressing the representations of the external world is akin to eliminating all irrelevant information and adopting temporary 'tunnel vision' of the situation," said Machens, the head of the Theoretical Neuroscience lab at the Champalimaud Foundation in Portugal.
The researchers noted that the findings have "broad implications for neuroscience, as well as for artificial intelligence."
"While the brain has clearly evolved to process information efficiently, AI algorithms often solve problems by brute force: using lots of data and lots of parameters," said senior author Joe Paton, director of the Champalimaud Neuroscience Research Program.
"Our work provides a set of principles to guide future studies on how internal representations of the world may support intelligent behavior in the context of biology and AI."