
Programming tweak helps AI software imitate human visual learning

Researchers programmed an artificial neural network to use a more sophisticated approach to visual processing and learning, which allowed it to recognize objects faster. File Photo by iunewind/Shutterstock

Jan. 12 (UPI) -- Using a novel programming tweak, a pair of neuroscientists have managed to replicate human visual learning in computer-based artificial intelligence.

The tweak, described Tuesday in the journal Frontiers in Computational Neuroscience, yielded a model capable of learning new objects faster than earlier AI programs.


"Our model provides a biologically plausible way for artificial neural networks to learn new visual concepts from a small number of examples," lead study author Maximilian Riesenhuber said in a news release.

"We can get computers to learn much better from few examples by leveraging prior learning in a way that we think mirrors what the brain is doing," said Riesenhuber, a professor of neuroscience at Georgetown University Medical Center.


By three or four months of age, human babies are already building categories to make sense of the world's many visual inputs. With only a handful of examples, for instance, babies can learn to recognize zebras and tell them apart from other animals.

Computers, on the other hand, must process a large number of visual examples of an object before they're able to recognize it.

Traditional AI learning models rely on low-level visual features, such as shape and color. To improve the AI learning process, Riesenhuber and research partner Joshua Rule, a postdoctoral scholar at the University of California, Berkeley, programmed an AI model to set aside that low-level data and instead focus on relationships between entire visual categories.
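The flavor of that idea can be conveyed with a small, hypothetical sketch; it is not the researchers' actual model, and the prototype matrix and "zebra" vectors below are simulated stand-ins. Instead of classifying a new image from raw features alone, each image is described by how similar it is to categories the system already knows, and a new concept is stored as a pattern over those similarities, so a few examples suffice.

```python
# Hypothetical sketch (not the study's code): learn a new visual category
# from a few examples by describing each image in terms of its similarity
# to previously learned categories, rather than raw low-level features.
import numpy as np

rng = np.random.default_rng(0)

def category_profile(features, known_prototypes):
    """Describe an image by its cosine similarity to each known category."""
    sims = known_prototypes @ features
    sims /= (np.linalg.norm(known_prototypes, axis=1)
             * np.linalg.norm(features) + 1e-8)
    return sims  # one score per previously learned category

# Stand-in prior knowledge: 50 previously learned categories in a 512-d space.
known_prototypes = rng.normal(size=(50, 512))

# A few example images of a brand-new category (say, "zebra"),
# simulated here as noisy copies of one underlying feature vector.
true_zebra = rng.normal(size=512)
few_examples = true_zebra + 0.1 * rng.normal(size=(5, 512))

# The new concept is stored as an average profile over known categories.
new_concept = np.mean(
    [category_profile(x, known_prototypes) for x in few_examples], axis=0)

# Recognition: a test image matches if its profile resembles the stored one.
test_image = true_zebra + 0.1 * rng.normal(size=512)
test_profile = category_profile(test_image, known_prototypes)
score = np.dot(new_concept, test_profile) / (
    np.linalg.norm(new_concept) * np.linalg.norm(test_profile) + 1e-8)
print(f"similarity to learned 'zebra' concept: {score:.2f}")
```

In this toy setup, the new category is expressed entirely in terms of existing categories, which is the general strategy the researchers describe, though their implementation differs.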


"The computational power of the brain's hierarchy lies in the potential to simplify learning by leveraging previously learned representations from a databank, as it were, full of concepts about objects," Riesenhuber said.

The researchers programmed their artificial neural network to use a more sophisticated approach to visual processing and learning, relying on its previously acquired visual knowledge.

Their programming tweak helped the AI network learn to recognize new objects much faster.


"Rather than learn high-level concepts in terms of low-level visual features, our approach explains them in terms of other high-level concepts," Rule said. "It is like saying that a platypus looks a bit like a duck, a beaver, and a sea otter."

Based on brain imaging and object recognition experiments with human subjects, neuroscientists have previously theorized that the anterior temporal lobe of the brain powers an ability to recognize abstract visual concepts.

This allows humans to learn new objects by analyzing relationships between entire visual categories. Instead of starting from scratch each time humans are tasked with learning new objects, these complex neural hierarchies allow humans to leverage prior learning.

"By reusing these concepts, you can more easily learn new concepts, new meaning, such as the fact that a zebra is simply a horse of a different stripe," Riesenhuber said.


Computers have been programmed to beat humans at chess and other sophisticated logic games, but the human brain's ability to quickly process visual information remains unmatched.

"Our findings not only suggest techniques that could help computers learn more quickly and efficiently, they can also lead to improved neuroscience experiments aimed at understanding how people learn so quickly, which is not yet well understood," Riesenhuber said.
