Brain-inspired algorithm helps AI systems multitask and remember
Behind most of today's artificial intelligence technologies, from self-driving cars to facial recognition and virtual assistants, lie artificial neural networks. Though loosely based on the way neurons communicate in the brain, these "deep learning" systems still cannot perform many basic functions that are essential for primates and other organisms.

However, a new study from University of Chicago neuroscientists found that adapting a well-known brain mechanism can dramatically improve the ability of artificial neural networks to learn multiple tasks and avoid the persistent AI challenge of "catastrophic forgetting."

The study, published in Proceedings of the National Academy of Sciences, provides a unique example of how neuroscience research can inform new computer science strategies, and, conversely, how AI technology can help scientists better understand the human brain.

When combined with previously reported methods for stabilizing synaptic connections in artificial neural networks, the new algorithm allowed single artificial neural networks to learn and perform hundreds of tasks with only minimal loss of accuracy, potentially enabling more powerful and efficient AI technologies.
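To make the general idea concrete, the sketch below shows one common way these two ingredients can be combined: each task activates only a sparse, fixed subset of hidden units (a stand-in for the brain-inspired mechanism the article describes), while a quadratic penalty discourages changes to weights that were important for earlier tasks (a stand-in for synaptic stabilization). The network architecture, PyTorch framework, toy tasks, and all parameter values are illustrative assumptions, not the study's published method.

```python
# Illustrative sketch only: per-task gating of hidden units plus a simple
# quadratic "synaptic stabilization" penalty. All names, sizes, and
# hyperparameters below are assumptions made for demonstration.
import torch
import torch.nn as nn
import torch.nn.functional as F

N_TASKS, N_IN, N_HIDDEN, N_OUT = 5, 20, 100, 2
GATE_KEEP = 0.2           # fraction of hidden units active per task (assumed)
STABILITY_STRENGTH = 1e2  # weight on the stabilization penalty (assumed)

torch.manual_seed(0)

class GatedMLP(nn.Module):
    """One-hidden-layer network whose hidden units are gated per task."""
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(N_IN, N_HIDDEN)
        self.fc2 = nn.Linear(N_HIDDEN, N_OUT)
        # Fixed random binary mask per task: each task relies on a
        # different sparse subset of hidden units, so tasks overlap less
        # in the weights they modify.
        self.gates = (torch.rand(N_TASKS, N_HIDDEN) < GATE_KEEP).float()

    def forward(self, x, task_id):
        h = F.relu(self.fc1(x)) * self.gates[task_id]
        return self.fc2(h)

def stabilization_penalty(model, anchors):
    """Quadratic penalty pulling parameters toward values learned on
    earlier tasks (per-weight importance factors omitted for brevity)."""
    loss = 0.0
    for name, p in model.named_parameters():
        if name in anchors:
            loss = loss + ((p - anchors[name]) ** 2).sum()
    return loss

model = GatedMLP()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
anchors = {}  # parameter snapshots taken after each finished task

for task_id in range(N_TASKS):
    # Toy task: classify noisy inputs by which random prototype generated them.
    protos = torch.randn(2, N_IN)
    for step in range(200):
        labels = torch.randint(0, 2, (32,))
        x = protos[labels] + 0.5 * torch.randn(32, N_IN)
        logits = model(x, task_id)
        loss = F.cross_entropy(logits, labels)
        if anchors:
            loss = loss + STABILITY_STRENGTH * stabilization_penalty(model, anchors)
        opt.zero_grad()
        loss.backward()
        opt.step()
    # Anchor the current parameters before moving on to the next task.
    anchors = {n: p.detach().clone() for n, p in model.named_parameters()}
    print(f"finished task {task_id}, final loss {loss.item():.3f}")
```

In this toy setup, the gating masks keep tasks from competing for the same hidden units, while the penalty resists overwriting previously anchored weights; the study's reported result is that combining ideas of this kind let a single network learn hundreds of tasks with only minimal loss of accuracy.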