Cross-influence history of neuroscience and AI
When I started reading about the history of AI, I saw that most of its ideas were inspired by neuroscience and cognition. Here are my notes on the cross-influence of the two disciplines.
Understanding the human brain and building human-level intelligence has been a quest since Turing. Neuroscience provides a rich source of inspiration for new types of algorithms and architectures, independent of and complementary to the mathematical and logic-based methods that have largely dominated traditional approaches to AI. Today's most popular and state-of-the-art approaches, deep learning and reinforcement learning, have roots in cognition.
Deep learning, in other words the fully connected neural network, mimics biological neurons. Its roots go back to McCulloch and Pitts' threshold neuron, which could compute logical functions.
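To make this concrete, here is a minimal sketch of a McCulloch-Pitts unit: binary inputs, fixed weights, and a hard threshold. The weights and thresholds below are illustrative choices, not taken from the original paper.

```python
# A McCulloch-Pitts unit: weighted sum of binary inputs, hard threshold.
def mp_neuron(inputs, weights, threshold):
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# AND fires only when both inputs are active; OR when at least one is.
AND = lambda a, b: mp_neuron([a, b], [1, 1], 2)
OR = lambda a, b: mp_neuron([a, b], [1, 1], 1)

print([AND(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 0, 0, 1]
print([OR(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])   # [0, 1, 1, 1]
```

A single unit like this can compute AND, OR, and NOT; combining such units yields any Boolean function, which is the sense in which the model "could compute logical functions."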
Current state-of-the-art convolutional neural networks (CNNs) implement several ideas from neural computation, including nonlinear transduction, divisive normalization, and maximum-based pooling of inputs.
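A toy sketch of these three operations, with made-up numbers and constants chosen purely for illustration:

```python
# Toy versions of three brain-inspired operations found in CNNs.

def relu(x):
    # Nonlinear transduction: rectification, like a neuron's firing threshold
    return max(0.0, x)

def divisive_normalization(values, sigma=1.0):
    # Each response is divided by the pooled activity of its neighbors,
    # a canonical computation observed in visual cortex
    pooled = sigma + sum(v * v for v in values)
    return [v / pooled for v in values]

def max_pool_2x2(feature_map):
    # Maximum-based pooling: keep the strongest response in each 2x2 patch
    h, w = len(feature_map), len(feature_map[0])
    return [[max(feature_map[i][j], feature_map[i][j + 1],
                 feature_map[i + 1][j], feature_map[i + 1][j + 1])
             for j in range(0, w, 2)]
            for i in range(0, h, 2)]

fmap = [[1, -2, 3, 0],
        [0, 5, -1, 2],
        [7, 1, 0, 4],
        [2, 2, 6, -3]]
rectified = [[relu(v) for v in row] for row in fmap]
print(max_pool_2x2(rectified))  # [[5, 3], [7, 6]]
```

In a real CNN these operations act on learned feature maps; here they simply show the shape of the computation.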
Reinforcement learning methods address the problem of maximizing future rewards by mapping states in the environment to actions. It is like an infant: when an infant waves its hands or looks about, it has no explicit teacher, but it does have a sensorimotor connection to its environment. Reinforcement learning, then, is the computational approach to learning from interaction.
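The state-to-action mapping can be sketched with tabular Q-learning on a hypothetical one-dimensional corridor: five states, a reward only at the goal, and no teacher beyond that reward signal. The task and all constants are invented for illustration.

```python
import random

# Tabular Q-learning on a toy corridor: states 0..4, actions left/right,
# reward 1.0 only on reaching state 4. The agent learns a state-to-action
# mapping purely from interaction with the environment.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]  # left, right
alpha, gamma, epsilon = 0.5, 0.9, 0.1
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

random.seed(0)
for episode in range(200):
    s = 0
    while s != GOAL:
        # epsilon-greedy: mostly exploit current knowledge, sometimes explore
        a = random.choice(ACTIONS) if random.random() < epsilon \
            else max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s_next == GOAL else 0.0
        # move Q toward reward plus discounted best future value
        best_next = max(Q[(s_next, a2)] for a2 in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s_next

# The learned policy should prefer moving right in every non-goal state
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(GOAL)}
print(policy)
```

After training, the greedy policy moves right from every state, showing how interaction alone, with no explicit teacher, shapes behavior toward future reward.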
Attention, another important mechanism, has also been implemented in CNN algorithms, where it outperforms earlier approaches because the network concentrates its processing on the attentionally important regions of an image. This is not only computationally efficient but also benefits other applications, such as machine translation.
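The core of such attention mechanisms can be sketched as a scaled dot-product step: a query scores each position, a softmax turns the scores into weights, and the output is a weighted average of the values. The vectors below are made-up numbers, not from any trained model.

```python
import math

# Minimal scaled dot-product attention: one query attends over keys/values.
def softmax(xs):
    m = max(xs)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attend(query, keys, values):
    d = len(query)
    # similarity of the query to each key, scaled by sqrt(dimension)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)  # how much attention each position receives
    # weighted average of the values: attended positions dominate the output
    output = [sum(w * v[i] for w, v in zip(weights, values))
              for i in range(len(values[0]))]
    return output, weights

query = [1.0, 0.0]
keys = [[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]]
values = [[10.0, 0.0], [0.0, 10.0], [5.0, 5.0]]
output, weights = attend(query, keys, values)
print(weights)  # largest weight goes to the first key, which matches the query
```

The same computation underlies attention over image regions in vision models and over source words in machine translation.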
Attention has also been applied in generative deep learning, for example in generative adversarial network (GAN) architectures, which generate synthetic samples resembling a training set.
Memory is another key mechanism adapted from the brain and implemented in AI systems, such as long short-term memory (LSTM) networks for feedback connections and deep Q-networks (DQN) for experience replay. Research in AI, in turn, offers neuroscientists insights of their own: backpropagation as a candidate learning mechanism in the brain, neural networks with external memory that can reason over multiple input statements relating to a particular query, and LSTMs that motivated the development of working-memory models.
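The experience-replay side of this can be sketched as a simple buffer of the kind used in DQN: transitions are stored as they occur and later sampled in random mini-batches, loosely analogous to hippocampal replay of experience. The capacity and batch size below are arbitrary illustrative values.

```python
import random
from collections import deque

# A toy DQN-style experience-replay buffer.
class ReplayBuffer:
    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)  # oldest transitions fall off

    def push(self, state, action, reward, next_state):
        self.buffer.append((state, action, reward, next_state))

    def sample(self, batch_size):
        # Random sampling breaks the temporal correlation between
        # consecutive transitions, which stabilizes training
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)

buf = ReplayBuffer(capacity=100)
for t in range(150):  # 150 pushes into a 100-slot buffer: oldest 50 dropped
    buf.push(t, 0, 0.0, t + 1)
batch = buf.sample(8)
print(len(buf), len(batch))  # 100 8
```

A training loop would draw such batches repeatedly and fit the Q-network to them, rather than learning only from the most recent transition.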
Even though AI agents can beat their human counterparts at Atari games or Go, many gaps remain between machine and human intelligence, and these gaps can only be closed by collaborative research between neuroscience and AI.
- Hassabis, D., Kumaran, D., Summerfield, C. & Botvinick, M., 2017. Neuroscience-Inspired Artificial Intelligence. Neuron, 95(2), pp. 245-258.
- McCulloch, W. S. & Pitts, W., 1943. A logical calculus of the ideas immanent in nervous activity. The Bulletin of Mathematical Biophysics, 5, pp. 115-133.
- Sutton, R. S. & Barto, A. G., 2018. Reinforcement Learning: An Introduction. 2nd ed. Cambridge, MA: MIT Press.