Does the brain run on deep learning?

Editor’s note: The TDS Podcast is hosted by Jeremie Harris, who is the co-founder of Gladstone AI. Every week, Jeremie chats with researchers and business leaders at the forefront of the field to unpack the most pressing questions around data science, machine learning, and AI.

Deep learning models — transformers in particular — are defining the cutting edge of AI today. They’re based on an architecture called an artificial neural network, as you probably already know if you’re a regular Towards Data Science reader. And if you are, then you might also already know that as their name suggests, artificial neural networks were inspired by the structure and function of biological neural networks, like those that handle information processing in our brains.

So it’s natural to ask: how far does that analogy go? Today, deep neural networks can master an increasingly wide range of skills that were historically unique to humans — skills like creating images, using language, planning, and playing video games. Could that mean these systems are processing information like the human brain, too?

To explore that question, we’ll be talking to JR King, a CNRS researcher at the Ecole Normale Supérieure, affiliated with Meta AI, where he leads the Brain & AI group. There, he works on identifying the computational basis of human intelligence, with a focus on language. JR is a remarkably insightful thinker, who’s spent a lot of time studying biological intelligence, where it comes from, and how it maps onto artificial intelligence. And he joined me to explore the fascinating intersection of biological and artificial information processing on this episode of the TDS podcast.

Here were some of my favourite take-homes from the conversation:

  • JR’s work focuses on studying the activations of artificial neurons in different layers of modern deep neural networks, and comparing them to the activations of cell clusters inside the human brain. He works with biological cell clusters, rather than individual biological neurons, because brain imaging simply can’t resolve activity down to the single-neuron level. These cell clusters correspond to small three-dimensional units of brain volume, called voxels. His work involves detecting statistical correlations between the activations of neurons at a given layer of a large deep neural net trained to do language modelling, and voxel activations in parts of the brain associated with language.
  • Deep neural nets are known to have a hierarchical structure, where simpler, more concrete concepts (like corners and lines in images, or basic spelling rules in text) are captured by lower layers in the network, and more complex and abstract concepts (like face shapes or wheels in images, and sentence-level ideas in text) appear deeper in the structure. Interestingly, this hierarchy also tends to show up in the brain, suggesting that the analogy between deep networks and the brain extends beyond the neuron level, to the level of the macro-structure of the brain as well. I asked JR if he thinks this is a coincidence, or if it might even hint at a universal property of intelligence: should we expect all intelligence to involve this kind of hierarchical information processing?
  • There’s been controversy in AI recently over whether AI systems truly “understand” concepts in a meaningful sense. We discussed whether or not that’s the case, and whether or not it’s even constructive to talk about the “understanding” of AI systems (our consensus answer was “yes”, and “yes”, but you do you).
  • A central challenge in carrying out brain-to-neural-network comparisons is that the brain is an incredibly noisy organ, constantly generating and processing signals related to things like heartbeat, breathing, eye movements, coughing, and so on. For that reason, correlating brain behaviour with neural network behaviour is challenging: noisy data plus small effect sizes is a recipe for frustration at the best of times. To compensate, researchers tend to collect huge amounts of data, which can yield very high confidence in the existence of interesting correlations, despite the weakness of those correlations.
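The interplay between weak effects and large datasets described above can be illustrated with a toy simulation (a hypothetical sketch, not JR’s actual analysis pipeline — the `layer_activation` and `voxel_signal` variables here are simulated stand-ins, not real model or brain data): a “voxel” signal that depends only weakly on a model activation still produces a statistically unambiguous correlation once enough samples are collected, even though the correlation itself stays small.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Many stimuli/time points, as in large neuroimaging datasets
n_samples = 20_000

# Stand-in for one unit's activation in a language model layer
layer_activation = rng.normal(size=n_samples)

# Simulated voxel signal: a weak effect buried in physiological noise
# (heartbeat, breathing, eye movements, ...), modelled here as Gaussian noise
effect_size = 0.05
voxel_signal = effect_size * layer_activation + rng.normal(size=n_samples)

r, p = pearsonr(layer_activation, voxel_signal)
print(f"correlation r = {r:.3f}, p-value = {p:.2e}")
# r stays small (~0.05), but with 20,000 samples the p-value is tiny:
# the weak correlation is detected with very high confidence
```

With only a few hundred samples, the same effect would often be indistinguishable from noise — which is why scale matters so much in this kind of research.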


  • 0:00 Intro
  • 2:30 What is JR’s day-to-day?
  • 5:00 AI and neuroscience
  • 12:15 Quality of signals within the research
  • 21:30 Universality of structures
  • 28:45 What makes up a brain?
  • 37:00 Scaling AI systems
  • 43:30 Growth of the human brain
  • 48:45 Observing certain overlaps
  • 55:30 Wrap-up

