DEEP LEARNING
Deep learning is a subfield of machine learning that focuses on training artificial systems to find useful representations of inputs. Recent advances in deep learning have propelled the once arcane field of artificial neural networks into mainstream technology (LeCun et al., 2015). Deep neural networks now regularly outperform humans on difficult problems like face recognition and games such as Go (He et al., 2015; Silver et al., 2017). Traditional neuroscientists have also taken an interest in deep learning because it seemed initially that there were telling analogies between deep networks and the human brain. Nevertheless, there is a growing impression that the field might be approaching a new ‘wall’ and that deep networks and the brain are intrinsically different.

Chief among these differences is the widely held belief that backpropagation, the learning algorithm at the heart of modern artificial neural networks, is biologically implausible. This issue is so central to current thinking about the relationship between artificial and real brains that it has its own name: the credit assignment problem. The error in the output of a neural network (that is, the difference between the output and the ‘correct’ answer) can be reported or ‘backpropagated’ to any connection in the network, no matter where it is, to teach the network how to refine the output. But for a biological brain, neurons only receive information from the neurons

lem (Guerguiev et al., 2017). Central to their model is the structure of the pyramidal neuron, which is the most prevalent cell type in the cortex (the outer layer of the brain). Pyramidal neurons have been a source of aesthetic pleasure and interesting research questions for neuroscientists for decades. Each neuron is shaped like a tree with a trunk reaching up and dividing into branches near the surface of the brain, as if extending toward a source of energy or information. Can it be that, while most cells of the body have relatively simple shapes, evolution has seen to it that cortical neurons are so intricately shaped as to be apparently impractical?

Guerguiev et al. – who are based at the University of Toronto, the Canadian Institute for Advanced Research, and DeepMind – report that this impractical shape has an advantage: the long branched structure means that error signals at one end of the neuron and sensory input at the other end are kept separate from each other. These sources of information can then be brought together at the right moment in order to find the best solution to a problem.

As Guerguiev et al. note, many facts about real neurons and the structure of the cortex turn out to be just right to find optimal solutions to problems. For instance, the bottoms of cortical neurons are located just where they need to be to receive signals about sensory input, while the tops of these neurons are well placed to receive

Copyright Shai and Larkum. This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.
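The backpropagation scheme described earlier – an error measured at the output and reported back to every connection to refine it – can be sketched in a few lines. This is a minimal illustrative toy, not the model of Guerguiev et al. or any library's implementation: a single input, one hidden unit, and all names (`w1`, `b1`, `w2`, `b2`, `lr`) and numbers are made up for the example.

```python
import math
import random

# Toy network: y = w2 * sigmoid(w1*x + b1) + b2.
# Backpropagation uses the chain rule to assign credit (or blame)
# for the output error to every weight, however deep it sits.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, w1, b1, w2, b2):
    h = sigmoid(w1 * x + b1)   # hidden activation
    y = w2 * h + b2            # network output
    return h, y

def train(data, lr=0.3, epochs=10000, seed=0):
    rng = random.Random(seed)
    w1, b1, w2, b2 = (rng.uniform(-1.0, 1.0) for _ in range(4))
    for _ in range(epochs):
        for x, target in data:
            h, y = forward(x, w1, b1, w2, b2)
            err = y - target                  # error at the output...
            # ...propagated backward through each connection:
            grad_w2 = err * h
            grad_b2 = err
            dh = err * w2 * h * (1.0 - h)     # chain rule through the hidden unit
            grad_w1 = dh * x
            grad_b1 = dh
            w2 -= lr * grad_w2
            b2 -= lr * grad_b2
            w1 -= lr * grad_w1
            b1 -= lr * grad_b1
    return w1, b1, w2, b2

# Teach the network two arbitrary input/target pairs.
data = [(0.0, 0.2), (1.0, 0.8)]
w1, b1, w2, b2 = train(data)
```

After training, the outputs for the two inputs approach their targets. The point of the sketch is the step a biological neuron seemingly cannot perform: `grad_w1` depends on quantities (`err`, `w2`) that live downstream of the connection being updated – which is exactly the credit assignment problem.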