Published by the journal Neural Networks, founded in 1988.
Jürgen Schmidhuber
Pronounce: You_again Shmidhoobuh
J. Schmidhuber. Deep Learning in Neural Networks: An Overview. Neural Networks, Volume 61,
January 2015, Pages 85-117 (DOI: 10.1016/j.neunet.2014.09.003), published online in 2014.
Based on Preprint IDSIA-03-14 (88 pages, 888 references): arXiv:1404.7828 [cs.NE]; version v4
(PDF, 8 Oct 2014); LaTeX source; complete public BibTeX file (888 kB). (Older PDF versions: v1 of
30 April; v1.5 of 15 May; v2 of 28 May; v3 of 2 July.)
BibTeX:
@article{888,
  author  = "J. Schmidhuber",
  title   = "Deep Learning in Neural Networks: An Overview",
  journal = "Neural Networks",
  volume  = "61",
  pages   = "85-117",
  year    = "2015",
  doi     = "10.1016/j.neunet.2014.09.003",
  note    = "Published online 2014; based on TR arXiv:1404.7828 [cs.NE]"}
Abstract. In recent years, deep artificial neural networks (including recurrent ones) have won
numerous contests in pattern recognition and machine learning. This historical survey compactly
summarises relevant work, much of it from the previous millennium. Shallow and deep learners are
distinguished by the depth of their credit assignment paths, which are chains of possibly learnable,
causal links between actions and effects. I review deep supervised learning (also recapitulating the
history of backpropagation), unsupervised learning, reinforcement learning & evolutionary
computation, and indirect search for short programs encoding deep and large networks.
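The abstract's notion of depth can be made concrete: in a feedforward net, the credit assignment path (CAP) from an input to the output crosses one learnable, causal link per weight layer, so the CAP depth equals the number of weight layers. A minimal sketch (layer sizes, weights, and nonlinearity are illustrative assumptions, not taken from the paper):

```python
import math
import random

random.seed(0)

def dense(n_out, n_in):
    """One layer of learnable weights: one causal link in the CAP."""
    return [[random.gauss(0, 0.5) for _ in range(n_in)] for _ in range(n_out)]

def apply_layer(W, x):
    """Affine map followed by tanh, row by row."""
    return [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in W]

# A feedforward net with 3 weight layers: every credit assignment path
# from input to output crosses 3 learnable links, so its CAP depth is 3.
layers = [dense(4, 4) for _ in range(3)]

x = [1.0, -0.5, 0.25, 0.0]
for W in layers:
    x = apply_layer(W, x)

print(len(layers))   # CAP depth: 3
```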
As a machine learning researcher, I am obsessed with credit assignment. In case you know of
references to add or correct, please send them with brief explanations to juergen@idsia.ch,
preferably together with URL links to PDFs for verification. Between 16 April and 8 October 2014,
drafts of this paper underwent massive open online peer review through public mailing
lists including connectionists@cs.cmu.edu, ml-news@googlegroups.com, compneuro@neuroinf.org,
genetic_programming@yahoogroups.com, rl-list@googlegroups.com, imageworld@diku.dk, and
Google+. Thanks to numerous experts for valuable comments!
The contents of this paper may be used for educational and non-commercial purposes, including
articles for Wikipedia and similar sites.
Table of Contents
1 Introduction to Deep Learning (DL) in Neural Networks (NNs)
2 Event-Oriented Notation for Activation Spreading in Feedforward NNs (FNNs) and Recurrent NNs (RNNs)
5 Supervised NNs, Some Helped by Unsupervised NNs (with Deep Learning Timeline)
7 Conclusion
Overview web sites with lots of additional details and papers on Deep Learning
[A] 1991: Fundamental Deep Learning Problem discovered and analysed: in standard NNs,
backpropagated error gradients tend to vanish or explode. More
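The problem in [A] is visible with plain arithmetic: backpropagation multiplies the error signal by one factor per layer, so per-layer factors consistently below 1 shrink the gradient exponentially with depth, while factors above 1 blow it up. A minimal scalar sketch (the factors 0.25 and 1.5 and the depth of 20 are illustrative assumptions, not from the analysis itself):

```python
def backprop_magnitude(factor: float, depth: int) -> float:
    """Magnitude of a backpropagated error signal after `depth` layers,
    each scaling the gradient by `factor` (a scalar stand-in for the
    per-layer Jacobian factor)."""
    return factor ** depth

# Sigmoid-style factor < 1: the gradient vanishes with depth.
print(backprop_magnitude(0.25, 20))   # ~9.1e-13
# Factor > 1: the gradient explodes with depth.
print(backprop_magnitude(1.5, 20))    # ~3325.3
```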
[B] Our first Deep Learner of 1991 (RNN stack pre-trained in unsupervised fashion) + Deep Learning
timeline 1962-2013. More, also under www.deeplearning.me
[C] 2009: First recurrent Deep Learner to win international competitions with secret test sets: deep
LSTM recurrent neural networks [H] won three connected handwriting contests at ICDAR 2009
(French, Arabic, Farsi), performing simultaneous segmentation and recognition. More
[D] Very Deep Learning 1991-2013 - our deep NNs have, so far, won 9 important contests in pattern
recognition, image segmentation, and object detection. More, also under www.deeplearning.it
[E] 2011: First superhuman visual pattern recognition in an official international competition (with
secret test set known only to the organisers) - twice better than humans, three times better than the
closest artificial NN competitor, six times better than the best non-neural method. More
[F] 2012: First Deep Learner to win a contest on object detection in large images: our deep NNs won
both the ICPR 2012 Contest and the MICCAI 2013 Grand Challenge on Mitosis Detection (important
for cancer prognosis etc., perhaps the most important application area of Deep Learning). More
[G] 2012: First Deep Learner to win a pure image segmentation competition: our deep NNs won the
ISBI'12 Brain Image Segmentation Challenge (relevant for the billion Euro brain projects in EU and
US). More
[L] Who Invented Backpropagation? A brief history of Deep Learning's central algorithm 1960-1981
and beyond: More
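For reference, the algorithm whose history [L] traces is a repeated application of the chain rule. A minimal sketch for a two-layer scalar net, checked against a finite-difference approximation (the weights, input, target, and learning-free setup are illustrative assumptions):

```python
import math

def forward(w1, w2, x):
    """Two-layer scalar net: hidden h = tanh(w1*x), output y = w2*h."""
    h = math.tanh(w1 * x)
    return h, w2 * h

def backprop(w1, w2, x, t):
    """Gradients of the squared error (y - t)^2 via the chain rule."""
    h, y = forward(w1, w2, x)
    dy = 2.0 * (y - t)                 # dL/dy
    g2 = dy * h                        # dL/dw2
    g1 = dy * w2 * (1 - h * h) * x     # dL/dw1  (tanh' = 1 - tanh^2)
    return g1, g2

# Sanity check against a central finite difference in w1.
w1, w2, x, t, eps = 0.7, -1.3, 0.5, 0.2, 1e-6
g1, g2 = backprop(w1, w2, x, t)
num_g1 = ((forward(w1 + eps, w2, x)[1] - t) ** 2
          - (forward(w1 - eps, w2, x)[1] - t) ** 2) / (2 * eps)
print(abs(g1 - num_g1) < 1e-6)   # True
```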