CORRESPONDENCE

On testing neural network models

Rafael Yuste

LINK TO ORIGINAL ARTICLE | LINK TO INITIAL CORRESPONDENCE

In my recent Timeline article, I described the emergence of neural network models as an important paradigm in neuroscience research (From the neuron doctrine to neural networks. Nat. Rev. Neurosci. 16, 487–497 (2015))1. In his correspondence (Neural networks in the future of neuroscience research. Nat. Rev. Neurosci. http://dx.doi.org/10.1038/nrn4042 (2015))2, Rubinov provides some thoughtful comments about the distinction between artificial neural networks and biologically inspired ones and about how a strictly data-driven approach may succeed at providing a general theory of neural circuits. I thank Rubinov for these comments and note that this theory agnosticism is a methodological approach that we respect and indeed sponsored in our Brain Activity Map proposal that led to the BRAIN Initiative3. Also, although in my Timeline article I tried to provide a brief summary of the history of artificial neural network models, I am not personally convinced that there are clear instances in which a biologically inspired neural network model has yet been validated (“…it is unclear whether existing neural network models have enough predictive value to be considered valid or useful for explaining brain circuits.” (REF. 1)). There are many exciting areas of progress in current neuroscience detailing phenomenology that is consistent with some neural network models, some of which I tried to summarize and illustrate, but at the same time we are still far from a rigorous demonstration of any neural network model with causal experiments. I therefore could not agree more with Rubinov that we still have “largely not bridged the gap between elegant theory and neuroscientific observation”. But when will we know that we have bridged that gap? This is a difficult question to answer, depending on the particular viewpoint, and I would leave it open to the reader’s own interpretation. In my mind, a successful neural model should have quantitative accuracy in predicting either the behaviour, mental or perceptual state of the animal, or at least the future internal dynamics of the system. Another characteristic of a successful model could be its effective use in designing therapies for brain-based diseases. On the other hand, one of my mentors, David Tank, argued that for a true understanding of a neural circuit we should be able to actually build it, which is a stricter definition of a successful theory (D. Tank, personal communication). Finally, as mentioned in the Timeline article, one will also need to connect neural network models to theories and facts at the structural and biophysical levels of neural circuits, and to those in the cognitive sciences as well, for proper ‘scientific knowledge’ to occur in the Kantian sense.

Rafael Yuste is at the Neurotechnology Center and Kavli Institute of Brain Sciences, Departments of Biological Sciences and Neuroscience, Columbia University, New York, New York 10027, USA.

    e-mail: [email protected]

    doi:10.1038/nrn4043 Published online 21 October 2015

    1. Yuste, R. From the neuron doctrine to neural networks. Nat. Rev. Neurosci. 16, 487–497 (2015).

    2. Rubinov, M. Neural networks in the future of neuroscience research. Nat. Rev. Neurosci. http://dx.doi.org/10.1038/nrn4042 (2015).

    3. Alivisatos, A. P. et al. The brain activity map project and the challenge of functional connectomics. Neuron 74, 970–974 (2012).

Acknowledgements
The author is supported by the US National Institutes of Health (DP1EY024503) and ARO W911NF-12-1-0594 (MURI).

Competing interests statement
The author declares no competing interests.


CORRESPONDENCE

Neural networks in the future of neuroscience research

Mikail Rubinov

LINK TO ORIGINAL ARTICLE | LINK TO AUTHOR’S REPLY

Neural networks are increasingly seen to supersede neurons as fundamental units of complex brain function. In his Timeline article (From the neuron doctrine to neural networks. Nat. Rev. Neurosci. 16, 487–497 (2015))1, Yuste provides a timely overview of this process, but does not clearly differentiate between biological neural network models (broadly and imprecisely defined as empirically valid models of (embodied) neuronal or brain systems, which enable the emergence of complex brain function through distributed computation) and artificial neural network models (a relatively well-defined class of networks originally designed to model complex brain function2 but now mainly viewed as a class of biologically inspired data-analysis algorithms useful in diverse scientific fields3).

A distinction between biological and artificial neural network models is important as the neuroscience network paradigm is mainly driven by the aim of uncovering biologically valid mechanisms of neural computation. Artificial neural networks were initially proposed as candidate models for such computation but, despite being enthusiastically researched at the end of the twentieth century, they have largely not bridged the gap between elegant theory and neuroscientific observation4,5. In this context, Yuste’s emphasis on some classic artificial neural network models does not seem to be supported by the evidence of, or the promise for, the problem-solving capacity of these models in neuroscience6.

What could be an alternative promising approach to biologically valid neural network modelling? At present we can only speculate, but the ongoing development of high-resolution, high-throughput brain imaging technologies — including those being developed as part of the BRAIN Initiative7 — and the consequent availability of increasingly large structural8 and functional9 imaging data sets make it appealing to initially search for patterns in such data in less theory-bound and more data-driven ways10,11, and to subsequently construct theories constrained a priori by these discovered patterns12. A famous example of this approach in biology is the formulation of the theory of evolution by natural selection; this theory arose from an initial aim to catalogue all living biological organisms on Earth, and from a subsequent careful analysis of the diverse biological data obtained13. Interestingly, artificial neural networks may yet prove to be important in this quest, but in the role of powerful tools for analysing complex imaging data sets14, rather than as a theoretical foundation for how the brain computes.

Mikail Rubinov is at the Department of Psychiatry and Churchill College, University of Cambridge, Cambridge CB3 0DS, UK; and the Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, Virginia 20147, USA.

    e-mail: [email protected]

    doi:10.1038/nrn4042 Published online 21 October 2015

    1. Yuste, R. From the neuron doctrine to neural networks. Nat. Rev. Neurosci. 16, 487–497 (2015).

    2. Rumelhart, D. E., McClelland, J. L. & The PDP Research Group. Parallel Distributed Processing: Explorations in the Microstructure of Cognition (MIT Press, 1986).

    3. LeCun, Y., Bengio, Y. & Hinton, G. Deep learning. Nature 521, 436–444 (2015).

    4. Marcus, G. in The Future of the Brain: Essays by the World’s Leading Neuroscientists (eds Marcus, G. & Freeman, J.) 205–215 (Princeton Univ. Press, 2014).

    5. Zador, A. in The Future of the Brain: Essays by the World’s Leading Neuroscientists (eds Marcus, G. & Freeman, J.) 40–49 (Princeton Univ. Press, 2014).

    6. Laudan, L. Progress and Its Problems: Towards a Theory of Scientific Growth (University of California Press, 1978).

    7. Alivisatos, A. P. et al. Nanotools for neuroscience and brain activity mapping. ACS Nano 7, 1850–1866 (2013).

    8. Oh, S. W. et al. A mesoscale connectome of the mouse brain. Nature 508, 207–214 (2014).

    9. Ahrens, M. B. et al. Brain-wide neuronal dynamics during motor adaptation in zebrafish. Nature 485, 471–477 (2012).

    10. Sporns, O. Discovering the Human Connectome (MIT Press, 2012).

    11. Vogelstein, J. T. et al. Discovery of brainwide neural-behavioral maps via multiscale unsupervised structure learning. Science 344, 386–392 (2014).

    12. Sejnowski, T. J., Churchland, P. S. & Movshon, J. A. Putting big data to good use in neuroscience. Nat. Neurosci. 17, 1440–1441 (2014).

    13. Kell, D. B. & Oliver, S. G. Here is the evidence, now what is the hypothesis? The complementary roles of inductive and hypothesis-driven science in the post-genomic era. BioEssays 26, 99–105 (2004).

    14. Helmstaedter, M. et al. Connectomic reconstruction of the inner plexiform layer in the mouse retina. Nature 500, 168–174 (2013).

Acknowledgements
The author thanks C. Chang for helpful comments. The author has received funding from the NARSAD Young Investigator Award, the Isaac Newton Trust Research Grant and the Parke Davis Exchange Fellowship.

Competing interests statement
The author declares no competing interests.


TIMELINE

From the neuron doctrine to neural networks

Rafael Yuste

Abstract | For over a century, the neuron doctrine — which states that the neuron is the structural and functional unit of the nervous system — has provided a conceptual foundation for neuroscience. This viewpoint reflects its origins in a time when the use of single-neuron anatomical and physiological techniques was prominent. However, newer multineuronal recording methods have revealed that ensembles of neurons, rather than individual cells, can form physiological units and generate emergent functional properties and states. As a new paradigm for neuroscience, neural network models have the potential to incorporate knowledge acquired with single-neuron approaches to help us understand how emergent functional states generate behaviour, cognition and mental disease.

In a way, the history of neuroscience is the history of its methods. This is evident in the case of the neuron doctrine, which states that the structural and functional unit of the nervous system is the individual neuron1. The neuron doctrine was first enunciated by Cajal2 and Sherrington3 (FIG. 1) and has served as the central conceptual foundation for neuroscience1. This focus on the properties of individual neurons was a natural consequence of the use of single-cell anatomical and physiological techniques, such as the Golgi stain4 or the microelectrode5. The piecemeal reconstruction of neuronal circuits into their individual neuronal components using these methods enabled researchers to decipher the structural plan and design logic of many regions of the brain, with the analysis of the retina providing a remarkable early example6 (FIG. 2a). Furthermore, single-neuron recordings opened up the possibility of functional studies of the cerebral cortex7. Nevertheless, in spite of the enormous advancements in knowledge facilitated by these techniques, a general theory of brain function with the explanatory power to account for behavioural or cognitive states, or to explain mental pathologies, remains elusive. It is possible that the neuron doctrine, with its focus on individual neurons, may be partly to blame.

Unlike the neuron doctrine, neural network models assume that neural circuit function arises from the activation of groups or ensembles of neurons8. According to these models, these ensembles generate emergent functional states that, by definition, cannot be identified by studying one neuron at a time. In fact, it is thought that the brain, unlike other body organs, could be specifically built to generate emergent functional states9. Although the earliest neural network models were formulated in the 1940s10,11, they have only recently become experimentally testable as a result of the development of new optical, electrophysiological and computational tools12–15. Using data generated by these novel methods, neural network models could incorporate the phenomenological insights acquired using single-neuron approaches and also explain phenomena that do not easily fit within single-neuron frameworks.

In this Perspective I describe how the neuron doctrine arose and flourished as a result of the use of single-neuron techniques and consider the resulting limitations of its view of neural circuits. The subsequent growth of neural network models is discussed, highlighting results obtained with new multineuronal recording methods. I suggest that neural network models could be a useful paradigm, or act as guideposts, to understand many brain computations. This article does not provide an exhaustive review but instead illustrates with a small number of examples the transition between these two paradigms of neuroscience.

History of the neuron doctrine

Origins. Many neuroscience textbooks begin by explaining Cajal’s proposal that the unit of the structure of the nervous system is the individual neuron2,16,17 (FIGS 1,2a). This idea, actively debated at the time, contrasted with the ‘reticular theory’ — defended by Golgi himself — which hypothesized that neurons were linked in a single overarching syncytium1. Cajal’s keen observations of physical discontinuities between neuronal processes were proven correct: decades later, the introduction of electron microscopy18 demonstrated synaptic clefts between neurons19,20. The neuron doctrine was the logical extension of Virchow’s cell theory, which itself arose from the works of Leeuwenhoek, Hooke, Schleiden and Schwann, among others, who, using microscopes, described the cell as the basic unit of the structure, reproduction and pathology of all biological organisms21. Partly thanks to an influential review by the renowned anatomist Waldeyer22, the neuron doctrine became widely accepted and developed into the essential conceptual basis for the piecemeal description of the structure of nervous systems6 carried out by early anatomists and many subsequent researchers.

The functional aspect of the neuron doctrine — the hypothesis that individual neurons are also the unit of function in the nervous system — evolved in parallel and was spearheaded by Sherrington3. Closely linked to this was the concept of the receptive field, originally formulated by Sherrington as the area of skin from which a scratch reflex is elicited. This concept was cemented with the development of techniques that enabled investigators to record activity from individual nerve fibres23, revealing that different neurons responded specifically to different sensory stimuli24. Thus, each neuron had its own receptive field: a specific feature of the sensory world that activates it and defines its function.


One example of this concept was the discovery of ‘bug detector’ neurons in the frog retina: neurons with small, motion-sensitive receptive fields that appeared perfectly designed to detect moving flies25 (FIG. 2b–d).

Over the decades, the focus on single neurons and receptive fields became the cornerstone of electrophysiology, especially after the introduction of the tungsten microelectrode by Hubel5. A rich tradition of single-cell recordings, which continues to this day, has mapped receptive fields throughout the brain. Particularly influential were the discoveries of topographically organized receptive fields in cortical ‘columns’ described by Mountcastle26 and by Hubel and Wiesel7,27. These successes crystallized conceptually the idea that the single neuron was not only the anatomical and functional unit of the brain but also its perceptual unit28. Following this logic, for example, at the top of the hierarchy of the mammalian visual system one could find ‘grandmother cells’ that were responsible for the perception of our grandmother28. Consistent with this, ‘face cells’ that responded to images of specific individuals were found in the temporal cortex of monkeys and humans29–31. Moreover, electrical stimulation of a very small number of cortical neurons32, or even of individual neurons33,34, can lead to behavioural alterations in monkeys and rodents, suggesting that individual neurons could represent the functional units of perception or even of the behaviour of the animal.

Limitations. A century after Cajal and Sherrington, it is clear that the nervous system is built out of individual neurons and that their responses can be correlated with particular sensory stimuli, motor actions and behaviours. There is no question that work based on the basic assumptions made by the neuron doctrine has been groundbreaking. At the same time, when examining the historical evolution of neuroscience, one appreciates the direct links between the neuron doctrine and the use of single-neuron methods1. The neuron doctrine was cemented by the Golgi stain4, which enabled investigators to visualize with relative completeness the morphologies of isolated neurons, and by electrodes5, which provided routine recordings of individual neurons in whole brains. It therefore seems quite natural that neuroscientists emphasized the importance of individual neurons in the brain’s structure and function. As in other fields of science, there is a direct link between the techniques used and the concepts and paradigms that arose from these studies21, as investigators cannot make discoveries beyond those that their techniques reveal35. However, as with every established scientific paradigm36, over the years the neuron doctrine may have become limiting.

It is possible, for example, that the concept of receptive fields may have led to an underestimation of the true complexity of neuronal function37. The fact that neurons are specifically activated by particular inputs may not necessarily mean that this is their role in the circuit. It may be too narrow or simplistic to equate neuronal function with the fact that a neuron fires in response to a stimulus: its function could be related to its firing, to the exact time at which it fires, to whether or not it fires in synchrony with or builds a dynamical pattern with other neurons, or even to its lack of firing37. Indeed, even in primary sensory areas, and particularly in awake animals, neurons do not always respond in the same way to identical sensory stimuli38,39, suggesting that their coding could be more sophisticated than originally thought. In fact, organized spontaneous activity appears to be prevalent in many brain regions40–43, particularly in humans44,45. This spontaneous activity, already described in the first electroencephalography (EEG) recordings44, cannot be easily explained from the perspective of receptive fields, as it occurs in the absence of sensory inputs, and thus indicates that neurons could be engaged in intrinsic functions unrelated to sensory stimulus or motor action (FIG. 3).

In addition, when interpreting ‘face neuron’ data31, perhaps one of the strongest pieces of evidence for feature selectivity in receptive fields, it is difficult to understand how the investigators can be lucky enough to find a neuron that codes for the face of a particular person when recording from one neuron at a time in a cortical area that contains hundreds of thousands, or even millions, of neurons. It is more likely that coding for any particular face is distributed across large populations of neurons. A similar argument has been made for finding place cells in the hippocampus46. Thus, the receptive field could be reinterpreted

[Figure 1 | Historical evolution of the neuron doctrine and neural network models. Historical summary of the key single-cell or multicellular experimental or theoretical publications used to support the neuron doctrine or neural network paradigms, on a timeline running from the invention of the Golgi method (1873) to calcium imaging of an entire zebrafish nervous system. CCD, charge-coupled device; EEG, electroencephalography.]


more generally as the single-cell manifestation of distributed circuit states: that is, the activation of a large number of neurons by a stimulus or a location. If this is the case, we should re-examine the assumption that single neurons are the functional units of the nervous system, and instead focus our attention on groups of neurons11,47.

Moving to neural circuits

Structural evidence for distributed circuits. Is there any evidence that groups of neurons, rather than single neurons, serve as functional units in neural circuits? Indeed, there is anatomical evidence to support the notion that most neural circuits, particularly in the mammalian brain, are built with a distributed connectivity: that is, as a connectivity matrix in which each neuron receives inputs from many other neurons while sending its outputs to large populations of cells48,49. Furthermore, the majority of the excitatory connections in the brain are weak, as though each neuron is trying to integrate as many excitatory inputs as possible without saturation50. For example, the average pyramidal neuron in the mammalian cortex probably receives inputs from and connects to tens of thousands of other cells51. More dramatically, each Purkinje cell in the cerebellum probably receives a single input from each of as many as several hundred thousand granule cells, and each granule cell itself connects with as many Purkinje cells as it can, given its axonal length52. This distributed design, which did not escape Cajal’s notice (he compared it to telegraph lines)6, appears to be built to enhance the distribution of information. A distributed design principle is also prominent in inhibitory neurons. Most subtypes of cortical GABAergic interneurons (with the exception of vasoactive intestinal peptide-expressing interneurons)53 appear to connect with as many excitatory neighbours as possible, with a connectivity approaching the physical limit (connection to 100% of local targets)54–56. Moreover, inhibitory neurons are often linked to each other by gap junctions57–59, as though they are designed to work as a unit. In addition, some interneurons release GABA directly onto the neuropil60, affecting all of their local neighbours. Thus, inhibitory neurons appear to be designed to extend a ‘blanket of inhibition’ onto excitatory cells56.

This distributed connectivity plan is also reflected in the biophysical properties of neurons. For example, many mammalian neurons are covered with dendritic spines, which receive essentially all excitatory inputs61. The fact that these excitatory inputs choose to connect on spines and not on neighbouring dendritic shafts indicates that spines must have a fundamental role in neuronal integration62. One possibility is that spines facilitate distributed connectivity by maximizing the assortment of different axons that dendrites can connect to63. Also, by avoiding input saturation, spines could enable the independent integration of each excitatory input while simultaneously allowing the neuron to alter the synaptic strength of each input individually64. These properties only make sense if the neuron is trying to integrate as many different inputs as possible.

Now, if one assumes that neural circuits are built to maximize connectivity, one could then argue that the more connected a neuron is, the less important it becomes in the circuit9. If every neuron is connected with every other neuron, any individual neuron becomes dispensable (like an individual vote in a democracy). Because of this, individual neurons in the mammalian brain are likely to be irrelevant for the overall circuit function, which must depend instead on interactions among a large number of neurons. This design is unique among the organs of the body: the overall function of organs such as the liver, kidney, lung, skin or muscle can, in principle, be comprehended by understanding the function of each of their cells, whereas for the brain one may need to consider the activity of selected populations of cells.

The situation in the nervous system — in which many elements are connected and contribute structurally or functionally to a larger structure — is characteristic of physical systems that generate emergent properties8,65. Emergent properties arise from interactions among elements but are, by definition, not present in the individual elements. Even something as mundane as watching a movie on a TV screen is an example of the importance of emergent properties: one cannot comprehend the scene by looking at individual pixels but instead needs to simultaneously view many pixels to decipher the image. Although the neuron doctrine and single-neuron techniques have focused on the exhaustive analysis of the individual ‘pixels’ of the brain, it is possible that the function of neural circuits may not be apparent unless one can visualize many, or most, ‘pixels’ on the screen.

Neuronal assemblies and spontaneous activity. The idea that neural circuits are built for an emergent function is not new. As early as the 1930s, Cajal’s disciple Rafael Lorente de Nó argued that the structural design of many parts of the nervous system is one of recurrent connectivity whose purpose could be


to generate functional reverberations (patterns of neuronal activity that persist after the initial stimulus has ceased) among groups of neurons66,67. This idea was embraced by Donald Hebb, who proposed that neural circuits worked by sequentially activating groups of neurons, which he called ‘cell assemblies’11. According to Hebb, these recursive and reverberating patterns of neuronal activation, firing in closed loops, would be responsible for generating functional states of the brain, such as memories or specific behaviours11. He proposed that synaptic connections between neurons could be altered by a learning rule (a local change in synaptic strength governed by correlated patterns of activity), thus linking neurons into an assembly68. In doing so, the circuit has ‘learned’ a pattern of activity, storing it in its altered repertoire of synaptic connections.

In parallel with these ideas, a rich phenomenology demonstrated the presence of intrinsic, spontaneous activity in many neural circuits (FIG. 3). Rhythmic types of activity are generated by central pattern generators (CPGs), which are responsible for stereotypical behaviours such as digestion, locomotion or respiration69. The concept of CPGs originated with Sherrington’s student, Graham Brown, who observed the persistence of spinal cord activity in the absence of sensory stimuli70,71. Although the idea ran contrary to Sherrington’s view that neural circuits operate through an input–output sequence of reflexive actions, Sherrington himself later appeared to be open to the importance of intrinsic activity patterns72. Thus, the scientists responsible for the neuron doctrine, Cajal and Sherrington, trained the early pioneers of the alternative viewpoints.

A related line of experimental work, which began with the first use of EEG by Berger44, led to the description of spontaneous electrical oscillations throughout the brain40,73,74. These rhythmic modulations in neuronal activity, which can arise from the dynamical properties of neurons75,76, have been linked to a variety of important functional roles, including attention, brain states, sensory or computational processing, decision-making, perceptual binding and consciousness77–87. The role of spontaneous activity in brain function could be basic and ancient: during evolution, the function of the CNS may have resulted from the encephalization of simpler fixed action pattern rhythms88. From this point of view, repeated or oscillatory firing patterns may no longer correspond to simple rhythmic movements but could have acquired a symbolic or computational meaning88.

Emergent circuit properties

The first neural network models. Neuronal reverberations, neuronal assemblies, ensembles, CPGs and oscillations are examples of functional emergent states that may be of great importance but cannot be captured within a single-neuron framework. These ideas have attracted many theorists, who, over the decades, formalized these emergent models, creating the concept of a neural network8,89,90. The term ‘neural network’ has become synonymous with models of distributed neural circuits in which neurons are abstracted into nodes and linked by connections that change through learning rules8 (FIG. 4). Typically, neurons in neural networks are connected in an all-to-all or a random fashion and integrate inputs linearly, followed by a threshold nonlinearity that causes the cell to fire and activate its outputs.

In the first neural network models10, neurons merely summed inputs to reach a threshold and fire action potentials. If the threshold is set at a high level, the neuron will only fire if many (or all) of its inputs are active. This strategy corresponds to the logical AND function and could be used, for example, to build neurons that are very selective to the conjunction of inputs and to detect and recognize a pattern or particular object. At the same time, if one sets the threshold to a low level, the neuron will fire whenever any of its inputs is active. This corresponds to the logical OR function and enables neurons to respond to a set of inputs, thus generating an invariant response, even if inputs are changing. Hence, even these simple circuits could implement Boolean logic, the mathematical foundation of digital computation, as demonstrated by Turing91. Neural networks thus have, in principle, the computational abilities of the most sophisticated computers. Importantly, these networks generate emergent computations: the overall logic and function implemented in the circuit (for example, object recognition or an invariant response) depends on the activity — or lack of activity — of all of its components.
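To make the threshold logic concrete, here is a minimal sketch (illustrative Python, not code from the original models; the function name and inputs are hypothetical) of a McCulloch–Pitts-style unit in which the same circuit computes AND or OR depending only on where the threshold is set:

```python
def threshold_neuron(inputs, threshold):
    """Binary McCulloch-Pitts-style unit: fire (1) if the summed input reaches threshold."""
    return 1 if sum(inputs) >= threshold else 0

x = [1, 0, 1]                      # three binary input lines
print(threshold_neuron(x, 3))      # high threshold (all inputs): logical AND -> 0
print(threshold_neuron(x, 1))      # low threshold (any input):   logical OR  -> 1
```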

Over the ensuing decades, more complex models were created. These belonged to two basic types, based on their architecture: feedforward networks, which are governed by one-way connections (FIG. 4a), and recurrent networks, in which feedback connectivity is dominant (FIG. 4b). Feedforward networks (sometimes referred to as multilayer perceptrons) are organized in layers and linked by unidirectional connections92. Such circuits can effectively solve problems such as categorization or classification of inputs.
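As a sketch of the layered computation just described (an illustration with arbitrary weights and layer sizes, not a model from the article), a forward pass through a small multilayer perceptron takes only a few lines; the weight names follow the w_ij and w′_jk notation of FIG. 4a:

```python
import numpy as np

rng = np.random.default_rng(0)
w_ij = rng.normal(size=(4, 5))       # input layer (i) -> hidden layer (j) weights
w_jk = rng.normal(size=(5, 2))       # hidden layer (j) -> output layer (k) weights

def step(u):
    return (u > 0).astype(float)     # threshold nonlinearity

x = rng.normal(size=4)               # one input pattern
hidden = step(x @ w_ij)              # hidden-layer activity
output = step(hidden @ w_jk)         # output-layer response (classification)
print(output)
```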

[Figure 2 | Anatomical and physiological examples of the neuron doctrine. a | Cajal’s schematic illustration of a section of a bird retina, depicting individual neurons that were assumed to be the units of the circuit. The arrows indicate the direction of electrical impulses, correctly deduced by Cajal’s application of the neuron doctrine and the law of dynamic polarization. b–d | Physiological application of the neuron doctrine. The diagram of the experiment (panel b) shows the electrical activity of a ganglion cell in a frog’s retina recorded while a visual stimulus is moved with a magnet over a screen (panel c), as well as the electrical activity in response to a stationary stimulus (panel d). Note how the neuron responds vigorously to the moving stimulus but only weakly to the stationary stimulus. The neuron was defined as a ‘bug detector’, because its receptive field matches the movement of a physiological prey of the frog. Thus, the single neuron would have a very specific function, in agreement with the idea that single neurons are the functional units of the circuit. Part a adapted, with permission, from REF. 192 © The Nobel Foundation 1906. Parts b–d adapted, with permission, from REF. 25 © 1960 Maturana et al. Journal of General Physiology. 43:129–175. doi:10.1085/jgp.43.6.129.]


Although originally viewed with suspicion by the artificial intelligence community93, feedforward networks have recently undergone a renaissance in computer science, through the development of novel training rules, an expansion in the number of layers, and access to large-scale datasets and better hardware implementations (convolutional or deep belief networks)94,95.

Recurrent networks, by contrast, emphasize feedback connections between pools of neurons. In some models, the recurrent connectivity enables these networks to generate intrinsic activity that becomes stable at particular points in time, termed attractors96. Attractor models were inspired by the Ising model of ferromagnetism, in which individual atomic spins interact with neighbouring spins and spontaneously align into emergent states by minimizing an energy variable97. Likewise, in a recurrent neural network with symmetric connections (in which synapses between any pair of neurons have the same synaptic strength), one can define an ‘energy’ function that assigns a value to any activity pattern to measure the propensity of the network to change its activity. It can be demonstrated mathematically that this energy tends to decrease, endowing the network with a dynamical trajectory that coalesces into several lower-energy states. Because of this, the activity map for such networks contains multiple stable points, which ‘attract’ the activity; hence the term ‘attractors’96,98 (FIG. 4c). Attractors are another example of the emergent states of the activity of the network and could serve to implement associative memories, decision-making, or — more generally — solutions to optimization or other computational problems99,100. Moreover, the trend towards lower-energy states endows these networks with pattern-completion properties: that is, the internal dynamics of the system can ‘complete’ a spatiotemporal pattern of activity when provided with a partial stimulus. Pattern completion is found in memory recall and many neuroethological fixed action patterns101,102.
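A minimal Hopfield-style sketch (illustrative Python with assumed parameters, not code from the cited work) shows both ingredients at once: symmetric Hebbian weights define an energy E = -1/2 Σ_ij w_ij s_i s_j that asynchronous threshold updates can only decrease, so a corrupted cue relaxes back to the stored pattern:

```python
import numpy as np

rng = np.random.default_rng(1)
pattern = rng.choice([-1, 1], size=50)           # one stored binary activity pattern
W = np.outer(pattern, pattern).astype(float)     # Hebbian, symmetric weights
np.fill_diagonal(W, 0.0)                         # no self-connections

def energy(s):
    return -0.5 * s @ W @ s                      # Hopfield energy of a network state

state = pattern.copy()
flip = rng.choice(50, size=10, replace=False)
state[flip] *= -1                                # corrupt 10 of 50 units (partial cue)

for _ in range(5):                               # asynchronous threshold updates
    for i in rng.permutation(50):
        state[i] = 1 if W[i] @ state >= 0 else -1

print(energy(state), np.array_equal(state, pattern))   # low energy, pattern completed
```

Each update lowers (or preserves) the energy, which is exactly the descent into an attractor ‘valley’ depicted in FIG. 4c.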

Recent neural network models. Starting with the original models of McCulloch and Pitts10, neural networks were traditionally based on circuits that had an all-to-all or widespread connectivity, in which the exact spatial pattern of the connections did not matter. At the same time, connections in the brain often have particular spatial properties. For example, inhibition tends to mostly affect local neighbours (known as lateral inhibition)103. This was explored in one set of neural network models, in which adding a spatially local profile to the connectivity enabled networks to implement competitive ‘winner takes all’ algorithms, in which individual neurons stand out among their neighbours, stifling their activity. These algorithms perform pattern separation: that is, they differentiate similar inputs by having them excite different sets of neurons, thus ‘placing’ them into different locations of the activity map of the network104,105. Interestingly, these excitatory–inhibitory networks were able to spontaneously assemble into self-organizing maps in which the computational variables of the input space became systematically ordered onto the planar physical structure of the network106. This may be particularly interesting for neurobiologists because many areas of the brain have sensory, motor or cognitive maps, and perhaps lateral inhibitory connections could help to build these maps spontaneously during development.
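The competitive dynamics can be sketched in a few lines (a toy rate model with arbitrary parameters, offered only to illustrate the ‘winner takes all’ idea, not taken from the cited models): each unit is driven by its input, inhibited in proportion to the summed activity of the others, and rectified, so the most strongly driven unit ends up silencing the rest:

```python
import numpy as np

inputs = np.array([0.3, 0.5, 0.9, 0.4, 0.2, 0.8, 0.1, 0.6])   # feedforward drives
k = 0.95                                   # strength of lateral inhibition
rate = inputs.copy()

for _ in range(300):                       # Euler integration of rate dynamics
    drive = inputs - k * (rate.sum() - rate)              # input minus inhibition
    rate = np.clip(rate + 0.1 * (drive - rate), 0.0, None)  # leaky, rectified update

print(rate.round(3))    # only the most strongly driven unit remains active
```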

Continuing with this trend, recent generations of neural network models have tried to better capture known structural and functional features of brain circuits107–113. In fact, unlike the original attractor networks (which assumed all-to-all, symmetric connections between neurons, and were deterministic as they were locked into discrete stable activity states), an entirely new type of recurrent neural network (which is stochastic, not deterministic) allows weights to be asymmetric and exhibits transient dynamical patterns without stable states109. Moreover, the asymmetry in the synaptic connectivity matrix naturally endows these models with temporally organized activity89. In fact, many of these newer dynamical network models can produce repeated temporal patterns in the firing of the neurons114, which — because of the recurrent connectivity — can be generated in the absence of input to the network. Spatiotemporal patterns of activity are produced in recurrent dynamical models by spike-timing-dependent synaptic plasticity and could be used as an emergent substrate for neural coding115.
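The difference that asymmetry makes can be seen in a toy simulation (a standard random rate network; the gain and network size here are assumptions chosen for illustration): with strong, asymmetric random weights, the activity never settles into a fixed point but keeps producing rich transient patterns:

```python
import numpy as np

rng = np.random.default_rng(2)
n, g, dt = 100, 1.5, 0.05
W = g * rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, n))   # asymmetric random weights

x = rng.normal(size=n)                    # initial network state
trace = []
for _ in range(2000):
    x = x + dt * (-x + W @ np.tanh(x))    # leaky rate dynamics
    trace.append(np.tanh(x[0]))           # record one unit's firing rate

print(min(trace), max(trace))   # the unit keeps fluctuating; no stable state is reached
```

With a symmetric W, the same dynamics would instead relax towards stable attractor states; the asymmetry is what sustains the ongoing, temporally organized activity.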

Through these refinements, newer neural networks are becoming useful for experimentalists as models of neural circuits, effectively capturing the recurrent nature of excitatory neural connections and the intrinsic firing of neurons in the absence of stimuli, as observed, for example, during working memory tasks116,117. Furthermore, recurrent models can also be used to explain binary circuit states, such as those that must occur during decision-making111, or to provide continuous solutions to computational problems, as often observed during smooth physiological responses107.

Importantly, in neural network models the computation is an emergent collective property, carried out by the assembly of neurons rather than by single cells96,118. In fact, individual neurons can participate in different functional groups, flexibly reorganizing themselves and diluting the concept of the receptive field. This combinatorial flexibility, originally proposed by Hebb11, is a natural consequence of synaptic plasticity, and it also allows the modular composition of small assemblies into larger ones. Because of this flexibility, neural circuits may never be in the same functional state twice, responding differently even if the exact same sensory stimulus is presented. Neural circuits could be constantly changing, as if they were a ‘liquid state’ machine109,119. This could be used as an emergent mechanism to encode time120, providing different time stamps to different moments121.

Experimental evidence for emergent properties. The possibility that neural circuits generate emergent states of activity is fascinating, but is there any evidence that biological neural circuits actually operate as such

[Figure 3 | Spontaneous cortical activity. The figure illustrates one of the first electroencephalograms by Hans Berger (1929)44, recorded from his son Klaus (15 years old). The upper trace represents a sample of the ‘alpha rhythm’ (a sinusoidal rhythm of approximately 10 Hz), often found in the visual cortex when the eyes are closed (thus in the absence of visual stimulation). The lower trace is a generated 10 Hz sine wave, for reference44. Patterned spontaneous activity is present throughout the nervous system and is an example of a phenomenon that cannot be easily explained by the Sherringtonian physiological neuron doctrine because it occurs in the absence of sensory inputs. Rather, spontaneous activity is likely to be generated by interactions among groups of neurons and indicates that neurons could be engaged in intrinsic functions, unrelated to a sensory stimulus or motor action. Adapted from REF. 44, Steinkopff-Verlag, with kind permission from Springer Science and Business Media.]


neural networks? From a naive point of view, if one assumes that a neural network simply consists of interconnected neurons, every neural circuit is indeed a neural network, and no experimental evidence is needed. A more relevant question is whether these feedforward or recurrent neural network models have any validity in explaining the phenomenology measured in brain circuits. Is there any evidence for emergent states of activity that may make it necessary to use these neural network models? Are neural network models helpful for understanding how neural circuits operate?

One could argue that traditional single-cell circuit models can be explained as particular examples of feedforward or recurrent neural networks. For example, the Hubel and Wiesel model for orientation selectivity is equivalent to a multilayer perceptron performing conjunction or disjunction89. Likewise, oscillatory dynamics present throughout the CNS can be reinterpreted as reverberating activity patterns generated by recurrent neural networks with stable dynamical trajectories73,88,122.

In some cases, neural network models have already been used by researchers to help design and interpret their experiments. In particular, the circuit architecture of the mammalian hippocampus has been proposed to represent a series of sequential feedforward and recurrent neural networks123, which generate attractors46,124. Attractor networks have also been used to model grid cells in the entorhinal cortex125–127 and to explain their remapping in new environments128 — something that is hard to understand from a single-neuron point of view. Pattern separation, pattern completion and replay, which are well-known properties of recurrent neural networks100,123, are also found in hippocampal activity129,130. Furthermore, network models are being used to guide the optogenetic manipulation of hippocampal circuits in mice to enable feats that include activating a memory131 or implanting ‘false’ memories by activating a neuronal ensemble132.

Similarly, neural network models have been used to understand emergent functional properties of the cerebral cortex133,134. For example, repeated temporal sequences of action potentials described in vivo135–137, and even in brain slices138, could result from a recurrent neural network architecture. In fact, some of these stimulus-evoked activity patterns are similar to those that occur spontaneously41,43,139–141, as would be predicted from some dynamical network models115. Also, neural network models based on the multidimensional representation of information by neuronal ensembles have recently been used to explain, for example, context-dependent coding114, multidimensional selectivity in the functional responses of neurons in the prefrontal cortex142, and complex motor actions in awake behaving monkeys143. In these studies, multidimensional activity patterns appear to repeat in a systematic fashion during the performance of the behavioural task (FIG. 5a).

Recent evidence for the existence of emergent circuit states in the mammalian cortex comes from experiments on mice navigating a virtual maze144 (FIG. 5a). Researchers used two-photon calcium imaging to measure the activity of groups of neurons in the parietal cortex while the mouse made a behavioural choice based on visual cues. Although single-neuron activity could not be used to explain decision-making, the temporal trajectory of the population of neurons could be used to decode the behaviour, indicating the possible existence of an emergent code. Strikingly, the temporal sequences of firing were predictive of the behavioural choice (FIG. 5b). These experiments echo earlier work on the behavioural switching of leeches between swimming and crawling, in which the dynamical activity of a population of neurons in the ganglion could be used to decode and predict a behavioural choice145.
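The logic of such population decoding can be illustrated on simulated data (this sketch uses made-up activity, not the published recordings): each neuron carries only weak choice information, yet a simple linear decoder reading the whole population predicts the trial-by-trial choice well above chance:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_trials, n_neurons = 200, 50
choice = rng.integers(0, 2, size=n_trials)            # behavioural choice per trial
signal = rng.normal(0.0, 0.2, size=n_neurons)         # weak per-neuron choice tuning
activity = rng.normal(size=(n_trials, n_neurons)) + np.outer(choice, signal)

decoder = LogisticRegression()                        # linear population decoder
accuracy = cross_val_score(decoder, activity, choice, cv=5).mean()
print(f"cross-validated decoding accuracy: {accuracy:.2f}")   # well above 0.5
```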

[Figure 4 | Neural networks. Examples of common types of neural network models. a | Feedforward network. The diagram shows a multilayer perceptron, consisting of three sequential layers of neurons (represented by circles), in which every neuron from each layer is connected to every neuron of the next layer. Each connection has an associated synaptic strength or ‘weight’ (w_ij or w′_jk) that changes according to a learning rule that is applied to all the connections; w has a numerical value and is indexed by the presynaptic neuron and the postsynaptic neuron, which generate the connection. In this network, inputs are sequentially processed layer by layer in a unidirectional fashion, from the input layer on the left, to the ‘hidden’ layer in the middle, to the output layer on the right. The simple addition of synaptic weights in the output layer results in the generation of selective responses. The computation is an emergent property of the activity of the entire network. b | Recurrent network: an example of an attractor (feedback) neural network in which four pyramidal neurons (blue) are connected to themselves through recurrent axons (thin lines) with synaptic weights (w_ij) that change owing to a learning rule. The network receives an external set of inputs (top connections) and generates an output (bottom arrows). In networks with recurrent and symmetric connectivity the activity becomes ‘attracted’ to particular stable patterns. c | Recurrent network: an activity map of an attractor neural network (Hopfield model). Each point in the grid represents a particular state of activity of the entire network, and the three-dimensional height of the map represents the ‘energy’ of the network in that particular activity pattern. This energy is a mathematical function that captures the propensity of the network to change its activity. The landscape thus represents all possible activity patterns, where the ‘valleys’ (red dots) are circuit attractors, which represent stable (that is, low energy) states of activity. The dashed circle represents an attractor ‘basin’ in which the network activity patterns converge into the attractor. Part b reproduced, with permission, from REF. 193, by permission of Oxford University Press. Part c reprinted from REF. 194.]



New methods to study networks

It is not an accident that the experiments that provide the strongest support for neural network properties have been performed with multineuronal recording techniques56,146,147, highlighting the ties between techniques and scientific paradigms35. Moving beyond the microelectrode5, advances in electrical recordings — such as the EEG44, the development of tetrodes148, multi-electrode arrays149 and nanofabricated high-density complementary metal-oxide semiconductor (CMOS) arrays150 — have enabled neurophysiologists to record population-wide activities and decipher coding properties and the functional connectivity of circuits such as those in the retina151. A similar case could be made for optical recordings of neuronal activity. From the initial development of organic calcium152 or voltage153,154 indicators to the more recent genetically encoded indicators155–158, it has become possible to measure the activity of many — or, in some cases, most42,159 — neurons in a neural circuit. These advances in optical probe design and synthesis have been accompanied by a similar revolution in optical hardware. From the introduction of cooled charge-coupled device (CCD) cameras160, which enabled quantitative optical imaging from different regions of a neuron, to the development of ultrafast infrared lasers that enabled two-photon microscopy161, which allowed imaging of neurons deep in living brain circuits162,163, and to more recent optical designs for three-dimensional imaging of neural activity42,164, these new methods are bringing not just a quantitative change in the amount of data acquired but a qualitative modification in the mindset with which neuroscientists approach neural circuits. Besides microscopy, new optical or magnetic methods to image the activity of entire cortical areas should also be highlighted, although they do not yet possess the spatial resolution to visualize individual neurons. For example, intrinsic signal imaging165 has enabled visualization of the functional architecture of cortical areas with unprecedented resolution166. Also, the development of functional MRI167 has enabled the pinpointing of critical regions of the brain involved in specific behaviours, mental states or disease processes in human subjects. These large-scale imaging methods are starting to build bridges between neural circuits and topics at the core of psychology168 and as complex as consciousness169.

    Novel techniques have also been devel-oped to optically alter the activity of neural circuits, such as optogenetics12 or opto-chemistry170,171. This optical large-scale manipulation of neural circuits can be car-ried out while preserving single-cell resolu-tion172–174, while simultaneously imaging neuronal activity172,175, thus allowing one to ‘play the piano’ with neuronal circuits in order to generate spatiotemporal patterns of activity with the same precision as the ones encountered naturally.

    Finally, novel computational and ana-lytical approaches have been developed to analyse and decipher the meaning of multi-neuronal datasets. Using dimensionality reduction methods143, dynamical systems analysis176, information theoretic frame-works177 and a rich variety of other novel theoretical tools15,113, researchers can visualize and understand multidimensional neuronal dynamics in ways that enable them to probe brain circuits at the multicellular level.
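As a concrete illustration of the dimensionality reduction step, the following minimal sketch (Python with NumPy; the simulated data, sizes and noise level are illustrative assumptions) projects a neurons-by-time activity matrix onto its first three principal components, yielding low-dimensional population trajectories of the kind shown in Figure 5c.

import numpy as np

rng = np.random.default_rng(1)

# Simulated population activity: 100 neurons x 50 time points, driven by
# a shared three-dimensional latent signal plus noise (a stand-in for
# calcium-imaging dF/F data).
n_neurons, n_time = 100, 50
t = np.linspace(0, 2 * np.pi, n_time)
latents = np.stack([np.sin(t), np.cos(t), np.sin(2 * t)])   # 3 x T
mixing = rng.normal(size=(n_neurons, 3))
X = mixing @ latents + 0.1 * rng.normal(size=(n_neurons, n_time))

# PCA via the singular value decomposition: centre each neuron, then
# project the population activity onto the top three components.
Xc = X - X.mean(axis=1, keepdims=True)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
trajectory = U[:, :3].T @ Xc   # 3 x T: the population state over time

explained = S[:3] ** 2 / np.sum(S ** 2)
print("variance explained by top 3 PCs:", explained.round(3))

Because the simulated activity is built from three latents, almost all of the variance lands in the first three components; with real recordings, the number of components to keep is itself an analysis choice.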

Challenges and outlook
Despite the very good progress made over more than a century using the neuron doctrine as a foundation, neuroscience still lacks a general theory of how neural circuits operate, how they generate behaviour or mental states, and how their dysfunction leads to mental or neurological diseases. I would argue that this may be due partly to the methodological focus on single cells, which — despite propelling the field forward — has left multicellular phenomenology and its corresponding emergent properties relatively unexplored. Although one can, in principle, study circuit-level properties with single-neuron techniques (such as local field potentials that monitor the aggregate activity of groups of neurons, or even whole-cell recordings that provide access to the population of excitatory or inhibitory inputs onto an individual cell), one may still miss emergent circuit properties unless more comprehensive measurements of population activity are made. In this respect, the above-mentioned new methods to measure multineuronal activity in vitro or in vivo14,149,178–180 or to analyse and model multidimensional and dynamical activity37,109,181 may usher in a Kuhnian 'scientific revolution'36, in which the single-neuron doctrine taught in textbooks is replaced by a new neural network paradigm that assumes that assemblies of neurons are the basic building blocks of the function of the brain.

Figure 5 | Emergent functional states in multineuronal dynamics during virtual navigation. a | A virtual navigation task is shown. A T-maze is projected in a virtual reality arena. A mouse runs along a linear track and has to choose to turn right or left depending on the cue that is presented to it via patterns present on the virtual maze walls. b | Repeated spatiotemporal dynamics are observed during behaviour. The colour panels show the activity of a population of neurons in the mouse parietal cortex during the virtual navigation task, measured using two-photon calcium imaging. Each panel displays the calcium-related fluorescence (ΔF/F) in pseudocolour for every individual cell (y axis), as a function of time. The multineuronal activity from 101 cue-preferring cells (left) or 170 turn-preferring cells (right) is aligned to the trial start and turn onset, and displays a smooth progression in time of the activity through the population. Whereas individual neurons are activated at variable times, the overall activity faithfully tracks the behaviour of the animal. c | Choice-specific multineuronal trajectories. Analysis of similar data to that shown in part b. Here, the multineuronal activity is now condensed into three-dimensional plots of principal component axes. The left panel shows the time course of average multidimensional dynamical trajectories on the right (red) and left (blue) choice trials from one session. Points labelled 1, 2 and 3 correspond to the times of the cue offset, turn onset and trial end, respectively. The right panel superimposes several individual (thin lines) and mean (thick lines) trajectories for correct trials. Note how the trajectories of the activity of the neuronal population differ on the right and left trials, yet are similar within each type of trial. Thus, one can decode the behaviour of the animal from the multineuronal activity patterns, as an emergent property of its dynamics. Figure modified from REF. 144, Nature Publishing Group.



However, the adoption of neural networks as a new paradigm faces some potential challenges, at least when one considers current models. For example, it is unclear whether existing neural network models have enough predictive value to be considered valid or useful for explaining brain circuits. Given the nonlinearity of the interactions among neurons present in most neural network models, numerical simulations can result in vastly different outcomes if they have too many free parameters. Alternatively, the same outcome can be generated from many different network simulations, underspecifying any biological predictions. Thus, it could become difficult to disentangle how current models of neural circuits generate dynamically structured or emergent functional states. Because of this, it is possible that although artificial neural networks could operate well in principle and even be very useful for engineering applications, in order to be applied rigorously to realistic neural circuits they may need to be constrained with quantitative data, which are still not available.
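One way to see the underspecification problem is in the simplest possible setting: for a linear rate network, any invertible change of basis of the weight matrix yields a different 'circuit' with exactly the same observable output, so recorded dynamics alone cannot identify the underlying connectivity. A minimal sketch (Python with NumPy; the linear model and all sizes are chosen purely for illustration):

import numpy as np

rng = np.random.default_rng(2)

# Linear rate network: x_{t+1} = W x_t, with readout y_t = C x_t.
n = 5
W = rng.normal(size=(n, n)) * 0.3
C = rng.normal(size=(1, n))
x0 = rng.normal(size=n)

# A change of basis (W2 = S W S^-1, C2 = C S^-1, x0_2 = S x0) defines a
# network with different weights but identical input-output behaviour.
S = rng.normal(size=(n, n))
S_inv = np.linalg.inv(S)
W2, C2, x0_2 = S @ W @ S_inv, C @ S_inv, S @ x0

def readout(W, C, x, steps=20):
    ys = []
    for _ in range(steps):
        ys.append((C @ x).item())
        x = W @ x
    return np.array(ys)

print(np.allclose(readout(W, C, x0), readout(W2, C2, x0_2)))  # True

Nonlinear network models lose this exact symmetry, but in practice they too admit many parameter settings that reproduce the same activity, which is the difficulty described above.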

In this respect, although there is increasing evidence supporting some of these neural network models, the data are still correlative and critical experiments to demonstrate their importance or disprove them have not yet been carried out. Novel methods to systematically modify or manipulate neuronal activity at the population level are key, because they can directly reveal causal interactions and test the validity of these emergent-level models. Perhaps the new tools generated by the BRAIN initiative182,183 to measure, manipulate or analyse multineuronal activity could critically contribute to the refining and proper testing of neural network models. It should also be pointed out that, in addition to gathering and analysing the data, it is equally important to generate an organizational framework to store, distribute and share these data in a fashion whereby knowledge could be gained from the parallel efforts of the entire research community.

Simply recording from more neurons, or even manipulating large numbers of them, may not suffice and may only be a first step. Developing an understanding of how neural circuits work may require integration of essential knowledge from many — or all — levels, with a detailed characterization of the way in which the elements at different levels work together and interact. This is not a new idea: Marr emphasized the interconnectivity of the different levels as a necessity for acquiring a proper knowledge of how vision works118. Earlier than this, Kant pointed out that science is a ladder in which every rung is connected to those above and below it, and it is only once the facts become properly connected to the ladder that they finally become knowledge184. To be truly paradigm shifting, neural circuit models must assimilate the knowledge of single-cell properties and interactions that was painstakingly acquired by the past century of research, as well as multineuronal data acquired with EEGs, local field potential and multi-electrode recordings. Moreover, a proper synthesis needs to be carried out, integrating the new anatomical and physiological large-scale datasets (termed 'structural' and 'functional' connectomics), and evaluating how neuromodulators can alter their function185,186.

Finally, it should be noted that research based on the principles of the neuron doctrine is far from being finished1,16. There are still some important questions remaining about the function of individual cells, for example, what local computations are carried out by dendrites187 (which in some cases serve as both input and output devices)49. The future integration of different levels of analysis by neural network models should be able to incorporate and explain, from a circuit perspective, this neuron subcompartment phenomenology.

    Glossary

Attractors
Stable or semi-stable states in the temporal dynamics of the activity of a neuronal population. They arise naturally in neural networks that have a recurrent (feedback) architecture with symmetric connections.

Boolean logic
A form of algebra in which all values are reduced to either true or false. Boolean logic is especially important for computer science because it fits nicely with its binary numbering system. Boolean logic depends on the use of three logical operators: AND, OR and NOT.

BRAIN initiative
The Brain Research through Advancing Innovative Neurotechnologies (BRAIN) initiative is a decade-long large-scale scientific project, sponsored by the White House, to accelerate the development and application of innovative neurotechnologies to revolutionize the understanding of the brain.

Activity map
In a neural network context, the activity map is a three-dimensional representation of all the activity states of the network, where the depth dimension corresponds to the energy function of the activity, which captures the propensity of the network activity to change. This topological representation provides an intuition of how the activity of the circuit evolves in time, as it progresses through this energy landscape to find its lower-energy (attractor) points.

Ensembles
A group of neurons that show spatiotemporal co-activation. Ensembles provide an example of an emergent state of the circuit.

Gap junctions
Cellular specializations that allow the non-selective passage of small molecules between the cytoplasm of adjacent cells. They are formed by channels termed connexons, which are multimeric complexes of proteins known as connexins. Gap junctions are structural elements of electrical synapses.

Golgi stain
A staining technique introduced by Camillo Golgi in 1873 that involves impregnating the tissue with silver nitrate. This labels a random subset of neurons, allowing the entire cell and its processes to be visualized.

Grid cells
Neurons in the rodent entorhinal cortex that fire when the animal is at one of several specific locations in an environment; these locations are organized in a grid-like manner.

Learning rule
The alteration of the strength of a synaptic connection in a neural network, as a consequence of the pattern of activity experienced by that synapse (or the network).

Neuronal assemblies
Originally proposed by Hebb; groups of neurons that become bound together owing to synaptic plasticity, and whose coordinated activity progresses through the circuits, often in a closed loop.

Pattern completion
A process by which a stored neural representation is reactivated by a cue that consists of a subset of that representation.

Pattern separation
A process by which overlapping neural representations are separated to keep episodes independent of each other in memory.

Perceptrons
Multilayer feedforward artificial neural networks in which activity flows unidirectionally from one layer to the next. Multilayer perceptrons are often used to implement classification problems.

Place cells
Hippocampal neurons that specifically respond to stimuli in certain spatial locations. Their firing rate increases when an animal or subject approaches the respective location.

Recurrent connectivity
The concept that neurons within a class connect with one another, implying feedback communication within the network.

Replay
Recapitulation of experience-dependent patterns of neural activity previously observed during awake periods.



In closing, neural network models based on the conjoint activity of groups of neurons could explain the phenomenology described with single-neuron approaches, but may also go beyond that and help explain observations that did not fit the single-neuron mould. If successful, neural network models could help to reveal the nature of the neuronal code and reformulate classical — yet still unanswered — questions in neuroscience, such as the physiological basis of learning and memory, perception, motor planning, ideation and mental states, within an emergent theoretical framework. A new framework may help us to take a fresh look at data, reveal novel phenomena, and perhaps help generate a unified theory about how neural circuits give rise to behaviour and mental or pathological states.

Rafael Yuste is at the Neurotechnology Center and Kavli Institute of Brain Sciences, Departments of Biological Sciences and Neuroscience, Columbia University, New York, New York 10027, USA.

e-mail: [email protected]

doi:10.1038/nrn3962
Published online 8 July 2015

    1. Shepherd, G. M. Foundations of the Neuron Doctrine (Oxford Univ. Press, 1991).

    2. Ramón y Cajal, S. Estructura de los centros nerviosos de las aves. Rev. Trim. Histol. Norm. Pat. 1, 1–10 (1888) (in Spanish).

    3. Sherrington, C. S. Observations on the scratch-reflex in the spinal dog. J. Physiol. 34, 1–50 (1906).

    4. Golgi, C. Sulla struttura della sostanza grigia del cervello. Gazz. Med. Ital. (Lombardia) 33, 244–246 (1873) (in Italian).

    5. Hubel, D. H. Tungsten microelectrode for recording from single units. Science 125, 549–550 (1957).

6. Ramón y Cajal, S. La Textura del Sistema Nervioso del Hombre y los Vertebrados 1st edn (Moya, 1899).

    7. Hubel, D. H. & Wiesel, T. N. Receptive fields, binocular interaction and functional architecture in the cat’s visual cortex. J. Physiol. 160, 106–154 (1962).

    8. Churchland, P. S. & Sejnowski, T. The Computational Brain (MIT Press, 1992).

    9. Yuste, R. Dendritic spines and distributed circuits. Neuron 71, 772–781 (2011).

    10. McCulloch, W. S. & Pitts, W. A logical calculus of the ideas immanent in nervous activity. Bull. Math. Biophys. 5, 115–133 (1943).

    11. Hebb, D. O. The Organization Of Behaviour (Wiley, 1949).

    12. Boyden, E. S., Zhang, F., Bamberg, E., Nagel, G. & Deisseroth, K. Millisecond-timescale, genetically targeted optical control of neural activity. Nat. Neurosci. 8, 1263–1268 (2005).

    13. Yuste, R. (ed) Imaging: A Laboratory Manual (Cold Spring Harbor Press, 2011).

    14. Berenyi, A. et al. Large-scale, high-density (up to 512 channels) recording of local circuits in behaving animals. J. Neurophysiol. 111, 1132–1149 (2014).

    15. Dayan, P. & Abbott, L. F. Theoretical Neuroscience (MIT Press, 2001).

    16. Bullock, T. H. et al. Neuroscience. The neuron doctrine, redux. Science 310, 791–793 (2005).

    17. Kandel, E. R., Schwartz, J. H., Jessell, T. M., Siegelbaum, S. A. & Hudspeth, A. J. Principles of Neural Science 5th edn (McGraw-Hill, 2013).

    18. Porter, K. R., Claude, A. & Fullam, E. F. A study of tissue culture cells by electron microscopy: methods and preliminary observations. J. Exp. Med. 81, 233–246 (1945).

    19. DeRobertis, E. D. P. & Bennett, H. S. Some features of the submicroscopic morphology of synapses in frog and earthworm. J. Biophys. Biochem. Cytol. 1, 47–58 (1955).

20. Palay, S. L. Synapses in the central nervous system. J. Biophys. Biochem. Cytol. 2, 193–201 (1956).

    21. Magner, L. N. A History of the Life Sciences (Marcel Dekker, 1979).

22. von Waldeyer-Hartz, H. W. G. Ueber einige neuere Forschungen im Gebiete der Anatomie des Centralnervensystems. Dtsch Med. Wochenschr. 17, 1213–1356 (1891) (in German).

    23. Adrian, E. D. The Basis of Sensation (W. W. Norton & Co., 1928).

    24. Hartline, H. K. The response of single optic nerve fibres of the vertebrate eye to illumination of the retina. Am. J. Physiol. 121, 400–415 (1938).

    25. Maturana, H. R., Lettvin, J. Y., McCulloch, W. S. & Pitts, W. H. Anatomy and physiology of vision in the frog (Rana pipiens). J. Gen. Physiol. 43, 129–175 (1960).

    26. Mountcastle, V. B. Modality and topographic properties of single neurons of cat’s somatosensory cortex. J. Neurophysiol. 20, 408–443 (1957).

    27. Hubel, D. H. Eye, Brain and Vision (Scientific American Library, 1988).

    28. Barlow, H. B. Single units and sensation: a neuron doctrine for perceptual psychology? Perception 1, 371–394 (1972).

    29. Desimone, R., Albright, T. D., Gross, C. G. & Bruce, C. Stimulus-selective properties of inferior temporal neurons in the macaque. J. Neurosci. 4, 2051–2062 (1984).

    30. Tanaka, K. Inferotemporal cortex and object vision. Annu. Rev. Neurosci. 19, 109–139 (1996).

31. Kreiman, G., Koch, C. & Fried, I. Category-specific visual responses of single neurons in the human medial temporal lobe. Nat. Neurosci. 3, 946–953 (2000).

    32. Salzman, C. D. & Newsome, W. T. Neural mechanisms for forming a perceptual decision. Science 264, 231–237 (1994).

    33. Brecht, M., Schneider, M., Sakmann, B. & Margrie, T. W. Whisker movements evoked by stimulation of single pyramidal cells in rat motor cortex. Nature 427, 704–710 (2004).

    34. Houweling, A. R. & Brecht, M. Behavioural report of single neuron stimulation in somatosensory cortex. Nature 451, 65–68 (2008).

    35. Dyson, F. J. History of science. Is science mostly driven by ideas or by tools? Science 338, 1426–1427 (2012).

    36. Kuhn, T. S. The Structure of Scientific Revolutions (Univ. of Chicago Press, 1963).

    37. Fairhall, A. The receptive field is dead. Long live the receptive field? Curr. Opin. Neurobiol. 25, ix–xii (2014).

    38. Gallant, J. L., Connor, C. E. & Van Essen, D. C. Responses of visual cortical neurons in a monkey freely viewing natural scenes. Soc. Neurosci. Abstr. 20, 838 (1994).

    39. Ko, H. et al. Functional specificity of local synaptic connections in neocortical networks. Nature 473, 87–91 (2011).

    40. Steriade, M., Gloor, P., Llinás, R. R., Lopes da Silva, F. & Mesulam, M. M. Basic mechanisms of cerebral rhythmic activities. Electroencephalogr. Clin. Neurophysiol. 30, 481–508 (1990).

    41. Kenet, T., Bibitchkov, D., Tsodyks, M., Grinvald, A. & Arieli, A. Spontaneously emerging cortical representations of visual attributes. Nature 425, 954–956 (2003).

    42. Ahrens, M. B., Orger, M. B., Robson, D. N., Li, J. M. & Keller, P. J. Whole-brain functional imaging at cellular resolution using light-sheet microscopy. Nat. Methods 10, 413–420 (2013).

    43. Miller, J. E., Ayzenshtat, I., Carrillo-Reid, L. & Yuste, R. Visual stimuli recruit intrinsically generated cortical ensembles. Proc. Natl Acad. Sci. USA 111, E4053–E4061 (2014).

    44. Berger, H. Über das elektrenkephalogramm des menschen. Arch. Psychiatr. Nervenkr. 87, 527–570 (1929) (in German).

    45. Fox, M. D. & Raichle, M. E. Spontaneous fluctuations in brain activity observed with functional magnetic resonance imaging. Nat. Rev. Neurosci. 8, 700–711 (2007).

    46. O’Keefe, J. & Dostrovsky, J. The hippocampus as a spatial map. Preliminary evidence from unit activity in the freely-moving rat. Brain Res. 34, 171–175 (1971).

    47. Buzsaki, G. Neural syntax: cell assemblies, synapsembles, and readers. Neuron 68, 362–385 (2010).

    48. Braitenberg, V. & Schüz, A. Anatomy of the Cortex (Springer, 1998).

    49. Shepherd, G. M. The Synaptic Organization of the Brain (Oxford Univ. Press, 1990).

50. Abeles, M. Corticonics (Cambridge Univ. Press, 1991).

    51. Peters, A. & Jones, E. G. (eds) Cerebral Cortex (Plenum, 1984).

    52. Llinás, R. & Sotelo, C. (eds). The Cerebellum Revisited (Springer, 1992).

    53. Pfeffer, C. K., Xue, M., He, M., Huang, Z. J. & Scanziani, M. Inhibition of inhibition in visual cortex: the logic of connections between molecularly distinct interneurons. Nat. Neurosci. 16, 1068–1076 (2013).

    54. Fino, E., Packer, A. M. & Yuste, R. The logic of inhibitory connectivity in the neocortex. Neuroscientist 19, 228–237 (2013).

    55. Avermann, M., Tomm, C., Mateo, C., Gerstner, W. & Petersen, C. C. H. Microcircuits of excitatory and inhibitory neurons in layer 2/3 of mouse barrel cortex. J. Neurophysiol. 107, 3116–3134 (2012).

    56. Karnani, M. M., Agetsuma, M. & Yuste, R. A blanket of inhibition: functional inferences from dense inhibitory connectivity. Curr. Opin. Neurobiol. 26, 96–102 (2014).

    57. Gibson, J. R., Beierlein, M. & Connors, B. W. Two networks of electrically coupled inhibitory neurons in neocortex. Nature 402, 75–79 (1999).

    58. Galarreta, M. & Hestrin, S. A network of fast-spiking cells in the neocortex connected by electrical synapses. Nature 402, 72–75 (1999).

    59. Monyer, H. & Markram, H. Interneuron diversity series: molecular and genetic tools to study GABAergic interneuron diversity and function. Trends Neurosci. 27, 90–97 (2004).

    60. Olah, S. et al. Regulation of cortical microcircuits by unitary GABA-mediated volume transmission. Nature 461, 1278–1281 (2009).

    61. Gray, E. G. Axo-somatic and axo-dendritic synapses of the cerebral cortex: an electron microscopic study. J. Anat. 83, 420–433 (1959).

    62. Harris, K. M. & Kater, S. B. Dendritic spines: cellular specializations imparting both stability and flexibility to synaptic function. Annu. Rev. Neurosci. 17, 341–371 (1994).

    63. Chklovskii, D. B., Schikorski, T. & Stevens, C. F. Wiring optimization in cortical circuits. Neuron 34, 341–347 (2002).

64. Yuste, R. Dendritic Spines (MIT Press, 2010).

65. Anderson, P. W. More is different. Science 177, 393–396 (1972).

66. Lorente de Nó, R. Studies on the structure of the cerebral cortex. I. The area entorhinalis. J. Psychol. Neurol. 45, 381–438 (1933).

    67. Lorente de Nó, R. in Physiology of the Nervous System (ed. Fulton, J. F.) 228–330 (Oxford Univ. Press, 1949).

    68. Sejnowski, T. J. The book of Hebb. Neuron 24, 773–776 (1999).

    69. Selverston, A. General principles of rhythmic motor pattern generation derived from invertebrate CPGs. Prog. Brain Res. 123, 247–257 (1999).

    70. Brown, T. G. On the nature of the fundamental activity of the nervous centres; together with an analysis of the conditioning of rhythmic activity in progression, and a theory of the evolution of function in the nervous system. J. Physiol. 48, 18–46 (1914).

    71. Brown, T. G. On the activities of the central nervous system of the unborn foetus of the cat, with a discussion of the question whether progression (walking, etc.) is a ‘learnt’ complex. J. Physiol. 49, 208–215 (1915).

    72. Sherrington, C. S. Inhibition as a coordinative factor. Nobelprize.org [online], http://nobelprize.org/medicine/laureates/1932/sherrington-lecture.html (1932).

    73. Llinás, R. R. in Induced Rhythms in the Brain (eds Basar, E. & Bullock, T. H.) 269–283 (Birkhauser, 1992).

    74. Buzsaki, G., Horvath, Z., Urioste, R., Hetke, J. & Wise, K. High-frequency network oscillations in the hippocampus. Science 256, 1025–1027 (1992).

    75. Llinás, R. & Yarom, Y. Properties and distribution of ionic conductances generating electroresponsiveness of mammalian inferior olivary neurones in vitro. J. Physiol. 315, 569–584 (1981).


76. Llinás, R. R. The intrinsic electrophysiological properties of mammalian neurons: insights into central nervous system function. Science 242, 1654–1664 (1988).

    77. Steriade, M. Cholinergic blockage of network- and intrinsically generated slow oscillations promotes waking and REM sleep activity patterns in thalamic and cortical neurons. Prog. Brain Res. 98, 345–355 (1993).

    78. Steriade, M. & McCarley, R. W. Brainstem Control of Wakefulness and Sleep (Plenum, 1990).

    79. Gray, C. M., Konig, P., Engel, A. K. & Singer, W. Oscillatory responses in cat visual cortex exhibit inter-columnar synchronization which reflects global stimulus properties. Nature 338, 334–337 (1989).

    80. Engel, A. K., Fries, P. & Singer, W. Dynamic predictions: oscillations and synchrony in top-down processing. Nat. Rev. Neurosci. 2, 704–716 (2001).

    81. Crick, F. & Koch, C. Some reflections on visual awareness. Cold Spring Harb. Symp. Quant. Biol. 55, 953–962 (1990).

    82. Jeanmonod, D. et al. Neuropsychiatric thalamocortical dysrhythmia: surgical implications. Neurosurg. Clin. N. Am. 14, 251–265 (2003).

    83. Buzsaki, G. Rhythms of the Brain (Oxford Univ. Press, 2011).

    84. Llinás, R. R., Ribary, U., Jeanmonod, D., Kronberg, E. & Mitra, P. P. Thalamocortical dysrhythmia: a neurological and neuropsychiatric syndrome characterized by magnetoencephalography. Proc. Natl Acad. Sci. USA 96, 15222–15227 (1999).

    85. Uhlhaas, P. J. & Singer, W. Abnormal neural oscillations and synchrony in schizophrenia. Nat. Rev. Neurosci. 11, 100–113 (2010).

86. Llinás, R. & Ribary, U. Coherent 40-Hz oscillation characterizes dream state in humans. Proc. Natl Acad. Sci. USA 90, 2078–2081 (1993).

    87. Fries, P., Roelfsema, P. R., Engel, A. K., König, P. & Singer, W. Synchronization of oscillatory responses in visual cortex correlates with perception in interocular rivalry. Proc. Natl Acad. Sci. USA 94, 12699–12704 (1997).

    88. Llinás, R. I of the Vortex: From Neurons to Self (MIT Press, 2001).

89. Seung, H. S. & Yuste, R. in Principles of Neural Science 5th edn (eds Kandel, E. R., Schwartz, J. H., Jessel, T. M., Siegelbaum, S. A. & Hudspeth, A. J.) 1581–1600 (McGraw-Hill, 2013).

    90. Marr, D. A theory of cerebellar cortex. J. Physiol. 202, 437–470 (1969).

    91. Turing, A. M. On computable numbers, with an application to the Entscheidungsproblem. Proc. Lond. Math. Soc. 42, 230–265 (1936).

    92. Rosenblatt, F. The perceptron: a probabilistic model for information storage and organization in the brain. Psychol. Rev. 65, 386–408 (1958).

    93. Papert, S. & Minsky, M. L. Perceptrons: An Introduction to Computational Geometry (MIT Press, 1988).

    94. Hinton, G. E., Osindero, S. & Teh, Y. W. A fast learning algorithm for deep belief nets. Neural Comput. 18, 1527–1554 (2006).

    95. Farabet, C., Couprie, C., Najman, L. & Lecun, Y. Learning hierarchical features for scene labeling. IEEE Trans. Pattern Anal. Mach. Intell. 35, 1915–1929 (2013).

    96. Hopfield, J. J. Neural networks and physical systems with emergent collective computational abilities. Proc. Natl Acad. Sci. USA 79, 2554–2558 (1982).

    97. Ising, E. Contribution to the theory of ferromagnetism. Z. Phys. 31, 253–258 (1925).

    98. Amit, D. J., Gutfreund, H. & Sompolinsky, H. Spin-glass models of neural networks. Phys. Rev. A 32, 1007–1018 (1985).

    99. Hopfield, J. J. & Tank, D. W. “Neural” computation of decisions in optimization problems. Biol. Cybern. 52, 141–152 (1985).

    100. Hopfield, J. J. & Tank, D. W. Computing with neural circuits: a model. Science 233, 625–633 (1986).

101. Pflüger, H. J. & Menzel, R. Neuroethology, its roots and future. J. Comp. Physiol. A 185, 389–392 (1999).

    102. Zupanc, G. K. H. Behavioral Neurobiology: An Integrative Approach (Oxford Univ. Press, 2010).

    103. Ratliff, F., Knight, B. W., Toyoda, J. & Hartline, H. K. Enhancement of flicker by lateral inhibition. Science 158, 392–393 (1967).

    104. von der Malsburg, C. Self-organization of orientation sensitive cells in the striate cortex. Kybernetik 14, 85–100 (1973).

    105. Kohonen, T. & Oja, E. Fast adaptive formation of orthogonalizing filters and associative memory in recurrent networks of neuron-like elements. Biol. Cybern. 21, 85–95 (1976).

106. Kohonen, T. Self-Organizing Maps (Springer, 1995).

107. Seung, H. S., Lee, D. D., Reis, B. Y. & Tank, D. W. Stability of the memory of eye position in a recurrent network of conductance-based model neurons. Neuron 26, 259–271 (2000).

    108. Ben-Yishai, R., Lev Bar-Or, R. & Sompolinsky, H. Orientation tuning by recurrent neural networks in visual cortex. Proc. Natl Acad. Sci. USA 92, 3844–3848 (1995).

    109. Maass, W., Natschlager, T. & Markram, H. Real-time computing without stable states: a new framework for neural computation based on perturbations. Neural Comput. 14, 2531–2560 (2002).

    110. Sussillo, D. & Abbott, L. F. Generating coherent patterns of activity from chaotic neural networks. Neuron 63, 544–557 (2009).

    111. Wang, X. J. Probabilistic decision making by slow reverberation in cortical circuits. Neuron 36, 955–968 (2002).

    112. Buonomano, D. V. & Maass, W. State-dependent computations: spatiotemporal processing in cortical networks. Nat. Rev. Neurosci. 10, 113–125 (2009).

    113. Abbott, L. F., Fusi, S. & Miller, E. K. in Principles of Neural Science 5th edn (eds Kandel, E. R., Schwartz, J. H., Jessel, T. M., Siegelbaum, S. A. & Hudspeth, A. J.) 1601–1617 (McGraw-Hill, 2013).

    114. Mante, V., Sussillo, D., Shenoy, K. V. & Newsome, W. T. Context-dependent computation by recurrent dynamics in prefrontal cortex. Nature 503, 78–84 (2013).

    115. Klampfl, S. & Maass, W. Emergence of dynamic memory traces in cortical microcircuit models through STDP. J. Neurosci. 33, 11515–11529 (2013).

    116. Constantinidis, C. & Wang, X. J. A neural circuit basis for spatial working memory. Neuroscientist 10, 553–565 (2004).

    117. Wang, X. J. Synaptic basis of cortical persistent activity: the importance of NMDA receptors to working memory. J. Neurosci. 19, 9587–9603 (1999).

118. Marr, D. Vision (W. H. Freeman, 1982).

119. Jaeger, H. & Haas, H. Harnessing nonlinearity: predicting chaotic systems and saving energy in wireless communication. Science 304, 78–80 (2004).

    120. Goel, A. & Buonomano, D. V. Timing as an intrinsic property of neural networks: evidence from in vivo and in vitro experiments. Phil. Trans. R. Soc. B 369, 20120460 (2014).

    121. Ganguli, S., Huh, D. & Sompolinsky, H. Memory traces in dynamical systems. Proc. Natl Acad. Sci. USA 105, 18970–18975 (2008).

122. Pellionisz, A. & Llinás, R. Brain modeling by tensor network theory and computer simulation. The cerebellum: distributed processor for predictive coordination. Neuroscience 4, 323–348 (1979).

    123. Rolls, E. T. & Treves, A. Neural Networks and Brain Function (Oxford Univ. Press, 1998).

    124. Wills, T. J., Lever, C., Cacucci, F., Burgess, N. & O’Keefe, J. Attractor dynamics in the hippocampal representation of the local environment. Science 308, 873–876 (2005).

    125. Colgin, L. L. et al. Attractor-map versus autoassociation based attractor dynamics in the hippocampal network. J. Neurophysiol. 104, 35–50 (2010).

    126. Solstad, T., Moser, E. I. & Einevoll, G. T. From grid cells to place cells: a mathematical model. Hippocampus 16, 1026–1031 (2006).

    127. Paulsen, O. & Moser, E. I. A model of hippocampal memory encoding and retrieval: GABAergic control of synaptic plasticity. Trends Neurosci. 21, 273–278 (1998).

    128. Fyhn, M., Hafting, T., Treves, A., Moser, M. B. & Moser, E. I. Hippocampal remapping and grid realignment in entorhinal cortex. Nature 446, 190–194 (2007).

    129. Nakashiba, T. et al. Young dentate granule cells mediate pattern separation, whereas old granule cells facilitate pattern completion. Cell 149, 188–201 (2012).

    130. Nadasdy, Z., Hirase, H., Czurko, A., Csicsvari, J. & Buzsaki, G. Replay and time compression of recurring spike sequences in the hippocampus. J. Neurosci. 19, 9497–9507 (1999).

    131. Liu, X. et al. Optogenetic stimulation of a hippocampal engram activates fear memory recall. Nature 484, 381–385 (2012).

    132. Ramirez, S. et al. Creating a false memory in the hippocampus. Science 341, 387–391 (2013).

    133. Marr, D. A theory for cerebral neocortex. Proc. R. Soc. Lond. B 176, 161–234 (1970).

    134. Grinvald, A., Arieli, A., Tsodyks, M. & Kenet, T. Neuronal assemblies: single cortical neurons are obedient members of a huge orchestra. Biopolymers 68, 422–436 (2003).

    135. Abeles, M., Bergman, H., Marga