
Neural Dynamics

and Neural Coding


Two Complementary Approaches
to an Understanding
of the Nervous System

Habilitationsschrift
zur Erlangung der Lehrbefugnis
für Theoretische Physik
am Fachbereich 1: Physik/Elektrotechnik
der Universität Bremen

vorgelegt von
Christian Eurich

Bremen, November 2003


Preface

This work entitled

Neural Dynamics and Neural Coding: Two Complementary


Approaches to an Understanding of the Nervous System

describes my scientific activities in Bremen, Chicago, Osnabrück, Göttingen and
Tokyo. It is submitted as a Habilitationsschrift to obtain the authorization to
teach Theoretical Physics at Fachbereich 1 (Physik/Elektrotechnik) of the
University of Bremen.
The thesis deals with the two topics that summarize my work, neural dynamics and
neural coding. They are located in Theoretical Neuroscience or, to be more
precise, in those fields of Theoretical Neuroscience that can be subsumed under
the label of Theoretical Neurophysics because they employ methods and consider
topics that are typical of Theoretical Physics.1
Part II is considered to be the main part of this thesis. It is a collection of
reprints of selected papers which I have published in recent years.2 The research
area of Theoretical Neurophysics is relatively young, which may explain the fact
that many scientists, including myself, cover a fairly broad range of topics,
ranging from the encoding of sensory signals to the dynamics of motor systems.
This broad scope is reflected in the rather general title of the thesis. Furthermore,
it requires a fairly broad introductory text, which is given in Part I of the thesis.
It summarizes aspects of the above-mentioned two fields of neural dynamics and
neural coding. The literature, however, is vast (as usual), and the summary is
therefore by no means a comprehensive review. I rather focus on those topics to
which I contributed as author or co-author of research articles. As far as my own
work is concerned, I have worked out the internal connections between the original
papers, omitting technical details.

Bremen, November 2003 Christian Eurich

1
There is, of course, another reason for this name: Theoretical Neurophysics is done by
physicists. By definition, everything a physicist does is called Physics.
2
Citations in the text referring to one of these reprints are marked by an additional L (for
List of reprints) after the year of publication, for example Eurich et al. (1995:L).

Contents

I  Introduction to Dynamics and Coding in the Nervous System

1  Introduction

2  Neural Coding
   2.1  Introduction
   2.2  Tasks, Constraints, Optimization Principles, Objectives and Methods
        2.2.1  Tasks, Constraints and Optimization Principles
        2.2.2  Objectives in Research on Neural Coding
               2.2.2.1  The Encoding of Single Objects
               2.2.2.2  The Encoding of Multiple Objects and Complex Scenes
               2.2.2.3  Dynamical Stimuli and Dynamics of Signal Processing
               2.2.2.4  Signal Processing and Mental States
        2.2.3  Methods
               2.2.3.1  Introduction
               2.2.3.2  Example from Data Analysis: Tuning Curve Estimation from Noisy Data
               2.2.3.3  Vector-Based Reconstruction Methods
               2.2.3.4  Statistical Estimation Theory
               2.2.3.5  Information Theory
   2.3  Neural Codes
        2.3.1  Single-Neuron Codes
        2.3.2  Population Codes Based on Spike Activity
               2.3.2.1  Binary Coding
               2.3.2.2  Rate Coding
               2.3.2.3  Temporal Coding: Structure of Neural Spike Trains
               2.3.2.4  Rank-Order Coding
        2.3.3  Further Encoding Schemes
        2.3.4  Which Code is Realized in the Nervous System?
   2.4  The Encoding Accuracy of a Single Object
        2.4.1  The Notion of a Single Object
        2.4.2  Binary Coding
        2.4.3  Rate Coding
               2.4.3.1  Tuning Widths
               2.4.3.2  Noise Model
               2.4.3.3  Noise Correlations
        2.4.4  The Case of Multiple Objects

3  Neural Dynamics
   3.1  Introduction
   3.2  Dynamical System Models in Theoretical Neuroscience
        3.2.1  Single Neurons
        3.2.2  Networks
        3.2.3  Synaptic Dynamics and Hebbian Learning
   3.3  Mechanisms in Systems with Delays and Noise
        3.3.1  Systems with Interaction Delays
               3.3.1.1  Delays in Sensorimotor Control Loops
               3.3.1.2  Delay Distributions
               3.3.1.3  Delay Adaptation
        3.3.2  Systems with Noise
               3.3.2.1  Avalanches of Spike Activity
               3.3.2.2  Wave Propagation in Inhomogeneous Media

4  Towards a Combined Approach

Bibliography

II  Reprints of Original Papers

5  List of Original Papers
Part I

Introduction to Dynamics and Coding in the Nervous System
Chapter 1

Introduction

In the last decades, the neurosciences have emerged as an interdisciplinary field


where researchers from biology, medicine, psychology, physics, mathematics, com-
puter science and electrical engineering gather around a common goal: to yield
an understanding of the nervous system. The scientific community is large (the
annual Neuroscience conference attracted more than 25,000 attendees in 2002),
and a large number of sophisticated experimental and theoretical methods are
at hand to investigate brain processes on many length and time scales. Em-
pirical methods include electrophysiological multi-cell recordings of neural ac-
tivity, imaging techniques like fMRI or PET, EEG and MEG recordings, and
psychophysics. Theoretical neuroscience uses large-scale neural network simu-
lations, dynamical systems theory including stochastic models, and information
theory. There is a strong overlap with the machine learning community for de-
signing powerful data analysis tools. Applications range from the construction of
real-world autonomous robots to the development of brain implants like artificial
retinae or cochleae for disabled people.
What is the specific contribution of theoretical physics to the neurosciences?
It seems that the methodology of physics is particularly suited to address typ-
ical problems arising in the neurosciences. Sophisticated quantitative tools are
required to yield an understanding of the complex phenomena observed in elec-
trophysiological, neuroanatomical, psychophysical and behavioral experiments.
These include the analysis of quantitative data, the development and analytical
treatment of models, and the use of numerical computer simulations where an-
alytical considerations and approximation methods fail. Only in rare cases can
complete models developed in the context of physical systems be transferred to
problems in the neurosciences.1

A combination of the following two approaches seems to be suitable to yield an


understanding of brain processes.
1
The most prominent example is the Hopfield model (Hopfield, 1982; see Amit, 1989 for a
thorough coverage of the topic); it is closely related to the Ising model (Ising, 1925).


The brain as a dynamical system. From the physics point of view, a


straightforward approach to modelling in the neurosciences is to regard the brain
as a dynamical system. Dynamical processes occur on many length scales, starting
from subcellular behavior (synaptic dynamics, axonal delays), the level of single
nerve cells (production of action potentials) and neural networks (synchroniza-
tion, neural maps, adaptation and learning), up to the system level (sensorimotor
integration, psychophysical and cognitive phenomena). In the temporal domain,
dynamical processes range from fast (sub-millisecond) channel kinetics to long-
term plasticity occurring on time scales of minutes, days, or even years.
In the context of this approach to an understanding of the nervous system, the
term dynamical system is understood in the same sense as in the mathematical
theory of dynamical systems (chaos theory). Such systems are described by
various types of mostly nonlinear differential or difference equations. In
my work, I focus on two properties of neural systems: First, neural responses
usually appear to be stochastic, so that stochastic differential equations are
used. Second, I am also interested in the consequences of interaction delays,
for example in Hebbian learning or in neural feedback systems; in such cases,
delay-differential equations are employed.
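As a minimal illustration of how these two ingredients enter a model (a generic sketch with invented parameters, not one of the models treated in Part II), the following Python code integrates a scalar delayed feedback loop with additive noise, dx/dt = -x(t) + f(x(t - tau)) + xi(t), using the Euler-Maruyama scheme; the delayed state x(t - tau) is read from the already computed part of the trajectory, and a constant initial history stands in for the state before t = 0.

    import numpy as np

    def simulate_delayed_feedback(tau=1.0, dt=0.001, t_max=50.0,
                                  gain=4.0, noise_std=0.05, seed=0):
        """Euler-Maruyama integration of dx/dt = -x(t) + f(x(t - tau)) + noise.

        f is a sigmoidal feedback function; the constant initial history
        x(t) = 0.1 for t <= 0 plays the role of a resting state. All
        parameter values are arbitrary and purely illustrative.
        """
        rng = np.random.default_rng(seed)
        n_steps = int(t_max / dt)
        n_delay = int(round(tau / dt))
        f = lambda u: 1.0 / (1.0 + np.exp(-gain * (u - 0.5)))

        x = np.empty(n_steps)
        x[0] = 0.1
        for i in range(n_steps - 1):
            # delayed state: fall back to the initial history while t < tau
            x_del = x[i - n_delay] if i >= n_delay else 0.1
            drift = -x[i] + f(x_del)
            x[i + 1] = (x[i] + drift * dt
                        + noise_std * np.sqrt(dt) * rng.standard_normal())
        return np.arange(n_steps) * dt, x

    if __name__ == "__main__":
        t, x = simulate_delayed_feedback()
        print("mean activity over the second half:", x[len(x) // 2:].mean())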
A hallmark of the models derived from dynamical systems theory is their rel-
ative simplicity: Physiological details are omitted whereever possible to reduce
the model to the ingredients necessary to account for a given class of phenomena.
For example, when modelling neural populations for the sake of studying adap-
tation phenomena, details on the synaptic transmission or the spike generation
process may be omitted. It may even be feasible to drop the individuality of the
neurons and to define the dynamics of a neural population with a neural density
instead (Eurich et al., 1999:L, 2000). This strategy is usually rewarded with
partial analytical tractability, allowing for the identification of mechanisms of
signal processing in the nervous system instead of considering single phenomena
only. Recently identified mechanisms include
- noise-induced transitions between co-existing periodic orbits in human postural
  sway (Eurich and Milton, 1996:L);
- stable steady states of Hebbian dynamics yielding an equalization of conduction
  delay times (Eurich et al., 1999:L, 2000);
- the propagation of activity waves in cortical tissue induced by static disorder
  of neural connections (Eurich and Schulzke, in press:L).

The brain as a signal-processing system. Apart from considering the behavior
of neural systems as it is done within the first approach, there is another
route to yielding an understanding of brain processes: the consensus among neu-
roscientists is that in many cases, the observed neural behavior serves some func-
tion. In other words, the dynamical processes characterized in the first approach

can be better understood when asking for what purpose they have evolved. Puta-
tive functions include accurately perceiving the environment, performing mean-
ingful motor actions, and developing abstract mental concepts of future events
and actions.
A paradigm which has proven to be fruitful in the last decades is to regard the
brain as a signal-processing system. As far as perception is concerned, one can look
for neurons, neural populations and brain areas which become active whenever
certain sensory stimuli are presented during an electrophysiological experiment
(Figure 1.1). Within the functional framework, such correlations between stimuli
and neural behavior are interpreted in terms of representation and coding: neural
firing is interpreted as a response to the stimulus in the sense that the neurons
encode the stimulus. Typical questions to be addressed are:
- Which part of the neural response carries information?
- What is the nature of the neural code? Is it a single-neuron code or a
  population code?
- Which properties of neurons and neural populations (firing rates, tuning
  widths, correlations in the activity, etc.) yield a high sensory resolution or
  motor accuracy?
- What is a good strategy to discriminate simultaneously presented stimuli?
- Why do neural responses appear noisy?
- Do mental states like arousal or attention modify neural responses and
  thereby the encoding properties?
Theoretical work in this field includes the development of data analysis tech-
niques capable of integrating the responses of large neural populations, the mod-
eling of neural populations in order to study their representational properties,
and the evaluation of mathematical (esp. statistical) methods to account for the
corresponding signal processing. Mathematically, the stochastic nature of the
neural responses suggests the use of information theory (e. g., Rieke et al., 1997;
Brunel and Nadal, 1998; Borst and Theunissen, 1999) and statistical estima-
tion theory including maximum-likelihood methods (Pouget et al., 1998), Fisher
information (e. g., Paradiso, 1988; Seung and Sompolinsky, 1993; Brunel and
Nadal, 1998; Zhang and Sejnowski, 1999; Eurich and Wilke, 2000:L; Wilke and
Eurich, 2001:Lb; Bethge et al., 2002), and Bayesian estimation (e. g., Abbott,
1994; Sanger, 1996; Oram et al., 1998; Zemel et al., 1998; Zhang et al., 1998a).
Other mathematical approaches are kernel methods (Stanley et al., 1999; Wess-
berg et al., 2000) and vector-based methods (Georgopoulos et al., 1986; Scott
et al., 2001) to reconstruct stimuli from neural responses, as well as neural net-
work simulations (Eurich et al., 1995:L; Thorpe and Gautrais, 1997; Van Rullen
et al., 1998).
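As a minimal, generic illustration of the information-theoretic viewpoint (not tied to any of the studies cited above; all numbers are made up), the mutual information between a discrete stimulus S and a discrete response R can be computed directly from a joint probability table:

    import numpy as np

    def mutual_information(p_joint):
        """Mutual information I(S;R) in bits from a joint probability table.

        p_joint[s, r] = P(S = s, R = r); rows are stimuli, columns responses.
        """
        p_joint = np.asarray(p_joint, dtype=float)
        p_s = p_joint.sum(axis=1, keepdims=True)   # marginal over responses
        p_r = p_joint.sum(axis=0, keepdims=True)   # marginal over stimuli
        mask = p_joint > 0                         # avoid log(0) terms
        return float(np.sum(p_joint[mask]
                            * np.log2(p_joint[mask] / (p_s @ p_r)[mask])))

    # Toy example: two stimuli, three response levels (fabricated numbers).
    p = [[0.30, 0.15, 0.05],
         [0.05, 0.15, 0.30]]
    print(f"I(S;R) = {mutual_information(p):.3f} bits")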

[Figure 1.1: two panels, "Real World" (sensory input, motor behaviour) and "Neural
World" (spike rasters: neuron # vs. time in seconds), connected by the question
"Correlations?".]
Figure 1.1: Schematic illustration of the functional approach to neural systems


as it is often taken in experimental or theoretical work. On the one hand, there
exist sensory stimuli (left, top) or motor actions (left, bottom) in the real world.
On the other hand, neural populations produce action potentials which can be
interpreted as responses to sensory stimuli or motor commands (right). In a
typical graph, each row shows the behavior of one cell, whereby action potentials
are simply depicted as vertical bars. A major goal is to quantify correlations
between the two worlds.

In search of a proper brain metaphor. Epistemological issues. In scientific
practice, the epistemological and ontological standpoint of a researcher
is usually of little or no importance. For example, problems in electrodynamics
or quantum physics can be discussed no matter if the discussion partners have
identical or different philosophical points of view, whether they are realists or
constructivists, etc. This is different in the neurosciences, because the object
under study, the brain, is considered to be the organ with which we perceive the
world. This means that an observer's epistemological point of view is closely
related to his or her interpretation of neural phenomena.
In fact, the two approaches of dynamics and coding described above seem
to reflect different, even contradicting ontological standpoints. When studying
neural signal processing and coding, a scientific goal is to find correlations between
sensory stimuli or motor actions and brain activities. A certain aspect of an object
in the real world (for example, its visual appearance) is said to be represented
by a neural population if the correlation is sufficiently high. This interpretation
reflects a realistic epistemology where the brain is considered to be a mapping
device translating sensory stimuli into neural activities in so-called sensory areas,
or activities in so-called motor areas into body movements. Neuroscience research
quantifies this mapping and is concerned with the question how a representation
emerges from the interplay of neural activities in various areas of the brain. In
other words, the notion of realism refers to the cognitive representation of a real

world.
If the brain is regarded as a dynamical system, a different aspect of brain
processes is emphasized. This becomes clear when studying the well-known com-
pilation of visual (mostly cortical) areas of the macaque monkey by Felleman and
Van Essen (1991); see Figure 1.2.

Figure 1.2: Felleman and Van Essen (1991) compiled this scheme of the brain areas
in the macaque monkey that were known at that time to process visual information.
Also included are all identified anatomical connections between these areas.

In their scheme, Felleman and Van Essen also included the anatomical connections
between these areas as previously described
in the literature. Although the authors claim to be able to identify a certain
hierarchy of areas, the most important feature of this system, apart from its
apparent complexity, is clearly the existence of massive feedback.2 Feedback
connections blur the distinction between early visual processing and late visual
processing and make the dynamics practically inextricable. Furthermore,
most neurons receive no direct visual input but only interact with other neurons
through electrical or chemical signals. The picture arising from these data is
that neurons and neural populations realize a gigantic, complex dynamical system
which mainly processes its own activities and which is only marginally driven by
external, sensory input. It is not at all clear to what extent this system should
realize a map of the external world. A philosophical theory supporting this view is
constructivism (e. g., Schmidt, 1987, 1992; Roth, 1994; von Glasersfeld, 1997),
according to which the brain is rather a device generating our internal world,
i. e., the world we perceive. This world is not necessarily identical to the real
world, and a direct comparison is impossible because we have no access to the
latter.

2
Feedback is present everywhere; there are even neural connections projecting from the
midbrain (Cervetto et al., 1976) and the diencephalon (Itaya, 1980) back into the retina.
In the light of these arguments, one cannot always expect to find good agree-
ment between sensory stimuli and brain activity in the scheme shown in Fig-
ure 1.1. A lack of high positive correlation may not hint at an empirical or
theoretical mistake in the investigation but may merely reflect the fact that a
neural system does indeed not simply encode a stimulus. However, the paradigm
of neural coding has proven to be very successful, and a slight change in the
scheme of Figure 1.1 makes it more reasonable. If, according to the theory of
constructivism, the brain is responsible for our internal world, neural activity
should not correlate with external stimuli themselves but with the perception of
stimuli. This change in interpretation is visualized in Figure 1.3.
In many cases, external stimuli and their perception seem to correspond very
well, and the naive scheme of Figure 1.1 can be applied. In some cases, however,
discrepancies show up. For example, in visual illusions, neural activity is known
to correlate with (illusory) percepts rather than with the corresponding physical
stimuli (see for example Jancke et al., 2002 for the line motion effect).
The modified interpretation of neural behavior also opens up a new field of
investigation: brain activity can be related to mental states. An example of
a quickly emerging field is the investigation of the relation between attention,
perception and neural behavior (e. g., Moran and Desimone, 1985; Desimone
and Duncan, 1995; Groh et al., 1996; Yeshurun and Carrasco, 1998; Carrasco
and McElree, 2001; Eurich, 2003:L). This field of research will further deepen our
understanding of brain processes and our perception of the world.
In Chapters 2 and 3, the two approaches of neural coding and neural dynamics,
respectively, will be investigated with a strong focus on my contributions to these
fields. Chapter 4 offers hints towards a unification of the two approaches. Part
II of this work is a collection of reprints of most of my papers in this research area.

[Figure 1.3: two panels, "Internal World" (perception; motor planning and output;
mental states) and "Neural World" (spike rasters: neuron # vs. time in seconds),
connected by the question "Correlations?".]

Figure 1.3: A modified scheme correlates perception, motor planning and execution,
and mental states, rather than objects and features of the real world, with neural
activity. The image on the lower left symbolizes attention: humans and animals are
able to attend to a certain object (gray dot) while neglecting other objects (black
dot).
Chapter 2

Neural Coding

2.1 Introduction
This chapter deals with the functional approach to phenomena of the nervous
system. The starting point is the framework of Figure 1.3 or, in a simpler
interpretation, Figure 1.1. Once a correlation between a subset of the internal
world (perception of a certain class of stimuli, a class of motor actions, or some
mental phenomenon) and the neural world (usually some aspect of the spiking
behavior of a single neuron or a population of neurons) has been identified, one
may speak of a neural representation of the considered phenomena of the internal
world.1 For example, the direction of three-dimensional arm movements has
been suggested to be represented in the firing rates of a population of neurons in
monkey motor cortex (Georgopoulos et al., 1986; Wessberg et al., 2000). The set
of neural activities correlating with the subset of the internal world defines an
alphabet, and the neural code is a mapping assigning each element of the subset
of the internal world a neural activity, i. e., a letter of the alphabet. In the case
of the monkey arm movements, the alphabet consists of the set of possible neural
firing rates of the population of neurons in motor cortex, and the code maps
each arm movement to a population activity; the mapping is computed via the
so-called population vector (Georgopoulos et al., 1986; Wessberg et al., 2000).
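To make the population vector explicit, the following sketch decodes a movement direction from simulated cosine-tuned cells: each cell's preferred direction is weighted by its baseline-subtracted rate, the weighted unit vectors are summed, and the angle of the resulting vector is the estimate. Cell numbers, rates and tuning parameters are invented; this is an illustration of the principle, not the original analysis of Georgopoulos et al. (1986).

    import numpy as np

    rng = np.random.default_rng(1)

    # A population of cosine-tuned cells (all parameter values invented).
    n_cells = 100
    preferred = rng.uniform(0.0, 2.0 * np.pi, n_cells)  # preferred directions (rad)
    baseline, modulation = 20.0, 15.0                    # spikes/s

    def rates(direction):
        """Mean firing rates of all cells for a movement in the given direction."""
        return baseline + modulation * np.cos(direction - preferred)

    def population_vector(observed_rates):
        """Decode a direction as the angle of the rate-weighted vector sum."""
        w = observed_rates - baseline                    # baseline-subtracted weights
        x = np.sum(w * np.cos(preferred))
        y = np.sum(w * np.sin(preferred))
        return np.arctan2(y, x) % (2.0 * np.pi)

    true_direction = np.deg2rad(135.0)
    counts = rng.poisson(rates(true_direction))          # Poisson counts in a 1 s window
    estimate = population_vector(counts)
    print(f"true: {np.rad2deg(true_direction):.1f} deg, "
          f"decoded: {np.rad2deg(estimate):.1f} deg")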
It should be noted that the interpretation of the issue of neural coding is a matter
of controversy in the neuroscience community. As outlined in Chapter 1, it assumes
that the brain at least partially represents an outside world. Furthermore, a code
makes sense only if encoded messages are decoded somewhere; it is not clear where
and in which form this happens in the nervous system. Last but not least, the
notion of a neural code is not used consistently among scientists. See Section 2.3.4
for further discussion. Overviews of the field of neural coding are
1
This working definition of representation is fairly naive from a philosophical point of view.
However, this is the way it is often used in neuroscience. For a more detailed discussion of the
notion of representation see Peschl (1994); Ziemke and Breidbach (1996).


given in Rieke et al. (1997); Abbott and Sejnowski (1999); deCharms and Zador
(2000).

In Sections 2.2 and 2.3, a brief overview of the field of neural coding is given.
Section 2.2 summarizes the most common experimental and theoretical methods,
scientific objectives in the context of coding, as well as optimization principles
and biological constraints. Section 2.3 provides a list of neural codes which have
been suggested in the last decades and discusses the question which code might
be realized in the nervous system. The remainder of this chapter, Section 2.4,
integrates my own work in the given framework and outlines its most important
results.

2.2 Tasks, Constraints, Optimization Principles,


Objectives and Methods
2.2.1 Tasks, Constraints and Optimization Principles
The functional approach to phenomena in the nervous system requires a speci-
fication of tasks that the brain has to perform. At the same time, neural signal
processing is constrained both by the environment in which systems have evolved
and by biological (i. e., anatomical and physiological) properties of brain tissue.
Another issue to be discussed is the question of whether signal processing is optimal
in some sense, which is of course related to the various constraints that are effective.

Tasks of the Nervous System and Demands of the Environment. Animals,
including humans, have evolved in a complex and dangerous environment that has
shaped signal processing in the nervous system. The different tasks that animals
and humans have to solve in order to survive will therefore be intimately related
to properties and constraints imposed by the environment. Tasks and constraints
include the following:

- In order to make sense of the world, complex sensory information has to be
  analyzed and categorized for further processing. In general, various properties
  of natural scenes such as depth cues, texture, motion etc. are evaluated
  and integrated to solve segmentation and categorization tasks; see the
  literature on Gestalt psychology (e. g., Henle, 1961; Köhler, 1992). It is very
  likely that the brain employs the natural statistics of the environment when
  performing object segmentation and object recognition (e. g., Ruderman,
  1994; Olshausen and Field, 1996b; Reinagel, 2001, see also Section 2.2.2.2).
  Apart from this, the nervous system seems to be specialized in specific,
  frequently occurring, relevant stimuli and tasks, for example the precise
  recognition of faces and the interpretation of facial expressions, which are
  important in a social context (Young and Ellis, 1989).
- The environment, and in particular objects, have to be evaluated and labeled,
  for example in terms of "new" vs. "familiar", "relevant" vs. "not relevant",
  "dangerous" vs. "not dangerous". This makes a memory necessary, as well
  as rapid evaluations by the limbic system that appear as emotions from a
  subjective point of view. See, for example, Roth (2001) for an introduction
  to such issues.
- Successful behavior frequently requires an exact representation of stimulus
  features (such as color or position in space) and a precise control of motor
  actions. For example, these abilities are prerequisites for a successful
  handling of objects.
- An important demand is given by the necessity of fast signal processing, for
  example for predator recognition and the corresponding avoidance behavior.
  An impressive example was given by Thorpe et al. (1996) and Delorme et al.
  (2000), who studied rapid visual categorization in humans and monkeys.
  This requirement constrains neural codes (see Section 2.3).
- A further requirement is an exact timing of behavioral responses and muscle
  coordination. This issue is not trivial given the fact that signal processing in
  the brain is mediated by the concurrent activity of millions of neurons which
  have to be precisely coordinated to produce an exactly timed decision.
- Finally, systems may be equipped with internal models of the world, for
  example forward internal models and inverse internal models in the
  computation of movements (Wolpert and Ghahramani, 2000). More generally,
  humans and probably also many animals are able to predict phenomena in
  the world, which allows for planning and the rejection of non-viable solutions
  to given problems.
Such tasks of the nervous system are reflected in current issues of neural coding;
see Section 2.2.2.

Constraints of the Biological System. Signal processing must comply with


constraints of the nervous system. Such constraints may be of a physical nature
(such as conformation changes in molecules resulting in a particular channel ki-
netics) or of a biological nature (such as anatomical and physiological properties
of neurons or brain areas). Such constraints are believed to be the result of evolu-
tionary processes and may therefore be adapted to relevant tasks and to demands
of the environment. From a theoretical point of view, the consideration of such
constraints restricts models of neural processing and parameter spaces for such
models which is usually advantageous. Biological constraints include

- the maximal neural firing rate, which limits the bandwidth of signal
  transduction;
- the existence of a refractory period (which is of course related to the
  maximal firing rate). For example, the spread of large-scale activity in neural
  layers such as target waves or spiral waves (Jung and Mayer-Kress, 1995a;
  Fohlmeister et al., 1995; Eurich and Schulzke, in press:L) requires such a
  dead time;
- delays in signal integration and signal conduction, which shape the
  spatio-temporal flow of information;
- the energy consumed by processes in the nervous system. Neural processing
  is metabolically expensive; in particular, the maintenance of the neurons'
  resting potential by ion pumps requires much energy. The metabolic
  cost of neural information has been computed in Laughlin et al. (1998);
  energy-efficient computing has been considered in Levy and Baxter (1996);
  Sarpeshkar (1998); Balasubramanian et al. (2001);
- the topology of brain areas, which may, for example, determine the topology
  of neural maps (Wolf et al., 1996);
- the amount of neural tissue, i. e., the number of neurons and glial cells,
  the total length of "cables", the thickness of myelin sheaths, the number
  of synapses on a restricted neural surface area, etc. These constraints are
  related to both energy consumption and brain size;
- the size of the brain, which is likely to be limited by the size of the head of
  a baby during birth.

Further constraints may well apply.

Optimization Principles. In the light of the multifaceted demands on the


brain and the various constraints that are effective, it is not an easy task to
decide whether neural processing operates at some optimal compromise. Theoretical
investigations usually consider one or two factors only. An optimization may be
studied with respect to the following properties:

- Representational accuracy: the requirement to encode stimuli or motor
  actions as accurately as possible;
- Discrimination thresholds: the possibility to discriminate stimuli with as
  few errors as possible. Note that high representational accuracy is not
  necessarily accompanied by good discrimination abilities (Dinse, private
  communication);
- Energy efficiency: metabolic costs of neural processing;
- Material costs: number of neurons etc.; this is related to energy efficiency
  because neural tissue is metabolically costly;
- Speed of processing: it is advantageous to process signals fast in order to
  achieve small reaction times;
- Robustness: a neural code should be robust with respect to neuron failure;
  cell death is a common phenomenon in the nervous system. See also the
  discussion on redundancy in neural codes (e. g., Reich et al., 2001; Barlow,
  2001, and references therein).

Further optimization principles may apply to specific systems. In motor control,
for example, several criteria like maximal smoothness of the hand trajectory or
minimal torque-change have been proposed (Wolpert and Ghahramani, 2000, and
references therein).

2.2.2 Objectives in Research on Neural Coding


According to the scheme shown in Figure 1.3, the field of neural coding is in
search of neural correlates of sensory perception, motor actions, and mental
states. There is a vast empirical literature in this branch of neuroscience ranging
from observations in patients with brain lesions to results obtained with imaging
techniques to electrophysiological recordings in a huge number of those species
equipped with a nervous system. Theoretical neuroscience, on the other hand,
has largely focused on signal processing on the level of single neurons or neural
populations. This makes it possible to extract a few central topics for which it
seems possible to integrate experimental observations and quantitative modeling.

2.2.2.1 The Encoding of Single Objects


A large number of experimental and theoretical studies can be subsumed under
the headline of considering the representation of a single object. From a theoretical
point of view, very different sensory and motor situations are summarized in the
following simple framework: an object is described as a point in a D-dimensional
feature space, where D is its number of features. From an experimental point
of view, this includes studies on single features like the orientation of bars (e. g.,
Hubel and Wiesel, 1962) or the velocity of full-field stimuli (e. g., Bialek et al.,
1991), but also refers to entities composed of several features, for example prey
stimuli (Roth, 1987) or the 2-dimensional position of a rat in a maze encoded by
place cells (e. g., Wilson and McNaughton, 1993; Zhang et al., 1998a). Motor
actions like the movement of an arm in a certain direction (Georgopoulos et al.,
1986) also fall in this category.

Apart from the large number of experimental situations that fit the single-
object case, this issue is also important because it may serve as a testbed for
neural encoding strategies under various constraints; for a more detailed discus-
sion of the relevance of single-object coding, see Section 2.4.1.
Questions that are usually addressed are:
- Which part of a neural response carries information about the given object?
  That is, which code can be identified in the neural system?
- How well can the object be localized, given a neural response? That is, what
  is the encoding accuracy obtained with a single cell or a cell population?
Theoretical approaches to computing the resolution obtained by a population of
neurons are described in Section 2.4 for two different neural codes.

2.2.2.2 The Encoding of Multiple Objects and Complex Scenes


Multiple Objects. The encoding of two or more objects or even natural scenes
is likely to make demands on the system that are different from those for the
encoding of a single object. Typical questions in this context are
- What discrimination thresholds are obtained by a population of neurons as
  multiple stimuli are presented?
- To what extent is the visual system adapted to process natural, complex
  visual scenes?
On the one hand, behavioral studies on the ability to discriminate stimuli have
been performed in a large number of species ranging from amphibians (e. g.,
Ingle, 1968) to humans (e. g., Treisman, 1986; Dinse et al., 1994; Blaser et al.,
2000). Such studies are often motivated by an interest in the phenomenon of
attention: typically, the object to be attended has to be presented along with one
or several distractors. On the other hand, there are a number of electrophysiolog-
ical results on neural behavior in the presence of multiple stimuli, mostly in the
context of attention or nonclassical receptive fields (e. g., Moran and Desimone,
1985; Heeger, 1992; Groh et al., 1996; van Wezel et al., 1996; Carandini et al.,
1997; Recanzone et al., 1997; Posner and Gilbert, 1999; Das and Gilbert, 1999).
There are, however, only very few experiments employing both electrophysiology
and psychophysics. Britten et al. (1992) combine the psychophysical discrimi-
nation of motion directions of random-dot stimuli with recordings from monkey
area MT. In this experiment, however, the different stimuli were presented se-
quentially rather than simultaneously; see Section 2.3.1 for further discussion.
More recently, Treue et al. (2000) considered the simultaneous perception of mul-
tiple directions of motion and recorded from single neurons in MT. The authors
suggest that information about the stimuli is contained in the overall shape of
the population activity profile.

On the theoretical side, signal detection theory was used to account for dis-
crimination performance (e. g., Balakrishnan (1999); Verghese (2001); Tougaard
(2002); see Young and Calvert (1974); Green and Swets (1988); Ripley (1996);
Duda et al. (2001) for introductory texts). Discrimination thresholds for neu-
ral populations with overlapping receptive fields were considered in Snippe and
Koenderink (1992a,b) for uncorrelated and correlated neural activity, respec-
tively. Woesler (2001) considered the problem of object segmentation in the
salamander visual system. Zemel et al. (1998) employ Bayesian estimation theory
in a scheme for encoding and decoding distributions of stimuli. Eurich (2003:L)
extends a Fisher information formalism for computing the localization accuracy
of a single stimulus to include the simultaneous presentation of multiple stimuli;
see Section 2.4.4 for details.
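For orientation, a minimal signal-detection sketch (generic textbook material with invented numbers, not a reimplementation of any of the cited studies): if the responses to two stimuli are Gaussian with equal variance, their discriminability is d' = |mu_a - mu_b| / sigma, and the expected proportion correct in a two-alternative forced-choice task is Phi(d'/sqrt(2)).

    from math import erf, sqrt

    def d_prime(mu_a, mu_b, sigma):
        """Discriminability of two equal-variance Gaussian response distributions."""
        return abs(mu_a - mu_b) / sigma

    def proportion_correct_2afc(dprime):
        """Expected 2AFC proportion correct, Phi(d'/sqrt(2))."""
        z = dprime / sqrt(2.0)
        return 0.5 * (1.0 + erf(z / sqrt(2.0)))   # Phi(z) = (1 + erf(z/sqrt(2)))/2

    # Toy numbers: mean rates 12 and 15 spikes/s, common standard deviation 4.
    dp = d_prime(12.0, 15.0, 4.0)
    print(f"d' = {dp:.2f}, 2AFC proportion correct = {proportion_correct_2afc(dp):.3f}")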

Natural Stimuli and Complex Scenes. Although most electrophysiological


studies employ simple stimuli, the idea of using natural stimuli is not new. For
example, Lettvin et al. (1959) deployed in their experiments "a delightful exhibit
[. . .] a large color photograph of the natural habitat of the frog from a frog's eye
view, flowers and grass." David Marr expressed his concern about simple stimuli as
follows: "One of our wholly new findings is that the so called centre-surround
organization of the retinal ganglion cells is all a hoax! It is nothing but a
by-product of showing silly little spot stimuli to a clever piece of machinery
designed for looking at complete views" (Vaina, 1991; cited after Martin, 2000).
Zetzsche (2002) stresses the fact that natural images form only a tiny fraction
of all possible images and that the visual system is specialized for this subset;
see also Ruderman (1994) for a review on the statistics of natural images. Phys-
iological properties of the visual system can be understood when looking at the
statistics of natural images and applying additional constraints for the neural
processing. For example, Olshausen and Field (1996a,b) (see also Simoncelli
and Olshausen (2001) for a review) derive simple-cell receptive field properties
by employing a sparseness constraint in a learning algorithm for natural scenes.
Likewise, Röhrbein and Zetzsche (2002) show that Weber's law results from a
reduction of statistical dependencies among neurons when presenting natural im-
ages.
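In schematic form (notation simplified relative to the original papers), the sparse coding approach determines basis functions \phi_i(x) and coefficients a_i by minimizing a cost that trades reconstruction error against sparseness of the coefficients,

    E = \sum_{x} \Big[ I(x) - \sum_i a_i \, \phi_i(x) \Big]^2
        + \lambda \sum_i S\!\left( \frac{a_i}{\sigma} \right),

where I(x) is the image, S is a cost function favoring coefficient vectors with few non-zero entries (for example S(u) = log(1 + u^2)), \sigma is a scaling constant, and \lambda controls the trade-off between faithful reconstruction and sparseness.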

2.2.2.3 Dynamical Stimuli and Dynamics of Signal Processing

Neural responses usually appear structured in time. For example, neurons may
show spike frequency adaptation (Smith et al., 2001; Benda and Herz, 2003) or
bursting (Lisman, 1997). Non-uniform behavior in time may be observed for two
different reasons:

- The stimuli themselves may change in time. This is true for most natural
  stimuli and has been considered in electrophysiological experiments
  (e. g., Bialek et al., 1991; Zhang et al., 1998a; Borst and Theunissen, 1999;
  Wessberg et al., 2000).
- The signal processing itself may be time-dependent. For example, Pack
  et al. (2001) simultaneously present two motion stimuli to alert monkeys;
  the stimuli are locally ambiguous but globally unambiguous. The authors
  observe that neural responses in area MT initially reflect predominantly the
  ambiguous local motion features, but gradually converge to an unambiguous
  global representation.

The two causes for non-stationary behavior may occur concurrently. For example,
Wörgötter et al. (1998) present flashed stimuli to anaesthetized cats and record
responses from LGN neurons. They observe phasic bursts followed by tonic re-
sponses. Subsequently, LGN target cells in visual cortex show a fast narrowing
of receptive fields. Such physiological behavior may reflect both time-changing
stimuli and time-changing neural processing.
The temporal behavior of orientation tuning in macaque primary visual cor-
tex has been studied by Ringach et al. (1997a) using reverse correlation in the
orientation domain and later also by Gillespie et al. (2001); the latter find the
same properties of V1 cells (preferred orientation, tuning width) in early and late
reponses. In the auditory system, receptive field changes have been reported by
Edeline et al. (2001) who studied evoked responses in cortical cells of guinea-
pigs during natural sleep. Spatiotemporal receptive field properties in the cat
and monkey visual system are studied by Dinse et al. (1990) and Bredfeldt and
Ringach (2002), respectively. See also Wörgötter and Eysel (2000) for a review
article about the dynamics of receptive fields.
While the experiments cited in the previous paragraph are not accompanied
by investigations on neural coding, Victor (1999) reviews temporal aspects of
coding in the retina and LGN. Fairhall et al. (2001) examine the dynamics of a
neural code in the context of stimuli whose statistical properties are themselves
evolving dynamically; they find adaptation processes on time scales ranging from
tens of milliseconds to minutes.
Data analysis techniques for dynamic stimuli include reverse correlation (Hida
and Naka, 1982; Hida et al., 1983; Eckhorn et al., 1993; Ringach et al., 1997a,b;
Theunissen et al., 2001) and the reconstruction of stimuli (e. g., Zhang et al.,
1998a; Brown et al., 1998; Wessberg et al., 2000; Jäkel, 2001; Bringmann, 2002;
Wiener and Richmond, 2003). For example, Zhang et al. (1998a) use Bayesian
techniques to reconstruct the path of a rat from the firing of its hippocampal
place cells; a considerable improvement of the reconstruction is obtained in a
two-step algorithm where information on the previous reconstruction step enters
the current positional estimate.

The representation of dynamic stimuli, in particular if they change on fast
time scales, requires appropriate neural codes. For example, single-neuron firing
rate estimation may be too slow to account for the rapid perception of visual
stimuli (Gautrais and Thorpe, 1998; Thorpe et al., 2001). See Section 2.3 and
the discussion of the various codes for fast processing (binary coding, temporal
codes, rank order coding).

2.2.2.4 Signal Processing and Mental States

In the last two decades, there has been an increasing interest in neural correlates
beyond simple sensory stimuli or motor actions. It is now well established that
cortical neural activity depends on mental states such as attention or the intention
to solve a certain task. To give an example, Toth and Assad (2002) have found
neurons in monkey lateral intraparietal area that encode the color of a stimulus
only if color is a feature that is relevant for a task at hand; otherwise, color
sensitivity is virtually absent in that brain area. The correlation between mental
states and neural activity is reflected in the extended scheme of Figure 1.3.

Attention. Attention comprises all mechanisms that are used by the brain
to select and modulate behaviorally relevant information for further processing
(Chun and Wolfe, 2001). The selection process involves an enhancement of the
neural representation of the attended stimulus and a suppression of distractor
stimuli (e. g., Kastner et al., 1998). The neural ensemble representing the
attended stimulus will therefore be more visible to subsequent processing stages
(Desimone and Duncan, 1995).
A mainstream interpretation of the function of attention is the grouping of
single features into coherent objects (e. g., Treisman and Gelade, 1980; Wolfe
and Bennett, 1997). However, attending to an object also increases the spatial
resolution (Yeshurun and Carrasco, 1998, 1999) and accelerates the rate of visual
information processing (Carrasco and McElree, 2001). In the context of such
observations, the traditional view on attention has recently been complemented
by statistical and informational aspects (Dayan et al., 2000; Verghese, 2001).
A number of electrophysiological studies reveal effects of the attentional state
on the response properties of neurons in visual cortex. Earlier work reports the
influence of attention only on later stages of the visual pathway (e. g., Moran
and Desimone, 1985; Maunsell, 1995; Treue and Maunsell, 1996; for a review, see
Groh et al., 1996). Motter (1993), however, and more recent publications find
attentional modulations also in the early visual system (e. g., Luck et al., 1997;
Roelfsema et al., 1998; Reynolds et al., 1999; reviews are given by Posner and
Gilbert, 1999; Olson, 2001; Treue, 2001). The observed attentional effects include
a change in neural synchrony (Steinmetz et al., 2000; Fries et al., 2001) and an
increase of the gain of tuning curves (McAdams and Maunsell, 1999; Treue and

Trujillo, 1999). A sharpening of tuning curves has also been suggested (Lee et al.,
1999).
Modelling studies referring to the statistical and informational aspects of at-
tention include Bayesian (Dayan and Zemel, 1999) and Fisher information com-
putations (Nakahara et al., 2001; Eurich, 2003:L); the latter is further described
in Section 2.4.4. In a current project, Eurich and Freiwald (2001) employ multi-
cell recordings in monkey visual cortex and reconstruction techniques to identify
neural correlates of object-based attention and to relate them to psychophysical
behavior.

The research area of understanding mental phenomena on the level of neural


populations through empirical studies and modeling is still in its infancy. Apart
from the topic of attention mentioned above, there are two further issues which
should be briefly mentioned because they may allow for a combined experimental
and modeling approach.

Consciousness. The first topic is the search for neural correlates of conscious-
ness which has been addressed in numerous publications in philosophy and neu-
roscience (e. g., Chalmers, 1995; Shear, 1997; Metzinger, 2000). More recently,
Crick and Koch (2003) have suggested a framework for the neural basis of con-
sciousness whose elements can be tested empirically. Predictions include the
existence of explicit representations (i. e., neural representations of objects by a
small number of neurons) and competing coalitions (assemblies) of neurons.
Accompanying neural modeling may elucidate mechanisms for such phenomena
and contribute to the question under which conditions humans become conscious
of their actions and perceptions.

Neural Correlates of Abstract Rules. The second issue is the question of


how the brain integrates its current perception and previous experiences to make
decisions. A step in this direction is the discovery of neurons which encode abstract
rules (Hoshi et al., 1998; White and Wise, 1999; Asaad et al., 2000; Wallis et al.,
2001). For example, Wallis et al. (2001) trained monkeys to release a lever if two
arbitrary visual stimuli matched ("match rule"). Alternatively, the lever had to
be released whenever two stimuli did not match ("non-match rule"). Neurons in
prefrontal cortex could be identified that selectively fire as the monkey performs
either of the rules, irrespective of the stimuli.
Further experimental and modeling studies in this direction may bridge the
gap between connectionism and the classical rule-based Artificial Intelligence (AI)
approach to cognition. Furthermore, they may allow for an approach to neural
decision-making, elucidating the mechanisms by which mental decisions and precisely
timed motor commands result from a distributed signal processing that involves
the concurrent activity of millions of neurons.

2.2.3 Methods
2.2.3.1 Introduction
A large number of empirical and theoretical methods have been developed and
used to characterize the relation between neural behavior and sensory stimuli,
motor actions, and mental states.

Empirical methods. The overview of Figure 2.1 (left) shows a spectrum of


empirical approaches ranging from electrophysiology (e. g., Kettenmann and
Grantyn, 1992), neuroanatomy (e. g., Nolte, 2002; Hanaway and Gado, 2002;
Kruger, 1994), neuropharmacology (e. g., Nestler et al., 2001), noninvasive meth-
ods like EEG (ElectroEncephaloGraphy, e. g., Nunez, 1981, 1995), and fMRI
(functional Magnetic Resonance Imaging; e. g., Buxton, 2001) to psychophysics
(e. g., Spillmann and Werner, 1990; Fahle and Poggio, 2002, for visual psy-
chophysics) and behavioral studies (Camhi, 1984; Carew, 2000) including clinical
observations (e. g., Gazzaniga et al., 1998).
Some empirical methods investigate neural foundations of perception and
action on a rather coarse level. Functional Magnetic Resonance Imaging, for
example, employs local changes in blood oxygenation due to neural activity
(hemodynamic response); the fMRI signal therefore has a poor temporal reso-
lution. Its spatial resolution is about 1 mm² (which is small compared to the
spatial resolution of the EEG signal), i. e., fMRI responses still result from the
activity of millions of nerve cells. The noninvasiveness of fMRI and other meth-
ods (such as PET, Positron Emission Tomography) makes them valuable tools
for localizing functions especially on the cortical surface and investigating spatio-
temporal signal pathways in the brain; the literature on these topics is vast.2
For the questions raised in Section 2.2.2, however, a finer spatio-temporal scale
must be adopted that considers the electrical activity of single neurons or neural
populations in greater detail. In most studies, the spiking behavior is correlated
with the internal world of Figure 1.3 under the assumption of one of the neural
codes described in Section 2.3. The primary empirical method for this kind of
research is electrophysiology , i. e., the extracellular or intracellular recording of
the electrical activity of single or multiple cells (see Kettenmann and Grantyn
(1992) for an introduction into the methodology). The most favorable technique
is the recording in vivo since it allows for correlating neural behavior with action
and perception in the intact animal (e. g., Hubel and Wiesel, 1962; Britten et al.,
1992; Wiggers et al., 1995b; Rieke et al., 1997; Stanley et al., 1999; Wallis et al.,
2001). Additional data may be obtained from slice recordings (e. g., Traub and
Miles, 1991) or even cell cultures (Murray, 1965; Mains and Patterson, 1973;
Dichter, 1978; Potter and DeMarse, 2001).

2
A curious example of a functional localization has recently been provided by Azari et al.
(2001) who study neural correlates of religious experience and indeed identify frontal and parietal
cortical areas where religious and non-religious subjects show different activities during the
recitation of Psalm 23.

[Figure 2.1: Methods overview. Empirical approaches: electrophysiology (single-cell and
multi-cell recordings), neuropharmacology, neuroanatomy (lesion studies), non-invasive
methods (EEG, MEG, TMS, imaging techniques), psychophysics, and behavioral and
clinical studies. Theoretical approaches (for electrophysiological data in combination
with behavioral data): data analysis, statistical analysis, and modeling, connected by
arrows labeled reconstruction, evaluation, and modeling.]

Figure 2.1: Overview of methodology for the investigation of neural coding. Em-
pirical approaches include a large number of methods on different time and length
scales. Here we consider the level of single cells and cell populations where elec-
trophysiology yields much insight into coding strategies. Theoretical neuroscience
at this level comprises data analysis, statistical analysis of neural responses, and
neural modeling. For details, see text.
In order to quantify correlations between neural dynamics and animal or hu-
man perception and behavior, electrophysiological recordings have to be comple-
mented by measurements of sensory performance or accuracy of motor control.
This is done in behavioral experiments including psychophysics. An ideal ex-
perimental setup is of course the combined behavioral and electrophysiological
measurement; this has been achieved only in a few cases such as Georgopoulos
et al. (1986); Britten et al. (1992); Zhang et al. (1998a); Wessberg et al. (2000).
In most cases, electrophysiology and behavior are obtained from separate exper-
imental setups from the same species (Roth, 1982; Wiggers et al., 1995b; Eurich
et al., 1995:L) or even from different species; for example, electrophysiology in an

animal species may be combined with human psychophysics (Dinse et al., 1994,
1995; Eurich et al., 1997b; Dicke et al., 1999).

Theoretical methods. Here we focus on theoretical approaches employed to


address neural activity on the single cell level or on the level of small neural
populations (i. e., on the order of 10² neurons). Most approaches consider the
fact that neural responses appear noisy : A repetitive presentation of identical
stimuli or a repetitive series of motor actions will usually result in different neural
responses, i. e., different spike trains; in addition, most neurons show a background
activity, the production of action potentials also in the absence of stimuli or
motor actions (e. g., Tomko and Crapper, 1974; Burns and Webb, 1976; Dean,
1981; Softky and Koch, 1992, 1993).3 Such irregularities dictate most of the
current methodology in evaluating neural responses, irrespective of the nature
and interpretation of the noise.4
Figure 2.1 (right) divides the theoretical methodology in three subcategories:
data analysis, statistical analysis, and modeling. Data analysis is concerned with
the evaluation of the responses of single neurons or neural populations. For single
neurons, a standard procedure is the measurement of the average firing rate for
different values of a stimulus parameter. Such data may be extremely compressed
by providing a single number, for example the orientation index, quantifying
the response variation for different stimuli (e. g., Girman et al., 1999). A more
frequently used characterization of neural response properties is obtained through
tuning curves: the average neural firing rate as a function of some continuous
stimulus parameter. For example, a typical tuning curve of visual neurons for
the orientation of bars or full-field gratings can be described by a von Mises
function (Swindale, 1998) which resembles a Gaussian but is defined on a circle.
The interpretation of the functional relevance of tuning properties, however, is not
straightforward: in the biological literature, the stimulus for which the response
is maximal is considered to be the "optimal" stimulus, the stimulus to which
the cell is "tuned"; indeed, the cell produces most action potentials and therefore
spends a maximal amount of energy when responding to this stimulus. On the
other hand, Fisher information analysis suggests that the optimal stimulus is
located at the maximum slope of the tuning curve because at this position, the cell
3
Note, however, that precise neural responses have also been found. For example, Mainen
and Sejnowski (1995) find reliable spike trains upon the electrical stimulation of cortical neurons
with an irregular current input; DeWeese and Zador (2003a,b) suggest a binary neural code
in cortical neurons which show a precise response with a single spike in most cases. Also, the
issue of noise depends on the interpretation of the neural response: If a rate code or a code
depending on spike timing is assumed, a neural response may be considered noisy; if a binary
code is assumed, the same response may be interpreted as practically noise-free.
4
The noise in neural responses has been attributed to the single-neuron dynamics (Lass and
Abeles, 1975; Croner et al., 1993) and to network effects (Shadlen and Newsome, 1998; Arieli
et al., 1996) and may (Mainen and Sejnowski, 1995; Harris and Wolpert, 1998; Cecchi et al.,
2000) or may not (Croner et al., 1993) depend on the input or the task at hand.

is most sensitive to changes in the stimulus (see Figure 2 in Eurich and Pawelzik
(2000); Fisher information is further described in Sections 2.2.3.4 and 2.4.3.1).
The concept of a tuning curve has been extended to include the spatio-temporal
neural behavior (Hida and Naka, 1982; Hida et al., 1983; Eckhorn et al., 1993;
Ringach et al., 1997b; Theunissen et al., 2001).
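The point about the maximum slope can be made explicit with a small sketch (generic, with invented parameters): for a Poisson spike count with mean f(theta) collected in a window of length T, the Fisher information contributed by a single cell is J(theta) = T f'(theta)^2 / f(theta), which vanishes at the peak of the tuning curve and is largest on its flanks. A von Mises tuning curve is used below.

    import numpy as np

    # Von Mises tuning curve (circular analogue of a Gaussian); invented values.
    r_max, r_min, kappa, theta_pref = 40.0, 2.0, 2.0, 0.0  # rates in spikes/s
    T = 0.5                                                # counting window (s)

    def f(theta):
        """Mean firing rate as a function of the stimulus angle (radians)."""
        return r_min + (r_max - r_min) * np.exp(kappa * (np.cos(theta - theta_pref) - 1.0))

    def f_prime(theta):
        """Derivative of the tuning curve with respect to theta."""
        return (-(r_max - r_min) * kappa * np.sin(theta - theta_pref)
                * np.exp(kappa * (np.cos(theta - theta_pref) - 1.0)))

    def fisher_information(theta):
        """J(theta) = T f'(theta)^2 / f(theta) for Poisson spike counts."""
        return T * f_prime(theta) ** 2 / f(theta)

    thetas = np.linspace(-np.pi, np.pi, 1001)
    J = fisher_information(thetas)
    print("J at the tuning-curve peak  :", fisher_information(0.0))
    print("location of maximal J (deg) :", np.rad2deg(thetas[np.argmax(J)]))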
In the case of multi-neuron responses, the concept of tuning curves becomes
unwieldy: it is neither convenient nor sufficient to provide a list of tuning curves
without further discussing the consequences of the combined neural activities
for perception and action or for the internal signal processing. A framework
frequently employed in the last 20 years is that of neural reconstruction (Figure
2.2): Neural responses (usually those of a whole population; but see Bialek et al.
(1991); Borst and Theunissen (1999) for single-neuron reconstructions) are sup-
posed to encode a sensory stimulus or a motor action, and mathematical methods
are employed to decode these responses, that is, to infer the presented stimulus
or the motor action from the sequences of spike trains of the neurons under consideration. The reconstructed stimulus or motor action can then be compared to the original stimulus or motor action, respectively. This allows for hypotheses about neural codes and the performance of the neurons under consideration. In this sense, reconstruction can be regarded as a data analysis method that is capable of integrating the activity of multiple neurons. For a brief discussion of the interpretation of this group of methods see Section 2.3.4.

Figure 2.2: The idea of neural reconstruction. See text for explanations. (The figure shows spike trains of a neural population, neuron # vs. time in s, with the encoding and the decoding/reconstruction direction indicated.)
Further approaches for data analysis include detailed investigations of the
correlations in population responses, motivated by the question of whether synchronous
spiking mediates the binding of features into objects (Eckhorn et al., 1988; Gray
et al., 1989). See for example Perkel et al. (1967); Gray et al. (1989); Gruen et al.
(2001a,b); Gutig et al. (2001); for pitfalls in the detection of synchronous spike
events, cf. Brody (1999).
Statistical analysis of neural encoding strategies, the second subcategory in
Figure 2.1, subsumes various methods to study encoding properties of neurons or
neural populations. These methods may include the evaluation of real data such
that there is an overlap with data evaluation techniques; this holds especially
for reconstructions as outlined above. In many cases, however, responses of hy-
pothetical neural populations are considered to find advantageous physiological
properties in the light of the questions raised in Section 2.2.2. For example, the
measure of Fisher information from classical statistics allows for analytical inves-
tigations of the sensory resolution obtained by the concerted firing of neurons;
cf. Section 2.2.3.4 and references therein.
The last subcategory of theoretical methods shown in Figure 2.1 is neural mod-
eling which is defined here as the modeling of neurons or neural populations as
dynamical systems with the goal to evaluate model performance with respect to
issues of neural coding. For example, Herzog et al. (2003:L) model cortical neural
populations and identify neural correlates of the visibility of a vernier element in
psychophysical experiments. Such dynamical models will be the topic of Chap-
ter 3; the issue of combining neural dynamics and neural coding will be briefly
discussed in Chapter 4.
The following sections consider a few theoretical methods used to address
questions like those listed in Section 2.2.2. Section 2.2.3.2 gives an example of a
new approach to estimating tuning curves from noisy data; Sections 2.2.3.3–2.2.3.5
describe frequently used approaches to analyze the activity of neural populations:
vector-based reconstruction methods, statistical estimation theory and informa-
tion theory, respectively.

2.2.3.2 Example from Data Analysis: Tuning Curve Estimation from Noisy Data
Tuning curves are typically obtained in the following way: The response of a
neuron is repetitively recorded for a certain number of values of some continuous
stimulus parameter (for example, the orientation of a full-field grating), whereby constraints like a limited recording time usually allow only for few repetitions of each parameter value. Subsequently, the average firing rate is computed sep-
arately at each of the parameter values, corresponding to minimizing the least
square error. Finally, the mean values are used to interpolate the tuning curve,
often simply by linear interpolation (e. g., Girman et al., 1999). The described
procedure, however, yields tuning curves that are not robust. In particular, the
method faces two major problems:

For a single parameter value, the average firing rate may vary considerably
upon repetition of the same experiment. There are three reasons for this:
First, due to experimental difficulties such as positioning and maintaining
recording electrodes in place, the number of measurements in neural record-
ings is often very limited. Therefore, the statistical response properties have
to be inferred from a small set of data points, even if the neural system is
assumed to be static throughout the experiment. Second, the trial-to-trial
variability in the neural spike count is sometimes very large, i. e., there is
a considerable amount of noise in the system. Two examples for sets of
noisy data from rat visual cortex are shown in Figure 2.3.

Figure 2.3: Responses of two neurons from area 17 of the rat to moving bar stimuli. Dots indicate firing rates in single trials (neural responses in # spikes vs. stimulus orientation in degrees, 0–360). Solid lines are tuning curves computed according to the method described in Section 4. Left: a cell which has a statistically significant tuning. Right: a cell where no significant tuning is found. The wiggled curve is a polynomial of maximal degree as obtained from the method developed in Etzold et al. (2002, 2003:L, in press); this function can be simplified, which results in a constant tuning function (straight line). Data by W. Freiwald with kind permission; for the experimental setup, see Freiwald et al. (2002:L).

Third, a generic feature of electrophysiological data is the occurrence of outliers, i. e., rare events lying at a considerable distance from the other data. Often, such
data sets are simply discarded from further analysis, because researchers
feel that something has gone wrong in the experiment. However, outliers
may arise not only as a result of recording or transmission errors, but they
may reflect particular internal states of the neural system and therefore be
of functional significance. In most cases it is impossible to decide which
data are erroneous outliers and which are not.
The widely used sum of least squares method gives disproportionate at-
tention to outlying values and underweights the influence of the remaining data. This translates into a very high variability in the fitted tuning curves,
resulting in a low robustness.

A second factor contributing to a low robustness of the estimated tuning


curve is the standard two-step fitting procedure described above. This method
is especially sensitive to variations in the samples if some parameter values
have been tested with only few presentations, giving rise to a high variability
in the tuning curve estimation at these points.

In Etzold et al. (2002, 2003:L, in press), these problems are addressed:


First, the least square measure is replaced by a rank method that gives
outlying values smaller weights without completely discarding them.

Second, the two-step procedure is replaced by a procedure which fits the


complete tuning curve from all responses to all parameter values in one
step. This removes uncertainties resulting from few repetitions in certain
stimulus conditions. The tuning curve is defined by Fourier polynomials.

While details of the method are described in Etzold et al. (2003:L, in press), only
a few results shall be given here. Figure 2.3 shows tuning curves fitted with the
new method from extracellular recordings in rat visual cortex. The cell on the
left is clearly tuned; the cell on the right hand side is first approximated by a
Fourier polynomial of maximal degree (corresponding to half of the number of
stimulus conditions; wiggled curve). The degree of the polynomial, however, can
be reduced using a criterion based on the distribution of residuals; in this case, it
can be reduced to the order zero (constant tuning), indicating that the cell does
not show statistically significant tuning.
The robustness of the method is demonstrated using artificially generated
data in combination with the bootstrap method. Figure 2.4A shows an ideal
tuning curve based on which three bootstrap samples are drawn according to a
mixture distribution. The left column of Figure 2.4B, C, D shows tuning curves
as obtained from a least-square fit, while tuning curves in the right column result
from the new algorithm. The variation in the fits on the left is clearly visible
while the rank-based method yields stable results.
The one-step fitting procedure will be especially helpful in situations where
data are limited, stimulus repetitions are not equally distributed among stimulus
conditions, and where neurons tend to be very noisy as it is the case in rat primary
visual cortex (Freiwald, personal communication).
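The following minimal sketch illustrates the one-step idea described above: a low-order Fourier polynomial is fitted to all single-trial responses at once. As a simple robust stand-in for the rank-based criterion of Etzold et al., an absolute-deviation loss is minimized here; the data, the function names and the choice of the degree are purely illustrative.

    import numpy as np
    from scipy.optimize import minimize

    def fourier_design_matrix(theta, degree):
        """Columns 1, cos(k*theta), sin(k*theta) for k = 1..degree."""
        cols = [np.ones_like(theta)]
        for k in range(1, degree + 1):
            cols.append(np.cos(k * theta))
            cols.append(np.sin(k * theta))
        return np.column_stack(cols)

    def fit_tuning_curve(theta_trials, rate_trials, degree=2):
        """One-step fit of a Fourier polynomial to ALL single-trial responses.
        An absolute-deviation loss replaces the rank-based criterion; a
        least-squares solution is only used as the starting point."""
        X = fourier_design_matrix(theta_trials, degree)
        c0 = np.linalg.lstsq(X, rate_trials, rcond=None)[0]
        loss = lambda c: np.sum(np.abs(rate_trials - X @ c))
        return minimize(loss, c0, method="Nelder-Mead").x

    # hypothetical data: 18 orientations with unequal numbers of repetitions
    rng = np.random.default_rng(0)
    theta = np.repeat(np.deg2rad(np.arange(0, 360, 20)), rng.integers(3, 8, size=18))
    rates = 5.0 + 5.0 * np.sin(theta) + rng.normal(0.0, 3.0, theta.size)
    coefficients = fit_tuning_curve(theta, rates, degree=2)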

2.2.3.3 Vector-Based Reconstruction Methods


An early account of a vector-based measure, the so-called vector strength, was
introduced by Goldberg and Brown (1969) to quantify the degree of phase-locking
of neural firing to an external auditory stimulus. According to this approach, each
spike is considered to be a vector of unit length with a phase angle corresponding
to the phase of the stimulus at the time the spike occurs. The whole spike train
of length N is thus translated into a set of N vectors. The vector strength is
defined to be the length of the sum of all these vectors, divided by N . A vector
strength of 1 corresponds to perfect synchronization, whereas a vector strength
of 0 implies a random relation between stimulus (i. e., phase) and response.
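A minimal sketch of this computation for a periodic stimulus of known frequency; the variable names and the example data are illustrative.

    import numpy as np

    def vector_strength(spike_times, stim_frequency):
        """Each spike is treated as a unit vector at the stimulus phase of its
        occurrence; the vector strength is the length of the mean vector
        (1 = perfect phase locking, values near 0 = no phase relation)."""
        phases = 2.0 * np.pi * stim_frequency * np.asarray(spike_times)
        return np.abs(np.mean(np.exp(1j * phases)))

    # spikes loosely locked to a hypothetical 100 Hz stimulus
    rng = np.random.default_rng(1)
    spikes = np.arange(0.0, 1.0, 0.01) + rng.normal(0.0, 0.001, 100)
    print(vector_strength(spikes, 100.0))   # close to, but below, 1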
Figure 2.4: Comparison of tuning curves for artificial data, as fitted using least squares and the rank-based method. Figure A shows a hypothetical ideal tuning curve t(θ_j) = 5 + 5 sin(2πθ_j/360°). We show three sets of noisy data in Figs. B, C, D (18 orientation conditions with 20 repetitions each, depicted as stars; identical data not shown separately). Data were obtained as r_i(θ_j) = 5 + 5 sin(2πθ_j/360°) + ξ_i(θ_j). The noise ξ_i(θ_j) was generated from a mixture distribution consisting of 95% Gaussian(0,3) and 5% Cauchy(0,1) distributed data. Three outliers with values 45, 49, 58 spikes and four outliers with values 36, 48, 52, 61 spikes are not shown in the graphics B and C, respectively. In the left column, tuning curves of lowest degree are fitted (see Section 4) using the sum of least squares criterion; on the right, the same data are used to fit tuning curves of lowest degree using the rank-based algorithm. (Axes: neural responses (# spikes) vs. stimulus orientations, 0–360 degrees.)

Georgopoulos et al. (1986) employ a vector method, the population vector, to reconstruct the direction of arm movements of a monkey from the firing rates of motor cortical neurons. In brief, a cosine tuning with some central direction \mathbf{c}_i is assumed for cell i (i = 1, ..., N). For a movement direction \mathbf{x}, the average response of cell i is given by

    n_i(\mathbf{x}) = c_i \, \mathbf{c}_i \cdot \mathbf{x} + b_i ,          (2.1)

where b_i is a stimulus-independent background activity, and c_i is the maximal response of the cell after subtraction of the background activity. The population vector

    \mathbf{x}_{PV} = \sum_{i=1}^{N} ( n_i(\mathbf{x}) - b_i ) \, \mathbf{c}_i          (2.2)

yields an estimate of the arm movement of the monkey as encoded by the population of N cells. Zhang et al. (1998a) show that the population vector is a
special case of a basis function method. The population vector used to be pop-
ular due to its simplicity and robustness of results (Georgopoulos (1995); for
further details and examples on vector reconstruction see Salinas and Abbott,
1994; Vogels, 1990; Blum and Abbott, 1996). However, Foldiak (1993) points
out that this method requires a high number of cells to derive reliable estimates
(see also Zohary, 1992). Scott et al. (2001) demonstrate a bias of the population
vector in the reconstruction of reaching movements from the activity of neurons
in monkey motor cortex. Although the population vector method can be ex-
tended to include the length of the vector rather than its direction only (Abbott
and Blum, 1996; Blum and Abbott, 1996; Zhang et al., 1998a), Bayesian methods
are preferred today due to their superior performance (see Foldiak (1993); Zhang
et al. (1998a) for a discussion of this issue).
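As an illustration of equations (2.1) and (2.2), the following minimal sketch computes the population vector for a planar movement direction from a hypothetical population of cosine-tuned cells; all parameter values are invented.

    import numpy as np

    def population_vector(rates, preferred_dirs, background):
        """Sum of preferred-direction vectors c_i weighted by the
        background-corrected responses n_i - b_i, cf. equation (2.2)."""
        return (rates - background) @ preferred_dirs      # shape (2,)

    # hypothetical population of N cosine-tuned cells in the plane
    N = 64
    rng = np.random.default_rng(2)
    angles = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
    c = np.column_stack([np.cos(angles), np.sin(angles)])   # preferred directions
    b = np.full(N, 5.0)                                      # background activity
    true_dir = np.array([np.cos(0.7), np.sin(0.7)])          # movement direction x
    rates = b + 10.0 * (c @ true_dir) + rng.normal(0.0, 1.0, N)   # cosine tuning, eq. (2.1)
    estimate = population_vector(rates, c, b)
    estimated_angle = np.arctan2(estimate[1], estimate[0])   # should be close to 0.7 rad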

2.2.3.4 Statistical Estimation Theory

General setup. Neural coding can be treated as an estimation-theoretic problem. Introductions to estimation theory are given in Cover and Thomas (1991); Kay (1993); O'Hagan (1994); Gelman et al. (1997); Stuart et al. (1999). The ap-
plication of estimation theory to neural systems is introduced in Sanger (1996);
Oram et al. (1998); Brunel and Nadal (1998); Eurich and Mallot (2001); Dayan
and Abbott (2001). Estimation theory is usually used in the context of rate
coding; cf. section 2.3.2.2. In this framework, the probabilistic nature of neu-
ral responses which can for example be found in cortical neurons (Softky and
Koch, 1992, 1993; Mainen and Sejnowski, 1995; Arieli et al., 1996) is treated
as follows.
Consider stimuli or motor actions parametrized by a scalar or a vector variable
x ∈ X, where X is a D-dimensional space referred to as the stimulus space; cf.
Section 2.4.1 for a more detailed motivation of this concept. If a single stimulus
x0 is repetitively presented, a neuron under consideration will respond with vary-
ing numbers of action potentials n^{(1)}, n^{(2)}, ... within some time interval after stimulus onset. These responses are cast into a probability distribution P(n; x_0).
This procedure has to be repeated for various values of the stimulus variable
x. The result is a parameter-dependent probability distribution P (n; x). For the
simultaneous recording of the activities of N neurons, P (n; x) is replaced by a
parameter-dependent multivariate distribution P (n1 , . . . , nN ; x).
For a given response characteristics P (n1 , . . . , nN ; x), a typical task is now to
estimate the stimulus x from a single measurement of spike counts n1 , . . . , nN ,
for example through maximum likelihood (Sanger, 1996) or Bayesian estimation
(Foldiak, 1993). Such functions will henceforth be referred to as estimators. The
estimated value will be denoted as x = x(n1 , . . . , nN ). T presentations of the
same stimulus x will yield different estimates x1 , . . . , xT due to different actual
spike counts.
The average deviation of the estimated stimulus from the true stimulus is
called the bias of the estimator S:

    b_S(x) = \lim_{T \to \infty} \frac{1}{T} \sum_{t=1}^{T} \hat{x}_t - x \equiv \langle \hat{x} - x \rangle .          (2.3)

An unbiased estimator is an estimator for which b_S(x) ≡ 0, i. e., on average, the


reconstruction method yields the correct value of any stimulus. In some cases
it is not possible to find an unbiased estimator. For example, tuning curves
with a rectangular profile do not allow for unbiased reconstruction of stimuli (see
Figure 2.5).

Figure 2.5: Three receptive fields in a 2-dimensional stimulus space X. Consider a binary neural response, i. e., a stimulus inside a receptive field yields a response 1 of the corresponding neuron; otherwise, its response is 0. The two stimuli A and B therefore yield the same population response (a 1 in neurons 1 and 3, a 0 in neuron 2). Any estimator operating on the neural responses is therefore unable to distinguish between these two stimuli. As a result, the bias b_S(x) cannot vanish for both x = A and x = B. That is, any estimator will be biased.

The mean square error of the estimator S is defined as

    \varepsilon^2(x) = \lim_{T \to \infty} \frac{1}{T} \sum_{t=1}^{T} (\hat{x}_t - x)^2 \equiv \langle (\hat{x} - x)^2 \rangle .          (2.4)

The mean square error (or its square root) is the usual measure of the encoding
accuracy of a neural population.

Bayesian estimation. Bayesian estimation presupposes a prior distribution of stimuli, p(x), independent of the neural system.5 The parameter-dependent probability distribution P(n_1, ..., n_N; x) is usually denoted as P(n_1, ..., n_N | x) and interpreted as a conditional probability. Applying Bayes' theorem yields

    p(x | n_1, ..., n_N) = \frac{P(n_1, ..., n_N | x) \, p(x)}{P(n_1, ..., n_N)} .          (2.5)

p(x | n_1, ..., n_N) is called the posterior probability (density) because it reflects the knowledge after the measurement of the neural response. The denominator P(n_1, ..., n_N) in equation (2.5) plays the role of a normalization and can be evaluated according to

    P(n_1, ..., n_N) = \int_X P(n_1, ..., n_N | x) \, p(x) \, dx .          (2.6)

5 Here we consider a continuous stimulus space X; hence, p(x) is a probability density with normalization \int_X p(x) \, dx = 1. However, the formalism can also be formulated with a discrete space and, correspondingly, a probability distribution P(x).

The posterior probability is used to compute an estimate \hat{x} of the stimulus. The usual error measure is the Bayesian Mean Square Error (BMSE)

    \mathrm{BMSE} = E\left[ (\hat{x} - x)^2 \right] ,          (2.7)

whereby E(...) denotes the expected value over the joint distribution p(n_1, ..., n_N, x) = P(n_1, ..., n_N | x) \, p(x).6 For the BMSE, the optimal choice of \hat{x} is the mean of the conditional distribution,

    \hat{x} = \int_X x \, p(x | n_1, ..., n_N) \, dx ,          (2.8)

i. e., it minimizes the mean square error \varepsilon^2(x) (see for example Rieke et al., 1997). A convenient estimate which can be easily evaluated is the maximum of the posterior distribution:

    \hat{x} = \mathrm{argmax}_{x \in X} \; p(x | n_1, ..., n_N) ,          (2.9)

referred to as the MAP (Maximum-A-Posteriori) estimator.
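As an illustration of equations (2.5)–(2.9), the following minimal sketch performs a MAP reconstruction from a single vector of spike counts. It assumes independent Poisson firing around known tuning curves and a flat prior over a discretized stimulus space; the tuning model and all parameter values are hypothetical.

    import numpy as np
    from scipy.stats import poisson

    # discretized circular stimulus space and hypothetical tuning curves
    stimuli = np.linspace(0.0, 2.0 * np.pi, 180, endpoint=False)
    centers = np.linspace(0.0, 2.0 * np.pi, 12, endpoint=False)

    def tuning(x, c, r_max=20.0, kappa=2.0, b=1.0):
        """Mean spike count of a neuron with preferred direction c."""
        return b + r_max * np.exp(kappa * (np.cos(x - c) - 1.0))

    f = np.array([tuning(stimuli, c) for c in centers])   # f[i, j] = f_i(x_j)

    def map_estimate(counts, prior=None):
        """Maximize the posterior p(x | n_1, ..., n_N) over the stimulus grid,
        assuming independent Poisson noise (eq. 2.10); a flat prior reproduces
        the MAP estimator of equation (2.9)."""
        if prior is None:
            prior = np.full(stimuli.size, 1.0 / stimuli.size)
        log_posterior = np.log(prior) + poisson.logpmf(counts[:, None], f).sum(axis=0)
        return stimuli[np.argmax(log_posterior)]

    # simulate one population response to a "true" stimulus and decode it
    rng = np.random.default_rng(3)
    x_true = 1.3
    counts = rng.poisson(tuning(x_true, centers))
    x_hat = map_estimate(counts)          # should lie close to x_true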


A Bayesian approach has been used in theoretical contributions on neural cod-
ing (e. g., Foldiak, 1993; Salinas and Abbott, 1994; Abbott, 1994; Zemel et al.,
1998) and neural dynamics (Koechlin et al., 1999). It has also successfully been
applied in electrophysiological experiments to reconstruct stimuli from the activ-
ity of neural populations (e. g., Zhang et al., 1998a; Oram et al., 1998; Freiwald
et al., 2002:L; Stemmann et al., 2003). As indicated above, such reconstructions
can be regarded as data analysis techniques making sense of the concerted firing
of large neural populations: results are obtained which can be easily interpreted
in terms of encoding quality.
There are several things to consider and to bear in mind when using Bayesian
reconstruction:
6 Note that this error measure is different from the error defined in (2.4); here, the average
is computed for the joint distribution p(n1 , . . . , nN , x), whereas in classical estimation, the
average is computed with respect to the distribution P (n1 , . . . , nN ; x), i. e., for the spike count
variables for a given, fixed parameter x. Cf. Kay (1993).

The empirical determination of a multivariate parameter-dependent distri-


bution of neural responses, P (n1 , . . . , nN |x), will normally not be feasible
due to the limited amount of available data. Additional assumptions on
the distributions are therefore necessary. An immediate simplification is
the assumption of uncorrelated firing:

    P(n_1, ..., n_N | x) = \prod_{i=1}^{N} P(n_i | x) .          (2.10)

Note that (2.10) refers to the conditional distributions, sometimes termed


noise correlation (Gawne and Richmond, 1993). This type of correlation is
different from correlations in the unconditioned distribution P (n1 , . . . , nN ),
also called signal correlation (Gawne and Richmond, 1993). Neurons with
overlapping tuning curves, for example, have correlations in their (unconditioned) firing rates: P(n_1, ..., n_N) ≠ \prod_{i=1}^{N} P(n_i). However, they may or
may not have correlations in the conditional distributions. A missing noise
correlation can be interpreted in a way that each neuron i (i = 1, . . . , N )
evaluates its distribution P (ni |x) (i. e., its spike statistics for a given av-
erage fi (x)) independently of all other neurons.
Even the univariate parameter-dependent distributions P(n_i | x) are approximated in most cases. To this end, the average number of spikes for each stimulus x, i. e., the tuning curves

    f_i(x) = \frac{\langle n_i(x) \rangle}{\tau}          (i = 1, ..., N)          (2.11)

are empirically determined. Some noise model, usually a Poisson or a normal distribution, with a mean value f_i(x) is then employed for the reconstruction.
What is the interpretation of the prior distribution p(x)? The simplest way
to deal with p(x) is to use the actual distribution of stimuli as presented
during an experiment. Alternatively, p(x) may formalize the naturally oc-
curring stimulus statistics as it is perceived by an animal, or a distribution of actually occurring motor actions. This case may be tested by reconstruct-
ing stimuli with a natural prior: the nervous system may be adapted to
process natural stimuli (see also the second paragraph in Section 2.2.2) and
thus yield a higher accuracy in this case. However, natural distributions of
stimuli are harder to obtain than a simple distribution of stimuli used in
the lab. A third, slightly different interpretation of the prior says that it
reflects the expectation of an animal to perceive certain stimuli or features.
In the latter two interpretations, if no further information about the stimuli
is available, one can use so-called noninformative priors (for a discussion,
see O'Hagan, 1994).

Using the optimal estimator (2.8) implies the philosophy that frequently
occurring stimuli should be represented with a high accuracy while rarely
occurring stimuli may have a larger reconstruction error such that the
error is small if averaged over stimuli and noise distribution. However, this
may not be feasible in all cases: A rarely occurring stimulus, for example
the appearance of a predator, may nevertheless be relevant for an animal
in which case some attentional mechanism will be elicited. From a signal
processing point of view, such events could be considered by including an
additional weighting function in the reconstruction algorithm giving more
weight to stimuli that are considered to be relevant.

The results obtained from Bayesian estimation are superior to other meth-
ods such as vector decoding. Zhang et al. (1998a) applied several techniques to
reconstruct the path of a rat running in a maze from the firing of hippocampal
place cells. A straightforward Bayesian reconstruction with a flat prior as out-
lined above proved to be second-best; the best method was an extended Bayesian
scheme where additional information about the position of the rat entered the
computation.
Freiwald et al. (2002:L) document a collaboration of eight institutes to estab-
lish a new recording technique with micro-machined silicon probes for obtaining
spike data from large neural populations, to apply this technique to the rat visual
cortex, and to evaluate the data using estimation theory. The rat primary visual
cortex is not as well studied as corresponding structures in cats or monkeys (see,
for example, van der Togt et al., 1998; Girman et al., 1999), and cells are consid-
ered to be very noisy. We therefore used standard stimuli (whole screen black and
white gratings at various orientations) to study the encoding accuracy of neural
populations in area V1. Most neurons turned out not to be tuned significantly,
but a few neurons showed direction preferences; Figure 2.6 shows an example.
A Bayesian analysis was subsequently performed; cf. Figure 2.7a for a con-
ditional probability distribution of the tuned cell response shown in Figure 2.6.
To yield an estimate of the orientation resolution obtained with a population of
tuned neurons of rat area V1, a well-tuned cell was mathematically duplicated,
and the resulting tuning curves were distributed over all orientations. A sub-
sequent Bayesian analysis employing the MAP estimator revealed that only a few neurons are sufficient to yield a good resolution of oriented gratings (Figure 2.7b).

An essential step in such a data analysis is a robust estimation of the neuronal


tuning curves which yields reliable results also for high noise and in the presence
of outliers. A method for this is described in Etzold et al. (2003:L, in press); see
Section 2.2.3.2.

Fisher information. Apart from the fact that the natural prior distribution of
stimuli, p(x), is usually not available, Bayesian techniques have the disadvantage
of allowing only very limited analytical access: results are usually obtained through model simulation and subsequent evaluation. In contrast, Fisher information, a measure from classical estimation theory, provides a general framework which also yields analytical access in some cases.

Figure 2.6: Tuning properties of a single neuron recorded in rat primary visual cortex. The upper graph shows firing rate as a function of time and stimulus orientation in a gray-scale code. Lighter colors indicate firing rates higher than average, darker colors firing rates lower than average. Below, the PSTH is depicted which was computed as the average firing rate over all stimulus conditions (marginal distribution) of the above plot. Shown is a time interval of 2800 ms. The stimulus was turned on at 1000 ms and switched off at 2000 ms. Stimulus presentation leads to a steep increase of firing rate (to about 20 Hz on average) for about 100 ms. The tuning curve of the first 150 ms (shown as a polar plot on the lower left) shows only a weak modulation which is not statistically significant. A clear directional (orientational) tuning is only apparent after the initial burst of activity. The tuning curve of all spikes fired in the time period between 1150 ms and 2000 ms after stimulus onset shows two peaks around 70°–90° and 250°–270° of visual angle. In this phase, both response enhancement and suppression as compared to the pre-stimulus interval lead to the cell's tuning properties. Therefore, in the PSTH averaged over all stimulus conditions (middle) firing rates during stimulation are only slightly higher than before stimulus onset. Stimulus offset leads to a strong suppression of activity in all stimulus conditions tested, lasting for about 1 s (from Freiwald et al., 2002:L).
In brief, for a given characteristics of neural spike counts P(n_1, ..., n_N; x), i. e., for a multivariate distribution depending on a D-dimensional parameter x, the Fisher information is a (D × D) matrix whose entries are defined to be

    J_{ij}(x) := \left\langle \frac{\partial \ln P(n_1, ..., n_N; x)}{\partial x_i} \, \frac{\partial \ln P(n_1, ..., n_N; x)}{\partial x_j} \right\rangle = - \left\langle \frac{\partial^2 \ln P(n_1, ..., n_N; x)}{\partial x_i \, \partial x_j} \right\rangle ;          (2.12)

\langle ... \rangle denotes the expectation value with respect to the probability distribution P(n_1, ..., n_N; x), and ln(·) is the natural logarithm (Cover and Thomas, 1991; Kay, 1993; Stuart et al., 1999).7

Figure 2.7: Bayesian reconstruction of stimulus orientation. (a) Conditional probability p(\vec{n} | φ) (gray-scale coded, with lighter colors indicating higher probability values) of the responses of a single neuron to a given stimulus direction φ; for illustration, the neuron of Figure 2.6 was chosen. For each of 18 directions employed in the experiment, a Poisson distribution with the same mean as the recorded average response of the cell was used as an approximation. Note that the actual stimulus orientation can be estimated with an acceptable error margin only at high firing rates (approx. 20 Hz and higher). (b) Application of the Bayesian reconstruction algorithm to assess the minimum square error of a population of cells in the estimation of the presented stimulus. Due to the limited amount of available data, a well-tuned neuron was duplicated to study the encoding accuracy as a function of population size. A maximum-a-posteriori (MAP) estimator was used for this analysis, because this estimator is robust against noisy fluctuations in the cells' activity unless noise becomes overall dominant. (Axes: (a) firing rate (Hz) vs. grating motion direction (deg); (b) square error (deg²) vs. number of neurons.) (Adapted from Freiwald et al., 2002:L.)
7 The issue of Fisher information has recently been discussed in Theoretical Physics. Frieden (1998) introduces the Principle of Extreme Physical Information and states that Fisher information, in combination with this principle, can be used to derive most fundamental equations of Theoretical Physics such as the Klein-Gordon equation, the Dirac equation, the Einstein field equation, and Maxwell's equations. The debate around this topic is still going on; to date it is not quite clear if Frieden's approach actually provides new insight into the physical theories.
The significance of the Fisher information measure lies in the fact that it yields
a lower bound on the expected square estimation error for an arbitrary, unbiased
estimator, whereby the error is computed according to (2.4). This relation is quantified by the Cramer-Rao inequality:

    \langle (\hat{x} - x)^2 \rangle \geq \sum_{j=1}^{D} \left( J^{-1}(x) \right)_{jj} .          (2.13)

The minimal square error will henceforth be denoted as \varepsilon^2_{\min}; Equation (2.13) can then be written as

    \varepsilon^2_{\min}(x) = \sum_{j=1}^{D} \left( J^{-1}(x) \right)_{jj} .          (2.14)

A proof of the Cramer-Rao inequality can be found in Cover and Thomas (1991).
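For the frequently used special case of independent Poisson spike counts and a one-dimensional stimulus, the Fisher information reduces to J(x) = τ Σ_i f_i'(x)² / f_i(x), and 1/J(x) is the Cramer-Rao bound of equation (2.14). The following minimal sketch evaluates this expression for a hypothetical population with Gaussian tuning curves; all parameter values are invented.

    import numpy as np

    def fisher_information(x, centers, tau=1.0, r_max=20.0, sigma=0.5, b=1.0):
        """J(x) = tau * sum_i f_i'(x)^2 / f_i(x) for independent Poisson spike
        counts and Gaussian tuning curves
        f_i(x) = b + r_max * exp(-(x - c_i)^2 / (2 sigma^2))."""
        f = b + r_max * np.exp(-(x - centers) ** 2 / (2.0 * sigma ** 2))
        df = -(x - centers) / sigma ** 2 * (f - b)     # derivative of the tuning curve
        return tau * np.sum(df ** 2 / f)

    centers = np.linspace(-5.0, 5.0, 21)    # hypothetical preferred stimuli
    J = fisher_information(0.1, centers)
    min_square_error = 1.0 / J              # Cramer-Rao bound, cf. equation (2.14)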
When applied to neural coding, the minimal square estimation error provides
a bound on the accuracy with which a stimulus can be reconstructed, given the
spike activity of a neural population. Fisher information has been applied in a
number of contributions (Paradiso, 1988; Seung and Sompolinsky, 1993; Brunel
and Nadal, 1998; Abbott and Dayan, 1999; Zhang and Sejnowski, 1999; Wilke
and Eurich, 1999; Pouget et al., 1999; Yoon and Sompolinsky, 1999; Eurich and
Wilke, 2000:L; Eurich et al., 2000:L; Karbowski, 2000; Wilke and Eurich, 2001:Lb;
Nakahara et al., 2001; Wilke and Eurich, 2001:La; Bethge et al., 2002; Wilke and
Eurich, 2002; Wu et al., 2002; Eurich, 2003:L), most of which deal with neural
encoding strategies.
There are two things that have to be taken into account when employing the
measure of Fisher information:
The existence of a lower bound \varepsilon_{\min}(x) does not imply the existence of an estimator that reaches this bound, i. e., of a so-called efficient estimator. Zhang et al. (1998a) compare Bayesian reconstruction results with a
Fisher information calculation in the encoding of the position of a rat by
hippocampal place cells. Despite several simplifying assumptions on the
neural firing, the Bayesian estimator achieves a value of twice the Cramer-Rao bound \varepsilon_{\min} with only 25 neurons. A neural decoding mechanism based
on Bayesian principles might therefore extract the signal content in the neu-
ral spike events rather efficiently. A theoretical account of this issue is given
in Bethge et al. (2002) who fathom the validity of the Fisher information measure, in particular with respect to the observation time window τ. A basic result is that τ has to be sufficiently long (or, equivalently, firing rates have to be sufficiently high) for Fisher information to provide the inverse of an existing efficient estimator.
Fisher information is concerned with unbiased estimators only. There is an
extension to biased estimators (cf. for example Cover and Thomas, 1991, p. 335). However, in this case, the bound includes the bias of a given estimator and its derivative; therefore, it is no longer as universal as in (2.13), which
refers to arbitrary unbiased estimators. If biased estimators are taken into
account, estimation errors may fall below this Cramer-Rao bound.

Sections 2.4.3 and 2.4.4 discuss neural encoding strategies for a single object and
for multiple objects, respectively, under the assumption that Fisher information
yields a good measure of the available signal content of spike activity. It turns out
that simple dichotomies like small vs. broad tuning are by no means sufficient
to characterize the performance of a neural population.

2.2.3.5 Information Theory

Apart from statistical estimation theory, information theory (Shannon, 1949;


Cover and Thomas, 1991) is frequently applied when it comes to quantifying the
relation between sensory or motor stimuli and neural responses; for introductions
and reviews, see Dayan and Abbott (2001, chapter 4); Bialek and Rieke (1992);
Rieke et al. (1997); Borst and Theunissen (1999); Buracas and Albright (1999);
Alkasab et al. (1999); Hertz and Panzeri (2003).
Only three years after Shannon's invention of information theory, MacKay and McCulloch (1952) attempted to compute the information transmission capacity of single spiking neurons and obtained a rather high estimate of 500–1000 bits/s.
In the 1970s, Eckhorn et al. laid the foundation for much subsequent work on
measuring transmitted information by calculating the rate at which neurons in
the cat LGN carry information about a train of visual flashes (Eckhorn and Popel,
1974, 1975); they found rates from 10–60 bits/s. This work was later extended to
other stimuli and neural systems (Eckhorn et al., 1976; Eckhorn and Popel, 1981;
Eckhorn and Querfurth, 1985). Bialek and Zee (1990); Bialek et al. (1991); Rieke
et al. (1993); Borst and Theunissen (1999) used information-theoretic measures
to analyze the reconstruction of motion stimuli from the activity of the blowfly
H1 cell. Alkasab et al. (1999) applied information theory to the problem of the
encoding of olfactory stimuli that cannot be parametrized as easily as visual
stimuli.
Information theory is employed also in theoretical approaches to neural cod-
ing, for example in single neurons (Johnson, 1996; Deco and Schurmann, 1998),
neural populations (Johnson et al., 2001), and synaptic transmission (Manwani
and Koch, 2001). Panzeri et al. (1999); Panzeri and Schultz (2001); Schultz and
Panzeri (2001) expand the entropy of a spike train as a Taylor series in the time
window of measurement to investigate the relationship between temporal, corre-
lational, and rate coding. DeWeese and Meister (1999) suggest a new measure for
the information per spike that allows for an accumulation of information obtained
from subsequent spike events.
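The central quantity behind these studies is the Shannon mutual information between stimulus and response. The following minimal sketch computes it from a discretized joint distribution; the joint counts are invented for illustration.

    import numpy as np

    def mutual_information(joint):
        """I(S;R) = sum_{s,r} p(s,r) * log2[ p(s,r) / (p(s) p(r)) ] in bits,
        for a joint probability table with stimuli in rows and (discretized)
        responses in columns."""
        joint = joint / joint.sum()
        p_s = joint.sum(axis=1, keepdims=True)
        p_r = joint.sum(axis=0, keepdims=True)
        nonzero = joint > 0
        return np.sum(joint[nonzero] * np.log2(joint[nonzero] / (p_s @ p_r)[nonzero]))

    # hypothetical joint counts of (stimulus, spike-count bin)
    counts = np.array([[30.0, 10.0,  2.0],
                       [ 8.0, 25.0,  9.0],
                       [ 2.0, 12.0, 28.0]])
    print(mutual_information(counts), "bits per observation")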

2.3 Neural Codes


To address the topics introduced in Section 2.2.2, a certain neural code is usually
assumed in the evaluation of empirical data and in theoretical modeling. The
outcome will depend on the chosen code in almost all cases. It is therefore
necessary to gain an overview of the putative neural codes that have been put
forward during the last decades. Section 2.3.1 considers the encoding with single
neurons whereas Section 2.3.2 is concerned with codes based on the activity of
neural populations which is the scheme that is assumed today to be realized
in large parts of the brain and in many species. Section 2.3.3 briefly mentions
substrates for other codes, graded potentials and neuromodulation. The issue of
neural codes is finally discussed in Section 2.3.4 by trying to find an answer to
the question of which of the proposed candidates for neural codes might be the
true one.

2.3.1 Single-Neuron Codes


The idea that single neurons encode for perceptually significant events goes back
to Horace Barlow who discovered on-off retinal ganglion cells in the frog. In
Barlow (1953), he wrote: "When feeding, its attention is attracted by its prey, which it will approach, and finally strike at and swallow. Any small moving object will evoke this behaviour, and there is no indication of any form discrimination. In fact, on-off units seem to possess the whole of the discriminatory mechanism needed to account for this rather simple behaviour . . . [and] it is difficult to avoid the conclusion that the on-off units are matched to this stimulus and act as fly detectors." Later, he laid down his neuron
doctrine in greater detail (Barlow, 1972). The concept was continued and refined
in a large number of neuroethological studies in amphibians. Lettvin et al. (1959)
identified several electrophysiological classes of retinal ganglion cells in the frog
and assigned them different tasks in the prey capture behavior. One of the classes
was later referred to as bug detector because the object that best excites these cells
is a buglike moving object.8 Kupferman and Weiss (1978) introduced the notion
of a command neuron, a neuron whose activity in response to a stimulus is both
necessary and sufficient to elicit a given behavior. An extended review of these
concepts and the corresponding behavioral and electrophysiological experiments
in toads is given by Ewert (1984). These ideas were later replaced by concepts
focusing on simultaneous responses of several classes of neurons and networks
thereof (Ewert, 1987, 1989); for further empirical studies and a critical assessment
of the detector concept, see Roth (1987, 1994).
8 Lettvin also coined the famous term grandmother cell for a neuron which specifically responds to a particular object (for example, the grandmother) in the visual field.

Barlow's neuron doctrine, however, is still pursued. In their comparison of


neuronal and psychophysical performance in discriminating motion directions of
stochastic motion stimuli, Britten et al. (1992) came to the conclusion that the
sensitivity of most neurons in monkey area MT is very similar to the psy-
chophysical sensitivity. They suggest that psychophysical decisions are likely to
be based on a relatively small number of neural feature detectors. More recently,
Crick and Koch (2003) postulated so-called explicit representations, i. e., a "small set of neurons [. . .] that respond[s] as a detector for that feature, without further complex neural processing", as a prerequisite for conscious perception.
The cited examples of representations based on single neurons or small neural
populations in the vertebrate brain suggest a more detailed description and dis-
cussion of the notion of single-neuron codes. First, the specific behavior observed
in some neurons is very likely to be the result of a complex signal processing
involving large neural populations in multiple subcortical and cortical areas (for
a hierarchical computational model of cortical processing, see Riesenhuber and
Poggio, 1999). That is, neural information processing is by no means based on
feature detectors only but neurons resembling feature detectors merely represent
one stage of the signal processing in some cases. A second argument along this
line is that cell death will normally not lead to the loss of perception of particu-
lar objects which suggests that their neural correlates will not exclusively consist
of the activity of feature detectors. A more likely concept is that of parallel
representations in the sense that objects or motor actions have multiple instan-
tiations in the brain. For example, the MT neurons which serve as discriminators
for movement direction in Britten et al. (1992) have a counterpart in the motor
system where neural populations control the movement of the eyes for indicating
the observed movement direction. These neurons also represent the visual input
which could be validated by neural reconstruction techniques. It is unclear to-
day which representations are necessary and sufficient for conscious perception.
Third, from a more philosophical point of view, the number of objects in the
world that we are potentially able to categorize and recognize is far too large
to be represented by feature detectors specialized in single objects. For a more
detailed discussion of this issue, see Roth (1994). An overview of the history of
feature detectors can be found in Martin (1994, 2000).

2.3.2 Population Codes Based on Spike Activity


Today, most experimental and theoretical studies assume that signal processing
including the discrimination and recognition of objects involves large numbers of
nerve cells which are active concurrently or sequentially in time (for overviews
of this topic, see for example Deadwyler and Hampson, 1997; Kristan Jr and
Shaw, 1997; Pouget et al., 2000; Victor, 1999).
The idea of population coding has developed parallel to that of the feature
detector. An early account of this issue is given by Sherrington (1940): "The brain region which we may call mental is not a concentration into one cell, but an enormous expansion into millions of cells. Where it is a question of mind the nervous system does not integrate itself by centralization upon one pontifical cell. Rather it elaborates a million-fold democracy whose each unit is a cell" (cited after Martin, 1994).
Although population codes and distributed neural processing are intensively
studied in the theoretical literature (e. g., McCulloch and Pitts, 1943; Hinton
et al., 1986; Baldi and Heiligenberg, 1988; Abbott, 1994; Sanger, 1996; Eurich and
Schwegler, 1997:L; Eurich et al., 1997:L; Zhang and Sejnowski, 1999; Eurich and
Wilke, 2000:L; Wilke and Eurich, 2001:Lb), experimental proof of the encoding of
stimuli by populations of neurons is still difficult to obtain. Heiligenberg (1987) described a population coding scheme for the electrosensory system of weakly
electric fish. Likewise, data analysis and modeling (e. g., Eurich et al., 1995:L,
1997:L) including neural reconstructions (e. g., Brown et al., 1998; Stanley et al.,
1999; Zhang et al., 1998a; Wessberg et al., 2000; Freiwald et al., 2002:L) suggest
that population codes are used in the nervous system. However, a comparison of
the signal content of single neurons vs. multiple neurons has been accomplished
only in a few cases. Meister et al. (1995); Meister (1996); Warland et al. (1997);
Meister and Berry II (1999) consider the encoding of visual stimuli by retinal
ganglion cells. The authors find evidence for a distributed coding scheme in
the sense that simultaneously occurring spikes encode information
not contained in the activity of the single neurons.9 A similar phenomenon was
observed in the cat lateral geniculate nucleus (Dan et al., 1998).
There are several putative codes which have been investigated in the last
decades.

2.3.2.1 Binary Coding


Binary coding was employed in the first article on neural networks by McCulloch
and Pitts (1943). In this model, neurons were assumed to be logical threshold
elements, networks of which can realize Boolean functions. A binary coding
scheme, however, is not restricted to this case. Biologically, various kinds of
neural behavior can be interpreted as a binary response:

A neuron without background activity starts firing as an appropriate stimu-


lus is presented. For example, tectal cells in salamanders show this behavior
(the measurement of receptive fields is therefore quite simple in these ani-
mals, Wiggers et al., 1995b). Similarly, Lewen et al. (2001) report that the
motion-sensitive H1 neuron of the fly shows a binary response to natural stimuli at a high background light intensity.

9 See, however, Nirenberg et al. (2001) who find that retinal ganglion cells act largely as independent encoders; in their study, 90% of the information about presented stimuli can be obtained if the concerted cell signalling is ignored.

A neuron which has background activity (as do, for example, most cortical
neurons) responds with a significant rapid increase in its firing rate. A
typical case is a phasic burst shortly after stimulus onset (e. g., Ahmed et al.,
1998 and Lisman, 1997 for an account of bursting behavior in neurons).

A neuron fires a single action potential as a response to a stimulus; this be-


havior has recently been found in the auditory system (DeWeese and Zador,
2003a,b). See also the recent review by Reinagel (2001) who describes the
response of visual neurons to natural stimuli as peaky and sparse.
The coding accuracy of a population of binary neurons was briefly discussed
by Hinton et al. (1986) for receptive fields in infinite k-dimensional spaces. More
detailed computations and applications to the visual and the somatosensory sys-
tem are given in Wiggers et al. (1995a); Eurich et al. (1996); Eurich and Schwegler
(1997:L); Eurich et al. (1997:L); Eurich and Wiggers (1997); Eurich et al. (1997b).
Bethge et al. (2003) emphasize that the binary tuning of neural responses is op-
timal for rapid signal transmission; see also Panzeri et al. (1996).
Binary coding is further described in Section 2.4.2 in the light of the resolution
obtained by a population of neurons.

2.3.2.2 Rate Coding


Rate coding, proposed as early as 1928 by Adrian, refers to the idea that the neural
firing rate, i. e., the number of action potentials fired in a certain time interval, is
relevant for the signal processing in the nervous system. From a theoretical point
of view, two different meanings are associated with the notion of a rate:
First, a rate is defined to be the firing rate of a single neuron. Although a
number of studies consider the corresponding encoding with single neurons
(see Section 2.3.1), most approaches in the last decade assume that a popu-
lation of neurons encodes for stimuli or movements. In the context discussed
here, it is then assumed that each member of the population responds with a
certain firing rate. For example, in models employing statistical estimation
theory (Section 2.2.3.4), the multivariate distribution P (n1 , . . . , nN ; x) is
evaluated, where n_i (i = 1, ..., N) is the number of spikes observed within a time interval τ. Independent firing as given by (2.10) is often assumed.

Second, the notion of a rate may also refer to a population rate, i. e., the
number of spikes fired by a given set of neurons within a certain time inter-
val. This quantity contains less information than the set of single-neuron
rates in the first case, because the spikes are not attributed to specific neu-
rons. However, the dynamics of population rates can be treated analytically
(e. g., Gerstner, 2000; Fourcaud and Brunel, 2002) and therefore also provides access to the area of neural coding.
Rate coding may be realized in motor cortex (e. g., Evarts, 1967), but also in
other cortical areas, for example, in the motion-sensitive visual area MT (Britten
et al., 1992; Shadlen and Newsome, 1998). The latter argue that cortical neurons
are unlikely to transmit information in the temporal pattern of spike discharge,
based on observations that neural responses exhibit a large variability (see also
Softky and Koch, 1992, 1993). Oram et al. (1999) argue that exactly timed
spike triplets and quadruplets observed in monkey LGN and V1 do not carry any
information beyond that already contained in the firing rate.
Major objections against a rate code are based on the fact that it takes a long
time to reliably estimate a single number, the rate, from a neural response, and
that the estimate is obtained with many energy-consuming spikes (Gautrais and
Thorpe, 1998). The temporal constraint can be evaded by coding with a popula-
tion of neurons; this requires, however, a large number of neurons. For example,
to obtain an estimate as precise as 100 ± 10 Hz in 10 ms from neurons firing
according to a Poisson statistics, no less than 281 redundant and independent
neurons are needed (Gautrais and Thorpe, 1998).
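The scaling behind such numbers follows from a short calculation: pooling N independent Poisson neurons firing at rate r over a window T yields a rate estimate with standard deviation sqrt(r / (N T)), so N has to grow like z² r / (T δ²) for a tolerance of ±δ at a confidence level corresponding to z. The sketch below only reproduces this order of magnitude; the exact figure of 281 depends on the confidence criterion chosen by Gautrais and Thorpe (1998), which is not spelled out here, so the z value used below is an assumption.

    import numpy as np

    def neurons_needed(rate_hz, window_s, tolerance_hz, z):
        """Number of independent Poisson neurons required so that the pooled
        rate estimate has standard deviation tolerance_hz / z:
        std = sqrt(rate / (N * window))  =>  N = z**2 * rate / (window * tolerance**2)."""
        return int(np.ceil(z ** 2 * rate_hz / (window_s * tolerance_hz ** 2)))

    # roughly 90 % confidence (z ~ 1.64, an assumed criterion) that the estimate
    # lies within 100 +/- 10 Hz after 10 ms of observation
    print(neurons_needed(100.0, 0.010, 10.0, z=1.64))   # on the order of a few hundred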
Shadlen and Newsome (1998) consider networks of balanced excitation and
inhibition and suggest that neurons transmit information through rates in en-
sembles of 50–100 neurons, providing reliable rate estimates in 10–50 ms, i. e.,
in one interspike interval. Rapid state switching has also been found in other
studies of balanced networks (Tsodyks and Sejnowski, 1995; van Vreeswijk and
Sompolinsky, 1996). Likewise, in his theoretical account of the dynamics of a
homogeneous infinite population of spiking neurons, Gerstner (2000) finds an im-
mediate change in the population activity upon changing the input. These results
suggest that rate coding may be feasible also under time constraints.

2.3.2.3 Temporal Coding: Structure of Neural Spike Trains


Apart from the simple measure of the neural firing rate, higher order statistics in
the neural spike trains and the exact timing of spikes may be relevant for the en-
coding of stimuli. Like in the previous paragraph, two cases can be distinguished:
First, the structure in single spike trains may carry information whereas neurons
are uncorrelated with each other. Second, correlations may occur also between
neurons.

A prerequisite for the existence of a temporal code is the neurons' ability to pro-
duce spike discharges with a precise timing. Cortical spike activity may show
a large variability including spontaneous discharges; however, the cause of this
behavior is not completely clarified. Arieli et al. (1996); Tsodyks et al. (1999)
link evoked cortical responses and spontaneous activity to the underlying func-
tional architecture and the ongoing activity in the neighborhood of the neurons
under consideration. This leaves the possibility that the neural behavior which
appears noisy to an observer may reflect more deterministic dynamical processes
in the cortical tissue. Mainen and Sejnowski (1995) studied the reliability of spike
timing in single neurons upon current injection: whereas spike times appeared
variable for constant current input, the reliability increased dramatically upon
the injection of noise-like currents that resemble the large number of synaptic
inputs of a neuron in vivo. In this case, the standard deviation in spike timing
was less than 1 ms. A similar accuracy of spike timing was found in retinal gan-
glion cells of rabbits and salamanders (Berry et al., 1997). Thus, neurons seem
to be able to reliably produce action potentials. This is further corroborated
by the discovery of physiological processes whose dynamics depends on an exact
spike timing: spike-dependent Hebbian plasticity (e. g., Bi and Poo, 1998, 1999;
Egger et al., 1999; Bi and Poo, 2001) and dynamic synapses (e. g., Markram and
Tsodyks, 1996; Tsodyks and Markram, 1997; Tsodyks et al., 1998).
A prominent example of a system where the timing of signals plays a ma-
jor role is the auditory system of barn owls. Based on an idea put forward by
Jeffress (1948), the horizontal sound localization is accomplished by the evaluation of phase differences of sounds arriving at the two ears.
potentials run down axons that serve as delay lines to arrive at an array of postsy-
naptic coincidence detectors. Those cells that become active due to synchronous
input from both ears code for the position of the sound. This principle was later
validated in the barn owl (Konishi, 1986; Carr and Konishi, 1990) and seems
to be realized also in other sensory systems such as electrosensation in weakly
electric fish and echolocation in bats (Konishi, 1991; Carr, 1993; Carr and Fried-
man, 1999).10 Theoretical papers on delays in neural networks and their putative
adaptability were published by Tank and Hopfield (1987); Gerstner et al. (1996);
Eurich et al. (1999:L).
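A minimal sketch of the delay-line idea: an array of coincidence detectors, each compensating a different internal delay, responds maximally where its delay cancels the interaural time difference. The coincidence window, the jitter and all other parameters are invented for illustration and are not modeled on a particular study.

    import numpy as np

    def jeffress_decoder(spikes_left, spikes_right, delays, window=0.0002):
        """Count near-coincidences between left-ear spikes shifted by each
        internal delay and right-ear spikes; the delay with the most
        coincidences estimates the interaural time difference (ITD)."""
        counts = []
        for d in delays:
            diffs = np.abs((spikes_left + d)[:, None] - spikes_right[None, :])
            counts.append(np.sum(diffs < window))
        return delays[int(np.argmax(counts))]

    rng = np.random.default_rng(4)
    itd = 0.0003                                     # hypothetical ITD of 300 microseconds
    spikes_right = np.sort(rng.uniform(0.0, 0.5, 200))
    spikes_left = spikes_right - itd + rng.normal(0.0, 5e-5, 200)   # jittered, earlier copies
    delays = np.linspace(-0.0006, 0.0006, 25)        # internal delay lines
    print(jeffress_decoder(spikes_left, spikes_right, delays))      # close to 0.0003 s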
Spike timing has been shown to play a role also in other systems. Studies
revealed that the timing of individual spikes can represent the time structure of
rapidly varying stimuli (Bialek et al., 1991; Borst and Theunissen, 1999) but also
encodes nontemporal properties, such as the location of a stimulus applied to a
single whisker in rats (Panzeri et al., 2001; Petersen et al., 2001). The authors
find that the time of the single neurons' first poststimulus spikes is crucial for
encoding stimulus location.
Panzeri and Schultz (2001) recently described a unified approach to temporal
coding and rate coding by expanding the mutual information between spike count
and stimulus as a Taylor series in the experimental time window. The connection
between rate coding and temporal coding was also addressed by Ahissar et al.
(2000) and Mehta et al. (2002); the latter propose a mechanism based on oscilla-
tory inhibition to transform a rate code into a temporal code. Similarly, Hopfield
(1995) suggested a computational model for stimulus representation by action potential timing mediated by subthreshold membrane potential oscillations.

10 Jeffress' model, however, does not seem to apply to the mammalian auditory system (McAlpine and Grothe, 2003).
Reconstruction methods beyond a firing rate that is evaluated over a long time
interval have been suggested by Brown et al. (1998); Jakel (2001); Bringmann
(2002); Wiener and Richmond (2003); these are either based on firing rates in
very short time windows or even on the occurrence of single spikes.

The second type of temporal coding employs correlations between neurons; here,
we consider noise correlations11 with an accuracy of about 1 ms.
Several studies have found noise correlations in a variety of neural systems, for
example in monkey areas IT (Gawne and Richmond, 1993), MT (Zohary et al.,
1994; Cardoso de Oliveira et al., 1997), somatosensory cortex (Steinmetz et al.,
2000), motor cortex (Lee et al., 1998; Hatsopoulos et al., 1998; Maynard et al.,
1999), and parietal cortex (Lee et al., 1998), as well as in cat LGN (Dan et al.,
1998) and visual cortex (DeAngelis et al., 1999). In cultured cortical neurons,
Tateno and Jimbo (1999) find an enhancement in the reliability of correlated
spike timings upon external stimulation.
The functional role of these correlations is not completely understood so far.
Some investigations indicate that correlations depend on the attentional state of
the animal (Cardoso de Oliveira et al., 1997; Steinmetz et al., 2000). Dan et al.
(1998) use information theory to analyse the activity of LGN cells upon white
noise input; they found that considerably more information can be extracted
from two cells if temporal correlations between them are considered. Maynard
et al. (1999) classified the direction of arm movements from the concerted firing
of motor cortical neurons and also found a significant improvement as compared
to the case of independent encoders.
Another suggestion relating correlated neural activity to the encoding of ob-
jects was put forward by Singer (e. g., Gray et al., 1989; Engel et al., 1991; Singer,
1993; Singer and Gray, 1995; Roelfsema et al., 1997) and Eckhorn (e. g., Eckhorn
et al., 1988, 1989, 1990; Frien et al., 1994). According to this so-called temporal
correlation hypothesis, neurons encoding for the same object synchronize at zero
phase lag, whereas neurons encoding for different objects have a nonzero phase
lag. The temporal correlation hypothesis is a proposed solution of the binding
problem. An extended discussion of this issue is published as a series of review
articles in the journal Neuron, introduced by Roskies (1999).
For theoretical considerations on the correlated activity of neural populations,
see the discussion in Section 2.4.3.3.

11 See Section 2.2.3.4 for the distinction between signal correlations and noise correlations.

2.3.2.4 Rank-Order Coding

Another encoding scheme proposed by Thorpe and Gautrais (1997, 1998) is mo-
tivated by studies demonstrating a high speed of processing in the human visual
system (Thorpe et al., 1996). In these experiments, unknown photographs were
briefly (i. e., for 20 ms) presented to subjects who had to categorize the pictures
as to whether they contained an animal or not. During the trials, event-related potentials were recorded, demonstrating that differences in the potentials between
the animal vs. no-animal cases were visible a mere 150 ms after stimulus onset.
The authors suggest that a great deal of visual processing must have been com-
pleted before this time. Delorme et al. (1999, 2000) trained monkeys to perform
similar categorization tasks (animal vs. no-animal and food vs. no-food) based
on both color and black-and-white pictures; the same task was also done by hu-
man observers. The measured accuracy in monkeys, which have reaction times between 250 ms and 300 ms and are faster than humans, does not depend on color information, just as for the fastest humans. The accuracy impairment with
achromatic stimuli increased progressively in human subjects with longer reaction
times. The interpretation of these findings is that fast categorization of natural
stimuli, i. e., the first wave of processing in the visual system, is color blind.
The fast neural processing poses a problem for the rate coding scheme (Thorpe
and Gautrais, 1997, 1998; Gautrais and Thorpe, 1998; Thorpe et al., 2001); the
evaluation of a firing rate would either take too much time or require large neural
populations. For example, obtaining an estimate as precise as 100 ± 10 Hz within
10 ms needs 281 redundant and independent neurons, which seems a high cost
for the transmission of a single analog number. Instead, the authors suggest a code
based on the order in which spikes from different neurons arrive at a postsynaptic
layer. Such a code would be very efficient in terms of transmitted information
(Thorpe and Gautrais, 1998). Moreover, it is robust with respect to the exact
spike timing, as long as the order of signals is conserved. Van Rullen et al. (1998);
Van Rullen and Thorpe (2001) model face processing with a feedforward neural
network employing the rank order code; detection and localization of human faces
in natural images succeeds with only one spike per neuron. The order of spike
arrivals can be evaluated by neural populations. So far, however, it is not clear
how a neural system can self-organize to establish the code: the rank order is
global information whereas neural adaptation rules are local.
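
The robustness of a rank-order readout to spike-time jitter can be made concrete with a small numerical sketch. The geometric weighting of inputs by their arrival rank is only an illustrative choice inspired by the scheme of Thorpe and Gautrais, not a transcription of their model; all numbers and the synaptic weights are invented.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 8

    # Fixed synaptic weights of a hypothetical postsynaptic readout neuron.
    w_syn = rng.uniform(0.0, 1.0, size=n)

    def rank_order_activation(latencies, mod=0.5):
        """Weight each input by mod**rank of its first-spike arrival (earliest = rank 0)."""
        ranks = np.argsort(np.argsort(latencies))
        return float(np.sum(w_syn * mod ** ranks))

    # First-spike latencies (ms) of the n afferents for one stimulus presentation.
    latencies = rng.uniform(5.0, 30.0, size=n)

    # A monotone distortion of the time axis changes every spike time but not the order.
    distorted = 2.0 + 1.3 * latencies

    print(rank_order_activation(latencies))   # identical to ...
    print(rank_order_activation(distorted))   # ... because only the rank order matters
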

2.3.3 Further Encoding Schemes

The literature on neural coding has largely focused on codes based on the spike
activity of neurons. However, there are further processes in neural tissue that are
likely to contribute to signal processing.

Subthreshold Neural Activity. Spike production in nerve cells is the result of
membrane potential dynamics. In a single nerve cell, many postsynaptic po-
tentials occurring at different times and positions in the dendritic tree and on
the soma are integrated to eventually yield superthreshold signals at the axon
hillock, resulting in action potentials. The subthreshold dynamics can therefore
be regarded as a computational process whose outcome is the occurrence of action
potentials at certain times. Signal processing in dendritic trees has been consid-
ered in Mel (1992); Fromherz and Gaede (1993); Zador et al. (1995); Agmon-Snir
and Segev (1996); Single and Borst (1998). For example, Mel (1992) suggests that
the spatial ordering of afferent synaptic connections onto the dendritic arbor is a
possible biological strategy for storing pattern information during learning; the
author also shows that pattern discrimination is possible based on the properties
of NMDA channels. Fromherz and Gaede (1993) study the realization of the
Boolean XOR function by arborized neurons. Single and Borst (1998) ascribe
the computation of image velocity to dendritic integration in blowfly neurons. A
comparison between signal encoding by analog membrane potentials and series
of action potentials in so-called HS neurons of the blowfly shows that more in-
formation about the stimulus is contained in the former signal (Haag and Borst,
1998). For reviews on dendritic signal integration and information processing,
see Mel (1994); Yuste and Tank (1996); Koch (1997).
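
As a caricature of the integration process described above, the following textbook-style leaky integrator sums incoming postsynaptic potentials and emits a spike whenever the membrane potential crosses threshold. Parameters and inputs are invented for illustration and are not taken from any of the cited studies.

    import numpy as np

    # Leaky integrate-and-fire caricature of subthreshold integration.
    dt, tau_m, v_rest, v_thresh, v_reset = 0.1, 10.0, -70.0, -54.0, -70.0  # ms, mV
    t = np.arange(0.0, 200.0, dt)

    rng = np.random.default_rng(1)
    # Random excitatory input events (times in ms) and a common PSP amplitude (mV).
    input_spikes = np.sort(rng.uniform(0.0, 200.0, size=120))
    psp_amplitude = 3.0

    v = np.full_like(t, v_rest)
    output_spikes = []
    for i in range(1, len(t)):
        leak = -(v[i - 1] - v_rest) / tau_m * dt
        # add the PSPs arriving in this time step
        arriving = np.sum((input_spikes >= t[i - 1]) & (input_spikes < t[i]))
        v[i] = v[i - 1] + leak + arriving * psp_amplitude
        if v[i] >= v_thresh:              # threshold crossing at the axon hillock
            output_spikes.append(t[i])
            v[i] = v_reset

    print(f"{len(input_spikes)} input events -> {len(output_spikes)} output spikes")
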
Subthreshold processing need not be confined to single cells. For example,
there exist types of neurons which do not produce any spikes, for example hori-
zontal and bipolar cells in the retina (Kandel et al., 1991). Graded potentials in
spiking as well as in non-spiking cells can be directly passed on to neighboring
cells via electrical synapses (Galarreta and Hestrin, 2001) or via chemical graded-
potential synapses (Juusola et al., 1996). de Ruyter van Steveninck and Laughlin
(1996) estimated the rate of information transfer at graded-potential synapses in
the blowfly to be 1650 bits/s, five times the highest rates measured in spiking
neurons. From an electrical engineering point of view, Sarpeshkar (1998) argues
that a mixture of analog and digital computation (i. e., signal processing with
graded potentials and action potentials, respectively) yields the highest efficiency
in terms of energy consumption.

Neuromodulation. There are modes of communication that neurons use to
transmit information besides the fast processes of neurotransmission. Other types
of communication can be classified as neuromodulatory, where instead of conveying
excitation or inhibition, the signal from one neuron changes the properties
of other neurons or synapses (e. g., Zimmermann, 1993; Katz, 1999). Such processes
do not occur on fast (millisecond) time scales but typically on longer time
scales. The physiological properties mentioned in this section (graded responses
and neuromodulation) have not yet been systematically explored with respect
to their information processing functions. They might play a larger role in future
accounts of signal processing in the nervous system.

2.3.4 Which Code is Realized in the Nervous System?


In view of the various neural codes which have been proposed, the question arises
which of these is actually realized in the nervous system. The answer to this
question is fourfold.

First, one has to be very careful in interpreting successfully identified positive
correlations in the schemes of Figure 1.1 or Figure 1.3. In practice, the physio-
logical or anatomical property which correlates with either item on the left hand
side of the schemes is called a code. However, the notion of a code usually in-
cludes the simple fact that an encoded message (for example, a visual stimulus
encoded in the firing rate of a neural population) is also decoded. The question
is: where and what decodes the stimulus? The act of decoding will by no means
take place in the naive sense that electrical or chemical brain activity is trans-
lated back to the original quality of the stimuli that elicited it. One may at
best talk about decoding in the sense that information is transmitted to other
brain areas and eventually translated into motor activity or into chemical or mor-
phological structures that serve as a memory. To date, it is not at all clear if a
stimulus encoded in, say, the early visual system is decoded in the latter sense by
later stages of the visual system. This could perhaps be tested by estimating the
signal content contained in the response of a neural population (for example,
the localization accuracy of a visual stimulus) and comparing it to the signal
content in the postsynaptic neural population. One may also study the mutual
information between subsequent neural structures. Such investigations, however,
are very difficult, last but not least due to the strong feedback in neural tissue
which yields no unique direction of information flow.
As a matter of fact, the notion of neural coding is used with two different
meanings:
In a broader sense, the notion of neural coding is used whenever a neural
correlate of an entity on the left hand side in Figure 1.1 or Figure 1.3 has
been identified. In particular, the framework of reconstructing stimuli from
neural responses (Section 2.2.3) includes the idea that it is not necessarily
some instance in the nervous system which performs the decoding: in the
first place, it is the researcher who decodes the neural response. This is the
view that is usually adopted here.
In a narrower sense, a neural system encodes a stimulus if other instances
in the nervous system somehow decode and use the encoded message. As
suggested above, this is very hard to show. The mere existence of a corre-
lation between a stimulus and a neural response is not sufficient to prove
the existence of a code in this narrower sense: the correlation may be a
functionally insignificant by-product of signal processing in the system under
study (Andreas Kreiter, personal communication). Cf. Chapter 4 for a
brief discussion of biologically plausible decoding with neural networks.

The second answer to the above question goes in the direction of scientific method-
ology. A meaningful theoretical model employs as few parameters as possible in
order to identify those ingredients which are necessary for the phenomena to be
explained. In other words, in good scientific practice models are as simple as
possible; they apply to a given set of observations while neglecting other aspects
of the system under study. As a consequence of this methodological approach,
the model cannot be said to represent (or even to be identical with) the system
in every respect. The model rather yields a minimalistic description, while the
true system may well be more complex, both with respect to the phenomena under
consideration and with respect to further phenomena not captured by the model. As an
example, consider the binary code described above. We used this code to account
for the observation that the receptive fields of neurons in the salamander optic
tectum are large (average diameter > 40°) and yet allow for a high accuracy in
localizing prey (Eurich et al., 1995:L; Wiggers et al., 1995a,b; Eurich and Schwe-
gler, 1997:L; Eurich et al., 1997:L; Eurich and Wiggers, 1997). What follows from
these studies is the fact that a binary neural response is sufficient to account for
the observed electrophysiology and behavior. It does not state that the neurons
are actually binary: first, the biological neurons are of course more complex than
the model neurons; second, the all-or-none response might not be evaluated by
subsequent neural structures in the salamander brain (see the first answer above);
third, high sensory resolution may well be achieved also with other, more complex
neural codes.

The third aspect to be considered in view of the large variety of possible neural
codes is that different neural systems may use different codes. The experimental
studies which have inspired the codes were conducted in different species, for
example blowflies (Bialek et al., 1991), salamanders (Wiggers et al., 1995b; Meis-
ter et al., 1995), turtles (Wilke et al., 2001:L), electric fish (Heiligenberg, 1987),
rats (Zhang et al., 1998a; Freiwald et al., 2002:L), cats (Sengpiel et al., 1995;
Oram et al., 1998), macaque monkeys (Georgopoulos et al., 1986; Scott et al.,
2001; Wessberg et al., 2000), and humans (Thorpe et al., 1996; Delorme et al.,
1999). Likewise, data were obtained from different systems including the retina
(Meister et al., 1995; Wilke et al., 2001:L), midbrain visual and auditory areas
(Wiggers et al., 1995b; Knudsen and Konishi, 1978), hippocampus (Zhang et al.,
1998a), visual cortex (Sengpiel et al., 1995), motor cortex (Georgopoulos et al.,
1986; Scott et al., 2001; Wessberg et al., 2000), and the fly visual system (Bialek
et al., 1991; Rieke et al., 1997) which is not homologous to the vertebrate visual
system. One cannot expect to find the same principles of signal processing in
such diverse systems. For example, in the blowfly visual system, a single neuron,
the so-called H1 neuron, encodes for rigid horizontal movements over the entire
visual field (see Rieke et al., 1997, and references therein). On the other hand,
neural codes for sensory stimuli or motor actions in vertebrates are likely to be
based on large populations of neurons in most cases (e. g., Georgopoulos et al.,
1986; Eurich et al., 1995:L; Dan et al., 1998; Maynard et al., 1999; see, however,
Britten et al., 1992 for a contrary argumentation). When deciding about a neural
code, one must therefore look carefully at the specific anatomical and physiolog-
ical properties of the system under study.

Finally, neural behavior is so complex that it is not unlikely that a neural sys-
tem uses various codes simultaneously . For example, the binary response (firing
vs. non-firing or large activity vs. background activity) may be used to
infer the position of an object in the visual field whereas the firing rate gives
information about its properties (color, spatial frequencies, etc.). The precise
timing of the spikes may in addition distinguish the object from other, simulta-
neously presented, objects. Such an encoding scheme seems to be very efficient
because it will be fast (several aspects of the scene can be evaluated simultane-
ously) and energy-efficient (various codes are hidden in a single response). I want
to call this type of encoding palimpsest coding.
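
The idea can be made concrete with a toy readout of a single spike train at three levels; the readouts and the interpretations attached to them are purely hypothetical.

    import numpy as np

    rng = np.random.default_rng(2)
    window = 0.3                                   # observation window (s)

    # One spike train of a single neuron within the window.
    spike_times = np.sort(rng.uniform(0.0, window, size=rng.poisson(12)))

    binary_response = spike_times.size > 0         # e.g., "is an object in the receptive field?"
    firing_rate = spike_times.size / window        # e.g., object properties via a rate code
    first_spike = spike_times[0] if binary_response else None  # e.g., object identity via timing

    print(binary_response, round(firing_rate, 1), first_spike)
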

2.4 The Encoding Accuracy of a Single Object


2.4.1 The Notion of a Single Object
In this section, we consider theoretical studies on the resolution obtained by a
population of neurons encoding for a single object. Single objects have been
intensely studied experimentally as well as theoretically in the last decades. The
relevance of this topic manifests itself in the following reasons:
First, the investigation of neural responses upon presentation of a single ob-
ject corresponds to the analytical approach in the sciences. Properties of an
unknown neural system are usually studied with simple probes before moving on
to more complex situations and adopting an integrative point of view.12 Such
simple probes are either single objects or white-noise stimuli (cf. Dayan and
Abbott, 2001 for the latter). Second, the notion of a single object as it is intro-
duced below comprises a variety of different sensory stimuli and motor actions
including single features, objects composed of multiple features, spatial position,
and motor commands for movement direction. That is, the concept of the sin-
gle object subsumes a large number of experiments. Third, a single object can
12
In neuroscience, however, one can also find the opposite approach. For example, the statis-
tics of natural images (Ruderman, 1994) can be used to account for filter properties of simple
cells in primary visual cortex (e. g., Olshausen and Field, 1996a,b).
be easily described mathematically as a point in a D-dimensional space, allowing
for computations on the encoding properties of neural populations. Fourth, the
single-object case may be regarded as a testbed for theoretical studies on encod-
ing strategies under the various constraints listed in 2.2.1. Although in the real
world the single object is an exception, it nevertheless yields valuable information
on mechanisms of neural signal processing which will be relevant also under more
realistic conditions. Fifth, the mathematical formalism for single objects can be
extended to include several objects (Eurich, 2003:L); cf. Section 2.4.4.

For the purpose of a theoretical description of neural encoding properties, we
shall adopt the following very general definition of the notion of an object. First,
it can be understood in the sense of a single feature. Prominent empirical ex-
amples are the orientation of bars or full-field gratings encoded in the cat visual
system (Hubel and Wiesel, 1962; Ferster et al., 1999; Freiwald et al., 2002:L), the
velocity of full-field stimuli encoded by the fly H1 neuron (Bialek et al., 1991;
Borst and Theunissen, 1999), the fundamental frequency of a sound (Miller and
Sachs, 1984; Schreiner and Langner, 1988; Palmer and Winter, 1992), or the
azimuth of a sound source represented in the auditory system of barn owls (Sul-
livan and Konishi, 1984; Carr and Konishi, 1990). Second, the notion of an
object refers to an entity composed of several features. For example, Grüsser
and Grüsser-Cornehls (1976) and Roth (1982) study the responses of cells in the
amphibian visual system upon presentation of artificial stimuli of different shape
and size. Gawne (2000) investigates the simultaneous coding of orientation and
contrast by complex cells in monkey primary visual cortex. The direction and
the position of a stimulus also fall into this category: they are composed of two
and three coordinates, respectively. For example, Eurich and Schwegler (1997:L)
and Eurich et al. (1997:L) consider direction localization and distance estimation,
respectively, in tongue-projecting salamanders. On the motor side, neural corre-
lates of movement direction (given by two coordinates) can also be interpreted as
the encoding of an object; cf. for example the literature on vector reconstruction
(Georgopoulos et al., 1986; Salinas and Abbott, 1994; Scott et al., 2001).
The reason to subsume these different stimuli and motor actions under the
term of an object is that it allows for a common theoretical description. Neural
coding is studied in the so-called stimulus space

X = (S^1)^m × R^n ,   (2.15)

each dimension of which describes a single feature parametrized by a scalar variable
x_i (i = 1, ..., m + n). X is a Cartesian product of an m-torus and an
n-dimensional hyperplane; the former accounts for variables taking values on a
circle (such as the orientation of a stimulus), and the latter describes variables
on the real axis (such as a frequency). In this space, an object is characterized
by a vector of D = m + n variables, x = (x_1, ..., x_m, x_{m+1}, ..., x_{m+n}).
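
A stimulus x ∈ X can be represented directly as an array whose first m entries are circular and whose remaining n entries are linear. The simple difference function below, with its wrap-around for the circular coordinates, is one possible choice for working in such a space and is not part of the formalism above.

    import numpy as np

    # A stimulus as a point in X = (S^1)^m x R^n: the first m coordinates are
    # circular (e.g., orientation), the remaining n are linear (e.g., frequency).
    def stimulus_difference(x, y, m):
        x, y = np.asarray(x, float), np.asarray(y, float)
        d = x - y
        d[:m] = (d[:m] + np.pi) % (2.0 * np.pi) - np.pi   # wrap circular coordinates
        return d

    # orientation (rad, circular) and temporal frequency (Hz, linear):
    a = [0.1, 3.0]
    b = [2.0 * np.pi - 0.1, 4.0]
    print(stimulus_difference(a, b, m=1))   # -> [0.2, -1.0]; the orientation wraps around
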
The above-mentioned single-feature objects are described by 1-dimensional
stimulus spaces, whereas compositions of features are formalized by elements of
higher-dimensional stimulus spaces. For example, circular visual stimuli (disks)
moving at different velocities can be described by a 2-dimensional stimulus space
where the first axis gives the radius of the stimulus and the second axis its velocity.
Some stimulus properties like position or color require more than one dimension. 13

While an overview of the literature on single-object coding has been given under
various viewpoints in Sections 2.2 and 2.3, we will now focus on the issue of the
encoding accuracy of a neural population. Section 2.4.2 addresses this problem by
using a model of binary coding, whereas Section 2.4.3 investigates properties of
firing rate coding using the statistical measure of Fisher information. In Section
2.4.4, the latter formalism is extended to include multiple objects.

2.4.2 Binary Coding


Motivation. Human observers and many animal species show a high accuracy
in object localization. Tongue-projecting salamanders (Bolitoglossini), for exam-
ple, are able to localize mites of 0.5 mm size at distances of 15–20 cm and snap
at them with high success rates (Roth, 1987; Deban et al., 1997). Barn owls (Tyto
alba) determine the direction of the sound of a prey with an accuracy of a few
degrees (Konishi, 1983, 1993). Human subjects have a small two-point discrimi-
nation threshold for tactile stimulation on the finger which even decreases after
applying a paired peripheral tactile stimulation according to a Hebbian scheme
(Dinse et al., 1994, 1995, 1997).
Neural processing pathways are very different in these sensory systems. For
example, in the visual and somatosensory systems, the receptor surfaces (i. e., the
retina and the skin, respectively) are two-dimensional, allowing for the trans-
mission of a two-dimensional feature map to subcortical and cortical centers (Kandel
et al., 1991). The avian auditory system is organized in a different way. Azimuth
and elevation of sound sources are initially processed separately and converge
in the midbrain (Konishi, 1986; Carr and Boudreau, 1991; Konishi, 1991, 1993).
Despite these differences, in all of these cases central neurons involved in ob-
ject localization have a common physiological property: their receptive fields are
large compared to the sensory resolution observed in behavioral experiments. By
a receptive field of a neuron we mean the subset of the sensory space in which
an appropriate stimulus elicits a reaction in the corresponding neuron. Tectal
neurons in the tongue-projecting salamander Hydromantes italicus have a mean
receptive field diameter of 41°, with a minimum of 10° and a maximum near 360°
(Wiggers et al., 1995b, Figure 2.8a). The receptive fields of auditory neurons
13
Cf. also Zemel and Hinton (1995) for concepts of abstract spaces for the description of
neural activities.
in the Nucleus mesencephalicus lateralis dorsalis of Tyto alba range from 23° to
unrestricted in elevation and have a mean diameter of 25° in azimuth (Knudsen
and Konishi, 1978; Konishi, 1993, Figure 2.8b). In the case of tactile stimulation,
electrophysiological recordings have been obtained in rat primary somatosensory
cortex. Neurons encoding for tactile stimulation of the hindpaw react to a large
percentage of the corresponding skin surface. After applying a paired periph-
eral stimulation protocol similar to that used for human subjects, these receptive
fields show a significant increase in size, resulting in a higher receptive field over-
lap (Dinse et al., 1995). These findings suggest that large receptive field neurons
may well be involved in object localization, a task which has often been ascribed
to small receptive field neurons exclusively (e. g., Grüsser-Cornehls, 1984; Gail-
lard, 1985).14

Direct computation of the local resolution. In the context of a project on
the salamander visual system, Eurich and Schwegler (1997:L) employ a binary
coding paradigm to compute the encoding accuracy obtained by a population
of neurons with overlapping receptive fields. The choice of the binary code is
motivated by the fact that tectal neurons in salamanders have almost no spon-
taneous activity and start firing abruptly as a stimulus enters the receptive field
(W. Wiggers, personal communication). Besides, a binary neural response can
be regarded as the simplest possible mechanism for the encoding of an object.
Figure 2.9 shows the basic setup. For each x ∈ X, a response is obtained
from those neurons in a given population whose receptive fields overlap at x. The
system can distinguish between two points if they are separated by a receptive
field boundary. The resolution of the system corresponds to the distance between
receptive field boundaries.
More formally, computations are performed for a sensory space X = S², i. e.,
for a sphere, to model the direction localization of an object. The positions of
the receptive fields of a population of N neurons are described by a receptive field
density ρ(x) with the properties

ρ(x) ≥ 0 ,   (2.16)

∫_X ρ(x) dx = N ,   (2.17)

14
The phenomenon that broadly tuned receptors allow for a high resolution is a well-known
phenomenon in vision called hyperacuity : human observers can determine the position of an
object with a precision that corresponds to only a fraction of the diameter of a photoreceptor
(e. g., Wachtler et al., 1996). Another example is color vision: we can distinguish roughly 2
million colors with only three types of broadly tuned color-sensitive photoreceptors; publications
on this issue date back to von Helmholtz (1867).

Figure 2.8: Examples for large receptive fields in the visual and the auditory
system. (a) Distribution w(ϱ) of receptive field sizes of neurons of class T1 in the
optic tectum of the tongue-projecting salamander Hydromantes italicus; ϱ is the
half angular field diameter (adapted from Wiggers et al., 1995b). (b) Sketch of
receptive fields in the midbrain MLD nucleus of the barn owl, Tyto alba (adapted
from Knudsen and Konishi, 1978).

Figure 2.9: A 2-dimensional sensory
space, X = R², is covered with three cir-
cular receptive fields. A stimulus mov-
ing along a path yields different re-
sponses in the corresponding neural pop-
ulation depending on the actual set of
overlapping receptive fields. The subsets
which can be resolved (shown in different
grey tones) are smaller than the recep-
tive fields (adapted from Eurich, 2003).
and, for arbitrary A ⊆ X,

∫_A ρ(x) dx ≈ #{receptive fields in A} ,   (2.18)

where the approximation results from the fact that the number of real receptive
fields is an integer while the integral on the left side takes continuous values. The
resolution A_e(x_0) at some point x_0 ∈ X is defined to be

A_e(x_0) = 1 / L_e(x_0) ,   (2.19)

where L_e(x_0) is the density of receptive field boundaries at x_0. The subscript e
refers to a particular direction (tangential to the path) for which the resolution
has to be determined.
The task is to compute L_e(x_0) for a given density of receptive fields, ρ(x).
This is done by integrating ρ(x) over the whole sphere X. Two things have to
be considered:
For each x ∈ X, the density of receptive field centers ρ(x) has to be taken
into account for the density of boundaries at x_0.

In addition, the receptive fields centered at an arbitrary position x have a
size distribution w(ϱ|x) satisfying the condition

∫_0^π w(ϱ|x) dϱ = 1   (2.20)

for arbitrary x ∈ S². For a point x separated from x_0 by an angle ϱ,
only those receptive fields with an angular diameter of 2ϱ contribute with
a boundary at x_0. This gives another factor in the integral.
The final result for the density of receptive field boundaries on the sphere is

L_e(x_0) = ∫_0^π ∫_0^{2π} R sin ϱ  ρ(x(φ, ϱ)) w(ϱ|x(φ, ϱ)) |cos φ|  dφ dϱ .   (2.21)

The angle φ parametrizes a circle of angular diameter 2ϱ centered at x_0; cf.
Figure 5 in Eurich and Schwegler (1997:L).
Equation (2.21) supports the empirical observation by demonstrating that
large receptive fields yield a high sensory resolution. As an example, consider a
great circle on a sphere. An arbitrary point x0 is represented by the firing
of N = 100 neurons whose receptive fields have an identical angular diameter of
2ϱ; let the receptive fields be uniformly distributed on the sphere. Figure 2.10
shows the angular resolution (i. e., the resolution in direction localization) as
a function of the receptive field half angular diameter ϱ. The resolution has an
optimal value for ϱ = 90°, i. e., for receptive fields covering half of the sphere! The
resolution in this case is 1.8°. A simple explanation of this observation is that
these large receptive fields have the largest possible receptive field boundaries (the
boundaries are great circles), yielding a maximal density of boundaries, which in
turn corresponds to an optimal resolution.
Figure 2.10: Angular resolution a(ϱ) as a function of the receptive field half
angular diameter ϱ for a population of N = 100 neurons. The receptive field
centers are uniformly distributed on the sphere (adapted from Eurich and Schwegler, 1997:L).
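
The ϱ = 90° optimum can be checked with a crude Monte Carlo count of boundary crossings instead of the integral (2.21): receptive field centers are placed at random on the sphere, and the resolution along a great circle is taken as 360° divided by the number of times a receptive field boundary crosses that circle. This is a simplified numerical sketch under these assumptions, not the analytical computation of the papers.

    import numpy as np

    def angular_resolution(n_neurons, rho_deg, n_trials=200, rng=None):
        """Estimate the resolution along a great circle for n_neurons whose circular
        receptive fields (half angular diameter rho) are centred uniformly at random
        on the sphere. A cap boundary either misses the great circle or crosses it
        twice; the resolution is the circle length divided by the number of crossings."""
        rng = rng or np.random.default_rng(0)
        rho = np.radians(rho_deg)
        res = []
        for _ in range(n_trials):
            c = rng.standard_normal((n_neurons, 3))
            c /= np.linalg.norm(c, axis=1, keepdims=True)      # uniform RF centres
            # take the equator (z = 0) as the great circle; the boundary of the cap
            # around centre c crosses it iff |cos(rho)| <= sqrt(1 - c_z^2)
            crossings = 2 * np.sum(np.abs(np.cos(rho)) <= np.sqrt(1.0 - c[:, 2] ** 2))
            if crossings > 0:
                res.append(360.0 / crossings)
        return np.mean(res)

    for rho in (10.0, 45.0, 90.0, 135.0, 170.0):
        print(f"rho = {rho:5.1f} deg  ->  resolution ~ {angular_resolution(100, rho):.2f} deg")

For N = 100 and ϱ = 90° this yields approximately 1.8°, and the resolution deteriorates symmetrically towards very small and very large receptive fields, in line with Figure 2.10.
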

Application to the salamander visual system. Eurich et al. (1997:L) apply
the described formalism to physiological data from the salamander visual system
to obtain the accuracy of direction localization. The distribution of receptive
field sizes shown in Figure 2.8a is complemented by data on the distribution
of receptive field centers (Figure 2.11). From these data, a continuous density

Figure 2.11: Receptive field centers obtained in the electrophysiological experiments
described in Wiggers et al. (1995b). For the theoretical analysis, mirrored
data were also used to compute a density of receptive field centers (adapted from
Eurich et al., 1997:L).

ρ(x) of receptive field centers is computed using a kernel-based method (Bishop,
1996). In addition, a location-dependent distribution of receptive field sizes,
w(ϱ|x), can be estimated. This distribution accounts for the fact that the center
of the visual field contains smaller receptive fields, whereas large receptive fields
(with a diameter of up to 180°) occur mainly in the lateral visual field. The integral (2.21) is
then evaluated numerically.

Figure 2.12: Resolution of direction localization along the horizontal visual
direction as obtained from the electrophysiological data using the binary coding
scheme. For details, see text (adapted from Eurich et al., 1997:L).

Results are shown in Figure 2.12, which plots the angular resolution for the
horizontal direction e, multiplied by the number of neurons N, as a function of
the horizontal angle; 0° corresponds to the center of the visual field. Curve 1
considers the distribution of receptive field positions as shown in Figure 2.11
and assumes identical receptive fields with a diameter corresponding to the mean
of the distribution in Figure 2.8a for all x ∈ S². As expected, the resolution
reaches its optimum in the middle of the visual field. The corresponding value is
385°. That is, with N neurons an angular resolution of about 385°/N is achieved,
because the density of receptive field boundaries is proportional to N. In addition,
curve 2 employs the estimate of w(ϱ|x) as described above and therefore the
particularly large receptive fields in the lateral visual field. The calculations reveal the following result: the existence
of large receptive fields in the periphery of the visual field leads to a lateral
increase in resolution compared to the situation of curve 1, where all receptive
fields are assumed to have the same size. These findings suggest that the large
receptive field neurons which are found in urodeles as well as in anurans (Grüsser-
Cornehls and Himstedt, 1976; Roth and Himstedt, 1978; Grüsser-Cornehls, 1984)
contribute to the localization of small objects such as prey and are no mere
predator detectors signaling the emergence of large objects somewhere in the
visual field (e. g., Ewert, 1984).
Conversely, the number of neurons necessary to yield the angular resolution
observed in behavioral experiments can now be estimated. We assume that the
salamander can localize a 0.05 cm sized prey at the distance of the maximal
protraction length of the tongue, which is 5 cm in Hydromantes italicus (Roth,
1976; Wiggers, 1991). This corresponds to an angular resolution of 0.57°. Curve 2
in Figure 2.12 yields an angular resolution per neuron of about 450° at the center
of the visual field. Thus, the necessary number of neurons is N ≈ 450/0.57 ≈ 790. This is in good
agreement with the empirically estimated number of 2,000 descending neurons in
both brain hemispheres (U. Dicke, personal communication).

Neural network modeling of the salamander visual system. In addition
to the direct computation of the resolution obtained by a population of neurons,
Eurich et al. (1995:L); Wiggers et al. (1995a); Eurich et al. (1997:L); Eurich and
Wiggers (1997) developed neural network models for the visually guided prey
capture in salamanders. In the so-called Simulander I model, the direction lo-
calization and the corresponding orienting movement are modeled, whereas the
Simulander II model extends these investigations to include depth perception and
the projection of the tongue. The scientific goal is twofold: First, the large, bi-
nary receptive field neurons were implemented in a neural network environment
to test the results on the visual resolution described above. Second, the informa-
tion on the position (i. e., direction and distance) of an object is hidden in the
activities of the neural population. From a mathematical point of view, this type
of encoding is not as straightforward as, for example, an encoding in a triple of
spatial coordinates. The network models should demonstrate that a decoding
of the object position is possible in the form of simple motor commands.
The Simulander I network is shown in Figure 2.13. In brief, it is composed
of 300 neurons. The input layer consists of 100 neurons and is situated in the
optic tectum; the model neurons are binary units with receptive field sizes and
receptive field centers according to the empirical data shown in Figs. 2.8a and
2.11, respectively. The output of this layer is conveyed to 100 inhibitory interneu-
rons and 100 motor neurons modeled as sigmoidal units; these neurons represent
populations in the salamander brainstem. In addition, the interneurons inhibit
the motor neurons. Each motor neuron sends its output to one of four neck
muscles which contracts according to a simple population scheme, resulting in a
movement of the salamander head. The model dynamics is as follows: a small
object (resembling a prey with a diameter of 6 mm) moves irregularly in the
salamander visual field. At each time step, a certain population of tectal neurons
becomes active and in turn activates the brainstem neurons. This leads to a
muscle contraction and to a head movement of the salamander.
The network is sparsely connected to reduce the number of free parameters.
Synaptic weights and muscle strengths are adapted with an evolution strategy to
minimize the mean square angle between the salamander head direction and the
direction of the prey.
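
The optimization itself can be illustrated with a generic (1+1) evolution strategy with Gaussian mutations. The cost function below is a synthetic stand-in (a linear read-out of random activity patterns), not the Simulander simulation, and the actual implementation used in the papers may differ in every detail.

    import numpy as np

    def one_plus_one_es(cost, parent, sigma=0.1, n_steps=3000, rng=None):
        """Generic (1+1) evolution strategy with Gaussian mutations: keep the
        offspring whenever it does not increase the cost. Only a minimal
        illustration of how synaptic weights and muscle parameters could be
        adapted to minimize a behavioural cost; not the original optimizer."""
        rng = rng or np.random.default_rng(0)
        best = cost(parent)
        for _ in range(n_steps):
            child = parent + sigma * rng.standard_normal(parent.size)
            c = cost(child)
            if c <= best:
                parent, best = child, c
        return parent, best

    # Synthetic stand-in for the behavioural cost: squared error of a linear
    # read-out of random "tectal" activity patterns (purely invented data).
    rng = np.random.default_rng(1)
    activity = rng.random((500, 100))                 # 500 stimuli, 100 units
    target = activity @ rng.standard_normal(100)      # hypothetical desired output

    def cost(w):
        return float(np.mean((activity @ w - target) ** 2))

    w0 = np.zeros(100)
    w_opt, err = one_plus_one_es(cost, w0)
    print(f"cost before: {cost(w0):.2f}   after (1+1)-ES: {err:.2f}")
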
As a result, the salamander is capable of following the prey with its head. The
head axis intersects the prey in approx. 86–92% of the simulation time, depending
on the actual result of the parameter optimization process. This demonstrates
the feasibility of the binary encoding scheme for direction localization and also
shows that a neural network can easily translate the information encoded in the
neural activity into motor commands. In information-theoretic language, the
[Figure 2.13: network diagram. Visual field → optic tectum (2 × 50 tectum neurons) → Medulla oblongata/Medulla spinalis (4 × 25 interneurons and 4 × 25 motor neurons, connected 1:1) → population-coded activation of the neck muscles M1–M4 (M. intertransversi capitis superiores and M. recti cervicis); excitatory and inhibitory connections are distinguished.]

Figure 2.13: The Simulander I network for the orienting movement of salamanders
(from Eurich et al., 1995:L).

decoding takes place only at the level of the muscles while the internal signal
processing is exclusively based on the activities of unspecific neurons.
Artificial lesion experiments in the network demonstrate the importance of
large receptive field neurons as compared to neurons with smaller receptive fields.
In addition, the model correctly reproduces the behavior of monocular salaman-
ders (see Eurich et al., 1995:L for details).

The Simulander II model (Eurich et al., 1997:L; Eurich and Wiggers, 1997) in-
vestigates depth perception and the activation of the projectile tongue to catch
prey at the correct distance.
Depth perception in urodeles and anurans is mainly due to the evaluation of
binocular disparity, whereas mechanisms such as accommodation, vergence, and
motion parallax are either absent or play only a minor role (e. g., Ingle, 1976;
Collett, 1977; Roth, 1987). In tongue-projecting salamanders, binocular neurons
in the rostral optic tectum receive direct input from both retinae; part of them
project to the brainstem and spinal cord and activate motor networks responsible
for the control of the projectile tongue.
For simplicity, a binocular receptive field is defined as the overlap of the two
monocular receptive fields of a binocular neuron. A suitable object in this re-
gion of the visual field elicits a strong reaction in the neuron. From the size
distribution of monocular fields (Figure 2.8) it becomes clear that binocular re-
ceptive fields also tend to be large. The question arises whether an exact distance
determination can be achieved at all with these physiological properties.
Figure 2.14 shows the two-dimensional projections of three typical binocular
receptive fields; they occupy a high percentage of the binocular visual field. A
discrimination of small regions of space is nevertheless possible with an ensemble
coding. This becomes clear from the following qualitative example. Consider
the small rhombus V in Figure 2.14. An object in the visual field is positioned
within V if and only if the neuron which corresponds to receptive field 1 fires,
but the neurons which correspond to receptive fields 2 and 3 do not fire. This
suggests that a non-firing neuron conveys as much information as a firing one:
the symbols 0 and 1 are of equal relevance in information theory.

Figure 2.14: An example of the discrimination of a small region of space, V,
with large binocular receptive fields (dark grey). V is characterized by the fact
that it lies within receptive field 1 but not within receptive fields 2 and 3. The
light grey shading indicates the binocular visual field. (From Eurich et al., 1997:L.)
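
The argument can be mimicked numerically: with binary responses, the region consistent with the population activity is the intersection of all receptive fields of firing neurons minus the receptive fields of all silent neurons. The grid, field sizes and positions below are arbitrary.

    import numpy as np

    # Crude boolean receptive fields on a coarse grid (arbitrary sizes/positions).
    grid = np.indices((60, 60))                       # 60 x 60 spatial bins
    def disk(cx, cy, r):
        return (grid[0] - cx) ** 2 + (grid[1] - cy) ** 2 <= r ** 2

    fields = [disk(25, 25, 22), disk(20, 38, 22), disk(40, 30, 22)]   # three large RFs

    obj = (10, 12)                                    # true object position (bin indices)
    response = [f[obj] for f in fields]               # binary population response

    consistent = np.ones_like(fields[0], dtype=bool)
    for f, fired in zip(fields, response):
        consistent &= f if fired else ~f              # intersect fired RFs, exclude silent ones

    print("response pattern:", response)
    print("object confined to", consistent.sum(), "of", consistent.size, "bins")
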

The Simulander II network is similar to the Simulander I network. It consists
of 144 binary tectum neurons with large binocular receptive fields, 12 inhibitory
sigmoidal interneurons, and 12 sigmoidal motor neurons. The interneurons in-
hibit the motor neurons. The motor neurons innervate two muscles, the Sub-
arcualis rectus 1 and the Rectus cervicis profundus, for the protraction and the
retraction of the tongue, respectively. After training the network with an evolu-
tion strategy, the network successfully activates the tongue. Apart from a small
region close to the maximal protraction distance, prey is caught successfully in
80–100% of the presentations. If objects are far away, the tongue projection is
inhibited. Only for prey slightly out of reach of the tongue does a larger percentage
of errors occur, a prediction for a behavioral experiment.
The Simulander model demonstrates that a simple binary encoding scheme
can explain accurate object localization, and that the evaluation of information
contained in the neural activity is a straightforward process in a neural network.

Application to the rat somatosensory system. In Eurich et al. (1997b),
the binary coding formalism for the computation of the sensory resolution is
applied to the rat somatosensory system.
As mentioned above, Dinse et al. (1994, 1995, 1997) determined the two-point
discrimination threshold for tactile stimulation of the fingers of human subjects.
Subsequently, a paired peripheral tactile stimulation (motivated by a Hebbian
scheme) was applied for 2 or 6 hours. After the stimulation protocol, the dis-
crimination performance was better: the threshold decreased from 1.37 ± 0.21 mm
to 1.12 ± 0.22 mm. In order to investigate the neural mechanisms for the increased
performance, a similar stimulation protocol was applied to the hindpaw of rats,
and the receptive field sizes of neurons in primary somatosensory (SI) cortex
were determined. It turned out that field sizes increased from 45 (± 4.6) mm² to
79.9 (± 8.2) mm² due to the external stimulation (Dinse et al., 1995, 1997). That
is, the adult sensory cortex shows plasticity, a phenomenon that can be found
also in other sensory modalities (Weinberger, 1995; Das, 1997).
Eurich et al. (1997b) adapt the computation of the density of receptive field
boundaries to the geometry of the rat hindpaw and are able to predict an increase
in sensory resolution after the stimulation protocol (Dicke et al., 1999). 15 This
result is in agreement with the psychophysical experiments in humans.16
A subsequent model (Dicke et al., 1999) explains the physiological effects
of paired peripheral tactile stimulation and single-point peripheral stimulation in
terms of a neural network. In brief, the enlargement of receptive fields in the case
of the paired stimulation is ascribed to modifications of the feedforward neural
connections (rather than the intracortical connections), because only these yield

15
Although the experimental setup employs a discrimination task, the model is concerned
with the localization accuracy of a single object: the discrimination threshold was considered
to be equivalent to the resolution obtained. In a more recent approach (Eurich, 2003:L) the
theory of population coding is extended to include the simultaneous presentation of multiple
objects; this yields the possibility to study the discrimination experiments from a new point of
view.
16
A similar determination of sensory thresholds in rats could not be performed so far; a
quantitative comparison is therefore not possible.
a corresponding increase in sensory resolution.17 Likewise, the model shows an
enlargement of the cortical area encoding for the stimulated skin surface. All
plastic changes are reversible as observed in the experiments (Dinse et al., 1995,
1997). The model also correctly describes a negative result in the case of the
single-point stimulation: receptive fields remain unchanged.
A hypothesis resulting from the described combined empirical and theoretical
work is that repetitive sensory stimulation leads to plastic changes in cortical and
subcortical centers to allow the sensory resolution to be adapted to the task at
hand.

The assumption of a binary code has proven successful in describing the local-
ization accuracy of a population of neurons with overlapping receptive fields. In
neurophysiological terms, the occurrence of a neural response in a binary scheme
can manifest itself as any spike activity (while the neuron is silent in its other
state) or as a significant enhancement of its firing rate (while there is only a small
background activity in its other state). For example, a typical neural response
consists of a phasic burst followed by a tonic response (spike frequency adap-
tation, e. g. Connors et al., 1982; Ahmed et al., 1998; Smith et al., 2001). The
investigations on binary coding suggest that the initial phasic response may not
only signal the presence of a stimulus but could as well encode some of its prop-
erties. In fact, Bethge et al. (2003) have shown that binary coding is superior to
rate coding if time is a critical constraint in the signal processing. According to
the idea of palimpsest coding, additional information about the stimulus may be
transmitted through the firing rate, spike timing, etc. during the phasic response
or the tonic component of the neural response.

2.4.3 Rate Coding


Our discussion about the encoding accuracy of a neural population using binary
coding has focused on the question if small or large receptive fields yield a bet-
ter encoding accuracy; this was considered also for other encoding schemes (e. g.,
Baldi and Heiligenberg, 1988; Lehky et al., 1990; Snippe and Koenderink, 1992a).
In the following, we will study single-object encoding with rate codes using the
statistical tool of Fisher information. These investigations will allow for a discus-
sion beyond a simple dichotomy of small vs. large receptive fields, an issue which
has also been raised by Pouget et al. (1999). Topics include a more detailed
discussion of the role of tuning widths; noise properties of neural responses; and
noise correlations in the neural population (Eurich and Wilke, 2000:L; Eurich
et al., 2000:L; Wilke and Eurich, 2001:Lb, 2002, 2001:La). Throughout the dis-
17
Another neural network model for plastic changes in somatosensory cortex (Joublin et al.,
1996) deals only with reorganization due to intracortical microstimulation. For this case, the
authors employ a modification of the intracortical connections. It is not known, however, if
intracortical microstimulation is accompanied by an increase in discrimination performance.
cussion we will assume a parameter regime (concerning population size, maximal
firing rates, and time intervals for spike counts) where Fisher information is a rea-
sonable measure of population accuracy (see the second part of Section 2.2.3.4
for a discussion).

2.4.3.1 Tuning Widths


Introduction. Zhang and Sejnowski (1999) employ Fisher information to ob-
tain a lower bound on the encoding accuracy of a single stimulus x described as
a point in a D-dimensional stimulus space X. Their goal is to investigate the
dependence of the minimal square estimation error on the neural tuning width σ. As
in all other publications before, the tuning is assumed to be radially symmetric,
i. e., tuning curves f(x) are given by

f(x) = F Φ( |x − c|² / σ² ) ,   (2.22)

where F is the mean peak firing rate, c ∈ X is the center of the tuning curve f(x),
and Φ(·) is a function of a scalar variable with max_z Φ(z) = 1. For a constant
density of tuning curves, η(c) ≡ const, and uncorrelated neural activity (2.10),
the elements of the D × D Fisher information matrix are given by

J_ij(x) = 0  if i ≠ j ,    J_ii(x) = σ^{D−2} K(F, Φ, D)  if i = j ,   (2.23)

where K(F, Φ, D) is a function depending on the shape but not the width of
the (identical) tuning curves. The result suggests that for D ≥ 3 features broad
tuning yields a better encoding than narrow tuning.18
The issue of neural tuning, however, is much richer than the result (2.23)
obtained from the considerations in Zhang and Sejnowski (1999).

Asymmetric tuning and relevant vs. non-relevant features. Eurich and
Wilke (2000:L) study the case of tuning curves that are not radially symmetric.
Recall that each dimension of the feature space X corresponds to some feature
of a stimulus. These features may describe different physical quantities such as
position, velocity, contrast, color, etc. Radially symmetric tuning curves, which
had been considered in all publications so far, do not make sense in the light of
this setup. In particular, we are interested in a neural encoding strategy for a
subset of features: which tuning properties yield a good resolution for this subset
if other features are less relevant?
18
Interestingly, the exponent is off by one for noiseless binary coding, i. e., broad tuning is
advantageous already for D ≥ 2 (Hinton et al., 1986; Eurich, 1991).
Equation (2.22) is replaced by asymmetric receptive fields

f^(k)(x) = F Φ( Σ_{i=1}^{D} (x_i − c_i^(k))² / σ_i² ) =: F Φ(ξ^(k)²) ,   (2.24)

where ξ_i^(k)² := (x_i − c_i^(k))²/σ_i² for i = 1, ..., D, and ξ^(k)² := ξ_1^(k)² + ··· + ξ_D^(k)² for
neuron k. The population Fisher information is computed according to (2.12).
For uncorrelated firing (2.10) it is given by a sum of single-neuron Fisher
information matrices. In addition, we assume a constant distribution of tuning
curves, η(x) ≡ const, for which the population Fisher information becomes
independent of x, and the off-diagonal elements vanish (Zhang and Sejnowski,
1999). In this case, the diagonal elements J_i := J_ii are given by

J_i = D K(F, Φ, D) ( ∏_{k=1, k≠i}^{D} σ_k ) / σ_i ,   (2.25)

where K(F, Φ, D) is a function that is independent of the width of the tuning
curves; see Eurich and Wilke (2000:L, p. 1521). For a diagonal matrix, J_i is the
inverse of the minimal square estimation error of feature i, ε²_{i,min}.
The result (2.25) can be interpreted as follows. Assume that the population
of neurons encodes stimulus dimension i accurately, while all other dimensions
are of secondary importance. First, one notes that the encoding accuracy of
feature i depends not only on the tuning width in the i-th direction, σ_i, but
also on all other tuning widths, σ_k (k ≠ i). Furthermore, for a high resolution
of feature i, the following encoding strategy is advantageous: neurons should
show narrow tuning for feature i and broad tuning for all other features. This can be
interpreted as follows: Figure 2.15 compares two neural populations encoding for
a two-dimensional stimulus with unimodal tuning curves. Elliptic curves indicate
effective receptive fields, i. e., they enclose areas of X containing a given, large
fraction of a neuron's total Fisher information (see Eurich and Wilke, 2000:L for
details). Both populations have the same tuning width for feature 1, which has to
be encoded accurately, but a different tuning width for feature 2. Population A is
narrowly tuned for feature 2. As a consequence, a single stimulus is located only
in few receptive fields, resulting in a poorer estimate as compared to population
B, which is broadly tuned for feature 2, i. e., which encodes the stimulus with
a larger active population of neurons.

Narrow tuning and optimal tuning width. For σ_i → 0, (2.25) yields a
Fisher information J_i → ∞, i. e., a vanishing minimal square reconstruction error.
This result is invalid and must be attributed to the fact that the condition η(x) ≡
const breaks down: very small tuning widths yield a small, if any, receptive
field overlap, and the stimulus is no longer encoded by a sufficient number of
neurons and spikes.

Figure 2.15: Population response to the presentation of a stimulus characterized
by parameters x1,s and x2,s. Feature x1 is to be encoded accurately. Effective
receptive field shapes are indicated for both populations. If neurons are narrowly
tuned in x2 (A), the active population (shaded) is small (here: Ncode = 3).
Broadly tuned receptive fields for x2 (B) yield a much larger population (here:
Ncode = 15) thus increasing the encoding accuracy (adapted from Eurich and
Wilke, 2000:L).

To assess tuning properties for this case, we define a minimal
square estimation error for feature i, averaged over a distribution p(x) of stimuli:

⟨ε²_{i,min}⟩ := ∫ dx_1 ... ∫ dx_D  p(x) [ J(x)⁻¹ ]_ii .   (2.26)

An eigenspectrum analysis of the Fisher information matrix shows that ⟨ε²_{i,min}⟩
diverges for σ_i → 0, as expected. On the other hand, the error also increases for
large σ_i, which can be inferred from (2.25). As a consequence, there exists an
optimal tuning width for the case that only a single feature i has to be encoded
with high accuracy. An example is shown in Figure 2.16.
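
Both the benefit of broad tuning in the irrelevant dimension and the existence of an optimal width for the relevant dimension can be reproduced with a small numerical sketch: Gaussian tuning curves on a regular grid, independent Poisson spike counts, and the single-point Fisher information J_11 = Σ_k (∂f_k/∂x_1)²/f_k. The grid, the stimulus position and all parameters are illustrative assumptions; the finite grid is what makes the breakdown at very narrow tuning visible, in place of the constant-density idealization.

    import numpy as np

    def fisher_j11(sigma1, sigma2, F=10.0, spacing=1.0, half=10):
        """Fisher information about x1 at the origin for Gaussian tuning curves
        f_k(x) = F exp(-[(x1-c1k)^2/sigma1^2 + (x2-c2k)^2/sigma2^2]/2) centred on a
        regular grid, with independent Poisson spike counts in a unit window:
        J_11 = sum_k (df_k/dx1)^2 / f_k = sum_k f_k (c1k / sigma1^2)^2."""
        c = (np.arange(-half, half + 1) + 0.5) * spacing   # centre offsets from the stimulus
        c1, c2 = np.meshgrid(c, c, indexing="ij")
        f = F * np.exp(-0.5 * ((c1 / sigma1) ** 2 + (c2 / sigma2) ** 2))
        return float(np.sum(f * (c1 / sigma1 ** 2) ** 2))

    # Broadening the tuning in the "irrelevant" dimension x2 improves the encoding
    # of x1 (cf. Figure 2.15):
    for s2 in (0.5, 1.0, 2.0, 4.0):
        print(f"sigma2 = {s2:4.2f}:  J_11 = {fisher_j11(1.0, s2):7.2f}")

    # J_11 as a function of sigma1 rises and then falls again: very narrow tuning
    # leaves the stimulus between the receptive fields, very broad tuning flattens
    # the slopes of the tuning curves.
    for s1 in (0.05, 0.1, 0.2, 0.4, 0.8, 1.6):
        print(f"sigma1 = {s1:4.2f}:  J_11 = {fisher_j11(s1, 2.0):7.2f}")
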

Hidden dimensions. A situation which is frequently encountered empirically
is that of incomplete knowledge of the neural behavior: usually, an observer
will study a neural population with respect to a number of well-defined stimulus
dimensions. However, he will normally be unable to find all relevant dimensions,
or he will not be able to find a suitable parameterization for certain stimulus
properties. We assume that only d ≤ D of the D total dimensions are known.
The Fisher information (2.25) can be written as J_i = X ∏_{j=1}^{d} σ_j / σ_i², where
X := D K(F, Φ, D) ∏_{j=d+1}^{D} σ_j is an experimentally unknown constant.
Figure 2.16: Example for the stimulus-averaged squared minimal encoding error
for Poissonian spiking and Gaussian tuning curves arranged on a regular
2-dimensional grid with spacing ∆. Dashed curves: numerical result
⟨ε²_{1,min}⟩/(∆² F⁻¹ τ⁻¹) as a function of σ_1/∆ for different values of σ_2/∆ (from
top to bottom): σ_2/∆ = 0.25, 0.5, 1, 2. Solid lines: analytical broad-tuning result,
i. e., ⟨ε²_{1,min}⟩/(∆² F⁻¹ τ⁻¹) = (1/J_1)/(∆² F⁻¹ τ⁻¹) from (2.25). The inset shows
Gaussian tuning curves of optimal width, σ_opt ≈ 0.4∆ (adapted from Eurich and
Wilke, 2000:L).
If the neurons' tuning curves have been measured in more than one dimension, X can be
eliminated by considering the tuning widths as a relative measure of information
content,

ε²_{i,min} / Σ_{j=1}^{d} ε²_{j,min}  =  σ_i² / Σ_{j=1}^{d} σ_j² ,   (2.27)
where i is one of the known dimensions 1, . . . , d. Equation (2.27) is independent
of the unknown (or experimentally ignored) stimulus dimensions d + 1, . . . , D,
which are to be held fixed during the measurement of the d tuning widths. On
the basis of the accessible tuning widths only, it gives a relative quantitative
measure on how accurately the individual stimulus dimensions are encoded by
the neural population.
Equation (2.27) states that if σ_i² is small compared to the sum of the squares
of all measured tuning widths, the population activity allows an accurate recon-
struction of the corresponding stimulus feature: the population contains much
information about x_i. A large ratio, on the other hand, indicates that the popula-
tion response is unspecific with respect to feature xi . As an example, consider the
different pathways of signal processing which have been suggested for the visual
system (Livingstone and Hubel, 1988). The model states that information about
form, color, movement, and depth is in part processed separately in the visual
cortex. Based on the physiological properties of visual cortical neurons, (2.27)
yields a quantitative assessment of the specificity of their encoding with respect
to the abovementioned properties, i.e., our method provides a test criterion for
the validity of the pathway model.
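
The relative measure (2.27) is simple enough to apply directly to measured tuning widths; the helper below is a direct transcription, and the three widths (say, for orientation, spatial frequency, and contrast) are made-up numbers.

    import numpy as np

    def relative_encoding_error(widths):
        """Relative minimal square estimation errors from measured tuning widths,
        cf. Eq. (2.27): eps_i^2 / sum_j eps_j^2 = sigma_i^2 / sum_j sigma_j^2.
        Unknown ("hidden") stimulus dimensions drop out of this ratio."""
        w = np.asarray(widths, dtype=float) ** 2
        return w / w.sum()

    # Hypothetical tuning widths measured for three stimulus dimensions:
    print(relative_encoding_error([12.0, 30.0, 45.0]))
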

Non-uniform neural populations. The results on the effect of tuning curves
without radial symmetry are extended in Eurich et al. (2000:L) and Wilke and
Eurich (2001:Lb) to include non-uniform neural populations. Most theoretical
investigations had focused on populations with identically tuned neurons (but see
Eurich et al., 1997:L). Experimentally, however, one usually observes distributions
of tuning widths (cf. for example Figure 2.8). Also, neurons specialize in certain
features: there exist neurons that are called visual, auditory, somatosensory, etc.
A study on tuning widths should include such basic properties.
To consider a variation of tuning widths, the tuning widths σ_1^(k), ..., σ_D^(k) of
each neuron k are drawn from a distribution P(σ_1, ..., σ_D). We now define a
population Fisher information which is averaged over the distribution of tuning
widths P(σ_1, ..., σ_D). For a discrete population of N neurons, it is given by

⟨J_ij(x)⟩_σ = Σ_{k=1}^{N} ∫ dσ_1 ... dσ_D  P(σ_1, ..., σ_D) J_ij^(k)(x; σ_1, ..., σ_D) ,   (2.28)

where the index σ refers to the average over P(σ_1, ..., σ_D). Introducing the
usual constant density of tuning curves, η(x) ≡ const, into (2.28), the
average population Fisher information becomes

⟨J_ij⟩_σ = 0  if i ≠ j ,    ⟨J_ii⟩_σ = D K(F, Φ, D) ⟨ (∏_{l=1}^{D} σ_l) / σ_i² ⟩  if i = j .   (2.29)

This result generalizes the results obtained above. Figure 2.17 visualizes four
distributions P(σ_1, ..., σ_D) for D = 2. These will be discussed in the following,
using the general case of D features. Distribution (a) is given by

P(σ_1, ..., σ_D) = ∏_{i=1}^{D} δ(σ_i − σ̄) ;

all neurons have radially symmetric tuning curves of width σ̄. The average population
Fisher information (2.29) yields the Fisher information (2.23) obtained
by Zhang and Sejnowski (1999), whereby the tuning width is given by σ = σ̄.
Figure 2.17: Visualization of different distributions of tuning widths for D = 2.
(a) Radially symmetric tuning curves. The dot indicates a fixed σ̄, while the
diagonal line symbolizes a variation in σ as discussed in Zhang and Sejnowski
(1999). (b) Identical tuning curves which are not radially symmetric; these are
discussed in Eurich and Wilke (2000:L). (c) Tuning widths uniformly distributed
within a small rectangle. (d) Two subpopulations, each of which is narrowly tuned
in one dimension and broadly tuned in the other direction (from Eurich et al., 2000:L).

Case (b) has different tuning widths for the different features:

P(σ_1, ..., σ_D) = ∏_{i=1}^{D} δ(σ_i − σ̄_i) ,   (2.30)

where σ̄_i denotes the fixed width in dimension i. For i = j, the average population
Fisher information (2.29) reduces to (2.25) (Eurich and Wilke, 2000:L), whereby
the tuning widths are given by σ̄_j (j = 1, ..., D).
Case (c) employs the encoding of a stimulus with distributed tuning widths;
we consider the distribution

P(σ_1, ..., σ_D) = ∏_{i=1}^{D} (1/b_i) Θ(σ_i − σ̄_i + b_i/2) Θ(σ̄_i + b_i/2 − σ_i) ,   (2.31)

where Θ denotes the Heaviside step function. Equation (2.31) describes a uniform
distribution in a D-dimensional cuboid of size b_1, ..., b_D around (σ̄_1, ..., σ̄_D). A
straightforward calculation shows that in this case, the average population Fisher
information (2.29) for i = j becomes

⟨J_ii⟩_σ = D K(F, Φ, D) ( (∏_{l=1}^{D} σ̄_l) / σ̄_i² ) { 1 + (1/12) (b_i/σ̄_i)² + O[ (b_i/σ̄_i)⁴ ] } .   (2.32)

A comparison with (2.25) yields the astonishing result that an increase in b_i results
in an increase in the i-th diagonal element of the average population Fisher infor-
mation matrix and thus in an improvement in the encoding of the i-th stimulus
feature, while the encoding in dimensions j ≠ i is not affected. Correspondingly,
the total encoding error can be decreased by increasing an arbitrary number of
edge lengths of the cuboid. The encoding by a population with a variability in the
tuning curve geometries is more precise than that by a uniform population. This is
true for arbitrary D.
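
The gain in (2.32) comes from averaging 1/σ_i over the uniform distribution (2.31), which exceeds 1/σ̄_i by Jensen's inequality. A quick Monte Carlo check of the leading-order correction, with arbitrary numbers:

    import numpy as np

    rng = np.random.default_rng(3)
    sigma_bar, b = 1.0, 0.4

    # sigma_i drawn uniformly from [sigma_bar - b/2, sigma_bar + b/2], cf. (2.31)
    sigma = rng.uniform(sigma_bar - b / 2, sigma_bar + b / 2, size=1_000_000)

    mc = np.mean(1.0 / sigma)
    series = (1.0 + (b / sigma_bar) ** 2 / 12.0) / sigma_bar
    print(f"<1/sigma> = {mc:.5f}   1/sigma_bar * (1 + b^2/(12 sigma_bar^2)) = {series:.5f}")

The two numbers agree up to the neglected higher-order terms in (b/σ̄)².
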
In (d), finally, we study a fragmentation of the neural population into D subpopulations
that differ by their tuning properties (Wilke and Eurich, 2001:Lb).
Starting from a uniform tuning width σ̄ for all neurons and features, the i-th
tuning width in the i-th subpopulation is set to γσ̄, with a parameter γ > 0. The
other tuning widths of the subpopulation are adjusted such that the receptive
field volume in stimulus space, ∏_{k=1}^{D} σ_k, remains constant. Mathematically, this
formation of subpopulations corresponds to the tuning width distribution

P(σ_1, ..., σ_D) = (1/D) Σ_{i=1}^{D} [ δ(σ_i − γσ̄) ∏_{j≠i} δ(σ_j − γ^{1/(1−D)} σ̄) ] .   (2.33)

Note that, for γ = 1, the population is uniform: all neurons have tuning width σ̄
for all stimulus dimensions. For γ ≠ 1, the population is split up into D
subpopulations; in subpopulation i, σ_i is different from the other tuning widths
σ_j (j ≠ i). The Fisher information for this population is given by

J_ii ∝ [ (D − 1) γ^{2D/(D−1)} + 1 ] / (D γ²) .   (2.34)
It does not depend on i because of the symmetry in the subpopulations. The
-dependent factor that quantifies the change of encoding accuracy as compared
to the uniform population is plotted in Figure 2.18 for different values of D. It
turns out that the uniform case = 1 is a local minimum of Fisher information:
Uniform tuning properties yield the worst encoding performance within the model
class. Any other value of , i. e., any unsymmetrical receptive field shape for the
subpopulations, leads to a more precise stimulus encoding. The increase of Fisher
information for smaller reflects the better encoding with specialized subpopu-
lations for smaller i , as discussed above. In this limit, the i-th subpopulation
exclusively encodes feature i. For 1, on the other hand, the i-th subpopu-
lation encodes all features but xi . The Fisher information loss that results from
decreasing the corresponding tuning widths is compensated by the increase of i .
The fact that the approximation (2.34) diverges for γ → 0 and for γ → ∞
is due to the idealization of assuming an infinite population size and an infinite
stimulus space. Under these assumptions, the tuning curves will be strongly de-
formed if γ approaches extreme values, but the number of tuning curves that
cover a given point in stimulus space always remains constant. In a finite neural
population, on the other hand, the coverage decreases for large or small γ for
two reasons. First, for some features the tuning curves become narrower, so that
neighboring neurons in these directions cease to encode the stimulus. Second,
in the other directions, where the tuning curves become broader, no new tuning

Figure 2.18: Fragmentation into D subpopulations. Fisher information (2.34),
normalized as J/J(γ=1) and plotted against the subpopulation asymmetry γ for
D = 2, 3, 4, 10, increases due to splitting of the population into D subpopulations
as the splitting parameter γ deviates from 1 (from Wilke and Eurich, 2001:Lb).

curves reach the stimulus position since the corresponding tuning curve centers
would lie outside the boundaries of the population. Consequently, Fisher infor-
mation eventually decreases as γ → 0 or γ → ∞ if the number of neurons is
finite. The value of γ at which this effect sets in depends on the initial tuning
width σ: The larger σ, the sooner a considerable fraction of receptive fields will
lie outside the stimulus space. A second effect that is expected is that values
of γ that are too large or too small will lead to unrepresented gaps in stimulus
space, thus yielding a breakdown of encoding quality by the mechanism described
above. In contrast to the first effect, this breakdown occurs sooner if σ is smaller,
since, at given γ, this implies a less dense coverage of stimulus space with tuning
curves. These arguments are illustrated by an example in Figure 2.19. The ini-
tial deviation from the analytical curve can be attributed to the first effect; it is
strongest for large σ. The drop of Fisher information of the finite population at
large/small γ is due to unrepresented areas in stimulus space: it is most severe
for the smallest value of σ. Thus, our analysis leads to the prediction that for a
finite-size neural population and a bounded stimulus space, there is an optimal
level of specialization in terms of tuning curve non-uniformity.
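The γ-dependent factor in (2.34) can also be evaluated directly. The short sketch below (illustrative values only, not the code behind Figures 2.18 and 2.19) prints the factor, normalized to 1 at γ = 1, and illustrates that any deviation from γ = 1 increases it:

```python
import numpy as np

def asymmetry_factor(gamma, D):
    """gamma-dependent factor of eq. (2.34), normalized to 1 at gamma = 1."""
    return ((D - 1) * gamma ** (2 * D / (D - 1)) + 1) / (D * gamma ** 2)

gammas = np.array([0.25, 0.5, 0.9, 1.0, 1.1, 2.0, 4.0])
for D in (2, 3, 10):
    vals = asymmetry_factor(gammas, D)
    print(f"D = {D:2d}: ", "  ".join(f"J({g:.2f})/J(1)={v:.2f}" for g, v in zip(gammas, vals)))
```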

2.4.3.2 Noise Model

In this section, we study the effect of the noise model, i. e., the deviations from the
mean spike count in individual stimulus presentations, on the encoding accuracy
of a neural population.

Figure 2.19: Formation of subpopulations in a finite-size neural population. An-
alytical approximation (2.34) (solid) and mean Fisher information of N = 1000
neurons uniformly distributed in the stimulus space [0, 1]² for σ = 0.05 (dashed),
σ = 0.075 (dot-dashed), and σ = 0.1 (dotted), plotted as J/J(γ=1) versus the
subpopulation asymmetry γ. Fisher information was calculated at the center of
the stimulus space and averaged over 5·10⁵ choices of the tuning curve positions.
The apparent symmetry is due to the fact that in equation (2.33), γ is equivalent
to 1/γ for D = 2 (from Wilke and Eurich, 2001:Lb).

Several empirical studies (e. g., Dean, 1981; Lee et al., 1998; Gershon et al.,
1998; Maynard et al., 1999) have yielded the result that the logarithm of the
spike count variance is a linear function of the logarithm of the mean spike count.
Theoretically, additive noise (Abbott and Dayan, 1999; Yoon and Sompolinsky,
1999) and multiplicative noise (Abbott and Dayan, 1999) have been employed
next to Poissonian noise to describe neural spiking statistics. Both the empirical
results and the theoretical models are captured by the mean-variance relation of
fractional power law noise of the k-th neuron,

\mathrm{Var}_k(\mathbf{x}) = a\, f_k(\mathbf{x})^{2\alpha}, \qquad (2.35)

with two (possibly τ-dependent) parameters a and α yet to be specified. The
choice α = 1/2 and a = τ yields the mean-variance relation of the Poisson
distribution (proportional noise), and the cases α = 0 and α = 1 are referred
to as additive and multiplicative noise, respectively.19 Even though Poissonian
spike count statistics are frequently used to describe neural responses, empirical
data often show substantial deviations from the Poissonian case, especially at high
19
Note that eq. (2.35) is only valid for large mean spike counts, whereas for small spike counts,
the spike count probability must approach proportional noise (e. g., Panzeri et al., 1996).

spike counts (Softky and Koch, 1992, 1993; Rieke et al., 1997; Gershon et al., 1998;
Lee et al., 1998). Hence, there are aspects to neuronal noise that are not captured
by a simple Poissonian spike count distribution. However, it is not clear what
effect these deviations have on the encoding properties of a neural population. The
following paragraphs discuss the dependence of Fisher information on the mean-
variance relation exponent α with respect to two properties of neural firing: the
existence of a stimulus-independent background activity, and the peak activity F.
Whereas details of the computations can be found in Wilke and Eurich (2001:Lb)
and Wilke and Eurich (2001:La), we will focus on the results here.

Background noise. In this section, we assume that the spike count vector
n = (n1 , . . . , nN ) can be treated as a continuous random variable and specify the
probability density function p(n; x) to be

p(\mathbf{n}\,|\,\mathbf{x}) = \frac{1}{\sqrt{(2\pi)^N \det Q}} \exp\!\left[ -\frac{1}{2} (\mathbf{n} - \mathbf{f})^{T} Q^{-1} (\mathbf{n} - \mathbf{f}) \right], \qquad (2.36)

where Q = Q(\mathbf{x}) is a covariance matrix,

Q(\mathbf{x}) := \left\langle (\mathbf{n} - \mathbf{f}(\mathbf{x}))(\mathbf{n} - \mathbf{f}(\mathbf{x}))^{T} \right\rangle. \qquad (2.37)
For uncorrelated noise, as it is considered in Section 2.4.3.2, Q = Q(x) is a
diagonal matrix whose diagonal elements are given by Qkk (x) = Vark (x) from
eq. 2.35.
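For the Gaussian model (2.36), the Fisher information for a scalar stimulus takes the standard form J(x) = f′(x)ᵀ Q⁻¹(x) f′(x) + ½ Tr[(Q⁻¹(x) Q′(x))²] (cf. Abbott and Dayan, 1999). The following sketch evaluates this expression numerically; the population size, tuning curves and noise parameters are arbitrary illustration values, and the finite-difference derivatives are a convenience rather than part of the original analysis:

```python
import numpy as np

def fisher_gaussian(x, f, Q, dx=1e-4):
    """Fisher information J(x) = f'^T Q^{-1} f' + 0.5 Tr[(Q^{-1} Q')^2] for a
    population with mean response f(x) and covariance matrix Q(x)."""
    fp = (f(x + dx) - f(x - dx)) / (2 * dx)          # derivative of the tuning curves
    Qp = (Q(x + dx) - Q(x - dx)) / (2 * dx)          # derivative of the covariance
    Qinv = np.linalg.inv(Q(x))
    return fp @ Qinv @ fp + 0.5 * np.trace(Qinv @ Qp @ Qinv @ Qp)

# Illustrative population: N Gaussian tuning curves, Poisson-like variance (alpha = 1/2).
N, F, sigma, nu, a, alpha = 20, 100.0, 0.1, 0.1, 1.0, 0.5
centers = np.linspace(0.0, 1.0, N)

def f(x):
    return nu * F + (1 - nu) * F * np.exp(-(x - centers) ** 2 / (2 * sigma ** 2))

def Q(x):
    return np.diag(a * f(x) ** (2 * alpha))          # uncorrelated neurons, cf. eq. (2.35)

print("J(0.5) =", fisher_gaussian(0.5, f, Q))
```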
Instead of tuning functions of the type (2.24), now consider tuning functions
given by

f_k(\mathbf{x}) = F\, \Phi(\delta(k)^2) = \nu F + F (1 - \nu) \exp\!\left( -\delta(k)^2 / 2 \right), \qquad (2.38)

where δ(k)² is defined as in (2.24). The parameter ν measures the level of stimulus-
unrelated background activity at given maximum firing rate F; it ranges from
zero-baseline firing (ν = 0) to completely stimulus-independent firing (ν = 1). As
an example, a Gaussian tuning curve is depicted in Figure 2.20 along with the
variance according to the mean-variance scaling of α = 0, 1/2, 1.
For sufficient tuning curve overlap and a constant distribution of tuning
curves, the Fisher information for the model population is given by

J = \eta \left( \prod_{i=1}^{D} \sigma_i \right) \left( \sum_{i=1}^{D} \frac{1}{\sigma_i^{2}} \right) K(F,\tau,D). \qquad (2.39)

Assuming independent Poissonian-type firing statistics (α = 1/2) at high spike
counts, Fτ ≫ 1, and Gaussian tuning curves as in (2.38), the function K(F, τ, D)
can be approximated by

K(F,\tau,D) = (2\pi)^{D/2}\, F\tau \left[ (1-\nu) + \nu\, \mathrm{Li}_{1+D/2}\!\left(1 - \nu^{-1}\right) \right], \qquad (2.40)

Figure 2.20: Example of a tuning curve and the variability of a model neuron.
Mean response (solid) +/− standard deviation for additive (flat), proportional,
and multiplicative noise. The mean, i. e. the tuning curve, follows a Gaussian
centered at the preferred stimulus with F = 100 Hz and a baseline firing level of
10 Hz (ν = 0.1); the stimulus axis is given in units of the tuning width σ. The
shape of the response variance depends on the noise model. Arrows indicate
stimuli for which the neuron's Fisher information is maximal (from Wilke and
Eurich, 2001:Lb).

where \mathrm{Li}_n(z) := \sum_{k=1}^{\infty} z^{k}/k^{n} is the polylogarithm function.
The plot of this function in Figure 2.21 demonstrates that the baseline ac-
tivity (as measured by ν) deteriorates the encoding accuracy as expected. An
interesting finding is that the degree to which this is the case strongly depends on
the number of stimulus features D for Poissonian firing. For large D, background
activity appears to be disastrous for the encoding: At D = 10, for example, a
noise level of ν = 0.1 already leads to a 92% decrease of the Fisher information.
The underlying reason for this behavior is that, for high dimensionality, most of
the volume of the receptive field falls into the border region of the tuning curve
(δ(k) ≳ 1), where the majority of spikes belong to the background activity.20
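This strong D-dependence can be verified with a one-dimensional radial integral. The sketch below (an illustrative numerical check assuming Gaussian tuning, Poisson-like firing at high rates and an infinite homogeneous population; not the original derivation) evaluates J(ν) ∝ ∫₀^∞ dr r^{D+1} (1−ν)² e^{−r²} / [ν + (1−ν) e^{−r²/2}] and normalizes by the value at ν = 0:

```python
import numpy as np

r = np.linspace(1e-6, 12.0, 200_000)          # radial coordinate in units of sigma
dr = r[1] - r[0]
u = np.exp(-r ** 2 / 2)                        # Gaussian tuning profile

def J(nu, D):
    """Population Fisher information (up to a nu-independent constant) for
    Gaussian tuning with baseline nu and D encoded stimulus dimensions."""
    integrand = r ** (D + 1) * (1 - nu) ** 2 * u ** 2 / (nu + (1 - nu) * u)
    return integrand.sum() * dr

for D in (1, 2, 10):
    print(f"D = {D:2d}:  J(nu=0.1)/J(nu=0) = {J(0.1, D) / J(0.0, D):.3f}")
```

For D = 10 the ratio drops to roughly 0.08, i. e., the 92% loss quoted above.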
The situation is different for additive noise (α = 0). In this case, K(F, τ, D)
in (2.39) is given by

K(F,\tau,D) = \frac{4 F^{2}\tau^{2}\, \pi^{D/2}\, (1-\nu)^{2}}{a\, \Gamma(1 + D/2)} \int_{0}^{\infty} dz\, \left[\phi'(z^{2})\right]^{2} z^{D+1}, \qquad (2.41)

where φ(z) = exp(−z/2) is the stimulus-dependent part of the tuning curve (2.38)
and a is the stimulus-independent variance of the additive noise. Equation (2.41)
shows that the ν-dependence for this kind of firing rate variance is always
proportional to (1−ν)², regardless of the dimension D, see Figure 2.21.21
20
A qualitatively similar dependence on ν and D results for other tuning curve shapes,
e. g. cos²-tuning.
21
A similar equation can be derived for correlated neural activity (Wilke and Eurich, 2001:La);
the dependence on ν and D remains the same for uniform noise correlation.

Figure 2.21: Total Fisher information J, normalized as J(ν)/J(ν = 0), as a
function of the baseline activity level ν for different numbers D of encoded
features. The curves have been normalized to unity at ν = 0 in order to make
them comparable. Curves for D = 1, 2, 3, 10 were obtained according to eq. (2.40)
assuming independent Poisson neurons; the dot-dashed curve results from the
additive noise model, eq. (2.41) (from Wilke and Eurich, 2001:La).

Further discussion of this issue can be found in Wilke and Eurich (2001:Lb, p.
164).
In Section 2.4.3.1 we discussed why neurons may specialize in a few stimulus
features rather than being sensitive to all kinds of stimuli. Next to the exponen-
tially growing number of neurons required to cover a high-dimensional stimulus
space and the positive effect of a fragmentation in subpopulations, the suscepti-
bility to background noise for non-additive noise could be another reason for this
type of neuronal organization.

2.4.3.3 Noise Correlations


The above considerations focused on the case of uncorrelated neural activity
according to (2.10). Encoding properties may change dramatically if noise cor-
relations exist. However, there has been a controversial discussion on the role
that correlated variability may play in population coding, since the exact type of
correlation assumed greatly influences the theoretical predictions.
If the readout consists of a simple averaging over the population output, pos-
itive correlations pose severe limits on the encoding accuracy. It was therefore
suggested that correlations in neural systems are harmful in general (Zohary et al.,
1994; Shadlen and Newsome, 1998). However, the situation is more complicated

if a code is considered in which the neurons have different preferred stimuli. In


this case, the effect of correlations critically depends on the structure of the co-
variance matrix. For example, uniform positive correlations across the population
increase the achievable encoding accuracy (Abbott and Dayan, 1999; Zhang and
Sejnowski, 1999), while correlations of limited range can have the opposite ef-
fect (Johnson, 1980; Snippe and Koenderink, 1992b; Abbott and Dayan, 1999;
Yoon and Sompolinsky, 1999). In the following, a general correlation model is
employed: we consider a covariance matrix that incorporates both uniform and
a limited-range noise correlations and therefore interpolates between the cases
discussed in the literature (Wilke and Eurich, 2001:Lb, 2002).
More specifically, let the spike count statistics again be given by (2.36). The
covariance matrix (2.37), however, is no longer diagonal but takes the form

Q_{kl}(\mathbf{x}) = \left[ \delta_{kl} + (1 - \delta_{kl}) \left( q + b\, \exp\!\left( -\frac{|c^{(k)} - c^{(l)}|}{L} \right) \right) \right] a\, f_k(\mathbf{x})^{\alpha} f_l(\mathbf{x})^{\alpha} \qquad (2.42)

for k, l = 1, …, N. In the following, we will discuss several cases.

Uniform correlations. First, consider a uniform correlation strength q ≥ 0


without limited-range contributions, i. e., b = 0 in (2.42): Each neuron is pos-
itively correlated with all others. In this case, correlations always increase the
coding accuracy regardless of the values of the other system parameters (Wilke
and Eurich, 2001:Lb, p. 166). Intuitively, this result can be understood by con-
sidering the possibility to subtract the noise if the correlations are strong (Abbott
and Dayan, 1999). Hence, for uniform correlation strength, the mean-variance ex-
ponent α does not lead to new effects as compared to independent neurons. This
is not the case for limited-range correlations, as will be shown in the following.

Limited-range correlations. Next, we assume pure limited range correlations


of decay length L: q = 0 and b = 1 in (2.42) such that

Q_{kl}(\mathbf{x}) = \left[ \delta_{kl} + (1 - \delta_{kl}) \exp\!\left( -\frac{|c^{(k)} - c^{(l)}|}{L} \right) \right] a\, f_k(\mathbf{x})^{\alpha} f_l(\mathbf{x})^{\alpha}. \qquad (2.43)

Note that the free parameter α allows the study of different variance-mean rela-
tions, just as in Section 2.4.3.2: α = 0 corresponds to additive noise, α = 1/2 to
Poisson-like noise, and α = 1 to multiplicative noise. For D = 1, this situation
can be treated analytically (Wilke and Eurich, 2001:Lb, 2002).
An important point concerning limited-range correlations was recognized re-
cently by Yoon and Sompolinsky (1999): Adding neurons to a population with
given correlation range does not improve Fisher information beyond some limiting
value. In consequence, the encoding capacity of populations with limited-range
correlations appears to be severely limited. Our calculation shows, however, that

Figure 2.22: Effect of limited-range correlations on the encoding accuracy of a
neural population. The plot shows Fisher information as a function of the number
of neurons N for different power law exponents α. The population is limited
to about 50 degrees of freedom only in the additive noise case (from Wilke and
Eurich, 2002).

this conclusion is limited to the case of additive Gaussian noise (see Figure 2.22).
For noise models other than additive noise, Fisher information increases linearly
with the number of neurons (i. e., with the density of tuning curves), even in
the presence of limited-range correlations. Since the empirically found values for α
are in the vicinity of 1/2, this result casts doubt on the conclusion that limited-
range correlations generally impose an upper bound on the encoding accuracy for
large populations.
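This size dependence can be probed with a small numerical experiment. The sketch below (illustrative parameters; not a reproduction of Figure 2.22) builds the covariance matrix (2.43) for growing populations and evaluates the Gaussian-model Fisher information J = f′ᵀQ⁻¹f′ + ½Tr[(Q⁻¹Q′)²] for additive (α = 0) and Poisson-like (α = 1/2) noise:

```python
import numpy as np

F, sigma, nu, a, L, x0 = 100.0, 0.1, 0.1, 1.0, 0.03, 0.5   # illustrative values

def fisher(N, alpha, dx=1e-5):
    c = np.linspace(0.0, 1.0, N)                            # preferred stimuli
    def f(x):
        return nu * F + (1 - nu) * F * np.exp(-(x - c) ** 2 / (2 * sigma ** 2))
    def Q(x):
        fx = f(x)
        corr = np.exp(-np.abs(c[:, None] - c[None, :]) / L)  # limited-range part, eq. (2.43)
        return corr * a * np.outer(fx ** alpha, fx ** alpha)
    fp = (f(x0 + dx) - f(x0 - dx)) / (2 * dx)
    Qp = (Q(x0 + dx) - Q(x0 - dx)) / (2 * dx)
    Qi = np.linalg.inv(Q(x0))
    return fp @ Qi @ fp + 0.5 * np.trace(Qi @ Qp @ Qi @ Qp)

for N in (25, 50, 100, 200, 400):
    print(f"N = {N:4d}:  J(alpha=0) = {fisher(N, 0.0):10.1f}   J(alpha=1/2) = {fisher(N, 0.5):10.1f}")
```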

Interaction of uniform and limited-range correlations. A new issue


that can be studied with the general covariance matrix (2.42) concerns the inter-
actions between limited-range and uniform correlations. A qualitative prediction
in this context was given by Oram et al. (1998). They argued that for neurons
with positive signal correlation, i. e., for overlapping tuning curves, negative noise
correlation should be favorable because this would allow for a better averaging.
At the same time, neurons with negative signal correlation should be positively
correlated to avoid reconstruction errors. Hence, a situation that should be es-
pecially disadvantageous is the simultaneous presence of positive uniform and
limited-range correlations, yielding high noise correlations for the neurons with
positive signal correlation. On the other hand, negative limited-range correlations
paired with positive uniform correlations should increase the encoding accuracy
of the population.

To test this prediction, the population Fisher information was calculated for
a fixed positive uniform correlation level (q = 0.5) as a function of sign and
strength of an additional limited-range correlation contribution, as specified by
the parameter b in (2.42). The result is shown in the inset of Figure 2.23. For
positive short-range correlations, the encoding becomes worse, while it improves
for negative short-range correlations.22

Figure 2.23: Uniform correlation strength q required to compensate the loss
of Fisher information due to limited-range correlations as a function of κ =
exp(−1/(ηL)). Parameters were α = 1/2, η = N/(10σ), F = 100, ν = 0.1,
D = 1, b = 1 − q. Inset: Fisher information for a population with both uniform
and limited-range correlations as a function of the coefficient of the limited-range
contribution, b, for fixed q = 1/2. For b > 0, limited-range correlations add to the
uniform part, while for b < 0, limited-range correlations are negative and weaken
the correlations locally. Parameters were N = 100, η = 10/σ, D = 1, α = 1/2,
L = 0.03, F = 100, ν = 0.1 (from Wilke and Eurich, 2001:Lb).

The trend that uniform and limited-range correlations counteract holds for
a large range of values of the correlation length L. Figure 2.23 demonstrates
this result: The plot shows the level q of uniform correlations that is necessary
to compensate the negative effect of limited-range correlations of given range κ,
where κ = exp(−1/(ηL)). This quantity was determined by choosing a fixed κ
and increasing the level of uniform correlations q, while simultaneously decreasing
22
An analytical approximation is given in Wilke and Eurich (2001:Lb, p. 168); it is plotted
as the dashed line in the inset of Figure 2.23.

the strength of limited-range correlations (b = 1 − q), until the Fisher informa-
tion reached the value obtained in the uncorrelated case (q = 0, b = 0). The
initial increase with κ reflects the growing loss of Fisher information with κ that
has to be compensated. Eventually, limited-range correlations also increase the
Fisher information since for large values of L, the case of uniform correlations is
approached: The curve approaches zero again for large κ.

Noise correlations with jitter. A conspicuous feature of experimentally ob-


tained correlation coefficients is their large and seemingly random variability
within a population. For example, correlation coefficients ranging from −0.9 to
0.9 were found in a population of MI neurons (Maynard et al., 1999), and a
similarly large span of values was reported in monkey MT (Zohary et al., 1994).
Despite the fact that this large variability appears to be a general finding in
studies of correlations in neural systems, there have been no attempts to relate
it to the population's encoding performance. In the following, the effect of non-
uniform correlations is studied by using a covariance matrix (2.37) with entries
that contain a stochastic component,

Q_{ij}(\mathbf{x}) = q_{ij}\, a\, f_i(\mathbf{x})^{\alpha} f_j(\mathbf{x})^{\alpha}, \qquad i, j = 1, \ldots, N. \qquad (2.44)

The diagonal elements are fixed at qii = 1 for i = 1, . . . , N , whereas the non-
diagonal elements qij for i < j are drawn from a Gaussian distribution with
mean q and standard deviation s, while the symmetry of Qij requires qji = qij .
Equation (2.44) corresponds to a variant of the Gaussian deformed ensemble
known from random matrix theory (Brody et al., 1981). The smallest eigenvalues
of the covariance matrix Q yield the largest contribution to Fisher information.
Random matrices have eigenvalues that are distributed around the eigenvalues of
the corresponding mean matrix. Thus, it is expected that eigenvalues smaller
than those of the corresponding covariance matrix without jitter occur in the
case of noisy correlations, leading to an increase of average Fisher information.
This is confirmed by expanding the inverse of the covariance matrix in terms
of s² and neglecting terms of higher order. It turns out that, for D = 1, the
correction term is always positive (see Wilke and Eurich, 2001:Lb, p. 170 and
Appendix A.5). Hence, for a given mean correlation coefficient q, a population of
neurons with noisy correlation coefficients performs better on average than one
with uniform correlations.
This effect is demonstrated by a numerical simulation in Figure 2.24, where
the mean Fisher information of 10⁵ covariance matrices is plotted as a function
of the standard deviation of the correlation coefficient. Figure 2.24 also shows
that the range of validity of the expansion is quite small. This is due to the N-
dependence of the smallest eigenvalue, which is given by λ_min = 1 − q − 2√N s in
this case (Wilke and Eurich, 2001:Lb, Appendix A.5). For N = 50 and q = 0.5,
this implies that the mean Fisher information must diverge around
s_max = (1 − q)/√(4N) ≈ 0.035, explaining why the quadratic approximation
becomes invalid already around s ≈ 0.015.

Figure 2.24: Variability in the correlation strength increases coding accuracy.


Mean Fisher information (solid) and analytical formula (dashed) as a function
of standard deviation s of the correlation coefficient distribution. N = 50, 10⁵
samples (from Wilke and Eurich, 2002).


The numerical calculations indicate that the increase in
Fisher information is even more pronounced than predicted by the expansion
formula.
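The mechanism is easy to reproduce qualitatively. The following Monte Carlo sketch (a crude stand-in for the simulation behind Figure 2.24: it fixes the covariance at a single stimulus value, uses an arbitrary derivative vector, and keeps only the f′ᵀQ⁻¹f′ contribution) averages the Fisher information over correlation matrices with jittered coefficients:

```python
import numpy as np

rng = np.random.default_rng(1)
N, q, n_samples = 50, 0.5, 2000
fp = rng.normal(size=N)                  # fixed, arbitrary derivative vector f'(x)

def mean_fisher(s):
    """Average of f'^T Q^{-1} f' over random correlation matrices whose
    off-diagonal entries are q plus Gaussian jitter of standard deviation s."""
    vals = []
    for _ in range(n_samples):
        jitter = np.triu(rng.normal(0.0, s, size=(N, N)), 1)
        Q = np.full((N, N), q) + jitter + jitter.T
        np.fill_diagonal(Q, 1.0)
        vals.append(fp @ np.linalg.solve(Q, fp))
    return np.mean(vals)

J0 = mean_fisher(0.0)
for s in (0.0, 0.005, 0.01, 0.02):
    print(f"s = {s:5.3f}:  mean J / J(s=0) = {mean_fisher(s) / J0:.3f}")
```

By convexity of the map Q ↦ f′ᵀQ⁻¹f′, the average over jittered matrices can only exceed the uniform-correlation value, in line with the argument above.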
The result that correlations with jitter increase the Fisher information can be
interpreted as follows. A zero eigenvalue in the covariance matrix indicates the
existence of a linear combination of single-neuron activities that is completely
noiseless. This situation can of course not be achieved in realistic systems. Thus,
there must be limits to the choice of the covariance matrix in biological systems.
However, the form of these restrictions remains unclear, making a systematic op-
timization of the covariance matrix by theoretical analysis impossible. One may
therefore view the introduction of jitter in the correlation coefficients as a means
of randomly exploring the space of covariance matrices, which, according to the
above, yields a better average encoding accuracy than remaining on the subspace
of uniform correlation strength. The range of validity of this argument is limited
by the fact that the matrices eventually leave the admissible set.

Section 2.4.3 listed a multitude of parameters that influence the encoding accu-
racy of a neural population: the number of encoded features, tuning widths for

different features, the maximal firing rate, the noise model, background activity,
various kinds of noise correlations and their interaction, etc. Even subtle effects
like introducing a non-uniformity in the tuning widths or the correlation coef-
ficients may improve the sensory resolution. These results suggest that a large
number of empirical data are required to make reliable predictions for the signal
content of a population of neurons. Also, it is practically not feasible to argue
with optimal coding strategies as long as the biological constraints and the physi-
cal quantities that are (putatively) optimized are not completely identified. This
is even more true as long as the neural code is not identified: the current results
were obtained under the simple assumption of a rate coding.

2.4.4 The Case of Multiple Objects


The investigations of the last two sections focus on the encoding of a single
(sensory or motor) object. In a natural situation, however, neural activity will
normally result from multiple objects or even complex sensory scenes. In partic-
ular, the encoding of multiple stimuli has recently been studied in the context of
attention experiments; these require the presentation of at least one distractor
along with the attended stimulus. Electrophysiological data are now available
demonstrating effects of selective attention on neural firing behavior in various
cortical areas (e. g., Moran and Desimone, 1985; McAdams and Maunsell, 1999;
Treue and Trujillo, 1999, see also the corresponding paragraphs in Section 2.2.2).
Such experiments require the development of theoretical tools which deviate from
the usual practice of considering only single stimuli in the analysis. Zemel et al.
(1998) employ an extended encoding scheme for stimulus distributions and use
Bayesian decoding to account for the presentation of multiple objects. Simi-
larly, Bayesian estimation has been used in the context of attentional phenomena
(Dayan and Zemel, 1999).

Compound space. Here, we extend the Fisher information analysis of an ob-


ject in a D-dimensional feature space to include the case of multiple stimuli (Eu-
rich, 2003:L). Tuning curves for the presentation of a single object will henceforth
be denoted by f1 (x), where the subscript refers to the single object. The behavior
of a neuron upon presentation of multiple objects, however, cannot be inferred
from tuning curves f1 (x). Instead, neurons may show nonlinearities such as the
so-called non-classical receptive fields in the visual area V1 which have attracted
much attention (e. g., Knierim and van Essen, 1992; Sillito et al., 1995). For M
simultaneously presented stimuli, x1 , . . . , xM , the neuronal tuning curve can be
written as a function fM (x1 , . . . , xM ), where the subscript M is an indicator of
the number of stimuli it refers to. The domain of this function will be called the
compound space of the stimuli.
In the following, we consider a specific example consisting of two simulta-
neously presented stimuli, characterized by a single physical property (such as

orientation or direction of movement). The resulting tuning function is there-


fore a function of two scalar variables x1 and x2: f2(x1, x2) = ⟨r(x1, x2)⟩ =
⟨n(x1, x2)⟩/τ. Figure 2.25 visualizes the concept of the compound space.

Figure 2.25: The concept of compound space. A single-stimulus tuning curve
f1(x) (left) yields the average response to the presentation of either x' or x''; the
simultaneous presentation of x' and x'', however, can be formalized only through
a tuning curve f2(x1, x2) (right); from Eurich (2003:L).

In order to obtain a partially analytical access to the encoding properties of a


neural population, we will furthermore assume that a neuron's response f2(x1, x2)
is a linear superposition of the single-stimulus responses f1(x1) and f1(x2), i. e.,

f_2(x_1, x_2) = k f_1(x_1) + (1 - k) f_1(x_2), \qquad (2.45)
where 0 < k < 1 is a factor which scales the relative importance of the two
stimuli. Equation (2.45) is consistent with neural behavior observed in area
17 of the cat upon presentation of bi-vectorial transparent motion stimuli (van
Wezel et al., 1996) and in areas MT and MST of the macaque monkey upon
simultaneous presentation of two moving objects (Recanzone et al., 1997); Heeger
(1992); Carandini and Heeger (1994) proposed a normalization model of neural
responses that is also described by this equation.23
The consideration of a neural population in the compound space yields tuning
properties and symmetries which are very different from those in a D-dimensional
single-stimulus space:
First, the tuning curves have a different appearance. Figure 2.26a shows a
tuning curve f2 (x1 , x2 ) given by (2.45), where f1 (x) is a Gaussian,
f_1(x) = F \exp\!\left( -\frac{(x - c)^2}{2\sigma^2} \right); \qquad (2.46)
F is a gain factor which can be scaled to be the maximal firing rate of
the neuron. f2 (x1 , x2 ) is not radially symmetric but has cross-shaped level
curves.
23
The method of applying estimation theory in compound space can be employed for arbitrary
nonlinearities in the neural firing; however, the analysis will then be largely numerical.

Figure 2.26: (a) A tuning curve f2(x1, x2) in a two-dimensional compound space
given by (2.45) and (2.46) with k = 0.5, c = 5, σ = 0.3, F = 1. (b) Arrangement
of tuning curves: The centers of the tuning curves are restricted to the diagonal
x1 = x2. The cross is a schematic cross-section of the tuning curve in (a); from
Eurich (2003:L).

Second, a single-stimulus tuning curve f1 (x) whose center is located at x = c


yields a linear superposition whose center is given by the vector (c, c) in the
compound space. This is due to the fact that both axes describe the same
physical stimulus feature. Therefore, all tuning curve centers are restricted
to the 1-dimensional subspace x1 = x2 (Figure 2.26b). The tuning curve
centers are assumed to have a distribution in the compound space which
can be written as

\eta(c_1, c_2) = \begin{cases} 0 & \text{if } c_1 \neq c_2 \\ \eta(c) & \text{if } c_1 = c_2 = c \,. \end{cases}

The geometrical features in the compound space suggest that an estimation-


theoretic approach will yield encoding properties of neural populations which are
different from those obtained from the presentation of a single stimulus.

Fisher information. For the described example of a 2-dimensional compound


space and independently spiking neurons with Poisson statistics, the Fisher information,
in this case a 2 × 2 matrix, can be readily worked out; cf. Eurich (2003:L) for details.
Instead, we will discuss two examples.
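For readers who wish to experiment with the construction, the sketch below (illustrative parameter values; the closed-form expressions are given in Eurich, 2003:L) assembles the 2 × 2 Fisher information matrix numerically for independent Poisson neurons with the tuning curves (2.45), (2.46) and centers on the diagonal, and inverts it to obtain the minimal square estimation errors:

```python
import numpy as np

F, sigma, k, tau = 1.0, 0.3, 0.5, 1.0
centers = np.linspace(0.0, 10.0, 201)                    # tuning curve centers c on the diagonal

def f1(x, c):
    return F * np.exp(-(x - c) ** 2 / (2 * sigma ** 2))  # single-stimulus tuning curve (2.46)

def fisher_matrix(x1, x2):
    """2x2 Fisher information matrix in compound space for independent Poisson
    neurons with the linear superposition (2.45)."""
    f2 = k * f1(x1, centers) + (1 - k) * f1(x2, centers)
    f2 = np.maximum(f2, 1e-300)                          # guard against underflow far from both stimuli
    d1 = k * f1(x1, centers) * (centers - x1) / sigma ** 2        # df2/dx1
    d2 = (1 - k) * f1(x2, centers) * (centers - x2) / sigma ** 2  # df2/dx2
    J = np.empty((2, 2))
    J[0, 0] = tau * np.sum(d1 * d1 / f2)
    J[0, 1] = J[1, 0] = tau * np.sum(d1 * d2 / f2)
    J[1, 1] = tau * np.sum(d2 * d2 / f2)
    return J

for rho in (0.2, 0.5, 1.0, 2.0):                         # (x1, x2) = (5 + rho, 5 - rho)
    errors = np.diag(np.linalg.inv(fisher_matrix(5.0 + rho, 5.0 - rho)))
    print(f"rho = {rho:3.1f}:  minimal square errors for (x1, x2) = {errors}")
```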

Example 1: Symmetrical tuning. Consider the symmetrical case k = 1/2


in (2.45), the receptive fields of which are shown in Figure 2.26a. Figure 2.27 shows
the minimal square estimation error for x1, ε²_{1,min}(ρ), as obtained from the first


diagonal element of the inverse Fisher information matrix; √2 ρ is the distance of
the point (x1, x2) from the diagonal. Due to the symmetry, ε²_{1,min}(ρ) is identical
to the minimal square error for x2, ε²_{2,min}(ρ). The estimation error diverges as

Figure 2.27: Minimal square estimation error for stimulus x1 or x2 as a function
of ρ. Solid line: F = 1; dotted line: F = 1.5. In both cases, k = 0.5, σ = 1,
τ = 1, η = 1; from Eurich (2003:L).

ρ → 0. This can be understood as follows: For k = 1/2, the Fisher information
matrix is symmetric and can be diagonalized. The eigenvector directions are

\vec{v}_1 = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \qquad \vec{v}_2 = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ -1 \end{pmatrix}. \qquad (2.47)

Correspondingly, the diagonal Fisher information matrix yields a lower bound
for the estimation errors of (x1 + x2)/√2 and (x2 − x1)/√2, respectively. The
results are shown in Figure 2.28. The estimation error for (x1 + x2)/√2 takes
a finite value for all ρ. However, the estimation error for (x2 − x1)/√2 diverges
as ρ → 0. This error corresponds to an estimation of the difference of the two
presented stimuli. The Fisher information for (x2 − x1)/√2 can be regarded as a
discrimination measure which takes the simultaneous presentation of stimuli into
account. As expected, discrimination becomes impossible as the stimuli merge.

Example 2: Attending one stimulus. Electrophysiological recordings in


monkey area V4 suggest that upon presentation of two stimuli inside a neuron's
receptive field, the influence of the attended stimulus increases as compared to the
unattended one (Moran and Desimone, 1985). In our framework, this situation
can be considered by increasing the weight factor of the attended stimulus in the
linear superposition (2.45). Here we study the case k = 0.75 corresponding to at-
tending stimulus x1 . The resulting tuning curve shows characteristic distortions
as compared to the symmetrical case k = 0.5 (Figure 2.29a). The Fisher infor-

Figure 2.28: Minimal square estimation error e²_min(ρ) for (a) the direction
(x1 + x2)/√2 and (b) the direction (x2 − x1)/√2, plotted as a function of ρ. Solid
lines: F = 1; dotted lines: F = 1.5. Same parameters as in Figure 2.27; from
Eurich (2003:L).

Figure 2.29: Neural encoding for one attended stimulus. (a) Tuning curve (2.45),
(2.46) for k = 0.75, i. e., stimulus x1 is attended. All other parameters as in
Figure 2.26a. (b) Minimal square estimation error for the direction (x2 − x1)/√2,
plotted as a function of ρ, resulting from a diagonalized Fisher information
matrix. Solid line: k = 0.5 as in Figure 2.28b; dotted line: k = 0.75. F = 1, all
other parameters as in Figure 2.27; from Eurich (2003:L).

mation analysis reveals that the attended stimulus x1 yields a smaller minimal
square estimation error than it does in the non-attention case k = 0.5 whereas
the minimal square error for the unattended stimulus x2 is increased (data not
shown). Figure 2.29b shows the minimal square error for the difference of the
stimuli, (x2 − x1)/√2. The minimal estimation error becomes larger as compared
to k = 0.5. This result can be interpreted as follows: Attending stimulus x1
yields a better encoding of x1 but a worse encoding of x2. The latter results in
the larger estimation error for the difference (x2 − x1)/√2 of the stimulus values.

This can be interpreted as a worse discrimination ability: In a psychophysical
experiment, subjects attending stimulus x1 will have only a crude representation
of the unattended stimulus x2 and will therefore show a performance which is
worse than in the situation where both stimuli are processed in the same way.
This is a prediction resulting from the presented framework.

Section 2.4 has shown that studying neural responses upon presentation of a
single object (including the case of multiple objects as described in the com-
pound space) yields far-reaching results for neural encoding strategies. Although
conceptually simple, this approach allows the identification of system parameters
that are important in neural signal processing.
Chapter 3

Neural Dynamics

3.1 Introduction
This chapter deals with the dynamical systems approach to phenomena of the
nervous system. It uses a terminology different from the one in the functional
approach: instead of talking about encoding, decoding, representation, discrimi-
nation, information, object formation etc., the language switches to notions like
transients, steady states, limit cycles, stability properties, noise-induced tran-
sitions and the like. Accordingly, the mathematical methodology changes from
statistical estimation theory and information theory to dynamical systems theory
and differential equations.
Historically, the brain as a scientific object has been subject to a large number
of contradictory ideas, theories and fallacies, far more than any other organ.
For example, historical brain metaphors tended to reflect the respective world
view that pervaded the society. This is true not only for the middle ages but also
for the 20th century: In the 1950s many people considered the brain to operate
like a computer1 (see the overviews by Florey, 1996 and Gould, 1996).
Even today a fundamental understanding of brain processes and functions is
lacking. What is the reason for this situation? An explanation lies in the fact
that the brain is a complex system. Several factors contribute to this:

Brain processes take place on many time scales ranging from sub-millisecond
conformation changes in proteins of the cell membrane to weeks or months
in plastic changes in neural tissue, for example after the loss of a limb.
The different dynamical processes are not separated but are highly inter-
twined. For example, according to Hebbs postulate (Hebb, 1949), learning
as manifested in long-term metabolic or structural changes results from
1
The program of Artificial Intelligence (AI) contained the idea of building expert systems
that incorporated a large body of knowledge on more or less specific topics. This knowledge
was accumulated in the form of data and rules that combined them. The general belief in the
AI community was that the brain has similar principles of symbolic knowledge organization.


the correlated short-term activity of the neurons involved. It has taken


5 decades to prove that the Hebbian idea is indeed right (Seung, 2000). In
every modeling study, the serious question arises as to which level is appropriate
for the dynamical variables. Figure 3.1 (right) shows a qualitative overview
of brain processes on different time scales.

Likewise, the dynamics in the nervous system operates on length scales

[Figure 3.1 consists of two schematic scales. Left, spatial organization (length
in m): 10⁰ body; 10⁻¹ signal pathways, e. g., cortical processing; 10⁻² cortical
areas; 10⁻³ macrocolumns; 10⁻⁴ cortical neurons, dendritic tree; 10⁻⁷ synapses;
10⁻⁸ cell membrane diameter; 10⁻⁹ pores in ion channels. Right, temporal
organization (time in s), from slow to fast: morphological changes; neuromodulation,
LTP/LTD; sensorimotor control and sensorimotor processing; reaction time and
visual processing; spike-timing dependent plasticity, dynamic synapses, neural
processing; dendritic processing, postsynaptic potentials, action potentials;
synaptic transmission, channel kinetics.]

Figure 3.1: Schematic overview of length scales (left) and time scales (right) in
the nervous system. The scale given on the left of each diagram serves as a coarse
guide only: Structures and dynamics can vary greatly depending on the specific
system under consideration. LTP: Long-Term Potentiation, LTD: Long-Term
Depression.

ranging from nanometers (pores in ion channels) to meters (signal conduc-


tion in sensorimotor systems); see Figure 3.1 (left) for an overview. For
example, when observing regular firing in a neuron, the question arises
whether this dynamical behavior is an intrinsic neural property (pacemaker
neuron) or a property of the network in which the neuron is embedded;
see Dean and Cruse (2003) for a brief discussion and further references on
this issue. Another well-known example is the Hodgkin-Huxley model of
the action potential (Hodgkin and Huxley, 1952; Rinzel, 1990) consisting
of four differential equations and further equations for the state-dependent
parameters. These parameters can be understood in terms of ion channel
properties, that is, by considering structures that are at least two orders of
magnitude smaller.2 At each length scale, new structures and phenomena
arise such that a common, global description (like, for example, in turbulent
fluids where self-similarity is observed over several length scales) does not
seem possible.

Neural networks in the brain have a complex topology. Simple structures


that might yield an analytical access (nearest-neighbor coupling, all-to-all
connectivity) are usually not found. Instead, neurons typically form dense
local clusters and may have long-range connections in addition (e. g., Eysel,
1999). The promising new theory of complex networks (e. g., Albert and
Barabasi, 2002; Bornholdt and Schuster, 2003) will probably give new in-
sight into the properties of such networks. A particularly conspicuous feature
of the connectivity in the brain is the high degree of feedback suggesting
that signal processing is by no means a feedforward process from sensory
input to motor output; cf. Figure 1.2 for an impressive example. The dy-
namics in such complex feedback systems is hardly investigated to date.

Finally, processes in the nervous system are usually nonlinear, allowing for
analytical investigations only in rare cases. For example, the subthreshold
dynamics of integrate-and-fire neurons is linear, and a complete solution
theory is available (e. g., Tuckwell, 1988a, chapter 3). The firing thresh-
old, however, makes the system nonlinear and confines most investigations
to approximations or numerical methods. A rare (and therefore famous)
exception is the synchronization study by Mirollo and Strogatz (1990); typ-
ically, the existence and stability of steady states are proven in this article.
Even two-neuron systems modelled by integrate-and-fire neurons show a
rich dynamical behavior such as phase locking and multistability (Catsig-
eras and Budelli, 1992; Gomez and Budelli, 1996) and this behavior occurs
2
The Hodgkin-Huxley equations are also an example of dynamics on different time scales.
The dynamical variables can be separated into fast and slow ones which are subsequently merged
to obtain a simpler model, the so-called Fitzhugh-Nagumo neuron (Fitzhugh, 1961; Nagumo
et al., 1962).

although most complex phenomena of living neurons are neglected in the


neuron model.

In the light of the large range of time and length scales on which interactions
take place in the nervous system, the notion of neural dynamics as it is used here
includes not only the behavior of cells in a narrower sense but encompasses all
the different levels that elucidate phenomena in the nervous system.

Methods. A traditional approach to neural populations employs networks of


discrete model neurons. Examples are given in the two collections of classical
papers compiled by Anderson and Rosenfeld (Anderson and Rosenfeld, 1988;
Anderson et al., 1990): the first paper about neural networks by McCulloch
and Pitts (1943), Rosenblatts perceptron (Rosenblatt, 1958), Kohonen maps
(Kohonen, 1982), backpropagation (Rumelhart et al., 1986), Grossbergs adaptive
resonance theory (ART) in different versions (e. g., Carpenter and Grossberg,
1987), etc.; see also Hertz et al. (1991) for an introduction of such topics.
From a methodological point of view, however, such neural networks are disad-
vantageous as models of brain tissue in the following sense. Each neuron requires
the determination of certain system parameters such as its synaptic weights and
its membrane time constant. Consequently, a network model may have hundreds
or even thousands of parameters according to the usually large number of neu-
rons under consideration. The detailed empirical knowledge necessary to identify
the system parameters is usually not available. Such a model may be adjusted
(usually by varying the synaptic weights on a time scale that is slow compared to
the neural dynamics) to show very different dynamical phenomena. Conversely, a
given phenomenon may be reproduced by networks with a large variety of neural
properties, topologies, etc. Network models employing single neurons have there-
fore only a very limited predictive power and yield almost no insight into the
actual processes in the real neural tissue. Moreover, when having reproduced a
certain phenomenon in such a network model, the large complexity of the dynam-
ics usually prevents a thorough analysis of the model: The incomprehensibility of
the wetware has been replaced by the incomprehensibility of the software. Such
networks have limited explanatory power.
The situation is different for the dynamical systems approach (mostly in
continuous time), which is defined here as employing the methodology of the
mathematical field of nonlinear dynamics; for readable textbooks, see Schuster
(1988) or Argyris et al. (1994). In most cases, systems are described by various
types of differential equations. Such models have only a few parameters which can
usually be interpreted in biological terms. Moreover, the dynamics may allow
for an identification of general mechanisms that govern a certain class of phe-
nomena. On the neural network level, population equations and field equations
are an appropriate means, see for example (Wilson and Cowan, 1972) for tempo-
ral and Wilson and Cowan (1973); Amari (1980) for spatio-temporal population

models. But the dynamical system approach generally applies on all spatial and
temporal scales, for example in short-term synaptic dynamics (e. g., Tsodyks and
Markram, 1997) and Hebbian learning (Eurich et al., 1999:L). A brief overview
of the nonlinear dynamics approach for neural modeling is given by Arbib et al.
(1998, chapter 4).
A second, slightly different approach is located in statistical physics from a
topical and methodological point of view. It encompasses the study of systems
comprising a large number of interacting elements (particles, agents etc.) and
also strives for an understanding of general mechanisms. Typical phenomena
are self-organized criticality and stochastic resonance. They also occur in neural
networks.

In Section 3.2, a few topics in the field of neural dynamics will be highlighted.
Section 3.3 focuses on mechanisms in systems with delays and/or noise and inte-
grates my own research.

3.2 Dynamical System Models in Theoretical


Neuroscience
In this section, a few examples for dynamical system models of brain phenomena
are given. Due to the vast literature, a comprehensive overview is out of reach in
the context of this thesis. Textbooks about neural processing employing dynam-
ical systems are Tuckwell (1988a,b). They include the theory of the membrane
potential, cable theory for passive signal conduction in the dendritic tree, the
integrate-and-fire neuron, Hodgkin-Huxley theory, and stochastic neural activ-
ity. Several chapters of Dayan and Abbott (2001) also deal with dynamics, in
particular in single neurons and neural populations. For a tutorial on neural dy-
namics, see Pawelzik and Eurich (2000). The traditional field of neural networks
is introduced in Hertz et al. (1991) or Rojas (1993).
In the following Sections 3.2.1-3.2.3, the dynamics of single neurons, neural
networks and synapses will be described. Dynamics on the subneuronal level
(dendritic processing, action potentials) is mentioned in Section 3.2.1, whereas
the large-scale dynamics in sensorimotor loops is briefly discussed in connection
with delay in Section 3.3.

3.2.1 Single Neurons


A large number of neuron models has been suggested since the publication of the
first neural network paper in 1943 (McCulloch and Pitts, 1943). Such models dif-
fer dramatically in their complexity; the spectrum ranges from greatly simplified
caricatures useful for modelling large neural networks to compartmental models

described by thousands of differential equations to study the properties of single


neurons.

Models in discrete time. Among the simplest neuron models are those
used in neural networks that map an input to an output or whose dynamics is
defined on a discrete time axis. Figure 3.2 shows a schematic picture of such

Figure 3.2: Model neuron for time-discrete networks: inputs x1, …, xn with
synaptic weights w1, …, wn; Σ indicates a linear superposition of the inputs,
which is passed through the transfer function f to yield the output y.

a model. The neuron receives inputs x1 , . . . , xn from which the total excitation
e = w1 x1 + . . . + wn xn is computed. w1 , . . . , wn are synaptic weights, usually
modelled as real numbers (positive or negative for excitation or inhibition, re-
spectively). A transfer function f translates the excitation to the neural output
y. In the McCulloch-Pitts neuron (McCulloch and Pitts, 1943), neural outputs
y (and, consequently, also x1 , . . . , xn in a network) are binary; in this case, the
transfer function is a Heaviside function with some threshold excitation θ. In the
so-called analog neurons or sigmoidal neurons, y and x1, …, xn take positive val-
ues, and the transfer function is some squashing function, for example the Fermi
function

f(e) = \frac{1}{1 + e^{-k(e - \theta)}} \qquad (3.1)
with a gain parameter k. Instead of computing the excitation e by linear summa-
tion, multiplicative terms wj1 j2 ...jm xj1 xj2 . . . xjm may also be considered (so-called
sigma-pi units); cf. Brause (1991) for an introduction. Discrete-time models are
not used to study single-neuron properties but are preferably employed in the ma-
chine learning community and find applications in classical networks like Hopfield
networks and multilayer perceptrons (e. g., Hertz et al., 1991; Brause, 1991). In
such neural networks, however, the abovementioned problem of multi-dimensional
parameter spaces arises.
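A single unit of this kind reduces to a few lines of code; the sketch below (with arbitrary illustrative weights and gain) implements the weighted summation followed by the Fermi function (3.1):

```python
import numpy as np

def sigmoidal_neuron(x, w, theta=0.0, gain=1.0):
    """Discrete-time model neuron: linear summation followed by the Fermi function (3.1)."""
    e = np.dot(w, x)                             # total excitation
    return 1.0 / (1.0 + np.exp(-gain * (e - theta)))

x = np.array([0.2, 0.9, 0.4])                    # inputs x_1 ... x_n
w = np.array([0.5, -1.0, 2.0])                   # synaptic weights w_1 ... w_n
print(sigmoidal_neuron(x, w, theta=0.1, gain=5.0))
```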

Models in continuous time. Important topics in theoretical neuroscience


consider the behavior of single cells and the time structure of activity in the ner-
vous system on a millisecond scale, for example in the context of neural bursting
behavior, synchronization, or spike-timing dependent plasticity. Models in these

fields require a real time axis and the corresponding appropriate neuron models.
Such models are usually based on the nonlinear electric properties of cell mem-
branes; the dynamical variable for describing neural behavior is the membrane
potential (see Chapters 5 and 6 in Dayan and Abbott, 2001 for a systematic
textbook introduction to various neuron models in continuous time).
A basic distinction can be made between temporal and spatiotemporal neural
behavior. The former case concerns the modelling of a patch of membrane with-
out considering the spatial extent of the neural surface. The purely temporal
behavior also gives rise to simple neural models which neglect a spatial structure
(point neurons). In the latter case, the spread of excitation or inhibition in the
cell is studied in the form of passive conduction (in dendrites) or action poten-
tials (in dendrites and in the axon). The physiology and morphology of extended
cells is considered in so-called multi-compartment models. With respect to this no-
tion, purely temporal models are sometimes also referred to as single-compartment
models.

Models with purely temporal dynamics. In brief, the basic equation for
all single-compartment models is
c_m \frac{dV}{dt} = -i_m + \frac{I_e}{A}, \qquad (3.2)
where V is the membrane potential, cm is the specific membrane capacitance,
im is the membrane current per unit area, Ie is an electrode current, and A is
the total surface area of the cell (Dayan and Abbott, 2001). The different signs
on the right hand side of (3.2) are convention. According to the basic equation,
the membrane creates a capacitance due to the separation of charges carried
by ions like sodium and potassium. The dynamics of the membrane potential is
further determined by various membrane currents that can be modelled as Ohmic
currents or currents with voltage-dependent conductances.

The simplest model resulting from the model class defined by (3.2) was introduced
by Lapicque (1907); see Abbott (1999) for a historical account. It is now referred
to as the integrate-and-fire neuron. In this model, all voltage-dependent (active)
conductances are ignored such that the basic equation does not produce action
potentials but considers the subthreshold behavior only. Equation (3.2) results
in
dV V Ie (t)
+ = , (3.3)
dt C
where V is now the deviation of the membrane potential from the neurons resting
potential, C is the total membrane capacitance, and = rm cm = RC is the
membrane time constant, rm is the specific membrane resistance, and R is the
total membrane resistance. The subthreshold neuron behaves like an electric
circuit consisting of a resistor and a capacitor in parallel (Figure 3.3).

Figure 3.3: The equivalent circuit of a subthreshold integrate-and-fire neuron: a
capacitance C and a resistance R in parallel, with membrane potential V(t) and
input current Ie(t).

In order to incorporate firing behavior into the model, a firing threshold θ is
introduced. If, at some time t_s, V exceeds the threshold, the neuron is said to
fire, and the membrane potential is reset to zero:

V(t_s) \geq \theta \;\Longrightarrow\; V(t_s^{+}) = 0, \qquad (3.4)

where V(t_s^{+}) := \lim_{\varepsilon \to 0,\, \varepsilon > 0} V(t_s + \varepsilon). The firing threshold makes the neuron non-
linear, allowing for analytical results only in rare cases. For example, even a
simple oscillatory input I_e(t) = \sin \omega t yields spike events that cannot be written
down in closed form (Tuckwell, 1988a).
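This is easy to appreciate numerically. The following sketch (an explicit Euler scheme with illustrative parameter values) integrates (3.3) together with the reset rule (3.4) for an oscillatory input current and collects the resulting spike times:

```python
import numpy as np

tau, R, theta = 0.02, 1.0, 1.0       # membrane time constant (s), resistance, threshold
dt, T = 1e-4, 1.0                    # time step and total duration (s)

def lif_spikes(I):
    """Euler integration of tau dV/dt = -V + R*I(t) (equivalent to eq. 3.3 with tau = RC),
    with reset to 0 at threshold as in eq. (3.4)."""
    V, spikes = 0.0, []
    for step in range(int(T / dt)):
        t = step * dt
        V += dt / tau * (-V + R * I(t))
        if V >= theta:               # threshold crossing: register spike, reset
            spikes.append(t)
            V = 0.0
    return spikes

I = lambda t: 1.2 + 0.5 * np.sin(2 * np.pi * 8.0 * t)    # oscillatory input current
print("spike times (s):", np.round(lif_spikes(I), 4))
```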

While the integrate-and-fire neuron (3.3,3.4) describes the simplest possible neu-
ron of the model class defined by (3.2), it can be equipped with additional features
to capture physiological properties in case they are important to account for a
system under study. Such features include the following.

A refractory period. During the absolute refractory period immediately after


a spike, the neuron cannot produce another spike for about one millisecond.
During the subsequent relative refractory period, a higher current input is
necessary to trigger a spike. Both effects can be modelled by introducing a
time-dependent firing threshold θ(t).

A spike-frequency adaptation, i. e., a decrease in the neural firing frequency


upon constant or repetitive input. See Benda and Herz (2003) for a recent
general framework to consider this feature.

Postsynaptic potentials. An incoming spike is often modelled as a delta func-


tion, resulting in a jump of the membrane potential V. A more elaborate
possibility is the use of input currents that produce membrane potentials re-
sembling excitatory or inhibitory postsynaptic potentials (IPSPs/EPSPs).
The input current is often modelled as an alpha function,

I_e(t) = k\, t\, e^{-\alpha t}, \quad \alpha > 0, \qquad (3.5)

with k > 0 and k < 0 for excitation and inhibition, respectively (a short
numerical sketch of this time course follows after this list).

A neuron that incorporates both a refractory function and a synaptic kernel



is the spike response model (Gerstner and van Hemmen, 1993; Gerstner,
1995). The synaptic kernel yields the neural response to incoming spikes
and usually contains a transmission delay. Gerstner (1995) shows that the
integrate-and-fire neuron is a special case of the spike response model.
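As announced above, here is a minimal sketch of the alpha-function input current (3.5); the values of k and α are arbitrary illustration choices:

```python
import numpy as np

def alpha_current(t, k=1.0, alpha=200.0):
    """Alpha-function synaptic input current (3.5): I_e(t) = k * t * exp(-alpha * t) for t >= 0."""
    return np.where(t >= 0.0, k * t * np.exp(-alpha * t), 0.0)

t = np.linspace(0.0, 0.05, 6)                      # time in seconds
print(np.round(alpha_current(t), 6))               # rises, peaks at t = 1/alpha, then decays
```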

More realistic model neurons consider voltage-dependent ionic membrane cur-


rents. Such models are referred to as conductance-based models. The most promi-
nent one is the Hodgkin-Huxley neuron (Hodgkin and Huxley, 1952; Rinzel, 1990)
derived from studies of the squid giant axon. In addition to a membrane cur-
rent with constant conductance, Hodgkin and Huxley consider voltage-dependent
sodium and potassium currents resulting in a total membrane current

i_m = \frac{1}{r_m}(V - E_L) + \bar{g}_K\, n^4 (V - E_K) + \bar{g}_{Na}\, m^3 h\, (V - E_{Na}) \qquad (3.6)

in the basic equation (3.2). The first term describes the leak current, the second
and third terms are the potassium and the sodium current, respectively. E_L,
E_K, E_Na denote reversal potentials, and \bar{g}_K, \bar{g}_{Na} are maximal conductances. The
voltage-dependence enters through the variables n, m, h which are called gating
variables. They describe the probability of a channel subunit to be in a conforma-
tion that corresponds to an open channel. Each gating variable satisfies a Master
equation with voltage-dependent parameters; the exponent with which it occurs
corresponds to the number of subunits in the respective channel. For details of
the Hodgkin-Huxley model, see Johnston and Wu (1995).
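A direct transcription of the membrane current (3.6) is given below. The conductances and reversal potentials are typical textbook values and serve only as an illustration; the gating variables n, m, h are passed in as arguments rather than integrated from their kinetic equations:

```python
def hh_membrane_current(V, n, m, h,
                        g_L=0.3, g_K=36.0, g_Na=120.0,     # maximal conductances (mS/cm^2); g_L = 1/r_m
                        E_L=-54.4, E_K=-77.0, E_Na=50.0):  # reversal potentials (mV)
    """Total Hodgkin-Huxley membrane current i_m of eq. (3.6), in uA/cm^2.

    V is the membrane potential (mV); n, m, h are the gating variables."""
    i_leak = g_L * (V - E_L)
    i_K = g_K * n ** 4 * (V - E_K)
    i_Na = g_Na * m ** 3 * h * (V - E_Na)
    return i_leak + i_K + i_Na

# Example call with illustrative near-resting values of V, n, m, h.
print(hh_membrane_current(V=-65.0, n=0.32, m=0.05, h=0.60))
```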
The Hodgkin-Huxley model produces action potentials and captures the phys-
iology of some types of neurons (for example, action potentials in the giant axon
of the squid for which the theory was developed). Although it describes only
the production of action potentials in the first place, it is also used as a neuron
model.

The spike behavior of cortical neurons, however, is not properly captured by


the Hodgkin-Huxley model. Today, two types of neural models are distinguished
which differ by the following important property. Consider a neuron that is driven
by a constant input current. In the so-called type I neurons, the firing rate rises
continuously as a function of the electrode current above some threshold. In the
type II neurons, however, a discontinuous jump in the firing rate occurs at the
threshold current (see Figure 6.1 in Dayan and Abbott, 2001). The Hodgkin-
Huxley model produces a type II response, while cortical neurons show type I
behavior. Connor and Stevens (1971); Connor et al. (1977) suggested a neural
model (known as the Connor-Stevens model) which considers another membrane
current in addition to the ones given in (3.6), the so-called A-current. It is associ-
ated with two further gating variables. The Connor-Stevens model is a standard
model for cortical neurons.

Further neuron models have been proposed. For example, Fitzhugh (1961);
Nagumo et al. (1962) introduced a simplification of the Hodgkin-Huxley neuron
by combining those dynamical variables that act on a similar time scale, resulting
in a two-dimensional model. Likewise, Abbott and Kepler (1990) perform a
stepwise reduction of the Hodgkin-Huxley dynamics, resulting in an integrate-
and-fire type model or in a binary neuron.

Models with spatiotemporal dynamics. The neuron models introduced in


the previous paragraph are point neurons ignoring the spatial structure of a nerve
cell. Neurons, however, do by no means have an equipotential surface but are
characterized by spatiotemporal signal conduction.
Dendritic processing is often characterized as passive (or diffusive) signal con-
duction.3 Mathematically, dendrites can be described as elongated cylinders of
radius a, and the spatiotemporal dynamics of the membrane potential is governed
by the one-dimensional cable equation

$$c_m \frac{\partial V}{\partial t} = \frac{1}{2 a r_L}\,\frac{\partial}{\partial x}\!\left(a^2\, \frac{\partial V}{\partial x}\right) - i_m + i_e \qquad (3.7)$$

which is augmented by appropriate boundary conditions. x is the spatial variable,


rL denotes the intracellular resistivity, and ie is the electrode current per unit
area. Since the cable equation is linear, a solution theory is available. In fact,
a passive dendritic tree composed of cylinders with variable geometries can be
completely described in the sense that for a given input current at some point in
the dendritic tree, the membrane potential at any other dendritic position in the
future can be written down; cf. for example Abbott et al. (1991). For a thorough
account of this topic, see the commented compilation of papers of Wilfrid Rall
(Segev et al., 1995). Introductions to cable theory in neurons are given in Jack
et al. (1975); Tuckwell (1988a).
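Numerically, the cable equation (3.7) is usually solved by chopping the cable into short segments, which is also the conceptual step behind the compartmental models discussed next. The sketch below integrates a uniform passive cable with sealed ends using an explicit finite-difference scheme and compares the steady-state attenuation with the analytic space constant; all parameter values are illustrative and not taken from the text.

```python
import numpy as np

# Uniform passive cable, explicit finite-difference integration of (3.7)
a, r_L = 2e-4, 100.0        # radius [cm] and intracellular resistivity [Ohm cm]
c_m, r_m = 1.0e-6, 2.0e4    # specific capacitance [F/cm^2] and membrane resistance [Ohm cm^2]
L, n_seg = 0.6, 150         # cable length [cm] and number of segments
dx = L / n_seg
dt = 5e-6                   # time step [s], small enough for the explicit scheme to be stable

V = np.zeros(n_seg)         # membrane potential relative to rest
i_e = np.zeros(n_seg)
i_e[0] = 1e-5               # constant current injection into the first segment

D = a / (2.0 * r_L)         # coefficient of the diffusion term in (3.7)
for _ in range(40000):      # integrate to an approximate steady state
    V_pad = np.concatenate(([V[0]], V, [V[-1]]))   # sealed ends: zero axial flux
    d2V = (V_pad[2:] - 2.0 * V + V_pad[:-2]) / dx**2
    V += dt * (D * d2V - V / r_m + i_e) / c_m

lam = np.sqrt(a * r_m / (2.0 * r_L))               # space constant of the infinite cable
print(f"attenuation over one space constant: {V[int(lam / dx)] / V[0]:.2f}"
      f" (exp(-1) = {np.exp(-1):.2f})")
```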

Spatially extended model neurons can be obtained as compartmental neurons


in the following sense. A neuron is divided into compartments, whereby each
compartment is described by a set of differential equations. The compartments
are subsequently affixed to each other via longitudinal resistances.
An early example is Rall's model neuron (Rall, 1960; see also Chapter 6 in Tuckwell, 1988a for an extensive discussion). The neuron consists of a semi-infinite
cylinder described by (3.7) to which a somatic compartment is attached; the
soma dynamics is governed by the basic equation (3.2), i. e., it is treated as an
equipotential surface without spatial dynamics (lumped soma).
3
See, however, more recent results on action potentials propagating from the soma back into
the dendritic branches (Stuart and Sakmann, 1994; Markram and Sakmann, 1995; Johnston
et al., 1996).

A more frequently employed model class uses the purely temporal dynamics
(3.2) exclusively in all compartments and connects them by longitudinal resis-
tances. An easy approach is the subthreshold integrate-and-fire dynamics (3.3).
One of the compartments (the somatic compartment) has a threshold condition
of the form (3.4) whereas the others are not active.
The number of compartments varies greatly according to the goal of the mod-
elling study. An important step is the use of two-compartment models (Rospars
and Lansky, 1993; Bressloff, 1995), because it is the simplest spatially extended
neuron and yields a behavior that is qualitatively different from the behavior of
an integrate-and-fire neuron. For example, an integrate-and-fire neuron has no
memory in the sense that after a spike, the membrane potential is reset, and
information about the dynamics prior to the spike is lost. If a dendritic compart-
ment is attached, however, its membrane potential is not reset and reflects the
history of the neuron.
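A minimal sketch of such a two-compartment neuron is given below: a passive dendritic compartment is coupled by a conductance to a somatic integrate-and-fire compartment, and only the somatic potential is reset at threshold, so the dendritic potential retains a trace of the input history. The equations and (dimensionless) parameter values are illustrative simplifications and do not reproduce the specific models of Rospars and Lansky (1993) or Bressloff (1995).

```python
import numpy as np

rng = np.random.default_rng(1)

tau_s, tau_d = 10.0, 20.0    # membrane time constants of soma and dendrite [ms]
g_c, theta = 0.1, 1.0        # coupling conductance and somatic firing threshold
dt, T = 0.1, 1000.0

v_s = v_d = 0.0
spikes = []
for step in range(int(T / dt)):
    I_d = 0.3 + 0.1 * rng.standard_normal()                # noisy input current into the dendrite
    v_d += dt * (-v_d / tau_d + g_c * (v_s - v_d) + I_d)   # passive dendritic compartment
    v_s += dt * (-v_s / tau_s + g_c * (v_d - v_s))         # somatic compartment
    if v_s >= theta:          # threshold condition of the form (3.4)
        spikes.append(step * dt)
        v_s = 0.0             # only the soma is reset; v_d keeps a trace of past input

print(f"{len(spikes)} spikes; dendritic potential right after the last spike: {v_d:.2f}")
```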
Some modelling studies strive for a detailed investigation of the signal pro-
cessing in individual neurons. A typical procedure is the exact reconstruction of
a living cell and a subsequent compartmentalization (in particular, of the den-
dritic tree), whereby each compartment is assigned parameters according to the
morphology and the physiology of the corresponding part of the cell. A com-
plete model may thus consist of thousands of compartments. Software packages
like GENESIS (Bower and Beeman, 1998, http://genesis.bbb.caltech.edu/
GENESIS/genesis.html) or NEURON (Hines and Carnevale, 1997, http://www.
neuron.yale.edu) support such an approach.

3.2.2 Networks
In neural network modelling, one may distinguish between dynamics in discrete
time and in continuous time. A second criterion is the modelling of individual
neurons versus the consideration of neural densities or neural fields. Traditional
neural network approaches (Hertz et al., 1991; Rojas, 1993) are mostly based on
single neurons in discrete time. Such models, however, suffer from the difficulties
described in Section 3.1: the large number of parameters does not allow for in-
ferences on biological network structures, and the networks are hard to analyze.
This is true also for new approaches where emphasis is laid on transient activi-
ties, so-called liquid state machines (Maass et al., 2002) and echo state networks
(Jaeger, 2001a,b).
There are some exceptions to this rule. Hopfield networks (Hopfield, 1982,
1984) allow for an imprinting of stable fixed points of the dynamics which may
serve as memory patterns. Analyses are also possible in networks with a sim-
ple, uniform connectivity. Mirollo and Strogatz (1990) consider a network of
excitatory integrate-and-fire type oscillators whose time-continuous dynamics is
mapped to a time-discrete dynamics via a Poincaré map. Their approach yields
an analytical proof that neurons show in-phase synchronization in finite time

for almost all initial conditions. Ernst et al. (1995, 1998); Timme et al. (2002)
extend the model with respect to inhibition and transmission delays. The re-
sulting network behavior is different from the case without delays: neurons form
clusters which are stable only for inhibitory interactions, whereas excitatory in-
teractions yield clusters which decompose and self-organize again with changing
members. More recently, Eurich et al. (2002:La) studied avalanches of spike ac-
tivity in uniform networks of integrate-and-fire neurons and were able to derive
a closed expression for avalanche size distributions in networks of arbitrary size;
see Section 3.3.2.1 for further details.

Alternatively, neural populations can be modelled with population equations,


without referring to single neurons. A distinction between purely temporal be-
havior and spatiotemporal behavior can be made.
Wilson and Cowan (1972) and Knight (1972a,b) give early accounts of the
temporal dynamics of neural populations. Wilson and Cowan (1972) study inter-
actions of excitatory and inhibitory populations. Starting from a detailed model
for the proportions of excitatory and inhibitory cells firing per unit time, they
apply a time coarse graining technique to arrive at a system of two ordinary differen-
tial equations which allows for a rigorous phase-plane analysis. Knight (1972a,b)
considers a uniform population and relates the firing rate of a single neuron to
the population firing rate. More recent contributions (Brunel and Hakim, 1999;
Gerstner, 2000; Fourcaud and Brunel, 2002) strive for a rigorous approach to
population activity and an extension of the Wilson-Cowan formalism. Gerstner
(2000) derives an integral equation for the time evolution in homogeneous popu-
lations of the integrate-and-fire type. The focus is on fast changes of population
activity and the response to an arbitrary time-dependent input current. Brunel
and Hakim (1999) and Fourcaud and Brunel (2002) use a Fokker-Planck equation
for the distribution of membrane potentials in a neural population to characterize
oscillatory regimes and the influence of noise in synaptic input.
The spatiotemporal behavior of neural populations is treated in neural field
models (e. g., Wilson and Cowan, 1972; Amari, 1977, 1980) to study, for exam-
ple, pattern formation. The Wilson-Cowan model class is widely used today, in
particular in applications to cortical processing (see Ernst and Eurich, 2003 for a
recent review). An example is given in Chapter 4 where populations of inhibitory
and excitatory neurons are used to quantitatively explain psychophysical mask-
ing experiments (Herzog et al., 2003:L). Other pattern formation approaches are
in the tradition of the theory of excitable media; a brief review on this topic in
the context of neural networks is given in Section 3.3.2.2.

3.2.3 Synaptic Dynamics and Hebbian Learning


The notion of synaptic dynamics refers to mechanisms occurring on different time
scales. A coarse distinction can be made between short-term dynamics and long-

term dynamics (cf. Figure 3.1). First, synapses are known to show a variability
on short time scales ranging from milliseconds to seconds (e. g., Markram and
Tsodyks, 1996; see also Fisher et al., 1997; Zador and Dobrunz, 1997 for reviews).
Second, synaptic changes are often identified with learning 4 on time scales ranging
from minutes to hours, days, or longer.

Short-term dynamics. Whereas the short-term dynamics of synapses has


been known for some time (reviewed, for example, in Fisher et al., 1997), the
detailed interaction between successive incoming pulses has been studied only re-
cently (Markram and Tsodyks, 1996; Tsodyks and Markram, 1997; Varela et al.,
1997; Markram et al., 1998). In brief, intracortical synapses may show depression

Figure 3.4: Depression (A) and facilitation (B) of excitatory intracortical


synapses. Compiled from Markram and Tsodyks, 1996 (A) and Markram et al.,
1998 (B), cited after Dayan and Abbott (2001).

or facilitation. In a depressive synapse (Figure 3.4A), a train of action potentials


arriving at the synapse yields a series of postsynaptic potentials in the postsy-
naptic neuron whose amplitudes decrease until they reach a limit value. After
a recovery time of about one second, the postsynaptic potential has reached its
original amplitude. A facilitating synapse shows the opposite effect: subsequent
spike events yield an increase in the postsynaptic potential (Figure 3.4B). It is
important to note that the sweeps shown in Figure 3.4 are obtained by averaging
over 30 to 50 repetitions of the experiment with identical input spike trains.
A simple reservoir model (Tsodyks and Markram, 1997; Tsodyks et al., 1998)
accounts for the synaptic dynamics. Some amount of resource (for example,
a reservoir of neurotransmitter) can be partitioned in three states: effective,
inactive, and recovered. Each incoming action potential activates a fraction of
the recovered resource, which is subsequently inactivated and recovers with a time
constant of approximately 800 ms. The model suggests that the release probability is an important
synaptic factor which regulates the amount of depression.
4
But see for example Eurich et al. (1999:L) where learning is also associated with a dynamics
of delays.

The existence of dynamic synapses may have far-reaching consequences for the
dynamics and signal processing in neural populations. From a dynamical point
of view, it provides new degrees of freedom resulting in a rich dynamical behavior
(e. g., Tsodyks et al., 1998). From a signal processing point of view, synapses
may serve as a memory device reflecting past activity of presynaptic neurons
(Maass and Markram, 2002).
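The following sketch captures a depressing synapse in the spirit of the reservoir model described above: each presynaptic spike uses a fraction of the recovered resource, which then becomes unavailable and recovers with a slow time constant. The update rule and all parameter values are illustrative choices rather than the exact formulation of Tsodyks and Markram (1997).

```python
import numpy as np

tau_rec = 800.0          # recovery time constant [ms]
u       = 0.5            # utilization (release probability) per spike
A       = 1.0            # absolute efficacy scaling the postsynaptic potential

spike_times = np.arange(0.0, 200.0, 25.0)   # regular 40 Hz presynaptic train

R, last_t = 1.0, 0.0     # recovered fraction of the resource
for t in spike_times:
    R = 1.0 - (1.0 - R) * np.exp(-(t - last_t) / tau_rec)  # recovery between spikes
    psp = A * u * R      # amplitude of the postsynaptic potential for this spike
    R -= u * R           # the used fraction becomes unavailable
    last_t = t
    print(f"t = {t:5.1f} ms   PSP amplitude = {psp:.3f}")
```

A facilitating synapse can be sketched analogously by letting the utilization variable itself increase transiently with every spike.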

Long-term dynamics. Plasticity in the nervous system has long been associ-
ated with the dynamics of synapses. Hebb (1949) formulated the following famous
postulate about learning: When an axon of cell A is near enough to
excite a cell B and repeatedly or persistently takes part in firing
it, some growth process or metabolic change takes place in one or
both cells such that A's efficiency, as one of the cells firing B,
is increased. Self-organized modifications of the synaptic site have since been
the main candidate for adaptivity in the nervous system, although Hebb also
considered other possible physiological realizations including axonal or dendritic
growth. The Hebbian postulate triggered a worldwide research program which
has been going on for more than 50 years (Seung, 2000).
In the 1970s, the phenomena of Long-Term Potentiation (LTP) and Long-
Term Depression (LTD) were discovered, i. e., the increase (LTP) or decrease
(LTD) in the amplitude of postsynaptic potentials following a concurrent strong
stimulation of the presynaptic and the postsynaptic cell; see Brown et al. (1988,
1990); Bliss and Collingridge (1993) for reviews. Whereas these investigations
revealed only a coarse picture of the conditions under which plasticity can be
observed, the precise temporal mechanisms on a single-spike basis have only re-
cently been discovered (e. g., Bi and Poo, 1998; Zhang et al., 1998b; Egger et al.,
1999; Bi and Poo, 1999). Basically, a critical time window (now referred to as the
learning window ) of about 50 ms around postsynaptic spike events exists during
which incoming spikes result in a change of the average postsynaptic potential.
For example, in Xenopus tectum neurons (Zhang et al., 1998b), action potentials
arriving prior to a postsynaptic event yield an enhancement of postsynaptic ac-
tivity while action potentials arriving later trigger a long-term depression. The
temporal conditions for LTP and LTD, however, may differ depending on the
system under study; a review is given in Abbott and Nelson (2000). Engert and
Bonhoeffer (1999) show that LTP is accompanied by the formation of new spines
on the dendrites of postsynaptic neurons, thus establishing the tight correlation
between structure and dynamics in the nervous system. Finally, Rioult-Pedotti
et al. (2000) convincingly demonstrate in a combined behavioral and electro-
physiological experiment in rats that the occurrence of LTP is indeed a neural
correlate of learning, a conjecture which had never been shown before.
In theoretical studies, synaptic dynamics in time-discrete neural network mod-
els prevailed for a long time. Most types of such neural networks have two kinds

of dynamics: a neural dynamics on a fast time scale, and learning, that is,
changes in the synaptic weights, on a slow time scale. Hebbian learning is one
type of unsupervised learning; a typical learning rule is given by

$$w_{ji}(k+1) = w_{ji}(k) + \varepsilon\, x_i\, y_j \,, \qquad (3.8)$$

where $w_{ji}(k+1)$ and $w_{ji}(k)$ denote the synaptic weight between presynaptic
neuron $i$ and postsynaptic neuron $j$ at times $k+1$ and $k$, respectively; $\varepsilon$ is
a small learning rate; $x_i$ is the input to neuron $j$ from neuron $i$; and $y_j$ is the
output of neuron $j$. A typical problem of Hebbian learning with a rule like (3.8) is
that weights may diverge; constraints like clipping or normalization of a neuron's
synaptic weights have to be applied to prevent this (cf. Miller and MacKay, 1994 for
a corresponding discussion). A review on Hebbian learning rules in discrete time
is given by Brown et al. (1990).
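As a minimal illustration of a discrete-time Hebbian rule of the form (3.8) together with a constraint that prevents weight divergence, consider the following sketch. The multiplicative normalization used here is one common choice and is not meant to reproduce the specific constraints analyzed by Miller and MacKay (1994); inputs, network size and learning rate are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy inputs with strong pairwise correlations; Hebbian learning picks up the
# dominant correlation while the normalization keeps the weights bounded.
n_inputs, n_steps, eps = 20, 5000, 0.01
C = 0.8 * np.ones((n_inputs, n_inputs)) + 0.2 * np.eye(n_inputs)
L = np.linalg.cholesky(C)

w = rng.uniform(0.0, 0.1, n_inputs)          # weights w_ji for a single output neuron
for k in range(n_steps):
    x = L @ rng.standard_normal(n_inputs)    # presynaptic activities x_i
    y = w @ x                                # linear postsynaptic output y_j
    w += eps * x * y                         # Hebbian update as in (3.8)
    w /= np.linalg.norm(w)                   # multiplicative normalization of the weight vector

print("weight vector norm:", round(float(np.linalg.norm(w)), 3))
print("first five weights:", np.round(w[:5], 3))
```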
Theoretical investigations on Hebbian learning in continuous time are based
on the new experimental findings on learning windows. Since the first such learn-
ing study motivated by the auditory system of barn owls (Gerstner et al., 1996),
numerous investigations have been published with applications in coincidence de-
tection, sequence learning, and path learning in navigation (e. g., Abbott and
Blum, 1996; Eurich et al., 1999:L; Kempter et al., 1999; Mehta et al., 2000). For
an example, see Section 3.3.1.3.

Synapses are a convincing example of subcellular structures whose dynamics takes


place on many time scales, thus enriching and enabling processes in the nervous
system ranging from fast neural interactions to long-term memory formation.

3.3 Mechanisms in Systems with Delays and


Noise
This section deals with the dynamics of physiological and neural systems under
the influence of interaction delays and/or noise with an emphasis on the mech-
anisms on which observed phenomena are based. It integrates my own work in
this field.

3.3.1 Systems with Interaction Delays


Delays are a ubiquitous feature in physiological systems. Among the most promi-
nent examples is a class of systems now subsumed under the notion of dynamical
diseases (Mackey and Glass, 1977; Mackey and an der Heiden, 1982; Glass and
Mackey, 1979; Guevara et al., 1983; Mackey and Milton, 1987; Glass and Mackey,
1988; Belair et al., 1995; Milton and Black, 1995). These are diseases charac-
terized by abnormal temporal organization (Glass and Mackey, 1988,

chapter 9). Such a physiological system is described as a dynamical system that


shows some behavior (usually homeostasis, that is, stable fixed point behavior)
under normal conditions. Due to pathological changes, parameters of the system
change, resulting in bifurcations which alter the dynamics qualitatively: fixed-
point behavior may be replaced by limit-cycle behavior (oscillations) or even by
chaotic behavior. One such parameter is a typical time delay characterizing a
recurrent feedback loop. As this delay increases, a Hopf bifurcation eventually
occurs resulting in a transition from steady-state behavior to oscillations; a fur-
ther increase in the delay may yield a series of bifurcations similar to a cascade
of period-doubling bifurcations in finite difference equations leading to chaotic
behavior (Mackey and Glass, 1977). Examples for such dynamical diseases in-
clude Cheyne-Stokes respiration (Mackey and Glass, 1977; Kryger and Millar,
1991) and hematological disorders like cyclical neutropenia (Haurie et al., 1998;
Bernard et al., 2003, and references therein). For an introduction to phenom-
ena associated with delays and the corresponding dynamical system modelling in
physiological and other biological systems, see for example an der Heiden (1979);
Hadeler (1979); Glass and Mackey (1988); Glass et al. (1988); Milton et al. (1989).
Mathematically, such systems can be described by delay differential equations, see
Kolmanovskii and Nosov (1986); MacDonald (1989); Hale and Lunel (1993) for
mathematical overviews of this topic.
In the nervous system, delays in single neurons result from synaptic transmis-
sion (movement of the vesicles to the presynaptic membrane after arrival of an
action potential, transmitter diffusion, rise of the excitatory or inhibitory post-
synaptic potential), from diffusive and active dendritic processing (Agmon-Snir
and Segev, 1993; Stuart and Sakmann, 1994), and from the conduction of action
potentials (especially in non-myelinated fibers); cf. Figure 3.5. Furthermore, de-
lays in signal processing are present in neural feedback loops, and feedback is
an abundant feature of the neural organization of brain tissue (Felleman and Van
Essen, 1991)5 . For example, tongue-projecting salamanders have four represen-
tations of the visual field in each hemisphere of the optic tectum: two originat-
ing from ipsilateral and contralateral retinal projections (Wiggers et al., 1995b),
and two further indirect representations from recurrent tectal-isthmo-tectal loops
(Wiggers and Roth, 1991). The latter have delays of 10 to 30 ms compared to the
direct projections. It has been suggested that salamanders use these delayed rep-
resentations for the computation of three-dimensional trajectories of fast-moving
prey objects (Wiggers and Roth, 1991).
A first computational consequence of transmission delays in the nervous sys-
tem has already been considered in Section 2.3.2.4: signals take considerable time
to reach higher cortical areas (for review, see Nowak and Bullier, 1997) such that
the fast image processing observed in psychophysical studies restricts possible
neural codes (e. g., Thorpe et al., 1996, 2001). Second, neural feedback systems
5
See also the footnote on page I-7.


Figure 3.5: Delays arising in a single neuron. At the synapses (1), it is in par-
ticular the postsynaptic potential (EPSP or IPSP) that takes time to evolve.
The graph shows two EPSPs as modelled by alpha functions; EPSP (a) has a
higher amplitude than EPSP (b) and it is also faster. Signal processing in the
dendritic tree (2) is also associated with delays which may differ depending on
the morphology of the neuron. In the colliculus, for example, the passive spread
of postsynaptic potentials takes 5 to 25 ms (A. Schierwagen, personal communication). Spikes backpropagating into the dendritic tree take 3 to 4 ms to reach the
most distal branches in cortical pyramidal cells (Stuart and Sakmann, 1994). In
the axon, conduction velocities range from 0.1 to 20 m/s in unmyelinated fibers
to 100 m/s in myelinated fibers (Reichert, 1990) depending, for example, on the
diameter of the fibre and properties of the myelin sheath. The photo (taken from
Hsiao et al. (1984)) shows the cross section of the optic nerve of a cat; it suggests
that conduction velocities may differ from fibre to fibre.

with delays may show a rich dynamical behavior ranging from steady state be-

havior and limit-cycle oscillations to chaotic behavior and phenomena that can
be ascribed to the presence of both noise and delays. For example, the synchro-
nization of spike activity in neural populations, as modelled by integrate-and-fire
neurons, is fundamentally different in systems with delay (Ernst et al., 1995,
1998) as compared to networks without delay (e. g., Mirollo and Strogatz, 1990):
without delays, excitatory connectivity results in an in-phase synchronization of
all oscillators whereas in the presence of delays, clusters of synchronous neurons
emerge and decay after some time; see also Timme et al. (2002). In the follow-
ing paragraphs, a few topics in the context of interaction delays will be briefly
discussed.

3.3.1.1 Delays in Sensorimotor Control Loops


Sensorimotor control loops are expected to have pronounced delays because the
whole peripheral and central processing takes place between a sensory stimulus
and the corresponding motor action. Few dynamical models, however, explicitly
consider delays (Ohira and Milton, 1995; Eurich and Milton, 1996:L; Tass et al.,
1996; Milton et al., 2000; Peterka, 2000; Yao et al., 2001; Cabrera and Milton,
2002; Bormann et al., in press). One reason why typical dynamical effects of de-
layed feedback may be less frequent than expected is the following: studies in
motor control hint at the existence of forward internal models, i. e., mental models
of the environment that predict the behavior of the body and the world from the
current sensory input (see Wolpert and Ghahramani (2000) for a review). Such
models could be exploited to circumvent sensorimotor delays and compute motor
actions as if a reaction was immediate. This holds of course only in predictable
situations: Unexpected disturbances will result in a delayed response. For pertur-
bations of upright stance, for example, see Nashner (1976, 1977); Nashner et al.
(1979); Nashner and Woollacott (1979); Moore et al. (1988); Woollacott et al.
(1988); Massion and Woollacott (1996).
A few experimental setups allow a sensorimotor control loop to be opened
in order to include some artifical sensory feedback as a result of the observed
motor behavior. For example, the sensory feedback can be further delayed to
probe the system dynamics. This technique has been successfully applied to
study the pupil light reflex (Longtin and Milton, 1988, 1989a,b; Longtin et al.,
1990), visuomotor tracking with a finger (Vasilakos and Beuter, 1993), and the
visually guided control of a target on a computer screen by moving a computer
mouse (Bormann et al., in press).

The effects of noise and delay in human postural sway. As an example for
the effect of a delay in a sensorimotor feedback system when it is combined with
noise, we consider the control of simple upright stance in human subjects. Data
are obtained either from a force platform that records the body center-of-pressure
(Collins and De Luca, 1993, 1995a; Collins et al., 1995c,d) or from an ultrasonic

device that records the position of a microphone attached to the body (Franke,
2003). Two examples obtained by the latter method are shown in Figure 3.6.

Figure 3.6: Two examples of postural sway movements of a 29-year-old healthy


male subject with open eyes. Each trial lasted for 120 s. Left: orbits in the
medio-lateral (x) vs. antero-posterior (y) plane; right: the corresponding time
series.

Collins and De Luca (1993, 1994, 1995b) use so-called stabilogram-diffusion


plots to characterize the body movement: the logarithmic plot of the two-point correlation function
$$K(\Delta t) = \left\langle \bigl(x(t) - x(t+\Delta t)\bigr)^2 \right\rangle \,, \qquad (3.9)$$
where $x$ refers either to the medio-lateral or to the antero-posterior direction, the brackets indicate a time average along a single trajectory, and $\Delta t$ is the time increment. Figure 3.7 shows a typical example.
The data can be interpreted in terms of a correlated random walk that has a
scaling law $\langle K(\Delta t)\rangle \sim \Delta t^{2H}$, where $0 < H < 1$ is a scaling exponent (Mandelbrot
and Ness, 1968). Two-point correlation functions like the one shown in Figure 3.7
then suggest the presence of multiple scaling regions. A previous model by Chow
and Collins (1995) which describes the human body as an elastic polymer pinned
to the upright position focuses on these scaling regions. However, the data have
more characteristic features like oscillatory components that are not captured in
this model.
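The two-point correlation function (3.9) and the scaling exponent H are straightforward to estimate from a recorded trajectory. The sketch below does this for a surrogate random walk, for which H should come out near 0.5, instead of real sway data; sampling interval and trajectory length are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(3)

dt_sample = 0.01                                    # sampling interval [s]
x = np.cumsum(rng.standard_normal(12000)) * np.sqrt(dt_sample)   # surrogate sway signal

def two_point_correlation(x, max_lag):
    """K(dt) = <(x(t) - x(t + dt))^2>, time-averaged along the trajectory."""
    lags = np.arange(1, max_lag + 1)
    K = np.array([np.mean((x[:-m] - x[m:]) ** 2) for m in lags])
    return lags * dt_sample, K

lags, K = two_point_correlation(x, max_lag=200)

# scaling exponent H from K(dt) ~ dt^(2H) via a log-log fit
slope, _ = np.polyfit(np.log(lags), np.log(K), 1)
print(f"estimated H = {slope / 2.0:.2f} (0.5 expected for an uncorrelated random walk)")
```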
In Eurich and Milton (1996:L), the body is modelled as an inverted pendulum
(Figure 3.8) subject to a time-delayed restoring force and in addition driven by


Figure 3.7: A two-point correlation function, $K(\Delta t)$, for a healthy young subject.
Vertical lines segment $K(\Delta t)$ into different scaling regions (from left to right):
$H \approx 0.86$, $H \approx 0.29$, $H \approx 0$ (Chow and Collins, 1995). Experimental data were
supplied by and published with permission of J. Collins. From Eurich and Milton
(1996:L).

Figure 3.8: The body (mass m, height R of the center of mass) as an inverted pendulum.

Gaussian white noise $\xi(t)$:
$$m R^2\, \ddot{\theta} - m g R \sin\theta = -f\bigl(\theta(t-\tau)\bigr) + \sqrt{2d}\,\xi(t) \,. \qquad (3.10)$$
The model is one-dimensional: the movements in medio-lateral and in antero-
posterior direction can be regarded as independent of each other. Assuming an
overdamped motion, one obtains after rescaling and specification of a stepwise
restoring force6

$$\dot{x} = \begin{cases} x + \sqrt{2D}\,\xi(t) + C & \text{if } x(t-\tau) < -1 \\ x + \sqrt{2D}\,\xi(t) & \text{if } -1 \le x(t-\tau) \le 1 \\ x + \sqrt{2D}\,\xi(t) - C & \text{if } x(t-\tau) > 1 \end{cases} \qquad (3.11)$$

The model has three parameters, a (rescaled) delay $\tau$, a (rescaled) restoring force
$C$, and a (rescaled) noise intensity $D$. There are two thresholds at $-1$ and $1$
6
The dynamics are not sensitive with respect to the exact mathematical form of the restoring
force.

beyond which a control force is exerted; in the central region, the body is subject
to gravitation and noise only.
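The stochastic delay-differential equation (3.11) can be integrated with an Euler-Maruyama scheme and a history buffer for the delayed threshold test, as in the following sketch. The parameter values are illustrative rather than the physiologically fitted ones; for long delays or weak restoring forces the trajectory escapes, corresponding to a fall.

```python
import numpy as np

rng = np.random.default_rng(4)

tau, C, D = 0.2, 2.0, 0.01       # rescaled delay, restoring force and noise intensity
dt, T = 0.001, 50.0
steps = int(T / dt)
lag = int(tau / dt)              # delay expressed in time steps

x = np.zeros(steps)
x[:lag + 1] = 0.5                # constant initial function on [-tau, 0]
for k in range(lag, steps - 1):
    x_delayed = x[k - lag]       # the state the delayed controller reacts to
    force = -C if x_delayed > 1.0 else (C if x_delayed < -1.0 else 0.0)
    x[k + 1] = x[k] + dt * (x[k] + force) + np.sqrt(2.0 * D * dt) * rng.standard_normal()

print("fraction of time spent beyond the detection thresholds:",
      round(float(np.mean(np.abs(x) > 1.0)), 2))
```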
In the noiseless case D = 0, solutions of (3.11) can be calculated analytically.
The bifurcation diagram of the system is shown in Figure 3.9.

1.0 E

O3
O2
0.5
c

a
O1
b

0.0
1 2 3 4
C

Figure 3.9: Behavior of (3.11) in the absence of noise. Depending on $\tau$ and $C$,
three different types of oscillations (O1-O3) and escape behavior (E) occur; the
latter is interpreted as a fall of the subject. Bifurcation lines $\tau_a, \ldots, \tau_d$ can be
readily computed. The diagram holds for initial conditions chosen in the central
region between $-1$ and $1$. See Eurich and Milton (1996:L).

Depending on $\tau$ and $C$, the system shows three different types of periodic orbits, two of which (O2
and O3) comprise both thresholds at $-1$ and $1$. O1 is an oscillation around a
threshold, i. e., two co-existing orbits around the left and right sensory detection
thresholds exist in the system (see Figure 4 in Eurich and Milton (1996:L)). For
sufficiently long delays and/or small restoring forces C, the motion is unbounded
which corresponds to the loss of stable upright stance. For $D \neq 0$, physiological
parameters $D$, $\tau$ and $C$ can be found numerically by fitting the two-point
correlation functions to real data. The following results have been obtained:
Parameter values for the delay and the detection threshold in the step-
wise feedback function as obtained from the model closely match empirical
values.
For a single subject, the delay and the threshold are constant, while single
trajectories differ only in the noise intensities. This can be expected from
the physiological interpretation of the parameters: the delay and the detec-
tion threshold are fixed in a subject (or may change on a longer time scale)
while noise intensity (resulting from fatigue, the consumption of coffee etc.)
may vary on short time scales.

In all cases that were tested, the system seems to be in the regime of
oscillation O1 of the bifurcation diagram.

A mechanism suggested by our model is the following: The detection thresh-


old (or, more generally, a nonlinearity) and the delay in the sensorimotor control
loop result in two coexisting periodic orbits. Additive noise in the system yields
noise-induced transitions between these orbits.

Noise and delays in balancing tasks. Milton et al. (2000) and Cabrera and
Milton (2002) studied the balancing of sticks on the fingertip in human subjects.
The motion of the stick is typically composed of small-amplitude movements
(laminar phases) interrupted by larger-amplitude corrective movements. Both
the duration of the laminar phases and the power spectrum of deviations of the
stick from the vertical position hint at on-off intermittency as the underlying dy-
namical phenomenon (e. g., Platt et al., 1993; Heagy et al., 1994; Venkataramani
et al., 1996). A dynamical model with delay put forward in Cabrera and Milton
(2002) employs parametric noise (rather than additive noise as in the postural
sway) and locates the system close to a stability boundary in the corresponding
parameter space. The authors argue that fluctuations in the angle of the stick
resemble a random walk for which the mean value is approximately zero; i. e., the
stick is statistically stabilized.

Bormann et al. (in press) carry this line of research forward by introducing a
new experimental paradigm: the balancing of a target on a computer screen
by manual control of a computer mouse. This system is simpler than the stick
balancing because it operates in 2D. Furthermore, the artificial computer world
allows for an arbitrary manipulation of the dynamics of the target. For example,
additional delays can be introduced into the sensorimotor feedback loop to study
its effects on the dynamics. Finally, data are simpler to obtain just by recording
the mouse and target positions. A series of experiments suggests that stick bal-
ancing and mouse balancing are governed by the same statistics. A current issue
is the search for common motor control schemes and their underlying dynamical
mechanisms.

3.3.1.2 Delay Distributions


Most theoretical investigations in neuronal or other physiological feedback sys-
tems assume a single time delay $\tau_0$, i. e., the dynamics of some quantity $x$ is
governed by a delay-differential equation of the form

$$\dot{x}(t) = f\bigl(x(t),\, x(t-\tau_0)\bigr) \qquad (3.12)$$

(e. g., Mackey and Glass, 1977; Mackey and an der Heiden, 1984; Milton et al.,
1990; Eurich and Milton, 1996:L; Bormann et al., in press). However, this case

may be oversimplified: current system changes may depend on the system be-
havior during a whole past time interval rather than on a single past instance
only. An important question is therefore whether theoretical predictions change when delay
distributions are considered.
A straightforward extension of (3.12) is a linear superposition of past instances,
$$\dot{x}(t) = f\!\left[ x(t),\, \int_0^\infty x(t-\tau)\,\kappa(\tau)\, d\tau \right] , \qquad (3.13)$$

where $f[\cdot]$ is a functional (indicated by the square brackets) which may of course
be nonlinear, and $\kappa(\tau)$ is a kernel function quantifying the dependence on the
past. Equation (3.12) is obtained from (3.13) for the special choice

$$\kappa(\tau) = \delta(\tau - \tau_0) \,, \qquad (3.14)$$

where $\delta(\cdot)$ is the Dirac delta function.


Bernard et al. (2001) consider a linear differential equation with distributed
delays,
$$\dot{x}(t) = -\alpha\, x(t) - \beta \int_0^\infty x(t-\tau)\,\kappa(\tau)\, d\tau \qquad (3.15)$$
with $\alpha, \beta \in \mathbb{R}$. A basic result is that distributed delays increase the stability of
the steady state of a delay-differential equation when compared to a discrete delay,
in the sense that the parameter regime for stable behavior increases. Distributed
delays are used in Hearn et al. (1998); Haurie et al. (1999, 2000) to account for
the dynamics of hematopoiesis in cyclical neutropenia in grey collies.
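Numerically, a distributed delay as in (3.13) or (3.15) simply replaces the single delayed value by a weighted sum over a buffer of past values. The sketch below compares a singular delay with a rectangular kernel of the same mean in the linear equation (3.15); the parameter values, the kernel shape and the symbols are choices made for this example. With these values the singular delay lies beyond the oscillatory instability whereas the distributed delay of the same mean remains stable, in line with the stabilizing effect quoted above.

```python
import numpy as np

alpha, beta = 1.0, 3.0
tau_mean, width = 1.0, 1.6           # mean delay and full width of the rectangular kernel
dt, T = 0.001, 40.0
steps = int(T / dt)
max_lag = int((tau_mean + width / 2.0) / dt)

def simulate(lags, weights):
    x = np.ones(steps)               # constant initial history
    for k in range(max_lag, steps - 1):
        x_del = np.dot(weights, x[k - lags])              # discretized integral term
        x[k + 1] = x[k] + dt * (-alpha * x[k] - beta * x_del)
    return x

# (i) singular delay: the kernel is a delta function at tau_mean
x_single = simulate(np.array([int(tau_mean / dt)]), np.array([1.0]))

# (ii) distributed delay: rectangular kernel with the same mean
lags = np.arange(int((tau_mean - width / 2.0) / dt), max_lag + 1)
weights = np.full(lags.size, 1.0 / lags.size)
x_distr = simulate(lags, weights)

print("late-time amplitude, singular delay:   ", round(float(np.ptp(x_single[-5000:])), 3))
print("late-time amplitude, distributed delay:", round(float(np.ptp(x_distr[-5000:])), 3))
```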

A hippocampal model. Eurich et al. (2002:Lb) investigate the dynamics of


a neuronal feedback system, the CA3-basket cell-mossy fibre system in the hip-
pocampus, under the influence of two effects:

Conduction delays are distributed according to a distribution in fibre ge-


ometries (esp. fibre diameters);

the actual distribution of conduction delay times depends on the activity of


the system, because the neuronal activation threshold depends on the fibre
diameter and therefore on the delay of the corresponding fibre (Jack et al.,
1975).

The stoichiometry of the inhibitory transmitter-receptor interaction yields so-


called mixed feedback, i. e., a non-monotonic gain function which is known to
produce complex behavior (see, for example, Milton et al., 1990). In its rescaled
form, the dynamics is governed by the equation
$$\frac{dv}{dt} = \gamma\,(e - v) - \beta\, \frac{f}{1+f^n} \,, \qquad (3.16)$$

where $v$ is the rescaled membrane potential of an average neuron, $e$ is a constant
input, $\gamma$, $\beta$ and $n$ are constants, and the functional $f$ in the inhibitory mixed
feedback term considers both effects mentioned above:
$$f[v] = \int_0^\infty \bigl\{ v(t-\tau) - \theta(\tau) \bigr\}\, H\bigl(v(t-\tau) - \theta(\tau)\bigr)\, \kappa(\tau)\, d\tau \,. \qquad (3.17)$$

In (3.17), $H$ is the Heaviside step function, and $\kappa(\tau)$ is a kernel function like in (3.13)
adapted to the distribution of fibre diameters. The rescaled activation threshold
$\theta(\tau)$ for the generation of an action potential with a rescaled delay time $\tau$ yields
an activity-dependent delay distribution in the sense that the actual minimal
delay depends on the system's activity in the past.
Analytical calculations and numerical computations suggest that the system
shows a bistability in the physiologically plausible parameter regime: two coexisting
fixed points corresponding to firing frequencies in the range of 10 to 25 Hz
and 30 to 100 Hz. They might correspond to the $\beta$ and $\gamma$ frequencies, respectively
(Traub et al., 1999). In comparison with a similar model by Mackey and an der
Heiden (1984), which employs neither a distribution of delay times nor a state-dependence
thereof, higher order bifurcations and a transition to chaos are lost
in the physiological regime as the variance of the distribution is progressively increased.
That is, the dependence of the dynamics on a whole past interval yields
a simpler behavior than the case of a singular delay time, just like in the linear
case mentioned above (Bernard et al., 2001).

Smoothing with delay distributions. Thiel et al. (2002); Thiel and Eurich
(2002) and Thiel et al. (2003:L) study the dynamical effects of the width of the
delay distribution more systematically in various nonlinear systems. As an exam-
ple, consider again the negative feedback system (3.16,3.17) but with a constant
activation threshold, ( ) 0 . Figure 3.10 illustrates that the dynamics of the
system becomes simpler as the length of the past interval that determines the
current behavior increases. Starting from apparently chaotic behavior, trajecto-
ries become periodic, whereby the period number decreases as a function of the
width of the rectangular distribution $\kappa(\tau)$. The scenario resembles an inverse
period-doubling cascade. This phenomenon is further investigated in Figure 3.11,
where the period number of oscillations is grey-scale coded as a function of $\sigma$ and
the gain of the feedback term. It is clearly visible that complex oscillations (or
even chaotic behavior) are restricted to feedback with delay distributions having a
small variance.
A comparable behavior can be found in the Mackey-Glass system (Mackey
and Glass, 1977) with distributed delays (Thiel et al., 2003:L). Finally, a transi-
tion from oscillatory to fixed point behavior occurs in a model for single-species
dynamics in population ecology. The population density N satisfies the delay-


Figure 3.10: Distributions of delay times (left) used to compute the membrane
potential $v$ as a function of time (center) in simulations of the hippocampus
model with distributed delays. Time is measured in units of the mean time lag
$\tau_m = 1$ of the recurrent signals; the interval shown corresponds to 1 s in real time.
Additionally, phase portraits $v(t-\tau_m)$ as a function of $v(t)$ are shown (right),
obtained by two-dimensional embedding with delay coordinates. $\sigma = 0.0$ (top
row) denotes the case of a singular time lag $\tau_m = 1$. Transients are omitted. Bars
above the potential plots in the center column exemplify the integration interval
of past potentials for the given distribution, arrows mark the instant when this
integrated feedback acts on the change of $v$. From Thiel et al. (2003:L).

differential equation
$$\frac{dN(t)}{dt} = r\, N(t)\left[ 1 - \frac{1}{K}\int_0^\infty N(t-\tau)\,\kappa(\tau)\, d\tau \right] \qquad (3.18)$$

with a time-shifted gamma distribution $\kappa(\tau)$ and $r$, $K$ const. A typical example


is shown in Figure 3.12.
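The following sketch integrates (3.18) with a singular delay and with a (plain rather than time-shifted) gamma kernel of the same mean; r, K and the kernel parameters are illustrative choices. With these values the singular delay produces a limit cycle whereas the distributed delay lets the population approach its equilibrium, mirroring the qualitative behavior of Figure 3.12.

```python
import numpy as np

r, K = 1.6, 1.0
tau_mean = 1.5
dt, T = 0.005, 200.0
steps = int(T / dt)

def simulate(lags, weights):
    pop = np.full(steps, 0.5)                      # constant initial history
    for k in range(lags.max(), steps - 1):
        N_del = np.dot(weights, pop[k - lags])     # integral term of (3.18)
        pop[k + 1] = pop[k] + dt * r * pop[k] * (1.0 - N_del / K)
    return pop

# (a) singular delay at tau_mean
N_single = simulate(np.array([int(tau_mean / dt)]), np.array([1.0]))

# (c) gamma-distributed delay with mean tau_mean and variance 1.0
shape, scale = tau_mean**2 / 1.0, 1.0 / tau_mean   # shape*scale = mean, shape*scale^2 = variance
lags = np.arange(1, int(6.0 * tau_mean / dt))      # delays from dt up to 6*tau_mean
taus = lags * dt
kernel = taus**(shape - 1.0) * np.exp(-taus / scale)
kernel /= kernel.sum()                             # normalized discretized kernel
N_gamma = simulate(lags, kernel)

for label, traj in (("singular delay", N_single), ("gamma kernel  ", N_gamma)):
    print(f"{label}: late-time oscillation amplitude = {np.ptp(traj[-8000:]):.3f}")
```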
What mechanism is responsible for the simplification of the dynamics if the
singular-delay case is compared to the case of a wider distribution of time lags?


Figure 3.11: Period of membrane potential fluctuations of a simulated average


hippocampus neuron using different combinations of gain and temporal dis-
persion of the recurrent inhibition. Dark areas mark parameter combinations
that induce complex periodic (or even irregular) fluctuations with high period
number. Parameter combinations used in the simulations shown in Figure 3.10
are marked by crosses; circles are irrelevant here. From Thiel et al. (2003:L).

Like in a system employing a low-pass filter, an averaging, or smoothing, over


past behaviour is likely to account for the observed effects. In fact, the variance
in the feedbacks time course is reduced if the width of the delay distribution
is increased (data not shown). However, it should be noted that the functional
delay differential equations studied here are fundamentally different from simple
filters: The solution space of a delay-differential equation is infinite-dimensional,
as can be easily seen from the functional space of initial conditions. Furthermore,
by way of the feedback the smoothed signal itself enters the future dynamics, i. e.,
the time-average is part of the differential equation. An important observation
is that the length of the integration interval is smaller than the period time
of the resulting oscillations (see Figure 3.10). This means that fluctuations of
the dynamical variable are by no means totally averaged out; rather, a slight
smoothing of the feedback is sufficient to abolish complex behaviour.

The results obtained for the different feedback systems suggest that one may
obtain an inappropriate description of the dynamics if only mean values of inter-
action delays are considered. A diversity in time lags, as it is frequently observed
in nature, may have a major impact on the system's time evolution. In particular,
the irregular behavior observed in recurrent feedback models with delays could
be a model artifact resulting from oversimplified assumptions on the nature of
the feedback.


Figure 3.12: Left: Time course of the population density $N(t)$ of a single species
simulated by numerically solving (3.18) for 100 arbitrary time units. Right:
Distributions $\kappa(\tau)$ of delay times used for the simulations. The variance $\sigma^2$ of
the distributions increases from top to bottom while the mean delay $\tau_m = 1.5$ is
the same in all three cases. (a): $\sigma^2 = 0.0$, corresponding to a singular delay time;
(b): $\sigma^2 = 0.5$; (c): $\sigma^2 = 1.0$. As $\sigma^2$ is increased, oscillations in population density
are reduced in amplitude compared to the case of a singular delay. Limit cycles
can even be replaced by an approach to a stable equilibrium if the variance is
sufficiently large. From Thiel et al. (2003:L).

3.3.1.3 Delay Adaptation

Unlike in computers, in the brain we find a tight relationship between morphol-


ogy and physiology of neural tissue on the one hand and the dynamics of (or
computations that are performed by) neural populations on the other hand.
One such physiological element is the presence of delays in neurons and neural feedback loops.
For example, if conduction delays are such that two presynaptic neurons provide
synchronous input to a common target neuron, each incoming spike has a higher
impact. This suggests that delays in the nervous system might not be
arbitrary but that they may be adapted to some computational requirement.
There are indeed several studies indicating that delays are adapted. The

most prominent example is the auditory system of barn owls (e. g., Knudsen
et al., 1977; Knudsen and Konishi, 1978; Konishi, 1986; Carr and Konishi, 1990;
Konishi, 1991; Wagner and Takahashi, 1992; Carr, 1993) where azimuthal object
localization has been shown to rely on the time (or phase) difference of sound
arriving at both ears. The mechanism, consisting of signals that are phase-locked
to the input, a series of delay lines, and binaural neurons that realize coincidence
detectors, was originally proposed by Jeffress (1948).7 Time-sensitive object
localization is supposed to play a role also in other species and other sensory
modalities; see Carr and Konishi (1990) for a review.
There exist further hints of adapted delays, in particular of equalized delays,
because this special case of adaptation can be easily recognized as such. Stanford
(1987) measured the conduction time of X-cells in the cat retina. He noticed a
considerable variability in the intraretinal conduction times (due to the missing
myelination and the different distances between the somata and the blind spot)
but only small total conduction time differences: extraretinal signal conduction
to the LGN compensates for the intraretinal time differences. Innocenti et al.
(1994) analyzed axonal arborization in cat visual neurons with callosal axons. A
detailed reconstruction of the axons and a corresponding computational study
on conduction times suggests that the geometry of the fibres is adapted to the
synchronous activation of spatially separate target columns.
How does this adaptation evolve? The usual hypothesis is that unspecific tis-
sue (neurons, dendritic branches, synapses) grows and is subsequently selected by
Hebbian mechanisms; see for example Gerstner et al. (1996) for a model on the
auditory system of the barn owl. A different hypothesis is an adaptivity of the de-
lays themselves, for example through an adaptation of the overall neural (dendritic,
somatic, axonal, synaptic) morphology, through the positioning of synapses, or
even through an adaptive myelination (see Demerens et al., 1996; Stevens et al.,
1998, for the latter). A few models have been developed that employ the adap-
tation of time delays in machine learning (Day and Davenport, 1993; Baldi and
Atiya, 1994; Draye et al., 1996) and also in a biological context (Vibert et al.,
1994; Eurich et al., 1997a, 1998; Huning et al., 1998; Eurich et al., 1999:L; Steuber
and Willshaw, 1999; Eurich et al., 2000; Tversky and Miikulainen, 2002).
Eurich et al. (1999:L) consider self-organized delay adaptation according to
the scheme of spike-timing dependent plasticity (STDP) as it has recently been
discovered (e. g., Bi and Poo (1998); Zhang et al. (1998b); Bi and Poo (1999);
Egger et al. (1999); for modelling and reviews, see Gerstner et al. (1996); Song
et al. (2000); Abbott and Nelson (2000); Bi and Poo (2001)). The experimental
studies have shown that the strengthening or weakening of synapses depends on
the relative timing of pre- and postsynaptic action potentials within a short time
interval of approximately 50 ms, the so-called learning window. In Eurich et al.
7
More recent investigations suggest that the auditory system of mammals does not fit this
model (McAlpine and Grothe, 2003).

(1999:L), a framework comprising two delay adaptation mechanisms as limiting


cases is suggested:
A selection of delays accomplished by increasing and decreasing synaptic
weights that are associated with fibres of different delays;
A shift of the delay times themselves, whereby the actual biophysical mech-
anisms remain undetermined.
Both limiting cases will be associated with different learning windows.


Figure 3.13: (a) Schematic overview of the network under study. It is composed
of $N$ presynaptic neurons and a single postsynaptic neuron. $t_{i,k}$ denotes the
$k$th spike time in the $i$th presynaptic neuron, $\tau_i$ is the conduction delay, and $\varepsilon_i$
the corresponding synaptic weight ($i = 1, \ldots, N$). (b) Examples for the window
functions $W_\varepsilon$ and $W_\tau$ for delay selection and delay shift, respectively. From
Eurich et al. (1999:L).

An overview of the model is shown in Figure 3.13. The network under consideration
consists of $N$ presynaptic neurons. The $k$th spike time of the $i$th neuron
is denoted as $t_{i,k}$. After a conduction delay $\tau_i$ in the $i$th fibre, a spike contributes
to the postsynaptic membrane potential with a synaptic weight $\varepsilon_i$ ($i = 1, \ldots, N$).
The postsynaptic neuron is modelled as an integrate-and-fire neuron (Tuckwell,
1988a, chapter 3). After each spike of the postsynaptic neuron at time $t_s$, learning
takes place. The delay shift rule reads $\Delta\tau_i \propto W_\tau(t_{i,k} + \tau_i - t_s)$ ($i = 1, \ldots, N$),
where $W_\tau(\cdot)$ represents the delay shift learning window, an example of which is
shown in Figure 3.13b. Numerical investigations show that this network self-
organizes in a way that spikes arrive simultaneously at the postsynaptic neuron,
thus enhancing the impact of single spikes. An example is shown in Figure 3.14.
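A minimal version of this delay shift dynamics is sketched below: the presynaptic neurons fire simultaneously and periodically, the integrate-and-fire neuron spikes once the delayed inputs have accumulated, and every delay is then shifted according to an antisymmetric window evaluated at its arrival time relative to the postsynaptic spike. The window shape, the learning rate and all other parameter values are illustrative choices, not those of Eurich et al. (1999:L).

```python
import numpy as np

rng = np.random.default_rng(5)

N, T_period, n_periods = 100, 50.0, 400
eta, sigma_W = 0.05, 10.0            # learning rate and width of the learning window [ms]
w, theta, tau_m = 0.05, 1.0, 10.0    # synaptic weight, threshold, membrane time constant
dt = 0.1

def W_tau(x):
    """Antisymmetric learning window: positive for early arrivals (x < 0)."""
    return -x / sigma_W * np.exp(-x**2 / (2.0 * sigma_W**2))

tau = rng.uniform(5.0, 15.0, N)      # initial conduction delays [ms]
for _ in range(n_periods):
    # one period: all presynaptic neurons spike at t = 0, inputs arrive at t = tau_i
    v, t_s = 0.0, None
    for k in range(int(T_period / dt)):
        t = k * dt
        v += dt * (-v / tau_m) + w * np.sum(np.abs(tau - t) < dt / 2.0)
        if v >= theta:
            t_s = t                  # postsynaptic spike time
            break
    if t_s is not None:              # Hebbian delay shift, cf. the rule given above
        tau += eta * W_tau(tau - t_s)

print(f"std of the delays: initially approx. 2.9 ms, finally {np.std(tau):.2f} ms")
```

With these settings the spread of the delay distribution shrinks over learning, qualitatively as in Figure 3.14.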

To obtain analytical results on attractors of the delay dynamics and their sta-
bility, a continuous formulation of the problem is used. For simplicity, we assume

Figure 3.14: Delay shift dynamics with Poissonian input. (a) Schematic drawing
of the spiking behavior of three (out of a total of 400) presynaptic neurons. Action
potentials occur independently in all neurons according to a Poisson point process
with rate $r_b$. During presentation of a stimulus (bar), this rate is simultaneously
increased to $r_p$ in all presynaptic neurons. (b) Randomized delay distribution
$p(\tau)$ prior to learning and (c) after 1,700 learning steps. In this case, the delays
become very similar to each other due to the simultaneous shift in the rate; the
remaining variance of the distribution is due to the stochastic nature of the spikes.
Adapted from Eurich et al. (1999:L).

that spikes are generated simultaneously in all presynaptic neurons; furthermore
they fire periodically with period $T$. The input of the postsynaptic neuron is
described by two functions, $\rho(\tau, t)$ and $\varepsilon(\tau, t)$, for the delays and for the weights,
respectively. $\rho(\tau, t)\,d\tau$ gives the fraction of connections with delays in $[\tau; \tau + d\tau]$,
and $\varepsilon(\tau, t)$ is the average weight of connections with delay $\tau$. Assume that $\rho$ and $\varepsilon$
change on a slow timescale $t$ (the timescale of delay adaptation) such that their
temporal development is determined by an average over an ensemble of presynaptic
input patterns; the faster timescale of neuronal dynamics is described by
the variable $\tau$. The input density at the postsynaptic neuron, $J(\tau, t)$, provided
by the synapses at time $\tau$ after presentation of a pattern takes the form
$$J(\tau, t) = \varepsilon(\tau, t)\,\rho(\tau, t) \,. \qquad (3.19)$$
The input density, $J(\tau, t)$, as a function of $\tau$ results in a distribution of spike
times of the postsynaptic neuron, $P(\tau, t) \propto \sum_m \delta(\tau - \tau^m)$, where $\tau^m$ denotes
the $m$-th spike time. The firing of the postsynaptic neuron in turn acts on the
weights $\varepsilon(\tau, t)$ and the delays $\rho(\tau, t)$ via one of the learning rules described in the
previous section. The dynamics of the input are governed by two simultaneous
equations: a balance equation for the input density,

$$\frac{\partial}{\partial t} J(\tau, t) = -\frac{\partial}{\partial \tau}\bigl(J(\tau, t)\, v(\tau, t)\bigr) + Q(\tau, t) \,, \qquad (3.20)$$
and a continuity equation for $\rho(\tau, t)$ indicating the conservation of the number
of neural connections,

$$\frac{\partial}{\partial t} \rho(\tau, t) = -\frac{\partial}{\partial \tau}\bigl(\rho(\tau, t)\, v(\tau, t)\bigr) \,. \qquad (3.21)$$
The drift velocity, $v(\tau, t)$, and the source term, $Q(\tau, t)$, are defined according to
Hebbian principles. Here we consider the special case of delay shift only; the case
of delay selection is described in Eurich et al. (1999:L).
During delay shift, the weights are not modified and the source term, $Q(\tau, t)$,
on the right hand side of (3.20) vanishes. The dynamics are governed by (3.21),
where the drift velocity, $v = d\tau/dt$, of the delays realizes the Hebbian adaptation,
$$v(\tau, t) := \eta \int W_\tau(\tau - \tau')\, P(\tau', t)\, d\tau' \,, \qquad (3.22)$$
and $\eta$ denotes the learning rate. For the distribution of spike times we assume
a linear neural response, $P(\tau, t) \propto J(\tau, t)$; it has been shown that adding a
small amount of noise to the input approximately linearizes the behavior (Yu
and Lewis, 1989), so this assumption is valid if there is some random background
activity. Figure 2 in Eurich et al. (1999:L) suggests that this can be true also
without additional noise.
In Eurich et al. (1999:L) it was shown that this delay shift dynamics has
an equilibrium solution $\rho(\tau, t) = \delta(\tau - \tau_0)$ provided that $\sum_{n=-\infty}^{\infty} W_\tau(nT) = 0$,
which is the case for antisymmetric window functions. The solutions form a
one-dimensional manifold described by a parameter $\tau_0 \in [0; T]$ which is a delay
offset common to all input neurons. The Lyapunov functional $L[\rho] = \int \rho(\tau, t)\,\bigl(\tau -
\int \rho(\tau', t)\,\tau'\, d\tau'\bigr)^2 d\tau$ yields the result that the equilibrium solutions are marginally
stable in the $\tau_0$ direction and stable in the other directions provided that $W_\tau(x) >
0$ for $x < 0$ and $W_\tau(x) < 0$ for $x > 0$. In other words, the self-organizing
process yields equalized delays as they are observed in biological neural systems.
Figure 3.15 illustrates this result.

In summary, the effect of interaction delays in the nervous system is neither
trivial nor negligible when studying biologically plausible dynamics in
neural tissue.

Figure 3.15: Numerical iteration of (3.21) with $W_\tau$ as in Figure 3.13b. (a) Initial
distribution of the density of neural connections with delay $\tau$, $\rho(\tau, 0) = 1 + \xi(\tau)$,
where $\xi(\tau)$ is Gaussian white noise. (b)-(d) $\rho(\tau, t)$ for $t = 12.0$, 18.0 and 20.0,
respectively. Two peaks transiently emerge and then merge into a single delta
peak which corresponds to the equilibrium solution of the distribution of delays.
$T = 1.0$, $\eta = 0.1$, $c = 0.2$. From Eurich et al. (1999:L).

3.3.2 Systems with Noise


In the models for sensorimotor control described in Section 3.3.1.1, noise played
an important role, for example by yielding transitions between coexisting periodic
orbits. In this section, we consider further phenomena occurring in systems with
noise. The consideration of noise in neural models is motivated by numerous
electrophysiological experiments where neural responses to identical stimuli differ
from trial to trial. A straightforward approach to deal with this kind of behavior
is the use of stochastic methods: estimation theory or information theory in
neural coding, and stochastic modeling (e. g., stochastic differential equations) in
neural dynamics. This is independent of the origin of the observed irregularity. In
fact, the question if noise is already present in single neurons (Lass and Abeles,
1975; Croner et al., 1993) or if it is a network effect and primarily reflects a
complex inaccessible dynamics (Shadlen and Newsome, 1998; Arieli et al., 1996),
is nontrivial and still under debate. Furthermore, the reliability of single neurons
probably differs among brain regions and in some cases also depends on the
actual input (Mainen and Sejnowski, 1995; Harris and Wolpert, 1998; Cecchi
et al., 2000).
While modeling studies of neural populations with noise have already been
mentioned in Section 3.2, we will consider a few topics here that can be subsumed

under the title of benefits of noise. This notion emphasizes the observation that
irregular dynamical behavior is not necessarily a nuisance in the sense that it
reduces signal transmission rates or interferes with some clear-cut dynamics by
blurring trajectories. For example, Silberberg et al. (in press) investigate the
response of neural populations to uncorrelated inputs and find both empirically
and theoretically that the variance of the input, regarded as a signal, enables
faithful transmission of graded signals with a temporal resolution that is much
higher than in the transmission of the mean input. A second example is the
observation by Yu and Lewis (1989) that the nonlinear process of spike generation
in neuron models is linearized by adding noise.
Two topics in statistical physics that are relevant also in neuroscience have
been discussed intensively in the literature for approximately two decades: Stochas-
tic resonance and self-organized criticality or, more generally, avalanches and the
occurrence of power-law behavior. The number of publications in each field is of
the order of $10^3$ which makes even a coarse overview impossible in the context of
this thesis. We will therefore briefly discuss two examples.

3.3.2.1 Avalanches of Spike Activity


An avalanche of spike activity can be phenomenologically defined simply as the
co-occurrence of spikes within a certain time interval, for example within a few
milliseconds in cortical tissue. As a mechanism we assume here that later events
during the avalanche are triggered by earlier events in the same neural population.
From a biological point of view, the study of avalanches in neural networks is
motivated by the observation of synchronized activity in cortex (e. g., Singer,
1993; Singer and Gray, 1995) which may (Eckhorn et al., 1988; Gray et al., 1989;
Engel et al., 1991, 1992) or may not (Phillips and Singer, 1997) be accompanied
by oscillatory behavior; see Konig and Schillen (1991); Koch and Schuster (1992);
Lin et al. (1998) for modeling studies.
Eurich et al. (2002:La) study a simple neural network that allows for an
analytical understanding of such phenomena for arbitrary system size, i. e., for an
arbitrary number N of neurons. From the point of view of statistical physics, the
model is a contribution to the literature on self-organized criticality and avalanches
in non-conservative models.
The notion of self-organized criticality (SOC) was introduced by Bak et al.
(1987, 1988) to account for the frequent occurrence of power-law behavior in
systems involving interacting threshold elements driven by slow external input.
Examples include such diverse systems as piles of rice (Frette et al., 1996), the
Game of Life (Bak et al., 1989), friction (Feder and Feder, 1991), and sound
generated in the lung during breathing (Alencar et al., 2001). Self-organized
criticality is studied in so-called sandpile models where locally connected elements
receiving random input self-organize into a critical state characterized by power-
law distributions of avalanches without the explicit tuning of a model parameter as

is the case in phase transitions (e. g., Bak et al., 1988; Kadanoff et al., 1989;
Grinstein et al., 1990; Pietronero et al., 1994; Sornette et al., 1995; Vespignani
et al., 1996; Vespignani and Zapperi, 1997; Dhar and Ramaswamy, 1989; Dhar,
1990; Hwa and Kardar, 1989; Tsuchiya and Katori, 2000). Analytical results
were derived for sandpile models (Dhar and Ramaswamy, 1989; Dhar, 1990), and
it was shown that the existence of a conservation law is a necessary prerequisite
to obtain SOC (Hwa and Kardar, 1989; Tsuchiya and Katori, 2000). A minimal
model exhibiting SOC is presented in Nagler et al. (1999).
A second class of models inspired by earthquake dynamics employs continu-
ous driving and nonconservative interaction between the elements of the system
(Feder and Feder, 1991; Olami et al., 1992). In the Olami-Feder-Christensen
(OFC) model (Olami et al., 1992), the amount of dissipation is controlled by a
parameter α, and power-law behavior of avalanches occurs for a wide range of
α values. Subsequent investigations emphasized the importance of boundary con-
ditions and tied the existence of the observed scaling behavior to synchronization
phenomena induced by spatial inhomogeneities (Christensen and Olami, 1992;
Socolar et al., 1993; Grassberger, 1994; Corral et al., 1995; Middleton and Tang,
1995). More specifically, Lise and Jensen (1996) introduced a random-neighbor
interaction in the OFC model to avoid the buildup of spatial correlations. Fur-
ther analysis indeed revealed that the random-neighbor OFC model does not
display SOC in the dissipative regime (Broker and Grassberger, 1997; Chabanol
and Hakim, 1997; Kinouchi et al., 1998). In these avalanche models with non-
conservative interaction, analytical results were obtained only for system size
N → ∞, i. e., in the Abelian case (Broker and Grassberger, 1995, 1997). The
model introduced by Eurich et al. (2002:La) circumvents the problem of system
boundaries and also provides analytical access for finite system sizes N.

In brief, N perfect integrate-and-fire neurons are identically coupled via synaptic


weights αU/N, where U is the firing threshold and α is a coupling parameter.
Unlike in the random-neighbor models cited above, randomness is introduced
through a random external input: At each time step, the membrane potential of
a randomly chosen neuron is increased by an amount ΔU. If an element exceeds
its firing threshold due to such an external input, an avalanche starts: Internal
inputs are sent to all neurons which may in turn trigger further spike events. During
an avalanche, there is no external input, corresponding to a separation of fast
and slow time scales that is frequently considered in the SOC literature. During
an avalanche, the elements become unstable and relax in a fixed order determined
by the state of the system immediately prior to the avalanche. Therefore, the
system is strictly Abelian. Here we are primarily interested in the distributions
p(L; N, α) of avalanche sizes L, i. e., the number of firing events in the network
during an avalanche.
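A minimal simulation of this model can be sketched as follows (Python/NumPy). The sketch is based on the verbal description above; the reset rule and the choice of parameter values (taken from the caption of Figure 3.16) are illustrative assumptions and need not match the implementation of Eurich et al. (2002:La) in every detail.

```python
import numpy as np

def simulate_avalanches(N=1000, alpha=0.9, U=1.0, dU=0.022,
                        n_avalanches=10000, seed=0):
    """Perfect integrate-and-fire network with global coupling alpha*U/N.
    Returns an array of avalanche sizes L (number of firings per avalanche)."""
    rng = np.random.default_rng(seed)
    v = rng.uniform(0.0, U, size=N)           # membrane potentials
    sizes = []
    while len(sizes) < n_avalanches:
        # slow external drive: a randomly chosen neuron receives dU
        v[rng.integers(N)] += dU
        if v.max() < U:
            continue                          # no avalanche triggered
        # fast avalanche dynamics: no external input until it terminates
        L = 0
        while True:
            above = np.flatnonzero(v >= U)
            if above.size == 0:
                break
            L += above.size
            v[above] -= U                     # reset by subtracting the threshold
            v += above.size * alpha * U / N   # internal input to all neurons
        sizes.append(L)
    return np.array(sizes)

# sizes = simulate_avalanches(); a histogram of `sizes` approximates p(L; N, alpha)
```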
Figure 3.16 shows avalanche size distributions and avalanche duration distri-
butions for different values of α. N = 10000 is chosen as the system size, but the

[Figure 3.16: double-logarithmic plots of the avalanche size distributions p(x; N, α) (numerical and analytical) and the avalanche duration distributions pd(x; N, α) versus x/N for four values of α; see caption below.]

Figure 3.16: Probability distributions of avalanche sizes, p(x; N, α), and avalanche
durations, pd(x; N, α), in the subcritical (a; α = 0.8), critical (b; α = 0.99),
supracritical (c; α = 0.999), and multi-peaked (d; α = 0.99997) regime. (a-
c) Solid lines and symbols denote the analytical and the numerical results for
the avalanche size distributions, respectively. In (d), the solid line shows the
numerically calculated avalanche size distribution. The dashed lines in (a-d)
show the numerically evaluated avalanche duration distributions. In all cases,
the presented curves are temporal averages over 10^7 avalanches with N = 10000,
ΔU = 0.022, and U = 1. From Eurich et al. (2002:La).

curves look very similar for any other choice of N . Four qualitatively different
regimes can be distinguished which are termed subcritical, critical, supracritical,
and multi-peaked. For small values of α, subcritical avalanche size distributions
exist which can be approximated by the general expression

p(L; N, \alpha) \propto L^{-\gamma} \exp(-L/\xi) ,                                  (3.23)

where γ is an exponent independent of N, and ξ = ξ(N, α) describes the range
of avalanche sizes over which power-law behavior is observed (Figure 3.16a). For
fixed N, ξ(N, α) is a monotonically increasing function of α as long as α < α_c.
For α ≈ α_c, which we refer to as the critical case, the system has avalanche
distributions with an approximate power-law behavior with exponent 3/2 from
L = 1 almost up to the size of the system, where the usual exponential cutoff
is observed (Figure 3.16b). For finite N, α_c lies in a regime where the system
is dissipative. Above the critical value α_c, avalanche size distributions become
nonmonotone (Figure 3.16c). Such supracritical curves have a minimum at some
intermediate avalanche size. Finally, multi-peaked avalanche size distributions
like the one shown in Figure 3.16d are characterized by multiple firing of single
neurons.

For the subcritical, critical and supracritical regimes, a closed expression for
p(L; N, α) is given by

p(L; N, \alpha) = \binom{N-1}{L-1}\, \frac{\alpha^{L-1} L^{L-2}}{N^{L-1}} \left(1 - \frac{\alpha L}{N}\right)^{N-L-1} \frac{N(1-\alpha)}{N - \alpha(N-1)}
                                      for 1 ≤ L ≤ N .                (3.24)

This is obtained via a geometrical argument in the N-dimensional configuration
space of the system comprising the neurons' membrane potentials; see Eurich
et al. (2002:La) for details.
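For data analysis, the subcritical form (3.23) can be fitted to an empirical avalanche-size histogram; the sketch below uses a crude unweighted least-squares fit on the logarithmic histogram and is meant as an illustration only, not as the analysis performed in the original publication.

```python
import numpy as np

def fit_subcritical(sizes):
    """Fit log p(L) = c - gamma*log(L) - L/L0 (cf. Eq. 3.23) to an
    avalanche-size histogram by linear least squares. Returns (gamma, L0)."""
    L_max = int(sizes.max())
    counts = np.bincount(sizes, minlength=L_max + 1)[1:L_max + 1].astype(float)
    L = np.arange(1, L_max + 1)
    mask = counts > 0
    y = np.log(counts[mask] / counts.sum())
    A = np.column_stack([np.ones(mask.sum()), -np.log(L[mask]), -L[mask]])
    coeff, *_ = np.linalg.lstsq(A, y, rcond=None)
    c, gamma, inv_L0 = coeff
    return gamma, 1.0 / inv_L0

# gamma, L0 = fit_subcritical(simulate_avalanches(alpha=0.8))
```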

The occurrence of large avalanches can be related to synchronous spike activ-


ity and oscillatory behavior in networks; see also Koch and Schuster (1992) for a
study with binary neurons. Figure 3.17 shows raster plots of a network of N = 100
leaky integrate-and-fire neurons. Time is continuous in this model. There are two
parameters governing the occurrence of synchrony and oscillations: the coupling
strength α and the rate 1/Δt at which external inputs are given to randomly
chosen neurons. A large Δt corresponds to rare but strong input, whereas a small
Δt yields small but more frequent input. The cross-correlograms show that syn-
chronization without oscillations occurs for large coupling and large Δt, whereas
more frequent external input results in statistical oscillations, irrespective of the
size of the neural couplings.
Such network behavior can in principle be tested experimentally in small,
densely connected neural networks, for example in cortical columns.
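The cross-correlograms used to distinguish these regimes can be computed from pairs of spike trains as in the following sketch; the binning, the circular correlation and all parameter values are simplifications chosen for brevity.

```python
import numpy as np

def cross_correlogram(spikes_a, spikes_b, T_ms, bin_ms=1.0, max_lag_ms=100.0):
    """Cross-correlation histogram of two spike trains (spike times in ms,
    recording length T_ms). A central peak indicates synchrony; additional
    side peaks indicate oscillatory firing. Uses circular shifts for simplicity."""
    edges = np.arange(0.0, T_ms + bin_ms, bin_ms)
    a = np.histogram(spikes_a, bins=edges)[0].astype(float)
    b = np.histogram(spikes_b, bins=edges)[0].astype(float)
    a -= a.mean()
    b -= b.mean()
    lags = np.arange(-int(max_lag_ms / bin_ms), int(max_lag_ms / bin_ms) + 1)
    ccf = np.array([np.dot(a, np.roll(b, k)) for k in lags]) / len(a)
    return lags * bin_ms, ccf
```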

3.3.2.2 Wave Propagation in Inhomogeneous Media


The second example for the influence of irregularity in neural tissue is related to
the mechanism of stochastic resonance; see Moss et al. (1994); Moss and Wiesen-
feld (1995); Bulsara and Gammaitoni (1996); Gammaitoni et al. (1998) for in-
troductions and reviews. The term is given to a phenomenon that is manifest in
nonlinear systems whereby a weak signal can be amplified and optimized by the
assistance of noise. The effect requires three ingredients:

[Figure 3.17: raster plots (neuron index versus time t in ms) for the parameter combinations α ∈ {0.6, 0.92} and Δt ∈ {20/N, 1/N} ms, with insets showing averaged cross-correlation functions (CCF, arbitrary units); see caption below.]

Figure 3.17: Raster plots showing the firing dynamics of a network of N =
100 integrate-and-fire neurons. Each spike is drawn as a small black tick at the
corresponding time t and at the index of the neuron which emitted that spike.
The coupling parameter is chosen to be α = 0.6 (top left), 0.92 (top right),
0.6 (bottom left), and 0.92 (bottom right), while the time interval between two
external inputs is given by Δt = 20/N, 20/N, 1/N and 1/N ms, respectively.
The insets show the mean over the cross-correlation functions from 300 pairs
selected from the N = 100 neurons. The cross-correlation functions have been
scaled arbitrarily but identically for all four insets. The firing threshold and the
time constant of the neurons were determined to yield an output rate of 10 Hz
for an uncoupled neuron. The critical coupling strength for N = 100 neurons is
α_c = 0.9. From Eurich et al. (2002:La).

An activation barrier or threshold;


A weak coherent input (such as a periodic signal);8
A source of noise that is inherent in the system, or that adds to the coherent
input.
8
But see Collins et al. (1995b,a); Heneghan et al. (1996) for stochastic resonance with arbi-
trary input signals.

Then some intermediate noise intensity exists which yields a resonance-like be-
havior; it is a matching of the time scales of the periodic forcing and the threshold
crossing due to the noise input (Gammaitoni et al., 1995, 1998). This noise level
allows for an optimal inference of properties of the coherent input from the su-
perthreshold signals alone. Bezrukov and Vodyanoy (1997) extend the notion of
stochastic resonance to non-dynamical systems even without response thresholds.
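The interplay of the three ingredients can be illustrated with a static-threshold caricature rather than a bistable dynamical system: a subthreshold sinusoid plus Gaussian noise is passed through a simple level-crossing element, and the output power at the driving frequency is evaluated as a function of the noise strength. All names and parameter values in the sketch are arbitrary.

```python
import numpy as np

def sr_curve(A=0.4, threshold=1.0, f=5.0, T=20.0, dt=1e-3, seed=0,
             noise_levels=np.linspace(0.05, 2.0, 20)):
    """Output power at the driving frequency f of a level-crossing detector
    fed with a subthreshold sinusoid (A < threshold) plus Gaussian noise.
    The power typically passes through a maximum at intermediate noise."""
    rng = np.random.default_rng(seed)
    t = np.arange(0.0, T, dt)
    signal = A * np.sin(2 * np.pi * f * t)
    freqs = np.fft.rfftfreq(t.size, d=dt)
    k = np.argmin(np.abs(freqs - f))          # index of the driving frequency
    powers = []
    for sigma in noise_levels:
        x = signal + sigma * rng.standard_normal(t.size)
        out = (x > threshold).astype(float)   # 1 whenever the threshold is exceeded
        powers.append(np.abs(np.fft.rfft(out - out.mean()))[k] ** 2)
    return noise_levels, np.array(powers)
```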
Stochastic resonance has been suggested to be a mechanism at work in a large
number of systems, for example the ice ages (Benzi et al., 1981), lasers (McNa-
mara et al., 1988), superconducting quantum interference devices (Rouse et al.,
1995), and physiological systems including somatosensory perception (Collins
et al., 1996, 1997), human blood pressure regulation (Hidaka et al., 2000), human
muscle spindles (Cordo et al., 1996), mechanoreception in the crayfish (Douglass
et al., 1993) and electroreception in the paddlefish (Russell et al., 1999; Green-
wood et al., 2000).
Neurons as threshold elements have also attracted attention. Longtin et al.
(1991); Longtin (1993) and Longtin et al. (1994) compare the interspike inter-
val histogram of a sinusoidally stimulated auditory nerve from a cat with the
return-time distributions of a periodically driven bistable potential, which is a
toy model for stochastic resonance. Wiesenfeld et al. (1994) propose an approx-
imate theory for modeling neuron firing in the presence of noise and a periodic
stimulus; the system shows stochastic resonance and compares well with prop-
erties of a crayfish mechanoreceptor. Bulsara et al. (1996) use an approximate
image-source method to solve the Fokker-Planck equation in the presence of a
weak and slowly varying sinusoidal input to an integrate-and-fire neuron. The
same neuron model is treated in Plesser and Geisel (1999) in the context of
Markov processes to avoid the unrealistic assumption of a stimulus reset after
each spike.

Stochastic resonance occurs also in spatially extended systems and thus con-
tributes to the field of pattern formation in excitable media; see for example Gierer
and Meinhardt (1972); Winfree (1987); Murray (1993); Koch and Meinhardt
(1994) for overviews of wave propagation and pattern formation. In biological
tissue and networks, the propagation of large-scale activity such as spiral waves,
target waves and planar waves has been demonstrated in heart tissue (Nagai
et al., 2000), cortical spreading depression (Basarsky et al., 1998), LGN (Kim
et al., 1995) (the latter two in vitro), and in cultured networks of glial cells (Jung
et al., 1998). In addition, there is much evidence that cortical tissue is a medium
capable of pattern formation which is indicated by its distance-dependent con-
nectivity structure (e. g., Hellwig, 2000), by psychophysics (Wilson et al., 2001),
migraine aura dynamics (Dahlem and Muller, 2003) and modelling studies on
visual hallucinations (Ermentrout and Cowan, 1979; Bressloff et al., 2001). Re-
views on cortical waves and the modelling of neural networks as spatio-temporal
pattern-forming systems are given by Ermentrout and Kleinfeld (2001) and
Ermentrout (1998), respectively.


Travelling waves can be elicited and supported by noise as has been demon-
strated in the photosensitive Belousov-Zhabotinsky reaction (Kadar et al., 1998)
and in cultured glial cells (Jung et al., 1998). Theoretical investigations on this
issue are found in Jung and Mayer-Kress (1995a); Fohlmeister et al. (1995); Lind-
ner et al. (1998). Jung and Mayer-Kress (1995b) and Neiman et al. (1999) show
that pattern formation and phase synchronization are optimal for an intermedi-
ate noise level, a spatiotemporal stochastic resonance effect.

Whereas most studies consider dynamical noise, i. e., a noisy input to layered
neural networks, the existence of spiral waves in disordered neural networks has
also been demonstrated (Milton et al., 1993; Chu et al., 1994). For example,
Chu et al. (1994) study a lattice of integrate-and-fire neurons with a distance-
dependent connectivity. The actual weight between two neurons at a distance r
is determined by the number q of synaptic connections between them; q is drawn
from a Poisson distribution whose mean decays exponentially with r. Such net-
works can carry spiral waves.
Eurich and Schulzke (in press:L) consider the question whether large-scale activity
(spiral waves, target waves, planar waves) in such disordered media spreads
most easily at some intermediate disorder level; this would correspond to a new
variant of stochastic resonance.
The model consists of a two-dimensional network of excitatory integrate-and-
fire neurons with a refractory period, whereby iterations are performed in discrete
time steps Δt. An interaction between units takes place when a neuron j firing
at time t gives an input current to another neuron i according to its connectivity
matrix Kji. Explicitly, the input of neuron i is given by

I_i(t + \Delta t) = I_0 \sum_j K_{ji} ,                                  (3.25)

where I0 is a global coupling constant and the sum runs over all neurons j that
fired at time t. The boundary conditions are chosen to be


open in order to avoid reflection artefacts. In a network without impurities (de-
terministic case), the connectivity strength depends on the Euclidean distance
rij between the pre- and postsynaptic neuron and is modelled as a Gaussian:
K_{ji} = \exp(-\lambda r_{ij}^2).
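A single update step of such a network might look as follows; only the input rule of Eq. (3.25) is taken from the text, while the reset, the handling of the refractory period and all parameter values are illustrative assumptions.

```python
import numpy as np

def network_step(v, refr, fired, K, I0=1.0, U=1.0, t_refr=2):
    """One discrete time step Delta-t of the integrate-and-fire network.
    v, refr, fired are length-N arrays for the flattened 2-D grid; K[j, i] is
    the connection weight from neuron j to neuron i."""
    I = I0 * (K.T @ fired)                    # Eq. (3.25): input from neurons that fired
    active = refr == 0                        # refractory neurons ignore input
    v = np.where(active, v + I, v)
    new_fired = (v >= U) & active
    v[new_fired] = 0.0                        # reset after firing (assumption)
    refr = np.where(new_fired, t_refr, np.maximum(refr - 1, 0))
    return v, refr, new_fired.astype(float)
```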
Static stochasticity of the connection matrix is now introduced according to
two different noise models. Model one replaces every connection weight in the
connection matrices (Kji ) by a random value, which is drawn from a gamma
distribution,
\gamma(x) = \frac{x^{d-1}}{b^d\, \Gamma(d)} \exp\left(-\frac{x}{b}\right) .                                  (3.26)

The gamma distribution is determined by two parameters d and b related to
the mean x̄ = db and the variance σ² = db² of the distribution. The mean

and the variance can therefore be varied independently of each other. To ensure
that the sum of all coupling weights is held constant on average, the means
of the distributions are given by the deterministic weights, which are distance-
dependent. The variance is then considered a free parameter and is interpreted
as a measure of the disorder introduced in the network. Scaled by the squared
mean it gives rise to the disorder parameter
\omega_1 = \frac{\sigma^2}{\bar{x}^2} = \frac{1}{d} ,                                  (3.27)

where the subscript indicates model one. For d ≥ 1 and x̄ > 0, ω1 is limited to
the range 0 ≤ ω1 ≤ 1. As a consequence, connections between nearby neurons can
vary much more strongly than connections between distant neurons, as the variance
is proportional to the squared mean for a fixed value of ω1. Model one therefore
yields only weak disorder in the connections. In order to obtain larger disorder that allows for

Figure 3.18: Evolution of a target wave in a 100 × 100 network (disorder model
two, ω2 = 0.2, λ = 0.1). Snapshots are taken at times of 1, 9, 17 and 25 ms,
respectively. Active units are marked white, refractory units black, and resting neurons
are grey. (a) shows the initial conditions; in (c,d) a central spot with units in the
three different states is visible, which is a consequence of disordered connectivity.
Adapted from Eurich and Schulzke (in press:L).

strong couplings also between distant neurons, we define a second disorder model.
In this model, weights are shuffled: a certain number of pairs of coupling weights
from the connection matrices of all neurons are chosen at random. Subsequently,
the values of these pairs are interchanged. The sum over all coupling weights
remains constant, ensuring that any activity waves that occur do not result
from stronger overall connectivity. For the second model the disorder parameter is
defined as

\omega_2 = \frac{\text{number of exchanged weights}}{\text{total number of weights}} .                                  (3.28)
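The construction of the two disordered connectivity schemes is summarized in the following sketch; lam stands for the decay constant of the Gaussian kernel, omega1 and omega2 for the two disorder parameters, and the grid size and all numerical values are placeholders.

```python
import numpy as np

def deterministic_weights(n=30, lam=0.1):
    """Distance-dependent Gaussian connectivity K_ji = exp(-lam * r_ij^2)
    on an n x n grid; self-connections are set to zero."""
    xs, ys = np.meshgrid(np.arange(n), np.arange(n))
    pos = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)
    r2 = ((pos[:, None, :] - pos[None, :, :]) ** 2).sum(-1)
    K = np.exp(-lam * r2)
    np.fill_diagonal(K, 0.0)
    return K

def disorder_model_one(K, omega1, rng):
    """Replace each weight by a gamma variate with unchanged mean and
    variance omega1 * mean^2 (shape d = 1/omega1, scale b = mean * omega1)."""
    d = 1.0 / omega1
    Kd = rng.gamma(d, np.maximum(K, 1e-12) / d)
    Kd[K == 0.0] = 0.0
    return Kd

def disorder_model_two(K, omega2, rng):
    """Swap a fraction omega2 of randomly chosen pairs of weights."""
    Kd = K.copy()
    flat = Kd.ravel()
    n_pairs = int(omega2 * flat.size / 2)
    idx = rng.choice(flat.size, size=2 * n_pairs, replace=False)
    a, b = idx[:n_pairs], idx[n_pairs:]
    flat[a], flat[b] = flat[b].copy(), flat[a].copy()
    return Kd

# rng = np.random.default_rng(0); K = deterministic_weights()
# K1 = disorder_model_one(K, 0.4, rng); K2 = disorder_model_two(K, 0.2, rng)
```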
An example of the dynamics is shown in Figure 3.18, demonstrating that a
network with disordered connectivity is able to support target waves.
The influence of disorder is quantified by the minimal global coupling strength
I0 in (3.25), henceforth called I0,min, that is necessary for sustained large-scale
activity. The influence of disorder model one is shown in Figure 3.19a. I0,min
decreases monotonically with increasing disorder, which can be interpreted as
disorder-enhanced self-sustaining activity. For larger disorder, target or spiral
waves should break up. Indeed, this can be seen in the result for the second,
stronger disorder model in Figure 3.19b; for small values of the disorder parame-
ter (ω2 < 0.2) the same propagation-enhancing effect as in model one is obtained;
for higher values of ω2, however, disorder strongly disrupts the Euclidean topol-
ogy of the connections, which results in hampered wave formation or propagation.
In between there is an optimal parameter value which is best suited for the self-
maintenance of neural activity. The minimum of I0,min as a function of ω2 can
be interpreted as a stochastic resonance-like phenomenon induced by disordered
connectivity. In addition to the results presented here, a similar dependency of
planar wave propagation on structural disorder has been found, which leads to the
speculation that spread of activity in the visual cortex may be facilitated through
irregular connectivity.
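Operationally, I0,min can be obtained by a bisection over simulation runs, as in the following sketch; sustains_activity is a placeholder for a routine that simulates the network at a given I0 and reports whether large-scale activity survives the observation period (assumed to be monotone in I0).

```python
def minimal_coupling(sustains_activity, I_low=0.0, I_high=2.0, tol=1e-3):
    """Bisection search for I0,min, the smallest global coupling for which
    `sustains_activity(I0)` returns True."""
    assert sustains_activity(I_high), "increase I_high: no sustained activity"
    while I_high - I_low > tol:
        I_mid = 0.5 * (I_low + I_high)
        if sustains_activity(I_mid):
            I_high = I_mid
        else:
            I_low = I_mid
    return I_high
```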

The examples discussed in this section demonstrate that topics of statistical


physics can be relevant for physiological systems and that neurophysics yields
contributions to the neurosciences that cannot be provided by other disciplines.

[Figure 3.19: normalized I0,min as a function of the disorder parameters ω1 (a) and ω2 (b); see caption below.]

Figure 3.19: The minimal global coupling factor I0,min needed for activity main-
tenance as a function of the disorder parameter of disorder model one (a) and
disorder model two (b). Error bars represent one standard deviation. The ordi-
nate is normalized to I0,min in the case of no disorder. Note that the abscissae ω1
and ω2 are conceptually different variables with no direct relation except for their
range. (a) Result obtained from 15 realizations of random connectivity matrix sets
generated by disorder model one. The size of the network was 100 × 100 neurons,
λ = 0.1, absolute refractory period 2 ms. I0,min decreases monotonically with
ω1. (b) Simulation with disorder model two, here obtained from 10 random matrix
set realizations. Network size was 200 × 200 neurons; other parameters are the
same as in (a). I0,min(ω2) exhibits a distinct minimum. Adapted from Eurich and
Schulzke (in press:L).
Chapter 4

Towards a Combined Approach

According to the research plan mapped out in Chapter 1, the preceding two chap-
ters described the complementary approaches of neural coding and neural dynam-
ics, respectively. It is of course desirable to obtain an integrated understanding
of phenomena in the nervous system. A combined dynamical and functional ap-
proach is associated with questions like the following: Do dynamical mechanisms
that can be identified in the nervous system serve particular functions which are rele-
vant for the survival and reproduction of animals or humans? What measures can be
extracted from dynamical variables in single neurons or neural populations to explain
perception, action, or mental states? A few contributions exist that consider both
approaches.

Attractor networks (such as Hopfield networks) are models for associative


memory (Hopfield, 1982; Amit, 1989). Stable steady states correspond to
memorized patterns, and attractor basins can be interpreted as consisting
of states that contain partial information about the memorized patterns or
patterns contaminated with noise. A complete pattern is obtained from
the network dynamics if initial conditions within its basin of attraction are
chosen.

Gerstner et al. (1996) and also Eurich et al. (1999:L, 2000) model the adap-
tation of time delays in neural populations via a Hebbian mechanism.
The direct functional relevance is the self-organized emergence of tapped
delay lines (for example, in the auditory system of birds) which transform
a temporal code into a spatial code, i. e. a neural map, for the localiza-
tion of objects. Related biological systems are echolocation in bats and the
two lateral line systems (electroreception and mechanoreception) in weakly
electric fish and in amphibians, respectively (Carr and Konishi, 1990; Suga,
1990; Carr and Friedman, 1999; Franosch et al., 2003).

Attempts have been made to relate the temporal activity in neural popu-
lations, synchronization (Eckhorn et al., 1988; Gray et al., 1989 and sub-


sequent publications) and synfire chains (Abeles, 1991), to object binding


and signal propagation, respectively. Both the exact dynamical mechanisms
and the functional relevance, however, are still under debate experimentally
and theoretically.

Whereas in Chapter 2 the reconstruction of stimuli from neural activity


was primarily regarded as a signal analysis tool, i. e. as a scientific method,
some approaches exist which aim at describing biologically plausible decod-
ing mechanisms. For example, Pouget et al. (1998); Deneve et al. (1999);
Pouget et al. (2000) employ neural network models for estimation and relate
this model to cortical computations.

Salinas and Abbott (1995) investigate the question how information is


transferred from sensory networks to motor networks. Such a transfor-
mation must realize coordinate transformations which put the sensory data
in a form appropriate for use by the motor system. See also Pouget and
Snyder (2000) for a more recent review.

Finally, psychophysical results may be explained with models of cortical


population dynamics. A particularly useful model class was put forward
by Wilson and Cowan (Wilson and Cowan, 1972, 1973) and has since been
applied to various psychophysical phenomena like the emergence of cortical
maps (Ernst et al., 2001), contour integration (Li, 1999, 2001), and masking
effects like shine-through (Herzog et al., 2003:L and Herzog et al., 2003). See
Ernst and Eurich (2003) for a recent review on the Wilson-Cowan model
class and its applications.

As an example, a model of the shine-through effect will be explained with respect


to the model-based interpretation of functionality in the dynamics of neural pop-
ulations.

A dynamical model of the shine-through effect. Basic features of the


shine-through effect (Herzog and Koch, 2001; Herzog and Fahle, 2002; Herzog
et al., 2003) are shown in Figure 4.1. In brief, a vernier presented for 20 ms is
followed by a masking grating which is presented for 300 ms. The visibility of
the vernier crucially depends on the size and geometry of the grating and cannot
be easily explained with standard concepts like mask energy. In particular, the
vernier is well visible for an extended grating (Figure 4.1a), whereas observers
are usually not aware of it if the mask comprises only 5 elements (Figure 4.1b).
Even subtle changes in the geometry of the extended grating like the gaps in
the grating of Figure 4.1c may also diminish vernier visibility dramatically.1
1
A second psychophysical effect called feature inheritance also shows up in these experiments:
in the cases where the vernier is not visible, its displacement either to the left or to the right is
bound to the grating. This phenomenon, however, is not further considered here.

[Figure 4.1: stimulus sequence (vernier for 20 ms, followed by a grating for 300 ms) and the resulting percept for conditions (a)-(c); measured thresholds: (a) 40.7 (13.2), (b) 209.0 (71.4), (c) 274.2 (75.7); see caption below.]

Figure 4.1: (a) A vernier precedes a grating comprising 25 elements (only 19


of which are shown here). In this condition, the vernier emerges as a transient
flash superimposed on the grating appearing wider, brighter, and even longer
than the vernier really is. It is called the shine-through element. Performance is
determined by the threshold for which subjects reach 75% correct responses in
discriminating the spatial offset direction of the vernier (right vs. left). Stan-
dard errors of thresholds are shown in brackets. (b) If the grating contains five
elements, the vernier remains largely invisible. Thresholds increase dramatically,
i. e. performance deteriorates compared to condition (a). (c) Also, inserting gaps
in the grating diminishes the shine-through effect. As with the homogeneous
grating with 5 elements, thresholds are strongly degraded compared to condition
(a). From Herzog et al. (2003:L).

A neural network model of the shine-through effect (Herzog et al., 2002; Ernst
et al., 2002; Herzog et al., 2003; Ernst et al., 2003; Herzog et al., 2003:L) quan-
titatively accounts for the visibility of the vernier for these and a number of
other stimulus configurations. A Wilson-Cowan type network employing partial
differential equations for the activities of an excitatory and an inhibitory layer of
neurons is chosen because of its relative simplicity: the model allows for the iden-
tification of neural mechanisms that yield the observed psychophysical properties
and is not meant to reproduce physiological details of a specific cortical area.
An overview of the model is given in Figure 4.2. The model employs the hor-
izontal axis x of the visual field only and neglects the vertical spatial direction
and the orientation tuning of cortical visual cells to simplify the analysis. The
network consists of a one-dimensional layer with excitatory and inhibitory neu-
ronal populations, mutually connected with coupling kernels W_{e,i} with typical
length scales σ_{e,i},

W_{\{e,i\}}(x - x') = \frac{1}{\sqrt{2\pi \sigma_{\{e,i\}}^2}} \exp\left(-\frac{(x - x')^2}{2\sigma_{\{e,i\}}^2}\right) .                                  (4.1)

[Figure 4.2: schematic of the model architecture (stimulus S(x, t), input kernel V, excitatory and inhibitory layers coupled via We and Wi, gain functions he, hi); see caption below.]

Figure 4.2: Structure of the model network. A spatio-temporal stimulus S(x, t)


is filtered by a difference of Gaussians and projected onto two populations in
a one-dimensional neuronal layer. The two populations, an excitatory and an
inhibitory one, are mutually coupled with synaptic weight functions described by
the Gaussian kernels We and Wi , respectively. The inset shows the neuronal gain
functions mapping the synaptic inputs J{e,i} to the firing rates h{e,i} . From Ernst
et al. (2002).

The excitatory activities Ae and the inhibitory activities Ai of the populations


are governed by the equations

\tau_e \frac{\partial A_e(x,t)}{\partial t} = -A_e(x,t) + h_e\{w_{ee}(A_e \star W_e)(x,t) + w_{ie}(A_i \star W_i)(x,t) + I(x,t)\}                 (4.2)

\tau_i \frac{\partial A_i(x,t)}{\partial t} = -A_i(x,t) + h_i\{w_{ei}(A_e \star W_e)(x,t) + w_{ii}(A_i \star W_i)(x,t) + I(x,t)\} ,               (4.3)

with w_{ee}, w_{ei}, w_{ie}, w_{ii} denoting coupling strengths, τ_{e,i} denoting time constants,
I(x, t) denoting the efferent input, and h_{e,i} describing the gain functions. The
stars in Eqs. (4.2)-(4.3) denote convolutions of the population activities with the
coupling functions, as for example

w_{ee}(A_e \star W_e)(x,t) = w_{ee} \int A_e(x',t)\, W_e(x - x')\, dx' .                                  (4.4)


The convolution of the efferent coupling kernel V ,


V(x - x') = \frac{1}{\sqrt{2\pi \sigma_E^2}} \exp\left(-\frac{(x - x')^2}{2\sigma_E^2}\right) - \frac{1}{\sqrt{2\pi \sigma_I^2}} \exp\left(-\frac{(x - x')^2}{2\sigma_I^2}\right) ,                 (4.5)

with the spatio-temporal pattern S(x, t) modeling the stimulus sequences used in
the experiment yields the efferent input I(x, t) converging onto both populations,
I(x, t) = (S \star V)(x, t).
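A minimal numerical sketch of the model equations (4.1)-(4.5) is given below, using explicit Euler integration and sigmoidal gain functions; all parameter values are placeholders and are not those used for the results reported here.

```python
import numpy as np

def gaussian(x, sigma):
    return np.exp(-x**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)

def simulate_wilson_cowan(S, x, dt=0.1, tau_e=10.0, tau_i=5.0,
                          sigma_e=50.0, sigma_i=150.0, sigma_E=20.0, sigma_I=60.0,
                          w_ee=1.0, w_ie=-1.5, w_ei=1.2, w_ii=-0.5,
                          beta=10.0, theta=0.5):
    """Euler integration of Eqs. (4.2)-(4.3) on a 1-D grid x; S is the stimulus
    S(x, t) as an array of shape (n_steps, len(x)). Returns A_e(x, t)."""
    dx = x[1] - x[0]
    xk = x - x.mean()
    We = gaussian(xk, sigma_e) * dx                            # Eq. (4.1)
    Wi = gaussian(xk, sigma_i) * dx
    V = (gaussian(xk, sigma_E) - gaussian(xk, sigma_I)) * dx   # Eq. (4.5)
    h = lambda J: 1.0 / (1.0 + np.exp(-beta * (J - theta)))    # gain function
    conv = lambda A, W: np.convolve(A, W, mode="same")         # Eq. (4.4)
    Ae, Ai = np.zeros_like(x), np.zeros_like(x)
    history = []
    for St in S:
        I = conv(St, V)                                        # stimulus-driven input I(x, t)
        Je = w_ee * conv(Ae, We) + w_ie * conv(Ai, Wi) + I
        Ji = w_ei * conv(Ae, We) + w_ii * conv(Ai, Wi) + I
        Ae = Ae + dt / tau_e * (-Ae + h(Je))                   # Eq. (4.2)
        Ai = Ai + dt / tau_i * (-Ai + h(Ji))                   # Eq. (4.3)
        history.append(Ae.copy())
    return np.array(history)
```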

Figure 4.3 shows simulations of the experimental conditions given in Figure 4.1.
The top row shows the spatio-temporal activity of the excitatory population in a
grey scale code. The spatial center (at x = 0) is of particular interest because it
is the location of the vernier. A comparison of Figures 4.3a and b clearly shows
that the neural activity corresponding to the vernier persists longer in the case
of the 25-bar grating than in the case of the 5-bar grating. The dynamics is the
result of two properties of the system: first, edges (like the edges of the mask and
the vernier itself) are enhanced, i. e., represented by large neural activity. Second,
there is an inhibitory interaction between such edge activities. In Figure 4.3b the
edges of the 5-bar grating are sufficiently close to the center position to quickly
suppress the activity of the vernier. In contrast, the edges of the 25-bar grating
are further away from the center and have less or no influence, which results in
a longer persistence of the center activity (Figure 4.3a).
The model results suggest that the visibility of the vernier as observed in psy-
chophysical experiments can be related to a dynamical property of the underlying
network, the duration T for which the neural activity is above some threshold
A0 (see Figure 4.3d where the center activity is plotted as a function of time).
This is consistent with the results of Figure 4.3c where the short duration of the
activity of the center population corresponds to a low visibility of the vernier for
the gap grating.
The suggested dynamical equivalent of vernier visibility can actually be used
as a quantitative measure. A further set of psychophysical experiments employing
parametrized stimuli is used to calibrate the model and quantitatively compare
model output and psychophysical thresholds in subjects. It turns out that the
duration T of neural activity is not sufficient to account for all data but that a
second measure is necessary in addition, the amplitude Am of neural activity at
the position of the vernier (see Figure 4.3d); details are described in Herzog et al.
(2003:L).
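The extraction of the two measures from a simulated activity trace is straightforward, as the following sketch shows; the threshold value A0 = 0.08 is taken from the caption of Figure 4.3, everything else is an illustrative assumption.

```python
import numpy as np

def visibility_measures(Ae_center, t, A0=0.08):
    """Duration T for which the centre activity A_e(0, t) stays above the
    threshold A0 (total time above threshold) and maximum amplitude Am,
    cf. Figure 4.3d."""
    above = Ae_center > A0
    T = np.sum(above) * (t[1] - t[0])
    Am = Ae_center.max()
    return T, Am
```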

The example of the shine-through model demonstrates a fruitful way to obtain an


understanding of neural processes and their functional consequences. Necessary
ingredients of such a research program are
At least basic knowledge about neural populations in the relevant neural
areas: excitation/inhibition, time constants of the neural activity, etc.

[Figure 4.3: (a-c) grey-scale space-time plots of Ae(x, t) (x in arcsec, t in ms) for the three grating conditions; (d) time course of Ae(0, t) with the threshold A0, the duration T and the maximum amplitude Am indicated; see caption below.]

Figure 4.3: Spatio-temporal activation levels Ae (x, t) of the excitatory layer for
(a) the 25 element, for (b) the 5 element, and for (c) the gap grating. Activation
levels are expressed on a grey scale (see right scale). Time t proceeds on the
vertical axis. The position x within the excitatory neuronal layer is depicted
on the horizontal axis. (d) Activation Ae(0, t) of the center neural population
at position x = 0, which is directly stimulated by the preceding vernier and the
following central element of the various gratings. The solid line shows Ae(0, t)
for the 25 element, the dashed-dotted line for the 5 element, and the dashed line
for the gap grating. While in the 25 element grating condition, Ae (0, t) remains
above the threshold A0 = 0.08 for a long period T , the inhibitory interactions in
the 5 element and the gap grating conditions drive Ae (0, t) very quickly below
A0 . The maximum activity is denoted with Am . Adapted from Herzog et al.
(2003:L).

Data on the functionality of the system; these can be psychophysical data


or data on motor behavior;

A sufficiently simple model whose dynamics can be analyzed and from which
general mechanisms can be deduced;

A correlate of the functional phenomenon to be explained, i. e., some mea-


sure derived from the dynamics of the system that quantitatively accounts

for the psychophysical or motor data;

Finally, a desired step is a confirmation of the proposed correspondence be-


tween neural activity and psychophysical or motor performance by record-
ing neural activity (preferably through electrophysiology) and comparing it
to the model predictions.

A systematic investigation of the nervous system from this point of view is only
in its beginnings. A particularly interesting field of research is the consideration of
cognitive phenomena, which may yield deeper insights into our position and our
actions in this world.
Bibliography

L. F. Abbott. Decoding neuronal firing and modelling neural networks. Quarterly


Rev. Biophys., 27:291331, 1994.

L. F. Abbott. Lapicque's introduction of the integrate-and-fire model neuron


(1907). Brain Research Bulletin, 50:303304, 1999.

L. F. Abbott and K. I. Blum. Functional significance of the long-term potentiation


for sequence learning and prediction. Cerebral Cortex, 6:406416, 1996.

L. F. Abbott and P. Dayan. The effect of correlated variability on the accuracy


of a population code. Neural Comp., 11:91101, 1999.

L. F. Abbott, E. Farhi, and S. Gutmann. The path integral for dendritic trees.
Biol. Cybern., 66:4960, 1991.

L. F. Abbott and T. B. Kepler. Model neurons: from Hodgkin-Huxley to Hopfield.


In L. Garrido, editor, Statistical Mechanics of Neural Networks, pages 518.
Springer-Verlag, Berlin, 1990.

L. F. Abbott and S. B. Nelson. Synaptic plasticity: taming the beast. Nature


Neurosci. Suppl., 3:11781183, 2000.

L. F. Abbott and T. J. Sejnowski, editors. Neural Codes and Distributed Repre-


sentations. MIT Press, Cambridge MA, 1999.

M. Abeles. Corticonics. Cambridge University Press, Cambridge, 1991.

E. D. A. Adrian. The Basis of Sensation, the Action of the Sense Organs. Norton,
New York, 1928.

H. Agmon-Snir and I. Segev. Signal delay and input synchronization in passive


dendritic structures. J. Neurophysiol., 70:20662085, 1993.

H. Agmon-Snir and I. Segev. The concept of decision points as a tool in analyzing


dendritic computation. In J. Bower, editor, Computational Neuroscience, pages
4146. Academic Press, San Diego, 1996.


E. Ahissar, R. Sosnik, and S. Haidarliu. Transformation from temporal to rate


coding in a somatosensory thalamocortical pathway. Nature, 406:302306,
2000.
B. Ahmed, J. C. Andersen, R. J. Douglas, K. A. C. Martin, and D. Whitteridge.
Estimates of the net excitatory currents evoked by visual stimulation of iden-
tified neurons in cat visual cortex. Cerebral Cortex, 8:462476, 1998.
R. Albert and A.-L. Barabasi. Statistical mechanics of complex networks. Rev.
Mod. Phys., 74:4797, 2002.
A. M. Alencar, S. V. Buldyrev, A. Majumdar, H. E. Stanley, and B. Suki.
Avalanche dynamics of crackle sound in the lung. Phys. Rev. Lett., 87, 2001.
T. K. Alkasab, T. C. Bozza, T. A. Cleland, K. M. Dorries, T. C. Pearce, J. White,
and J. S. Kauer. Characterizing complex chemosensors: information-theoretic
analysis of olfactory systems. TINS, 22:102108, 1999.
S. Amari. Dynamics of pattern formation in lateral-inhibition neural fields. Biol.
Cybern., 27:7787, 1977.
S. Amari. Topographic organization of nerve fields. Bull. Math. Biol., 42:339364,
1980.
D. J. Amit. Modeling Brain Function. Cambridge University Press, Cambridge,
1989.
U. an der Heiden. Delays in physiological systems. J. Math. Biol., 8:345364,
1979.
J. A. Anderson, A. Pellionisz, and E. Rosenfeld, editors. Neurocomputing 2. The
MIT Press, Cambridge, MA, 1990.
J. A. Anderson and E. Rosenfeld, editors. Neurocomputing. The MIT Press,
Cambridge, MA, 1988.
M. A. Arbib, P. Erdi, and J. Szentagothai. Neural Organization. MIT Press,
Cambridge, MA, 1998.
J. Argyris, G. Faust, and M. Haase. Die Erforschung des Chaos. Vieweg, Braun-
schweig, 1994.
A. Arieli, A. Sterkin, A. Grinvald, and A. Aertsen. Dynamics of ongoing activity:
explanation of the large variability in evoked cortical responses. Science, 273:
18681871, 1996.
W. F. Asaad, G. Rainer, and E. K. Miller. Task-specific neural activity in the
primate prefrontal cortex. J. Neurophysiol., 84:451459, 2000.

N. P. Azari, J. Nickel, G. Wunderlich, M. Niedeggen, H. Hefter, L. Tellmann,


H. Herzog, P. Stoerig, D. Birnbacher, and R. J. Seitz. Neural correlates of
religious experience. Europ. J. Neurosci., 13:16491652, 2001.

P. Bak, K. Chen, and M. Creutz. Self-organized criticality in the Game of Life.


Nature, 342:780782, 1989.

P. Bak, C. Tang, and K. Wiesenfeld. Self-organized criticality: an explanation of


1/f noise. Phys. Rev. Lett., 59:381384, 1987.

P. Bak, C. Tang, and K. Wiesenfeld. Self-organized criticality. Phys. Rev. A, 38:


364374, 1988.

J. D. Balakrishnan. Decision processes in discrimination: fundamental misrepre-


sentations of signal detection theory. J. Exp. Psychol.: Human Percept. Perf.,
25:11891206, 1999.

V. Balasubramanian, D. Kimber, and M. J. Berry II. Metabolically efficient


information processing. Neural Comp., 12:799815, 2001.

P. Baldi and A. F. Atiya. How delays affect neural dynamics and learning. IEEE
Trans. Neural Networks, 5:612621, 1994.

P. Baldi and W. Heiligenberg. How sensory maps could enhance resolution


through ordered arrangements of broadly tuned receivers. Biol. Cybern., 59:
313318, 1988.

H. Barlow. Redundancy reduction revisited. Network: Comput. Neural Syst., 12:


241253, 2001.

H. B. Barlow. Summation and inhibition in the frog's retina. J. Physiol. (Lond.),


119:6978, 1953.

H. B. Barlow. Single units and sensation: a neuron doctrine for perceptual


psychology? Perception, 1:371394, 1972.

T. A. Basarsky, S. N. Duffy, R. D. Andrew, and B. A. MacVicar. Imaging


spreading depression and associated intracellular calcium waves in brain slices.
J. Neurosci., 18:71897199, 1998.

J. Belair, L. Glass, U. an der Heiden, and J. Milton. Dynamical disease: identifi-


cation, temporal aspects and treatment strategies of human illness. Chaos, 5:
17, 1995.

J. Benda and A. V. M. Herz. A universal model for spike-frequency adaptation.


Neural Computation, 15:25232564, 2003.

R. Benzi, A. Sutera, and A. Vulpiani. The mechanism of stochastic resonance.


J. Physics A, 14:L453L457, 1981.
S. Bernard, J. Belair, and M. C. Mackey. Sufficient conditions for stability of
linear differential equations with distributed delays. Discrete and Continuous
Dynamical Systems B, 1:233256, 2001.
S. Bernard, J. Belair, and M. C. Mackey. Oscillations in cyclical neutropenia:
new evidence based on mathematical modeling. J. theor. Biol., 223:283298,
2003.
M. J. Berry, D. K. Warland, and M. Meister. The structure and precision of
retinal spike trains. Proc. Natl. Acad. Sci. USA, 94:54115416, 1997.
M. Bethge, D. Rotermund, and K. Pawelzik. Optimal short-term population
coding: when Fisher information fails. Neural Comp., 14:23172351, 2002.
M. Bethge, D. Rotermund, and K. Pawelzik. Second order phase transition in
neural rate coding: binary encoding is optimal for rapid signal transmission.
Phys. Rev. Lett., 90:088104, 2003.
S. M. Bezrukov and I. Vodyanoy. Stochastic resonance in non-dynamical systems
without response thresholds. Nature, 385:319321, 1997.
G.-q. Bi and M.-m. Poo. Synaptic modifications in cultured hippocampal neurons:
dependence on spike timing, synaptic strength, and postsynaptic cell type. J.
Neurosci., 18:1046410472, 1998.
G.-q. Bi and M.-m. Poo. Distributed synaptic modification in neural networks
induced by patterned stimulation. Nature, 401:792796, 1999.
G.-q. Bi and M.-m. Poo. Synaptic modification by correlated activity: Hebb's
postulate revisited. Annu. Rev. Neurosci., 24:139166, 2001.
W. Bialek and F. Rieke. Reliability and information transmission in spiking
neurons. TINS, 15:428434, 1992.
W. Bialek, F. Rieke, R. R. de Ruyter van Steveninck, and D. Warland. Reading
a neural code. Science, 252:18541857, 1991.
W. Bialek and A. Zee. Coding and computation with neural spike trains. J. Stat.
Phys., 59:103115, 1990.
C. M. Bishop. Neural Networks for Pattern Recognition. Clarendon Press, Oxford,
1996.
E. Blaser, Z. W. Pylyshyn, and A. O. Holcombe. Tracking an object through
feature space. Nature, 408:196199, 2000.

T. V. P. Bliss and G. L. Collingridge. A synaptic model of memory: long-term


potentiation in the hippocampus. Nature, 361:3139, 1993.

K. I. Blum and L. F. Abbott. A model of spatial map formation in the hippocam-


pus of the rat. Neural Comp., 8:8593, 1996.

R. Bormann, J.-L. Cabrera, J. G. Milton, and C. W. Eurich. Visuomotor tracking


on a computer screen: an experimental paradigm to study the dynamics of
motor control. Neurocomputing, in press.

S. Bornholdt and H. G. Schuster, editors. Handbook of Graphs and Networks.


Wiley-VCH, Weinheim, 2003.

A. Borst and F. E. Theunissen. Information theory and neural coding. Nature


Neurosci., 2:947957, 1999.

J. M. Bower and D. Beeman. The Book of GENESIS: Exploring Realistic Neural


Models with the GEneral NEural SImulation System. Telos, Santa Clara CA,
1998.

R. Brause. Neuronale Netze. B. G. Teubner, Stuttgart, 1991.

C. E. Bredfeldt and D. L. Ringach. Dynamics of spatial frequency tuning in


macaque V1. J. Neurosci., 22:19761984, 2002.

P. C. Bressloff. Dynamics of a compartmental model integrate-and-fire neuron


with somatic potential reset. Physica D, 80:399412, 1995.

P. C. Bressloff, J. D. Cowan, M. Golubitsky, P. J. Thomas, and M. C. Wiener.


Geometric visual hallucinations, euclidean symmetry and the functional archi-
tecture of striate cortex. Phil. Trans. R. Soc. Lond. B, 356:299330, 2001.

H. Bringmann. Probabilistic encoding and decoding of neural activity in rat pri-


mary visual cortex. Bachelor's Thesis, University of Osnabruck, 2002.

K. H. Britten, M. N. Shadlen, W. T. Newsome, and J. A. Movshon. The analysis


of visual motion: a comparison of neuronal and psychphysical performance. J.
Neurosci., 12:47454765, 1992.

C. D. Brody. Correlations without synchrony. Neural Comp., 11:15371551, 1999.

T. A. Brody, J. Flores, J. B. French, P. A. Mello, A. Pandey, and S. S. M. Wong.


Random-matrix physics: spectrum and strength fluctuations. Rev. Mod. Phys.,
53:385479, 1981.

H.-M. Broker and P. Grassberger. Mean-field behaviour in a local low-dimensional


model. Europhys. Lett., 30:319324, 1995.

H.-M. Broker and P. Grassberger. Random neighbor theory of the Olami-Feder-


Christensen earthquake model. Phys. Rev. E, 56:39443952, 1997.
E. N. Brown, L. M. Frank, D. Tang, M. C. Quirk, and M. A. Wilson. A statistical
paradigm for neural spike train decoding applied to position prediction from
ensemble firing patterns of rat hippocampal place cells. J. Neurosci., 18:7411
7425, 1998.
T. H. Brown, P. F. Chapmann, E. W. Kairiss, and C. L. Keenan. Long-term
synaptic potentiation. Science, 242:724728, 1988.
T. H. Brown, E. W. Kairiss, and C. L. Keenan. Hebbian synapses: biophysical
mechanisms and algorithms. Annu. Rev. Neurosci., 13:475511, 1990.
N. Brunel and V. Hakim. Fast global oscillations in networks of integrate-and-fire
neurons with low firing rates. Neural Comp., 11:16211671, 1999.
N. Brunel and J.-P. Nadal. Mutual information, Fisher information, and popu-
lation coding. Neural Comp., 10:17311757, 1998.
A. R. Bulsara, T. C. Elston, C. R. Doering, S. B. Lowen, and K. Lindenberg. Co-
operative behavior in periodically driven noisy integrate-fire models of neuronal
dynamics. Phys. Rev. E, 53:39583969, 1996.
A. R. Bulsara and L. Gammaitoni. Tuning in to noise. Physics Today, 49(3):
3945, 1996.
G. T. Buracas and T. D. Albright. Gauging sensory representations in the brain.
TINS, 22:303309, 1999.
B. Burns and A. C. Webb. The spontaneous activities of neurones in the cat's
cerebral cortex. Proc. R. Soc. London B, 194:211223, 1976.
R. B. Buxton. Introduction to Functional Magnetic Resonance Imaging : Prin-
ciples and Techniques. Cambridge University Press, Cambridge, 2001.
J. L. Cabrera and J. G. Milton. On-off intermittency in a human balancing task.
Phys. Rev. Lett., 89, 2002.
J. Camhi. Neuroethology: Nerve Cells and the Natural Behavior of Animals.
Sinauer Associates, Sunderland MA, 1984.
M. Carandini and D. J. Heeger. Summation and division by neurons in primate
visual cortex. Science, 264:13331336, 1994.
M. Carandini, D. J. Heeger, and J. A. Movshon. Linearity and normalization in
simple cells of the macaque primary visual cortex. J. Neurosci., 17:86218644,
1997.

S. Cardoso de Oliveira, A. Thiele, and K.-P. Hoffmann. Synchronization of neu-


ronal activity during stimulus expectation in a direction discrimination task.
J. Neurosci., 17:92489260, 1997.
T. Carew. Behavioral Neurobiology: The Cellular Organization of Natural Be-
havior. Sinauer Associates, Sunderland MA, 2000.
G. A. Carpenter and S. Grossberg. ART2: self-organization of stable category
recognition codes for analog input patterns. Applied Optics, 26:49194930,
1987.
C. E. Carr. Processing of temporal information in the brain. Annu. Rev. Neu-
rosci., 16:223243, 1993.
C. E. Carr and R. E. Boudreau. Central projections of auditory nerve fibers in
the barn owl. J. Comp. Neurol., 314:306318, 1991.
C. E. Carr and M. A. Friedman. Evolution of time coding systems. Neural Comp.,
11:120, 1999.
C. E. Carr and M. Konishi. A circuit for detection of interaural time differences
in the brain stem of the barn owl. J. Neurosci., 10:32273246, 1990.
M. Carrasco and B. McElree. Covert attention accelerates the rate of visual
information processing. PNAS, 98:53635367, 2001.
E. Catsigeras and R. Budelli. Limit cycles of a bineuronal network model. Physica
D, 56:235252, 1992.
G. A. Cecchi, M. Sigman, J.-M. Alonso, L. Martinez, D. R. Chialvo, and M. O.
Magnasco. Noise in neurons is message dependent. Proc. Natl. Acad. Sci. USA,
97:55575561, 2000.
L. Cervetto, P. L. Marchiafava, and E. Pasino. Influence of efferent retinal fibres
on responsiveness of ganglion cells to light. Nature, 260:5657, 1976.
M.-L. Chabanol and V. Hakim. Analysis of a dissipative model of self-organized
criticality with random neighbors. Phys. Rev. E, 56:R2343R2346, 1997.
D. J. Chalmers. The Conscious Mind: in Search of a Fundamental Theory.
Oxford University Press, New York, 1995.
C. C. Chow and J. J. Collins. A pinned polymer model of posture control. Phys.
Rev. E, 52:907912, 1995.
K. Christensen and Z. Olami. Scaling, phase transitions, and nonuniversality in a
self-organized critical cellular automaton model. Phys. Rev. A, 46:18291838,
1992.

P. H. Chu, J. G. Milton, and J. D. Cowan. Connectivity and the dynamics of


integrate-and-fire neural networks. Int. J. Bif. Chaos, 4:237243, 1994.
M. M. Chun and J. M. Wolfe. Visual attention. In B. Goldstein, editor, Blackwell
Handbook of Perception, pages 272310. Blackwell Publishers, Oxford, 2001.
T. Collett. Stereopsis in toads. Nature, 267:349351, 1977.
J. J. Collins, C. C. Chow, and T. T. Imhoff. Aperiodic stochastic resonance in
excitable systems. Phys. Rev. E, 52:R3321R3324, 1995a.
J. J. Collins, C. C. Chow, and T. T. Imhoff. Stochastic resonance without tuning.
Nature, 376:236238, 1995b.
J. J. Collins and C. J. De Luca. Open-loop and closed-loop control of posture:
a random walk analysis of center-of-pressure trajectories. Exp. Brain Res., 95:
308318, 1993.
J. J. Collins and C. J. De Luca. Random walking during quiet standing. Phys.
Rev. Lett., 73:764767, 1994.
J. J. Collins and C. J. De Luca. The effects of visual input on open-loop and
closed-loop postural control mechanisms. Exp. Brain Res., 103:151163, 1995a.
J. J. Collins and C. J. De Luca. Upright, correlated random walks: a statistical-
biomechanic approach to the human postural control system. Chaos, 5:5763,
1995b.
J. J. Collins, C. J. De Luca, A. Burrows, and L. A. Lipsitz. Age-related changes
in open-loop and closed-loop postural control mechanisms. Exp. Brain Res.,
104:480492, 1995c.
J. J. Collins, C. J. De Luca, A. E. Pavlik, S. H. Roy, and M. S. Emley. The
effects of spaceflight on open-loop and closed-loop postural control mechanisms:
human neurovestibular studies on SLS-2. Exp. Brain Res., 107:145150, 1995d.
J. J. Collins, T. T. Imhoff, and P. Grigg. Noise-enhanced tactile sensation. Nature,
383:770, 1996.
J. J. Collins, T. T. Imhoff, and P. Grigg. Noise-mediated enhancements and
decrements in human tactile sensation. Phys. Rev. E, 56:923926, 1997.
J. A. Connor and C. F. Stevens. Prediction of repetitive firing behaviour from
voltage clamp data on an isolated neurone soma. J. Physiol., 213:3153, 1971.
J. A. Connor, D. Walter, and R. McKown. Neural repetitive firing: modifications
of the Hodgkin-Huxley axon suggested by experimental results from crustacean
axons. Biophys. J., 18:81102, 1977.

B. W. Connors, M. J. Gutnick, and D. A. Prince. Electrophysiological properties


of neocortical neurons in vitro. J. Neurophysiol., 48:13021320, 1982.

P. Cordo, J. T. Inglis, S. Verschueren, J. J. Collins, D. M. Merfeld, S. Rosenblum,


S. Buckley, and F. Moss. Noise in human muscle spindles. Nature, 383:769770,
1996.

A. Corral, C. J. Perez, A. Diaz-Guilera, and A. Arenas. Self-organized criticality


and synchronization in a lattice model of integrate-and-fire oscillators. Phys.
Rev. Lett., 74:118121, 1995.

T. M. Cover and J. A. Thomas. Elements of Information Theory. Wiley, New


York, 1991.

F. Crick and C. Koch. A framework for consciousness. Nature Neurosci., 6:


119126, 2003.

L. J. Croner, K. Purpura, and E. Kaplan. Response variability in retinal ganglion


cells of primates. Proc. Natl. Acad. Sci. USA, 90:81288130, 1993.

M. A. Dahlem and S. C. Muller. Migraine aura dynamics after reverse retinotopic


mapping of weak excitation waves in the primary visual cortex. Biol. Cyb., 88:
419424, 2003.

Y. Dan, J.-M. Alonso, W. M. Usrey, and R. C. Reid. Coding of visual infor-


mation by precisely correlated spikes in the lateral geniculate nucleus. Nature
Neurosci., 1:501507, 1998.

A. Das. Plasticity in adult sensory cortex: a review. Network: Comput. Neural


Syst., 8:R33R76, 1997.

A. Das and C. Gilbert. Topography of contextual modulations mediated by


short-range interactions in primary visual cortex. Nature, 399:655660, 1999.

S. P. Day and M. R. Davenport. Continuous-time temporal back-propagation


with adaptable time delays. IEEE Trans. Neural Netw., 4:348354, 1993.

P. Dayan and L. F. Abbott. Theoretical Neuroscience. MIT Press, Cambridge


MA, 2001.

P. Dayan, S. Kakade, and P. R. Montague. Learning and selective attention.


Nature Neurosci. Suppl., 3:12181223, 2000.

P. Dayan and R. S. Zemel. Statistical models and sensory attention. In D. Will-


shaw and A. Murray, editors, Proceedings of the Ninth International Conference
on Artificial Neural Networks, ICANN, pages 10171022. Venue, University of
Edinburgh, 1999.

R. R. de Ruyter van Steveninck and S. B. Laughlin. The rate of information


transfer at graded-potential synapses. Nature, 379:642645, 1996.

S. A. Deadwyler and R. E. Hampson. The significance of neural ensemble codes


during behavior and cognition. Annu. Rev. Neurosci., 20:217244, 1997.

A. F. Dean. The variability of discharge of simple cells in the cat striate cortex.
Exp. Brain Res., 44:437440, 1981.

J. Dean and H. Cruse. Motor pattern generation. In M. A. Arbib, editor, The


Handbook of Brain Theory and Neural Networks, 2nd edition, pages 696701.
MIT Press, Cambridge MA, 2003.

G. C. DeAngelis, G. M. Ghose, I. Ohzawa, and R. D. Freeman. Functional


micro-organization of primary visual cortex: receptive field analysis of nearby
neurons. J. Neurosci., 19:40464064, 1999.

S. Deban, D. B. Wake, and G. Roth. Salamander with a ballistic tongue. Nature,


389:2728, 1997.

R. C. deCharms and A. Zador. Neural representation and the cortical code.


Annu. Rev. Neurosci., 23:613647, 2000.

G. Deco and B. Schurmann. The coding of information by spiking neurons: an


analytical study. Network: Comput. Neural Syst., 9:303317, 1998.

A. Delorme, G. Richard, and M. Fabre-Thorpe. Rapid processing of complex


natural scenes: a role for the magnocellular visual pathways? Neurocomputing,
26-27:663670, 1999.

A. Delorme, G. Richard, and M. Fabre-Thorpe. Ultra-rapid categorisation of


natural scenes does not rely on colour cues: a study in monkeys and humans.
Vision Res., 40:21872200, 2000.

C. Demerens, B. Stankoff, M. Logak, P. Anglade, B. Allinquant, F. Couraud,


B. Zalc, and C. Lubetzki. Induction of myelination in the central nervous
system by electrical activity. Proc. Natl. Acad. Sci. USA, 93:98879892, 1996.

S. Deneve, P. E. Latham, and A. Pouget. Reading population codes: a neural


implementation of ideal observers. Nature Neurosci., 2:740745, 1999.

R. Desimone and J. Duncan. Neural mechanisms of selective visual attention.


Annu. Rev. Neurosci., 18:193222, 1995.

M. R. DeWeese and M. Meister. How to measure the information gained from


one symbol. Network, 10:325340, 1999.

M. R. DeWeese and A. M. Zador. Binary coding in auditory cortex. In S. Becker,


S. Thrun, and K. Obermayer, editors, Advances in Neural Information Pro-
cessing Systems 15, pages 101108. MIT Press, Cambridge MA, 2003a. Neural
Information Processing Systems.

M. R. DeWeese and A. M. Zador. Binary spiking in auditory cortex. J. Neurosci.,


23:79407949, 2003b.

D. Dhar. Self-organized critical state of sandpile automaton models. Phys. Rev.


Lett., 64:16131616, 1990.

D. Dhar and R. Ramaswamy. Exactly solved model of self-organized critical


phenomena. Phys. Rev. Lett., 63:16591662, 1989.

M. A. Dichter. Rat cortical neurons in cell culture: culture methods, cell mor-
phology, electrophysiology, and synapse formation. Brain Res., 149:279293,
1978.

U. Dicke, C. W. Eurich, and H. Schwegler. Postontogenetic short-term plastic-


ity in the somatosensory system: a neural network model. In D. Willshaw
and A. Murray, editors, Proceedings of the Ninth International Conference on
Artificial Neural Networks, ICANN 99, pages 150155. Venue, University of
Edinburgh, 1999.

H. R. Dinse, B. Godde, T. Hilger, S. S. Haupt, F. Spengler, and R. Zepka. Short-


term functional plasticity of cortical and thalamic sensory representations and
its implication for information processing. In H.-J. Freund, B. A. Sabel, and
O. W. Witte, editors, Brain Plasticity, Advances in Neurology, Vol. 73, pages
159178. Lippincott-Raven, Philadelphia, 1997.

H. R. Dinse, B. Godde, and F. Spengler. Short-term plasticity of topographic or-


ganization of somatosensory cortex and improvement of spatial discrimination
performance induced by an associative pairing of tactile stimulation. Internal
Report 95-01, Institut fur Neuroinformatik, Ruhr-Universitat Bochum, 1995.

H. R. Dinse, B. Godde, F. Spengler, B. Stauffenberg, and R. Kraft. Hebbian


pairing of tactile stimulation. II: Human psychophysics: change of tactile spa-
tial and frequency discrimination performance. Soc. Neurosci. Abstr., 20:1429,
1994.

H. R. Dinse, K. Kruger, and J. Best. A temporal structure of cortical information


processing. Concepts in Neuroscience, 1:199238, 1990.

J. K. Douglass, L. Wilkens, E. Pantazelou, and F. Moss. Noise enhancement


of information transfer in crayfish mechanoreceptors by stochastic resonance.
Nature, 365:337340, 1993.

J.-P. S. Draye, D. A. Pavisic, G. A. Cheron, and G. A. Libert. Dynamic recurrent


neural networks: a dynamical analysis. IEEE Trans. Syst. Man Cybern., B26:
692706, 1996.

R. O. Duda, P. E. Hart, and D. G. Stork. Pattern Classification. Wiley, New


York, 2001.

R. Eckhorn, R. Bauer, W. Jordan, M. Borsch, W. Kruse, M. Munk, and H. J.


Reitboeck. Coherent oscillations: a mechanism of feature linking in the visual
cortex? Biol. Cybern., 60:121130, 1988.

R. Eckhorn, O.-J. Grusser, J. Kroller, K. Pellnitz, and B. Popel. Efficiency of


different neuronal codes: information transfer calculations for three different
neuronal systems. Biol. Cybern., 22:4960, 1976.

R. Eckhorn, F. Krause, and J. I. Nelson. The RF-cinematogram. Biol. Cybern.,


69:3755, 1993.

R. Eckhorn and B. Popel. Rigorous and extended application of information


theory to the afferent visual system of the cat. I. Basic concepts. Kybernetik,
16:191200, 1974.

R. Eckhorn and B. Popel. Rigorous and extended application of information


theory to the afferent visual system of the cat. I. Experimental results. Biol.
Cybern., 17:717, 1975.

R. Eckhorn and B. Popel. Responses of cat retinal ganglion cells to the random
motion of a spot stimulus. Vision Res., 21:435443, 1981.

R. Eckhorn and H. Querfurth. Information transmission by isolated frog muscle


spindle. Biol. Cybern., 52:165176, 1985.

R. Eckhorn, H. J. Reitboeck, M. Arndt, and P. Dicke. A neural network for


feature linking via synchronous activity: results from cat visual cortex and
from simulations. In R. M. J. Cotterill, editor, Models of Brain Function,
pages 255272. Cambridge University Press, Cambridge, 1989.

R. Eckhorn, H. J. Reitboeck, M. Arndt, and P. Dicke. Feature linking via synchro-


nization among distributed assemblies: simulations of results from cat visual
cortex. Neural Comp., 2:293307, 1990.

J.-M. Edeline, G. Dutrieux, Y. Manunta, and E. Hennevin. Diversity of receptive


field changes in auditory cortex during natural sleep. Europ. J. Neurosci., 14:
18651880, 2001.

V. Egger, D. Feldmeyer, and B. Sakmann. Coincidence detection and changes of


synaptic efficacy in spiny stellate neurons in rat barrel cortex. Nature Neurosci.,
2:10981105, 1999.

A. K. Engel, P. Konig, A. K. Kreiter, T. B. Schillen, and W. Singer. Temporal


coding in the visual cortex: new vistas on integration in the nervous system.
TINS, 15:218226, 1992.

A. K. Engel, P. Konig, A. K. Kreiter, and W. Singer. Interhemispheric synchro-


nization of oscillatory neuronal responses in cat visual cortex. Science, 252:
11771179, 1991.

F. Engert and T. Bonhoeffer. Dendritic spine changes associated with hippocam-


pal long-term synaptic plasticity. Nature, 399:6670, 1999.

G. B. Ermentrout. Neural networks as spatio-temporal pattern-forming systems.


Rep. Prog. Phys., 61:353430, 1998.

G. B. Ermentrout and J. D. Cowan. A mathematical theory of visual hallucination


patterns. Biol. Cybern., 34:137150, 1979.

G. B. Ermentrout and D. Kleinfeld. Traveling electrical waves in cortex: insights


from phase dynamics and speculation on a computational role. Neuron, 29:
3344, 2001.

U. Ernst, K. Pawelzik, and T. Geisel. Synchronization induced by temporal delays


in pulse-coupled oscillators. Phys. Rev. Lett., 74:15701573, 1995.

U. Ernst, K. Pawelzik, and T. Geisel. Delay-induced multistable synchronization


of biological oscillators. Phys. Rev. E, 57:21502162, 1998.

U. A. Ernst, A. Etzold, M. H. Herzog, and C. W. Eurich. Object representation


through transient neural dynamics. In R. P. Wurtz and M. Lappe, editors, Dy-
namic Perception, pages 7176. Akademische Verlagsgesellschaft, Berlin, 2002.

U. A. Ernst, A. Etzold, M. H. Herzog, and C. W. Eurich. Dynamics of neuronal


populations modeled by a Wilson-Cowan system account for the transient vis-
ibility of masked stimuli. Neurocomputing, 52-54:747753, 2003.

U. A. Ernst and C. W. Eurich. Cortical population dynamics and psychophysics.


In M. A. Arbib, editor, The Handbook of Brain Theory and Neural Networks,
2nd edition, pages 294300. MIT Press, Cambridge MA, 2003.

U. A. Ernst, K. R. Pawelzik, C. Sahar-Pikielny, and M. V. Tsodyks. Intracortical


origin of visual maps. Nature Neurosci., 4:431436, 2001.

A. Etzold, C. W. Eurich, and H. Schwegler. Tuning properties of noisy cells with


application to orientation selectivity in rat visual cortex. Neurocomputing, 52-
54:497503, 2003:L.

A. Etzold, H. Schwegler, and C. W. Eurich. A critical assessment of measures


for the tuning of cells with special emphasis to orientation tuning. In S. Gross-
berg, editor, Sixth International Conference on Cognitive and Neural Systems.
Boston, 2002.

A. Etzold, H. Schwegler, and C. W. Eurich. Coding with noisy neurons: sta-


bility of tuning curve estimation depends strongly on the analysis method. J.
Neurosci. Meth., in press.

C. W. Eurich. Zur Streudynamik des anisotropen Kepler-Problems. Diploma


thesis, Universitat Munster, 1991.

C. W. Eurich. Getrennt marschieren, vereint kodieren. Neurotheorie fur Anfanger


V. Gehirn und Geist, 2(2):8283, 2003.

C. W. Eurich. An estimation-theoretic framework for the presentation of multiple


stimuli. In S. Becker, S. Thrun, and K. Obermayer, editors, Advances in Neural
Information Processing Systems 15, pages 293300. MIT Press, Cambridge
MA, 2003:L.

C. W. Eurich, J. D. Cowan, and J. G. Milton. Hebbian delay adaptation in a


network of integrate-and-fire neurons. In W. Gerstner, A. Germond, M. Hasler,
and J.-D. Nicoud, editors, Artificial Neural Networks ICANN 97, pages 157
162. Springer-Verlag, Berlin, 1997a.

C. W. Eurich, H. R. Dinse, U. Dicke, B. Godde, and H. Schwegler. Coarse coding


accounts for improvement of spatial discrimination after plastic reorganization
in rats and humans. In W. Gerstner, A. Germond, M. Hasler, and J.-D. Nicoud,
editors, Artificial Neural Networks ICANN 97, pages 5560. Springer-Verlag,
Berlin, 1997b.

C. W. Eurich, U. Ernst, and K. Pawelzik. Continuous dynamics of neuronal delay


adaptation. In L. Niklasson, M. Boden, and T. Ziemke, editors, ICANN 98,
pages 355360. Springer-Verlag, London, 1998.

C. W. Eurich and W. Freiwald. Kodierungsstrategien bei eigenschafts- und objek-


tbasierter Aufmerksamkeit: Multi-Elektroden-Ableitungen und Schatztheorie.
In Sonderforschungsbereich 517, Neuronale Grundlagen kognitiver Leistungen,
Antrag auf Finanzierung fur die Jahre 20022004, Projekt B7. Bremen and
Oldenburg, 2001.

C. W. Eurich, J. M. Herrmann, and U. A. Ernst. Finite-size effects of avalanche


dynamics. Phys. Rev. E, 66:066137, 2002:La.

C. W. Eurich, M. C. Mackey, and H. Schwegler. Recurrent inhibitory dynamics:


the role of state-dependent distributions of conduction delay times. J. Theor.
Biol., 216:3150, 2002:Lb.

C. W. Eurich and H. Mallot. Mathematics for the neural and cognitive sciences.
In IK 2001 Interdisziplinares Kolleg. arendtap, Bremen, 2001.

C. W. Eurich and J. G. Milton. Noise-induced transitions in human postural


sway. Phys. Rev. E, 54:66816684, 1996:L.

C. W. Eurich and K. Pawelzik. Neuronale Signalverarbeitung. In IK 2000


Interdisziplinares Kolleg. arenDTaP, Bremen, 2000.

C. W. Eurich, K. Pawelzik, U. Ernst, J. D. Cowan, and J. G. Milton. Dynamics


of self-organized delay adaptation. Phys. Rev. Lett., 82:15941597, 1999:L.

C. W. Eurich, K. Pawelzik, U. Ernst, A. Thiel, J. D. Cowan, and J. G. Milton.


Delay adaptation in the nervous system. Neurocomputing, 32-33:741748, 2000.

C. W. Eurich, G. Roth, H. Schwegler, and W. Wiggers. Simulander: a neural


network model for the orientation movement of salamanders. J. Comp. Physiol.
A, 176:379389, 1995:L.

C. W. Eurich and E. L. Schulzke. Irregular connectivity in neural layers yields


temporally stable activity patterns. Neurocomputing, in press:L.

C. W. Eurich and H. Schwegler. Coarse coding: calculation of the resolution


achieved by a population of large receptive field neurons. Biol. Cybern., 76:
357363, 1997:L.

C. W. Eurich, H. Schwegler, and R. Woesler. Coarse coding: applications to the


visual system of salamanders. Biol. Cybern., 77:4147, 1997:L.

C. W. Eurich and W. Wiggers. Salamander Simulander: Simulation komplexer


Vorgange im Gehirn durch kunstliche neuronale Netze. Unterricht Biologie,
21:4851, 1997.

C. W. Eurich and S. D. Wilke. Multi-dimensional encoding strategy of spiking


neurons. Neural Comp., 12:15191529, 2000:L.

C. W. Eurich, S. D. Wilke, and H. Schwegler. Neural representation of multi-


dimensional stimuli. In S. A. Solla, T. K. Leen, and K.-R. Muller, editors,
Advances in Neural Information Processing Systems 12, pages 115121. MIT
Press, Cambridge MA, 2000:L.

C. W. Eurich, R. Woesler, and H. Schwegler. A coarse coding model for prey


localization in tongue-projecting salamanders. In K. Nishikawa and M. Ar-
bib, editors, Sensorimotor Coordination: Amphibians, Models & Comparative
Studies, pages 4648. Sedona, AZ, 1996.
E. V. Evarts. Representation of movements and muscles by pyramidal tract neu-
rons of the precentral motor cortex. In M. D. Yahr and D. P. Purpura, editors,
Neurophysiological Basis of Normal and Abnormal Motor Activity, pages 215
253. Ravens Press, Hewlett NY, 1967.
J.-P. Ewert. Tectal mechanisms that underlie prey-catching and avoidance be-
haviors in toads. In H. Vanegas, editor, Comparative Neurology of the Optic
Tectum, pages 247416. Plenum Press, New York, London, 1984.
J.-P. Ewert. Neuroethology of releasing mechanisms: prey-catching in toads.
Behav. Brain Sci., 10:337405, 1987.
J.-P. Ewert. The release of visual behavior in toads: stages of parallel / hi-
erarchical information processing. In J.-P. Ewert and M. A. Arbib, editors,
Visuomotor Coordination, pages 39120. Plenum Press, New York, 1989.
U. Eysel. Turning a corner in vision research. Nature, 399:641644, 1999.
M. Fahle and T. Poggio, editors. Perceptual Learning. MIT Press, Cambridge
MA, 2002.
A. L. Fairhall, G. D. Lewen, W. Bialek, and R. R. de Ruyter van Steveninck.
Efficiency and ambiguity in an adaptive neural code. Nature, 412:787792,
2001.
H. J. S. Feder and J. Feder. Self-organized criticality in a stick-slip process. Phys.
Rev. Lett., 66:26692672, 1991.
D. J. Felleman and D. C. Van Essen. Distributed hierarchical processing in
primate cerebral cortex. Cerebral Cortex, 1:147, 1991.
D. Ferster, S. Chung, and H. Wheat. Orientation selectivity of thalamic input to
simple cells of cat visual cortex. Nature, 380:249252, 1996.
S. A. Fisher, T. M. Fischer, and T. J. Carew. Multiple overlapping processes
underlying short-term synaptic enhancement. TINS, 20:170177, 1997.
R. FitzHugh. Impulses and physiological states in theoretical models of nerve
membrane. Biophys. J., 1:445466, 1961.
E. Florey. Geist Seele Gehirn: Eine kurze Geschichte der Hirnforschung. In
G. Roth and W. Prinz, editors, Kopf-Arbeit, pages 3786. Spektrum Akademis-
cher Verlag, Heidelberg, 1996.

C. Fohlmeister, W. Gerstner, R. Ritz, and J. L. van Hemmen. Spontaneous


excitations in the visual cortex: stripes, spirals, rings, and collective bursts.
Neural Comp., 7:905914, 1995.

P. Foldiak. The ideal homunculus: statistical inference from neural population


responses. In F. Eeckman and J. Bower, editors, Computation and Neural
Systems, pages 5560. Kluwer Academic Publishers, Boston, 1993.

N. Fourcaud and N. Brunel. Dynamics of the firing probability of noisy integrate-


and-fire neurons. Neural Comp., 14:20572110, 2002.

K. Franke. Zur Dynamik des Korperschwerpunktes im aufrechten Stand. Studien-


arbeit, Universitat Bremen, 2003.

J.-M. P. Franosch, M. C. Sobotka, A. Elepfandt, and J. L. van Hemmen. Minimal


model of prey localization through the lateral-line system. Phys. Rev. Lett.,
91:158101, 2003.

W. Freiwald, H. Stemmann, A. Wannig, A. K. Kreiter, U. G. Hofmann, M. D.


Hills, G. T. A. Kovacs, D. T. Kewley, J. M. Bower, A. Etzold, S. D. Wilke,
and C. W. Eurich. Stimulus representation in rat primary visual cortex: multi-
electrode recordings with micro-machined silicon probes and estimation theory.
Neurocomputing, 4446:407416, 2002:L.

V. Frette, K. Christensen, A. M. Malthe-Sørenssen, J. Feder, T. Jøssang, and


P. Meakin. Avalanche dynamics in a pile of rice. Nature, 397:49, 1996.

B. R. Frieden. Physics from Fisher Information. Cambridge University Press,


Cambridge, 1998.

A. Frien, R. Eckhorn, R. Bauer, T. Woelbern, and H. Kehr. Stimulus-specific fast


oscillations at zero phase between visual areas V1 and V2 of awake monkey.
NeuroReport, 5:22732277, 1994.

P. Fries, J. H. Reynolds, A. E. Rorie, and R. Desimone. Modulation of oscillatory


neuronal synchronization by selective visual attention. Science, 291:15601563,
2001.

P. Fromherz and V. Gaede. Exclusive-OR function of single arborized neuron.


Biol. Cybern., 69:337344, 1993.

F. Gaillard. Binocularly driven neurons in the rostral part of the frog optic
tectum. J. Comp. Physiol. A, 157:4755, 1985.

M. Galarreta and S. Hestrin. Electrical synapses between GABA-releasing in-


terneurons. Nature Reviews Neuroscience, 2:425433, 2001.

L. Gammaitoni, P. Hanggi, P. Jung, and F. Marchesoni. Stochastic resonance.


Rev. Mod. Phys., 70:223287, 1998.

L. Gammaitoni, F. Marchesoni, and S. Santucci. Stochastic resonance as a bona


fide resonance. Phys. Rev. Lett., 74:10521055, 1995.

J. Gautrais and S. J. Thorpe. Rate coding versus temporal order coding: a


theoretical approach. BioSystems, 48:5765, 1998.

T. J. Gawne. The simultaneous coding of orientation and contrast in the responses


of V1 complex cells. Exp. Brain Res., 133:293302, 2000.

T. J. Gawne and B. J. Richmond. How independent are the messages carried by


adjacent inferior temporal cortical neurons? J. Neurosci., 13:27582771, 1993.

M. S. Gazzaniga, R. B. Ivry, and G. Mangun. Cognitive Neuroscience: The


Biology of the Mind. W. W. Norton & Company, New York, 1998.

A. Gelman, J. B. Carlin, H. S. Stern, and D. B. Rubin. Bayesian Data Analysis.


Chapman & Hall/CRC, Boca Raton, 1997.

A. P. Georgopoulos. Current issues in directional motor control. TINS, 18:506


510, 1995.

A. P. Georgopoulos, A. B. Schwartz, and R. E. Kettner. Neuronal population


coding of movement direction. Science, 233:14161419, 1986.

E. D. Gershon, M. C. Wiener, P. E. Latham, and B. J. Richmond. Coding


strategies in monkey V1 and inferior temporal cortices. J. Neurophysiol., 79:
11351144, 1998.

W. Gerstner. Time structure of the activity in neural network models. Phys.


Rev. E, 51:738760, 1995.

W. Gerstner. Population dynamics of spiking neurons: fast transients, asyn-


chronous states, and locking. Neural Comp., 12:4389, 2000.

W. Gerstner, R. Kempter, J. L. van Hemmen, and H. Wagner. A neuronal


learning rule for sub-millisecond temporal coding. Nature, 383:7678, 1996.

W. Gerstner and J. L. van Hemmen. Coherence and incoherence in a globally


coupled ensemble of pulse-emitting units. Phys. Rev. Lett., 71:312315, 1993.

A. Gierer and H. Meinhardt. A theory of biological pattern formation. Kybernetik,


12:3039, 1972.

D. C. Gillespie, I. Lampl, J. S. Anderson, and D. Ferster. Dynamics of the


orientation-tuned membrane potential response in cat primary visual cortex.
Nature Neurosci., 4:10141019, 2001.

S. V. Girman, Y. Sauve, and R. D. Lund. Receptive field properties of single


neurons in rat primary visual cortex. J. Neurophysiol., 82:301311, 1999.

L. Glass, A. Beuter, and D. Larocque. Time delays, oscillations, and chaos in


physiological control systems. Math. Biosci., 90:111125, 1988.

L. Glass and M. C. Mackey. Pathological conditions resulting from instabilities


in physiological systems. Ann. N. Y. Acad. Sci., 316:214235, 1979.

L. Glass and M. C. Mackey. From Clocks to Chaos. Princeton University Press,


Princeton, 1988.

J. M. Goldberg and P. B. Brown. Response of binaural neurons of dog superior


olivary complex to dichotic tonal stimuli: some physiological mechanisms of
sound localization. J. Neurophysiol., 32:613636, 1969.

L. Gomez and R. Budelli. Two-neurons networks. II. Leaky integrator pacemaker


models. Biol. Cybern., 74:131137, 1996.

S. J. Gould. The Mismeasure of Man. W. W. Norton & Company, New York,


1996.

P. Grassberger. Efficient large-scale simulations of a uniformly driven system.


Phys. Rev. E, 49:24362444, 1994.

C. M. Gray, P. Konig, A. K. Engel, and W. Singer. Oscillatory responses in


cat visual cortex exhibit inter-columnar synchronisation which reflects global
stimulus properties. Nature, 338:334337, 1989.

D. M. Green and J. A. Swets. Signal Detection Theory and Psychophysics. Penin-


sula Publishing, Los Altos, CA, 1988.

P. E. Greenwood, L. M. Ward, D. F. Russell, A. Neiman, and F. Moss. Stochastic


resonance enhances the electrosensory information available to paddlefish for
prey capture. Phys. Rev. Lett., 84:47734776, 2000.

G. Grinstein, D.-H. Lee, and S. Sachdev. Conservation laws, anisotropy, and


self-organized criticality in noisy nonequilibrium systems. Phys. Rev. Lett.,
64:19271930, 1990.

J. M. Groh, E. Seidemann, and W. T. Newsome. Neural fingerprints of visual


attention. Current Biology, 6:14061409, 1996.

S. Gruen, M. Diesmann, and A. Aertsen. Unitary events in multiple single-neuron


spiking activity: I. Detection and significance. Neural Comp., 14:4380, 2001a.

S. Gruen, M. Diesmann, and A. Aertsen. Unitary events in multiple single-neuron


spiking activity: II. Nonstationary data. Neural Comp., 14:81119, 2001b.

O.-J. Grusser and U. Grusser-Cornehls. Neurophysiology of the anuran visual


system. In R. Llinas and W. Precht, editors, Frog Neurobiology, pages 297
385. Springer-Verlag, Berlin, 1976.

U. Grusser-Cornehls. The neurophysiology of the amphibian optic tectum. In


H. Vanegas, editor, Comparative Neurology of the Optic Tectum, pages 211
245. Plenum Press, New York, London, 1984.

U. Grusser-Cornehls and W. Himstedt. The urodele visual system. In K. Fite,


editor, The Amphibian Visual System: A Multidisciplinary Approach, pages
203266. Academic Press, New York, 1976.

M. R. Guevara, L. Glass, M. C. Mackey, and A. Shrier. Chaos in neurobiology.


IEEE Trans. Syst. Man Cybern., SMC-13:790798, 1983.

R. Gutig, A. Aertsen, and S. Rotter. Statistical significance of coincident spikes:


Count-based versus rate-based statistics. Neural Comp., 14:121153, 2001.

J. Haag and A. Borst. Active membrane properties and signal encoding in graded
potential neurons. J. Neurosci., 18:79727986, 1998.

K. P. Hadeler. Delay equations in biology. In Lecture Notes in Mathematics,


pages 136156. Springer-Verlag, Berlin, 1979.

J. K. Hale and S. M. V. Lunel. Introduction to Functional Differential Equations.


Springer-Verlag, New York, 1993.

T. A. Woolsey, J. Hanaway, and M. H. Gado. The Brain Atlas: A Visual Guide to
the Human Central Nervous System. Fitzgerald Science Press, Bethesda MD, 2002.

C. M. Harris and D. M. Wolpert. Signal-dependent noise determines motor plan-


ning. Nature, 394:780784, 1998.

N. G. Hatsopoulos, C. L. Ojakangas, L. Paninski, and J. P. Donoghue. Informa-


tion about movement direction obtained from synchronous activity of motor
cortical neurons. Proc. Natl. Acad. Sci. USA, 95:1570615711, 1998.

C. Haurie, D. C. Dale, and M. C. Mackey. Cyclical neutropenia and other periodic


hematological disorders: a review of mechanisms and mathematical models.
Blood, 92:26292640, 1998.

C. Haurie, D. C. Dale, R. Rudnicki, and M. C. Mackey. Modeling complex


neutrophil dynamics in the grey collie. J. Theor. Biol., 204:505519, 2000.

C. Haurie, R. Person, D. C. Dale, and M. C. Mackey. Hematopoietic dynamics


in grey collies. Experimental Hematology, 27:11391148, 1999.

J. F. Heagy, N. Platt, and S. M. Hammel. Characterization of on-off intermit-


tency. Phys. Rev. E, 49:11401150, 1994.

T. Hearn, C. Haurie, and M. C. Mackey. Cyclical neutropenia and the peripheral


control of white blood cell production. J. theor. Biol., 192:167181, 1998.

D. O. Hebb. The Organization of Behavior. Wiley, New York, 1949.

D. J. Heeger. Normalization of cell responses in cat striate cortex. Visual Neu-


rosci., 9:181197, 1992.

W. Heiligenberg. Central processing of sensory information in electric fish. J.


Comp. Physiol. A, 161:621631, 1987.

B. Hellwig. A quantitative analysis of the local connectivity between pyramidal


neurons in layers 2/3 of the rat visual cortex. Biol. Cybern., 82:111121, 2000.

C. Heneghan, C. C. Chow, J. J. Collins, T. T. Imhoff, S. B. Lowen, and M. C.


Teich. Information measure quantifying aperiodic stochastic resonance. Phys.
Rev. E, 54:R2228R2231, 1996.

M. Henle, editor. Documents of Gestalt Psychology. University of California


Press, Berkeley CA, 1961.

J. Hertz, A. Krogh, and R. G. Palmer. Introduction to the Theory of Neural


Computation. Addison-Wesley Publ. Comp., Redwood City, Cal, 1991.

J. Hertz and S. Panzeri. Sensory coding and information transmission. In M. A.


Arbib, editor, The Handbook of Brain Theory and Neural Networks, 2nd edi-
tion, pages 10231026. MIT Press, Cambridge MA, 2003.

M. H. Herzog, U. Ernst, and C. W. Eurich. Ein Kamel ist keine Palme. Gehirn
und Geist, 1(1):1113, 2002.

M. H. Herzog, U. A. Ernst, A. Etzold, and C. W. Eurich. Local interactions


in neural networks explain global effects in gestalt processing and masking.
Neural Comp., 15:20912113, 2003:L.

M. H. Herzog and M. Fahle. Effects of grouping in contextual modulation. Nature,


415:433436, 2002.

M. H. Herzog, M. Harms, U. A. Ernst, C. W. Eurich, S. H. Mahmud, and


M. Fahle. Extending the shine-through effect to classical masking paradigms.
Vision research, 43:26592667, 2003.

M. H. Herzog and C. Koch. Seeing properties of an invisible object: feature


inheritance and shine-through. PNAS, 98:42714275, 2001.

E. Hida and K.-i. Naka. Spatiotemporal visual receptive fields as revealed by


spatiotemporal random noise. Z. Naturforsch., 37 c:10481049, 1982.

E. Hida, K.-i. Naka, and K. Yokoyama. A new photographic method for mapping
spatiotemporal receptive field using television snow stimulation. J. Neurosci.
Meth., 8:225230, 1983.

I. Hidaka, D. Nozaki, and Y. Yamamoto. Functional stochastic resonance in the


human brain: noise-induced sensitization of the baroreflex system. Phys. Rev.
Lett., 85:37403743, 2000.

M. L. Hines and N. T. Carnevale. The NEURON simulation environment. Neural


Comp., 9:11791209, 1997.

G. E. Hinton, J. L. McClelland, and D. E. Rumelhart. Distributed representa-


tions. In D. E. Rumelhart and J. L. McClelland, editors, Parallel Distributed
Processing, Vol. 1, pages 77109. The MIT Press, Cambridge, MA, 1986.

A. L. Hodgkin and A. F. Huxley. A quantitative description of membrane current


and its application to conduction and excitation in nerve. J. Physiol., 117:500
544, 1952.

J. J. Hopfield. Neural networks and physical systems with emergent collective


computational abilities. Proc. Natl. Acad. Sci. USA, 79:25542558, 1982.

J. J. Hopfield. Neurons with graded response have collective computational prop-


erties like those of two-state neurons. Proc. Natl. Acad. Sci. USA, 81:3088
3092, 1984.

J. J. Hopfield. Pattern recognition computation using action potential timing for


stimulus representation. Nature, 376:3336, 1995.

E. Hoshi, K. Shima, and J. Tanji. Task-dependent selectivity of movement-related


neuronal activity in the primate prefrontal cortex. J. Neurophysiol., 80:3392
3397, 1998.

C.-F. Hsiao, M. Watanabe, and Y. Fukuda. The relation between axon diameter
and axonal conduction velocity of Y, X and W cells in the cat retina. Brain
Res., 309:357361, 1984.

D. H. Hubel and T. N. Wiesel. Receptive fields, binocular interaction and func-


tional architecture of cat's visual cortex. J. Physiol., 160:106154, 1962.

H. Huning, H. Glunder, and G. Palm. Synaptic delay learning in pulse-coupled


neurons. Neural Comp., 10:555565, 1998.

T. Hwa and M. Kardar. Dissipative transport in open systems: an investigation


of self-organized criticality. Phys. Rev. Lett., 62:18131816, 1989.

D. Ingle. Visual releasers of prey-catching behavior in frogs and toads. Brain,


Behav. Evol., 1:500518, 1968.

D. Ingle. Spatial vision in anurans. In K. Fite, editor, The Amphibian Visual


System: A Multidisciplinary Approach, pages 119140. Academic Press, New
York, 1976.

G. M. Innocenti, P. Lehmann, and J.-C. Houzel. Computational structure of


visual callosal axons. Europ. J. Neurosci., 6:918935, 1994.

E. Ising. Beitrag zur Theorie des Ferromagnetismus. Zeitschr. f. Physik, 31:


253258, 1925.

S. K. Itaya. Retinal efferents from the pretectal area in the rat. Brain Research,
201:436441, 1980.

J. J. B. Jack, D. Noble, and R. W. Tsien. Electric Current Flow in Excitable


Cells. Clarendon Press, Oxford, 1975.

H. Jaeger. The echo state approach to analysing and training recurrent neural
networks. GMD Report 148, Sankt Augustin, 2001a.

H. Jaeger. Short term memory in echo state networks. GMD Report 152, Sankt
Augustin, 2001b.

F. Jakel. Decoding neural activity of dynamic stimuli. Bachelor's thesis, Univer-


sity of Osnabruck, 2001.

D. Jancke, F. Chavane, A. Arieli, and A. Grinvald. Drawing an illusion across


primary visual cortex: line-motion revealed by voltage-sensitive dye imaging.
In R. P. Wurtz and M. Lappe, editors, Dynamic Perception, pages 6769.
Akademische Verlagsgesellschaft, Berlin, 2002.

L. A. Jeffress. A place theory of sound localization. J. Comp. Physiol. Psychol.,


41:3539, 1948.

D. H. Johnson. Point process models of single-neuron discharges. J. Comp.


Neurosci., 3:275299, 1996.

D. H. Johnson, C. M. Gruner, K. Baggerly, and C. Seshagiri. Information-


theoretic analysis of neural coding. J. Comp. Neurosci., 10:4769, 2001.
K. O. Johnson. Sensory discrimination: neural processes preceding discrimination
decision. J. Neurophysiol., 43:17931815, 1980.
D. Johnston, J. C. Magee, C. M. Colbert, and B. R. Christie. Active properties
of neuronal dendrites. Annu. Rev. Neurosci., 19:165186, 1996.
D. Johnston and S. M.-S. Wu. Foundations of Cellular Neurophysiology. MIT
Press, Cambridge, 1995.
F. Joublin, F. Spengler, S. Wacquant, and H. R. Dinse. A columnar model of
somatosensory reorganizational plasticity based on Hebbian and non-Hebbian
learning rules. Biol. Cybern., 74:275286, 1996.
P. Jung, A. Cornell-Bell, K. S. Madden, and F. Moss. Noise-induced spiral waves
in astrocyte syncytia show evidence of self-organized criticality. J. Neurophys-
iol., 79:10981101, 1998.
P. Jung and G. Mayer-Kress. Noise controlled spiral growth in excitable media.
Chaos, 5:458462, 1995a.
P. Jung and G. Mayer-Kress. Spatiotemporal stochastic resonance in excitable
media. Phys. Rev. Lett., 74:21302133, 1995b.
M. Juusola, A. S. French, R. O. Uusitalo, and M. Weckstrom. Information
processing by graded-potential transmission through tonically active synapses.
TINS, 19:292297, 1996.
L. P. Kadanoff, S. R. Nagel, L. Wu, and S. Zhou. Scaling and universality in
avalanches. Phys. Rev. A, 39:65246537, 1989.
S. Kadar, J. Wang, and K. Showalter. Noise-supported travelling waves in sub-
excitable media. Nature, 391:770772, 1998.
E. R. Kandel, J. H. Schwartz, and T. M. Jessell. Principles of Neural Science.
Elsevier, New York, third edition, 1991.
J. Karbowski. Fisher information and temporal correlations for spiking neurons
with stochastic dynamics. Phys. Rev. E, 61:42354252, 2000.
S. Kastner, P. de Weerd, R. Desimone, and L. G. Ungerleider. Mechanisms of
directed attention in the human extrastriate cortex as revealed by functional
MRI. Science, 282:108111, 1998.
P. Katz, editor. Beyond Neurotransmission: Neuromodulation and Its Importance
for Information Processing. Oxford University Press, Oxford, 1999.

S. M. Kay. Fundamentals of Statistical Signal Processing. Volume I: Estimation


Theory. Prentice Hall, Upper Saddle River, NJ, 1993.

R. Kempter, W. Gerstner, and J. L. van Hemmen. Hebbian learning and spiking


neurons. Phys. Rev. E, 59:44984514, 1999.

H. Kettenmann and R. Grantyn, editors. Practical Electrophysiological Methods.


Wiley-Liss, New York, 1992.

U. Kim, T. Bal, and D. A. McCormick. Spindle waves are propagating synchro-


nized oscillations in the ferret LGNd in vitro. J. Neurophysiol., 74:13011323,
1995.

O. Kinouchi, S. T. R. Pinho, and C. P. C. Prado. Random-neighbor Olami-Feder-


Christensen slip-stick model. Phys. Rev. E, 58:39974000, 1998.

J. J. Knierim and D. C. Van Essen. Neuronal responses to static texture patterns in


area V1 of the alert macaque monkey. J. Neurophysiol., 67:961979, 1992.

B. W. Knight. Dynamics of encoding in a population of neurons. J. gen. Physiol.,


59:734766, 1972a.

B. W. Knight. The relationship between the firing rate of a single neuron and
the level of activity in a population of neurons. J. gen. Physiol., 59:767778,
1972b.

E. I. Knudsen and M. Konishi. A neural map of auditory space in the owl.


Science, 200:795797, 1978.

E. I. Knudsen, M. Konishi, and J. D. Pettigrew. Receptive fields of auditory


neurons in the owl. Science, 198:12781280, 1977.

A. J. Koch and H. Meinhardt. Biological pattern formation: from basic mecha-


nisms to complex structures. Rev. Mod. Phys., 66:14811507, 1994.

C. Koch. Computation and the single neuron. Nature, 385:207210, 1997.

C. Koch and H. Schuster. A simple network showing burst synchronization without


frequency-locking. Neural Comp., 4:211223, 1992.

E. Koechlin, J. L. Anton, and Y. Burnod. Bayesian inference in populations of


cortical neurons: a model of motion integration and segmentation in area MT.
Biol. Cybern., 80:2544, 1999.

W. Kohler. Gestalt Psychology. Liveright Publishing Corporation, New York,


1992.

T. Kohonen. Self-organized formation of topologically correct feature maps. Biol.


Cybern., 43:5969, 1982.

V. B. Kolmanovskii and V. R. Nosov. Stability of Functional Differential Equa-


tions. Academic Press, London, 1986.

P. Konig and T. B. Schillen. Stimulus-dependent assembly formation of oscilla-


tory responses: I. Synchronization. Neural Comp., 3:155166, 1991.

M. Konishi. Localization of acoustic signals in the owl. In J.-P. Ewert, R. R.


Capranica, and D. J. Ingle, editors, Advances in Vertebrate Neuroethology,
pages 227245. Plenum Press, New York, London, 1983.

M. Konishi. Centrally synthesized maps of sensory space. TINS, 9:163168, 1986.

M. Konishi. Deciphering the brain's code. Neural Comp., 3:118, 1991.

M. Konishi. Listening with two ears. Sci. Am., 268(4):3441, 1993.

W. B. Kristan Jr and B. K. Shaw. Population coding and behavioral choice.


Curr. Op. Neurobiol., 7:826831, 1997.

L. Kruger. Photographic Atlas of the Rat Brain: The Cell and Fiber Architecture
Illustrated in Three Planes with Stereotaxic Coordinates. Cambridge University
Press, Cambridge, 1994.

M. H. Kryger and T. Millar. Cheyne-Stokes respiration: stability of interacting


systems in heart failure. Chaos, 1:265269, 1991.

I. Kupferman and K. R. Weiss. The command neuron concept. Behav. Brain


Sci., 1:339, 1978.

L. Lapicque. Recherches quantitatives sur l'excitation électrique des nerfs traitée
comme une polarisation. J. Physiol. Pathol. Gen., 9:620635, 1907.

Y. Lass and M. Abeles. Transmission of information by the axon: I. Noise and


memory in the myelinated nerve fiber of the frog. Biol. Cybern., 19:6167,
1975.

S. B. Laughlin, R. R. de Ruyter van Steveninck, and J. C. Anderson. The


metabolic cost of neural information. Nature Neurosci., 1:3641, 1998.

D. Lee, N. L. Port, W. Kruse, and A. P. Georgopoulos. Variability and correlated


noise in the discharge of neurons in motor and parietal areas of the primate
cortex. J. Neurosci., 18:11611170, 1998.

D. K. Lee, L. Itti, C. Koch, and J. Braun. Attention activates winner-take-all


competition among visual filters. Nature Neurosci., 2:375381, 1999.

S. R. Lehky, A. Pouget, and T. J. Sejnowski. Neural models of binocular depth


perception. In Cold Spring Harbor Symposia on Quantitative Biology, Volume
LV, pages 765777. Cold Spring Harbor Laboratory Press, 1990.
J. Y. Lettvin, H. R. Maturana, W. S. McCulloch, and W. H. Pitts. What the
frog's eye tells the frog's brain. Proc. Inst. Radio Eng. NY, 47:19401951, 1959.
W. B. Levy and R. A. Baxter. Energy efficient neural codes. Neural Comp., 8:
531543, 1996.
G. D. Lewen, W. Bialek, and R. R. de Ruyter van Steveninck. Neural coding of
naturalistic motion stimuli. Network: Comput. Neural Syst., 12:317329, 2001.
Z. Li. Visual segmentation by contextual influences via intra-cortical interactions
in the primary visual cortex. Network: Comput. Neural Syst., 10:187212,
1999.
Z. Li. Computational design and nonlinear dynamics of a recurrent network
model of the primary visual cortex. Neural Comp., 13:17491780, 2001.
J. K. Lin, K. Pawelzik, U. Ernst, and T. J. Sejnowski. Irregular synchronous ac-
tivity in stochastically-coupled networks of integrate-and-fire neurons. Network:
Comput. Neural Syst., 9:333344, 1998.
J. F. Lindner, S. Chandramouli, A. R. Bulsara, M. Löcher, and W. L. Ditto.
Noise enhanced propagation. Phys. Rev. Lett., 81:50485051, 1998.
S. Lise and H. J. Jensen. Transitions in nonconserving models of self-organized
criticality. Phys. Rev. Lett., 76:23262329, 1996.
J. E. Lisman. Bursts as a unit of neural information: making unreliable synapses
reliable. TINS, 20:3843, 1997.
M. S. Livingstone and D. H. Hubel. Segregation of form, color, movement, and
depth: anatomy, physiology, and perception. Science, 240:740749, 1988.
A. Longtin. Stochastic resonance in neuron models. J. Stat. Phys., 70:309327,
1993.
A. Longtin, A. Bulsara, and F. Moss. Time-interval sequences in bistable systems
and the noise-induced transmissions of information by sensory neurons. Phys.
Rev. Lett., 67:656659, 1991.
A. Longtin, A. Bulsara, D. Pierson, and F. Moss. Bistability and the dynamics
of periodically forced sensory neurons. Biol. Cybern., 70:569578, 1994.
A. Longtin and J. G. Milton. Complex oscillations in the human pupil light reflex
with mixed and delayed feedback. Math. Biosci., 90:183199, 1988.

A. Longtin and J. G. Milton. Insight into the transfer function, gain, and oscilla-
tion onset for the pupil light reflex using nonlinear delay-differential equations.
Biol. Cybern., 61:5158, 1989a.

A. Longtin and J. G. Milton. Modelling autonomous oscillations in the human


pupil light reflex using non-linear delay-differential equations. Bull. Math. Biol.,
51:605624, 1989b.

A. Longtin, J. G. Milton, J. E. Bos, and M. C. Mackey. Noise and critical behavior


of the pupil light reflex at oscillation onset. Phys. Rev. A, 41:69927005, 1990.

S. J. Luck, L. Chelazzi, S. A. Hillyard, and R. Desimone. Neural mechanisms of


spatial selective attention in areas V1, V2, and V4 of macaque visual cortex.
J. Neurophysiol., 77:2442, 1997.

W. Maass and H. Markram. Synapses as dynamic memory buffers. Neural Net-


works, 15:155161, 2002.

W. Maass, T. Natschlager, and H. Markram. Real-time computing without stable


states: a new framework for neural computation based on perturbations. Neural
Comp., 14:25312560, 2002.

N. MacDonald. Biological Delay Systems: Linear Stability Theory. Cambridge


University Press, Cambridge, 1989.

D. M. MacKay and W. S. McCulloch. The limiting information capacity of a


neuronal link. Bull. Math. Biophys., 14:127135, 1952.

M. C. Mackey and U. an der Heiden. Dynamical diseases and bifurcations: un-


derstanding functional disorders in physiological systems. Funkt. Biol. Med.,
1:156164, 1982.

M. C. Mackey and U. an der Heiden. The dynamics of recurrent inhibition. J.


Math. Biol., 19:211225, 1984.

M. C. Mackey and L. Glass. Oscillation and chaos in physiological control sys-


tems. Science, 197:287289, 1977.

M. C. Mackey and J. G. Milton. Dynamical diseases. Ann. N. Y. Acad. Sci.,


504:1632, 1987.

Z. F. Mainen and T. J. Sejnowski. Reliability of spike timing in neocortical


neurons. Science, 268:15031506, 1995.

R. E. Mains and P. H. Patterson. Primary cultures of dissociated sympathetic


neurons: I. establishment of long-term growth in culture and studies of differ-
entiated properties. J. Cell Biol., 59:329345, 1973.

B. B. Mandelbrot and J. W. Van Ness. Fractional Brownian motions, fractional


noises and applications. SIAM Review, 10:422437, 1968.

A. Manwani and C. Koch. Detecting and estimating signals over noisy and
unreliable synapses: information-theoretic analysis. Neural Comp., 13:133,
2001.

H. Markram and B. Sakmann. Action potentials propagating back into dendrites


triggers changes in efficacy of single-axon synapses between layer V pyramidal
cells. Neurosci. Abstracts, 21:2007, 1995.

H. Markram and M. Tsodyks. Redistribution of synaptic efficacy between neo-


cortical pyramidal neurons. Nature, 382:807810, 1996.

H. Markram, Y. Wang, and M. Tsodyks. Differential signaling via the same axon
of neocortical pyramidal neurons. PNAS, 95:53235328, 1998.

K. A. C. Martin. A brief history of the feature detector. Cerebral Cortex, 4:


17, 1994.

K. A. C. Martin. The Pope and the grandmother – a frog's eye-view of theory.


Nature Neurosci. Suppl., 3:1169, 2000.

J. Massion and M. H. Woollacott. Posture and equilibrium. In A. M. Bron-


stein, T. Brandt, and M. H. Woollacott, editors, Clinical Disorders of Balance,
Posture and Gait, pages 118. Arnold, London, 1996.

J. H. R. Maunsell. The brain's visual world: representation of visual targets in


cerebral cortex. Science, 270:764769, 1995.

E. M. Maynard, N. G. Hatsopoulos, C. L. Ojakangas, B. C. Acuna, J. N. Sanes,


R. A. Normann, and J. P. Donoghue. Neuronal interactions improve cortical
population coding of movement direction. J. Neurosci., 19:80838093, 1999.

C. J. McAdams and J. H. R. Maunsell. Effects of attention on orientation-tuning


functions of single neurons in macaque cortical area V4. J. Neurosci., 19:
431441, 1999.

D. McAlpine and B. Grothe. Sound localization and delay lines – do mammals


fit the model? TINS, 26:347350, 2003.

W. S. McCulloch and W. Pitts. A logical calculus of the ideas immanent in


nervous activity. Bull. Math. Biophys., 5:115133, 1943.

B. McNamara, K. Wiesenfeld, and R. Roy. Observation of stochastic resonance


in a ring laser. Phys. Rev. Lett., 60:26262629, 1988.

M. R. Mehta, A. K. Lee, and M. A. Wilson. Role of experience and oscillations


in transforming a rate code into a temporal code. Nature, 417:741746, 2002.
M. R. Mehta, M. C. Quirk, and M. A. Wilson. Experience-dependent asymmetric
shape of hippocampal receptive fields. Neuron, 25:707715, 2000.
M. Meister. Multineuronal codes in retinal signaling. Proc. Natl. Acad. Sci. USA,
93:609614, 1996.
M. Meister and M. J. Berry II. The neural code of the retina. Neuron, 22:435450,
1999.
M. Meister, L. Lagnado, and D. A. Baylor. Concerted signaling by retinal gan-
glion cells. Science, 270:12071210, 1995.
B. W. Mel. NMDA-based pattern discrimination in a modeled cortical neuron.
Neural Comp., 4:502517, 1992.
B. W. Mel. Information processing in dendritic trees. Neural Comp., 6:10311085,
1994.
T. Metzinger, editor. Neural Correlates of Consciousness. MIT Press, Cambridge
MA, 2000.
A. A. Middleton and C. Tang. Self-organized criticality in nonconserved systems.
Phys. Rev. Lett., 74:742745, 1995.
K. D. Miller and D. J. C. MacKay. The role of constraints in Hebbian learning.
Neural Comp., 6:100126, 1994.
M. I. Miller and M. B. Sachs. Representation of voice pitch in discharge patterns
of auditory nerve fibers. Hearing Research, 14:257279, 1984.
J. G. Milton, U. an der Heiden, A. Longtin, and M. C. Mackey. Complex dynamics
and noise in simple neural networks with delayed mixed feedback. Biomed.
Biochim. Acta, 49:697707, 1990.
J. G. Milton and D. Black. Dynamic diseases in neurology and psychiatry. Chaos,
5:813, 1995.
J. G. Milton, J.-L. Cabrera, E. Rayburn, J. D. Hunter, and C. W. Eurich. Survival
times for a pencil balanced at the end of a finger. In Proceedings of the Meeting
of the American Physical Society, page I13.013. 2000.
J. G. Milton, P. H. Chu, and J. D. Cowan. Spiral waves in integrate-and-fire
neural networks. In S. J. Hanson, J. D. Cowan, and C. L. Giles, editors, Ad-
vances in Neural Information Processing Systems 5, pages 10011006. Morgan
Kaufmann, San Mateo, CA, 1993.

J. G. Milton, A. Longtin, A. Beuter, M. C. Mackey, and L. Glass. Complex


dynamics and bifurcations in neurology. J. Theor. Biol., 138:129147, 1989.

R. E. Mirollo and S. H. Strogatz. Synchronization of pulse-coupled biological


oscillators. SIAM J. Appl. Math., 50:16451662, 1990.

S. P. Moore, D. S. Rushmer, S. L. Windus, and L. M. Nashner. Human automatic


postural responses: responses to horizontal perturbations of stance in multiple
directions. Exp. Brain Res., 73:648658, 1988.

J. Moran and R. Desimone. Selective attention gates visual processing in the


extrastriate cortex. Science, 229:782784, 1985.

F. Moss, D. Pierson, and D. O'Gorman. Stochastic resonance: tutorial and


update. Int. J. Bif. Chaos, 4:13831397, 1994.

F. Moss and K. Wiesenfeld. The benefits of background noise. Sci. Am., 273(2):
6669, 1995.

B. C. Motter. Focal attention produces spatially selective processing in visual


cortical areas V1, V2, and V4 in the presence of competing stimuli. J. Neuro-
physiol., 70:909919, 1993.

J. D. Murray. Mathematical Biology. Springer-Verlag, Berlin, 1993.

M. R. Murray. Nervous tissues in vitro. In E. N. Willmer, editor, Cells and


Tissues in Culture: Methods, Biology, and Physiology, Vol. 2, pages 373455.
Academic Press, New York, 1965.

Y. Nagai, H. Gonzalez, A. Shrier, and L. Glass. Paroxysmal starting and stopping


of circulating waves in excitable media. Phys. Rev. Lett., 84:42484251, 2000.

J. Nagler, C. Hauert, and H. G. Schuster. Self-organized criticality in a nutshell.


Phys. Rev. E, 60:27062709, 1999.

J. S. Nagumo, S. Arimoto, and S. Yoshizawa. An active pulse transmission line


simulating nerve axon. Proc. I. R. E., 50:20612070, 1962.

H. Nakahara, S. Wu, and S. i. Amari. Attention modulation of neural tuning


through peak and base rate. Neural Comp., 13:20312047, 2001.

L. M. Nashner. Adapting reflexes controlling the human posture. Exp. Brain


Res., 26:5972, 1976.

L. M. Nashner. Fixed patterns of rapid postural responses among leg muscles


during stance. Exp. Brain Res., 30:1324, 1977.

L. M. Nashner and M. Woollacott. The organization of rapid postural adjust-


ments of standing humans: an experimental-conceptual model. In R. E. Talbot
and D. R. Humphrey, editors, Posture and Movement, pages 243257. Raven
Press, New York, 1979.

L. M. Nashner, M. Woollacott, and G. Tuma. Organization of rapid responses to


postural and locomotor-like perturbations of standing man. Exp. Brain Res.,
36:463476, 1979.

A. Neiman, L. Schimansky-Geier, A. Cornell-Bell, and F. Moss. Noise-enhanced


phase synchronization in excitable media. Phys. Rev. Lett., 83:48964899, 1999.

E. J. Nestler, S. E. Hyman, and R. Malenka. Molecular Basis of Neuropharma-


cology: A Foundation for Clinical Neuroscience. McGraw-Hill, 2001.

S. Nirenberg, S. M. Carcieri, A. L. Jacobs, and P. E. Latham. Retinal ganglion


cells act largely as independent encoders. Nature, 411:698701, 2001.

J. Nolte. The Human Brain: An Introduction to Its Functional Anatomy. Mosby


(Elsevier), Oxford, 2002.

L. G. Nowak and J. Bullier. The timing of information transfer in the visual


system. In K. Rockland, J. Kaas, and A. Peters, editors, Cerebral cortex,
extrastriate cortex in primates, Vol. 12, pages 205241. Plenum Press, New
York, 1997.

P. L. Nunez. Electric Fields of the Brain. Oxford University Press, Oxford, 1981.

P. L. Nunez. Neocortical Dynamics and Human EEG Rhythms. Oxford University


Press, Oxford, 1995.

A. O'Hagan. Kendall's Advanced Theory of Statistics. Volume 2B: Bayesian


Inference. Arnold, London, 1994.

T. Ohira and J. G. Milton. Delayed random walks. Phys. Rev. E, 52:32773280,


1995.

Z. Olami, H. J. S. Feder, and K. Christensen. Self-organized criticality in a


continuous, nonconservative cellular automaton modeling earthquakes. Phys.
Rev. Lett., 68:12441247, 1992.

B. A. Olshausen and D. J. Field. Emergence of simple-cell receptive field proper-


ties by learning a sparse code for natural images. Nature, 381:607609, 1996a.

B. A. Olshausen and D. J. Field. Natural image statistics and efficient coding.


Network: Comput. Neural Syst., 7:333339, 1996b.

C. R. Olson. Object-based vision and attention in primates. Curr. Op. Neurobiol.,


11:171179, 2001.

M. W. Oram, P. Foldiak, D. I. Perrett, and F. Sengpiel. The ideal homunculus:


decoding neural populations signals. TINS, 21:259265, 1998.

M. W. Oram, M. C. Wiener, R. Lestienne, and B. J. Richmond. Stochastic


nature of precisely timed spike patterns in visual system neuronal responses.
J. Neurophysiol., 81:30213033, 1999.

C. C. Pack, V. K. Berezovskii, and R. T. Born. Dynamic properties of neurons


in cortical area MT in alert and anaesthetized monkeys. Nature, 414:905908,
2001.

A. R. Palmer and I. M. Winter. Cochlear nerve and cochlear nucleus responses


to the fundamental frequency of voiced speech sounds and harmonic complex
tones. Adv. Biosci., 83:231239, 1992.

S. Panzeri, G. Biella, E. T. Rolls, W. E. Skaggs, and A. Treves. Speed, noise,


information and the graded nature of neuronal responses. Network: Comp.
Neural Syst., 7:365370, 1996.

S. Panzeri, R. S. Petersen, S. R. Schultz, M. Lebedev, and M. E. Diamond. The


role of spike timing in the coding of stimulus location in rat somatosensory
cortex. Neuron, 29:769777, 2001.

S. Panzeri and S. R. Schultz. A unified approach to the study of temporal,


correlational, and rate coding. Neural Comp., 13:13111349, 2001.

S. Panzeri, S. R. Schultz, A. Treves, and E. T. Rolls. Correlations and the


encoding of information in the nervous system. Proc. R. Soc. Lond. B, 266:
10011012, 1999.

M. A. Paradiso. A theory for the use of visual orientation information which


exploits the columnar structure of striate cortex. Biol. Cybern., 58:3549,
1988.

K. Pawelzik and C. W. Eurich. Das Gehirn als dynamisches System. In IK 2000


Interdisziplinares Kolleg. arenDTaP, Bremen, 2000.

D. H. Perkel, G. L. Gerstein, and G. P. Moore. Neuronal spike trains and stochas-


tic point processes. II. Simultaneous spike trains. Biophys. J., 7:419440, 1967.

M. F. Peschl. Reprasentation und Konstruktion. Kognitions- und neuroinforma-


tische Konzepte als Grundlage einer naturalisierten Epistemologie und Wis-
senschaftstheorie. Vieweg, Wiesbaden, 1994.

R. J. Peterka. Postural control model interpretation of stabilogram diffusion


analysis. Biol. Cybern., 82:335343, 2000.

R. S. Petersen, S. Panzeri, and M. E. Diamond. Population coding of stimulus


location in rat somatosensory cortex. Neuron, 32:503514, 2001.

W. A. Phillips and W. Singer. In search of common foundations for cortical


computation. Behav. Brain Sci., 40:657722, 1997.

L. Pietronero, A. Vespignani, and S. Zapperi. Renormalization scheme for self-


organized criticality in sandpile models. Phys. Rev. Lett., 72:16901693, 1994.

N. Platt, E. A. Spiegel, and C. Tresser. On-off intermittency: a mechanism for


bursting. Phys. Rev. Lett., 70:279282, 1993.

H. E. Plesser and T. Geisel. Markov analysis of stochastic resonance in a period-


ically driven integrate-and-fire neuron. Phys. Rev. E, 59:70087017, 1999.

M. I. Posner and C. D. Gilbert. Attention and primary visual cortex. PNAS, 96:
25852587, 1999.

S. M. Potter and T. B. DeMarse. A new approach to neural cell culture for


long-term studies. J. Neurosci. Meth., 110:1724, 2001.

A. Pouget, P. Dayan, and R. Zemel. Information processing with populations


codes. Nature Rev. Neurosci., 1:125132, 2000.

A. Pouget, S. Deneve, and J.-C. Ducom. Narrow versus wide tuning curves:
what's best for a population code? Neural Comp., 11:8590, 1999.

A. Pouget and L. H. Snyder. Computational approaches to sensorimotor trans-


formations. Nature Neurosci. Suppl., 3:11921198, 2000.

A. Pouget, K. Zhang, S. Deneve, and P. E. Latham. Statistically efficient esti-


mation using population coding. Neural Comp., 10:373401, 1998.

W. Rall. Membrane potential transients and membrane time constant of mo-


toneurons. Exp. Neurol., 2:503532, 1960.

G. H. Recanzone, R. H. Wurtz, and U. Schwarz. Responses of MT and MST


neurons to one and two moving objects in the receptive field. J. Neurophysiol.,
78:29042915, 1997.

D. S. Reich, F. Mechler, and J. D. Victor. Independent and redundant information


in nearby cortical neurons. Science, 294:25662568, 2001.

H. Reichert. Neurobiologie. Georg Thieme Verlag, Stuttgart, New York, 1990.



P. Reinagel. How do visual neurons respond in the real world? Curr. Opin.
Neurobiol., 11:437442, 2001.

J. H. Reynolds, L. Chelazzi, and R. Desimone. Competitive mechanisms subserve


attention in macaque areas V2 and V4. J. Neurosci., 19:17361753, 1999.

F. Rieke, D. Warland, and W. Bialek. Coding efficiency and information rates in


sensory systems. Europhys. Lett., 22:151156, 1993.

F. Rieke, D. Warland, R. de Ruyter van Steveninck, and W. Bialek. Spikes -


Exploring the Neural Code. MIT Press, Cambridge, MA, 1997.

M. Riesenhuber and T. Poggio. Hierarchical models of object recognition in


cortex. Nature Neurosci., 2:10191025, 1999.

D. L. Ringach, M. J. Hawken, and R. Shapley. Dynamics of orientation tuning


in macaque primary visual cortex. Nature, 387:281284, 1997a.

D. L. Ringach, G. Sapiro, and R. Shapley. A subspace reverse-correlation tech-


nique for the study of visual neurons. Vision Res., 37:24552464, 1997b.

J. Rinzel. Discussion: electrical excitability of cells, theory and experiment:


review of the Hodgkin-Huxley foundation and an update. Bull. Math. Biol.,
52:523, 1990.

M.-S. Rioult-Pedotti, D. Friedman, and J. P. Donoghue. Learning-induced LTP


in neocortex. Science, 290:533536, 2000.

B. D. Ripley. Pattern Recognition and Neural Networks. Cambridge University


Press, Cambridge, 1996.

P. R. Roelfsema, A. K. Engel, P. Konig, and W. Singer. Visuomotor integration


is associated with zero time-lag synchronization among cortical areas. Nature,
385:157161, 1997.

P. R. Roelfsema, V. A. F. Lamme, and H. Spekreijse. Object-based attention in


the primary visual cortex of the macaque monkey. Nature, 395:376380, 1998.

F. Rohrbein and C. Zetzsche. The statistics of natural scenes and Weber's law.
In R. P. Wurtz and M. Lappe, editors, Dynamic Perception, pages 233238.
Akademische Verlagsgesellschaft, Berlin, 2002.

R. Rojas. Theorie der neuronalen Netze. Springer-Verlag, Berlin, 1993.

F. Rosenblatt. The perceptron: a probabilistic model for information storage and


organization in the brain. Psychol. Rev., 65:386408, 1958.

A. L. Roskies. The binding problem. Neuron, 24:79, 1999.



J. P. Rospars and P. Lansky. Stochastic model neuron without resetting of den-


dritic potential: application to the olfactory system. Biol. Cybern., 69:283294,
1993.

G. Roth. Experimental analysis of the prey catching behavior of Hydromantes


italicus Dunn (Amphibia, Plethodontidae). J. Comp. Physiol., 109:4758, 1976.

G. Roth. Responses in the optic tectum of the salamander Hydromantes italicus


to moving prey stimuli. Exp. Brain Res., 45:386392, 1982.

G. Roth. Visual Behavior in Salamanders. Springer-Verlag, Berlin, 1987.

G. Roth. Das Gehirn und seine Wirklichkeit. Suhrkamp, Frankfurt, 1994.

G. Roth. Fuhlen, Denken, Handeln Wie das Gehirn unser Verhalten steuert.
Suhrkamp, Frankfurt, 2001.

G. Roth and W. Himstedt. Response characteristics of neurons in the tectum


opticum of Salamandra. Naturwissenschaften, 65:657658, 1978.

R. Rouse, S. Han, and J. E. Lukens. Flux amplification using stochastic su-


perconducting quantum interference devices. Appl. Phys. Lett., 60:108110,
1995.

D. L. Ruderman. The statistics of natural images. Network: Comput. Neural


Syst., 5:517548, 1994.

D. E. Rumelhart, G. E. Hinton, and R. J. Williams. Learning representations by


back-propagating errors. Nature, 323:533536, 1986.

D. F. Russell, L. A. Wilkens, and F. Moss. Use of behavioural stochastic resonance


by paddle fish for feeding. Nature, 402:291294, 1999.

E. Salinas and L. F. Abbott. Vector reconstruction from firing rates. J. Comp.


Neurosci., 1:89107, 1994.

E. Salinas and L. F. Abbott. Transfer of coded information from sensory to motor


networks. J. Neurosci., 15:64616474, 1995.

T. D. Sanger. Probability density estimation for the interpretation of neural


population codes. J. Neurophysiol., 76:27902793, 1996.

R. Sarpeshkar. Analog versus digital: extrapolating from electronics to neurobi-


ology. Neural Comp., 10:16011638, 1998.

S. J. Schmidt, editor. Der Diskurs des Radikalen Konstruktivismus. Suhrkamp,


Frankfurt, 1987.

S. J. Schmidt, editor. Kognition und Gesellschaft. Der Diskurs des Radikalen


Konstruktivismus 2. Suhrkamp, Frankfurt, 1992.
C. E. Schreiner and G. Langner. Periodicity coding in the inferior colliculus of
the cat. II. Topographical organization. J. Neurophysiol., 60:18231840, 1988.
S. R. Schultz and S. Panzeri. Temporal correlations and neural spike train entropy.
Phys. Rev. Lett., 86:58235826, 2001.
H. G. Schuster. Deterministic Chaos. VCH, Weinheim, 1988.
S. H. Scott, P. L. Gribble, K. M. Graham, and D. W. Cabel. Dissociation between
hand motion and population vectors from neural activity in motor cortex.
Nature, 413:161165, 2001.
I. Segev, J. Rinzel, and G. M. Shepherd. The Theoretical Foundation of Dendritic
Function. Selected Papers of Wilfrid Rall with Commentaries. MIT Press,
Cambridge MA, 1995.
F. Sengpiel, C. Blakemore, and R. Harrad. Interocular suppression in the primary
visual cortex: a possible neural basis of binocular rivalry. Vision Res., 35:179
195, 1995.
H. S. Seung. Half a century of Hebb. Nature Neurosci. Suppl., 3:1166, 2000.
H. S. Seung and H. Sompolinsky. Simple models for reading neuronal population
codes. Proc. Natl. Acad. Sci. USA, 90:1074910753, 1993.
M. N. Shadlen and W. T. Newsome. The variable discharge of cortical neu-
rons: implications for connectivity, computation, and information coding. J.
Neurosci., 18:38703896, 1998.
C. E. Shannon. The mathematical theory of communication. Bell Syst. Tech. J.,
27:379423, 623653, 1949.
J. Shear. Explaining Consciousness: the Hard Problem. MIT Press, Cambridge,
MA, 1997.
C. Sherrington. Man on his nature. Cambridge University Press, Cambridge,
1940.
G. Silberberg, M. Bethge, H. Markram, K. Pawelzik, and M. Tsodyks. Dynamics
of population rate codes in ensembles of neocortical neurons. J. Neurophysiol.,
in press.
A. M. Sillito, K. Grieve, H. Jones, J. Cudeiro, and J. Davies. Visual cortical
mechanisms detecting focal orientation discontinuities. Nature, 378:492496,
1995.

E. P. Simoncelli and B. A. Olshausen. Natural image statistics and neural repre-


sentation. Annu. Rev. Neurosci., 24:11931215, 2001.

W. Singer. Synchronization of cortical activity and its putative role in information


processing and learning. Annu. Rev. Physiol., 55:349374, 1993.

W. Singer and C. M. Gray. Visual feature integration and the temporal correlation
hypothesis. Annu. Rev. Neurosci., 18:555586, 1995.

S. Single and A. Borst. Dendritic integration and its role in computing image
velocity. Science, 281:18481850, 1998.

G. D. Smith, C. L. Cox, S. M. Sherman, and J. Rinzel. A firing-rate model of


spike-frequency adaptation in sinusoidally driven thalamocortical relay cells.
Thalamus & Related Systems, 11:122, 2001.

H. P. Snippe and J. J. Koenderink. Discrimination thresholds for channel-coded


systems. Biol. Cybern., 66:543551, 1992a.

H. P. Snippe and J. J. Koenderink. Information in channel-coded systems: cor-


related receivers. Biol. Cybern., 67:183190, 1992b.

J. E. S. Socolar, G. Grinstein, and C. Jayaprakash. On self-organized criticality


in nonconserving systems. Phys. Rev. E, 47:23662376, 1993.

W. R. Softky and C. Koch. Cortical cells should fire regularly, but do not. Neural
Comp., 4:643646, 1992.

W. R. Softky and C. Koch. The highly irregular firing of cortical cells is incon-
sistent with temporal integration of random EPSPs. J. Neurosci., 13:334350,
1993.

S. Song, K. D. Miller, and L. F. Abbott. Competitive Hebbian learning through


spike-timing-dependent synaptic plasticity. Nature Neurosci., 3:919926, 2000.

D. Sornette, A. Johansen, and I. Dornic. Mapping self-organized criticality onto


criticality. J. Phys. I, 5:325335, 1995.

L. Spillmann and J. S. Werner, editors. Visual Perception: The Neurophysiolog-


ical Foundations. Academic Press, London, 1990.

L. R. Stanford. Conduction velocity variations minimize conduction time differ-


ences among retinal ganglion cell axons. Science, 238:358360, 1987.

G. B. Stanley, F. F. Li, and Y. Dan. Reconstruction of natural scenes from


ensemble responses in the lateral geniculate nucleus. J. Neurosci., 19:8036
8042, 1999.

P. N. Steinmetz, A. Roy, P. J. Fitzgerald, S. S. Hsiao, K. O. Johnson, and


E. Niebur. Attention modulates synchronized neuronal firing in primate so-
matosensory cortex. Nature, 404:187190, 2000.

H. Stemmann, A. Wannig, E. L. Schulzke, C. W. Eurich, and W. A. Freiwald.


Population analysis of stimulus representation in rat primary visual cortex. In
N. Elsner and H. Zimmermann, editors, The Neurosciences from Basic Re-
search to Therapy. Proceedings of the 29th Gottingen Neurobiology Conference,
pages 607608. Georg Thieme Verlag, Stuttgart, 2003.

V. Steuber and D. J. Willshaw. Adaptive leaky integrator models of cerebellar


Purkinje cells can learn the clustering of temporal patterns. Neurocomputing,
2627:271276, 1999.

B. Stevens, S. Tanner, and R. D. Fields. Control of myelination by specific


patterns of neural impulses. J. Neurosci., 18:93039311, 1998.

A. Stuart, K. Ord, and S. Arnold. Kendall's Advanced Theory of Statistics.


Volume 2A: Classical Inference and the Linear Model. Arnold, London, 1999.

G. J. Stuart and B. Sakmann. Active propagation of somatic action potentials


into neocortical pyramidal cell dendrites. Nature, 367:6972, 1994.

N. Suga. Biosonar and neural computation in bats. Sci. Am., 262(6):3441, 1990.

W. E. Sullivan and M. Konishi. Segregation of stimulus phase and intensity


coding in the cochlear nucleus of the barn owl. J. Neurosci., 4:17871799,
1984.

N. V. Swindale. Orientation tuning curves: empirical description and estimation


of parameters. Biol. Cybern., 78:4556, 1998.

D. W. Tank and J. J. Hopfield. Neural computation by concentrating information


in time. Proc. Natl. Acad. Science USA, 84:18961900, 1987.

P. Tass, J. Kurths, M. G. Rosenblum, G. Guasti, and H. Hefter. Delay-induced


transitions in visually guided movements. Phys. Rev. E, 54:R2224R2227, 1996.

T. Tateno and Y. Jimbo. Activity-dependent enhancement in the reliability of


correlated spike timings in cultured cortical neurons. Biol. Cybern., 80:4555,
1999.

F. E. Theunissen, S. V. David, N. C. Singh, A. Hsu, W. E. Vinje, and J. L.


Gallant. Estimating spatio-temporal receptive fields of auditory and visual
neurons from their responses to natural stimuli. Network: Comp. Neural. Syst.,
12:289316, 2001.

A. Thiel and C. W. Eurich. Regular dynamics in hippocampal pyramidal cells despite strongly delayed recurrent inhibition. In NeuroNord Conference on Cognitive and Emotional Neuroscience. Delmenhorst, 2002.

A. Thiel, C. W. Eurich, and H. Schwegler. Stabilized dynamics in physiological and neural systems despite strongly delayed feedback. In J. R. Dorronsoro, editor, Artificial Neural Networks – ICANN 2002, pages 15–20. Springer-Verlag, Berlin, 2002.

A. Thiel, H. Schwegler, and C. W. Eurich. Complex dynamics is abolished in delayed recurrent systems with distributed feedback times. Complexity, 8:102–108, 2003:L.

S. Thorpe, A. Delorme, and R. Van Rullen. Spike-based strategies for rapid processing. Neural Networks, 14:715–725, 2001.

S. J. Thorpe, D. Fize, and C. Marlot. Speed of processing in the human visual system. Nature, 381:520–522, 1996.

S. J. Thorpe and J. Gautrais. Rapid visual processing using spike asynchrony. In M. C. Mozer, M. I. Jordan, and T. Petsche, editors, Advances in Neural Information Processing Systems 9, pages 901–907. MIT Press, Cambridge MA, 1997.

S. J. Thorpe and J. Gautrais. Rank order coding. In J. Bower, editor, Computational Neuroscience, pages 113–118. Plenum Press, New York, 1998.

M. Timme, F. Wolf, and T. Geisel. Prevalence of unstable attractors in networks of pulse-coupled oscillators. Phys. Rev. Lett., 89:154105, 2002.

G. Tomko and D. Crapper. Neuronal variability: non-stationary responses to identical visual stimuli. Brain Res., 79:405–418, 1974.

L. J. Toth and J. A. Assad. Dynamic coding of behaviourally relevant stimuli in parietal cortex. Nature, 415:165–168, 2002.

J. Tougaard. Signal detection theory, detectability and stochastic resonance effects. Biol. Cybern., 87:79–90, 2002.

R. D. Traub and R. Miles. Neuronal Networks of the Hippocampus. Cambridge University Press, Cambridge, 1991.

R. D. Traub, M. A. Whittington, E. H. Buhl, J. G. R. Jefferys, and H. J. Faulkner. On the mechanism of the γ→β frequency shift in neuronal oscillations induced in rat hippocampal slices by tetanic stimulation. J. Neurosci., 19:1088–1105, 1999.

A. Treisman. Features and objects in visual processing. Sci. Am., 254(11):114–125, 1986.

A. Treisman and G. Gelade. A feature-integration theory of visual attention. Cognitive Psychol., 12:97–136, 1980.

S. Treue. Neural correlates of attention in primate visual cortex. TINS, 24:295–300, 2001.

S. Treue, K. Hol, and H.-J. Rauber. Seeing multiple directions of motion – physiology and psychophysics. Nature Neurosci., 3:270–276, 2000.

S. Treue and J. H. R. Maunsell. Attentional modulation of visual motion processing in cortical areas MT and MST. Nature, 382:539–541, 1996.

S. Treue and J. C. Martinez Trujillo. Feature-based attention influences motion processing gain in macaque visual cortex. Nature, 399:575–579, 1999.

M. V. Tsodyks, T. Kenet, A. Grinvald, and A. Arieli. Linking spontaneous activity of single cortical neurons and the underlying functional architecture. Science, 286:1943–1946, 1999.

M. V. Tsodyks and H. Markram. The neural code between neocortical pyramidal neurons depends on neurotransmitter release probability. Proc. Natl. Acad. Sci. USA, 94:719–723, 1997.

M. V. Tsodyks, K. Pawelzik, and H. Markram. Neural networks with dynamic synapses. Neural Comp., 10:821–835, 1998.

M. V. Tsodyks and T. J. Sejnowski. Rapid state switching in balanced cortical network models. Network: Comput. Neural Syst., 6:111–124, 1995.

T. Tsuchiya and M. Katori. Proof of breaking of self-organized criticality in a nonconservative Abelian sandpile model. Phys. Rev. E, 61:1183–1188, 2000.

H. C. Tuckwell. Introduction to Theoretical Neurobiology: Volume 1, Linear Cable Theory and Dendritic Structure. Cambridge University Press, Cambridge, 1988a.

H. C. Tuckwell. Introduction to Theoretical Neurobiology: Volume 2, Nonlinear and Stochastic Theories. Cambridge University Press, Cambridge, 1988b.

T. Tversky and R. Miikkulainen. Modeling directional selectivity using self-organizing delay-adaptation maps. Neurocomputing, 44–46:679–684, 2002.

L. Vaina. From Retina to Cortex: Selected Papers of David Marr. Birkhäuser, Boston, 1991.

C. van der Togt, V. A. F. Lamme, and H. Spekreijse. Functional connectivity within the visual cortex of the rat shows state changes. Europ. J. Neurosci., 10:1490–1507, 1998.

R. Van Rullen, J. Gautrais, A. Delorme, and S. Thorpe. Face processing using one spike per neurone. BioSystems, 48:229–239, 1998.

R. Van Rullen and S. J. Thorpe. Rate coding versus temporal order coding: what the retinal ganglion cells tell the visual cortex. Neural Comp., 13:1255–1283, 2001.

C. van Vreeswijk and H. Sompolinsky. Chaos in neuronal networks with balanced excitatory and inhibitory activity. Science, 274:1724–1726, 1996.

R. J. A. van Wezel, M. J. M. Lankheet, F. A. J. Verstraten, A. F. M. Maree, and W. A. van de Grind. Responses of complex cells in area 17 of the cat to bi-vectorial transparent motion. Vision Res., 36:2805–2813, 1996.

J. A. Varela, K. Sen, J. Gibson, J. Fost, L. F. Abbott, and S. B. Nelson. A quantitative description of short-term plasticity at excitatory synapses in layer 2/3 of rat primary visual cortex. J. Neurosci., 17:7926–7940, 1997.

K. Vasilakos and A. Beuter. Effects of noise on a delayed visual feedback system. J. theor. Biol., 165:389–407, 1993.

S. C. Venkataramani, T. M. Antonsen Jr., E. Ott, and J. C. Sommerer. On-off intermittency: power spectrum and fractal properties of time series. Physica D, 96:66–99, 1996.

P. Verghese. Visual search and attention: a signal detection theory approach. Neuron, 31:523–535, 2001.

A. Vespignani and S. Zapperi. Order parameter and scaling fields in self-organized criticality. Phys. Rev. Lett., 78:4793–4796, 1997.

A. Vespignani, S. Zapperi, and V. Loreto. Renormalization of nonequilibrium systems with critical stationary states. Phys. Rev. Lett., 77:4560–4563, 1996.

J.-F. Vibert, K. Pakdaman, and N. Azmy. Interneural delay modification synchronizes biologically plausible neural networks. Neural Networks, 7:589–607, 1994.

J. D. Victor. Temporal aspects of neural coding in the retina and lateral geniculate. Network: Comput. Neural Syst., 10:R1–R66, 1999.

R. Vogels. Population coding of stimulus orientation by striate cortical cells. Biol. Cybern., 64:25–31, 1990.

E. von Glasersfeld. Radikaler Konstruktivismus. Suhrkamp, Frankfurt, 1997.

H. von Helmholtz. Handbuch der physiologischen Optik. Leopold Voss, Leipzig, 1867.

T. Wachtler, C. Wehrhahn, and B. B. Lee. A simple model of human foveal ganglion cell responses to hyperacuity stimuli. J. Comput. Neurosci., 3:73–82, 1996.

H. Wagner and T. Takahashi. Influence of temporal cues on acoustic motion-direction sensitivity of auditory neurons in the owl. J. Neurophysiol., 68(6):2063–2076, 1992.

J. D. Wallis, K. C. Anderson, and E. K. Miller. Single neurons in prefrontal cortex encode abstract rules. Nature, 411:953–956, 2001.

D. K. Warland, P. Reinagel, and M. Meister. Decoding visual information from a population of retinal ganglion cells. J. Neurophysiol., 78:2336–2350, 1997.

N. M. Weinberger. Dynamic regulation of receptive fields and maps in the adult sensory cortex. Annu. Rev. Neurosci., 18:129–158, 1995.

J. Wessberg, C. R. Stambaugh, J. D. Kralik, P. D. Beck, M. Laubach, J. K. Chapin, J. Kim, S. J. Biggs, M. A. Srinivasan, and M. A. L. Nicolelis. Real-time prediction of hand trajectory by ensembles of cortical neurons in primates. Nature, 408:361–365, 2000.

I. M. White and S. P. Wise. Rule-dependent neuronal activity in the prefrontal cortex. Exp. Brain Res., 126:315–335, 1999.

M. C. Wiener and B. J. Richmond. Decoding spike trains instant by instant using order statistics and the mixture-of-Poissons model. J. Neurosci., 23:2394–2406, 2003.

K. Wiesenfeld, D. Pierson, E. Pantazelou, C. Dames, and F. Moss. Stochastic resonance on a circle. Phys. Rev. Lett., 72:2125–2129, 1994.

W. Wiggers. Elektrophysiologische, neuroanatomische und verhaltensphysiologische Untersuchungen zur visuellen Verhaltenssteuerung bei lungenlosen Salamandern. PhD thesis, Universität Bremen, 1991.

W. Wiggers, C. W. Eurich, G. Roth, and H. Schwegler. Salamander und Simulander: Experimente und Modellierung zur Raumorientierung bei Schleuderzungensalamandern. Neuroforum, 1:616, 1995a.

W. Wiggers and G. Roth. Anatomy, neurophysiology and functional aspects of the nucleus isthmi in salamanders of the family Plethodontidae. J. Comp. Physiol. A, 169:165–176, 1991.

W. Wiggers, G. Roth, C. W. Eurich, and A. Straub. Binocular depth perception mechanisms in tongue-projecting salamanders. J. Comp. Physiol. A, 176:365–377, 1995b.

S. D. Wilke and C. W. Eurich. What does a neuron talk about? In M. Verleysen, editor, ESANN'99 – European Symposium on Artificial Neural Networks, pages 435–440. D-Facto, Brussels, 1999.

S. D. Wilke and C. W. Eurich. Neural spike statistics modify the impact of background noise. Neurocomputing, 38–40:445–450, 2001:La.

S. D. Wilke and C. W. Eurich. Representational accuracy of stochastic neural populations. Neural Comp., 14:155–189, 2001:Lb.

S. D. Wilke and C. W. Eurich. On the functional role of noise correlations in the nervous system. Neurocomputing, 44–46:1023–1028, 2002.

S. D. Wilke, A. Thiel, C. W. Eurich, M. Greschner, M. Bongard, J. Ammermüller, and H. Schwegler. Population coding of motion patterns in the early visual system. J. Comp. Physiol. A, 187:549–558, 2001:L.

H. R. Wilson, R. Blake, and S.-H. Lee. Dynamics of travelling waves in visual perception. Nature, 412:907–910, 2001.

H. R. Wilson and J. D. Cowan. Excitatory and inhibitory interactions in localized populations of model neurons. Biophys. J., 12:1–24, 1972.

H. R. Wilson and J. D. Cowan. A mathematical theory of the functional dynamics of cortical and thalamic nervous tissue. Kybernetik, 13:55–80, 1973.

M. A. Wilson and B. L. McNaughton. Dynamics of the hippocampal ensemble code for space. Science, 261:1055–1058, 1993.

A. T. Winfree. When Time Breaks Down. Princeton University Press, Princeton, NJ, 1987.

R. Woesler. Object segmentation model: analytical results and biological implications. Biol. Cybern., 85:203–210, 2001.

F. Wolf, H.-U. Bauer, K. Pawelzik, and T. Geisel. Organization of the visual cortex. Nature, 382:306, 1996.

J. M. Wolfe and S. C. Bennett. Preattentive object files: shapeless bundles of basic features. Vision Res., 37:25–43, 1997.

D. Wolpert and Z. Ghahramani. Computational principles of movement neuroscience. Nature Neurosci. Suppl., 3:1212–1217, 2000.

M. H. Woollacott, C. von Hofsten, and B. Rösblad. Relation between muscle response onset and body segmental movements during postural perturbations in humans. Exp. Brain Res., 72:593–604, 1988.

F. Wörgötter and U. T. Eysel. Context, state and the receptive fields of striatal cortex cells. TINS, 23:497–503, 2000.

F. Wörgötter, K. Suder, Y. Zhao, N. Kerscher, U. T. Eysel, and K. Funke. State-dependent receptive-field restructuring in the visual cortex. Nature, 396:165–168, 1998.

S. Wu, S.-i. Amari, and H. Nakahara. Population coding and decoding in a neural field: a computational study. Neural Comp., 14:999–1026, 2002.

W. Yao, P. Yu, and C. Essex. Delayed stochastic differential model for quiet standing. Phys. Rev. E, 63:021902, 2001.

Y. Yeshurun and M. Carrasco. Attention improves or impairs visual performance by enhancing spatial resolution. Nature, 396:72–75, 1998.

Y. Yeshurun and M. Carrasco. Spatial attention improves performance in spatial resolution tasks. Vision Res., 39:293–306, 1999.

H. Yoon and H. Sompolinsky. The effect of correlations on the Fisher information of population codes. In M. S. Kearns, S. A. Solla, and D. A. Cohn, editors, Advances in Neural Information Processing Systems 11, pages 167–173. MIT Press, Cambridge MA, 1999.

A. W. Young and H. D. Ellis, editors. Handbook of Research on Face Processing. Elsevier, New York, 1989.

T. Y. Young and T. W. Calvert. Classification, Estimation and Pattern Recognition. American Elsevier, New York, 1974.

X. Yu and E. R. Lewis. Studies with spike initiators: linearization by noise allows continuous signal modulation in neural networks. IEEE Trans. Biomed. Eng., 36:36–43, 1989.

R. Yuste and D. W. Tank. Dendritic integration in mammalian neurons, a century after Cajal. Neuron, 16:701–716, 1996.

A. M. Zador, H. Agmon-Snir, and I. Segev. The morphoelectrotonic transform: a graphical approach to dendritic function. J. Neurosci., 15:1669–1682, 1995.

A. M. Zador and L. E. Dobrunz. Dynamic synapses in the cortex. Neuron, 19:1–4, 1997.

R. S. Zemel, P. Dayan, and A. Pouget. Probabilistic interpretation of population codes. Neural Comp., 10:403–430, 1998.

R. S. Zemel and G. E. Hinton. Learning population codes by minimizing description length. Neural Comp., 7:549–564, 1995.

C. Zetzsche. The visual system is blind to almost all possible images. In R. P. Würtz and M. Lappe, editors, Dynamic Perception, pages 239–244. Akademische Verlagsgesellschaft, Berlin, 2002.

K. Zhang, I. Ginzburg, B. L. McNaughton, and T. J. Sejnowski. Interpreting neuronal population activity by reconstruction: a unified framework with application to hippocampal place cells. J. Neurophysiol., 79:1017–1044, 1998a.

K. Zhang and T. J. Sejnowski. Neuronal tuning: to sharpen or broaden? Neural Comp., 11:75–84, 1999.

L. I. Zhang, H. W. Tao, C. E. Holt, W. A. Harris, and M.-m. Poo. A critical window for cooperation and competition among developing retinotectal synapses. Nature, 395:37–44, 1998b.

A. Ziemke and O. Breidbach, editors. Repräsentationismus – Was sonst? Vieweg, Braunschweig, 1996.

H. Zimmermann. Synaptic Transmission. Georg Thieme Verlag, Oxford University Press, Stuttgart, New York, 1993.

E. Zohary. Population coding of visual stimuli by cortical neurons tuned to more than one dimension. Biol. Cybern., 66:265–272, 1992.

E. Zohary, M. N. Shadlen, and W. T. Newsome. Correlated neuronal discharge rate and its implications for psychophysical performance. Nature, 370:140–143, 1994.
Part II

Reprints of Original Papers

Chapter 5

List of Original Papers

Journal Articles

1. C. W. Eurich, G. Roth, H. Schwegler and W. Wiggers, Simulander: a neural network model for the orientation movement of salamanders, Journal of Comparative Physiology A 176 (1995) 379–389.

2. C. W. Eurich and J. G. Milton, Noise-induced transitions in human postural sway, Physical Review E 54 (1996) 6681–6684.

3. C. W. Eurich and H. Schwegler, Coarse coding: calculation of the resolution achieved by a population of large receptive field neurons, Biological Cybernetics 76 (1997) 357–363.

4. C. W. Eurich, H. Schwegler and R. Woesler, Coarse coding: applications to the visual system of salamanders, Biological Cybernetics 77 (1997) 41–47.

5. C. W. Eurich, K. Pawelzik, U. Ernst, J. D. Cowan and J. G. Milton, Dynamics of self-organized delay adaptation, Physical Review Letters 82 (1999) 1594–1597.

6. C. W. Eurich and S. D. Wilke, Multi-dimensional encoding strategy of spiking neurons, Neural Computation 12 (2000) 1519–1529.

7. S. D. Wilke and C. W. Eurich, Neural spike statistics modify the impact of background noise, Neurocomputing 38–40 (2001) 445–450.

8. S. D. Wilke, A. Thiel, C. W. Eurich, M. Greschner, M. Bongard, J. Ammermüller and H. Schwegler, Population coding of motion patterns in the early visual system, Journal of Comparative Physiology A 187 (2001) 549–558.

9. S. D. Wilke and C. W. Eurich, Representational accuracy of stochastic neural populations, Neural Computation 14 (2001) 155–189.

10. C. W. Eurich, M. C. Mackey and H. Schwegler, Recurrent inhibitory dynamics: the role of state-dependent distributions of conduction delay times, Journal of Theoretical Biology 216 (2002) 31–50.

11. W. Freiwald, H. Stemmann, A. Wannig, A. K. Kreiter, U. G. Hofmann, M. D. Hills, G. T. A. Kovacs, D. T. Kewley, J. M. Bower, C. W. Eurich and S. D. Wilke, Stimulus representation in rat primary visual cortex: multi-electrode recordings with micro-machined silicon probes and estimation theory, Neurocomputing 44–46 (2002) 407–416.

12. C. W. Eurich, J. M. Herrmann and U. A. Ernst, Finite-size effects of avalanche dynamics, Physical Review E 66 (2002) 066137. Also in: Virtual Journal of Biological Physics Research 5(1) (2003).

13. A. Etzold, C. W. Eurich and H. Schwegler, Tuning properties of noisy cells with application to orientation selectivity in rat visual cortex, Neurocomputing 52–54 (2003) 497–503.

14. M. H. Herzog, U. A. Ernst, A. Etzold and C. W. Eurich, Local interactions in neural networks explain global effects in Gestalt processing and masking, Neural Computation 15 (2003) 2091–2113.

15. A. Thiel, H. Schwegler and C. W. Eurich, Complex dynamics is abolished in delayed recurrent systems with distributed feedback times, Complexity 8 (2003) 102–108.

16. C. W. Eurich and E. L. Schulzke, Irregular connectivity in neural layers yields temporally stable activity patterns, Neurocomputing (in press).

Refereed Conference Contributions

17. C. W. Eurich, S. D. Wilke and H. Schwegler, Neural representation of multi-dimensional stimuli, in: S. A. Solla, T. K. Leen and K.-R. Müller (eds), Advances in Neural Information Processing Systems 12, MIT Press, Cambridge MA (2000) 115–121.

18. C. W. Eurich, An estimation-theoretic framework for the presentation of multiple stimuli, in: S. Becker, S. Thrun and K. Obermayer (eds), Advances in Neural Information Processing Systems 15, MIT Press, Cambridge MA (2003) 293–300.

The articles on binary coding and the Simulander model, which were part of my dissertation, are included in the list because they fit very well into the development of the field of neural coding as presented in Part I of this thesis.
Acknowledgments

First of all, I would like to thank my longstanding advisor, Prof. Dr. Helmut
Schwegler, for numerous discussions about physics and the rest of our constructed
world.
Many thanks also to the current and former members of our institute, in particular to Prof. Dr. Klaus Pawelzik and Dr. Udo Ernst. Udo ("Yahoudi") has
become a close friend. There are so many people to mention here that I would
surely forget about some. Therefore, instead, here's a pool of letters from which
everyone can assemble his or her name (an example is given by the underlined
letters):

AAaaaaaBBbbbbbCCcccccDDdddddEEeeeeeFFfffff
GGgggggHHhhhhhIIiiiiiJJjjjjjKKkkkkkLLlllllMMmm
mmmNNnnnnnOOoooooPPpppppQQqqqqqRRrrrrrSS
sssssTTtttttUUuuuuuVVvvvvvWWwwwwwXXxxxxx
YYyyyyyZZzzzzz

Furthermore, I express my gratitude to my collaborators: Prof. Dr. Josef


Ammermuller, Prof. Dr. Jack Cowan, Dr. Hubert Dinse, Dr. Winrich Freiwald,
Dr. Michael Herrmann, Dr. Michael Herzog, Prof. Dr. Michael Mackey, Prof. Dr.
John Milton, Prof. Dr. Dr. Gerhard Roth and many other colleagues with whom
I have been in contact during the last years.
And, last but absolutely not least, I thank my wife Lilo for her love and her patience with a husband who spends too much time at conferences and who is sometimes mentally absent even when he is at home.

II-5