
Animal Eyes

Oxford Animal Biology Series


Titles
Energy for Animal Life
R. McNeill Alexander
Animal Eyes
M. F. Land, D-E. Nilsson
Animal Locomotion
Andrew A. Biewener
Animal Architecture
Mike Hansell
Animal Osmoregulation
Timothy J. Bradley
Animal Eyes, Second Edition
M. F. Land, D-E. Nilsson

The Oxford Animal Biology Series publishes attractive supplementary textbooks in comparative animal biology for students and professional researchers in the biological sciences, adopting a lively, integrated approach. The
series has two distinguishing features: first, book topics address common
themes that transcend taxonomy, and are illustrated with examples from
throughout the animal kingdom; and second, chapter contents are chosen to
match existing and proposed courses and syllabuses, carefully taking into
account the depth of coverage required. Further reading sections, consisting
mainly of review articles and books, guide the reader into the more detailed
research literature. The Series is international in scope, both in terms of the
species used as examples and in the references to scientific work.
Animal Eyes
Second Edition

Michael F. Land
Professor of Neurobiology, University of Sussex, UK

AND

Dan-Eric Nilsson
Professor of Zoology, University of Lund, Sweden

Great Clarendon Street, Oxford OX2 6DP
Oxford University Press is a department of the University of Oxford.
It furthers the University’s objective of excellence in research, scholarship,
and education by publishing worldwide in
Oxford New York
Auckland Cape Town Dar es Salaam Hong Kong Karachi
Kuala Lumpur Madrid Melbourne Mexico City Nairobi
New Delhi Shanghai Taipei Toronto
With offices in
Argentina Austria Brazil Chile Czech Republic France Greece
Guatemala Hungary Italy Japan Poland Portugal Singapore
South Korea Switzerland Thailand Turkey Ukraine Vietnam
Oxford is a registered trade mark of Oxford University Press
in the UK and in certain other countries
Published in the United States
by Oxford University Press Inc., New York
© Michael F. Land and Dan-Eric Nilsson 2012
The moral rights of the authors have been asserted
Database right Oxford University Press (maker)
First edition published 2002
Second edition published 2012
All rights reserved. No part of this publication may be reproduced,
stored in a retrieval system, or transmitted, in any form or by any means,
without the prior permission in writing of Oxford University Press,
or as expressly permitted by law, or under terms agreed with the appropriate
reprographics rights organization. Enquiries concerning reproduction
outside the scope of the above should be sent to the Rights Department,
Oxford University Press, at the address above
You must not circulate this book in any other binding or cover
and you must impose the same condition on any acquirer
British Library Cataloguing in Publication Data
Data available
Library of Congress Cataloging in Publication Data
Library of Congress Control Number: 2011944054
Typeset by SPI Publisher Services, Pondicherry, India
Printed and bound by
CPI Group (UK) Ltd, Croydon, CR0 4YY

ISBN 978–0–19–958113–9 (Hbk)
ISBN 978–0–19–958114–6 (Pbk)

1 3 5 7 9 10 8 6 4 2
For Rosemary and Maria
Preface to the second edition

The study of animal eyes is cumulative; old knowledge is rarely superseded, but often added to. This is reflected in the way we have approached this new
edition. The chapter layout of the first edition worked well, and we have
retained it, along with much of the original text. We have concentrated on
advances that have been made in the last decade, and on remedying some
of the omissions of the first edition. Chapter 1 has been rewritten, because,
thanks largely to the application of molecular genetic techniques, major
advances have been made in our understanding of early phylogeny and the
molecular origins of photoreception. In Chapter 2 there are new sections
on spectral sensitivity and circular polarization; in Chapter 4 new studies
on the eyes of cubomedusans are outlined. In Chapter 6 newly described
photonic reflecting structures are introduced, and Chapter 9 has new mate-
rial on the head movements of birds and insect larvae. Other chapters have
been similarly updated, but not radically changed. And the last line of the
original Preface should now read 80 years rather than 60.

Preface to the first edition

‘The eye’ to most people means an eye like ours, a single-chambered cam-
era-like structure with a retina in place of the film, or the CCD array. Most
know, too, that insects have compound eyes with many lenses, but how
many people can answer the question: does the insect see the multitude of
images beloved of Hollywood horror films, or a single image similar to our
own? We use this example to point out that, even to most biologists, eyes
remote from our own are poorly understood and come in only one or two
varieties. This hugely underestimates the diversity of eye types: there are at
least ten quite distinct ways that eyes form images. Some of these, such as
pin-holes and lenses, are familiar, but others are more exotic. These include
concave mirrors, and arrays of lenses, telescopes, and corner reflectors. Some
have been known about for centuries (the first demonstration of the inverted
image in a mammalian eye was in 1619) but a number are discoveries of the
last few decades and have yet to find their way into textbooks of either biol-
ogy or optics. Some of these eye-types have counterparts in optical technol-
ogy, but by no means all. Some are still finding applications: for example, the
mirror-based optical system of the compound eyes of shrimps and lobsters
has recently found a use as the optical basis of wide angle X-ray lenses.
It is our aim in this book to provide a comprehensive account of all known
types of eye. We take the diversity of optical mechanisms as a framework,
but many other aspects of the structure and function of eyes are also dealt
with. Visual ecology—the ways that eyes are specifically adapted to the
lifestyles of the animals that bear them—is another important theme. As
humans we tend to think of vision as a general-purpose sense, supplying
us with any kind of information we require. For most other animals this is
not so. Predators and prey, for example, have different visual requirements:
foxes and rabbits have different eyes and different visual systems, as have
dragonflies and mosquitoes. Similarly, a sedentary clam lives in a different
world from a flying insect, and the optical requirements are quite different.


Behind the diversity of eye types is the majesty of the evolutionary proc-
ess, and this is where we begin the book. The origins of eyes, and the ways
in which they reached their present highly developed states have posed an
intriguing series of problems from Darwin onwards. The debates still rumble
on, particularly about the early origins of eyes before the great Cambrian radi-
ation event gave us most of the eye types we see in animals today. Chapter 1
addresses these questions, and provides a context in which eyes can be seen
as different solutions to problems that are, in many respects, similar.
As well as diversity, we are concerned with the ‘design philosophy’ of
eyes. What are the physical constraints on the way an eye performs its func-
tions, and how are these addressed by the different types of eye? To answer
this it is necessary first to have some information about the properties of
light that are of importance for vision, and this we provide in Chapter 2.
We are then able to explore the ways that eyes achieve important aspects of
their function, such as good spatial resolution, and (especially for animals
that live in dim environments) adequate sensitivity. This is the purpose of
Chapter 3, which is devoted to the question ‘What makes a good eye?’. This
in turn provides a background for assessing the capabilities of the panoply
of different eye types, presented in the subsequent five chapters. The ninth
and final chapter examines another aspect of the way eyes are used: their
movements. Eyes sample the world not only in space but in time, and the
movements that they make are as important a part of the process of extract-
ing information as are the optical systems that provide them with spatial
resolution.
The book is not aimed at any one readership. It will be of value to under-
graduates in Biology and Neuroscience programmes, and to anyone engaged
in the study of vision at the post-graduate level. Students and practitioners
of ophthalmology and optometry will find it interesting as a background to
the study of the human eye, and optical physicists and engineers will find
that nature has come up with solutions that they will not have met before.
Aware that many biologists will want the story without too much math-
ematical detail, we have used Boxes for some of the more complex sections.
Equally, however, serious students will want to make use of some of these
sections as they contain important ‘how to do it’ information. For exam-
ple, Box 5.1 shows how to find the focal length and image position in any
optical system of reasonable complexity. We have not provided a complete
bibliography justifying every statement in the book, but given references to
reviews where the original literature can be found, and to key works, with
a bias towards the more recent.
We would like readers to enjoy the book, and share in our enthusiasm for
the beauty, intricacy and the logic of animal eyes that has kept us intrigued,
and busy, for a total of 60 years.
Acknowledgements

We would like to thank our editors at Oxford University Press. Cathy Kennedy, who edited the first edition, provided encouragement and a
great deal of very useful feedback. Helen Eaton, Muhammad Ridwaan and
Vijayasankar Natesan have steered us through the editorial and production
problems of the second edition with efficiency and skill. We thank all those
people who have kindly read and commented upon sections of the book,
and who found mistakes in the first edition which we have endeavoured
to correct. We are particularly grateful to all those who have contributed
original illustrations. Their names appear in the figure captions.
We thank the following publishers for allowing us to use copyright fig-
ures: Academic Press, Cambridge University Press, Elsevier Ltd., Kluwer
Academic Publishers, Macmillan Magazines Ltd., Sigma Xi, and Springer-
Verlag. The authors are referred to in the figure captions, and full citations
appear in the reference list.

Contents

1 The origin of vision 1


The first eyes 1
Evolution of the essential components of visual systems 6
Evolution of visual function 10
The diversity of eye design 18
Summary 21

2 Light and vision 23


The nature of light 24
Light intensity 26
Contrast 31
Wavelength and colour 32
Polarization 39
Summary 44

3 What makes a good eye? 46


Fundamentals 46
Resolution 49
Sensitivity 62
Conclusions 69
Summary 70

4 Aquatic eyes: the evolution of the lens 72


Evolutionary origins 72
Pinhole eyes: giant clams and Nautilus 73
Under-focused lens eyes 76


Forming a sharp image 79


Eyes of fish and cephalopods 83
Matching eye to environment 86
Eyes with non-spherical lenses 91
Summary 93

5 Lens eyes on land 94


A new optical surface 94
Basic optics of cornea and lens 95
Variations on the lens/cornea theme
in land vertebrates 104
Amphibious eyes 117
Invertebrate eyes with corneal optics 119
Summary 128

6 Mirrors in animals 130


Mirrors in eyes 130
The physical optics of animal reflectors 143
Uses of photonic reflectors in structures other
than eyes 150
Summary 155

7 Apposition compound eyes 157


Origins 157
A little history: apposition and neural superposition 160
Basic optics 164
Ecological variations in apposition design 176
The anomalous eyes of strepsipterans and trilobites 188
Summary 189

8 Superposition eyes 191


Introduction—the nature of superposition imagery 191
Refracting superposition 194
Superposition and afocal apposition: the eyes
of butterflies 204
Reflecting superposition 208
Parabolic superposition 212
Summary 213

9 Movements of the eyes 215


Sampling the world in space and time 215
How humans acquire visual information 218

Are other animals like us? 223


Insect flight behaviour seen as eye movement 226
Translational saccades: head-bobbing in birds 227
Why not let the eyes wander? Some consequences
of image motion 228
Exceptions: rotational scanning by one-dimensional
retinae 235
Summary 241

Principal symbols used in the text 243

References 244

Index 265
1 The origin of vision

Like no other sensory organs, eyes can provide instantaneous and detailed
information about the environment both close up and far away. It is not hard
to appreciate the enormous competitive value of a good pair of eyes. In this
respect, it may seem a little surprising that within the animal kingdom, only
a handful of the more than 30 different phyla have evolved sophisticated
eyes. But this does not imply that most animals are blind. In nearly every phy-
lum there are representatives with simple ocelli, and the few groups that
have evolved sophisticated eyes, such as vertebrates, arthropods and mol-
luscs, have radiated and diversified to dominate the planet. Evolution has
exploited nearly every optical principle known to physics, and produced
eyes of many different designs, from camera-type eyes, to compound eyes,
and eyes that use mirrors. Having a single pair of eyes located on the head
is a common solution, but not the only one (Fig. 1.1). Ragworms typically
have two pairs of eyes on the head and spiders have four pairs. In addition
to the paired eyes, some lizards and most arthropods have median eyes,
and there are numerous examples of eyes on other parts of the body. Clams
and mussels have eyes on the mantle edge, chitons have eyes sprinkled all
over their back, and starfish have eyes at the tips of their arms. Some jel-
lyfish, having neither a head nor a brain, still have remarkably sophisticated
eyes. How did this bewildering diversity evolve?

Fig. 1.1 Eye diversity extends far beyond the familiar pattern of vertebrate eyes. The photos illustrate a range of imaging eyes in different organisms. (a) A cuttlefish, Sepia apama, with its characteristic pupil, (b) a nocturnal bee, Megalopta genalis, with compound eyes and median ocelli, (c) a ragworm, Platynereis dumerilii, with two pairs of pigment cup eyes, (d) lens-less compound eyes on the mantle edge of an arc clam, Barbatia cancellaria, (e) two lens eyes and two pairs of pigment pit eyes on a sensory club from a box jellyfish, Chiropsella bronzie, (f) a dinoflagellate (a unicellular green alga, Erythropsidinium sp.) with an elaborate eye-spot consisting of a lens and screening pigment. (a, d, e) Photo Dan-E. Nilsson; (b) courtesy Eric Warrant, (c) Detlev Arendt, (f) Mona Hoppenrath.

The first eyes

Although life has existed for several billion years, animals advanced enough to make use of good vision have only been around for little more than half a billion years. If we trace eyes back through the fossil record, the oldest
ones date back to the early Cambrian, about 530 million years ago. Animals
from the early Cambrian were not of the same species that exist today, but
most of them can be placed into the modern phyla, and many were fully
equipped with eyes as far as can be told from the fossils. Only 20 million
years earlier, towards the end of the Precambrian, the forms of life seem
to have been much simpler, without any large mobile animals that could
benefit from good vision. It is even hard to identify any animals at all in
the fossil remains of Precambrian organisms. But something remarkable
seems to have happened at the interface between the Precambrian and the
Cambrian. Within less than 5 million years, a rich fauna of macroscopic
animals evolved, and many of them had large eyes. This important evolu-
tionary event is known as the Cambrian explosion.
The Cambrian fossils have been gradually deciphered since 1909 when
the palaeontologist Charles Walcott started to analyse the 515-million-year-
old rock of the Burgess shale in Canada. What Walcott found was the well-
preserved remains of a marine fauna, presumably from shallow water.
The fauna was dominated by arthropods of many different types, but also
contained representatives of numerous other phyla. Some of the interpreta-
tions of the Burgess shale fossils indicated the appearance of many enig-
matic types of animals that did not seem to belong to any of the phyla
remaining today. Subsequent and more careful analyses have demonstrated that nearly all of the Cambrian animals were indeed early representatives of modern animal phyla (Conway-Morris 1998). After the discovery of the Burgess shale fauna, even better preserved and earlier Cambrian faunas have been found. These faunas are particularly interesting because they offer a close look at animal life very soon after the Cambrian explosion (Fig. 1.2). Amazingly, these earlier faunas were not very different from those preserved in the Burgess shale. It thus seems that essentially modern types of animals, many with large eyes, evolved within a few million years from ancestors that for some reason were not large or rigid enough to leave many fossil traces.

Fig. 1.2 Evidence of the first real eyes comes from Cambrian fossils. The first faunas with large mobile animals appear to have originated at the onset of the Cambrian era, some 540 million years ago, during a rapid evolutionary event called the Cambrian explosion. In the course of a few million years, bilaterally symmetric, macroscopic, and mobile animals evolved from ancestors that were too small or soft-bodied to be preserved as fossils. The product of the Cambrian explosion was not just a few species, but an entire fauna (a) including nearly all the animal phyla we know today. The invention of visually-guided predation may have been the trigger for this unsurpassed evolutionary event. Among the very first Cambrian animals, numerous species had prominent eyes. An early example is the arthropod Xandarella (b) from Chengjiang, China. Unfortunately, fossils generally reveal very little, if anything, about the type or structure of these ancient eyes. (a) from Briggs (1991), originally adapted from Conway-Morris and Whittington (1985), (b) from Xianguang and Bergström (1997), with the authors’ permission.
In the early Cambrian faunas, trilobites and other arthropods were
abundant, and they viewed the world through compound eyes that at
least superficially resemble those of modern arthropods. In trilobite fos-
sils it is often possible to see the facets of the compound eyes, but in
other Cambrian fossils, the eyes are just visible as dark imprints with
no detailed structures preserved. Figure 1.2 shows a Cambrian fossil and
reconstructed creatures with prominent eyes. From the abundance of eye-
bearing species, and from the sizes of their eyes, it seems that vision was
no less important in the early Cambrian than it is today. The fossils clearly
tell us that, from their first appearance, macroscopic mobile animals were
equipped with eyes.
Even vertebrate eyes can be traced back to the early Cambrian. Among
the very first vertebrates were animals that resemble the larvae of mod-
ern jawless fishes, and these had rather prominent eyes (Fig. 1.3). Later,
the Ordovician conodont animals were another group of early vertebrates
(Fig. 1.3) that had such large eyes that they must have had better vision
than most other animals of their time. Eye evolution is thus largely a story
about what happened in the early Cambrian, and thereafter it was only
the colonization of land that led to further significant evolutionary events
in vision.

Fig. 1.3 The eyes of vertebrates can be traced back to the early Cambrian. One of the early chordates, Haikouichthys, resembles the larva of a modern jawless fish, and like these larvae it had a pair of eyes that may have been early versions of vertebrate eyes. Some 30 million years later, another group of visually-guided chordates, the conodont animals, were abundant. Many of the conodonts, such as Clydagnathus, had unusually large eyes, suggesting that they relied heavily on vision. Haikouichthys is reconstructed after Shu et al. (2003), Clydagnathus redrawn after Purnell (1995). Scale bar: 10 mm.

As we have seen, the fossil evidence suggests that a large range of visually-guided animals evolved in a very short time during the early Cambrian. Did their eyes evolve from scratch at that time, or might their ancestors already have had some precursor of real eyes? The fossils do not give a clear answer here, but they provide some interesting clues. Fossils formed towards the end of the Precambrian reveal tracks made in the seafloor, and these increase in abundance as the Cambrian explosion approaches. From the size and appearance of these tracks it seems that they were made by small (a few millimetres in length) worm-like animals slowly crawling on the surface of the seafloor. The fact that the actual animals are not fossilized may indicate that they were soft-bodied creatures. If they belong to the
ancestors of the early Cambrian faunas they would have had to increase
considerably in size at the initial phase of the Cambrian explosion. It is also
possible that the ancestors of modern animals were small planktonic organ-
isms, similar to the ciliated larvae of many marine invertebrates. Skeletons
and rigid protection seem to have evolved along with the larger bodies. The
first evidence of such structures comes at the very end of the Precambrian.
Fossils of small shells, or fragments of shells, typically in the size range
of 2–10 mm, known as the ‘small shelly fauna’, very closely preceded the
Cambrian explosion.
It is tempting to speculate that a few species of late Precambrian ani-
mals became large enough to acquire good spatial vision and improved
mobility, and became the first visually-guided predators. Such an ecologi-
cal invention would have put a tremendous selection pressure on a large
part of the fauna, and forced other species to evolve protective measures
such as body armour or shells, avoiding exposure by deep burrowing, or
developing good vision and mobility themselves. These possibilities indeed
reflect the key characteristics of the early Cambrian faunas, supporting the
idea that the introduction of visually-guided predation altered much of the
ecological system and fuelled the Cambrian explosion. Because both vision and speed of locomotion can improve by a general increase in size, visually-guided predation offers an understanding of the very sudden appear-
ance of macroscopic animals. In this scenario the small shelly fauna may
have been the very first stages of an arms race between predators and prey,
where rigid structures for protection and mobility evolved along with the
first real eyes.
Even though fossils can tell us much about the evolution of animals
with eyes, they provide very limited information on the
actual eyes. Details of the eye structure of early animals are only known
from arthropods with a hard eye surface. In other early animals, eyes are
at best preserved as stained spots on the head. Unfortunately, details of the
internal structure, which are crucial for understanding how eyes evolved,
are generally not preserved in fossils. To find out what kinds of photorecep-
tion were present before the Cambrian we need to seek evidence outside
the fossil record.

Evolution of the essential components of visual systems

A number of different light-harvesting and light-sensing molecules are
used by plants and bacteria, but in animals, there are just a few light-
sensing systems, and only one of these is used for vision. The opsin pro-
teins, binding a light-sensitive vitamin A derivative, are responsible for
vision in all animals from jellyfish to man, and this molecular system is
unique to animals. Opsins belong to the huge G-protein coupled recep-
tor family, in which the majority respond to chemicals rather than light.
Plants, green algae, and fungi also have G-protein coupled receptors, but
none of them are light receptors. To complicate matters further, bacteria
and green algae have other types of opsins that are structurally similar
to animal opsins, but they are not G-protein coupled, and their amino
acid sequence is not obviously related to the G-protein coupled receptor
proteins.
It is thus very likely that the last common ancestor of all animals (possi-
bly excluding sponges) had a light-sensing opsin, signalling via a G-protein
cascade. In other words, the G-protein coupled receptor proteins evolved
before the evolution of animals, and the modification of a protein of this
family to become a light receptor, happened very early in animal evolu-
tion. The efficient signalling cascade of G-protein coupled receptor proteins
made the opsins an excellent light receptor for eyes, and even if other light
sensing systems, such as cryptochromes, still exist among animals, they
have only been employed for a few non-visual tasks. It has been speculated that animal opsins originated as a modification of a chemoreceptor protein, and at the molecular level, the two sensory modalities are indeed
almost identical.
The genetic control of eye development, including especially the Pax6
control gene, also displays obvious similarities across the animal kingdom,
and this has been taken as evidence that the last common ancestor of all
animals already had eyes (Gehring and Ikeo 1999). But there are good
reasons for being cautious here because the similarities may date back to
the first expression of animal opsins, before these became part of any eye.
Developmental genetic networks are generally known to be conservative,
whereas the structures and functions they control may be subject to dra-
matic modifications or innovations. It is also possible, and perhaps likely,
that a genetic control network originally used only for local expression of
an opsin, has repeatedly been co-opted for use in new places of the nerv-
ous system or epidermis, whenever light sensitivity has been called for. The
question of the origin of eyes is often seen as a simple alternative between
a single common origin for all eyes, or numerous cases of independent evo-
lution. But as we shall see, the early evolution of eyes involves a far more
complex sequence of events than this debate implies.
An important cue for understanding eye evolution is the distinction
between different types of photoreceptor cells. Salvini-Plawen and Mayr
(1977) noted a remarkable diversity of photoreceptor cell morphology across
the animal kingdom, and suggested that photoreceptors evolved independ-
ently numerous times. This is, of course, strongly contradicted by the uni-
versal occurrence of homologous opsins across the animal kingdom, but
later findings have demonstrated that some differences in photoreceptor
morphology are linked to early diversifications of photoreceptor physiol-
ogy. In vertebrate rods and cones the visual pigment is contained in heavily
folded membranes of modified cilia, whereas in the visual photoreceptor
cells of most invertebrates, such as insects and molluscs, the visual pigment
is contained in rhabdoms, consisting of microvilli extending directly from
the cell body. These different ways of extending the membrane area are
associated with different classes of opsin, different transduction cascades,
and different pathways for regeneration of opsin. The structural differences
between the two types of photoreceptor are not entirely consistent through-
out all animal groups, but at the molecular level, ciliary and rhabdomeric
photoreceptors are fundamentally distinct.
Originally, vertebrates were thought to have the ciliary type of photore-
ceptor cell, whereas invertebrates were believed to have the rhabdomeric
type. Now we know that both types of molecular machinery are represented
in both vertebrates and invertebrates. In the retina of vertebrate eyes, with
their ciliated rod and cone cells, there are also ganglion cells that express
rhabdomeric-type opsins, and are light sensitive on their own (Peirson et al.
2009). Similarly, photoreceptor cells with ciliary-type opsins serve the circa-
dian clock in the brain of some invertebrates that have rhabdomeric recep-
tors in the lateral eyes (Arendt et al. 2004). These findings suggest that the
common ancestor of bilaterian animals had both ciliary and rhabdomeric
photoreceptor cells, or at least that they had both types of opsin with their
distinct transduction cascades and regeneration mechanisms. Later on, one
or the other version of the light-detecting molecular machinery found its
way into eyes of various design in the different animal groups.
With respect to opsin class and transduction cascade, there is even
a third type of photoreceptor in eyes, known primarily from the pecu-
liar mantle eyes of bivalves (Gomez and Nasi 2000). The opsins of this
class, Go-opsins, are closely related to photoisomerase enzymes that are
involved in regeneration of visual pigment in some systems. Jellyfish
have a somewhat different set of opsins, and the primitive placozoans
and sponges have no opsins at all, reflecting the fact that these groups
are early branches on the animal phylogenetic tree. Because cnidarians
and bilaterians split more than 600 million years ago, well before the
Cambrian era, there must have been Precambrian animals with photore-
ceptors. Molecular evidence suggests that a significant diversification of
opsins and transduction cascades preceded the Cambrian explosion by at
least 100 million years.
Even though the eyes of vertebrates, arthropods, squid, and jellyfish
develop in very different ways from different tissues, and are largely the
result of convergent evolution, they share deep homologies in the molec-
ular components that they are composed of. This implies that ancient
molecular modules, serving gene expression or physiological function,
have repeatedly been recruited and co-opted for similar purposes in par-
allel lines of eye evolution in different branches of the animal phylogenetic
tree. The opportunistic use of genetic control networks makes evolution-
ary reconstructions extremely tenuous, but there are still a few things
we can claim with reasonable certainty. All vertebrate eyes clearly date
back to a basically similar type of eye in a common vertebrate or chordate
ancestor. Less certain but still likely is that the paired cephalic eyes of
most invertebrates stem from rhabdomeric ocelli in an ancient common
ancestor. But extracephalic eyes, such as the mantle eyes of clams and
mussels, and the tentacular eyes of fanworms, must have evolved sepa-
rately by recruitment of pre-existing molecular modules for light detection
and neural signalling.
Information on the evolution of eyes can be obtained also from the pro-
teins that make up animal lenses, the crystallins. To make efficient and
clear lenses, these proteins must be suitable for mass expression and dense
packing, but they should not easily aggregate into lumps and thus cause
cataracts. Different animal groups, such as vertebrates, cephalopods, and
jellyfish have used different proteins for this purpose. Interestingly, the
crystallins generally appear to have been recruited from proteins with
other functions such as enzymes or chaperones involved in protein assem-
bly (Piatigorsky 2007). In vertebrate lenses, one of the crystallins is a heat-
shock protein that apparently had properties suitable for making stable and
transparent lenses.
It is clear from the above that visual dioptric systems have evolved
independently many times. With equal certainty we can also say that
opsin-based photoreception evolved once at an early stage of animal evo-
lution. But what happened in between these late and early stages of eye
evolution? The events that placed opsin into the first primitive eyes are
as yet not very well understood. Even very simple lens-less eyes, such
as those of flatworms, contain pigment cells and photoreceptor cells, and
they connect to the brain through second-order neurons. The assembly
of eyes as organs with several different cell types must have involved
duplications and specializations of cell types, in addition to recruitment
of cells from neighbouring tissues. Presumably these processes started in
Precambrian ancestors of Bilateria and Cnidaria, and have occurred sev-
eral times to produce different lines of cephalic and extracephalic eyes in
different animals.
The small pigmented ocelli of the planula larvae of box jellyfish (Fig.
1.4a) offer a rare example of a ‘visual system’ so primitive that it consists of
only a single cell type. Apart from making a pigment cup, these cells also
have both sensory microvilli and motile cilia, but no neural connections.
It is believed that they are self-contained sensory-motor units, steering the
larva as it swims.
Among the dinoflagellates there is another remarkable example of a
primitive ‘visual system’, though probably one that has little bearing on ani-
mal eye evolution. Dinoflagellates are unicellular green algae that generally
get their energy from photosynthesis. But in some species, the chloroplast
(photosynthetic organelle) has lost its photosynthetic function and become
modified into something that resembles an eye. In the single cell there is
one or sometimes several lenses and a retina-like structure (Greuet, 1982;
Fig. 1.1f). The species that have such structures are known to feed on other
species of dinoflagellates that have retained their photosynthesis, and it is
believed that they use their ‘visual system’ for prey detection. However,
despite these functional similarities, there is little else to link them with the
eyes of metazoan animals. Green algae have opsins for detecting light, but as mentioned earlier, these opsins are fundamentally different from animal
opsins and are not believed to share a common origin (Larusso et al. 2008).
The occurrence of eyes in dinoflagellates demonstrates how functional con-
straints can make similar structures and functions evolve independently in
different organisms.

Evolution of visual function


Human eyes, and those of countless other animals, are sophisticated
structures with precisely tuned optics. How could structures with so
many coordinated and ‘perfect’ features arise gradually from just
light-sensitive cells? To answer this question we have to trace evolution
backwards to simpler and simpler conditions that are still useful. The
difficult part here is to know what we mean by ‘useful’. Ignoring the
implausibility of a small flatworm being able to carry a pair of human
eyes, the worm would have neither the brains nor the locomotory abilities
to benefit from the superior acuity. The lens-less pit eyes of the flatworm
are likely to serve the worm much better than other more sophisticated
eyes. Generally, sensory organs are intimately associated with and tuned
to the behavioural repertoire, and for each species, eye performance can
be expected to closely match the requirements of its visually-guided
behaviours.
Eye evolution, like evolution in general, is driven by selection for
maximal fitness. Eyes or visual performance have an impact on fitness
only through the benefits of visually-guided behaviours. Eye evolution
is thus a consequence of the evolution of visually-guided behaviours. As
far as we know, all eyes, from simple to sophisticated, are well matched
to the tasks they serve. But the tasks clearly differ. Even though it may
sound uncontroversial to claim that eyes have evolved ‘from poor to per-
fect’, this is in fact incorrect. Most eyes, from the simplest to the most
advanced, are probably close to optimal for the biology of the species. A
more correct statement would then be that ‘eyes have evolved from sup-
porting simple tasks optimally to supporting ever more complex tasks
optimally’. Because old tasks often remain useful after new ones have
been added, advanced visual systems have a long history of accumula-
tion of new and gradually more demanding tasks. These arguments sug-
gest that eye evolution can be understood only by first reconstructing the
evolution of visual tasks.
Some visually-guided behaviours require acute vision, whereas others
work fine with low-resolution vision. Simple behaviours such as phototaxis
can, in principle, be performed with a single directional photoreceptor (see
Chapter 9), and even non-directional photoreception is known to control
behaviour. As behaviour became more sophisticated, so too did the infor-
mation needed to control it. Conversely, the amount of spatial information
required is an excellent basis for assessing the evolution of visually-guided
behaviours. There are a number of important steps along this evolutionary
path, and these lead to a classification of light-controlled behaviours into
four basic classes (Nilsson 2009):

1. Behaviours controlled by non-directional monitoring of ambient light. Examples are the control of circadian rhythms, light-avoidance responses for
protection against harmful levels of short wavelength light, shadow
responses to avoid predation, and surface detection for burrowing ani-
mals.
2. Behaviours based on directional light sensitivity. Examples are phototaxis,
control of body posture (optical statocysts), and alarm responses for
approaching predators.
3. Visual tasks based on low spatial resolution. Examples are detection of self-
motion, object avoidance responses (anti-collision), habitat selection, and
orientation to coarse landmarks or major celestial objects such as the sun
or moon.
4. Visual tasks based on high spatial resolution. Examples are detection and
pursuit of prey, predator detection and evasion, mate detection and eval-
uation, orientation to fine landmarks, visual communication, and recog-
nition of individuals.

Strictly speaking, only classes 3 and 4 are visual tasks. Class 2 is generally
not considered as true vision, and class 1 is typically referred to as non-
visual photoreception. The evolution of visual systems can be assumed to
start by the evolution of class 1 tasks and then progress through gradually
higher classes of behavioural tasks. The four classes of behavioural tasks
are each associated with its own typical requirement for sensory informa-
tion, and these requirements correspond to different stages of eye evolution
(Figs. 1.4 and 1.5).
Class 1 tasks are served by non-directional photoreceptors, and require
one or a few photoreceptor cells, but no other structures. The fundamen-
tal requirements are light sensitive molecules and a signalling mechanism.
For monitoring the diurnal light cycle or measuring the water depth, the
response can be slow, and only rather large intensity differences need be
discriminated. Non-directional photoreceptors are known from various ani-
mals, but because they do not require any structural specializations, they
are morphologically inconspicuous.

Fig. 1.4 Semi-schematic drawings of ocelli and simple eyes. (a) and (b) are examples of directional
photoreceptors involved in phototactic responses, and (c) and (d) are simple eyes that produce
a crude image. (a) The single cell ocellus of a box jellyfish larva is both a photoreceptor and
an effector. The motile cilium is used to steer the larva, which has some 20 ocelli, but lacks a
nervous system. (b) The ocellus of a polychaete larva is formed by two cells: one pigment cell
and one photoreceptor cell. (c) The pigment cup eye of a planarian flatworm has a number of
photoreceptor cells lining the inside of a cup that is formed by a single pigment cell. The eye is
of inverse design, with photoreceptor axons emerging towards the light. (d) The eye of a box
jellyfish has a weak lens surrounded by photoreceptor cells that also form the pigment cup. The
eye also has a cornea and a pigmented iris. In all four cases (a–d), the photopigment is located in
stacks of membrane, here formed by microvilli. The function of microvilli and membrane discs in
photoreceptor cells is to concentrate large amounts of photopigment to obtain sufficient sensitivity
to light. The dark screening pigment can be located in the photoreceptor cells, as in (a) and (d),
or confined to specialized pigment cells, as in (b) and (c). Modified from Nilsson (2009).

Class 2 behaviours require the addition of screening pigment or other structures partly shading the photoreceptor cells such that they become
directionally sensitive. For phototaxis, a 180° field of sensitivity is sufficient,
and body movements will generate information about the direction towards
brighter or dimmer parts of the environment. Spatial information is thus
obtained sequentially by scanning, and this requires faster responses of the photoreceptor cells. The intensity differences between different directions are small compared to the diurnal light cycle, which calls for an improved ability to discriminate intensity changes. Some flatworms, nematodes, and numerous planktonic invertebrate larvae have eye spots (ocelli) built for class 2 tasks (Fig. 1.4).

Fig. 1.5 Sequential evolution of the four classes of sensory tasks controlled by light, and corresponding key innovations in eye design. For each higher class of sensory tasks the amount of spatial information increases. Spatial information is introduced by the transition from non-directional photoreception to directional photoreception. The step from directional photoreception to low-resolution spatial vision results in a substantial increase in spatial information, and a massive increase follows the introduction of high-resolution vision. Because optimal response speed, dynamic range, and other receptor properties differ between non-directional tasks and directional tasks, it is unlikely that classes 1 and 2 will be performed by the same type of cell. Evolution of class 2 tasks is thus likely to involve duplication and subsequent specialization of photoreceptors with and without associated screening pigment. By introducing stacking of the photoreceptor membrane, the working range of class 2 tasks can be extended into dim light (Nilsson 2009). For continued evolution to class 3 tasks, membrane stacking becomes necessary even in bright light. Low-resolution spatial vision can functionally replace directional photoreception, and there would be no need to further duplicate the sensory structures such as in the transition from class 1 to class 2 tasks. In animals that have evolved high-resolution spatial vision (class 4), the low-resolution tasks (class 3), such as detection of self-motion and obstacle avoidance, remain important visual functions, and they can easily be performed by a high-resolution eye. To collect enough photons for spatial vision with higher resolution, the evolution of lenses, or other focusing optics, is necessary. Lenses, membrane stacking, and photoreceptors associated with screening pigment are thus three key innovations that have each made possible the evolution of a new class of sensory tasks. Modified from Nilsson (2009).

Class 3 tasks require true spatial vision, which implies that differ-
ent photoreceptors simultaneously monitor different directions. For
detecting self-motion, tracking coarse landmarks, or avoiding collisions,
the spatial resolution need not be very fine. Resolution in the order of
5–30° is sufficient for many such tasks. To collect enough photons from the
reduced angles seen by each receptor, stacking of photoreceptor mem-
brane must evolve to allow class 3 tasks (Nilsson 2009). Pigment pits or
cups with numerous photoreceptor cells inside are typical eyes that can
support class 3 but not class 4 visual tasks. Many flatworms, ragworms,
molluscs, and larval arthropods have eyes supporting tasks up to class
3 (Fig. 1.4).
Class 4 tasks are more demanding, and differ from class 3 mainly by
requiring much higher spatial resolution. To detect prey and predators, res-
olution cannot be much worse than a few degrees, and in many animals it
is a small fraction of a degree (1/60th of a degree in humans). The evolu-
tion of lenses or other focusing optical arrangements becomes a necessity
for discrimination of small angles. The camera-type eyes of vertebrates and
cephalopods, and the compound eyes of insects and crustaceans are multi-
purpose eyes that support a large number of tasks of class 4 as well as of
class 3 (Land and Nilsson, 2006).
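
To put these resolution numbers in perspective, here is a small back-of-envelope sketch in Python (the prey size and viewing distance are our own illustrative assumptions, not values from the text). It computes the angle a small object subtends and compares it with a resolution of a few degrees and with the human limit of about 1/60 of a degree.

    import math

    def subtended_angle_deg(size_m, distance_m):
        # Angle (in degrees) subtended by an object of a given size at a given distance.
        return math.degrees(2.0 * math.atan(size_m / (2.0 * distance_m)))

    # Illustrative (assumed) numbers: a 20 mm prey item viewed from 1 m away.
    angle = subtended_angle_deg(0.02, 1.0)
    print(f"subtended angle: {angle:.2f} degrees")                      # about 1.15 degrees
    print(f"resolvable elements at 1/60 degree: {angle / (1 / 60):.0f}")  # about 69

Roughly speaking, an eye resolving only 5–30° would not resolve such a target, one resolving a few degrees would barely do so, and an eye with human acuity would see it in considerable detail, which is the gap between class 3 and class 4 tasks described above.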
The evolutionary sequence of visually-guided behaviours and their pre-
cursors corresponds to different stages of eye evolution (Fig. 1.5). The four
corresponding stages of eye evolution are: (1) unshielded photoreceptor cells,
(2) pigmented ocelli with broadly directional photoreceptors, (3) pigment pit
or cup eyes with coarse spatial vision, and (4), eyes with focusing optics
and high spatial resolution. Because each higher class of tasks requires faster
receptors with narrower angular sensitivity and higher contrast sensitivity,
more photons need to be collected per unit time. This is why stacking of
photoreceptor membrane is introduced in most ocelli serving class 2 tasks,
and without exception in eyes serving class 3 and 4 tasks. Eyes built for
class 4 tasks also need imaging optics to collect enough photons. Improved
sensitivity is probably the primary reason for the evolution of lenses. Key
steps in eye evolution, such as the number of receptors, presence of screen-
ing pigment, membrane stacking, and focusing optics are functional adap-
tations that are directly related to the evolution of the different classes of
behavioural tasks (Fig. 1.5).
The common theme throughout this entire sequence is that the acquisi-
tion of spatial information is continuously increasing. For most behavioural
tasks there is a minimum amount of spatial resolution needed to perform
the task to a degree that increases fitness. In most cases the perform-
ance of the task can be further improved by access to more information.
The amount of spatial information may eventually exceed the minimum requirement for another task, which will then take over and keep up selec-
tion for even higher spatial resolution. This way, eye evolution will go on
for as long as the fitness is increased by adding or improving visually-
guided behaviours.
It may still seem that the evolution of an eye could be difficult because
entirely new structures and principles will have to be ‘invented’ along
the way, and it has frequently been argued that a great deal of good
fortune would be required for eyes to evolve. But the truth is that eyes can evolve gradually, without sudden changes, from the simplest form of light sensitivity to a perfectly focused eye with all its intricacies. The only external factor needed is an ongoing selection favouring better spatial resolution. Using a theoretical model, Nilsson and Pelger (1994) demonstrated that a light-sensitive patch on the skin can evolve into a typical vertebrate or octopus eye by numerous minute modifications, where each modification provides a small improvement of performance (Fig. 1.6). Using typical values of variation within a population it was even possible to calculate that the complete sequence from a light-sensitive patch of cells (stage 2 above) to a sharply focused camera-type eye could be completed in less than half a million generations, or the same number of years in a small invertebrate. This calculation would have provided a cure for the famous ‘cold shudder’ that Darwin felt when he thought about the refined form and function of the human eye. Of more importance here is that it allows an understanding of how eyes could evolve so rapidly during the Cambrian explosion.

Fig. 1.6 A patch of light-sensitive epithelium can be gradually turned into a perfectly focused camera-type eye if there is a continuous selection for improved spatial resolution. A theoretical model based on conservative assumptions about selection pressure and the amount of variation in natural populations suggests that the whole sequence can be accomplished amazingly fast, in less than 400 000 generations (the figure shows a total of 364 000 generations at 0.005% change per generation). The number of generations between each of the consecutive intermediates is indicated in the figure. The starting point is a flat piece of epithelium with an outer protective layer, an intermediate layer of receptor cells, and a bottom layer of pigment cells. The first half of the sequence is the formation of a pigment cup eye. When this principle cannot be improved any further, a lens gradually evolves. Modified from Nilsson and Pelger (1994).
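
The arithmetic behind this estimate is simple compounding. The short Python sketch below is our own illustration (not code from the model); it takes the per-generation change of 0.005% and the total of about 364 000 generations shown in Fig. 1.6, and shows both the enormous overall change these small steps accumulate to and how the generation count can be recovered by inverting the same relation.

    import math

    step = 1.00005           # 0.005 per cent change per generation (from Fig. 1.6)
    generations = 364_000    # approximate total shown in Fig. 1.6

    # Compounding the small per-generation change over the whole sequence:
    overall_change = step ** generations
    print(f"overall change: {overall_change:.3g}-fold")          # roughly 8e+07

    # Inverting the relation: generations needed for that overall change.
    needed = math.log(overall_change) / math.log(step)
    print(f"generations required: {needed:,.0f}")                # about 364,000

The point is simply that a very small improvement per generation, sustained by continuous selection, compounds into an enormous cumulative transformation well within half a million generations.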
The reconstructed course of eye evolution can be confirmed by the
numerous intermediates that are represented in animal species living
today. But this also leads to the question why there appear to be so many intermediates still in existence, when eye evolution can be potentially so fast. The key to this paradox is probably that the apparent intermediates are really end products in the sense that evolution has proceeded to a point where there is no further selection for improved spatial resolution. We have to keep in mind that more visual information is only useful if the animal can improve its behaviour on the basis of it. Species with different lifestyles can exploit visual information to different degrees. There is, of course, also a cost involved in making and maintaining eyes, and it is this final balance that determines how much visual performance each species will benefit from.

Fig. 1.7 Diagram of the evolutionary relationships of major animal groups. For clarity, a number of minor phyla have been excluded. Grey fields indicate branches that have at least ocelli with directional photoreception. Photoreceptors with associated screening pigment are thus not present in Placozoa, Ctenophora (comb jellies), or Porifera (sponges). Sophisticated visual systems with multi-purpose eyes (filled circles) have evolved in four groups only: spiders, insects/crustaceans, cephalopods, and vertebrates (Land and Nilsson 2006). Interestingly, there is at least one of these groups in each of the three major branches of bilaterian animals. The presence of imaging eyes, and their main optical types are indicated by letters a–h, which refer to the schematic diagrams below (a–h), modified from Land (1981a). Intermediates between (a) and (c) are indicated by a+ in the phylogenetic tree. The schematic diagrams of eye types are arranged in three columns, after the mechanisms used to form images: shadow (a, b), refracting devices (c–f), and reflectors (g, h). The upper four eyes are single-chambered eyes, and the lower four are compound eyes. The receptor cells are represented by striped ovals. The eye types are: (a) pigment cup eye, (b) compound pigment pit eye, (c) aquatic camera-type eye, (d) terrestrial camera-type eye, (e) apposition compound eye, (f) refracting superposition compound eye, (g) concave-mirror eye, (h) reflecting superposition compound eye. The different eye types and their function are explained in detail in Chapters 4–8. Because single-chambered and compound eyes are fundamentally different solutions to spatial vision, the distribution of eyes suggests that Urbilateria possessed pigmented photoreceptor cells, but no imaging eyes. Molecular similarities between cnidarian and bilaterian eyes suggest that non-directional photoreceptors predated Urbilateria. Note that many of the eyes indicated in the phylogenetic tree are extracephalic, and clearly not homologous to paired cephalic eyes. Molecular and embryological cues also suggest that vertebrate eyes have a complex evolutionary history, distinct from that of cephalic eyes in other bilaterians.

The diversity of eye design


Eyes have shaped the evolution of animals and their ecological roles
since the Cambrian explosion. The result is an enormous range of eye
types using pin-holes, lenses, mirrors, and scanning devices in various
combinations to acquire information about the surrounding world (Fig.
1.7). The reasons for this diversity are not immediately obvious, espe-
cially the reason for different solutions to the same problem. There are
two fundamentally different ways by which spatial vision can evolve
from a directional photoreceptor: either more photoreceptors are added
to exploit the same pigment shield, or the visual organ is multiplied in
its entirety. The two alternatives lead to simple (single-chambered) and
compound eyes respectively. In Fig. 1.8, the primitive eye of a clam illus-
trates a case which would probably turn into a compound eye if vision
were to improve any further. During the early stages of eye evolution
there is little difference between the efficiency of the two solutions—
single-chambered or compound. It is only later, when visual performance
is maximized for a constrained eye size, that the simple eye will turn out
to be a better solution (the relative merits of the different types of eye
will be explained in Chapter 3).
Comparing the embryological origin of animal eyes reveals that they
derive from different tissues in different animal groups (Fig. 1.9). The ver-
tebrate retina develops as an eye cup formed by frontal parts of the brain,
but the lens is formed by the skin. In the nearly identical eyes of octopus
and squid the lens and the retina both develop from the skin. A conse-
quence of the origin of the vertebrate eye is that it has an inverted ret-
ina, where axons emerge towards the inside of the eye, and not towards
the back, which would seem to be a more natural solution—this is why
humans and other vertebrates have a blind spot in the eyes, where the
The origin of vision 19

Fig. 1.8 A group (a) of pigment-pit eyes from the clam, Anadara notabilis, illustrate the point of
evolutionary branching of compound and single-chambered eyes. A section through two of the pit
eyes (b) reveals a simple organization. Some of the epithelial cells in the pit are filled with screening
pigment and others are receptors with microvillar plumes projecting into the cavity of the pit. The
fact that there are many such eyes grouped together and that the pits are deep and narrow indicate
that further evolution towards improved spatial vision would in this case lead to a compound eye.
The closely-related ark clams do indeed have proper compound eyes. From Nilsson (1994).

optic nerve exits from the eye. The peculiar and unique features of verte-
brate eyes indicate that they have an evolutionary history that is equally
unique (Lamb et al. 2007). The sea squirts, which belong to a sister group
of the vertebrates, have no eyes as adults, but their larvae have a small
median ocellus with photoreceptor cells of the same ciliated type as in
vertebrate eyes. It is not unlikely that the sophisticated eyes of vertebrates
20 Animal Eyes

(a) (b) Lens-like


Optic body
nerve

Vitreous
Lens
body

Retina Retina

(c) (d) Cuticular


lens
Crystalline
cone

Vitreous
Lens
body

Retina
Retina

Ciliary photoreceptor cell Microvillar photoreceptor cell

Fig. 1.9 The composition of eyes in (a) vertebrates, (b) polychaete fan worms, (c) octopus and
squid, (d) insects and crustaceans. Although there are only few ways of making functional eyes, the
tissues and morphological components that are recruited vary greatly between animal phyla. The
vertebrate retina (a) is produced by the neural epithelium of the brain (light shading) and the lens
is formed by an invagination of the epidermal epithelium. In squid and octopus the entire eye is
formed as a double epidermal cup, with the bottom of the inner cup being the retina and its fused
opening producing the lens. The receptor cells are also fundamentally different in that they contain
the visual pigment in either modified cilia (ciliary receptors) or microvilli (rhabdomeric receptors),
and in the biochemistry of their transduction machinery. A consequence of the ontogenetic origin
of vertebrate eyes is that the receptor axons project towards the vitreous body and have to emerge
from the eye through a hole in the retina. The compound eyes of fan worms (b) and arthropods (d)
have likewise recruited different types of visual receptor cells, but more importantly they are formed
on different parts of the body: as paired structures on the first segment of the head in arthropods
and as multiple structures on the feeding tentacles of fan worms. These facts taken together clearly
indicate that at least these four cases evolved spatial vision independently, and arrived at two
different solutions—the camera eye and the compound eye. Modified from Nilsson (1996).

originated from a condition similar to that still remaining in sea squirt
larvae today.
Mollusc eyes come in many different forms, and both cephalopods
and gastropods display a range from lens-less eyes to eyes with excellent
lenses. In both cases the retina is of the everse type, with axons emerging
from the back of the retina, but in cephalopods the lenses grow from an
epithelium dividing the lens into two halves, whereas the lens-producing
epithelium is peripheral in gastropods. Arthropod eyes have either an
everse retina, as in insect compound eyes, or an inverse retina, as in the
nauplius eyes of crustaceans. All these differences suggest that spatial
vision has evolved independently numerous times in different animal
groups.
Eyes can be less than a tenth of a millimetre across, as in some water fleas,
and close to 300 mm in giant squid and the ichthyosaurs (extinct marine
reptiles). This enormous range of sizes, designs, and placement of eyes
reflects the versatility of vision, and it gives a clear indication that eyes
can evolve easily, recruiting whatever tissue is at hand, and become
superbly optimized for the lifestyle of the bearer. In the remaining chap-
ters of this book we work our way through the fundamentals of eye
design and explain the function and rationale of all the different types
of eye.

Summary
1. Most of the types of eye that we recognize today arose in a brief period
during the Cambrian, about 530 million years ago. The development of
better eyes coincided with increases in body size, speed, and armour, as
visually-guided predation became a common way of life.
2. Opsin-based light sensitivity evolved in a common ancestor of all animals.
Transduction mechanisms diverged early, and in the common ancestor of
bilaterian animals there were at least two different types. These molecu-
lar mechanisms and corresponding genetic control networks have been
modified and co-opted to form the wide range of cephalic and extra-
cephalic eyes of modern animals.
3. Eye evolution is driven by the evolution of visual tasks. Early animals
could only perform a few simple behavioural tasks based on light
sensitivity, but over time, some animal groups acquired a growing list of
ever more complex visual tasks. This development has gone from non-
directional light sensitivity, via directional photoreceptors combined with
body movements, to coarse spatial vision, and then to finer spatial vision
with focusing optics.
4. The evolution of advanced eyes need not have taken huge periods of
geological time. It has been estimated that evolution from a patch of pho-
tosensitive tissue to an eye resembling that of a fish could have taken as
little as half a million years.
5. Eye structures responsible for spatial vision in vertebrates, cephalopods,
and arthropods have evolved independently, which is now reflected in
different embryological development of eyes in these groups, and in
the fundamental distinction between single-chambered eyes and com-
pound eyes.
2 Light and vision

Eyes are devices for extracting useful information from the light reflected or
emitted from objects in the world around us. Most of this book is devoted
to a detailed account of how this is done, but before embarking on that saga
we need briefly to explore some of the properties of light that are important
for vision.
Light usually travels in straight lines with little loss in air or clear
water. For an advanced eye with good resolution this means that the geo-
metrical features of an object can be represented in the pattern on the
retina, and also that the relative locations of different objects in the world
can be determined. Light thus supplies most of the information needed to
work out both where an object is and what it is. In addition to geometric
information, light provides other cues to the identity of objects. Light
interacts with matter in many different ways. It can be reflected, trans-
mitted, absorbed, or scattered, and all these transformations depend on
wavelength. This in turn means that most light is coloured, when seen
by an animal with the facility to detect these spectral differences. Some
animals, though not ourselves, make use of another physical property of
light—polarization—to work out the direction of the sun, and to detect
reflecting surfaces.
In this chapter we consider first what sort of energy light is, and what
cues it provides for vision; second, how much light is available in the envi-
ronment and how this is measured; and finally how the photoreceptors in
the eye capture light and signal its more subtle properties, such as wave-
length distribution and polarization structure.

The nature of light


It has never been easy to understand how light works. Isaac Newton
(1642–1727) thought that light was a stream of ‘corpuscles’ whose trajecto-
ries are what we think of as rays. Rays (lines that are straight in a vacuum
but which can be reflected by mirrors and refracted by prisms and lenses)
provide a very simple and convenient way of describing how images are
formed, so long as the structures that bend the rays are large compared
with the wavelength of light, which is about 0.5 μm. Some phenomena,
however, are not well described by ray optics. Interference effects, such as
the colours of bubbles and oil films, and ‘Newton’s rings’ (the circular pat-
terns made when a convex lens contacts a plane glass block) can only be
understood in terms of the interactions of waves. Newton’s contemporary
Christiaan Huygens first formulated the wave theory in a form that could
also take into account the ray-like behaviour of light (Fig. 2.1a). However, the
authority of Newton was such, even beyond the grave, that the wave theory
made little progress in the eighteenth century. It was Thomas Young’s dem-
onstration in the early 1800s that light passing through two narrow slits
produces an interference pattern that revived the wave theory and gave
it experimental solidity (Fig. 2.1c). The interference of sea waves passing
through gaps in breakwaters provides a helpful analogy for many of the
phenomena that involve the interference of light waves.
During the nineteenth century wave theory advanced greatly. Augustin
Fresnel refined Huygens’ idea that an advancing wavefront can be thought
of as made up of a series of emitters of new wavelets, by incorporating the
principle of interference (Fig. 2.1b). This was particularly helpful in explain-
ing diffraction (the behaviour of light near edges and apertures) which is
important in understanding the limitations of lenses. The question of what
constituted the waves that make up light was addressed by James Clerk
Maxwell, who showed that they could be described as transversely oscil-
lating electrical and magnetic fields that propagated at a finite speed (Fig.
2.1d). Later, in 1888, Heinrich Hertz confirmed Maxwell’s idea of the exist-
ence of electromagnetic radiation by producing and measuring it. We now
accept that light occupies a small waveband (wavelengths between 0.4 and
0.8 × 10⁻⁶ m) in an electromagnetic spectrum that extends from γ-rays (10⁻¹³ m)
up to radio waves with wavelengths of many kilometres.
There were still phenomena that wave theory could not explain. One in
particular, the photoelectric effect in which light causes electrons to be emit-
ted from metal surfaces, seemed to require a theory in which light inter-
acted with matter as discrete packets of energy. This led Albert Einstein to
propose, in 1905, a quantum theory of light which incorporated elements
from both wave and corpuscle ideas. Light, according to this scheme, consisted
[Fig. 2.1 panel labels: (a) angles i1, i2 and refractive indices n1, n2; (d) electric field, magnetic field, wavelength (0.5 μm), propagation at 3 × 10⁸ m s⁻¹]
Fig. 2.1 Aspects of the physical nature of light. (a) Refraction can be thought of as the bending
of a ray (thick line), or as the slowing down of a series of wavefronts (thin lines) as they enter
a higher refractive index medium. This slowing bends the wavefront, resulting in Snell’s law
(n1sini1 = n2sini2). Rays are perpendicular to wavefronts. (b) Wavefronts passing through an
aperture. In the Huygens–Fresnel scheme each point on the wavefront is an emitter of secondary
wavelets. These add in the direction of travel and cancel in other directions so that the plane
wavefront is retained, but at the edges of the aperture light spreads laterally, resulting in
diffraction. (c) Interference produced by Young’s slits. Light from a single source passing through
two narrow slits interferes to produce a pattern where wavefronts are in phase and add (dotted
lines) or are out of phase and cancel. This results in a pattern of light and dark stripes. (d)
Propagation of light according to Maxwell. Light consists of oscillating electric and magnetic fields
perpendicular to each other. Each element (photon) has a fixed electric field (E-vector) direction,
and a fixed wavelength, and propagates through space at a fixed velocity.

of massless particles whose energy was related to their vibration frequency
according to the expression E = hν, where h is Planck’s constant (which
has the magnificent value of 6.63 × 10⁻³⁴ Joule-seconds; Max Planck had
introduced the beginnings of quantum theory to explain black-body radiation
in 1900), and ν is the frequency of the radiation (for green light, about
6 × 10¹⁴ Hz). The minuteness of this quantity of energy, 4 × 10⁻¹⁹ Joules, can
be illustrated in mechanical terms; it is the amount of energy liberated by
dropping a mass of 40 pg (4 × 10⁻¹¹ g) from a height of 1 μm (10⁻⁶ m). The
detailed behaviour of photons remains deeply mysterious, even to physicists.
When they interact with matter, as, for example, when they are absorbed
by rhodopsin molecules, they behave as discrete packets of energy that can-
not be subdivided, but when travelling through space they can behave as
though they are divisible. In a famous repetition of Young’s slit experiment,
in which light levels were so low that no more than one photon could pos-
sibly have passed through the slits at any one time, a diffraction pattern
was formed beyond the slits that was the same as that formed at high light
levels. The implication has to be that single photons passed through both
slits, and interfered with themselves. This is not, on the face of it, consistent
with indivisibility of energy, and indeed that idea in its simplest form has
been abandoned. Modern ideas are couched in terms of the probabilities of
capturing a photon in a particular location, rather than its actual energy
distribution. Of the various gnomic utterances on this subject, one of the
best comes from W.L. Bragg, of X-ray diffraction fame: ‘Everything in the
future is a wave, everything in the past is a particle’. The reader who needs
to know more should consult a recent optics textbook such as Hecht (2001).
For the purposes of this book, however, we are mainly concerned with the
interactions of photons with matter, when they do behave as countable,
indivisible packets of energy, and we will not worry too much about the
intimate details of their behaviour in transit.
The ray, wave, and photon descriptions of light are not alternatives, and
at the end of the day the photon description has to subsume the other
two, just as the Huygens–Fresnel wave theory encompassed the earlier cor-
puscle-ray theory. However, in the same way that Newtonian mechanics
provides a much simpler and more compact way of dealing with ordinary
macroscopic events than the more complete theory of relativistic quantum
mechanics, so it is often more convenient to deal with light by the simpler
partial descriptions. So for our purposes image formation by lenses and
mirrors is adequately analysed by geometrical (ray) optics; wave optics are
needed to deal with the diffraction limit to the performance of lenses, the
behaviour of narrow waveguides such as photoreceptors, the behaviour of
multilayer mirrors and diffraction gratings, and wavelength and polariza-
tion properties of light; and the photon description is needed to explain
the way photoreceptor performance degrades at low light levels when the
number of photon ‘hits’ is inadequate to provide a good statistical sample
of the image.

Light intensity
The amount of light available for vision has important consequences for
what we are able to see: all aspects of vision degrade as light levels fall,
for reasons explained in Chapter 3. It also affects the evolution of eyes
for different light environments: nocturnal and deep-sea animals tend to
have particularly large eyes so that they can capture as many photons as
possible from the surroundings. On a bright day, the number of photons
reaching the earth’s surface within the visible range is about 10²⁰ per second
per square metre. This seems a very large number, given that photoreceptors
are capable of detecting single photons, but when one remembers
that the dimensions of a photoreceptor are measured in micrometres, and
its cross-sectional area in square micrometres rather than square metres,
a factor of 10¹² disappears straight away, and the numbers become more
manageable (Box 2.1). Bright moonlight is about a millionth as bright as
sunlight, and overcast starlight is about ten thousand times dimmer still
(Table 2.1). These extremes represent the total range over which human
vision is useable—an overall span of 10¹⁰. At the lower limit, when we
can just about see to move if thoroughly dark-adapted, the rate of photon
capture is very low indeed: about one per receptor per hour. Individual
photoreceptors are capable of giving a satisfactory signal over an intensity

Table 2.1 Luminance of a white card under various illumination conditions, given both as
luminance (cd.m⁻², from about 10⁴ down to 10⁻¹⁰) and as photon radiance at 555 nm
(photons.m⁻².sr⁻¹.s⁻¹, from about 10²⁰ down to 10⁴), for bright sunlight, overcast sunlight,
room light, street light, moon light, star light, and the absolute threshold of human vision.
A parallel scale gives the luminance of the sea surface as seen from depths of 0 to 1,000
metres in the clearest ocean water (attenuation coefficient 0.032 m⁻¹).
range of about 10⁵, so supplementary gain control mechanisms, including
iris mechanisms and pooling between receptors, are needed to extend the
working range in both directions.
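For readers who like to see the arithmetic, the short Python sketch below scales the surface photon flux quoted above down to a single receptor; the 4 μm² receptor cross-section is our own illustrative assumption, and losses in the eye's optics and in transduction are ignored.

# Rough photon catch of a single photoreceptor under different illuminations.
# Illustrative only: a 4 square-micrometre cross-section is assumed, and all
# optical and transduction losses are ignored.

SURFACE_FLUX = 1e20      # photons per second per square metre, bright sunlight
RECEPTOR_AREA = 4e-12    # 4 square micrometres, expressed in square metres

relative_levels = {
    "bright sunlight": 1.0,
    "bright moonlight": 1e-6,      # about a millionth of sunlight
    "overcast starlight": 1e-10,   # ten thousand times dimmer still
}

for condition, factor in relative_levels.items():
    catch = SURFACE_FLUX * factor * RECEPTOR_AREA
    print(f"{condition:18s} ~{catch:.0e} photons per receptor per second")

Even the starlight figure from this naive scaling is far above the one-photon-per-hour rate quoted above, because the pupil, the optics, and the receptors' quantum efficiency all reduce the catch considerably; the point of the sketch is the enormous range, not the absolute values.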
In any one scene, the intensity range is nothing like as great as this.
Even black velvet reflects about 2 per cent of the incident light, so the maxi-
mum brightness range the eye will encounter is a factor of 50. One of the
main jobs of the dark and light adaptation mechanisms of the retina is to
ensure that under any particular lighting conditions the working range of
the retina is limited to this 50-fold intensity range, so that the full process-
ing capacity of the retina is used to register the scene. As the illumination
level changes (at dawn or nightfall, for example) the entire range has to
shift to a new central intensity level. In this way we are able to see a fully
detailed scene in bright daylight, or in roomlight a thousand times dimmer,
with very little difference in the perceived result.
Even in the clearest ocean water, blue light (which is absorbed least)
is reduced by a factor of 10 for every 70 metres depth, meaning that the
human threshold is reached at a depth of 700 metres. Fish with much larger
pupils, and some crustaceans with superposition optics (Chapter 8) may be
able to see down to 800 or 900 metres, but below that there is effectively no
light from the sun. Many animals at this depth do have eyes, but the source
of light they use is either their own luminescence or that of other animals.
There is a surprising amount of bioluminescence at a depth of 1000 metres,
where animals glow or flash to communicate, to seek food, or as a surprise
defence. In murky coastal waters light is attenuated much more rapidly, so
that little is available after a few tens of metres. There is little biolumines-
cence either, and the turbidity reduces its value in communication.
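The factor-of-ten loss every 70 m is simply an exponential decay, and the sketch below (our own, not from the text) shows how rapidly the remaining light falls off with depth; the 10⁻¹⁰ criterion used for the depth limit is an assumption based on the overall range of human vision given earlier.

import math

# Downwelling blue light in the clearest ocean water falls by a factor of 10
# every 70 m (an attenuation coefficient of ln(10)/70, about 0.033 per metre).
METRES_PER_DECADE = 70.0

def fraction_remaining(depth_m: float) -> float:
    """Fraction of surface light left at a given depth."""
    return 10.0 ** (-depth_m / METRES_PER_DECADE)

for depth in (100, 300, 500, 700, 900):
    print(f"{depth:4d} m: {fraction_remaining(depth):.1e} of surface light")

# Assumed criterion: vision fails once light has dropped by a factor of 1e10.
limit = METRES_PER_DECADE * -math.log10(1e-10)
print(f"That factor is reached at about {limit:.0f} m")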

Box 2.1 Measuring intensity


Intensity itself is a rather vague term, and it is important to be clear
whether we are referring to a source that emits light (where the appropri-
ate terms are luminance or radiance) or a receiving surface (units are il-
luminance or irradiance). The reason that there are two sets of terms in
each case is that they measure light in quite different ways. The first (pho-
tometric) system is based on humans as detectors and has its roots in
comparisons made in the nineteenth century between different light
sources and a ‘standard candle’. This may seem archaic but it is still in
use; however, the standard is no longer a candle, but is now defined
in terms of the watt. Most calibrations come from ‘secondary’ standards,
usually carefully calibrated tungsten light bulbs. The second (radiomet-
ric) system is based on physical energy measurements (watts, photons
per second) that can be traced to universal constants. One important ad-
vantage of the radiometric system is that it can take differences in wave-
length into account; the luminance system compares all sources of light to
a subjective ‘white’, which may be adequate for some human studies, but
is of much less value when studying other animals with vision that is
spectrally quite different from ours.
Figure 2.2 illustrates a surface emitting light (left) and one receiving
light. Appropriate photometric and radiometric definitions are given
below the figure. Let us consider first a radiometric system based on
photon numbers. To specify the radiance of a surface we need the number
of photons emitted per unit area per second. Since these are being emitted
into the whole hemisphere in front of the surface, we also need to specify
the size of the cone over which the photons are being measured. The ap-
propriate unit here is the steradian or unit solid angle, which is defined as
a conical sector of a sphere in which the area of the spherical surface is
equal to the square of the radius. Since the area of a sphere is 4πr², it fol-
lows that 4π steradians make up a complete sphere, or put another way,
4π steradians surround a point. The angular width of a steradian is 65.5°
(not the same as a radian, the two-dimensional equivalent, which is 57.3°).
Thus the full units of radiance are photons per second per square metre
per steradian, or photons.s−1.m−2.sr−1. If we are concerned with monochro-
matic light, those units are sufficient, but if the light is spectrally complex
it is also necessary to specify how much of the spectrum is involved. This
can be done by breaking up the spectrum into units of wavelength (typi-
cally nanometres) and adding nm–1 to the preceding definition. The total
photon radiance is then given by the integral across the spectrum of all
the spectral elements. For the receiving surface the irradiance is the radiant
flux (photons per second) per unit area, so its units are photons.s–1.m–2.
A radiometric system using energy units (watts = joules per second)
is essentially the same as the photon number system, except that the
units are watts (or microwatts) rather than photons per second. The
conversion factor is Einstein’s equation, given earlier: E = hν = hc/λ,
where h is Planck’s constant, ν is frequency, c is the speed of light (3 × 10⁸
m.s⁻¹) and λ is wavelength (in metres). For photons in the yellow-green
region of the spectrum (555 nm) this works out as 3.6 × 10⁻¹⁹ joules. Thus
one watt of yellow-green light is equivalent to about 2.8 × 10¹⁸ photons
per second.
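Those two conversion figures follow directly from E = hc/λ, as the minimal Python sketch below shows; it is ours, included only as a check on the arithmetic.

# Photon energy and the watt-to-photon conversion at a chosen wavelength.
PLANCK = 6.63e-34        # Planck's constant, joule-seconds
LIGHT_SPEED = 3.0e8      # speed of light, metres per second

def photon_energy(wavelength_m: float) -> float:
    """Energy of one photon in joules, E = h*c/wavelength."""
    return PLANCK * LIGHT_SPEED / wavelength_m

wavelength = 555e-9                 # yellow-green light
energy = photon_energy(wavelength)  # about 3.6e-19 J
print(f"Photon energy at 555 nm: {energy:.2e} J")
print(f"One watt at 555 nm: {1.0 / energy:.2e} photons per second")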
Fig. 2.2 Radiometric and photometric units applicable to light emission and light reception.
For details see text.

The luminance system is slightly different because it relies on the candela
(cd) as a unit of luminous intensity (a standardized equivalent of the
old ‘candle power’) which incorporates a solid angle in its definition.
A point source of one candela emits 4π lumens (lm) of luminous flux, i.e.
one lumen into each steradian surrounding the point. Thus an extended
source (such as a TV screen) which has a luminance of L candelas per
square metre, produces a flux of L lumens per steradian per square metre
of emitting surface. In bright sunlight a white card has a luminance of
about 3 × 10⁴ cd.m⁻² (Table 2.1). The sun’s disc itself is brighter by a factor
of nearly 10⁵, about 1.6 × 10⁹ cd.m⁻². The illuminance of a receiving surface
has the units of lumens per square metre, which are also known as lux.
(There is a wonderful collection of archaic terms for intensity: stilbs, apos-
tilbs, phots, nits, foot-lamberts, etc. Here we stick to SI units as far as pos-
sible.) The lumen, like the watt, is a measure of power, and the two are
interconvertible. For light of the most visible wavelength in daylight
(555 nm) 1 watt is equivalent to 682 lumens. At the same wavelength
one lumen is equivalent to 4.09 × 10¹⁵ photons per second.
If we want to know how much light a surface receives from an emit-
ting surface at a distance d, we can do this by expanding the definition of
solid angle in the luminance units. Suppose the emitting surface pro-
duces L lm.m–2.sr–1. The solid angle involved here is the area of the receiv-
ing surface, divided by d2, the square of the radius of the sphere of which
the solid angle is a part (see above). Thus the definition of solid angle
contains within it the better known inverse square law. If the area of the
emitter is Ae and the receiver Ar, then the flux (F) at the receiving surface
will be:

F = LAe Ar /d² lumen

and the illuminance (I) will be:

I = LAe /d² lux
Radiance and irradiance are similarly related.
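To see the inverse square law at work in these expressions, here is a small worked example in Python; the particular luminance, areas, and distance are arbitrary values of our own choosing.

# Flux and illuminance received from an extended source, using
# F = L*Ae*Ar/d**2 (lumens) and I = L*Ae/d**2 (lux).

def flux_received(L, Ae, Ar, d):
    """Luminous flux (lumens) reaching a receiving surface of area Ar."""
    return L * Ae * Ar / d**2

def illuminance(L, Ae, d):
    """Illuminance (lux) at the receiving surface."""
    return L * Ae / d**2

L = 100.0    # source luminance, cd per square metre (i.e. lm per sr per m^2)
Ae = 0.5     # emitting area, square metres
Ar = 1e-4    # receiving area, square metres
d = 2.0      # separation, metres

print(f"Flux received:     {flux_received(L, Ae, Ar, d):.2e} lm")
print(f"Illuminance:       {illuminance(L, Ae, d):.2f} lux")
print(f"Illuminance at 2d: {illuminance(L, Ae, 2 * d):.2f} lux")  # a quarter as much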

Contrast
In general, we and other animals are not particularly interested in the abso-
lute luminance of objects, but in the differences in luminance that define
their parts. We need to be able to recognize objects for what they are under
a wide range of lighting conditions, so the absolute light level actually needs
to be removed, as the visual information is processed. The feature of objects
that we need to register is their contrast, which is a measure of the extent
to which one part differs from another. For two surfaces whose absolute
luminances are L1 and L2, the contrast (C) is given by:

C = ( L1 − L2 )/( L1 + L2 )

The beauty of contrast, defined in this way, is that it is a property of the
object we are looking at, not the lighting conditions. Suppose L1 and L2
are two surfaces that reflect different proportions of the light that reaches
them, so that the luminance of L1 is 2 units and L2 is 1 unit. The contrast,
from the equation, is 1/3. If the light shining on them increases a hun-
dred-fold, the contrast will be 100/300, which is still 1/3. Contrast varies
between 1, if one surface is completely dark, and 0 if the surfaces have the
same luminance.
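The intensity-independence of contrast is easy to confirm numerically; the few lines of Python below simply repeat the worked example above.

def contrast(l1: float, l2: float) -> float:
    """Contrast C = (L1 - L2)/(L1 + L2) between two luminances."""
    return (l1 - l2) / (l1 + l2)

print(contrast(2.0, 1.0))      # two surfaces in a 2:1 ratio -> 1/3
print(contrast(200.0, 100.0))  # a hundred-fold brighter illumination -> still 1/3
print(contrast(1.0, 0.0))      # one surface completely dark -> 1
print(contrast(1.0, 1.0))      # equal luminances -> 0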
A number of processes in the retinae of animals ensure that we see
contrast rather than raw luminance. Adaptation mechanisms of various
kinds mean that the signal passed to the brain is more or less independent
of overall light level. The centre-surround organization of ganglion cells
means that they signal differences in brightness between adjacent parts of
the image, rather than ‘spot’ values of intensity. There must still be a few
neurons that measure intensity to tell us whether it is night or day, but that
is not the main job of vision.

Wavelength and colour


The range of wavelengths (λ) visible to humans lies between 400 and 700 nm
(0.4–0.7 μm), with some sensitivity up to 800 nm. This range encompasses
the colours of the spectrum famously described by Newton as violet, indigo,
blue, green, yellow, orange, red in increasing order of wavelength (Fig. 2.3a).
Most people are unhappy with indigo as a distinct colour, but we can all
agree about the rest. For many other animals, including birds, fish, and
many arthropods, the spectrum extends into the ultraviolet range from 400
to about 320 nm (UVA). In one jumping spider it extends further into the
UVB range below 315 nm (Li et al. 2008) where it is important in courtship
displays. Thrips (Thysanoptera) also respond to UVB light (Mazza et al.
2010). Many flowers have striking markings in the ultraviolet (UVA) range
that we cannot see, and which are for the benefit of pollinating insects
(Figs. 2.3d and 2.4). Some fish and butterflies have visual pigments with
maximum sensitivities up to 60 nm further into the red than human visual
pigments, so they can see into what, for us, would be the near infra-red.
Beyond this, in the micrometre range of wavelengths, is the infra-red radia-
tion given off by hot bodies. Some snakes can make use of these wave-
lengths for a form of thermal imaging. This involves temperature-sensitive
nerve endings in special pits near the eyes, not the eyes themselves, and
visual pigments are not involved. Snakes, which are cold-blooded, use this
sense to detect and home in on warm-blooded prey such as rats and mice.
The only other animals known to have special detectors of infra-red radia-
tion are certain beetles (Melanophila), which approach forest fires from dis-
tances of many kilometres. Their larvae are dependent on wood killed by
fire (Schmitz and Bleckmann 1998).
Objects in the world around us reflect different wavelengths of light to
different extents, and so the wavelength distribution in the light from these
objects can provide a valuable clue to their identity (Fig. 2.3b). Leaves reflect
most light in the range 500–600 nm, blue flowers between 350–500 nm, ripe
[Fig. 2.3 axes and labels: (a) photons m⁻² s⁻¹ nm⁻¹ (×10¹⁶) against wavelength (nm), with the ultraviolet, visible, and infra-red regions marked; (b) reflectance against wavelength for Anemone (white), Hypericum (yellow), Pelargonium (red), Lobelia (blue), and a green leaf; (c) relative absorption (% max) against wavelength, with peaks at 420, 498, 534, and 564 nm; (d) relative sensitivity (% max) against wavelength, with peaks at 340, 450, and 540 nm]

Fig. 2.3 Environmental light and the photopigments that receive it. (a) The spectrum of light
reaching the earth’s atmosphere from the sun. Note that the visible spectrum occupies the region
where photons are most abundant. Data from Lythgoe (1979). (b) The spectral reflectances of
four flowers and a leaf. The flowers are illustrated in Plate 1. Note that the anthocyanin colours
of the red, yellow, and white flowers all act as long-wave passing cut-off filters. The same is true
for the blue, but it is the secondary peak at 450 nm that we see; the long-wave reflectance is too
far into the red. The leaf reflects a little in the green (it is the job of leaves to absorb not reflect)
and powerfully in the infra-red which is not visible to our eyes. Curves courtesy of Daniel Osorio.
(c) The absorption spectra of human rods (dotted) and the three cone types. It is possible to get
a rough idea of how much a particular colour would stimulate each cone type by seeing how
much overlap there is between the reflectance curve [e.g. (b)] and the absorption curves. Data
from Lythgoe (1979). (d) Spectral sensitivity curves of bee photoreceptors. These are essentially
similar to the human curves except that they were measured electrophysiologically, rather than by
absorption. Note that they extend into the ultraviolet, and are more evenly spaced than the human
cone curves. Data from Menzel (1979).

fruit 550–600 nm, and blood 600–650 nm. Being able to analyse in some
way the spectrum of light reaching the eye provides a useful tool for clas-
sifying different objects.
It is important to recognize that colour and wavelength are not the same.
Wavelengths themselves are colourless, and the colours we see are the sub-
jectively perceived result of our wavelength analysis. In the language of
philosophy, subjective colours (red, green, etc.) are qualia, whose nature we
cannot demonstrate to others. We may all agree that blood is red and leaves
Fig. 2.4 Ultraviolet markings on flowers and butterflies. Left : marsh marigolds (Caltha palustris)
seen by man as uniform yellow (above), have dark centres in the UV. Centre: the yellow butterfly
Phoebus rurina (male) has brilliant UV markings at the base of the forewings. Right : Bidens and
Coreopsis flowers in white and UV light. Redrawn from photographs in Eisner et al. (1969).

are green, but that does not guarantee that we all see the same colours with
our mind’s eye (ask yourself what colour a red–green colour-blind person
sees when you see orange). It merely says that we agree on their wave-
length distributions. It was a surprise in the 1960s, when it first became
possible to measure the sensitivity of single cones in the eye to different
wavelengths, to find that none of them was sensitive specifically to ‘red’
wavelengths (longer than 600 nm). The cone closest to ‘red’ is most sensi-
tive to a wavelength of about 564 nm, which corresponds to a spectral col-
our of yellowish-green (Fig. 2.3c). Red, of all colours, whose vividness is so
impressive, has no special receptor! Increasing redness is represented in the
cone signals as a decrease in the output of the 564-nm cones, and an even
greater decrease in the output of the 534-nm cones, so that for ‘true’ red
(wavelengths greater than 650 nm) only the 564-nm cones are active.
Confusion arises because we do use our colour names to describe spec-
tral wavelengths. Thus a wavelength of 580 nm is yellow. However, a yel-
low that looks identical to us is produced by an appropriate mixture of light of
620-nm (red) and 540-nm (green) wavelengths. The colour we see depends
on the relative stimulation of our three cone types, and in this case the pure
wavelength and the mixture give the same stimulation ratios. Perceived col-
our is thus not an accurate guide to spectral composition. There are also
many colours we see that do not have corresponding single spectral wave-
[Fig. 2.5 axes and labels: receptor response (% max) against wavelength (nm); (a) one pigment at intensities I = 1, 0.8, 0.6, 0.4; (b) short- and long-wavelength pigments, with response ratios A/B = C/D = 1.8]

Fig. 2.5 At least two visual pigments are needed for colour vision. (a) With only one pigment
the response of a receptor does not distinguish between intensity and wavelength. A 50 per cent
response could have been produced by any of the arrowed combinations. (b) With two different
visual pigments the ratio of stimulation (A/B or C/D) is specific to a particular wavelength, and
unaffected by intensity level.

lengths: purple, for example, is a mixture of long (red) and short (blue)
wavelengths. Colour science is an important but complex subject, and as
we are more concerned here with animals whose colour vision system is
not like our own, the interested reader should consult a text such as Mollon
and Sharpe (1983).
If we are uncertain about the relationship between perceived colour and
wavelength discrimination mechanisms in our own species, we should obvi-
ously be even more cautious in thinking about what sort of colour vision
other animals have. We can be certain, however, that a great many animals
do have it. The ability to discriminate lights with different wavelength distri-
butions depends on an animal possessing at least two visual pigments with
different spectral sensitivities. Then, as Fig. 2.5 shows, spectral colours of dif-
ferent wavelengths will give unique ratios of stimulation of the two pigments,
independent of the total stimulation; that is, the overall level of illumination.
With only one visual pigment, wavelength and intensity cannot be disen-
tangled from each other, and colour vision of any sort is impossible. (There
is an alternative, which is to have one visual pigment and several colour
filters. There are indeed such filters in some photoreceptors [the coloured oil
droplets in the retinae of birds and reptiles, for example] but their function
seems to be to ‘sharpen up’ the spectral sensitivities of the cone pigments,
rather than to create a colour vision system from a single photopigment.)
Thus if an animal possesses two or more visual pigments in its eye, there is a
prima facie case for thinking that it has colour vision of some kind. The great
majority of arthropods and vertebrates do indeed have at least two visual
pigments. Some have many more, the record being 15 in stomatopod crusta-
ceans (Marshall and Oberwinkler 1999). A selection is given in Table 2.2.
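The ratio-taking principle of Fig. 2.5 can be made concrete with a toy calculation; in the Python sketch below the Gaussian absorption curves and their peak wavelengths are invented stand-ins, not measured pigment spectra.

import math

def pigment_response(wavelength, peak, width=60.0):
    """Toy Gaussian absorption curve for a visual pigment (arbitrary units)."""
    return math.exp(-((wavelength - peak) / width) ** 2)

def response_ratio(wavelength, intensity, peak_a=450.0, peak_b=550.0):
    """Ratio of the signals in two pigments; the intensity cancels out."""
    a = intensity * pigment_response(wavelength, peak_a)
    b = intensity * pigment_response(wavelength, peak_b)
    return a / b

for wl in (480, 520, 560):
    dim = response_ratio(wl, intensity=1.0)
    bright = response_ratio(wl, intensity=100.0)
    print(f"{wl} nm: ratio {dim:.3f} in dim light, {bright:.3f} in bright light")

The ratio identifies the wavelength whatever the illumination, whereas a single pigment on its own, as in Fig. 2.5a, cannot separate the two.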
Table 2.2 Wavelengths of maximum sensitivity (λmax) for the photopigments of different
animals (nm)

Invertebrates

Annelids
Torrea candida 400, 560
Molluscs
Giant clam (Tridacna sp.) 360, 490, 540
Octopus (Octopus vulgaris) 540
Firefly squid (Watasenia) 470, 484, 500
Chelicerates
Horseshoe crab (Limulus polyphemus) 360, 530
Ctenid spider (Cupiennius salei ) 340, 480, 520
Jumping spider (Plexippus) 360, 520
Crustaceans
Water flea (Daphnia magna) 348, 434, 525, 608
Shore crab (Hemigrapsus sanguineus) 440, 508
Mantis shrimp (Gonodactylus, Odontodactylus) 12 types (312–710)
Mesopelagic shrimp (Systellaspis debilis) 410, 498
Isopod (Ligia exotica) 340, 470, 520
Insects
Dragonfly (Sympetrum rubicundum) 330, 430, 490, 520, 620
Cricket (Gryllus bimaculatus) 332, 445, 515
Backswimmer (Notonecta glauca) 345, 445, 560
Housefly (Musca domestica) 335, 355, 460, 490, 530
Honey bee ( Apis mellifera) 344, 436, 556
Desert ant (Cataglyphis bicolor) 350, 510
Hawkmoth (Deilephila elpenor) 345, 440, 520
Painted lady (Vanessa cardui ) 360, 470, 530
Swallowtail adult (Papilio xuthus) 360, 400, 440, 520, 600
Swallowtail larva (Papilio xuthus) 370, 448, 527

Vertebrates

Elasmobranchs LWS MWS SWS2 rod
Teleosts LWS MWS SWS2 SWS1 rod
Amphibians LWS SWS2 SWS1 rod
Reptiles LWS MWS SWS2 SWS1 rod
Birds LWS MWS SWS2 SWS1 rod
Most mammals LWS SWS1 rod
Marine mammals LWS rod
Primates LWS (2) SWS1 rod

Data for invertebrate visual pigments from Kelber (2006). Data for vertebrates from Bowmaker (2008).
Throughout the vertebrates there are five distinct families of visual
pigments: cone pigments LWS 495–570 (red/green), MWS (= RH2) 470–530
(green), SWS2 415–480 (blue), SWS1 355–450 (UV/violet); and rod pigment
(= RH1) 460–530 (blue-green). They are not all represented in the different
vertebrate groups (Table 2.2). LWS, MWS, SWS refer to long-, medium-, and
short-wavelength sensitive.
Amongst invertebrates the majority are dichromats or trichromats, often
with one pigment sensitive in the ultraviolet. Octopus, with a single pig-
ment, is one of the few well-documented cases of a truly colour-blind ani-
mal (Hanlon and Messenger 1996). In a few cases, for example the giant
clam Tridacna, the function of multiple pigments may simply be to improve
detection by sampling a wide spectrum, but in most other cases some kind
of colour vision may be inferred. In insects such as dragonflies and swal-
lowtail butterflies with five visual pigments, the colour vision system is
complex and sophisticated. Stomatopods (mantis shrimps) have 12 visual
pigments devoted to colour in the mid-band of each eye (see Plate 4), plus
another three or four in other regions of the eye (Cronin and Marshall
2004). They certainly have colour vision, but whether it works on the same
principle of ratio taking (Fig. 2.5), as in bees and humans, is not clear.
In vertebrates, the evidence from molecular sequencing of opsin pigments
indicates that the four main classes of cone opsin genes are present in
jawless fish such as lampreys, and so presumably they all evolved as early
as the late Cambrian, and thus the earliest fishes probably had tetrachro-
matic colour vision (Bowmaker 2008). These cone pigment families are all
present in the teleost fish, lizards, and birds, often with gene duplications
producing further pigments within each family. In other groups one or
more of the cone types has been lost. This is particularly true of mammals,
most of which are cone dichromats (Hunt et al. 2009). In humans, and
other Old World primates, the long wavelength pigment has duplicated to
give two pigments with maxima at 534 nm and 564 nm, providing trichro-
matic colour vision.
There are two parts to a visual pigment molecule: the chromophore and
the opsin protein to which it is bound. The chromophore is the part of
the rhodopsin molecule that receives the photon, and is one of four close
relatives of vitamin A. These have a long chain of alternating single and
double bonds, in which the bond between the 11th and 12th carbon atoms
reacts to the capture of a photon by changing from the cis to the trans con-
figuration (Fig. 2.6). This then initiates a series of biochemical events which
results in the closure of sodium channels and a hyperpolarization of the
cell (in vertebrates) or an opening of sodium or calcium channels (in most
invertebrates) and a consequent depolarization (reviews: vertebrates, Burns
Fig. 2.6 Left: diagrammatic section of a vertebrate rod, showing the discs of membrane that
contain the photoreceptor molecules. s, synapse; n, nucleus; e, ellipsoid (mitochondria). Upper
right: diagram of a rhodopsin molecule in the membrane, showing the seven helices that enclose
the chromophore group, retinal. C and N are the carbon and nitrogen termini of the opsin protein.
Lower right: the retinal molecule in its unstimulated (11-cis) and stimulated (all trans) form. The
light-sensitive double bond lies in the plane of the membrane. After Lythgoe (1979).

and Lamb 2004; invertebrates, Hardie 2006). The wavelength range that a
photopigment molecule responds to best depends partly on which of the
four chromophores is present, and partly on the structure of the protein
molecule (the opsin) that surrounds the chromophore (Fig. 2.6). It is now
known that a handful of amino acids in the region around the chromo-
phore can ‘tune’ it, so that it responds best to photons of higher or lower
energy. Thus, colour vision systems contain photopigments that possess
either different chromophores, or different opsins, or both. A good account
of the photochemistry of vision can be found in Rodieck (1998).
‘True’ colour vision is usually taken to mean that an animal can use
or learn to use not just the wavelengths that correspond to the peak sen-
sitivities (λmax) of the visual pigments, but also intermediate wavelengths
and wavelength combinations, by making use of stimulation ratios. Our col-
our vision is like this, and so is the colour vision of bees (Fig. 2.3d) which
can be trained to a wide variety of coloured stimuli. There are, however,
simpler systems, referred to as ‘wavelength-specific behaviours’ where the
outputs from the different photoreceptors seem to drive behaviour directly.
A good example of this is the cabbage-white butterfly Pieris brassicae where
the ‘open space’ escape reaction is driven by wavelengths around 370 nm in
the ultraviolet, the feeding reaction by wavelengths around 460 nm and also
600 nm (i.e. flower colours in the blue and red), and egg-laying by green
wavelengths around 540 nm (Scherer and Kolb 1987). These wavelengths
correspond closely with the peak sensitivities of butterfly visual pigments.
However, there are also indications that wavelength mixtures can be effec-
tive, and it seems likely that butterflies have some ‘true’ colour vision as
well as these wavelength-specific behaviours.

Polarization
Polarization is a property of light that we are unable to detect, but whose
use is commonplace in the animal kingdom. As indicated in Fig. 2.1d, the
electric field of a photon lies in a particular plane, and a photon will only
excite a photopigment molecule if the direction of this vibration, and the
orientation of the excitable double bond in the photopigment molecule (the
11-cis bond of the chromophore group) lie in the same plane. In the discs
that make up the photoreceptors of vertebrate rods and cones the photopig-
ment molecules lie in a plane perpendicular to the incoming light, but in
all possible orientations within that plane (Fig. 2.7b). That means that the
receptor cell has no means of knowing what the direction of the electric
vector of the photon it received might have been. In the microvillous recep-
tors of invertebrates the situation is different. A long tube, such as a micro-
villus, covered with a photopigment-bearing membrane has, just from its
geometry, a 2:1 preponderance of chromophore groups aligned parallel to its
long axis (Fig. 2.7c and d). This means that microvillous (or ‘rhabdomeric’)
receptors have a built-in capability to respond selectively to light polarized
in a particular plane. To make a system that can actually determine the
direction of polarization of the light reaching the eye requires two or more
groups of receptors with their microvilli aligned in different directions, and
a neural system that is able to work out the ratio of the responses. This is
a very similar problem to that of colour vision (Fig. 2.5) and no doubt the
neural solution is similar. Many insects and some crustaceans are capable
of this kind of analysis.
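As a highly idealized sketch of such ratio-taking, the Python lines below treat two receptor groups as perfect cos² analysers with microvilli at 0° and 90°; real rhabdomeric receptors have a polarization sensitivity closer to the 2:1 figure above, and more than two channels are needed to remove the remaining ambiguities, so this is an illustration of the principle only.

import math

def receptor_signal(intensity, e_vector_deg, microvillus_deg):
    """Idealized polarization analyser: response = I * cos^2(angle difference)."""
    delta = math.radians(e_vector_deg - microvillus_deg)
    return intensity * math.cos(delta) ** 2

def estimated_e_vector(r_0, r_90):
    """E-vector direction (0-90 degrees) recovered from the ratio of two channels."""
    return math.degrees(math.atan(math.sqrt(r_90 / r_0)))

for true_angle in (10, 30, 60):
    for intensity in (1.0, 50.0):
        r_0 = receptor_signal(intensity, true_angle, 0)
        r_90 = receptor_signal(intensity, true_angle, 90)
        print(f"true {true_angle:2d} deg at I={intensity:4.0f} -> "
              f"estimated {estimated_e_vector(r_0, r_90):.0f} deg")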
Light from the sun is unpolarized; that is to say, it contains photons
whose electric fields are in all possible orientations. However, two proc-
esses—scattering and reflection—distinguish between photons with differ-
ent electric field directions, and result in polarized light. Both processes are
useful to animals. As the sun’s rays pass through the atmosphere fine par-
ticles scatter out blue light, and also preferentially scatter light polarized in
a plane at right angles to the ray-path from the sun (Fig. 2.7a). This results
in a pattern of polarization in the sky which is determined by the sun’s
Fig. 2.7 Polarized light and its reception. (a) The pattern of polarization in skylight. The E-vector
directions are concentrically arranged around the line joining the sun to the ‘anti-sun’ 180° away.
The polarization is most intense at 90° from the sun (open arrows). When the sun is not visible,
an insect can infer its direction from small regions of the polarization pattern. (From Rossel 1989.)
(b) The random distribution of chromophore molecules in a rod disc means that a rod cannot
distinguish between photons with their E-vectors in different planes, when light reaches the disc
from its normal direction. Light from the side, however, is only absorbed if polarized parallel to the
disc membrane, because that is the orientation of the chromophore molecules (see Fig. 2.6). Open
arrows, light direction; filled arrows, E-vector direction. (c) The finger-like microvilli of invertebrate
rhabdoms have a preponderance of chromophore molecules aligned parallel to their long axes.
This is most easily demonstrated with the square section in (d). Here it is clear that there are twice
as many molecules aligned in the direction a–a than in either of the other orthogonal directions. In
some microvillous receptors specifically involved in polarized light reception the molecules are not
randomly arranged in the membrane, but specifically aligned with the microvillar axis.

position, and even if the sun is obscured by cloud the polarization pattern
largely persists. This pattern can thus be used instead of the sun as a navi-
gation aid, a role which has been thoroughly demonstrated in bees and ants
(Rossel 1989) and suspected in many other animals.
Non-metallic reflecting surfaces, water for example, also polarize light.
At one particular angle (Brewster’s angle; 53° for water) the polarization is
Fig. 2.8 Examples of natural polarization. Left: photographs of a water surface and a matt grey
card with a polaroid filter aligned parallel (above) and perpendicular (below) to the water surface.
Right: polarization as pseudo-colour. Three leaves (a matt sage leaf, a bay leaf, and a shiny
cotoneaster leaf) photographed through polaroid filters as in the left-hand photographs. Note
that the brightness order of the leaves reverses as the polaroid cuts out the reflection from the
shiny leaves.

complete, so that the reflected light is all polarized in one direction (par-
allel to the surface), and the transmitted light is all in the plane at right
angles. The glare from water can be a nuisance to us, so we often cut it out
with polaroid sun-glasses that selectively absorb light polarized parallel to
the water surface (Fig. 2.8). Some water bugs, however, make use of this
polarized reflection specifically for the purpose of finding water when their
particular pool dries up. Rudolf Schwind (1983) used a sheet of polaroid to
mimic a water surface, and found that water boatmen (Notonecta) would
crash land onto the polaroid with the same enthusiasm as they would dive
into a real water surface. Both bees and water bugs have special regions
of the eye containing receptors with microvilli aligned in particular direc-
tions, in a pattern apparently designed to extract the necessary polarization
information.
Polarization vision has also been implicated in communication. Both
cuttlefish (Mollusca) and mantis shrimps (Crustacea) have specific patterns
on conspicuous parts of the body that are only visible to a polarization-
sensitive viewing system. Cuttlefish are known to have polarization vision
(Talbot and Marshall 2010a), and mantis shrimps have been shown to be
able to learn polarization patterns (Marshall et al. 1999). In addition, man-
tis shrimps and some beetles have the ability to detect circularly polarized
light. Although circular polarization is not likely to be relevant to many
animals, recent studies on its role in mantis shrimp vision are so intrigu-
ing and compelling that we discuss it here in Box 2.2. The physics is a little
challenging.

Box 2.2 Circularly polarized light


As well as plane polarization, circular and elliptical polarization are also
of some importance in animals. Plane polarized light can be thought of as
being made up of two components perpendicular to each other (Fig. 2.9a
and b). In plane (linearly) polarized light these components are in phase
with each other, but under certain circumstances they can become out of
phase, and then their combined resultant no longer vibrates in a single
plane but traces out a spiral (Fig. 2.9c). If the components are exactly 90º
out of phase the spiral, seen end-on, is a circle, and the light is said to be
circularly polarized. If they are out of phase by less than 90º the spiral
traces out an ellipse, hence elliptical polarization.
In the cuticle of certain beetles the chitin molecules are parallel to each
other, and arranged in sheets with each layer slightly rotated relative to
the ones above and below it. This produces circular polarization in the
reflected light. This phenomenon had been regarded as mildly interest-
ing, but unimportant for vision. Recently, however, it has been shown that
the jewel scarab beetle (Chrysina gloriosa) not only differentially reflects
circularly polarized light, but that the animals respond differently in their
flight orientation to linearly and circularly polarized light (Brady and
Cummings 2010). It seems likely that C. gloriosa use circular polarization
to communicate with conspecifics while remaining cryptic to predators.
The mechanism of detection is not known.
The most impressive and best worked out case of circular polarization
occurs in stomatopod crustaceans (mantis shrimps) which both produce
and detect this form of light (Chiou et al. 2008). Mantis shrimps have a
band across the equator of each eye consisting of four rows of ommatidia
devoted to colour vision and two (rows 5 and 6) to polarization (see Plate
4 and Chapter 9). The latter have their main rhabdoms made of alternat-
ing bands of parallel microvilli at right angles (Fig. 2.10a), and these can
potentially resolve plane polarized light as indicated in Fig. 2.7c. How-
ever, above each main rhabdom, in the light path, is another short ellipti-
cal rhabdom (R8; Fig. 2.10a), which is birefringent (i.e. it has different
refractive indices for light whose E-vectors are in different planes), and

Fig. 2.9 Circular polarization. (a) Linearly polarized light with the E-vector in a single plane.
(b) As (a) but decomposed into two components at right angles. (c) As (b) but with a 90º
phase shift between components. The resultant is now no longer in a single plane but
becomes a spiral which appears circular from end-on. (d) Action of a quarter-wave plate.
Components of a linearly polarized wave from the left are retarded by different amounts and
so emerge with a 90º phase shift. The light becomes circularly polarized as in (c). Circularly
polarized light travelling right to left becomes linearly polarized. From Land (2008).

behaves as a quarter-wave plate (Fig. 2.9d). This has the effect of retarding
one of the components of circularly polarized light by a quarter-wave-
length relative to the other, which brings the two components back into
phase and gives rise to linearly polarized light whose plane can then be
resolved by the main rhabdom (R1-7) (Fig. 2.10b).
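The retardation argument can be followed with Jones vectors, as in the short sketch below; it assumes an ideal quarter-wave retarder with its fast axis along one microvillar direction and is our own illustration, not a model of the real R8 cell.

import numpy as np

# A quarter-wave plate with its fast axis along x shifts the phase of the
# y component by a quarter wavelength relative to x.
quarter_wave = np.array([[1, 0],
                         [0, 1j]])

circular_one_hand = np.array([1, -1j]) / np.sqrt(2)   # one circular handedness
circular_other_hand = np.array([1, 1j]) / np.sqrt(2)  # the opposite handedness

for name, beam in (("one handedness", circular_one_hand),
                   ("other handedness", circular_other_hand)):
    out = quarter_wave @ beam
    # After the plate the two components are back in phase (or in antiphase):
    # the light is linearly polarized at +45 or -45 degrees to the fast axis.
    plane = np.degrees(np.arctan2(out[1].real, out[0].real))
    print(f"{name}: output {np.round(out, 3)}, plane at {plane:.0f} deg to the fast axis")

The two handednesses emerge as linearly polarized light in perpendicular planes, which is what the banded R1-7 rhabdom beneath can tell apart.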
Circularly polarized light can be right- or left-handed, depending on
the sense of the spiral (Fig. 2.9c), which in turn depends on whether the
phase difference is +90º or −90º. It seems that the mid-band ommatidia
in the mantis shrimp can differentiate between the two types of circular
polarization. This ability arises because the R8 receptors in rows 5 and
6 have their microvilli at right angles to each other, and so introduce
opposite phase differences. As a result, the underlying R1-7 receptors
respond differentially to left-handed or right-handed circularly polar-
ized light.
Fig. 2.10 Circular polarization detector in the eye of a stomatopod. (a) Arrangement of microvilli
in ommatidial rows 5 and 6 of the mid-band. (b) R8 acts as a quarter-wave plate (Fig. 2.9d), and
converts circularly polarized light to linearly polarized light, which can be detected by the main
rhabdom (receptors R1-7).

The R8 receptors are themselves sensitive to linearly polarized ultraviolet
light, with E-vectors oriented parallel and perpendicular to the
mid-band in the two rows (Kleinlogel and Marshall, 2009). Thus omma-
tidia of mid-band rows 5 and 6 may be capable of supplying information
about both plane polarized (R8) and circularly polarized light (R1-7).
Special regions of the bodies of mantis shrimps, notably the uropods
and telson, are used in sexual display. In Odontodactylus species these
regions differentially reflect left- and right-hand circularly polarized
light, but only in the males (Chiou et al. 2008). It seems that mantis
shrimps may use circular polarization as a private channel of communi-
cation; so far, no other organism is known to have the appropriate optical
machinery for resolving circularly polarized light.

Summary
1. Light can behave as rays, as waves, or as streams of particles. For most
optical purposes a description in terms of rays is adequate, but several
phenomena, including the resolution of images, can only be explained by
wave interference. At low light levels the quality of vision depends on
the statistics of particle (photon) numbers.
2. Human vision extends over an intensity range of about 10¹⁰. In gen-
eral, visual systems detect contrast rather than intensity, where contrast
is the difference in intensity of two surfaces divided by their sum. It
depends on the reflectances of objects rather than the intensity of illu-
mination.
3. Objects reflect light of different wavelengths to different extents, and this
is the basis of colour. Colour vision requires at least two visual pigments
that are maximally sensitive to different wavelengths.
4. Polarization vision is common in animals. Light is polarized by scat-
tering in the atmosphere, and the pattern produced can be used as a
navigation aid. Water surfaces and other non-metallic reflectors also
polarize light. Detection requires that the photopigment molecules are
appropriately aligned in the photoreceptor membrane. Mantis shrimps
can, in addition, distinguish right-handed from left-handed circularly
polarized light.
3 What makes a good eye?

Fundamentals
Eyes are unique amongst the sense organs because we know enough about
the physics and chemistry of vision to be able to say with some certainty
why they are built the way they are. Of course, they were not designed, as
one would design a camera or telescope, but evolved over millions of years.
Nevertheless, both evolution and technology have to obey the same set of
physical rules. Image-forming lenses, for example, have to be made using
the principle of refraction by a transparent high refractive index material,
whether the lens evolved in an octopus or fish, or was designed by Leitz or
Nikon. The differences come in the materials: biological lenses are gener-
ally constructed from protein rather than glass, and mirrors are made from
guanine multilayers rather than silver. It is chemistry rather than physics
that distinguishes biology from technology. In this chapter we explore these
physical constraints on eye evolution. We will make the fairly bold claim
that it is sensible to approach eyes in essentially the same way that an opti-
cal engineer might evaluate a new video camera. We can say what most
of the components are for and how well they are likely to perform, and
also establish criteria for judging the performance of an eye as a whole.
Thus this chapter is intended as something of a tool kit for interpreting eye
structure, and for providing a basis for comparing the performances of the
different types of eye that will be the subject of later chapters.
Eyes supply information about the nature of the light distribution in the
environment. For a hawk this information needs to be very fine-grained, but
for a flatworm it can be coarse. Although we cannot say that the flatworm’s
simple pigment cup eye is less successful, in an evolutionary sense, than

the hawk’s, we can nevertheless say that the hawk’s eye is better, because
of the much greater quantity of information it is capable of supplying to
its bearer. If we are to employ ‘information supply’, albeit loosely, as our
basis for judging the quality of an eye, what yardsticks should we use? We
will leave aside for the moment the capacity to distinguish wavelength and
plane of polarization; these are features more of the molecular organiza-
tion of the receptors than of the structure of the eye itself (see Chapter 2).
It is generally agreed that there are two features of an eye’s function that
between them summarize its performance, and which are independent of
the eye’s optical type. These are resolution and sensitivity. By resolution we
mean the precision with which an eye splits up light according to its direc-
tion of origin. This is a combination of the quality of the image provided
by the optics, and the fineness of the mosaic of retinal detectors. Sensitivity
refers to the ability of an eye to get enough light to the receptors for them
to make full use of the eye’s potential resolution. For animals living in dim
environments, sensitivity is every bit as important as resolution.
Before examining in detail the features of an eye that make for good per-
formance, we will first look briefly at the way that resolution and sensitivity
interact, and the reasons why both are important. Figure 3.1 is an imagi-
nary eye with rather poor resolution and a dim image, intended to show in
exaggerated form the problems that all eyes face. Two point sources of light
outside the eye are brought to a focus on the retina by the lens, where they
give rise to distributions of light that are no longer point-like, but blurred
and spread out over several receptors (blur circles). There are many possible
reasons for this spread. For example, the optical system might fail to bring
all rays to a single focus (aberration), or light might be scattered by the
media of the eye. Even if the eye is perfect in these respects, there remains
a fundamental source of blurring known as diffraction, which is inescap-
able, and arises from the wave nature of light (see Figs. 2.1 and 3.5 below).
This will be discussed later in the chapter, but basically the smaller the
aperture of the eye compared with the wavelength of light, the worse the
problem is, so that in the tiny optical systems of insect compound eyes, for
example, diffraction is particularly serious (Chapter 7). The degree of blur
resulting from these defects limits the quality of images of all kinds, and in
doing so also establishes how fine the retinal mosaic should be. There is no
point in the retina having receptors much smaller than the blur circles that
make up the image. Roughly speaking, all the information contained in the
image is extracted when two receptors occupy the half-width of the light
distribution in a point source image, more or less as shown in Fig. 3.1. Thus
the poorer the image quality the fewer the number of receptors needed to
take in all the information the image offers. A coarser mosaic than this
will waste image detail (and can be said to ‘undersample’ the image), and

a finer mosaic will have more receptors than necessary (‘oversampling’), so


there is a clear optimum. Most eyes do indeed show this expected match
between image quality and retinal ‘grain’.
Figure 3.1 also illustrates the effect of low light levels, by showing (black
dots) how many photons each receptor captures from the image. Light is
quantal, and the smallest packet of light energy, the photon, is indivisible
when it is caught by a rhodopsin molecule: it is either present or not present
(Chapter 2). This means that at low light levels there is much statistical
uncertainty, represented in Fig. 3.1 as the variable numbers of photon cap-
tures in receptors supposedly each receiving the same average amount of
light. If more photons were available then the photon number distributions
would come increasingly to resemble the optical distributions, but if fewer
were present the situation would become much worse, with only an occa-
sional photon reaching any of the receptors that image¹ each point source.
Thus low light levels corrupt the image by introducing uncertainties in
photon numbers. One can think of this producing a kind of statistical blur,
which adds to the blur caused by diffraction or imperfect optics, and which
has similar deleterious effects on the eye’s ability to resolve. At low levels
this often means that it is better to employ large receptors, in order to get
a reasonable statistical photon sample, than it is to have small receptors to
sample the image finely. This trade-off between resolution and sensitivity is
one we shall meet repeatedly, particularly in animals that have to operate
over a range of light levels.

Fig. 3.1 Limits to resolution. An imaginary eye showing the blurring of the image by imperfect
optics, the way the image is sampled by the retinal mosaic, and the uncertainty resulting from low
numbers of photons (dots).

1. Throughout this book we will use ‘image’ both as a noun and as a verb meaning ‘to form an
image’. ‘Focus’, as a verb, will be used to mean altering the position of the image to bring objects
at different distances to a focus, i.e. to effect accommodation.

We have seen that both wave and particle aspects of light affect eye per-
formance. Its wave nature imposes a fundamental limit to image quality,
and, as we will see later, to receptor size as well; and its quantum nature
determines the certainty with which light can be measured. With these con-
straints in mind, the rest of this chapter will be used to explore the features
of eyes that enable and limit their capabilities.

Resolution
The retinal sampling frequency
The two features of an eye that set a limit to the detail that can be
resolved in bright light are the fineness of the receptor mosaic and the
quality of the image. How can we best compare the effects of these rather
different attributes? It turns out that one of the best measures that can
be applied to both is their capacity to resolve a grating of dark and light
bars. In the case of the receptor mosaic, it is a well-established finding
that a grating can be properly resolved if the image of each adjacent dark
and light stripe falls upon a separate receptor. This means that the period
(the distance between the centres of two adjacent dark or light stripes) of
the finest resolvable grating in the image is equal to twice the receptor
spacing.
When dealing with objects outside the eye and images within it, it is
often most convenient to deal with angles rather than distances, as the
same angular measurements apply to both. In single-chambered eyes like
our own there is always a point in the eye called the nodal point that rays
pass through without being bent by the lens. For example, in an eye that
forms an image with a simple curved cornea the nodal point will be at the
centre of curvature of the corneal surface, because rays passing through
that point will meet the surface at a right angle, and so will not be bent by
refraction. The significance of the nodal point is that one can draw straight
lines through it connecting object and image points, and so work out the
relative sizes of objects and images directly by the principle of similar tri-
angles (Fig. 3.2a). A small object of size O at a distance U from the eye
makes an angle of α = O/U radians at the nodal point (a radian is the
angle made by an arc of a circle one radius in length at the circle’s centre: 1
radian = 180°/π, or 57.3°), and inside the eye the image I subtends the same
angle α at the nodal point. The best definition of the focal length ( f ), for
our purposes, is the distance from the nodal point to the image of a distant
point. Then the equation:

O/U = α = I/f        (3.1)

Fig. 3.2 Objects and images. (a) A distant object subtends the same angle α inside and outside the
eye when the ray passes through the nodal point N. The focal length (f) is the distance from the
nodal point to the image. (b) The finest grating that an eye can resolve has an angular period of
2Δϕ, where Δϕ is the inter-receptor angle (s/f) at the nodal point, and s is the separation of the
receptor centres.

summarizes the relations between object and image, provided the object is
a long way away. A particularly important angle, because it determines the
fineness with which the image is sampled, is the inter-receptor angle, s/f,
where s is the spacing of the receptor centres. The symbol Δϕ will be used
for this angle (Fig. 3.2b).
We can now apply eqn (3.1) to the grating resolution of the eye. The
finest resolvable grating has a period of 2s on the retina. Expressed as an
angle in either image space inside the eye or object space outside, this is
2s/f radians. It is often more useful to speak of a grating’s spatial frequency
(the reciprocal of the period, in cycles per radian) because the frequency
increases as the resolution improves, whereas the period decreases. The
spatial frequency with which the retina samples the image is the sampling
frequency, designated νs (Greek nu). Thus:

sampling frequency (νs) = f/(2s) = 1/(2Δϕ)        (3.2)

This equation suggests that there are two ways to increase the sampling
frequency, and so improve the eye’s resolution. The focal length f might
be increased, or the receptor separation s decreased. It is not possible to
decrease s below about 2 μm, because receptors narrower than this become
leaky to light, as discussed later. Once this limit is reached, as it is in many
animals, the only way to improve matters is to increase the focal length,
and this necessarily means having a larger eye.
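
These relations are easy to try out numerically. The following minimal sketch (in Python; the helper
names are ours) applies eqn (3.1) and eqn (3.2) to a 2 μm receptor spacing; the human focal length of
roughly 16.7 mm is an assumed value, not one quoted in this chapter, but with it the calculation
reproduces the human entry of Table 3.1:

import math

def inter_receptor_angle(s_um, f_um):
    """Delta-phi = s/f, in radians: the angle a receptor spacing subtends at the nodal point."""
    return s_um / f_um

def sampling_frequency(s_um, f_um):
    """nu_s = f/(2s) = 1/(2*Delta-phi), in cycles per radian (eqn 3.2)."""
    return f_um / (2.0 * s_um)

# 2 um receptor spacing (from the text); ~16.7 mm human focal length (assumed).
s, f = 2.0, 16700.0                      # micrometres
dphi = inter_receptor_angle(s, f)        # ~1.2e-4 rad
nu_s = sampling_frequency(s, f)          # ~4175 cycles per radian
print(f"inter-receptor angle = {math.degrees(dphi):.4f} degrees")   # ~0.007, cf. Table 3.1
print(f"sampling frequency = {nu_s:.0f} cycles/rad = {nu_s / 57.3:.0f} cycles/deg")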

Table 3.1 The resolution of a selection of animal eyes

Name    Maximum resolvable spatial frequency (cycles per radian)    Equivalent inter-receptor angle (degrees)    Method    Ref.

Aquila (eagle) 8022 0.0036 B, A 1


Man (fovea) 4175 0.007 B, A 2
Octopus 2632 0.011 A 2
Portia (jumping spider) 716 0.04 A 3
Cat 573 0.05 B 4
Goldfish 409 0.07 B 5
Aeschna (dragonfly) 115 0.25 A 2
Hooded rat 57 0.5 B 4
Worker bee 30 0.95 B, A 2
Leptograpsus (crab) 19 1.5 A 6
Pecten (scallop) 18 1.6 B, A 2
Lycosa (wolf spider) 16 1.8 A 5
Littorina (sea snail) 6.5 4.5 A 2
Drosophila (fly) 5.7 5 B, A 2
Limulus (horseshoe crab) 4.8 6 A 6
Nautilus (cephalopod) 3.6 8 B, A 2
Cirolana (deep-sea isopod) 1.9 15 A 6
Planaria (flatworm) 0.8 35 A 2

Methods: A, anatomical; B, behavioural. Where the behavioural methods give a lower resolution than the
receptor separation, the behavioural result is used. In vertebrate eyes pooling may result in reduced resolution.
References: 1. Reymond (1985); 2. Land (1981a); 3. Land (1985); 4. Charman (1991); 5. Nicol (1989);
6. Land and Nilsson (1990).

Table 3.1 shows the resolution of the eyes of a variety of animals, expressed
in terms of both the inter-receptor angle Δϕ, and the sampling frequency νs.
They range from flatworms with a sampling frequency of about 1 cycle per
radian, up to eagles with about 8000 cycles per radian.

The optical cut-off


The finer the detail in a scene, the more difficult it is to resolve;
the leaves of distant trees lose their identity in the overall texture. One can
think of the world, from an optical point of view, as consisting of gratings
of a complete range of spatial frequencies. Evidently the highest spatial fre-
quencies, representing the finest detail, do not survive the process of vision.
It might be that the retinal mosaic fails to sample them fully, as we have
discussed already, but that is not the only reason. The optics of the eye
also attenuate, and ultimately cut out the highest frequencies. One of the
best ways to illustrate this is by measuring the contrast (modulation) transfer

Fig. 3.3 The contrast transfer function. The graph shows what happens to the contrast of
gratings of different spatial frequency when they are imaged by a diffraction limited, but otherwise
perfect, lens. As the gratings get finer (higher ν) the contrast in the images decreases until it
reaches zero at the cut-off frequency (νco). The ordinate is the ratio of the image contrast to that of
the object. The insert shows that the effect of the lens is to convert a high-contrast object into a
lower contrast image.

function (Fig. 3.3). This is a graph that shows how the contrast of a grating
is reduced on passing through a lens, as a function of spatial frequency.
[Contrast is defined in Chapter 2: for a grating, it is the difference in inten-
sity of the light bars (Imax) and dark bars (Imin), divided by their sum, i.e.
Contrast = (Imax − Imin)/(Imax + Imin). Dividing by (Imax + Imin) makes contrast
independent of the overall light level]. The effect of an optical system is
always to reduce the contrast in the image, compared with the object grat-
ing that gave rise to it (insert, Fig. 3.3), and this reduction is greatest for the
highest spatial frequencies. Eventually, as Fig. 3.3 shows, a spatial frequency
is reached where there is no contrast at all in the image, and this is known
as the optical cut-off frequency (νco).

The diffraction limit


It is diffraction that sets the cut-off frequency. Other optical imperfections,
not being properly focused for example, may cause contrast to be reduced at
all spatial frequencies, but they do not necessarily change the cut-off itself.
Diffraction is thus of key importance in understanding both image quality
and eye design. It arises from the wave nature of light. When light from
a distant point object, such as a star, reaches a lens, the parallel rays are
bent by refraction so that they come together at a single focus in the image
plane. An alternative, and more accurate, description of the same process
is to say that light from the star reaches the lens as a wavefront (‘rays’

Fig. 3.4 Light distribution in the Airy disc. According to wave optics, the image of a point source
is a diffraction pattern known as the Airy disc, which has the intensity distribution shown. Its width
depends inversely on the aperture diameter D. The angle on the abscissa is given in multiples of the
half-width of the Airy disc, λ/D radians, where λ is the wavelength of light. The inserts show the
meaning of θ in terms of ray optics, and the way a point image is formed according to wave optics.

are arbitrary lines at right angles to this front, see Fig. 2.1a). On passing
through the lens the central region of the wavefront is delayed more than
the edge regions, because it passes through more of the optically dense
material. The result is that the emerging wavefront is no longer flat, but
curved into a part-spherical shape, centred on and progressing towards the
focus (insert, Fig. 3.4). At the focus the various parts of the wavefront meet
and as they pass through each other they interfere. Components that are
in phase with each other will reinforce, whilst those that are exactly out of
phase will cancel, giving rise to a pattern at the focus that is not a point
(as supposed by ray theory), but a diffraction pattern. In the simple case of a
point source object and a circular aperture this pattern has a central bright
spot known as the Airy disc (after its discoverer) and has the form shown
in Fig. 3.4.
A convenient measure of the size of the Airy disc is its half-width, i.e. its
width (w) at half maximum intensity (Fig. 3.4). This turns out to be almost
identical to the distance of the first dark ring discussed in Box 3.1. For a
lens of focal length f this is given by:

w = fθ = fλ/D        (3.3)

The larger the value of w, the wider the image of a point, or more colloqui-
ally the more blurred the image. Roughly speaking, objects whose images

Box 3.1 The origins of the Airy diffraction pattern


A full derivation of the Airy diffraction pattern is available in textbooks of
optics, but it is so important for understanding the limits of vision that an
indication of how it comes about is needed. In Fig. 3.5a two rays from the
converging wavefront interfere in the region of the focus. These come
from two points, X and Y, in the centre of each half of the aperture (the
whole aperture can be thought of as made up of a series of similar pairs of
points). We then ask the question: ‘How far from the focus do we have to
go before the image becomes dark?’. This ‘first dark ring’ can be thought
of as defining the edge of the image of the point object. For a point A in the
centre of the image the distances from X and Y in the aperture are the
same, so the waves will be in phase, they will interfere constructively, and


Fig. 3.5 Diffraction and the image. (a) Construction to show why the image of a point
source becomes dark away from the axis. When there is a half-wavelength (λ/2) difference
in path length between the rays reaching the image from the two halves of the aperture,
destructive interference occurs, and the image at B will be dark. The angle corresponding to
the distance AB in the image is θ, which for a circular aperture is equal to 1.22 λ/D (Fig. 3.4).
(b) Interference patterns illustrating why wide apertures (right) produce narrower ‘Airy discs’
than narrow apertures (left).
A will be bright. However, at point B the distances are no longer the same,
and if the difference in the lengths of the paths from X and Y is equal to
half a wavelength of light (λ/2) the two waves will be exactly out of phase
and will interfere destructively, and so B will appear dark. On Fig. 3.5 we
can see that the triangle converging on B differs from that converging on
A by being tilted through an angle θ and by having an extra short segment
z in one of its sides. The length of z will determine what kind of interfer-
ence (constructive or destructive) occurs at B. z is shared by another trian-
gle containing X and Y and the base of the triangle converging on B, also
tilted through an angle θ. The distance between X and Y is half the aper-
ture diameter (D/2), so the angle θ in radians is given by z ÷ (D/2). If B is to
be dark, z must be equal to λ/2. Substituting this for z gives θ = λ/D. This
is now the angular position, relative to the centre of the lens, of the dark
ring marking the edge of the bright image. It can be converted to distance
in the image plane by multiplying by the focal length, as in eqn (3.1). In
spite of the simplifying assumptions, this result is very close to that of
Airy’s complete calculation, which gave θ = 1.22 λ/D, for a lens with a
circular aperture.

are larger than w will be resolved, but smaller images will not, because
they are blurred out. This is reflected in the contrast transfer function (Fig.
3.3) by the fact that the finest resolvable spatial frequency—the cut-off fre-
quency (νco)—is simply the reciprocal of the Airy disc half-width:

cut-off frequency (νco) = 1/w = D/(fλ)        (3.4)

In other words, the finest grating that the optics can resolve has a period
equal to the half-width of the image of a point source.
Equation (3.3) shows that angular image size, θ (= w/f ), which determines
resolution, is inversely proportional to aperture diameter. The bigger the lens
the smaller the value of θ, and the better the resolution. This is an important
and strangely counterintuitive conclusion. One might think that scaling up
an optical system and making the aperture larger would cause the width of
the image-disc to grow in proportion; but in fact the opposite is true (Fig.
3.5b). This is why astronomers need big telescopes to resolve small closely-
spaced stars. By the same token, it is the reason why insect eyes, whose lens
diameters are measured in micrometres, resolve so poorly.

By way of example, we can use eqn (3.3) and eqn (3.4) to make a com-
parison between the theoretical resolution limits of the eye of man and of
a bee. The human eye has a pupil about 2 mm wide in daylight, so that
for a wavelength of 0.5 μm (blue-green), θ comes to 0.00025 radians, 0.014°,
or 0.86 minutes of arc. The corresponding cut-off frequency is 70 cycles
per degree, which is very close to the sampling frequency of the retinal
mosaic, about 60 cycles per degree (νs, eqn 3.2). The compound eye of a bee,
however, has facets that are only 25 μm in diameter. This is smaller than
the human pupil by a factor of 80, and consequently the resolution must
be 80 times worse, with θ about 1.1°. To get a feeling for what this means,
your little finger nail covers about 1° with the arm extended. It is easy to
use this to imagine how blurred the bee’s visual world would be, compared
with our own.
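
This comparison is easily reproduced. The sketch below simply evaluates the angular Airy half-width
λ/D (eqn 3.3 expressed as an angle) and the cut-off frequency D/λ (eqn 3.4) for the two apertures
quoted above, assuming 0.5 μm light throughout; the function names are ours:

import math

WAVELENGTH_UM = 0.5   # blue-green light, as in the text

def airy_half_width_rad(aperture_um, wavelength_um=WAVELENGTH_UM):
    """Angular half-width of the Airy disc, lambda / D (eqn 3.3 in angular form)."""
    return wavelength_um / aperture_um

def cutoff_cycles_per_deg(aperture_um, wavelength_um=WAVELENGTH_UM):
    """Optical cut-off frequency D / lambda, converted to cycles per degree."""
    return (aperture_um / wavelength_um) / 57.3

for name, D in [("human (2 mm pupil)", 2000.0), ("bee facet (25 um)", 25.0)]:
    theta = airy_half_width_rad(D)
    print(f"{name}: theta = {theta:.2e} rad = {math.degrees(theta):.3f} deg, "
          f"cut-off = {cutoff_cycles_per_deg(D):.1f} cycles/deg")
# human: ~2.5e-4 rad (0.014 deg) and ~70 cycles/deg; bee: ~0.02 rad (1.15 deg) and ~0.9 cycles/deg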

Other optical defects


Although diffraction is the ultimate limit to resolution, which can only be
improved upon by making the aperture of the eye bigger, there are several
other ways that resolution may be compromised. The most important are
focus, spherical aberration, and chromatic aberration, and they all occur in
animal eyes (Fig. 3.6). Near objects are brought to a focus further from the
lens than distant objects, so in large eyes particularly it is important that
the optical system can adapt in some way to object distance. This process
of accommodation may be accomplished by changing the power of the lens
(in man) or by moving the lens (in fish). An out of focus image is degraded
because point sources produce ‘blur circles’ on the retina which, like the
Airy disc, depress the contrast transfer function (Fig. 3.3).
Spherical aberration is the name given to the blurring that occurs because
a simple spherical surface does not bring all rays together at a single focus.
Rays furthest from the axis of the lens are refracted too much, and fin-
ish up in front of the focus for rays near the axis, again resulting in a
blur circle which is larger than the Airy disc. This is potentially a serious
problem for biological lenses, but animals get round it in one of two ways.
They may make the optical surfaces non-spherical, and indeed the human
cornea is not spherical but hyperbolic in shape to avoid just this problem.
A common alternative is to make a lens which is not optically homogene-
ous, as glass is, but which has a gradient of refractive index from the centre
(high) to the periphery (low). The result is that the outer zones of the lens
refract less than they would in a glass lens, and with the correct gradient
of refractive index all imaging rays can be brought to a single point. That
fish lenses have this construction, and correspondingly excellent optics, has

Fig. 3.6 Other optical defects. Top: image not in focus on the retina. Middle: spherical aberration,
in which outer rays are focused closer to the lens than rays near the axis. Bottom: chromatic
aberration, where different wavelengths (red and blue) are focused at different distances. In each
case the result is a blur circle that adds to the blur due to diffraction.

been known since the studies of Matthiessen in the 1880s (see Chapter 4).
The human lens has inherited this design from our fishy ancestors, and cor-
rects its own spherical aberration this way. Thus the human eye has both
non-spherical (cornea) and inhomogeneous (lens) correction mechanisms.
Chromatic aberration is caused because short wavelength blue light is
refracted more strongly than long wavelength red light. This occurs in bio-
logical materials just as in glass, and means that the blue image in the
human eye is almost 0.5 mm in front of the red image. No animal eye seems
to have emulated the achievement of the early telescope makers in making
an achromatic lens by using a combination of materials. However, there
are other solutions. Humans partially evade the problem by using only a
relatively narrow range of wavelengths in the middle of the spectrum for
high-acuity vision, so that our resolution in the blue end of the spectrum
is poor—little more than a ‘colour wash’. Fish and some other vertebrates
are a little more subtle, and have lenses with multiple focal lengths. This
ensures that each cone type has an in-focus image for at least a proportion
of the light reaching it (see Chapter 4).
Unlike diffraction, where eye size is a virtue, these other problems get
worse as eyes get bigger. The reason is that the blur circles caused by aber-
rations scale with the focal length of the eye, so that, uncorrected, an eye
of 1 cm focal length would have blur circles 10 times as large as in an

eye of the same design, but with a 1 mm focal length. However, recep-
tors do not in general scale with focal length, but have much the same
diameter whatever the size of the eye. Thus the potential resolution of the
eye, measured by the inter-receptor angle Δϕ(= s/f ), should improve as the
focal length increases. However, it can only do so if focus defects and other
aberrations are minimized, so that blur circles do not get much larger than
receptor diameters. For structures with short focal lengths, for example
the ommatidia of apposition compound eyes with focal lengths of about
100 μm, these defects are negligible compared with diffraction; no insect
needs a mechanism for focusing its eyes. They become noticeable at a focal
length of a millimetre, and serious when this reaches a centimetre. In all
vertebrates and also the cephalopod molluscs these three kinds of optical
problem have been addressed, in one way or another. A contractable pupil
is particularly important in dealing with optical defects, as it can be used
to strike an appropriate compromise between diffraction (wide pupil) and
aberrations (small pupil). This compromise changes with intensity, as bright
light favours high acuity, but in dim light the priority is to obtain adequate
photon numbers (see also Chapter 5, Fig. 5.10).

Photoreceptor optics
So far, this section has only dealt with resolution in terms of the quality
of the optical image, but the ability of the eye to transmit the information
contained in the image also depends on the size of the photoreceptors—as
well as on their spacing as we have already discussed. If a receptor has a
diameter that is narrower than a line in the finest grating that the eye can
resolve, then it will be able to measure the intensity (strictly illuminance,
see Fig. 2.2) of that line accurately. If, however, it is much wider, it will
swallow up that line and several others, and signal an unresolved aver-
age intensity for the grating. For eyes whose function is to resolve well in
daylight narrow receptors are therefore essential, and this is indeed what is
found. Cones in human eyes are about 2 μm in diameter, which is almost
exactly the width of a single line in a just-resolved grating of 70 cycles per
degree. In a bee’s eye the receptors are also about 2 μm wide, but because
the focal length of a facet in a bee’s eye is so short (about 100 μm), the angle
involved is much larger, just over 1° (α in eqn 3.1). As in humans, this is
close to the optical resolution limit imposed by diffraction.
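
As a check on this matching argument, the short sketch below works out how wide one stripe of a
just-resolved grating is on the retina, and the angle a 2 μm receptor subtends behind a 100 μm
facet; the human focal length of about 16.7 mm is again an assumed value, not given in the text:

import math

def stripe_width_on_retina_um(cycles_per_deg, focal_length_um):
    """Retinal width of one stripe (half a period) of a just-resolved grating."""
    period_rad = math.radians(1.0 / cycles_per_deg)   # angular period of the grating
    return focal_length_um * period_rad / 2.0

def receptor_angle_deg(diameter_um, focal_length_um):
    """Angle subtended by one receptor at the nodal point (d / f)."""
    return math.degrees(diameter_um / focal_length_um)

# Human: one stripe of a 70 cycles/deg grating behind an assumed 16.7 mm focal length.
print(f"stripe width ~ {stripe_width_on_retina_um(70, 16700):.1f} um")   # ~2 um, about a cone width
# Bee: a 2 um receptor behind a ~100 um facet focal length.
print(f"bee receptor angle ~ {receptor_angle_deg(2.0, 100.0):.2f} deg")  # just over 1 degree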
Why should receptors not be narrower still? As we have just seen, an
eye’s ability to resolve depends, among other things, on the angle sub-
tended by single receptors. As this is equal to s/f radians (Fig. 3.2) it would
seem that an eye could be made smaller by reducing its focal length ( f ),
without losing resolution, provided the receptor diameter (d ≈ s) could be

reduced at the same time. This doesn’t happen, however. The narrowest
receptors in vertebrates and in insects are about 1 μm wide. The main
reason for this seems to be that as the width of a photoreceptor begins
to get close to the wavelength of visible light (0.3–0.8 μm), the receptor is
no longer able to hold the light within it by total internal reflection (Fig.
3.7a), and it becomes inefficient and ‘leaky’. Like diffraction, this is a phe-
nomenon associated with the wave nature of light. In narrow light-guiding
fibres, which is what photoreceptors are, the trapped light forms interfer-
ence patterns which are known as waveguide modes (Fig. 3.7b, Plate 3);
these are similar in nature to the standing waves in organ pipes, their
acoustic equivalent. The light in these modes is not uniformly distributed,
and in particular the single mode found in the narrowest fibres has a sub-
stantial part of its energy actually outside the fibre (explanations of this
are given by Snyder 1979, and van Hateren 1989). Not only is this light
unavailable for capture by the rhodopsin molecules inside the fibre, but
it can also be absorbed by external structures such as screening pigment
granules, or even by adjacent receptors. When this happens there is ‘cross-
talk’ between receptors, and resolution suffers. The practical consequence
is that there is nothing to be gained by having receptors narrower than 1
μm, and this in turn sets a lower limit to focal length, and hence the size,
of an eye with a given resolution.
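
Standard waveguide formulas of the kind treated by Snyder (1979) put numbers on this. The sketch
below uses the refractive indices quoted in Fig. 3.7 to compute the critical angle for total internal
reflection, together with the conventional waveguide ‘V parameter’; the V parameter and the
particular diameters chosen are illustrative additions from fibre-optic theory, not values taken from
the text:

import math

def critical_angle_deg(n_outside, n_inside):
    """Angle from the wall normal beyond which light is totally internally reflected."""
    return math.degrees(math.asin(n_outside / n_inside))

def v_parameter(diameter_um, wavelength_um, n_outside, n_inside):
    """Waveguide V number; below ~2.405 only a single mode propagates."""
    numerical_aperture = math.sqrt(n_inside**2 - n_outside**2)
    return math.pi * diameter_um * numerical_aperture / wavelength_um

n1, n2 = 1.34, 1.38          # outside / inside, within the range given in Fig. 3.7
print(f"critical angle ~ {critical_angle_deg(n1, n2):.1f} deg from the wall normal")
for d in (2.0, 1.0, 0.5):    # receptor diameters in micrometres
    print(f"d = {d} um: V = {v_parameter(d, 0.5, n1, n2):.2f}")
# V falls from ~4 (a few modes) at 2 um to ~2 (a single mode) at 1 um and ~1 at 0.5 um,
# where a substantial part of the light travels outside the receptor.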
In narrow receptors the first waveguide mode retains the polarization
of the incident light, which means that the polarization properties of the
receptors are determined principally by the way the photopigment mole-
cules are packed into the cell membrane. This remains true even in wider

(a) (b)

qcrit
Fig. 3.7 Receptor optics. (a) In a wide receptor
(left) light is trapped by total internal reflection.
This occurs only up to the critical angle (θcrit),
which is given by arcsin(n1/n2), where n1 and
n2 are the refractive indices outside and
inside the receptor, typical values of which
are 1.34 and 1.36–1.40. (b) In very narrow n1 n2
receptors (diameter < 2 μm) the light behaves
as a waveguide mode, and has a distribution
in which some travels outside the structure.
This can be caught by neighbouring receptors
(stipple).

receptors, where total internal reflection depolarizes off-axis light to some


degree (see Fig. 2.7).
In the retinae of vertebrates light has to pass through the various neural
layers of the inner retina before reaching the rods and cones, and although
these layers are transparent they are not optically homogeneous, and so
cause some scattering of light. It appears that in mammals, the Müller
cells—glia-like cells which traverse the whole depth of the retina from
the vitreous to the receptors—act as light guides that provide a scatter-
free path, transferring the image from the surface of the inner retina to
the receptors without degradation (Franze et al. 2007). The situation in the
fovea of primates is solved in a different way: most of the neural material of
the inner retina is moved beyond the foveal periphery, leaving the receptors
in the centre optically unencumbered.

Light absorption by photoreceptors


Photoreceptors are typically long and narrow (the photopigment-bearing
outer segments of human rods are about 25 μm long and 1–2 μm wide,
and contain about 10⁸ rhodopsin molecules). The proportion of light that
a receptor absorbs depends on its length. Typically, vertebrate photorecep-
tors made of discs absorb about 3 per cent of the incident light for every
μm of their length, and invertebrate receptors made of microvilli absorb
about 1 per cent per μm. To absorb 90 per cent of the light reaching it a
vertebrate receptor would need to be 77 μm long, and an insect receptor
230 μm. These numbers are fairly typical of receptors in the two groups
(human rods are rather short). The relationship between absorption and
receptor length is logarithmic rather than linear because with increasing
distance down the receptor there is less light left to absorb. The proportion
of the incident light absorbed by a receptor of length L can be found from
(1 − e^(−kL)) if the light is monochromatic (which is roughly true of the deep
sea where only blue light penetrates), or from [kL/(2.3 + kL)] for white light,
typical of terrestrial conditions. k is the absorption coefficient—the propor-
tion of the light absorbed per micrometre, if L is measured in micrometres.
Definitions of some frequently encountered terms related to absorption are
given in Box 3.2.
The length of a receptor also affects its spectral sensitivity, as a result
of ‘self-screening’. Pigment early in the light path absorbs most of the light
close to the wavelength of maximum sensitivity, meaning that the remain-
ing pigment absorbs more of the remaining light, further from the peak.
This broadens the spectral sensitivity of the receptor as a whole. A useful
discussion of receptor absorption, and in particular the way it depends on
wavelength, can be found in Warrant and Nilsson (1995).

Box 3.2 Terms related to absorption


Absorption or attenuation describes the extent to which light, or some other
form of energy, is absorbed on passing through a given length of a sub-
stance (Fig. 3.8):
I = Io e^(−kL)

Fig. 3.8 Attenuation of light passing through an absorbing medium (see text).

where Io is the incident intensity, I the intensity after passing through


a length L, and k (the absorption coefficient) is the proportion of light
absorbed per unit length: for example 3 per cent (0.03) per micrometre
for a vertebrate rod.
Transmittance is the ratio of the transmitted to incident light energy:
T = I/Io; T is also equal to e^(−kL)
Related to T is absorptance, the fraction of energy absorbed, i.e. 1− T,
or (Io – I) / Io
Absorbance, or optical density is a logarithmic measure of absorption, and
is the negative logarithm of transmittance:

A = −log10(I/Io)

The advantage of using absorbances is that they add, so that the


combination of neutral density filters of 0.2 and 0.4 has an optical den-
sity of 0.6.

Resolution and eye design


We are now in a position to use the physical principles outlined in the
preceding sections to draw some firm conclusions about the relationship
between an eye’s size and construction, and the resolution it provides. A
satisfying way of doing this is to try to ‘design’ an eye to a particular speci-
fication. If this can be done, using these principles, we can be reasonably
sure that nothing important has been left out.
Imagine a small vertebrate with a single-chambered lens eye similar in
design to our own. This animal is a herbivore feeding in bright daylight
(this avoids problems of photon scarcity to be discussed in the next section).
It needs to resolve grass at, say 3 m, which approximates in angular terms
to a 10 cycles per degree grating. Converting from degrees to radians gives

a retinal sampling frequency νs of 10 × 57.3, or 573 cycles per radian, and


from eqn (3.2) this means that f/(2s) = 573. Waveguide considerations mean
that the receptor separation s cannot be much less than about 2 μm (1 μm
receptors and 1 μm spaces), so that the focal length f must be at least 573 ×
4 μm, or 2.29 mm. One would expect that the retinal sampling frequency
would match the optical cut-off frequency νco quite closely, as in the human
eye, avoiding either ‘unused’ resolution on the one hand or superfluous
receptors on the other. νco is thus also 573 cycles per radian, and from eqn
(3.4) this means that D/λ = 573. If λ is 0.5 μm, it follows that the eye must
have an aperture diameter D of about 0.29 mm. Thus the main features of this
fictitious eye—its focal length, aperture diameter, and receptor diameter—
all follow from the tasks that evolution has assigned to it, and the particular
physical principles that apply to eyes.
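
The whole design exercise condenses into a few lines. The sketch below follows the chain of reasoning
above: the target sampling frequency fixes the focal length through eqn (3.2), and matching the
diffraction cut-off to it fixes the aperture through the angular form of eqn (3.4); the 2 μm receptor
spacing and 0.5 μm wavelength are those used in the text:

def design_eye(cycles_per_deg, receptor_spacing_um=2.0, wavelength_um=0.5):
    """Focal length and aperture for a diffraction-matched single-chambered eye."""
    nu_s = cycles_per_deg * 57.3                        # target sampling frequency, cycles/rad
    focal_length_um = nu_s * 2 * receptor_spacing_um    # from eqn (3.2): f = 2 s nu_s
    aperture_um = nu_s * wavelength_um                  # cut-off D/lambda set equal to nu_s
    return focal_length_um, aperture_um

f_um, D_um = design_eye(10)     # the grazing herbivore: 10 cycles per degree
print(f"focal length ~ {f_um/1000:.2f} mm, aperture ~ {D_um/1000:.2f} mm")
# ~2.29 mm focal length and ~0.29 mm aperture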

Sensitivity
The consequences of low photon numbers
The statistical uncertainties associated with small photon numbers mean
that at low light levels the potential resolution of an eye cannot be realized,
as indicated in Fig. 3.1. The first clear demonstration that human rod recep-
tors actually detect individual photons was made by Hecht, Shlaer, and
Pirenne in 1941, and Fig. 3.9, from Pirenne’s book Vision and the eye (1967),
is based on that study. All four figures show the image on the retina of the
same bright field containing a dark circular patch, but at different levels of
illumination. In I the light level is so low that only 6 of the 400 receptors
receive a photon: this is approximately the situation at the human thresh-
old of vision. (Although single receptors are capable of detecting single
photons, the brain requires a ‘safety factor’ of about 6, so that responses
are not made to spontaneous rhodopsin activations.) In II the light level
is ten times higher, but still the dark patch is invisible, disguised in the
‘noise’ of the random background of photon hits. By III a gambler might
be prepared to guess that there was a dark region in the field, but it
is only by IV, 1000 times the threshold level, that the dark patch stands out
with certainty. This demonstration makes it clear just why it is that we see
so badly in the dark.
At higher light levels than those illustrated in Fig. 3.9 it becomes pos-
sible to distinguish different shades of grey, and ultimately small contrasts
such as occur in the images of gratings near the resolution limit (Fig. 3.3).
To resolve these gratings, the ability to detect contrasts of a few per cent
is essential. How much light, or more precisely how many photons per
receptor, are needed to detect a particular contrast? This question was first

Fig. 3.9 Effect of low photon numbers. The four panels show a square of 400 receptors in the
human retina with sample distributions of photon captures at the threshold of vision (I) and three
higher light levels, with a factor of 10 increase between each (II–IV). The dark disc in the centre
only becomes reliably detectable at intensities between 100 and 1000 times threshold. From
Pirenne (1967).

studied in the 1940s by Hugo deVries and Albert Rose, and the general
answer they reached was that the minimum detectable contrast was pro-
portional to the reciprocal of the square root of intensity (the Rose–deVries
law). In Box 3.3 we see how this rule arises from the statistics of photon
capture, and then discuss its implications for vision.
It is impressive how big some of the photon numbers, predicted by eqn
(3.5), have to be. If the contrast in a grating is 0.5 (50 per cent) the number
required is 1/0.5² = 4. With a contrast of 10 per cent the number is 100, but
when the contrast is down to 1 per cent, which humans can easily detect,
the number is 10 000. These numbers refer to photons collected within one

Box 3.3 How many photons are needed to detect a given contrast?

What has to be established is whether or not a real difference in intensity
between two stripes in a grating, represented in the retina as a differ-
ence in photon numbers captured by the receptors, is larger than the
‘noise’ level, i.e. the statistical fluctuations in the numbers of photons
arriving at the receptors. Fortunately, in this kind of statistics (Poisson
distribution), noise and signal size are closely related. The variation in
photon numbers, measured as the standard deviation, σ(n), of repeated
samples, is equal to the square root of the mean number, n, in the sample,
i.e. σ(n) = √n. This property is common to many ‘noisy’ processes, for
example current fluctuations in resistive circuits where small numbers
of electrons are involved. Contrast (C) in a grating was defined earlier
as the difference in intensity (ΔI = Imax − Imin) between pairs of stripes,
divided by the sum of the intensities, i.e. C = ΔI/2I, where I is the aver-
age intensity. In a single sample pair we can replace intensities with
photon numbers, which gives us Δn/2n where n is the average photon
number. For a brightness difference to be regarded as real, ordinary sta-
tistical reasoning suggests that the difference between the samples, Δn,
should be greater than the standard deviation σ(n), or, to give 95 per
cent certainty, 2σ(n) (illustrated in Fig. 3.10). Thus a difference is detect-
able if Δn > 2σ(n). To reach the answer, we need to do two things to this
expression: divide both sides by 2n, and replace σ(n) by √n. This now
gives Δ(n)/2n > 2√n/2n. The left-hand side is, on average, equal to ΔI/2I,
which is the contrast C, and the right-hand side tidies up to simply 1/√n.
So the final result is:

Fig. 3.10 Photon statistics and contrast detection. The figure shows the way photon samples will be
distributed in receptors that image two areas of slightly different brightness. If the average
difference in photon numbers (Δn) is greater than twice the standard deviation of each distribution
(σ(n)), the difference in brightness can be reliably detected. (Axes: probability against average
photon number, n.)
C > 1/√n, or n > 1/C²        (3.5)

The first expression is a version of the Rose–deVries law mentioned ear-


lier, and the second tells us how many photons are needed, per receptor,
to detect particular contrasts.


Fig. 3.11 Resolution and contrast loss. Low photon numbers limit the minimum detectable contrast.
The effect of this is to set a ‘floor value’ to the contrast ratio in the contrast transfer function (see
Fig. 3.3) which in turn limits the maximum detectable spatial frequency (νmax) to a fraction of the
cut-off frequency (νco). In this case raising the minimum contrast ratio to 0.32 reduces the maximum
frequency to about 58 per cent of the bright light value.

‘integration time’ of the eye: roughly speaking, this is the time it takes for
a receptor to respond fully to a change in intensity, and it is typically 0.1
seconds or less. Thus at low contrast each receptor would require photon
numbers around 10⁵ per second. This is still an underestimate, because for
a variety of reasons only a proportion of the photons reaching the eye from
a scene are actually absorbed by rhodopsin molecules. In humans this is
around 10 per cent, which means that the photon numbers needed to detect
a 1 per cent contrast are close to a million per second per receptor. This is
a very large number, and the obvious next question is: ‘How many photons
does the world provide for us to see with?’.
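
Before turning to that question, the arithmetic of the last two paragraphs can be summarized in a
short sketch; the 0.1 s integration time and the 10 per cent absorption figure are the values quoted
above:

def photons_needed(contrast):
    """Photons per receptor per integration time needed to detect a contrast (eqn 3.5)."""
    return 1.0 / contrast**2

def photons_at_eye_per_second(contrast, integration_time_s=0.1, fraction_absorbed=0.1):
    """Photons that must reach the eye per receptor per second, allowing for losses."""
    return photons_needed(contrast) / integration_time_s / fraction_absorbed

for C in (0.5, 0.1, 0.01):
    print(f"C = {C:4}: {photons_needed(C):>7.0f} absorbed per integration time, "
          f"~{photons_at_eye_per_second(C):.0e} arriving per second")
# C = 0.5 -> 4, C = 0.1 -> 100, C = 0.01 -> 10 000 absorbed photons,
# i.e. about a million photons per second per receptor arriving at the eye for 1 per cent contrast.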

Available photon numbers


The radiance (R) of a white card in bright sunlight (a measure of the number
of photons it emits), is about 10²⁰ m⁻².sr⁻¹.s⁻¹, in room light it is about 10¹⁷,
in moonlight 10¹⁴, and in starlight 10¹⁰—the absolute threshold for human

vision. (The meaning of the units is explained in Chapter 2.) These numbers
seem enormous, but they reduce by a factor of 10¹² in going from square
metres in outside space to the dimensions of photoreceptors which are meas-
ured in square micrometres. Similarly the cones of light accepted by single
receptors are typically less than 1 square degree, and as there are 3283
square degrees in a steradian (which is a cone 65.5° across), this reduces
photon numbers by a further 10³ or more. This cuts down the final numbers
available to receptors to a million per second or less, bringing them into
the range within which photon numbers start to limit contrast detection.
This leads to a very important conclusion: eyes are ‘photon starved’—in the
sense that they are unable to exploit their potential capabilities—at all light
levels except bright daylight.
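
The ‘photon starvation’ argument is essentially a pair of unit conversions, sketched below; the
10 μm² collecting area and 1 square degree acceptance cone are rough illustrative values of the
order mentioned in the text, not precise figures:

SQ_DEG_PER_STERADIAN = 3283.0          # from the text

def photons_per_receptor_per_s(radiance_m2_sr_s, area_um2, field_sq_deg):
    """Rough photon catch: radiance scaled from square metres and steradians down to
    receptor dimensions (um^2) and acceptance cones (square degrees)."""
    area_m2 = area_um2 * 1e-12                     # 1 m^2 = 1e12 um^2
    solid_angle_sr = field_sq_deg / SQ_DEG_PER_STERADIAN
    return radiance_m2_sr_s * area_m2 * solid_angle_sr

# Illustrative receptor: ~10 um^2 collecting area, ~1 square degree acceptance cone.
for scene, R in [("sunlight", 1e20), ("room light", 1e17),
                 ("moonlight", 1e14), ("starlight", 1e10)]:
    print(f"{scene:>10}: ~{photons_per_receptor_per_s(R, 10.0, 1.0):.1e} photons/s")
# sunlight ~3e5, room light ~3e2, moonlight ~0.3, starlight ~3e-5 photons per second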
In addition to limiting contrast detection, low photon numbers also
reduce acuity. This is most easily explained by considering what happens
to the contrast transfer function (Fig. 3.11). If the minimum detectable con-
trast is increased by low photon numbers, then this is equivalent to raising
the baseline of the graph so that it cuts off the bottom of the curve. Thus
with only 10 photons per receptor per integration time available, the con-
trast limit will be 32 per cent (according to eqn 3.5) and that will have the
effect of limiting the maximum detectable spatial frequency to less than 0.6
of the value in bright light, i.e. the cut-off frequency. This is basically why
fine work requires high light levels.

Making eyes more sensitive


From what has been said, it is clear that the more photons an eye can cap-
ture the better. This is important at normal light levels, but the pressures
are even greater for nocturnal animals (moonlight is a million times dim-
mer than sunlight), and those that live at depth in the ocean, where even
in the clearest water light is reduced by a factor of 10 for every 70 m. We
can call this ability to capture photons an eye’s sensitivity, and define it as
the number of photons (n) caught per receptor when the eye views a scene
of standard radiance (R).
There are basically two features that make an eye sensitive: these are the
pupil diameter D and the angle in space over which each receptor accepts
light (Δρ). For present purposes Δρ is given by d/f, the angle the receptor’s
diameter makes at the eye’s nodal point (see Fig. 3.12). Receptor length can
be important too, as discussed in the earlier section ‘Photoreceptor optics’,
and the term Pabs is added here to take into account the proportion of pho-
tons entering the receptor that are absorbed by the photopigment (usually
between 0.1 and 0.9; see ‘Photoreceptor optics’, above). The sensitivity S is
then given by:

S = n/R = 0.62 D²Δρ² Pabs        (3.6)

This equation is quite easily derived from photometry, and a full explana-
tion can be found in Land (1981a). (The factor of 0.62 is (π/4)², and arises
because both aperture and receptors have circular cross-sections. Note
that for small angles Δρ² is a solid angle in steradians). What is impor-
tant here is that there is really only one variable that can safely be var-
ied to improve sensitivity and that is the aperture D. Increasing Δρ will
also increase sensitivity but at the expense of resolution since in a single
chambered eye Δρ and the inter-receptor (sampling) angle Δϕ are almost
the same (Fig. 3.2).
Let us look at how, in practice, an eye might become more sensitive
(Fig. 3.12). Initially the aperture could be increased: in going from a day-
light diameter of 2 mm to 8 mm at night, the human pupil increases the
eye’s sensitivity by a factor of 16. In really nocturnal animals such as owls
and opossums the pupil is almost as wide as the eye itself. Obviously,
however, D cannot be greater than the eye diameter, so ultimately it must
be eye size that limits sensitivity. Any further sensitivity increase requires
a larger eye to accommodate the larger pupil, and indeed most nocturnal
animals have large eyes. However, increasing the eye size, and hence the
focal length f, will actually decrease Δρ, which is given by d/f, so to reap
the rewards of the larger eye the receptors must be made correspondingly
wider.


Fig. 3.12 Increasing an eye’s sensitivity. The sensitivity of the eye on the left can first be improved
by widening the aperture (D) to the maximum possible (centre). After that, the only way to
increase photon capture without changing resolution (constant acceptance angle ∆ρ = d/f radians)
is to scale up all three parameters (D, d, and f ) together.

Table 3.2 The sensitivity (S ) of a selection of animal eyes

Name    Sensitivity (μm².sr)    Light habitat    Ref.

Cirolana (marine isopod) 4200 Deep sea 1


Oplophorus (decapod shrimp) 3300 Deep sea 2
Lampanyctus (lantern fish) 247 Deep sea 3
Dinopis (ogre-faced spider) 101 Nocturnal 2*
Limulus (horseshoe crab) 83–317 Coastal mainly nocturnal 1*
Ephestia (moth) 38 Nocturnal/crepuscular 2*
Onitis aygulus (dung-beetle) 31 Nocturnal/crepuscular 4*
Phronima (hyperiid amphipod) 38–120 Mid-water 1
Man (peripheral rod pool) 18 Crepuscular 2*
Octopus 9.7 Coastal sea-floor 5
Pecten (scallop) 4.0 Coastal sea-floor 2*
Bufo (toad) 4.0 Mainly diurnal 6*
Leptograpsus (shore crab) 0.5 Diurnal 1*
Onitis ion (dung beetle) 0.35 Diurnal 4*
Worker bee 0.32 Diurnal 2*
Phidippus (jumping spider) 0.04 Diurnal 2*
Man (fovea in daylight) 0.01 Diurnal 2*

References to original data: 1. Land and Nilsson (1990); 2. Land (1981a); 3. Warrant, Collin, and Locket (2003)
(assumes 25μm ganglion cell fields); 4. McIntyre and Caveney (1998); 5. Hanlon and Messenger (1996); 6. Warrant
and Nilsson (1998). * Values recalculated for white light using method given in 6.
The monochromatic light formula S = 0.62 D²Δρ²(1 − e^(−kL)) was used for the five deep-sea species (no *); for all
the others (*) the white light formula S = 0.62 D²Δρ²[kL/(2.3 + kL)] was used. Δρ is obtained from d/f, the receptor
diameter divided by the focal length.

In vertebrates’ eyes the receptors themselves are not particularly large in


big eyes. What tends to happen instead is that small receptors are grouped
into larger units at the level of the retinal ganglion cells, so that the effec-
tive receptor diameter is increased (spatial summation). This arrangement is
often quite flexible, so that the size of the receptor ‘pool’ can vary with light
level, allowing a trade-off between high resolution in daylight (small effec-
tive Δρ) and high sensitivity at night (large effective Δρ). Another strategy
is to collect photons over a longer period of time (temporal summation). As
with spatial summation there is a penalty, in this case the increased move-
ment blur that results from the lengthened ‘shutter time’. Nevertheless, when
used appropriately, spatial and temporal summation can be very effective
in augmenting the purely optical adaptations summarized in eqn (3.6). For
example, it has been estimated that, with optimal summation, the locust
eye can extend its visual range down to light intensities 100 000 times dim-
mer than that provided by the optics alone (Warrant 1999).
Even without taking spatial and temporal summation into account, the
range of sensitivities that different eyes obtain by varying the parameters in
eqn (3.6) is remarkably large. For the human eye in daylight (D = 2000 μm,

Δρ = 1.2 × 10⁻⁴ rad, Pabs = 0.31) S is 0.01 μm².sr, whereas at the other extreme
the deep-sea isopod crustacean Cirolana (D = 150 μm, Δρ = 0.78 rad, Pabs =
0.51) has a value for S of 4200 μm².sr. If both eyes were looking at the same
scene, the crustacean would capture 420 000 times as many photons per
receptor, a ratio not far short of the million-fold difference between day-
light and moonlight, although well short of the total range of usable human
vision (10¹⁰). Sensitivity figures for a range of animals are given in Table 3.2.
In general there is excellent agreement between the value of S, and the light
regime in the animal’s habitat. Diurnal and surface-living animals tend to
have S-values below 1, for crepuscular and mid-water animals S is in the
range 1–100, and for nocturnal and deep-water animals it is between 100
and 10 000. Another selection of sensitivity values, showing the same trend,
is given by Warrant and McIntyre (1990).
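
Equation (3.6) is easily checked against these two extreme cases. The sketch below plugs in the
human daytime and Cirolana values quoted above, and also multiplies S by the bright-sunlight
radiance from the previous section to recover the photon catch per receptor:

def sensitivity_um2_sr(D_um, delta_rho_rad, p_abs):
    """Eqn (3.6): S = 0.62 * D^2 * delta_rho^2 * P_abs, in um^2.sr."""
    return 0.62 * D_um**2 * delta_rho_rad**2 * p_abs

human = sensitivity_um2_sr(2000, 1.2e-4, 0.31)    # rounds to the text's 0.01 um^2.sr
cirolana = sensitivity_um2_sr(150, 0.78, 0.51)    # rounds to the text's ~4200 um^2.sr
print(f"human fovea: S ~ {human:.3f} um^2.sr")
print(f"Cirolana:    S ~ {cirolana:.0f} um^2.sr")   # ratio ~ 4 x 10^5, as in the text

# Photon catch n = S * R: with R ~ 1e20 photons m^-2 sr^-1 s^-1 in bright sunlight
# (and 1 um^2 = 1e-12 m^2), the human fovea receives roughly a million photons per
# receptor per second.
print(f"human catch in sunlight ~ {human * 1e-12 * 1e20:.1e} photons/s")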

Conclusions
A ‘good’ eye can be defined as one that resolves well under a variety of
lighting conditions, and we are now in a position to see what anatomical
features make this possible. The first point is that such an eye will have
to be reasonably large, for three reasons. A long focal length is needed to
obtain a low minimum resolvable angle and a high retinal sampling fre-
quency (eqn 3.2); a wide aperture is needed to reduce diffraction, and thus
ensure a high optical cut-off frequency (eqn 3.4); and a wide aperture is also
needed to get enough light into the eye to ensure adequate photon num-
bers, and thus good contrast detection in dim light (eqns 3.5 and 3.6). Large
absolute eye size benefits both resolution and sensitivity, so it is no surprise
to find an evolutionary trend towards large eyes in all animals that require
good eyesight. Humans, hawks, and dragonflies have large eyes in order to
resolve well, whereas cats, owls, and moths use eye size more to improve
sensitivity. Not surprisingly, hunters in the deep sea, requiring both resolu-
tion and sensitivity, sometimes have huge eyes. The largest recorded eye is
that of a deep-sea squid, and it had a diameter of nearly 30 cm. Conversely,
low-acuity eyes operating in daylight can be less than a millimetre wide.
The differences between diurnal and nocturnal eyes are mainly in the
size of the aperture and the angle in space (Δρ) over which each receptor
accepts light. Human eyes have relatively small pupils, with an F-number
( f/D as in photography) between 8 in daylight and 2 at night. Diurnal
insects such as bees typically have F-numbers of about 2. In fishes and in
nocturnal terrestrial vertebrates the F-number is closer to 1, and in some
arthropods such as moths and lobsters it can be as low as 0.5. Since image
brightness varies as (1/F-number)², this means that the optics of a lobster
eye at night have 256 times the light-catching power of a human eye in

daylight. There are no advantages for a diurnal eye in having a pupil larger
than is needed to prevent diffraction from limiting image quality. Indeed,
there are disadvantages, because other defects such as spherical and chro-
matic aberration become worse. But in the dark high resolution becomes
unusable, and the need for photons is paramount, which dictates that the
aperture should be as large as possible.
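
The F-number comparison follows directly from the inverse-square relation; as a two-line sketch:

def relative_image_brightness(f_number):
    """Image brightness scales as (1 / F-number)^2."""
    return (1.0 / f_number) ** 2

# Lobster eye at night (F ~ 0.5) versus human eye in daylight (F ~ 8).
ratio = relative_image_brightness(0.5) / relative_image_brightness(8.0)
print(f"lobster : human image brightness ~ {ratio:.0f} : 1")   # 256 : 1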
In daylight there are plenty of photons, and the narrower the recep-
tors are the better, because this means that the eye can have a short focal
length for a given resolution (eqn 3.2), and so be physically small. Because
there is a lower limit of about 1 μm to the receptor diameter, imposed by
waveguide optics (Fig. 3.7), this means that focal lengths cannot become
vanishingly small either. There is thus a minimum size an eye must have,
however bright the light. In the dark wider receptors are favoured, because
this increases the angle (Δρ) over which they capture photons. This means
that in an eye of a fixed size resolution will theoretically be reduced if
sensitivity is increased. However, in dim conditions fine resolution is any-
way unusable (Fig. 3.11), so the compromise between resolution and sen-
sitivity favours wider receptors. There seems to be a practical upper limit
to receptor diameter of about 25 μm; this is found in lobsters and some
other crustaceans. Such receptors will accept 100 times more photons than
a 2.5 μm cone from the human fovea. The clever way of managing this
trade-off between resolution and sensitivity, so that the eye has the best
resolution available to it at different light levels, is to have small receptors,
but to pool them into larger assemblages in darker conditions. There is a
good deal of evidence to suggest that this is what occurs in the eyes of
vertebrates and in some arthropods.

Summary
1. Eyes can be characterized by their resolution and sensitivity. Resolution
is the fineness, in angular terms, with which the optical environment is
sampled. Sensitivity is quantifiable as the number of photons a receptor
receives when the eye is viewing a scene of standard radiance.
2. Resolution depends on the sampling density of the retinal recep-
tors and also on the quality of the optical image. This quality can be
affected by defects of focus, and by spherical and chromatic aberra-
tion. It is ultimately limited by diffraction (interference of light waves
in the image). The larger the aperture of an eye, the smaller the effect
of diffraction.
3. Because of waveguide effects photoreceptors cannot be made narrower
than 1–2 μm without compromising resolution. This means that improved

resolution can only be achieved by increasing the focal length of the opti-
cal system.
4. In dim light the ability to detect contrast is limited by the numbers of
photons that receptors can obtain. The smaller the number of photons
caught the worse the statistical quality of the image. Photon numbers are
maximized in high sensitivity eyes by the use of high relative apertures
(aperture diameter/focal length) and wide receptors. However, wider
receptors will compromise resolution.
5. In general either an improvement in resolution or an increase in sensitiv-
ity requires an increase in the size of the eye.
4 Aquatic eyes: the evolution of the lens

Evolutionary origins
Life began in the sea, and we, as land-living animals, have features of our
eyes that reflect that watery ancestry. In particular we have a lens with
a peculiarly inhomogeneous structure, which supplements the ray-bending
power of the cornea. In terrestrial animals it is the curved air–fluid inter-
face of the cornea that performs most of the optical work of bringing light
to a focus, but in aquatic animals this surface has no optical function. It
exists of course, but with fluid on both sides it has no capacity to refract
light. Thus, with rare exceptions, the lens is the only optical structure capa-
ble of producing an image in water.
The vertebrate fossil record has, unfortunately, very little to say about
the origins of the eyes or their optical systems. The lampreys, relatives of
the jawless ostracoderm fishes of 450 million years ago, have eyes that are,
for all practical purposes, the same as those of other modern fishes (Nicol
1989). Other modern relatives of the earlier chordates such as Amphioxus
have pigmented photoreceptors, but nothing resembling a real eye. However,
amongst the molluscs it is possible to make out a series of eyes of modern
forms that at least provides a clue to the early evolution of eyes of the sin-
gle chambered type. Fig. 4.1 shows such a series. In the limpet Patella the
eye is a V-shaped pigmented pit containing receptors. Each receptor has an
acceptance angle of 90° or more, restricted only by the shadowing effect
of the pigment behind it. Pit eyes like this are common throughout the
‘lower phyla’, and enable an animal to locate lighter or darker regions of
the environment. In many gastropods, the abalone Haliotis for example, the
mouth of the pit is drawn in to give the eye a more spherical shape, and


Fig. 4.1 Examples showing two possible directions of eye evolution in the molluscs, starting with
the pigmented pit eye of the limpet, Patella. Top row: pinhole eyes in the abalone Haliotis and the
cephalopod Nautilus. Lower row: lens eyes in the land snail Helix and the shore-living gastropod
Littorina (Patella, Haliotis, and Helix from Hesse (1908), Nautilus from Young (1964), Littorina from
Newell (1965)). The Nautilus eye is about 10 mm across, the others are all less than 1 mm.

a narrower opening, restricting the acceptance angle of each receptor to
perhaps 10°. While this results in an improvement in the eye’s resolution, it
is obvious that to pursue this line any further will produce eyes in which
less and less light reaches the image. Thus this is not a particularly good
evolutionary route to follow. The only animal to have pursued this route to
its logical conclusion is the ancient cephalopod mollusc Nautilus. A much
better solution is to evolve a lens. In the snail Helix this is simply a ball of
jelly which converges the light rays a little, though not enough to form a
sharp image (Fig. 4.1). However, in the periwinkle Littorina, and many other
gastropod molluscs, the lens has evolved into a sophisticated structure with
a graded refractive index, and excellent image-forming capabilities.

Pinhole eyes: giant clams and Nautilus


In contrast to the paired eyes of gastropod molluscs which are borne on
the head, many bivalve molluscs have eyes around the mantle edge
that serve as ‘burglar alarms’, sensing movement at a distance and
enabling the animal to shut its shell before a passing predator has a chance
to attack. These may be optically quite sophisticated, using concave mirror
optics or with a compound eye structure (see Figs. 6.2 and 7.2). However,
the simplest of these eyes, found in giant clams (Tridacna spp.) are pinhole
structures. Up to a thousand of these eyes are located around the man-
tle edge, and each consists of a pit, about 0.5 mm wide and deep, with
a 0.1-mm aperture (Fig. 4.2). Each eye contains about 250 receptors with
sensitivities in three spectral ranges, including ultraviolet (Wilkens 1984).
The receptors lie at the base of the pit, and each views an angle, through
the aperture, of about 16.5º. The receptors respond to dimming, and the
animal will withdraw its siphons and close its mantle if the shadow of a
hand passes over it. However, they will also respond to the appearance of
an object that casts no shadow, provided it occupies an angle in the field of
view that is comparable to or larger than the acceptance angle of a recep-
tor (Land 2002). Thus a 10-cm fish will trigger a response at about 42 cm,
and this presumably gives sufficient warning for the clam to close and thus
protect its vulnerable mantle tissue. One might ask why clams have not
evolved better eyes – a 16.5º threshold is hardly keen eyesight. The answer
may simply be that this is good enough, and that to detect smaller or more
distant objects would result in many more false alarms to creatures or
debris that present no threat.
What distinguishes the Nautilus eye from other lens-less eyes is its size
(Fig. 4.1). Most of the lens-less eyes we have mentioned so far are a frac-
tion of a millimetre in diameter, with a few hundred receptors. In Nautilus,
however, the eyes are nearly a centimetre in diameter, comparable in size
with the lens-containing eyes of Octopus, or indeed of many fish. Thus
these are serious eyes, and this impression is reinforced by the discovery
that the pin-hole pupil can vary its diameter with light intensity, between
0.4 and 2.8 mm. The eye is also equipped with a series of muscles that
rotate it in such a way as to stabilize its vertical axis against the rock-
ing motion of the animal as it swims. We know little about the functions

Fig. 4.2 Pinhole eyes of giant clams. a) Mantle of Tridacna squamosa, showing the eyes (from a
photograph by Nick Hobgood). b) Three eyes from Tridacna maxima showing the apertures (from
Land 2002). c) Acceptance angle (16.5º) of a receptor in an eye of T. maxima.
of vision in Nautilus, but the animal does have an optomotor response
(Fig. 4.3); that is, it can be made to rotate or swim in circles by rotating
a striped drum around it (Muntz and Raj 1984), indicating that its visual
system can detect motion. The finest stripe pattern that will produce this
response subtends between 11° and 22°, which is roughly what one would
predict with a partly open pin-hole. The real function of this behaviour
is to prevent the eye or body from rotating when a stationary background
is present, and it may be that it serves to stabilize the swimming of the
animal as it browses along the reef.
The weakness of the Nautilus eye is that the pin-hole arrangement is a
very unhappy compromise. To improve the resolution to anywhere near the
levels of a lens eye means decreasing the size of the pupil to a diameter
that hardly lets in any light at all. With a 0.4-mm pupil the angle in space
over which a single receptor accepts light is about 2.3°, roughly compara-
ble with the resolution of the eye of a small insect. However, the image is
then dimmer than the image in a fish eye by a factor of about 400. Since
opening up the iris results in a disastrous loss of resolution, the animal is
trapped in a visual world that is by most standards unusably dim or unus-
ably blurred.
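The two numbers in this paragraph can be reproduced with simple geometry. The sketch below assumes a pupil-to-retina distance of about 10 mm, roughly the size of the Nautilus eye; that distance is our assumption, chosen only to illustrate the calculation.

import math

pupil = 0.4          # mm, smallest pupil diameter of Nautilus
retina_depth = 10.0  # mm, assumed distance from pupil to retina

# With a pinhole, a receptor accepts light over roughly pupil/retina_depth radians.
acceptance_deg = math.degrees(pupil / retina_depth)
print(round(acceptance_deg, 1))          # ~2.3 degrees

# Image brightness scales as 1/(F-number)^2; compare with a fish eye at F ~ 1.25.
F_nautilus = retina_depth / pupil        # ~25
F_fish = 1.25
print(round((F_nautilus / F_fish) ** 2)) # ~400 times dimmer than the fish image
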
The way out of this is to evolve a lens which sharpens the image by
focusing rather than by shading. The real enigma of Nautilus is that it has
not managed this, in spite of having had nearly 500 million years to do so.

Fig. 4.3 Nautilus in an optomotor apparatus that elicits circular swimming when the stripes are
rotated. The dish containing the animal is 25 cm
across. From Muntz and Raj (1984).

Other cephalopods (octopus, squid, and cuttlefish) have excellent lenses very
much like those of fish (see Fig. 4.8) and numerous gastropods, worms and
arthropods have also managed this feat. It should not be difficult, because
it can proceed in small steps, all of them representing an improvement (see
Chapter 1).

Under-focused lens eyes


It is not uncommon among invertebrates such as gastropod molluscs or
polychaete worms to have pigment cup eyes where the eye chamber is
filled entirely by a lens-like body. The eye of the snail Helix (Fig. 4.1) is an
excellent example. These eyes are intermediate between lens-less cup eyes,
such as those of flatworms (Fig. 1.4c), and real camera-type eyes, such as
those of fish and cephalopods. The bearers of these intermediate types of
eye are generally not very swift animals, their visual responses are often
unimpressive, and their eyes are small (less than a millimetre). For obvious
reasons, this type of eye has not attracted much scientific interest, although
it certainly deserves attention because it offers an insight into the origin of
animal lenses and the evolution of camera-type eyes.
It is somewhat surprising that the best studied intermediates between
cup eyes and camera-type eyes are found in a class of jellyfish, known as
cubozoans, or box jellyfish. These agile jellyfish have sensory structures,
called rhopalia, at four positions, close to the margin of the bell. Each rho-
palium contains two different lens eyes and two different pairs of lens-less
pit eyes (Figs. 4.4a and 1.1e). In total, each animal carries 24 eyes of four dif-
ferent types. The eye-bearing rhopalium is attached by a flexible stalk, and
contains a heavy crystal, which causes it to passively orient, so that all eyes
keep a constant vertical angle irrespective of the orientation of the jellyfish.
By this mechanism, the smaller one of the two lens eyes on each rhopalium
(ULE, Fig. 4.4a) is always pointing straight up towards the water surface
(Garm et al. 2011), whereas the larger eye (LLE) is pointing obliquely down-
wards and inwards so that it sees the under-water world partly through the
animal’s own transparent body.
The lens eyes are used only for a few different tasks that do not require
very acute vision. In a species inhabiting Caribbean mangrove swamps
(Tripedalia cystophora), the four large lens eyes that point obliquely down-
wards are used to avoid collision with mangrove roots and other large
objects, whereas the smaller upward-pointing eyes help the jellyfish to locate
the edge of the mangrove swamp by detecting the mangrove canopy seen
through the water surface (Garm et al. 2007, 2011). Even though the lens
eyes are very small, they appear geometrically almost perfect, and resemble
fish eyes. It even turns out that their spherical lenses contain a refractive
index gradient (Nilsson et al. 2005). The focal length is about 3 lens radii,
but that is not where the retina is located in these eyes. Instead, the retina
occupies the space from just below the lens to about two lens radii from
the lens centre (Fig. 4.4b). The consequence of this arrangement is that light
from each point in the surrounding world will spread over a large patch of
retina, and vision will be badly blurred (Fig. 4.4c).
Other species of box jellyfish, such as the east Pacific Chiropsella bronzie
(Fig. 1.1e), have even weaker graded-index lenses, with focal lengths of up
to 5 lens radii (O’Connor et al. 2009). With a retina occupying the space
between 1 and 2 lens radii, the visual acuity is even worse than in the

Fig. 4.4 (a) Drawing of one rhopalium of Tripedalia cystophora, seen from the side (see also
Fig. 1.1e for a photograph of the same structure in a related species). LLE, lower lens eye; PE, pit
eye; SE, slit eye; ULE, upper lens eye. (b) Ray path in a geometrically faithful model of the lower
lens eye of T. cystophora. The lens is too weak to form a focus inside the retina, but it converges
the beam to allow for a wider aperture. (c) Computer modelling of a portrait of one of the authors
as it would be seen by the jellyfish eye.
Caribbean box jellyfish. Why do animals have graded-index lenses with
potentially excellent imaging properties, when the retina is placed much
too close to the lens to be in focus? In the box jellyfish case the answer is
straightforward: the blurred image is desirable because it passes the low
spatial frequencies needed for the animals’ visually guided behaviours, and
removes higher spatial frequencies that are not useful for controlling these
behaviours.
This, however, does not explain the presence of a lens. The same resolu-
tion as in the jellyfish lens-eye could easily be obtained in a lens-less pig-
ment cup eye by just having a smaller aperture. If the lens was removed
from the jellyfish eye of Fig. 4.4b, the blur spot on the retina would more
than double in diameter and the resolution would drop correspondingly.
By reducing the lens-less aperture to slightly less than half the diameter,
the original resolution would be restored, but the aperture area, and thus
the amount of light entering the eye would drop by a factor of about five.
We see from this that the primary benefit of introducing a weak lens is
improved sensitivity rather than improved resolution, and this will result in
better contrast sensitivity, faster vision, or vision at lower intensities. Thus
an eye with a weak lens has a considerable advantage over a lens-less cup
eye, where image brightness must always be traded for resolution.
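The factor of about five follows from aperture areas alone. In the sketch below, the ‘slightly less than half’ of the text is taken as 0.45 of the original aperture diameter; that particular figure is an assumption made for illustration.

full_aperture = 1.0      # lens-less aperture matched to the weak-lens eye (arbitrary units)
reduced_aperture = 0.45  # 'slightly less than half' the diameter, restoring the resolution

# Light entering the eye scales with aperture area, i.e. with diameter squared.
light_loss = (full_aperture / reduced_aperture) ** 2
print(round(light_loss, 1))  # ~4.9, i.e. about a factor of five
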
The reason why box jellyfish, polychaete worms, and many gastropod
molluscs seem to have halted their eye evolution at the stage of an under-
focused lens eye is probably that these animals only need vision for low-
resolution tasks (see Chapter 1), and for such purposes, an under-focused
system is an excellent solution. The images produced by these eyes are not
particularly well resolved (Fig. 4.4b), but by rather small modifications of the
retinal position or the refractive index gradients in the lens, the resolution
can be precisely tuned to support different visual tasks, and the large aper-
tures will make vision possible over an extended range of ambient intensi-
ties. The reason that under-focused lens eyes are not more common than
they are is probably that the path to focused lens eyes and high-resolution
vision is quite straightforward. By making the lens more powerful, it is pos-
sible to gain resolution with a maintained brightness of the retinal image.
Before we continue to focused lens eyes (camera-type eyes), it is worth
returning to the very first stages of lens evolution. Before there were lenses,
there must have been eyes with cells or extracellular material inside the
retinal cup, from which a lens could start to develop. In the pinhole eye
of Nautilus there is no such material (Fig. 4.1), which possibly explains why
they never evolved lenses. There must have been some reason favouring
material inside the retinal cup before this material became dense enough
to focus light. Indeed, the pigment cup-eyes of some species of polychaetes,
gastropods, and box jellyfish are filled with very soft tissue that is unlikely
to have any focusing properties at all. There are at least two reasons for
filling the retinal cup. One is to cover the photoreceptor cells with a filter
protecting against photo-damage by short wavelength light. Another is to
provide a mechanical support around which the retina can grow to form
a stable cup. It is thus probable that ocular lenses have a complex evolu-
tionary history, starting with non-optical functions, followed by a role for
increasing sensitivity, and finally allowing for high spatial resolution.

Forming a sharp image


Producing a lens that will perform well in water is not quite as easy as it
may seem at first. It turns out that a lens made simply of a glass-like mate-
rial (dry crystalline protein for example) will not produce an image of good
enough quality, nor have a focal length short enough to be really useful.
The focal length needs to be kept short in relation to the size of the lens
to keep the eye as a whole reasonably small. This means that the radii of
curvature of the surfaces have to be small, which in turn makes a spherical
shape for the lens more or less obligatory. However, spherical lenses have
serious defects. The worst is known as spherical aberration (Fig. 3.6), in
which rays at a distance from the axis of the lens are bent through too great
an angle to come to the same focus as the on-axis rays, and the result is a
blur circle on the retina rather than a sharp image (Fig. 4.5a). This would be
wide, with a spherical lens, and the image would be very poor. The other
problem is that the lens would have a rather long focal length. A single sur-
face of radius r, separating two media of refractive indices n1 and n2, forms
an image at a distance of rn2/(n2 − n1). A spherical lens, where light encoun-
ters two surfaces, both of radius r, has a focal length ( f ) equal to half this:

f = 0.5rn2 /(n2 − n1 ) (4.1)

The refractive index (n2) of a dry protein such as the crystallin found in
lenses is about 1.53, and with sea-water (n1 = 1.34) as the outside medium,
the focal length of a lens made of such material would be 4 lens radii.
In fact, this is much longer than the focal lengths of real lenses in fish
and cephalopods. It has been known since the studies of Matthiessen in
the 1880s that the lenses of fish as well as cephalopods and marine mam-
mals nearly all have focal lengths of about 2.5 lens radii, a number that has
become known as Matthiessen’s ratio.
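Equation (4.1) can be checked directly with the values just quoted; the short sketch below shows why a homogeneous protein lens falls well short of Matthiessen’s ratio.

def homogeneous_sphere_focal_length(r, n_lens, n_medium):
    # Eqn (4.1): f = 0.5 * r * n2 / (n2 - n1) for a homogeneous spherical lens.
    return 0.5 * r * n_lens / (n_lens - n_medium)

f = homogeneous_sphere_focal_length(1.0, 1.53, 1.34)  # dry crystallin in sea water, r = 1
print(round(f, 1))  # ~4.0 lens radii, compared with Matthiessen's ratio of ~2.5
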
Clearly, a spherical lens made of homogeneous protein does not fit with
what we know of fish lenses, namely that they are of excellent optical qual-
ity and short focal length. This apparent contradiction interested a number
of nineteenth-century scientists including James Clerk Maxwell, who came
[Fig. 4.5c axes: refractive index (n) versus radial position in the lens (centre to periphery), falling from about 1.52 in the lens core to 1.38 in the lens cortex; sea water is 1.34.]

Fig. 4.5 (a) Paths of rays through a homogeneous lens of refractive index 1.66, showing how
rays far from the axis are refracted too much (spherical aberration). (b) A lens with the same focal
length as (a), but with a gradient of refractive index, and a maximum index of 1.52 in the centre.
Note that rays are bent continuously and come to a common focus. (c) Form of the gradient in a
fish lens, capable of producing an image free from spherical aberration. (a) and (b) from Pumphrey
(1961); (c) based on Jagger (1992).

up with the idea that such lenses must have a gradient of refractive index,
highest in the centre and lowest near the periphery. Matthiessen had shown
that there was such a gradient in fish lenses, and believed that its form was
that of an inverted parabola, with the refractive index falling as the square
of the distance from the lens centre (Fig. 4.5c). Matthiessen, it turns out,
was not far wrong in his guess, although more recent theoretical studies
have suggested that there are other functions that give a somewhat better
performance in terms of the correction for spherical aberration (for a review
see Jagger 1992).
What does the refractive index gradient achieve? In the first place it
changes the pattern of refraction from a discrete bending of the rays at
each interface to one in which rays are bent continuously within the body
of the lens. The effect on spherical aberration is that the outermost rays,
which travel shorter distances within the lens, are bent relatively less than
they are at the interfaces of the homogeneous lens (Fig. 4.5b). Given the
correct gradient, all rays can be brought to a focus at the same point, for
light of a single wavelength. The shorter focal length is achieved because
continuous refraction results in greater total ray-bending than does two-surface
refraction. In fact, an f/r ratio of 2.5 can be achieved in a gradient
index lens with a central refractive index of 1.52, whereas the same ratio
would require a homogeneous lens to have an index of 1.66. The real value
of the short focal length of fish lenses lies in the effect this has on light-
gathering power. In photographic terms the F-number of the eye (focal
length/diameter) is 1.25, which gives an image 2.6-times brighter than the
image behind a homogeneous protein lens (n = 1.52) with an F-number of
about 2.
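The brightness comparison is just the ratio of the squared F-numbers. A sketch using eqn (4.1) for the homogeneous lens (index 1.52) gives an F-number of about 2, and hence roughly the quoted advantage for the Matthiessen lens; the exact factor depends on how the F-number is rounded.

F_gradient = 2.5 / 2.0                           # Matthiessen lens: f = 2.5 r, aperture 2 r
f_homogeneous = 0.5 * 1.52 / (1.52 - 1.34)       # eqn (4.1), in lens radii: ~4.2 r
F_homogeneous = f_homogeneous / 2.0              # ~2.1, the 'about 2' of the text

# Image brightness scales as 1/F^2.
print(round((F_homogeneous / F_gradient) ** 2, 1))  # ~2.9 (about 2.6 with F rounded to 2)
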
The other important defect of biological lenses in general is chromatic
aberration, in which light of shorter wavelengths is brought to a focus
closer to the lens than longer wavelength light (Fig. 3.6). This means that
a single retina at a fixed distance from the lens cannot be in focus for all
wavelengths simultaneously. For animals that have only one visual pigment
(deep-sea fish and most cephalopod molluscs, for example) this is barely
a problem. However, shallow water fish have excellent colour vision, and
typically they possess four cone types whose wavelengths of maximum
sensitivity cover a 250-nm range from ultraviolet to red. One way round
the problem would be to place the different cone types at different dis-
tances from the lens, and there is some evidence for this. However, the
distances involved are quite large (up to 10 per cent of the average focal
length, or 1 mm in a 10-mm focal length eye) and cone separations as great
as this are not physically possible in a thin retinal sheet. It is now clear
that some fish use another method. This is to produce lenses with multiple
focal lengths, brought about by variations in the basic Matthiessen gradi-
ent (Kröger et al. 1999). The way this works is shown in Fig. 4.6. For light
of a single wavelength, the inner zones of the lens bring light to a closer
focus than the outer zones (this effectively means that the lens is over-
corrected for spherical aberration). However, for white light with a range
of wavelengths the images from inner and outer zones have a spread of
focal lengths, because of chromatic aberration. This means that the posi-
tion of the image for short wavelengths formed by the outer zone can be
made to coincide with the image for long wavelengths formed by the inner
zone. This in turn allows cones with different wavelengths of maximum
sensitivity to receive in-focus images in the same plane. This is not a per-
fect solution, because these in-focus images are contaminated by light from
the other out-of-focus images, and so will have reduced contrast compared
with a perfectly corrected monochromatic image. However, this is better
than not having a sharp image, and it seems that a wide variety of teleost
fish and many other vertebrates have opted for this solution. Figure 4.6
is somewhat over-simplified; in the species studied by Kröger et al. (1999)
there were three distinct images corresponding to the sensitivity maxima of

Fig. 4.6 Method used by some fish to overcome chromatic aberration. By slightly varying the
refractive index gradient (Fig. 4.5c) the lens produces several sharp images at different distances.
Although each of these images suffers from chromatic aberration, their locations can be
adjusted so that the images for different wavelengths coincide as shown. This means that cones
with different spectral sensitivities can be arranged in a single layer, and each receive a sharp
image. F1 and F2, foci from different lens regions. B and R, foci for blue and red light. Based on
Kröger et al. (1999).

the three cone types, rather than the two shown in the figure. A problem
with a multifocal lens of this kind is that when a circular pupil is used to
restrict the light entering the eye this will interfere with the chromatic cor-
rection by progressively occluding the outer zones. One way round this is
to use a slit pupil, which still allows all zones to be sampled. This appears
to be one reason for the common occurrence of slit pupils among terrestrial
vertebrates (Malmström and Kröger 2006).
The ‘Matthiessen lens’ is a winning design, not only in terms of its opti-
cal performance, but also its evolutionary popularity. Because an f/r ratio
of around 2.5 immediately tells one that a spherical lens has a gradient
structure, it is easy to survey the animal kingdom for occasions on which
this type of eye has evolved. It evolved in the fish, presumably once, in the
cephalopod molluscs (Figs. 4.7b and 4.8), and possibly more than once in
the gastropod molluscs as it is found in both the pulmonates and the proso-
branchs. The pulmonate fresh-water snail Lymnea has a small (100–200-μm
diameter) eye with an excellent lens and a complex retina that includes
a distinct fovea-like pit (Bobkova et al. 2004). Among prosobranchs the
most remarkable eyes are found in the carnivorous heteropod sea-snails
(Pterotrachea, Oxygyrus) with large spherical lenses and long, narrow scan-
ning retinae (Fig. 4.7a).
Matthiessen lenses also evolved independently in two unlikely places:
among the annelid worms in the alciopids, a family of polychaetes that
have become active carnivores in the marine plankton; and just once in the
Crustacea, in the copepod Labidocera (Fig. 4.7c) where the males have a pair
of lenses that share a line-like retina of 10 receptors (Land 1988).

Fig. 4.7 Photographs of spherical lens eyes in molluscs and a crustacean. (a) Eye of Pterotrachea,
a carnivorous sea-snail (Heteropoda). The eye is 3 mm long, with a long narrow retina only six cell
rows wide. (b) Unusual eyes of a mid-water squid (Histioteuthis). The two eyes are different sizes.
The larger ‘telescopic’ eye has a yellow lens and is directed towards the sea surface, the smaller
eye has a clear lens, a wider field of view, and is directed downwards. Partly dissected. The upper
eye has a diameter of 9 mm. (c) The pontellid copepod Labidocera is probably unique among
crustaceans in possessing a pair of eyes with Matthiessen’s ratio lenses. The top of the small eye-
cup containing the retina is visible beneath the lens, and to the right of the lens is a thin striated
muscle (arrowed ) which moves the eye-cup (see Chapter 9). At the bottom of the picture is the
third (ventral) eye, which has a quite different construction. Lens diameter 145 μm.

Eyes of fish and cephalopods


The similarities between these two groups of swimming animals provide
some of the best known examples of convergent evolution. Of these, the
structure of the eye is perhaps the most astonishing (Fig. 4.8). Packard
(1972) put it like this:
. . . the modern cephalopod eye, with its single chamber, lens, ciliary
body, iris, hemispherical retina, cartilaginous sclera and external
argentea is also the most clamorously vertebrate-like structure of
the cephalopod organization.
Because cephalopods and vertebrates have very separate evolutionary ori-
gins, we can be certain that the similarities exist because both groups have
hit upon, and perfected, the same engineering solution to the problem of
seeing well in the marine environment.
There are crucial differences, however. The retina in cephalopods has
a structure in which the photoreceptors have their photopigment-bearing
regions directed forwards, towards the light, whereas in vertebrates they
are (for some quirk of development) situated at the back of the eye with
the photosensitive region pointing away from the light (Fig. 4.8). It is often

Fig. 4.8 Convergence between the eyes of cephalopod molluscs and fish. The overall structure of
the eyes is very similar (top; Octopus from Young, 1964, cod (Gadus) from an engraving by D.W.
Soemmerring, 1818). Both eyes are large, 10 mm or more in diameter, and have spherical lenses
whose centres are about 2.5 lens radii from the retina. The lower figures show that the retinae are
completely different. In cephalopods the receptors are ‘rhabdomeric’, i.e. they are composed of
photopigment-containing microvilli. Very little neural computation is done in the retina; this occurs
in the optic lobe behind the eye. In the vertebrate retina (right) the rods and cones have receptive
regions (the outer segments) composed of discs which carry the photopigment. In front of the
rods and cones (with respect to the light path) are two layers of neurons in series (the bipolars and
ganglion cells) with horizontal cells and amacrine cells forming lateral connections between each
layer. Note that the receptors in Octopus point towards the light, but the vertebrate receptors
point away. Octopus from Young (1964), vertebrate retina from various sources.
remarked that the vertebrate retina is the wrong way round, and this is
true, but because the overlying cells and nerve fibres in the vertebrate
retina are reasonably transparent, the optical handicap is small. A great
deal of processing occurs in the vertebrate retina, with its three sequential
layers of nerve cells (the receptors, bipolars, and ganglion cells) separated
by the two layers of laterally extending neurons—the horizontal and ama-
crine cells (a good account of the structure and function of the fish retina
can be found in Nicol 1989). In the cephalopod retina there is no such
layered arrangement, and most of the processing occurs outside the eye, in
the optic lobe of the brain. The receptors themselves are different, too, with
the photopigment carried on microvilli in cephalopods (this is the typical
arrangement for most invertebrates), but contained in disc-like structures
in the rods and cones of fish. In many cephalopods the microvilli of the
receptors are arranged in orthogonal directions, and this is known to pro-
vide sensitivity to the plane of polarized light (Talbot and Marshall 2010a,
2010b; see Chapter 2). Cephalopods generally have only one visual pigment
and are colour-blind (Messenger 1991). The exception is the Japanese fire-
fly squid, Watasenia scintillans which has three pigments based on differ-
ent chromophore groups rather than different opsins (see Chapter 2). Most
fish, on the other hand, have a range of visual pigments in their cones
extending from the ultraviolet through to the red region of the spectrum,
and excellent colour vision.
The lenses in the two groups are very similar in their optical proper-
ties, but they are not constructed the same way. The fish lens is a single
structure surrounded by living cells, but cephalopod lenses develop in two
parts, with the front and rear regions separated by a sheet of live cells.
Remarkably, both parts have similar refractive index gradients, as required
in a lens well corrected for spherical aberration.
In both fish and cephalopods there are muscles associated with the eye.
External eye muscles are concerned with moving the eye in its orbit, and
internal muscles focus the lens and adjust the iris. It seems astonishing
that similar arrangements should be found in the two groups, but again
the reason seems to be that the design simply requires them. Large eyes
have to be stabilized, or motion blur will wreck the excellent resolution
obtained by having a fine-grain retina and a long focal length. Similarly,
depth of focus becomes smaller as eyes get bigger, making focusing mecha-
nisms essential. In fish, as in other vertebrates, six muscles move the eyes—
one pair for each axis of rotation. In the cephalopods the pattern of the
musculature is less obvious; 13 muscles have been described in the cuttlefish
Sepia, although this may boil down to six functional groups. In Octopus this
six-group structure is more obvious. Like the semi-circular canal system
of vertebrates, the statocyst in Octopus provides information about the ani-
mal’s own rotation, and this, together with information about image motion
from the eye itself, is used to counter-rotate the eye as the animal turns.
The effect is to keep the image more or less still on the retina, the body
effectively rotating around the stationary eye. However, the eye has to move
from time to time, and in both cephalopods and vertebrates this is achieved
by a fast flick-like movement known as a saccade, during which the eye is
effectively blind. In both cases the strategy seems to be to minimize the
time that the eye is moving relative to the surroundings, with consequent
blurring of the image (see Chapter 9).
In fish there is a variety of focusing mechanisms involving movement of
the lens (see Chapter 5, Fig. 5.9). Lampreys and bony fishes have a ‘nega-
tive’ accommodating mechanism in which the resting eye is focused for
near objects, and muscular action shifts the focus to more distant objects
by moving the lens back towards the retina. Uniquely among teleosts the
sandlance (Limnichthyes) has a corneal lenticle with real optical power, and
it accommodates in part by changing the corneal curvature. In cartilagi-
nous fishes and amphibians muscle action has the opposite effect to that
in teleosts, with ‘positive’ accommodation moving the lens away from the
retina and so bringing nearer objects into focus (Walls 1942). Cephalopods
seem to have both types of mechanism, different sets of muscles moving
the lens in either direction, but how this arrangement works in practice is
still not clear (Messenger 1981).

Matching eye to environment


Most eyes with spherical lenses look remarkably similar, mainly because
Matthiessen’s ratio fixes the proportions of the lens and eye-cup. However,
they certainly differ in size if not in shape. The smallest and largest both
belong to molluscs. The eye of the pond snail Lymnea, which has a per-
fect Matthiessen lens, is only about 0.15 mm across, whereas the larg-
est giant squid for which there is reliable information (a colossal squid
Mesonychoteuthis hamiltoni caught in 2007) had an eye 27 cm across—the
size of a dinner plate. There are also less well documented reports from
the nineteenth century of giant squid eyes as large as 40 cm. The largest
fish eyes are found in swordfish and tuna, where they attain a diameter of
about 10 cm. Larger eyes still, 20–30 cm in diameter, were present in deep-
diving ichthyosaurs which died out about 90 million years ago (Motani
et al. 1999). Large size can buy either high acuity or high light-gathering
power, and it seems here that it must be the latter, because no marine ani-
mal exploits resolution anywhere near the diffraction limit of the spherical
lens. Eye size probably determines the extent to which fish can hunt into
the night, or to what depths they can usefully operate. In the case of the
giant squid one could argue that catching prey at the end of tentacles sev-
eral metres long, deep in the ocean, requires both reasonable resolution and
high sensitivity.
Adaptation to the nature of the fish’s environment is seen most clearly
in the retina, and specifically in the way the ganglion cells are distributed.
The output from the eye consists of the fibres of the optic nerve, which
are the axons of the ganglion cells. The requirements of flexibility and
economy of space limit the number of optic nerve fibres to about a million
(compared with something like a hundred times as many photoreceptors),
making this a real bottleneck in the visual pathway. This is why there
is need for economy in the ganglion cell distribution, with the greatest
numbers associated with parts of the image where there is the most infor-
mation. Figure 4.9 shows the ganglion cell distributions in the dissected
out retinae of two fish from different habitats. One of these, Cephalopholis
miniatus, lurks in crevices in the coral reef, and the other Lethrinus chrys-
ostomus lives over an open sandy bottom. Cephalopholis has a small region
of high ganglion-cell density in the temporal, forward pointing, region of
the retina. To provide a forward field of view the lens has an ‘aphakic’
space in front of it (Plate 1). In contrast, Lethrinus has a long high density
‘visual streak’ which reflects the fact that most of the interest in this fish’s
world lies close to the horizontal plane. A similar streak is found in the
retinae of grassland mammals such as rabbits, and sea-birds where the
surface of the ocean only occupies a narrow horizontal strip in the vis-
ual field. One particularly interesting specialization of this kind occurs in
the surface-feeding fish Aplocheilus lineatus which has two parallel visual
streaks separated by about 40°. Apparently this gives the fish two views
of its prey, which might be a drowning insect. One streak views the water
surface from below, while the other looks out of the water just above the
edge of ‘Snell’s window’, the horizon for refracted rays, and thus sees the
upper part of the prey (Fig. 4.10). Other teleost fish with distinct foveal
regions, with a high density of cones and ganglion cells, include blennies,
sea-horses (Hippocampus), sandlances (Limnichthyes), and archer fish (Toxotes).
The sea-horses, pipefish, and sandlances have independently moveable
eyes, rather like chameleons, with the central fovea directed laterally and
used in feeding to pick out small prey items either swimming or on sea-
weed. Archer fish famously spit at insects in the air above them, taking
into account the refractive bending of light at the air–water interface.
They have a foveal region in the ventral part of the retina, aligned with
their preferred spitting direction, where the numbers of both cones and
ganglion cells are increased (Temple et al. 2010).

Fig. 4.9 Patterns of ganglion cell density, reflecting the nature of the visual habitat, in the
retinae of two fish. Cephalopholis, which lives in crevices in coral reefs, has a small forward-
pointing ‘area’ of high density (see also Plate 1). Lethrinus, which swims over sand, has a
pronounced horizontal ‘visual streak’ corresponding to the lateral view of the bottom. The
ganglion cells are the outputs from the retina (see Fig. 4.8) and their distribution gives a good
indication of an animal’s visual priorities. Numbers are densities in thousands per square
millimetre. N, T, D, V are nasal, temporal, dorsal, and ventral regions of the retina. Note that the
lens inverts these relations, so that the temporal retina at the rear of the eye views the forward
direction. Based on Collin and Pettigrew (1988).

The deep sea, below a few hundred metres, provides a rather special
environment. Little light penetrates from the ocean surface, and what does
is predominantly blue, and limited to a relatively narrow cone around the
vertical (Lythgoe 1979). Nevertheless, many fishes still use this light to hunt
by, presumably sighting potential prey by the dark silhouette cast against
the dim residual skylight. (The evidence for this comes chiefly from the care
that mid-water fishes and crustaceans take to disguise their silhouette with
downward-pointing photophores, which emit light at an intensity adjusted
to the downwelling light. See Herring (2002)). Where photons are scarce, the
detectability of prey will depend on the amount of light reaching the preda-

Fig. 4.10 Left: head of the surface-feeding fish Aplocheilus with lens of the eye removed,
showing the two visual streaks on the retina. From Munk (1970). Right: illustrating how an object
in the surface film can be viewed both above and below the surface.
tor’s retina, and this puts a premium on the size of the lens. Big lenses imply
big eyes, but many predatory mid-water fish have managed to economize
on space by using so-called tubular eyes (Fig. 4.11a and b). Optically these
are cut-down versions of normal eyes with a reduced, upward-pointing field
of view of about 60°, rather than the 180° typical of ordinary fish eyes. By
dispensing with the part of the visual field where there is essentially no
light, except perhaps the flashes of luminescent animals, the fish manages to
incorporate a massive lens into an eye of supportable size. As ever, there are
some deep-water cephalopods that have evolved the same trick, for exam-


Fig. 4.11 (a) The deep-sea fish Scopelarchus showing the upward-pointing tubular eyes, and the
wide binocular overlap (B) between the monocular fields (M) of the main retinae. L is the lens pad.
(b) Section of the Scopelarchus eye, with the outline (dashes) of a conventional eye which has a
lens of the same focal length. The main retina images the residual downwelling daylight, whilst the
lens pad, composed of light-guiding plates, throws some sort of an image of the dark sector of the
field onto the accessory retina. It is a reasonable assumption that the main retina is used to detect
silhouettes against the surface, and the accessory retina has the less demanding task of detecting
luminescing animals against a dark background. Based on Marshall (1979). (c) Double eye of
Bathylychnops exilis. The downward-pointing secondary eye has the same retinal layers as the
primary eye, but the lens is formed out of the sclera. Ret, retina; Cho, choroid; Scl, sclera. (d) Head
of the benthic fish Ipnops sp., in which the eyes are reduced to flattened plates with no optical
system. (b) and (c) modified from Lockett (1977) after figures by Ole Munk.
ple, the tubular-eyed octopus Amphitretus pelagicus (Marshall 1979), and the
remarkable squid Histioteuthis (Fig. 4.7b) which has a large upward-point-
ing eye, and a smaller downward-pointing one. Deep-sea fish eyes show a
range of other curious modifications including multiple banks of receptors,
light-emitting photophores in or near the eye, and ‘accessory’ retinas asso-
ciated with a variety of unusual optical devices. In Scopelarchus (Fig. 4.11b)
this structure is called a lens-pad, and behaves as an array of light-guides,
but in other species the accessory retina receives an image from a mirror
(Dolichopteryx; see Chapter 6), or even, in Bathylychnops, from a second lens
(Fig. 4.11c) (Lockett 1977; Collin et al. 1998). The probable function of these
structures is to provide coverage of the visual field below the animal. This
will be dark, with occasional flashes from luminescing animals, and these
should be easy to see even with less than perfect optics (Land 2000). In
terms of optical reduction the ultimate state is reached by the strange bot-
tom-dwelling fish Ipnops murrayi, whose eyes are lens-less plates of retina,
covering the flat upper surface of the front of the head (Fig. 4.11d). Partial or
complete loss of the optical system without loss of photosensitive tissue is
not uncommon in benthic animals, including not only fish but crustaceans
such as hydrothermal vent shrimp (Chamberlain 2000). Because each pho-
toreceptor has a 180º acceptance angle these eyes are particularly sensitive,
but of what value is sensitivity in the complete absence of resolution?
One particularly interesting development is the discovery that at least
one deep-water fish (Aristostomias) has a red-absorbing visual pigment even


Fig. 4.12 (a) Photograph of the ventral eye of a male Pontella, showing the triplet lens and
parabolic front surface of the first lens component. For a side view of the animal see Land (1984).
(b) Optical construction showing how the parabolic surface enables all parallel rays to come to a
point image. Replacing the parabolic surface with a spherical one (c) results in an image that is no
longer sharp.
though the only light penetrating to those depths is blue, and that it also
has a red-emitting photophore. This fish, it seems, has its own private
wavelength either for communicating with conspecifics, or lighting up the
surroundings as an aid to predation (Partridge and Douglas 1995).

Eyes with non-spherical lenses


There are remarkably few aquatic eyes of the single-chambered type that
do not contain single spherical lenses. There are one or two, however,
where the required ray-bending is achieved by refraction at a number of
surfaces, more in the manner of a multicomponent camera lens. A particu-
larly impressive system is found in the copepod Pontella, where a total of
six surfaces in three lenses are used to produce the image (Land 1984). This
eye is the ventral component of the typical tripartite ‘nauplius’ eye and,
as in other copepods, contains a retina with very few receptors, in this
instance only six. This apparent simplicity is the more remarkable because
of the amazing development of the eye’s optics (Fig. 4.12). In the male there
are three lenses, one attached to the eye-cup itself, and another two in the
animal’s rostrum. In the female, curiously, the most anterior component is
missing, making the lens a doublet rather than a triplet. Seen from below
it is clear that while most of the surfaces are approximately spherical, this
is not true of the first surface, which is distinctly parabolic. Ray tracing
through the lenses, assuming them to have a uniform refractive index of
1.52, gives the result shown in Fig. 4.12b. With a spherical front surface the
system as a whole gives a poor image, with obvious spherical aberration
(Fig. 4.12c), but with the parabolic surface this disappears, giving a well-
corrected point image. From an optical standpoint, it seems that Pontella has
hit upon the alternative way of avoiding the perils of spherical refracting
surfaces: it has made an aspheric lens, rather than a graded index one.
The eyes of another group of copepods—Sapphirina, Copilia, and their
relatives—have intrigued biologists for well over a century, and it is not
hard to see why (Fig. 4.13). Each eye has a pair of lenses. The larger anterior
lens is part of the carapace, and it throws an image onto a second smaller
lens attached to the front of a tiny retina containing 5–7 receptors. The
design is thus somewhat like a pair of telescopes, each with objective and
eyepiece lenses. The other reason why these eyes have merited so much
attention concerns the way they move. The rear part of each eye, including
the second lens, moves sideways in the body, through an angle of about
14°, as measured from the front lens. In Copilia the eyes move together, but
in opposite directions, fast medially and slower laterally. The rate varies
between 0.5 and 10 Hz (Gregory 1991). Although the scanning movements

Fig. 4.13 Left: photograph of the eyes of the copepod Sapphirina. The front lens of each eye
throws a large image onto the plane of the second lens which collects light into the small cluster
of receptors behind it. Right diagram of the left eye of the related copepod Copilia. The striated
muscle scans the whole of the rear assembly of receptors and lens back and forth along a track
indicated by the line AB. Based on Exner (1891).

of the eyes increase the effective field of view of the retina, which on its
own is only about 3° across, they still only enable the animal to scan a tiny
line in the surrounding space. One suggestion has been that Copilia’s prey
may consist of vertically migrating planktonic animals, which are detected
as they swim through the horizontal scan line. This would then, in princi-
ple, provide the point-like retinal detectors with a two-dimensional field of
view as in a conventional eye: one dimension resulting from the scanning,
and the other from the movements of the prey itself. Unfortunately, there
are no direct observations of the behaviour or eye movements of Copilia in
its natural marine environment.
It is worth mentioning in this context that one fish, the sandlance Limnichthyes
fasciatus, also splits its refraction between four surfaces by making use of a
thickened corneal ‘lenticle’ with a relatively high refractive index (1.38). This
tiny but remarkable fish, with its independently-moveable eyes, catches cope-
pods and other plankton with a rapid, visually-guided lunge. The lenticle can
change shape during accommodation and forms part of a very fast focusing
mechanism. In conjunction with a rather weak lens, the lenticle brings the
nodal point of the optical system towards the front of the eye, thus increasing
the focal length and magnifying the image (Pettigrew et al. 1999).

Summary
1. Lens eyes evolved from pigmented pit eyes independently in at least four
phyla. In a few cases, notably the cephalopod Nautilus, lens-less pinhole
eyes have been retained. Eyes also exist in which there are weak, under-
focused lenses, notably in cubozoan jellyfish. It is argued that their
advantage over lens-less pit eyes lies in increased sensitivity rather than
resolution.
2. In most eyes in which lenses have evolved these lenses have an approxi-
mately parabolic gradient of refractive index, falling from a maximum in
the centre. This produces a short focal length and a minimum of spheri-
cal aberration. Manipulation of this gradient has allowed for partial cor-
rection of chromatic aberration in fishes.
3. The large eyes of fishes and cephalopods provide a remarkable instance
of convergent evolution. They have separately evolved a variable iris, eye
muscles to stabilize the eyes, and focusing mechanisms, but the structure
of the retina is totally different in the two groups.
4. The retinae of fish from different environments have specializations in
the ganglion cell layer of the retina. Eyes of deep-sea fish often have a
tubular shape which permits a large lens in a relatively small eye.
5. In the copepod crustaceans there are a number of examples of com-
pound lenses that use multiple elements and aspheric surfaces, instead
of a single inhomogeneous sphere.
5 Lens eyes
on land

A new optical surface


When they emerged from water, the early land vertebrates would have found
that their eyes had a new optical arrangement. The cornea, which in water was
simply a tough transparent membrane protecting the front surface of the eye-
ball, became an image-forming structure in its own right, rivalling the lens in
its ability to bring rays of light to a focus. In water the cornea has little or no
optical effect, because it has a fluid of the same refractive index on both sides.
On land, however, the front surface is in air, so there is now a large refractive
index difference, across which rays are bent by refraction. It turns out that the
ray-bending power of a fish lens and a cornea in air are quite similar. Optical
theory states that if the radius of curvature of a surface is r, and the refractive
indices on the two sides are n1 and n2, then the focal length f of the surface is
given by the formula f = n1r/(n2 – n1). This means that there is a focused image
of distant objects at a distance f from the centre of curvature of the surface (see
Fig. 5.2 later in this chapter). For a cornea in air the outside refractive index n1
is 1, and inside the eye n2 is about 1.34, so that f becomes r/0.34, or about 3r. In
the last chapter we saw that a fish lens of radius r has a focal length of about
2.5r, so the focal lengths of corneas and lenses with the same radius are quite
comparable.
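A minimal numerical check of this comparison, using the surface formula just given and Matthiessen’s 2.5r for the fish lens:

def surface_focal_length(r, n1, n2):
    # First focal length of a single refracting surface: f = n1 * r / (n2 - n1).
    return n1 * r / (n2 - n1)

r = 1.0                                               # radius of curvature, arbitrary units
f_cornea_in_air = surface_focal_length(r, 1.0, 1.34)  # ~2.9 r, i.e. about 3 r
f_fish_lens = 2.5 * r                                 # Matthiessen's ratio
print(round(f_cornea_in_air, 1), f_fish_lens)         # comparable focal lengths
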
An eye with both a cornea and a fish-type lens has too much focusing
power, and if the first proto-amphibian to come on land had done nothing
about this it would have been very myopic (Fig. 5.1). Far away objects would
be very blurred, but objects at very much shorter distances, which are
focused further from the optics, would be sharp. The blurring would be
comparable to what happens to our vision when we go swimming without


Fig. 5.1 Effect of the medium on image formation in terrestrial and aquatic eyes. A
terrestrial eye in water is under-focused, and an aquatic eye in air over-focused.

goggles; in this case, however, we lose the power of the cornea (which now
has fluid on both sides) and become hyperopic, which means that we do
not have clear vision at any distance.
To look in detail at the ways that animals have adjusted their eyes to life
in air, and sometimes to life in both air and water, we will have to deal
with combinations of surfaces and lenses. For this reason the next part of
this chapter is a primer on the optics of spherical surfaces, which will be
useful in trying to understand not only how our single-chambered eyes
work but also later when we deal with compound eyes (Chapters 7 and 8).
This section ends with Box 5.1 in which we work out the focal length and
image position in the human eye. This is quite tough going, but for anyone
who needs to get to grips with optical systems (biological or otherwise)
that make use of multiple surfaces it provides an appropriate tool kit. In
the next three sections of the chapter we return to biology, and examine
first the range of vertebrate eyes that use corneal optics, exploring such
topics as focusing mechanisms and ecological adaptations in the process.
There is then a short section on amphibious eyes that have to be made to
function in both air and water. Finally we explore the eyes of those inver-
tebrates, principally the spiders, which also employ a cornea to form their
images.

Basic optics of cornea and lens


To work out how an eye will perform we usually want to know where the
image is and how large it is, for an object whose size and distance are known.
For a single curved surface there are well-known formulae for making these
calculations, and these are given here. Where more surfaces are involved, the
calculations can become very complicated. However, it is usually possible to
‘reduce’ a complex image-forming structure to a single equivalent surface
with the same optical power, and then the simple formulae can be applied.
In Box 5.1 we show how this can be done for the human eye, using methods
that can be applied to any eye with a combination of curved surfaces and
lenses.

Fig. 5.2 Image formation by a curved cornea. An image of a distant object is formed at F,
situated a distance f from the centre of curvature C, and f ʹ from the surface itself. These distances
are given by eqns (5.1) and (5.2); r is the radius of curvature of the surface, which separates media
with refractive indices n1 and n2.

Curved surfaces separating two media of different refractive index form
images, provided the higher refractive index is on the concave side (Fig. 5.2).
The position of the image of an object at infinity can be calculated either
from the centre of curvature of the surface, or the surface itself. If we con-
sider an eye whose main optical surface is a cornea separating two media
of refractive indices n1 and n2, these two distances, f and f ‘ are given by:

f = n1r/(n2 − n1) (5.1)

f' = n2 r/(n2 − n1 ) (5.2)

where the radius of curvature of the surface is r. Notice that f ’/f = n2/n1. When
the first medium is air (n = 1), which is the usual case, these equations
become:

f = r/(n2 − 1) (5.1a)

f' = n2 r/(n2 − 1) (5.2a)

Although it seems more sensible to measure image position from the surface
itself (f ʹ), it turns out that f (also known as the first focal length or the posterior
nodal distance) is the more useful measure. The reason for this is that rays of
light passing through the centre of curvature are not bent by the surface,
because they cross it at right angles (Figs. 5.2 and 5.3). This means that an
object in the outside world, and its image behind the surface, make the same
angle at the centre of curvature, and this in turn means that one can use the
principle of similar triangles to work out the size of the image of any object. In
Fig. 5.3, if object and image sizes are O and I, and their distances from the cen-
tre of curvature are U and V, then:

I/O = V/U (5.3)

Fig. 5.3 Relations between object (O) and image (I ) at a curved surface. A ray passing through
the centre of curvature (C) is not bent by the surface, so image and object are related by the two
similar triangles with angle α. Sizes and distances of the object and image are related by eqns (5.4)
and (5.6).

The ratio of the sizes of object and image (I/O) is often referred to as the mag-
nification, m. If the object distance U is large (say >100 times V) then the image
will be very close to the focal point for an object at infinity, so V can be replaced
by f, in which case:

I/O = f/U (5.4)

O and U are usually easily measured, so that the image size I can be found if f
is known. This equation applies to any optical system, provided the right
value for f is used.
For example: a grating of black and white lines, with a spacing between
the black lines of 5 mm, is just distinguishable from grey at 17 m. How far
apart are the images of the lines on the retina? We know that the focal
length of the average human eye (the cornea/lens combination) is 16.8 mm.
Applying eqn (5.4) then gives I = 5 × 16.8/17 000 mm, or 4.9 μm. This is
approximately twice the separation of cones in the fovea, so that each black/
white stripe pair in the image has two cones to receive it.
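The same arithmetic in a couple of lines, using eqn (5.4) with the numbers from the example:

O = 5.0        # mm, spacing of the grating lines
U = 17000.0    # mm, viewing distance (17 m)
f = 16.8       # mm, focal length (posterior nodal distance) of the human eye

I = O * f / U  # eqn (5.4): image size for a distant object
print(round(I * 1000, 1))  # ~4.9 micrometres, about twice the foveal cone spacing
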
An important concept in dealing with any optical system is the nodal
point. This is a point on the axis and is defined as the point of intersection
of straight lines connecting points on the image with points on the object
(which is assumed to be at a large distance). These may be rays in simple
systems, but in more complex systems this is not necessarily the case (Fig.
5.4); however, the definition just given still applies. The distance from the
nodal point to the image of a point at infinity is by definition the focal
length f (Fig. 5.4). Sometimes it is easy to decide where the nodal point is.
In a simple refracting cornea it is at the centre of curvature, as we have
seen. For a thin lens in air the nodal point is at the centre of the lens. For
a fish lens in water it is again at the lens centre, because the lens is spheri-
cal. However, for more complex systems involving thick lenses it is neces-
sary to find the posterior nodal point by ray-tracing, i.e. working out where
rays go surface by surface, and a method for doing this is given later in this

section. In the human eye the situation is fairly simple: for most purposes
the system can be regarded as having a nodal point 16.8 mm in front of the
focus for distant objects.
Besides making it possible to use eqns (5.3) or (5.4) to work out image
sizes from object sizes, the nodal point also allows one to specify both
image and object sizes in terms of the angles (α in Figs. 5.3 and 5.4) that
they subtend at the nodal point, as these angles are the same. If O is small
compared with U, then O/U is an angle in radians (if O = 1 and U = 10
then the angle is 0.1 radians, which is 0.1 × 180°/π, or 5.73°. If this angle is
greater than about 10º then the conversion from radians is no longer accu-
rate, and the appropriate angle is arctan O/U).
For objects that are closer to the eye the image moves deeper, behind the
focal point for rays from infinity (for optical purposes infinity is a few
metres away for an eye like ours). Equation (5.4) does not apply, and f must
be replaced by the actual image distance. This is calculated from:

n2 / v − n1 /u = (n2 − n1 ) /r = 1/f (5.5)

where the object and image distances u and v are now measured from the sur-
face itself, not the nodal point (this is why the symbols have been changed
from upper to lower case; u = U − r, v = V + r in Fig. 5.3). With air outside, eqn
(5.5) becomes:

n2 / v − 1 /u = (n2 − 1) /r = 1/f (5.5a)

It is important in using this equation to stick to an appropriate ‘sign conven-
tion’. The most straightforward is the Cartesian convention familiar from
graphs, where distances to the right of the surface are taken to be positive. So


Fig. 5.4 Definition of focal length f. Rays from a distant object making an angle α with the
optical system produce an image of size I at F. The ray in image space that makes an angle α with
the axis and passes through the image, must also pass through the nodal point N. The distance
from N to F is the focal length f. If α is small then α (in radians) = I/f. If α is known from
measurements in object space, then f can be found from the image size I.

in Fig. 5.3 u is negative and v, r, and f are positive. If the refracting surface were
concave to the left, however, r would become negative, as would f. To work
out the image size using u and v, the corresponding equation to (5.3) is:

I/O = (v /n2 ) / (u/n1 ) (5.6)

or if the object is in air,

I/O = v/(un2) (5.6a)

As an example, if a book is at 40 cm from a human eye, which makes no focus-
ing effort (perhaps because the reader is over 50), how far behind the eye’s
focal plane will the text be focused? We require v from eqn (5.5a), where u is
−400 mm and f has a typical value of 16.8 mm (we are assuming here that the
eye consists just of a cornea with this focal length, and an internal refractive
index of 1.336). Rearranging eqn (5.5a) gives v = n2/(1/f + 1/u), from which v
= 23.43 mm. The focal point for an object at infinity is f′ = n2f behind the front
surface, i.e. 22.51 mm, so if the retina is in focus for objects at infinity, the
image of the text will lie 0.92 mm behind this. This will produce a seriously
degraded image. With a 3 mm pupil, the image of a point source on the retina
will be a blur circle about 0.12 mm wide—half the width of the fovea.
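
The same example can be reproduced in a few lines of Python. The sketch below is only a numerical restatement under the simplifying assumption used above (a single refracting surface with f = 16.8 mm and an internal index of 1.336); the small differences from the figures quoted in the text come from the exact value taken for f′.

# Image distance for a near object, from eqn (5.5a): n2/v - 1/u = 1/f.
# Cartesian sign convention: distances to the right of the surface are positive.

n2 = 1.336      # refractive index behind the surface
f = 16.8        # first focal length of the simplified eye, mm
u = -400.0      # object (the book) 40 cm in front of the eye, mm

v = n2 / (1.0 / f + 1.0 / u)    # image distance for the near object, mm
f_prime = n2 * f                # focal distance for an object at infinity, mm
defocus = v - f_prime           # how far behind the retina the text focuses, mm

pupil = 3.0                     # pupil diameter, mm
blur = pupil * defocus / v      # approximate blur-circle diameter on the retina, mm

print(round(v, 2), round(defocus, 2), round(blur, 2))
# roughly 23.4 mm, about 1 mm of defocus, and a blur circle a little over 0.1 mm wide
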
In dealing with systems that have more than one surface, it is often con-
venient to work with powers rather than focal lengths. These are reciprocals
of focal lengths (1/f ) and they can be added together. Thus if two adjacent
surfaces, or thin lenses, have powers P1 and P2 (i.e. 1/f1 and 1/f2), the com-
bined power Pcomb is given by:

Pcomb = P1 +P2 (5.7)

and the corresponding focal length is 1/Pcomb. The unit of optical power is the
dioptre (symbol D), which is 1/f when f is measured in metres. The optics of
a typical human eye give a focal length of 16.8 mm, corresponding to a power
of nearly 60 D. Of this the corneal power is about 40 D and the lens 20 D. In
optometry powers can be used to specify both optical defects and the lenses
needed to correct them. Thus if someone has 5 D of myopia, meaning that
the optics focus too far forward, this will require a −5 D negative lens to cor-
rect it.
When the elements of an optical system are separated, as they often are
in real eyes, the power of the system is somewhat less than simple addition
suggests. For separated surfaces the power formula becomes:

Pcomb = P1 +P2 − dP1 P2 / n (5.8)



where d is the distance between the surfaces, and n is the refractive index in
that space. The powers of the individual surfaces can be found from their focal
lengths, by applying eqn (5.1).
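
A rough numerical sketch makes the difference between eqns (5.7) and (5.8) concrete. The figures below are illustrative: the 40 D and 20 D powers are the round values quoted above, and the cornea and lens are treated as thin elements 3.6 mm apart, the corneal–lens spacing of the Gullstrand model in Box 5.1.

# Combined power of two elements: touching (eqn 5.7) versus separated (eqn 5.8).

P1 = 40.0       # corneal power, dioptres (round value from the text)
P2 = 20.0       # lens power, dioptres
d = 3.6e-3      # separation in metres (cornea to lens front, Box 5.1)
n = 1.336       # refractive index of the aqueous humour between them

P_touching = P1 + P2                      # eqn (5.7): 60 D
P_separated = P1 + P2 - d * P1 * P2 / n   # eqn (5.8): a little less

print(P_touching, round(P_separated, 1))                  # 60.0 versus about 57.8 D
print(round(1000.0 / P_separated, 1), "mm focal length")  # about 17.3 mm

In a real eye the individual powers are a little higher than these round figures, which is why the combination still comes to about 60 D and a focal length near 16.8 mm.
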
For systems that are more complicated than a single thick lens, to which
eqn (5.8) applies, the position of the focus can be found by ray tracing.
Basically, one starts with parallel rays coming from the left (as in Fig. 5.2)
and applies eqn (5.5) to each surface in turn. When the image formed by
one surface is located, it becomes the object for the next, and so on. This is
foolproof, but sticking to the sign convention is crucial for success! In Box
5.1 this method is applied to the human eye. It can be used to determine
the imaging properties of any eye of the lens/cornea type.

Box 5.1 A model of the human eye


We can illustrate how to use ray tracing to find the position of the image in
the human eye, and its focal length, with a model that has served optome-
trists and ophthalmologists well for nearly a century. It rejoices in the name
of the Gullstrand simplified (No. 2) schematic eye, named after the Swedish
ophthalmologist who devised it, and is illustrated in Fig. 5.5a. Similar
model eyes have been devised for a number of vertebrates, including the
goldfish, frog, turtle, pigeon, rat, cat, and monkey (Martin 1983; Charman
1991). The Gullstrand model consists of a cornea and a lens, all having radii
of curvature close to those of real eyes (see Fig. 5.5a). There are thus three
surfaces to consider. (The human lens, like fish lenses, is not homogeneous,
but Gullstrand chose a single refractive index of 1.413 that would provide a
homogeneous lens with the same power as a real lens.) To determine the
image position relative to the rear surface of the lens we apply eqn (5.5) to
each surface in turn. In this calculation the subscripts of u, v, and r all refer
to the surface (1–3) at which refraction takes place. Refractive indices are
labelled from air on the left (1) to the vitreous space behind the lens (4). For
light rays from infinity reaching the cornea we have:

n2/v1 − n1/u1 = (n2 − n1)/r1

i.e.

1.336/v1 − 1/∞ = (1.336 − 1)/7.8

hence

v1 = 31.01 mm.



The object distance for the next interface (u2) is shorter than this by
the distance separating the cornea and the front of the lens (3.6 mm) so
that u2 = 27.41 mm. So at the front surface of the lens:

n3/v2 − n2/u2 = (n3 − n2)/r2

i.e.

1.413/v2 − 1.336/27.41 = (1.413 − 1.336)/10.0

hence

v2 = 25.03 mm.

The front and rear lens surfaces are also separated by 3.6 mm, so the object
distance u3 for the next surface is 21.43 mm. At the rear lens surface:

n4/v3 − n3/u3 = (n4 − n3)/r3

i.e.

1.336/v3 − 1.413/21.43 = (1.336 − 1.413)/(−6.0)

hence

v3 = 16.96 mm,

which is the distance from the rear surface of the lens to the focus.
In a system of several surfaces, the distance from the rear surface to
the focus is not the focal length ( f or f ‘), although in this case it is for-
tuitously similar to f. The focal length ( f ) is defined by the magnifica-
tion of the image. If an object at infinity subtends an angle α at the eye,
then the formula I/f = tan α (which is essentially the same as eqn 5.4)
gives the equivalent focal length, for any image-forming system (Fig.
5.4). The method is to work out the size of the final image by taking
the size of the initial image and multiplying it by the magnifications m
of each succeeding surface using eqn 5.6 (mk = I/O = (vk/nk+1)/(uk/nk), at
the kth surface). All the values for the lengths involved are available
from the preceding calculation of the position of focus. From the defi-
nition of focal length just given:



If /fe = tan α

where If is the size of the image formed by the final (3rd) surface, and fe is
the equivalent focal length of the whole system. Working through the
interfaces, beginning with the cornea we have:

I1/f1 = tan α

where f1 = f1′/n2 = v1/n2, so that I1 = v1(tan α)/n2. At the next interface
I2 = m2I1, where m2 = I2/O2 = (v2/n3)/(u2/n2). Similarly at the third and final
interface: I3 = m3I2, where m3 = (v3/n4)/(u3/n3). The final result is that

I3 = v1(tan α)/n2 ⋅ (v2/n3)/(u2/n2) ⋅ (v3/n4)/(u3/n3)

Then, since fe = I3/tan α, the tan α terms cancel, and the final expression for
fe is:

fe = v1/n2 ⋅ (v2/n3)/(u2/n2) ⋅ (v3/n4)/(u3/n3)

which reduces to

fe = (v1v2v3)/(u2u3n4)

Substituting the values from the previous calculation gives:

fe = (31.01 ⋅ 25.03 ⋅ 16.96)/(27.41 ⋅ 21.43 ⋅ 1.336)

fe = 16.77 mm.

We now have the position of the image, 16.96 mm behind the rear
surface of the lens, and also the equivalent focal length, 16.77 mm.
Notice that these calculations show that the nodal point of the eye is
just behind the rear surface of the lens. For most practical purposes,
this is all one needs to know about an optical system to work out
where images will fall, and how big they will be. The focal length can
be used to work out the sizes of images using eqn (5.4), and changes
in image position with object distance can be found from eqn (5.5),
taking n2 as 1.336, the refractive index of the rear chamber of the
eye.



What we have effectively done is to reduce the rather complex optics
of the human eye to a single air-fluid interface with a focal length of
16.77 mm. Because the refractive index is specified (1.336) the radius of
curvature of the fictitious surface is given by eqn (5.1a), and it comes to
5.63 mm, rather less than the actual cornea, because, of course, it has
had the power of the lens added to it. The surface must be situated a
distance r in front of the nodal point, i.e. 22.40 mm from the image,
and its position on the axis is known as the principal point of the sys-
tem. This ‘reduced’ eye is shown in Fig. 5.5b.
The methods given above can be applied to any eye that uses lenses
or lens-cornea combinations, including the ommatidia of apposition
compound eyes.
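
For readers who prefer to check this arithmetic by machine, the following minimal Python sketch performs the same surface-by-surface trace. It is our own illustration, not part of the Gullstrand model itself; the parameters are simply the values used above.

# Surface-by-surface ray trace of the Gullstrand simplified (No. 2) eye.
# Each surface uses eqn (5.5); the image formed by one surface becomes the
# object for the next, and the equivalent focal length is accumulated from
# the surface magnifications (eqn 5.6).

surfaces = [
    # (index in front, radius of curvature in mm, gap in mm to the next surface)
    (1.0, 7.8, 3.6),      # cornea, air in front
    (1.336, 10.0, 3.6),   # front surface of the lens, aqueous in front
    (1.413, -6.0, 0.0),   # rear surface of the lens, lens substance in front
]
n_behind = [1.336, 1.413, 1.336]   # index behind each surface (vitreous last)

u = None       # None marks an object at infinity
f_e = None     # equivalent focal length, built up as we go

for (n1, r, gap), n2 in zip(surfaces, n_behind):
    if u is None:                          # first surface, parallel rays from infinity
        v = n2 * r / (n2 - n1)             # eqn (5.2): image at the second focal point
        f_e = v / n2                       # first focal length of this surface (f = f'/n2)
    else:
        # for the later surfaces the object (the previous image) lies to the
        # right of the surface, so u stays positive under the sign convention
        v = n2 / ((n2 - n1) / r + n1 / u)  # eqn (5.5) rearranged for v
        f_e *= (v / n2) / (u / n1)         # this surface's magnification, eqn (5.6)
    u = v - gap                            # the image is the next surface's object

print(round(v, 2), "mm from the rear lens surface to the focus")  # 16.97 here (16.96 in the Box, which rounds at each step)
print(round(f_e, 2), "mm equivalent focal length")                # 16.78 here (16.77 in the Box)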

[Fig. 5.5a parameters: r1 = 7.8 mm, r2 = 10.0 mm, r3 = −6.0 mm; surface separations 3.6 mm; n1 = 1.0, n2 = 1.336, n3 = 1.413, n4 = 1.336; v1 = 31.01 mm, v3 = 16.96 mm; f = 16.77 mm; scale bar 10 mm. Fig. 5.5b: reduced eye with n = 1.0 outside, n = 1.336 inside, r = 5.63 mm.]

Fig. 5.5 (a) Dimensions of the Gullstrand (No. 2) schematic human eye. P, principal plane; N,
nodal point; FE, focal point of the eye, for an object at infinity; FC, focal point of the cornea
on its own. Further explanation in the text. (b) Reduced eye. For most purposes the optical
system in (a) can be replaced by a single refracting surface, radius r, centred on the nodal
point. The surface is situated at the principal plane in (a).

Variations on the lens/cornea theme in land vertebrates
To solve the problem of excessive optical power, land vertebrates could have
done a number of things. They might have abandoned the lens altogether
and adopted the cornea as the sole image-forming structure, or they could
have kept the lens and flattened the cornea so that it had no power, or they
could have retained both but shrunk the eye to fit the shorter focal length of
the combined system. The last of these possibilities, or something like it,
does occur in nocturnal mammals; but most of the reptiles, birds and mam-
mals have opted for a compromise, in which the lens is retained, but with
much less power than the ancestral fish lens. The lens and cornea then divide
the optical power between them: in humans this ratio is about 1 to 2. In opti-
cal technology it is usually a good idea to split the required refraction
between several surfaces, because the optical defects (aberrations) of several
weakly curved surfaces are usually less than those of a single surface of
much stronger curvature. Retaining a weaker, flatter lens, together with a
not too curved cornea, may have been a way of obtaining images of high
quality.

Sizes and shape of eyes


Figures 5.6 and 5.7 illustrate the variety of eye shapes among land verte-
brates. The basic design of the eyes of reptiles, birds, and mammals is
very similar, with a hemispherical rear chamber, a biconvex lens, and a
cornea with a pronounced curvature. There are differences in the ‘house-
keeping arrangements’ between the classes; for example, the nature of
the vascular supply varies and so does the mechanism of accommoda-
tion. However, the main differences that are apparent in Fig. 5.7 are not
linked to phylogeny but to lifestyle. These major variations are of three
kinds: there are differences in overall size, differences in the shape and
relative size of the lens related to nocturnality, and a tendency to rede-
velop a spherical lens and a flat cornea in species that have returned to an
aquatic life.
Why are some eyes bigger than others? The answer seems obvious: big-
ger animals have bigger eyes. This is true to some extent (see Hughes 1977,
p. 654) but it is certainly not the whole answer. Birds tend to have much
bigger eyes for their body size than mammals, and smaller mammals have
relatively bigger eyes than larger ones. Zebras, for example, have larger eyes
than elephants, and even some whales. The largest eye of any land animal
is that of the ostrich, with a diameter of 50 mm compared with 40 mm for
a horse and 24 mm in man (Martin 1985).

Fig. 5.6 Photographs of the eyes of various terrestrial vertebrates. (a) Mouse lemur (Microcebus
murinus): typical large nocturnal eyes with wide pupils. (b) Elephant seal (Mirounga leonina):
flattened corneas are an adaptation to amphibious life (see Fig. 5.8). (c) Chameleon (Chamaeleo
oshaughnessyi): turret-like diurnal eye with a small pupil. (d) Tokay gecko (Gekko gecko): active day
and night; notched slit pupil partially open (see Fig. 5.11). Photographs: (a) and (c) David Haring,
Duke University Primate Center; (b) Johnny Johnson (Bruce Coleman Collection); (d) Kim Taylor
(Bruce Coleman Collection).

There are two optical reasons for having a large eye: resolution for acute
vision, and sensitivity for vision in dim conditions (Chapter 3). Good reso-
lution requires a large eye to provide a long focal length, so that the angle
between receptors is as small as possible (see Chapter 3, eqns 3.1 and 3.2).
This has to be matched by good image quality, which requires a large lens
to provide a small diffraction blur-circle (see Chapter 3, eqn 3.3). Both these
conditions imply that the larger the eye the better the resolution, and this
is why primates and birds of prey have large eyes. Horses, however, have
large eyes, but do not have the same need for high-resolution vision, and
indeed their resolution is not remarkable, about 2.5 times worse than man.
The other explanation must be that horses are partly nocturnal. The large
eye is then needed to achieve a wide aperture for capturing photons, as
discussed in Chapter 3. People familiar with horses say that they can pick

Fig. 5.7 Variations in the structure of eyes from animals with different terrestrial lifestyles (not
on the same scale). Nocturnal animals have the biggest lenses and diurnal animals the smallest;
animals active day and night (arrhythmic) have intermediate eyes. Adapted from Walls (1941).

their way through difficult terrain at light levels where the rider can barely
see the ground. Animals that need both high resolution and high sensitiv-
ity have particularly large eyes. The reason for the ‘tubular’ shape of owl
eyes is that hemispherical eyes with such long focal lengths and wide aper-
tures would not fit into the head. Owls have squeezed them in by removing
much of the peripheral part of the globe, but there has been a price to pay;
owl eyes cannot move more than a few degrees around any axis, despite a
full set of eye muscles.
Animals that are active day and night, such as horses, owls, and many
mammalian carnivores, have eyes in which the lenses have a diameter of
about 0.4–0.5 times the diameter of the eye itself. In truly diurnal animals,
for example, monkeys and parrots, the ratio is lower, between 0.3 and 0.4
(Fig. 5.7). However, in nocturnal animals that rarely emerge in daylight,

such as the house mouse, opossum, and bush baby, the lens diameters are
0.6–0.8 times the eye diameter (Fig. 5.6a). These differences are of relative
not absolute lens size, and are concerned with getting as bright an image as
possible for a given size of eye. A large almost spherical lens, combined
with a strongly curved cornea, gives a very short focal length, and com-
bined with a wide aperture this gives the eye a very high light gathering
power (image brightness is proportional to (D/f)², where D is aperture
diameter and f focal length). In photographic terms a house mouse has an
F-number ( f/D) of about 0.9, compared with about 2.0 for a human with a
wide open pupil. The mouse’s image is brighter by a factor of nearly 5.
Generally speaking the power of the cornea is relatively more important in
diurnal eyes, and the lens in nocturnal eyes.
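
The factor of five follows directly from the two F-numbers, since image brightness scales as 1/F². A one-line check in Python, using the round values quoted above:

# Relative image brightness goes as (D/f)^2, i.e. as 1/F^2.

def relative_brightness(f_number):
    return 1.0 / f_number ** 2

print(round(relative_brightness(0.9) / relative_brightness(2.0), 1))
# about 4.9: the mouse's image is nearly five times brighter than ours
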
Animals such as seals have spherical lenses for a different reason. Their
problem is that, having returned to water, they no longer have an optically
useful cornea. They therefore require a much more powerful lens, and that
means a spherical lens, just as in fish (Fig. 5.8). However, they are not wholly
aquatic, and when they come onto land the reappearance of a strong cornea
would make them very myopic. One solution is a flattened cornea with lit-
tle power in either medium, and as the comparison with the lynx shows,
this is the direction that the seal eye has gone (Fig. 5.6b). Thanks to the
combination of flattened cornea and slit pupil, harbour seals (Phoca vitulina)
do indeed have similar acuity in air and in water, at least in daylight (Hanke
and Dehnhardt 2009). Incidentally this is one reason why baby seals look so
appealing. Their eyes are like limpid pools, not because of the purity of
their souls, but because the corneas are flat.

Fig. 5.8 Return to the sea. The eyes of seals have a flattened cornea and a spherical lens, like the
eyes of fish (see Fig. 4.8). Their terrestrial relatives, such as the lynx, have a domed cornea and a
thinner, weaker lens. From the famous set of engravings of the eyes of mammals, birds and fish by
D.W. Soemmerring (1818) De Oculorum Hominis Animaliumque. Vandenhoek and Ruprecht,
Göttingen. The drawings are all of the lower hemisphere of the left eye.

Accommodation, a new function for the lens


In fish, with a spherical lens of fixed focal length, the only available mech-
anism for focusing is for the lens to move bodily towards or away from the
retina, just as in a camera. In mammals, birds and reptiles, however, the
lens is deformable and so can change its focal length. In mammals, relaxa-
tion of the elastic capsule surrounding the lens causes its surfaces to bulge,
which decreases their radii of curvature and so increases their power (Fig.
5.9). Paradoxically, the relaxation of the capsule—which allows close
focusing—comes about by an increase in tension in the ciliary muscle, and
hence the eye is focused for distance when the muscle is relaxed. In a
young person the radius of curvature of the front face of the lens can halve,
from 10 mm to 5 mm, although the change in the rear surface is much less
pronounced. The result is an increase in the power of the eye as a whole
by 8.6 dioptres. This is equivalent to putting a lens with this power in
front of the unaccommodated eye (eqn 5.7), and as such a lens would have
a focal length (1/P) of 0.116 m, the effect is to enable the eye to focus on a
point 11.6 cm away. Sadly, lens elasticity declines more or less linearly
with age, and for most people the lens has lost all focusing power by about
the age of 55.
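
The arithmetic of accommodation is simple enough to be worth setting out explicitly. The sketch below converts an accommodation amplitude in dioptres into a near point, for an eye that is focused at infinity when relaxed; the 8.6 D value is the one quoted above, while the 2 D value is an illustrative middle-aged figure of our own choosing.

# With the relaxed eye focused at infinity, an accommodation amplitude of
# A dioptres brings the near point in to 1/A metres.

def near_point_cm(amplitude_dioptres):
    return 100.0 / amplitude_dioptres

print(round(near_point_cm(8.6), 1), "cm")   # about 11.6 cm, the young-adult case above
print(round(near_point_cm(2.0), 1), "cm")   # about 50 cm: roughly what is left in middle age
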
Accommodation in reptiles and birds is slightly more complicated. One
set of muscles (Brücke’s muscle) pushes the ciliary body inward on the lens
as it contracts, so deforming it and increasing its power, while another mus-
cle system (Crampton’s muscle) pulls on the cornea in a way that reduces
its radius of curvature. The corneal mechanism is particularly important in
birds. Chameleons use their focusing mechanism to judge the distances of
insect prey (Harkness 1977), and accommodate particularly fast, with a
mechanism based on lens deformation (Ott and Schaeffel 1996). Remarkably,
the chameleon lens at rest has negative power, implying a refractive index
profile quite different from the usual ‘highest in the centre’ gradient. This
gives the combined optical system of cornea and lens a very long focal
length, and a correspondingly magnified image. Other features related to
the chameleon’s lifestyle, which involves searching for insects amongst foli-
age, are the extreme mobility of the turret-like eye (Fig. 5.6c) and the pres-
ence of a distinct high-resolution fovea.
A type of accommodation that involves no movements or deformations
of the lens is apparently found in horses and rays (elasmobranch fishes).
The mechanism is known as a ‘ramp retina’ and works on the principle that
the upper part of the retina will always be imaging the lower (closer) part
of the field of view, whereas the lower retina will image distant objects. The
upper retina thus needs to be further away from the nodal point than the
lower part, leading to a retina to whose spherical shape has been added a

[Fig. 5.9 panels: teleost; elasmobranch and amphibian; reptile and bird; bird; mammal.]

Fig. 5.9 Accommodation mechanisms in vertebrates. The dashed lines show the results of
contraction of the accommodatory muscles. In teleosts the lens is drawn bodily towards the retina,
accommodating for distant objects. In elasmobranchs and amphibians the lens moves away from
the retina, accommodating for near objects. In reptiles and birds the lens can be deformed by
Brücke’s muscle which pushes on the lens via the ciliary body. In birds contraction of Crampton’s
muscle pulls on the cornea, decreasing its radius of curvature. In mammals the lens also deforms,
but this occurs by the elasticity of the capsule around it. Contraction of the ciliary muscle relaxes
the tension on the structures supporting the lens, allowing it to deform. In reptiles, birds, and
mammals the actions of the accommodation muscles all permit near objects to be brought into
focus.

backward slope, or ramp. Doubts have been raised about the reality of ramp
retinas, because the differences in retinal distance involved are much
smaller than the early diagrams suggested (Sivak 1976). Nevertheless, ‘lower
field myopia’ is well established in ground foraging birds, where it keeps
the retina focused on the ground plane, and birds appear to have mecha-
nisms that do indeed adjust the relative positions of retina and focal plane
during development (Schaeffel et al. 1988). Another ‘static’ form of accom-
modation is found in fruit-bats. Here the retina is deformed by a series of
conical papillae that ensure that adjacent local regions are at different dis-
tances from the lens, and so are in focus for objects at different distances.
Further information on accommodation can be found in Ott (2006).

Corneal shape and spherical correction


Spherical aberration is not just a problem for lenses, as we saw in Chapter 4. It
afflicts spherical air-water surfaces in much the same way, over-focusing rays
more and more as the distance from the axis increases. One cure for corneal
spherical aberration is to make the surface aspherical, with the outer regions
having lower curvature (and hence relatively less power) than those close to
the axis. This is what happens in the human eye; the periphery of the cornea
has a radius of curvature twice that of the central region. The overall shape
that produces a point image has an elliptical profile, and the human cornea
approximates to this.
There is a price to be paid for having a non-spherical cornea. Because the
spherical symmetry of the old aquatic eye has been lost, the aspheric eye
has a single optical axis along which the optics are corrected, and image
formation is particularly good. However away from this axis the cornea
presents a tilted profile and the image quality gets rapidly worse. A conse-
quence of this is the highly centred visual system of primates, including
man, where ‘good’ vision is concentrated in a central foveal region only 1°
across. To use this effectively we have a very sophisticated eye movement
system that finds objects of interest in the periphery and centres them for
foveal scrutiny (see Chapter 9). Fixation of this kind is relatively uncom-
mon, even amongst mammals.
It seems that the lens in man corrects its own aberration in the same way
as the lenses of fish (Chapter 4), by having a refractive index gradient
(Millodot and Sivak 1979). It is, after all, their descendant. So each refracting
structure looks after itself: the cornea by being aspheric and the lens by
being inhomogeneous. Something rather more interesting occurs in the rat,
and other mammals with nocturnal-type eyes in which the large spherical
lens forces the cornea to be more or less spherical too (Chaudhuri et al.
1983). The cornea has thus little scope for an aspheric correction—which

might in any case be inappropriate because most of these animals need a
large field of view without major variations in image quality across it.
Instead the lens has a gradient that makes it over-corrected, like a fish lens
but more so, so that it corrects not only its own aberration but that of the
cornea as well. And because the system is spherical the correction works
across the whole field.

Form and function of the pupil


Most land vertebrates have a very active pupil that changes size rapidly in
response to changes in illumination of the retina. This is in contrast to the situ-
ation in many fish, where the pupil, if active at all, may take minutes to open
and close, and is often activated directly by light rather than neurally, via the
retina.
The human pupil changes in diameter from about 2 mm in sunlight to 8
mm in the dark, giving a maximum theoretical difference in sensitivity of 16
times, though for various reasons this reduces in practical terms to about 10
times. Compared with the full range of lighting conditions over which the
eye operates (about 10¹⁰) this is very little. It seems that the function of the
pupil in man is not so much to compensate for changes in brightness as to
obtain the best compromise between resolution and sensitivity. In bright
light it also limits the cone of light reaching the retina to match the accept-
ance angle of the cones. A pupil diameter of 2–3 mm is optimal in the
sense that it provides the best resolution the optics can support (Fig. 5.10).

[Fig. 5.10 plots spatial frequency for 20 per cent modulation (c/deg) against pupil diameter (mm): diffraction limits performance at small pupils, aberration at large ones.]

Fig. 5.10 Effect of pupil diameter on resolution in humans. The graph shows the spatial
frequencies that provide 20 per cent modulated images at different pupil diameters in the human
eye, which is a convenient measure of resolution. The optimum pupil diameter is 3 mm. Data from
Charman (1991).

Smaller apertures result in a poor performance because diffraction reduces
the spatial cut-off frequency (Chapter 3), and with apertures bigger than
about 3 mm other defects such as spherical aberration become progressively
worse, again reducing resolution. As light levels drop, resolution starts to
become limited by photon noise rather than optical quality (Fig. 3.11), and it
then pays to open the pupil to admit more light, as the resulting decrease in
optical acuity will no longer be noticed.
Nocturnal animals have wide aperture eyes that are intrinsically more sen-
sitive than our own, and these require protection in daylight to prevent the
bleaching of all the photopigment. The muscular mechanics of the circular
pupil mean that it cannot close down beyond a certain limit, and the alterna-
tive is a slit pupil, which can close much further. In the cat eye for example
(Fig. 5.11) the change in pupil area between dark and light is 135-fold, a ten
times greater range than in man. In gecko eyes the two margins of the slit
have a series of paired notches (Fig. 5.6d), and when the pupil closes these
match up to give a set of small pin-holes (Fig. 5.11). Compared with the fully
open nocturnal pupil this cuts down the light by more than a thousand-fold,
enabling geckos to hunt in daylight without damage to the retina. Slit
pupils may be horizontal, as in some sharks, or more commonly vertical, as

[Fig. 5.11 panels: primate, cat, and horse (top row); hyrax, gecko, and catfish (bottom row).]

Fig. 5.11 Pupil shapes in vertebrates. Top row: round and slit-shaped pupils in mammals, showing
how the cat’s slit pupil can close further than the circular primate pupil. Iris closer muscles are
continuous lines and opener muscles dashed lines. Bottom row: gecko pupil contracts to four ‘pin-
holes’ in the light. The hyrax or coney (Procavia, a small desert mammal) has a pupil partly closed
by a central operculum, which acts as a sunshade. A similar mobile operculum is present in some
fish, such as the catfish Plecostomus. Combined from Walls (1941).

in many lizards, snakes, and mammals. Horizontal pupils in mammals tend
to be broadly oval, as in horses (Fig. 5.11) and ruminants (Walls 1942).
The other function of a slit pupil, mentioned in Chapter 4, is to assist in
the correction for chromatic aberration. This correction involves the use of a
multifocal lens in which different zones have different focal lengths, so that
wavelengths refracted by different amounts because of optical dispersion can
all be brought to a common focus (Fig. 4.5). A slit pupil, even when partially
closed, allows all lens zones to be sampled, whereas a circular pupil cuts out
the outer zones, and so undermines this correction (Malmström and Kröger
2006). This cannot be the universal reason for a slit pupil, however, since
Octopus, which is colour-blind, also has an oval horizontal pupil (Plate 1).
Another pupil arrangement that permits a high degree of closure is a
circular ring with an expandable operculum inside it. This is quite common
in shallow-water fishes (e.g. rays and catfish, Fig. 5.11, Plate 1), and amongst
mammals in the hyrax (Procavia) and some whales. This arrangement has
the additional advantage of acting as a sunshade, excluding strong light
from above, and in some cases it may also act to camouflage the eye. Squid
and cuttlefish have W-shaped pupils, possibly for the same reasons.
Calculations suggest that they also provide a more even retinal illumination
than a circular pupil.

Resolution
The two factors that limit an eye’s resolution are the quality of the optics, and
the fineness of the retinal mosaic, as was discussed in detail in Chapter 3.
Psychophysical measurements show that the finest detail a well-focused
human eye can resolve, expressed as an angular spatial frequency, is very sim-
ilar to both the optical cut-off frequency set by the diffraction limit (D/λ cycles/
radian) and the sampling frequency of the retinal mosaic (f/2s
c/rad), where D is the aperture diameter, λ the wavelength, f the focal length
and s the receptor separation. Both are close to 60 c/deg (3438 c/rad). This
means that the performance of a human eye, in bright light, is as good as the
physical constraints on optics will allow, and that the retinal mosaic has
evolved to match this optical limit.
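
Both limits are easy to compute. In the sketch below the pupil diameter, wavelength, and cone spacing are typical bright-light values of our own choosing rather than measurements, but they show how the diffraction cut-off and the sampling limit both come out in the region of 60 cycles per degree.

import math

# Diffraction cut-off: D / wavelength cycles per radian.
# Sampling limit of the cone mosaic: f / (2 s) cycles per radian.

D = 2.0e-3            # bright-light pupil diameter, m (illustrative)
wavelength = 550e-9   # middle of the visible spectrum, m
f = 16.8e-3           # focal length of the eye, m
s = 2.5e-6            # foveal cone spacing, m (illustrative)

per_degree = math.pi / 180.0   # radians per degree

print(round(D / wavelength * per_degree), "c/deg diffraction cut-off")   # about 63
print(round(f / (2 * s) * per_degree), "c/deg sampling limit")           # about 59
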
In terms of resolution, our eyes are probably at least as good as any other
mammal, but they are not quite as good as some raptorial birds. Behavioural
data for some hawks show that they can resolve gratings that are more
than twice as fine as the human limit, about 160 c/deg. How is this achieved,
given that the hawks’ eyes are similar in size to our own? There are three
differences that contribute to this improvement. First, the hawk’s daylight
pupil is wider than ours, about 6 mm, which improves the diffraction limit
by a factor of between 2 and 3. Second, the foveal receptors are narrower,

about 2 μm between centres, rather than 2.5 to 3 μm in man. Third, the
hawk uses an optical trick to increase the effective focal length of the eye.
It seems that the pit-like depression in front of the fovea acts as a negative
lens, since the material of the retina has a higher refractive index than the
vitreous humour, thereby creating a modest telephoto system, the principle
of which is shown in Fig. 5.12. The image focused by the cornea and main
lens is shifted backwards by the negative lens, giving a longer overall focal
length, and a locally magnified image. The magnification of the system,
relative to a system without the negative lens, is about 1.45 (Snyder and
Miller 1978). The overall effect of these various modifications is a linear

[Fig. 5.12 labels: (b) vitreous, n = 1.336; neural retina, n = 1.369; deep fovea; foveal pit; external limiting membrane; oil droplets; cone outer segments; scale bar 100 μm. (c) telephoto lens combination, flens and fcombination. (d) foveal pit surface with indices nc and nr.]

Fig. 5.12 Telephoto optics in the eye of a hawk. (a) Section of head of a hawk (Buteo latissimus)
showing the eyes meeting on the centreline (typical of birds) and the direction of view of the deep
foveas. Temporal foveas are also shown. (b) Diagram of the foveal pit and its relation to the retina.
In the telephoto theory the optically important surface is the spherical bottom of the foveal pit. (c)
Construction of a telephoto camera lens. The effect of the negative (concave) lens is to increase
the focal length of the combination, so that its imaging properties are those of a single lens in
front of the combination (dotted ). (d) As above, but with a single concave surface, corresponding
to the foveal pit. The magnification of the system is given by: m = 1 + (s/r)(nr − nc)/nc. Redrawn
from Snyder and Miller (1978).

resolution gain of more than 2, but in terms of the number of foveal recep-
tors imaging a given area, it is a gain of 4.
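
The telephoto formula in the caption of Fig. 5.12 can be turned around to see what geometry such a magnification implies. The sketch below is not a reconstruction of Snyder and Miller's measurements; it simply inverts m = 1 + (s/r)(nr − nc)/nc, using the refractive indices given in Fig. 5.12, to find the ratio of pit depth s to radius of curvature r that would produce the quoted magnification of about 1.45.

# Invert the foveal-pit telephoto formula of Fig. 5.12: m = 1 + (s/r)(nr - nc)/nc.

n_c = 1.336    # vitreous humour (Fig. 5.12)
n_r = 1.369    # neural retina (Fig. 5.12)
m = 1.45       # magnification quoted in the text

s_over_r = (m - 1.0) * n_c / (n_r - n_c)
print(round(s_over_r, 1))   # about 18: the refraction is weak, so the pit must be
                            # deep relative to the curvature of its floor
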
Other vertebrates may or may not have as good a match between the
diffraction and sampling limits as do hawks and humans. Many are noc-
turnal, and use their wide pupils for sensitivity, not resolution. The hooded
rat, for example, has an aperture of about 0.3 mm in bright light, roughly
10 times smaller than man (Hughes 1977). This implies a diffraction limit to
resolution of about 10 c/deg. In fact the threshold measured by behavioural
testing is about 1 c/deg, 60 times worse than man. This is a much larger
discrepancy than can be accounted for by optics alone. The horse, with an
eye nearly double the diameter of ours, has a behavioural resolution (meas-
ured by its ability to distinguish stripe patterns from uniform grey) of only
23 c/deg, about three times worse than man (Timney and Keil 1992). The
lower resolution, relative to the diffraction limit, of these and many other
vertebrates may be due to optical imperfections, larger receptors, or
commonly the grouping of receptors into larger units based on ganglion
cells, within which there is no further resolution.

Ecology, resolution, and ganglion cell distribution


Although this chapter is not specifically concerned with the organization of
the retina, a consideration of the way ganglion cells are distributed in verte-
brate eyes tells us a good deal about an animal’s visual priorities. Animals
that live a life dominated by activity around the horizon, for example, preda-
tors like the cheetah and herbivores such as rabbits and ungulates, have a
narrow horizontal strip through the retina where the ganglion cell density is
very high—the ‘visual streak’ (Fig. 5.13). Animals from a more three-dimen-
sional environment, such as forest, either have a uniform retina, or one with
a more or less circular ‘area centralis’ where ganglion cells are concentrated
(Hughes 1977). A similar situation occurs in fishes from different underwater
niches, as we saw in Chapter 4 (Fig. 4.9). In primates this ‘area’ concentrates
to a 1° central spot, the fovea centralis, with exceptionally high numbers of
ganglion cells associated with it—one per cone, about 150 × 10³ per square
millimetre. This compares with about 6 × 10³ in the centre of the area centra-
lis of the rat retina. Birds often have two foveas, one looking out laterally,
and the other, situated at the rear (temporal region) of the retina, imaging
the region of the bill where the bird pecks at food (as in the pigeon retina in
Fig. 5.13).
This tendency for ganglion cell densities to match the pattern of ‘inter-
est’ in the environment is seen in mammals, birds (Martin 1985) and fish
(Collin and Pettigrew 1988). It seems to be a way of economizing on the
numbers of axons that have to leave the eye for the brain, by matching


Fig. 5.13 Distribution of ganglion cells in the retina. The ganglion cells supply the axons in the
optic nerve, and so represent a ‘bottleneck’ in the visual system. Numbers are thousands of nuclei
per square millimetre of the flattened retina. The rat has a roughly circular area centralis, whereas
the rabbit has a linear visual streak corresponding to the horizon. The cat is somewhat
intermediate. The pigeon has two distinct foveas with particularly high ganglion cell densities. One
looks laterally along the optical axis of the eye, whereas the other is situated temporally, and
images the region of the bill tip. T and S, temporal and superior. P, the position of the pecten, a
nutritive structure in the bird eye. Modified from Hughes (1977) and Martin (1985).

visual information to neural capacity. In vertebrates the optic nerve is a
real bottleneck in the system (in contrast to compound eyes where it is the
optics that limit performance). In humans the optic nerve contains about
1.2 million axons of ganglion cells, compared with about 6.5 million cones
and 120 million rods, giving an overall receptor to optic nerve axon ratio
of about 100:1. Clearly there is great compression, and it is not hard to
guess why. The human optic nerve, 2 mm thick, is flexible enough not to
interfere with eye movements; but if each receptor contributed an axon, the
nerve would need to be as wide (and so as solid and immobile) as the eye
itself.

Axis direction and binocular field of view in vertebrates


The horizontal visual field of a single eye is about 170º in most vertebrates,
and slightly more in fish. In most of the lower vertebrates—fish, amphibians,
and reptiles—the eyes are directed laterally, with their axes making angles

Fig. 5.14 (a) Direction of the eye axis relative to the head axis and (b) the extent of binocular
overlap for various vertebrates. Data from Duke-Elder (1958), Martin (2009), Rochon-Duvigneaud
(1943), and Walls (1942).

of between 70º and 90º to the forward direction, which means that the for-
ward-directed binocular field is small, a few tens of degrees. However, in
mammals and some birds a pattern emerges in which the eyes of predatory
species, such as cats, dogs, hawks, and owls, have their axes directed much
further forward (Fig. 5.14). This gives them a larger frontal region of binocu-
lar overlap, and correspondingly a blind region behind. In carnivores the sig-
nificance of this overlap is probably that it provides a better signal, in terms
of both resolution and photon catch, in the hunting direction. In primates,
which are not primarily carnivorous, it provides a basis for stereoscopic vision
based on the disparity between the images in the two eyes, and this is of great
value when looking for and manipulating objects within a range of up to a
few metres. On the other hand, prey animals, such as mice, squirrels, rabbits,
and granivorous birds, have retained laterally directed eyes, giving them a
total field of view of close to 360º in the horizontal plane, and often a com-
plete view of the sky as well. Large herbivores, with fewer natural predators,
have their visual axes directed at intermediate angles. Useful reviews are
given by Hughes (1977) for mammals, and Martin (1999) for birds.

Amphibious eyes
In all vertebrate groups there are some species that need to see reasonably
well in both air and water. Flying fish, mud skippers, most amphibians, tur-
tles, diving birds, seals, and otters all spend part of their lives in each
medium. How do they cope with the sudden large changes in optical power
when they dive or surface? We have seen one method already. Seals, many
diving birds such as penguins, and some rock-pool fish minimize the

problem by having a much flatter cornea than their non-amphibious rela-
tives (Figs. 5.6b and 5.8), and thus the cornea has little or no refracting power.
The lens has to do nearly all the optical work in both media, and there is lit-
tle change in the position of focus on immersion. One problem with a flat
cornea is that it tends to restrict the field of view, and also results in serious
distortion in the periphery. The rock-pool fishes Dialommus fuscus and
Mnierpes macrocephalus have solved this problem in a particularly interesting
way (Fig. 5.15a). They have two flat goggle-like corneas in each eye, making
an angle of about 135°. Presumably this both increases the field of view and
decreases distortion, although at the price of having a distinct ‘join’ through
the centre of the visual field. A somewhat similar arrangement occurs in the
flying fish Cypselurus heterurus which has a tent-like cornea consisting of
three almost flat triangular facets.
An alternative is to have a focusing mechanism so strong that it can
make up the shortfall in optical power. When we dive we lose 40 D (diopt-
res) and can accommodate by a maximum of 10 D, so we are still left with
an unbridgeable 30 D. Certain diving birds, however, have a method of
altering the curvature of the front surface of the lens that is much more
effective than ours. Birds and reptiles have a muscular iris supported by a
ring of bony ossicles around the eye, and they are able to squeeze the lens
into the constricted pupil using the powerful ciliary (Brücke’s) muscle,
creating a very high curvature in the resulting blip (Figs. 5.9 and 5.15b).


Fig. 5.15 Amphibious eyes. (a) The rock-pool fish Mnierpes macrocephalus with flat-faced
‘goggles’. (b) Accommodation in the merganser, a diving duck, is achieved by squeezing the lens
through the iris to produce a high curvature. (c) The ‘four-eyed fish’ Anableps achieves
simultaneous vision in air and water by the use of an ovoid lens with different curvatures on
different axes. Various sources.

Using this technique mergansers and goldeneyes, both diving ducks, are
able to generate 80 and 67 D of extra power respectively, whereas the non-
diving wood duck and mallard produced only about 6 and 3 D (Sivak et al.
1985). Other diving birds probably use this method of accommodation, as
do aquatic turtles and water snakes.
The ‘four-eyed fish’ Anableps anableps from South America has solved
the problem of seeing in air and water simultaneously. Anableps cruises
with half its eye above the surface meniscus, and half below (Fig. 5.15c). It
has two pupils, one looking into each medium, and a lens whose shape
‘combines an aquatic optical system harmoniously with an aerial one, in a
perfectly static situation’ (Walls 1942). The compromise is achieved by the
ovoid shape of the lens, with its long axis in the direction that looks down
into the water. Rays parallel to the axis meet the strongest curvature of
the lens, and so are refracted relatively more than rays coming from air,
which meet the weaker curvatures of the short axis. The latter rays, how-
ever, are also bent by the cornea, so that the total amount of refraction is
much the same in the two cases. It seems that this wonderful design is
unique.
One invertebrate deserves mention here. The chitons are crawling marine
molluscs protected by eight dorsal shell plates, and embedded in these
plates are photoreceptors that respond to dimming. In most species these
structures (‘aesthetes’) are unspecialized, but in Acanthopleura and some
other species these are elaborated into ocelli with a lens and a retina with
a few tens of receptors. The lenses are made of aragonite, which is birefrin-
gent, and the two refractive indices (1.68 and 1.53) give the lenses two focal
lengths. According to measurements by Speiser et al. (2011) these allow the
animal to have in-focus images in both air when the tide is out, and water
when it is in. Resolution is modest, about 9º in both media, but this allows
the animals to detect the movement of potential predators, to which they
respond by clamping down onto the substrate.

Invertebrate eyes with corneal optics


It comes as something of a surprise to find that the cornea-lens combination
(our kind of eye) is not particularly popular in the animal kingdom. It is nec-
essarily confined to terrestrial animals, and since insects have opted predom-
inantly for compound eyes, that only leaves one major land-living invertebrate
group, the spiders, which makes exclusive use of the simple corneal eye sys-
tem. Some insects also have simple eyes when they are larvae, and adults
may also use them as flight-stabilizing devices, in conjunction with the com-
pound eyes.

The eyes of spiders


Spiders and their terrestrial relatives (particularly the phalangids and scor-
pions) all have eyes of the simple type with the cornea as the main refracting
surface. Their distant chelicerate relatives the horseshoe crabs (Limulus) have
compound eyes, and it may well be that the eyes of modern arachnids are
derived from compound eyes by a process of simplification. True spiders
usually have eight eyes (sometimes six) and these are of two kinds: the prin-
cipal eyes which point forwards and the secondary eyes which cover more
peripheral fields of view (Fig. 5.16). The two kinds of eye have different
embryological origins, and the layout of the receptors is different. In the
principal eyes the receptors are similar to those of most other invertebrates.
They have a distal segment (nearest to the lens) which bears the photopig-
ment on microvilli, and the cell body and axon are proximal to it (Fig. 5.16).
In the secondary eyes, however, it is usually the cell body that is distal, with
the microvillous segment forming what is morphologically the first part of
the axon (Blest 1985).
Optically the eyes of spiders are very varied. The eyes are all quite small,
mostly much less than a millimetre across, but this still makes their lenses
larger than the facets of compound eyes by an order of magnitude, and so
their potential resolution is correspondingly greater. Some have indeed spe-
cialized in high resolution, most notably the jumping spiders (Figs. 5.17–
5.19). The most impressive of these, Portia fimbriata, has an inter-receptor
angle of 2.4 arc-minutes; this is only five times greater than the human eye,


Fig. 5.16 Eyes of spiders. (a) Head of the house spider Tegenaria showing the four pairs of eyes.
The principal eyes (antero-median, AM) have a different structure from the three pairs of
secondary eyes (antero-lateral, AL; postero-lateral, PL; postero-median, PM). (b) Details of a
principal eye and a secondary eye. In the AM eye the photopigment-containing rhabdoms (shown
darker) are distal in the receptor cells (Rec), but in the PM eye the receptor nuclei are distal. Above
the retina lie the transparent vitreous cells (Vit). The PM eye has a tapetum, but the AM does not.
See also Fig. 6.9. and Plate 3.

Fig. 5.17 Eyes of Portia (left), a jumping spider with the highest acuity known in any spider, and
Dinopis, an ‘ogre-faced spider’ with the most sensitive eyes. In Portia the large eyes (diameter 0.8
mm) are the antero-medians, in Dinopis they are the postero-medians (diameter 1.3 mm).

[Fig. 5.18 panels: jumping spider (Plexippus), left; ogre-faced spider (Dinopis), right; axes dorsal, lateral, anterior.]

Fig. 5.18 Fields of view of jumping spiders and ogre-faced spiders (see Fig. 5.17) showing the
way different eyes are used for different purposes. The diagrams show the disposition of eyes
on the prosoma (inserts), and the fields of view of the three secondary eyes, which detect
movement of the prey. These fields are represented on the surface of a globe with the spider at
its centre. In the jumping spiders (left ) the antero-lateral (AL) and postero-lateral (PL) eyes
detect potential prey, which is then identified by the high-resolution principal eyes, whose
retinal fields of view are shown here hatched (the retinae of these eyes scan, as indicated on
Fig. 5.19a). In the very nocturnal Dinopis the postero-median (PM) fields overlap almost
completely and presumably pool their signals. Dinopis typically hunts from a downward-
pointing position above the forest floor, and the AL and PL eyes image the field behind (above)
the spider, presumably watching for potential predators. The antero-median (AM) fields in
Dinopis are similar to the PM’s.


Fig. 5.19 The principal eyes of jumping spiders. (a) Horizontal view of opened prosoma (right
side) showing the long tubular principal eye and smaller lateral eyes. The retina of the principal eye
is moved by six muscles in two bands (dorsal muscles in black, ventral stippled). These move the
eye around three axes, and in the horizontal plane each can move over the arc shown by the thick
arrow (see also Chapter 9). (b) Horizontal section of right retina, showing the four tiers of
receptors. The tiering is thought to compensate for both focus and chromatic aberration. (c) The
distribution of receptors in layer 1. The highest density is in the centre, giving the eye a distinct
acute zone. Modified from Land (1985).

and more than five times smaller than the equivalent angle in the ‘best’
insect, a dragonfly. Others have modest resolution but enormous light gath-
ering power. Some spiders of the genus Dinopis, which catch cockroaches
in forests at night, have eyes up to 1.4 mm in diameter, comparable with a
small rodent (Figs. 5.17 and 5.18). However, the majority of web-building
spiders have rather poor eyesight. The principal eyes usually do form low-
resolution images, but the secondary eyes have strange unfocused lens-
mirror combinations. These are certainly involved in navigation, using the
sun and other celestial cues, but exactly how they work is still a mystery
(Land 1985).
The largest eyes, and the simplest to understand from an optical point
of view, are found in spiders which hunt their prey by sight rather than
using webs as traps (Fig. 5.17). These include the families Salticidae (jump-
ing spiders), Lycosidae (wolf spiders), Thomisidae (crab spiders), Sparassidae
(huntsmen), and Dinopidae (ogre-faced spiders). Of these the jumping spi-
ders undoubtedly have the most acute vision, and the most sophisticated
visual system (Fig. 5.19). They are diurnal hunters that stalk their prey
(usually insects) in much the same way that a cat stalks a bird. They turn
towards moving objects, directed by the secondary eyes, and then track
them using both the forward-pointing principal eyes and the antero-lateral
eyes (Zurek et al. 2010). Oscar Drees studied these spiders in the 1950s,

and found that the principal eyes were also responsible for distinguishing
between prey and potential mates, and that this judgement was made on
the basis of the geometry of the leg pattern of the target animal (Forster
1985). Not surprisingly, in view of the need for fine discrimination, it is the
principal eyes that are largest in salticids, with a corneal diameter of 380
μm and a focal length of 767 μm (F-number about 2) in a moderately large
species, Phidippus johnsoni (Land 1985). In Portia fimbriata, and probably
other species, the focal length is increased by a telephoto arrangement sim-
ilar to that in hawks (Fig. 5.12; see Williams and McIntyre 1980). By con-
trast, the postero-lateral (secondary) eyes, which detect movement over a
field of 135°, have a corneal diameter of 300 μm and a focal length of 254
μm. In addition, the receptors are narrower in the principal eyes, the small-
est separation being 2.0 μm compared with 4.5 μm in the postero-lateral
eyes, corresponding to angular separations of 9’ and 1°, respectively. The
principal eyes are specialized in two other ways. First, they each have a
very narrow field of view (about 5° horizontally by 20° vertically), but this
is offset by the fact that they ‘scan’ targets, with a complex pattern of eye
movements involving lateral, vertical, and rotational movements of the reti-
nae (see Chapter 9). Unlike vertebrate eyes, the lens itself remains still: it is
only the retina that moves. Second, the retina is arranged in four layers,
one behind the other. These animals have good colour vision (Plate 4), and
this arrangement allows each visual pigment to be situated at the right dis-
tance from the lens to compensate for the longitudinal chromatic aberration
of the optics. It may also allow objects at different distances to be focused
on different layers, and so act in lieu of an active accommodation system.
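
The angular values quoted for Phidippus follow directly from the receptor spacings and focal lengths given above, as the following quick check shows.

import math

def interreceptor_angle_deg(spacing_um, focal_length_um):
    # Small-angle approximation: angle = spacing / focal length (radians).
    return math.degrees(spacing_um / focal_length_um)

principal = interreceptor_angle_deg(2.0, 767.0)        # antero-median (principal) eyes
postero_lateral = interreceptor_angle_deg(4.5, 254.0)  # postero-lateral eyes

print(round(principal * 60), "arc-minutes")    # about 9', as quoted
print(round(postero_lateral, 1), "degrees")    # about 1 degree
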
Of all the eyes considered in this section, the principal eyes of salticids
are probably the only ones in which the full optical resolving power of the
corneal lens is exploited. Like the human eye, there is a close match between
receptor spacing and the diffraction limit, meaning that the eyes are opti-
mized for vision in bright light. Their light-gathering power is correspond-
ingly low, with a calculated sensitivity (Chapter 3) similar to that of diurnal
insects. One of the most attractive features of the salticid visual system is
its compactness. By confining high resolution to one pair of narrow, long
focal length eyes, whilst using much smaller eyes for peripheral vision
which requires lower resolution, jumping spiders have saved a great deal of
space. If the same eye performed both tasks (as in vertebrates), its volume
would be at least ten times greater.
In contrast to the diurnal salticids, wolf spiders are mainly crepuscular
or nocturnal hunters. Four of the secondary eyes are much larger than the
rest, with corneal diameters of up to 0.4 mm in Arctosa variana, and have an
F-number of 1 or less (Plate 3). The inter-receptor angle is 1–2°, which is
similar to the eyes of many insects, and is nowhere near the diffraction

limit. Lycosids typically hunt by pouncing on their prey in a single, very
rapid combined jump and turn, for which the four posterior eyes are cer-
tainly responsible. Thus these eyes function mainly as low-light movement
detectors for locating prey, and probably predators as well. Besides having
a wide aperture which would help vision at low intensities, the eyes also
possess a reflecting tapetum which has the function of doubling the effec-
tive length of the receptors (see Chapter 6). It consists of many layers of
very thin crystals (probably guanine) which form a long ribbon beneath the
receptors (see Chapter 6, Fig. 6.9). Overall, the secondary eyes of lycosids
are about 100 times more sensitive (S, see Chapter 3) than their salticid
equivalents. The principal eyes, however, are relatively small, they lack a
tapetum, and are probably not involved in prey capture, although they do
seem to be concerned with orientation to the pattern of polarized light in
the sky.
The largest eyes of any spider, and probably the largest simple eyes of
any land invertebrate, are found in the genus Dinopis (Fig. 5.17). As men-
tioned above, they are nocturnal hunters. They ambush insects passing
beneath them by pinning them to the substrate with a net of sticky silk—
rather like a Roman retiarius gladiator. The trigger for this action is visually
detected movement. Here the specialized eyes are the postero-medians,
with corneal diameters of up to 1.4 mm, a focal length of 0.8 mm, and an
extraordinary F-number of about 0.6 (Blest and Land 1977). The severe
spherical aberration of an optical system of this size and aperture is coun-
teracted in part by the lenses having a double structure—a low index outer
layer surrounding a more dense core—the core itself behaving as a graded-
index lens as in fish eyes (Chapter 4). The receptors are also huge, with
receptive segments 20 μm wide and 55 μm long during the day, lengthen-
ing to twice this in the dark. The other remarkable feature of the receptors
is that during the day the microvilli are almost completely resorbed into
the proximal part of the cell and reconstituted to fill the rhabdomeres each
night. This trick, apparently for protecting the photopigment during the
day, is quite common throughout the arthropods. The net effect of these
various heroic adaptations is that the sensitivity of these eyes is enormous.
Compared with a salticid like Phidippus, the sensitivity (measured as the
number of photons absorbed per receptor, for a given field luminance; see
Chapter 3) is roughly 2000 times greater, although with an inter-receptor
angle of about 1.5°, the resolution is about ten times coarser.
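The order of magnitude of that comparison can be sketched with the sensitivity expression from Chapter 3, in which S scales as the square of the aperture-to-focal-length ratio times the receptor area (S ∝ D²(d/f)²), ignoring receptor length and absorption. The Dinopis values below are those quoted above; the Phidippus values are purely illustrative assumptions, not figures from the text:

    def relative_sensitivity(aperture_mm, focal_length_mm, receptor_diameter_um):
        """Relative sensitivity, proportional to D^2 * (d/f)^2 (see Chapter 3);
        receptor length and photopigment absorption are ignored."""
        d_mm = receptor_diameter_um / 1000.0
        return aperture_mm ** 2 * (d_mm / focal_length_mm) ** 2

    dinopis = relative_sensitivity(1.4, 0.8, 20)      # values given in the text
    phidippus = relative_sensitivity(0.38, 0.77, 2)   # assumed, for illustration only

    print(round(dinopis / phidippus))                 # of the order of a thousand

With these assumed salticid numbers the ratio comes out near 10³; the ‘roughly 2000 times’ quoted above also reflects receptor length and absorption, which this simple ratio leaves out.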
Web-spinning spiders have principal eyes that are image-forming,
although of relatively low resolution. The secondary eyes, however, are
quite different. They typically have weak lenses that form an image well
behind the retina, which is itself rather long and thin. However, behind the
retina is a tapetum, usually referred to as ‘canoe-shaped’, which reflects
light back and probably focuses it into a very astigmatic line image on the
retina. The impression one has is that these eyes are not for ‘seeing’ in the
conventional sense, and indeed there is no indication that movement in
their field of view elicits behaviour. They seem to be concerned instead
with detecting the direction of the sun and other celestial cues. In some
species the principal eyes are responsible for detecting the pattern of polar-
ization in skylight, and from this the sun’s direction when it is not visible
(Görner and Claas 1985). However, in the wandering spider Drassodes, the
tapeta of the postero-median eyes have built-in polarizing properties, and
these eyes provide the spiders’ polarization compass. They use these eyes
to find their way back to the nest after foraging trips (Dacke et al. 1999;
Mueller and Labhart 2010).

Corneal eyes in insects


Insect simple eyes, or ocelli, fall into two main groups: the larval eyes of
holometabolous insects, and the dorsal ocelli present in most winged adult
insects. In both, the curved air–tissue cornea interface is the main refracting
surface, although as in vertebrate eyes, a lens of some kind often augments the
optical power of the system and aids in the formation of the image.
In insects with a distinct larval stage, the ocelli are the only eyes the lar-
vae possess. They vary greatly in size and complexity. The larvae of flies
have no more than a small group of light-sensitive cells on either side of the
head. Lepidopteran caterpillars, however, have ocelli with lenses, and a
structure resembling that of a single ommatidium from a compound eye. In

Fig. 5.20 Larval ocelli in insects. (a) The least complex is in the lepidopteran Isia where each of
the 12 ocelli resembles a single ommatidium from a compound eye. (b) The ant-lion Euroleon has
six pairs of ocelli which have a more extended retina of 40–50 receptors. (c) The single pair of
ocelli of the sawfly Perga are more eye-like, with an extended retina and an inter-receptor angle of
about 5°. Scale bars are all 0.1 mm. Redrawn from various sources.

each ocellus in Isia, seven receptors contribute to a two-tiered rhabdom con-
taining the photopigment (Fig. 5.20a). There seems little possibility of spatial
resolution within each ocellus, but as it appears that the fields of view of the
12 ocelli do not overlap, they are capable of providing a 12 ‘pixel’ sampling
mosaic of the surroundings. These ocelli do, however, resolve colour; three
spectral types of receptor have been found in butterfly larval ocelli.
The ant-lion Euroleon (Neuroptera) also has six ocelli on each side of the
head, borne on a small turret (Fig. 5.20b). Unlike caterpillars, however, each
has an extended retina of 40–50 receptors, giving inter-receptor angles (Δϕ)
of 5–10°. Although this resolution is not impressive, it is presumably enough
to allow the animals to detect their prey—moving ants at a distance of
about 1 cm. Sawflies (Hymenoptera) have larvae with a single pair of ocelli,
each with an in-focus retina covering a hemisphere (Fig. 5.20c). The rhab-
doms in Perga are made up of the contributions from eight receptors (much
as in an ordinary compound eye) and are spaced 20 μm apart, giving an
inter-receptor angle of 4–6°. These larvae are vegetarian, and it seems that
the main function of the ocelli is to direct the larvae to their host plants.
However, Perga larvae will also track moving objects with their head, and
defend themselves by spitting regurgitated sap.
Particularly impressive larval ocelli are found in tiger beetles (Cicindela).
These have a lifestyle similar to ant-lions, ambushing insect prey as they
pass their burrows (Fig. 5.21). There are again six ocelli on each side of the
head, but two are much larger than the others. The largest has a diameter
of 0.2 mm and a retina containing 6350 receptors. The inter-receptor angle


Fig. 5.21 Eyes of tiger beetle larvae (Cicindela). These are the largest and best resolving simple
eyes in insects, and are used to spot prey (usually ants) which are then caught and pulled down the
burrow: (a) head, (b) larva in ambush position, (c) section of largest ocellus showing cornea, lens
formed from thickened cuticle, and retina. Inset shows part of retinal mosaic. The inter-receptor
angle is about 1.8°. Redrawn from Friederichs (1931).

is about 1.8°, comparable with or better than the resolution of the compound
eyes of most adult insects. This raises the interesting question as to why the
insects did not retain eyes like this into adult life, a topic we will explore
further in Chapter 7. The predatory larvae of water beetles (Acilius,
Thermonectus) have equally large and intriguing ocelli (Buschbeck et al.
2007; Stowasser et al. 2010). Because of the scanning behaviour associated
with their operation (Fig. 9.14) a discussion of their structure and function
will be postponed until Chapter 9.
Adult insects that fly typically have three simple eyes on the top of their
heads. These dorsal ocelli resemble larval ocelli in possessing a lens and
(like sawfly larvae) an extended retina (Fig. 5.22), but they are not embryo-
logically related to the larval eyes. Some dorsal ocelli have tapeta, and some
a mobile iris. They each have a wide field of view of 150° or more, and may
have as many as 10 000 receptors. So far all this suggests that these are
‘good’ eyes, like those of hunting spiders. However, there is a problem.
Everyone who has tried to get to grips with the optics of these eyes agrees
that they are profoundly out of focus, with the retina much too close to the
lens (Goodman 1981). For example, in the blowfly Calliphora the receptors
extend from 40–100 μm behind the lens, but the focus is at 120 μm. It
appears that this is not a mistake; dorsal ocelli are deliberately defocused.
What then are they for? A defocused camera is a pretty useless object if
detail is to be recorded, so under what circumstances might one not want


Fig. 5.22 The dorsal ocelli of the locust Schistocerca gregaria. (a) Position of the frontal and
lateral ocelli on the head. (b) Section of an ocellus, showing the pigmented iris, receptor layer, and
layers of neuropil from which a few large axons emerge. The focus positions (light and dark
adapted) are very much deeper than the receptors, so this is not an eye that makes use of image
detail. (c) The fields of view of the three ocelli, showing how they straddle the horizon. Redrawn
from various sources.

detail? There have been many suggestions over the years, but recent studies
mainly support the idea that the ocelli are horizon detectors, involved in
enabling an insect to make fast corrections for pitch and roll (Stange 1981).
The defocus then makes sense; high spatial frequency clutter such as leaves
and branches will be removed, allowing the receptors to respond to changes
in the overall distribution of light in the sky. The idea that these ocelli con-
tribute to flight equilibrium is supported by the fact that the receptors con-
verge massively onto a relatively few second-order neurons, and that these
project directly into the optomotor system.
The dorsal ocelli of dragonflies differ from those of other insects, in that
they do produce images within the retina. The lens of the median ocellus
in Hemicordulia and Aeschna is elliptical, with a longer horizontal axis, and
also a greater radius of curvature in the horizontal than the vertical plane
(Berry et al. 2007). This asymmetry, and other structural features of the
lens, produces an image which has much better resolution for features that
are elongated horizontally rather than vertically. It thus seems that dragon-
fly ocelli act as horizon detectors, as do those of other insects. But here part
of the detection occurs within each ocellus, rather than as a result of the
overall light balance between the three non-resolving ocelli.
Finally, there are a very few examples of simple eyes that seem to be
derived from the compound eyes by reduction. The most bizarre are those
of male scale insects (Eriococcus: Homoptera). A single lens eye occupies the
place where each compound eye would have been, and it contains about 500
receptors, giving a quite respectable value for Δϕ of 4.7°. Even stranger, the
rhabdoms, which in all other insects are composed of microvilli, here con-
tain flattened plates resembling those of vertebrate rods. As pointed out by
Paulus (1979): ‘The possibility of such modifications demonstrates how easily
great changes in organ structure can occur in the evolution of groups’. But
it is a good thing that evolution doesn’t play tricks like this too often.

Summary
1. Life on land provides animals with a potential new refracting surface—
the cornea. For a spherical cornea the nodal point is at the centre of
curvature, and with an aqueous fluid behind it the focal length is about
four times the radius of curvature.
2. Large eyes are associated either with high resolution or high sensitivity.
Nocturnal eyes have larger lenses, relative to the size of the eye, than
diurnal eyes.
3. Most land vertebrates have a deformable lens that allows the eye to focus
at different distances (accommodation). In humans the optical power
(1/focal length) of the cornea is about twice that of the lens.
4. The human cornea has an elliptical profile that corrects for axial spher-
ical aberration.
5. Opening the pupil in man only produces about a tenfold increase in
light capture. The pupil’s main function is to bring about an optimal
balance between sensitivity and resolution. In the gecko, however, the
slit pupil can change the brightness of the retinal image by up to 1000
times.
6. Raptorial birds (hawks and eagles) have the highest resolution of any
animal, 2–3 times higher than man.
7. The distribution of retinal ganglion cells to some extent reflects an ani-
mal’s ecology: ‘flat-land’ animals such as rabbits have a narrow horizon-
tal band of high ganglion cell density (the visual streak).
8. In animals that move between air and water the change in refractive
power of the cornea presents a problem. Some have solved it by having
a flattened cornea with little power in either medium, others by squeez-
ing the lens into a bony iris to produce a ‘blip’ of high curvature. The
‘four-eyed’ fish Anableps has a lens with different radii of curvature for
simultaneously looking above and below the meniscus.
9. The spiders are the only other major group whose main organs of sight
are single-chambered corneal eyes. The highest resolution is found in
jumping spiders (Salticidae) and the highest sensitivity in ogre-faced
spiders (Dinopidae). The eight eyes of spiders are of two different struc-
tural types; which eyes are used for what purpose varies between dif-
ferent families.
10. Many larval insects have simple corneal eyes, or ocelli. In some preda-
tory larvae the resolution is excellent, but in general it is poor. These
eyes are replaced in the adults by compound eyes. Flying insects have
unfocused dorsal ocelli (usually three) which provide a system of flight
stabilization based on horizon detection.
6 Mirrors in animals

Mirrors seem unlikely things to find in Nature, as living creatures do not pro-
duce naturally shiny metals, such as silver or aluminium. Nevertheless, mir-
rors of various kinds are found performing many functions throughout the
animal kingdom. The two most familiar to us are probably the silvery fish that
we see in the sea or on the fishmonger’s counter, and the eyes of a cat in the
headlights of a car. Natural mirrors are not metallic, but are typically made of
multilayers of material with alternating high and low refractive indices (for
example, air and chitin in insects, water and guanine in fish), and they rely for
their effectiveness on interference between the light reflected from the upper
and lower surfaces of each layer (Land 1972). We will explore the optical con-
struction of the mirrors later in this chapter, but an interesting and important
consequence of the interference is that many natural mirrors are coloured,
and this colour can be put to good use in display and camouflage of various
kinds. This chapter departs from the general layout of the rest of the book in
that it explores the functions of mirrors in structures other than eyes. These all
relate to vision, however, and share the same basic mechanism of reflection
with mirrors that are found in eyes.

Mirrors in eyes
As with lenses, the value of mirrors as optical components depends on their
ability to alter the direction of light rays. The law of reflection states that
the angle an incident ray makes with a normal (right angle) to the surface is
the same as the angle made by the reflected ray and the normal (Fig. 6.1a).
One of the first recorded applications of this capacity to redirect light was
Archimedes’ scheme to defend Syracuse against the Roman fleet, in which


Fig. 6.1 (a) The law of reflection, for a specular (mirror-like) surface. The angle of reflection r
equals the angle of incidence i. (b) A convex reflector, such as a car wing mirror, produces an
erect virtual image of a distant object at I (v), behind the mirror. I (v) is located half-way between
the centre of curvature C, and the mirror surface. (c) A concave mirror, such as a shaving mirror,
produces a real inverted image of a distant object at I (r), halfway between the centre of curvature
C and the mirror surface.

he devised a system of giant mirrors to concentrate the sun’s rays on the
enemy’s sails, to set them alight. The same property also makes it possible
to use curved reflecting surfaces in the formation of images. Convex sur-
faces produce diminished virtual images (i.e. images that cannot be thrown
onto a screen, such as the image in a car wing mirror), but concave mir-
rors can produce real images that can be captured in various ways (Fig.
6.1b and c). Newton was the first to exploit these image-forming powers in
his reflecting telescope of 1671, and since then concave mirrors have been
the preferred imaging system for large, high magnification astronomical tel-
escopes, as they are easier to construct than large lenses. Curiously, there
are only two good examples of image-forming concave reflectors in eyes: in
scallops and in a deep-sea fish.

The image-forming reflector in the eye of the scallop


Bivalve molluscs are perhaps not the kind of animal one would look to for
optical surprises, or even much in the way of eyesight. However, in this one
would be mistaken. A number of genera have evolved optical structures, not to
‘see’ in any complex sense, but to enable them to detect the approach of preda-
tors. Ark shells (Arca, Pectunculus) have basic but effective compound eyes in
the mantle surrounding the opening of the shells (Nilsson 1994), which evolved
quite independently of the more familiar compound eyes of insects and crus-
taceans (see Fig. 7.2c). And scallops of the genus Pecten and its close relatives
have evolved unique concave reflector eyes for the same purpose (Land 1965).

Fig. 6.2 The eye of the scallop. (a) A number of eyes peering out between the tentacles of the
mantle which lines each shell. (b) Frozen section of an eye showing the large ‘lens’, beneath which
is a thick retina occupying the whole of the space between the lens and the hemispherical back of
the eye, which is lined with a reflecting layer, the argentea. The eye is 1 mm in diameter. (c) Silver-
stained section of the retina showing the two photoreceptor layers, distal above and proximal
below. The dark structures at the top are the ciliary photoreceptive membranes of the distal cells,
and those at the bottom the microvillous membranes of the proximal cells.

Scallops have 60–100 small (1 mm) rather beautiful eyes peeping out
between the tentacles of the mantle that protects the gape between the two
shells (Fig. 6.2a, Plate 1). Few know of their existence, because this inedible
part of the animal is usually thrown away by the fishmonger. A quick look
at a section of a scallop’s eye (Fig. 6.2b) shows it to be quite like a fish eye. It
has a single chamber, so is camera-like rather than compound; and there is
a lens of sorts, and behind this a thick two-layered retina filling the space
between the lens and the back of the eye (Fig. 6.2b and c). A problem with
this fish-eye interpretation of the section is that there is no space between the
lens and retina, and had this been a fish eye with a ‘Matthiessen’ lens there
should have been a space of at least 1.5 radii for the converging light rays to
focus across (the focal length of a fish lens is ~ 2.5 lens radii; see Chapter 4). It
turns out that the ‘lens’ is jelly-like, with a low, homogeneous refractive index,
and a resulting focal length that would put the focus a long way behind the
back of the eye (Fig. 6.4a). There is no way that this could work in the same
way as an eye with a fish lens. A more remarkable observation is that when
you look into a scallop’s eye through a dissecting microscope, you do indeed
see an image: an inverted image of yourself looking through a microscope!
(Fig. 6.3). It was this observation that finally led to the solution of the optical
enigma. The back of the scallop’s eye is accurately spherical and lined with a
green-reflecting mirror, the ‘argentea’, so named for its silvery appearance. The
image one sees is formed by this reflector, with a small amount of help from
the lens (Fig. 6.4a). A calculation of the image position showed that it fell on
the part of the retina just below the back surface of the lens; this is the region
occupied by the photoreceptive parts of the outer, distal, layer of receptors.
Thus this is an eye based on a mirror, not a lens.
Concave mirrors form images on the same side of the reflecting surface
as the object (Fig. 6.1c), and if they are spherical they have a focal length ( f )
equal to half the radius of curvature (r), i.e.:

f = r/2. (6.1)

This means that the image of a distant object will be situated half a radius
of curvature in front of the mirror. In the scallop it is actually a little nearer
to the mirror than this, because the lens has already converged the light
slightly. For nearer objects the appropriate equation for working out image
position (analogous to eqn 5.5 for refracting surfaces) is:

1/v + 1/u = 1/f (6.2)

where u and v are the object and image distances. In this case object, image
and focal length are all on the same side of the reflecting surface, so they

Fig. 6.3 Images in scallops’ eyes. Left: self-portrait of the author (MFL), whose hand is holding the
microscope objective used to photograph the eye. Right: a grid of 3-mm squares, 15 mm from the eye.

are all positive in the sign convention. This formula is not of much interest
to the scallop, which will be concerned to close its shells when potential
predators are a metre or more away, and for an eye this size a metre is
practically infinity.
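That point is easy to confirm from eqns 6.1 and 6.2. The sketch below uses a nominal mirror radius of 0.8 mm (an assumed value of about the right size for a 1-mm eye, not a measurement) to show how little the image plane moves between a grid at 15 mm and a predator a metre away:

    def mirror_image_distance_mm(radius_mm, object_distance_mm):
        """Image distance v for a concave mirror, from f = r/2 (eqn 6.1)
        and 1/v + 1/u = 1/f (eqn 6.2)."""
        f = radius_mm / 2.0
        return 1.0 / (1.0 / f - 1.0 / object_distance_mm)

    r = 0.8   # assumed mirror radius of curvature, in mm
    print(mirror_image_distance_mm(r, 1000.0))   # object at 1 m   -> ~0.400 mm
    print(mirror_image_distance_mm(r, 15.0))     # object at 15 mm -> ~0.411 mm

The focus shifts by only about 10 μm over that whole range of distances, which is why a metre really is ‘practically infinity’ for an eye of this size.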
One might ask, why does this eye have a lens at all? It seems that the
lens probably does have a function, related to the strange domed shape of
its front surface. Just like spherical refracting surfaces, concave mirrors suf-
fer from spherical aberration (over-focusing of rays at a distance from the
axis). This is routinely ‘cured’ in astronomical telescopes with an additional
lens called a ‘Schmidt corrector plate’ whose complex profile manipulates
the beam entering the reflector so that there is more focusing power in the
centre, near the axis, and less at the periphery. If one works out a profile for
a scallop lens that would enable it to make the same kind of correction, it
comes to look very much like the profile of the real lenses (Fig. 6.4b), with a
high curvature in the centre, flattening towards the periphery (Land 1965).

Fig. 6.4 (a) Image formation in the scallop eye. The lens has very little power, and on its own
forms a very deep-lying image. The reflecting argentea forms an image just below the lens, on the
region of the cilia of the distal receptor cells (see Fig. 6.2c). (b) The probable function of the dome-
shaped lens is to correct the spherical aberration of the mirror. The diagram shows a constructed
front lens profile that brings all parallel rays to a point focus. Apart from the most peripheral part
of the lens, which in life is covered by pigment, the profile is very similar to that of real lenses. The
principle is similar to that of a Schmidt corrector plate in a reflecting telescope.

In 1938, the pioneer neurophysiologist H.K. Hartline had recorded from
single fibres in the nerves leaving the two layers of the scallop retina (Fig.
6.2c). He found that the distal layer (where the image is formed) only gave
responses to a light going off, and the proximal layer (next to the mirror
with no image) to light going on. Much later, in the prime era of electron
microscopy in the 1950s and 1960s, it turned out that the photoreceptive
regions of cells in the two layers were anatomically very different: the dis-
tal cells’ photoreceptive structures were made of splayed-out cilia, but the
proximal layer had an arrangement of microvilli much like those found in
the photoreceptors of other molluscs and in arthropods. Scallops do orient
and swim to brighter or darker parts of the environment, presumably using
the weakly directional information supplied by the proximal cells, but their
more impressive behaviour is to shut when they see a distant object move.
This is a behaviour well-known to divers swimming over sandy bottoms.
Since distant objects cast no direct shadow, this response must result from
changes in the image on the retina itself. What must be happening is that
the cells of the distal retina are stimulated to give ‘off’ responses when
either the leading edge of a dark object, or the trailing edge of a light object,
crosses the retina in the reflected image.

The stepped-mirror eyes of Dolichopteryx


In Chapter 4 we discussed how some deep-sea fish have evolved double
eyes, with a ‘normal’ eye looking towards the surface and a secondary eye
pointing downwards to detect luminescent objects in the dark waters below.
Recently a very unusual example of a secondary eye has been found in
which the optical system is not a lens or lens-pad (see Chapter 4, Fig. 4.11),
but a concave mirror of unique design (Wagner et al. 2009). Dolichopteryx
longipes is a rarely encountered mesopelagic fish with double eyes, the larger
of each pair directed upwards and the smaller downwards (Fig. 6.5a and
b). The secondary eye is a diverticulum of the larger, but in contrast to the
main eye it forms an image using a mirror. Unlike the scallop argentea, this
is not simply a ‘front-silvered’ mirror, but it is made up of distinct stacks of
reflecting platelets, probably composed of guanine crystals. These platelets
make angles to the substrate membrane that increase in a regular manner
from top to bottom of the structure (Fig. 6.5c).
It is not easy to produce a mirror that gives a good image over a wide
angle (in this case about 48°). A spherical surface, unless corrected with a
lens, has severe spherical aberration. A parabolic surface gives an excellent
image for a point on its axis, but the image quality deteriorates very rap-
idly away from that point. No single surface can do the job. By producing
a stepped surface the Dolichopteryx mirror has largely solved the problem.

Fig. 6.5 The double eyes of the spookfish, Dolichopteryx longipes. (a) Head from above showing
the main eyes facing upwards with secondary eyes on each side. The lenses of the main eyes are
3–4 mm in diameter. (Photograph by Ron Douglas.) (b) Diagrammatic transverse section showing
the retinae (dotted) of the principal and secondary eyes, and the mirror, which accepts light through
a ventral transparent window. The secondary eye has a downward-pointing field of view about 48°
wide. (c) The mirror has a concave overall profile, but is made up of stacks of crystals which make
increasing angles with the backing membrane. It is this tilting of the mirror elements that makes it
possible to produce a focused image over a wide angle. Based on Wagner et al. (2009).

The introduction of a further degree of freedom—the variable angle of the
platelets—allows image quality to be optimized over a much wider angle
than could be done with any single surface. Stepped reflecting surfaces are
common in fish, and occur in tapeta (see Fig. 6.8) and in the scales of sil-
very fish (see Fig. 6.17c). However, the Dolichopteryx eye is the only known
example of such a surface being used for image formation, and indeed the
only case of image formation by a mirror in a vertebrate.

Other mirror eyes


The mirror design has not been popular. Although it has the advantage of
compactness and high light-gathering power, it does have a very serious
weakness: it inevitably produces a low-contrast image. The light reaching
the image (in scallop eyes) has already passed through the retina unfo-
cused before the mirror returns it as a focused image. This reduces the
image contrast to roughly one-half that in an equivalent lens eye, which
would be like looking through a fog.

There are many very small eyes that incorporate a mirror, although
none of them forms an image of a quality comparable with that in the scal-
lop eye. The cockles (Cardium) have eyes that are backed by a mirror, but
both their small size and the small number of receptors precludes all but
the most rudimentary imaging powers. Some crustaceans, particularly the
copepods and ostracods, have small median eyes consisting of three cups,
often backed by a reflector, and each containing a handful of receptors.
Again, the reflected image may provide some directionality to the fields
of view of the receptors, but not very much. The best of these is probably
the ostracod Notodromas, where ray tracing indicates that a good image is
formed on a retina of 18 receptors in each of the lateral cups and nine in
the ventral cup.
Although most ostracods are tiny bivalved aquatic crustaceans, with an
unbroken fossil record going back to the Cambrian, there are some mon-
sters with extraordinarily developed mirror eyes. These are found in the
deep-sea genus Gigantocypris (1 cm across compared with 1 mm for most of
the others). This is Alistair Hardy’s description of them:
The paired eyes have huge metallic-looking reflectors behind them,
making them appear like the headlamps of a large car; they look
out through glass-like windows in the otherwise orange carapace
and no doubt these concave mirrors behind serve instead of a lens
in front (Hardy 1956).

Fig. 6.6 (a) Parabolic reflecting eyes of the deep-sea ostracod Gigantocypris. The animal is about
10 mm across. (Photograph by Dr M.R. Longbottom.) (b) Top and side view of the Gigantocypris
eye showing the main part of the retina; the hatching shows the approximate orientation of the
receptors. The dashed lines enclose much thinner retinal regions. Scale bar 1 mm. (c) Astigmatic
line image resulting from the different focal lengths of the parabolic and circular profiles of the
reflector, shown in (b). The image lines roughly fit the very long receptors (750 by 25 μm) in the
deep region of the main retina.

Hardy made water-colour sketches of Gigantocypris and many other deep-
sea animals, and he was undoubtedly right about the optical importance of
the mirror; but as an imaging system it is certainly very odd. The mirrors
are not spherical, but parabolic, and the retinae are not flat sheets as is
usual, but condensed into a shape that looks more like a light-bulb than a
retina (Fig. 6.6b). The curvature of these mirrors in the horizontal and verti-
cal planes is different, which means that the image of a point source will
be astigmatic: it will not be a point, but a line at right angles to the mirror
(Fig. 6.6c; see Land 1978). The receptors are also elongated in this direction,
and so may have some capacity to resolve these linear images. But every-
thing suggests that the function of these eyes is to concentrate as much
light as possible from directions to the left or right of the body axis, rather
than producing an image in any conventional sense. At a depth of 1000 m
there is no remaining light from the sky (Lythgoe 1979; Denton 1990), so
the function of these eyes must be to assist predation by tracking down the
luminescent organisms which are common at these depths.
Another enigmatic mirror eye is found in the deep-sea amphipod
Scypholanceola (Fig. 6.7). This rarely encountered crustacean lives in a simi-
lar environment to Gigantocypris, and probably uses its eyes for the same
purpose. The mirrors are of a very strange shape, looking much more like
ears than eyes. There is a pair of these on each side: the upper one is
a half-cone rather like a rabbit’s pinna, pointing obliquely upwards, and
the lower mirror is shorter and more cylindrical, and points forwards. The
retinae are open patches of receptors at the base of each reflector. Attempts

Fig. 6.7 (a) The mid-water hyperiid amphipod Scypholanceola. The double eyes are shown
stippled. (b) Pinna-like reflectors of the double eyes of Scypholanceola. The retina is the J-shaped
white structure at the base of the reflectors. The height of the whole eye pair is about 2 mm.

to model these eyes suggest that the mirrors are efficient light-collectors,
capable of indicating at the very least the presence and vague direction of
a self-luminous object. As in Gigantocypris, however, there is no possibility
of an image in the usual sense.
Eyes that use mirrors to produce images are also found in the superposi-
tion compound eyes of shrimps, lobsters and their relatives. These will be
considered separately in Chapter 8.

Tapeta
A great many eyes have mirrors behind the retina, but unlike the scal-
lop mirror their function is not to form an image. These structures (the
correct name is ‘tapetum lucidum’ meaning silvery carpet) are a common
feature of the eyes of vertebrates and arthropods. They are found especially
in animals that live in deep water or are active at night. Their function is
to reflect the light already focused by the lens, and return it through the
retina, giving the retina a second chance of capturing photons missed on
the first pass. Because the tapetum is in the focal plane of the lens it has no
effect on the optical system of the eye, and the reflected light is returned
through the lens as a narrow beam (Fig. 6.8a), visible only from the direc-
tion of the original illumination.
Tapeta in vertebrates are made from a wide variety of materials, all hav-
ing in common a high refractive index. The proportion of the incident light
that non-metallic surfaces reflect is closely related to the amount by which
their refractive index differs from that of the surroundings; glass, for exam-
ple, is quite reflective in air, but reflects hardly at all in water. Thus one
finds crystals of guanine with a refractive index (n) of 1.83 in the tapeta of
the eyes of many fish, riboflavin in the tapeta of bush-babies, and rods con-
taining the zinc salt of cysteine in the tapeta of cats (see below, Fig. 6.13f).
Many ruminants have a ‘tapetum fibrosum’ made of collagen, the reflect-
ing properties of which can be appreciated from the white gleam of muscle
tendons. In some teleost fish the tapetum is made up of sub-micrometre
spheres of lipid or melanin.
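The dependence on refractive index difference can be put in numbers with the Fresnel reflectance formula that appears later in this chapter as eqn 6.4. In the sketch below the indices for glass and water are standard textbook values rather than figures from the text; the guanine value of 1.83 is the one quoted above:

    def interface_reflectance(n_a, n_b):
        """Fraction of light reflected at a single interface at normal incidence
        (Fresnel; eqn 6.4 later in this chapter)."""
        return ((n_a - n_b) / (n_a + n_b)) ** 2

    print(interface_reflectance(1.52, 1.00))   # glass in air     ~4.3 %
    print(interface_reflectance(1.52, 1.33))   # glass in water   ~0.4 %
    print(interface_reflectance(1.83, 1.33))   # guanine in water ~2.5 %

The tenfold drop from air to water is why glass ‘reflects hardly at all in water’, and why a high-index material such as guanine is needed to build an effective reflector inside an aqueous eye; even then many layers are required, as described later in this chapter.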
In bright light a tapetum is not needed, and there are a few instances of
tapeta that can be occluded as part of the light/dark adaptation process. In
many elasmobranch fish, for example, black pigment-containing cells migrate
over the surface of the reflecting platelets during the day and retreat at night
(Fig. 6.8b; see Walls 1942; Nicol 1989). Another feature of elasmobranch tap-
eta is the way the reflecting platelets are carefully angled, especially at the
edge of the retina, to ensure that all untrapped reflected light leaves the eye
through the lens (Fig. 6.8a); there is no point in having a tapetum if it scat-
ters light so much that it reduces the contrast of the image.

Fig. 6.8 (a) Alignment of the reflecting plates in the tapetum of an elasmobranch fish
Squalus acanthias. The mirrors are always at right angles to the centre of the image-forming
ray bundle, which, for peripheral bundles that only pass through part of the lens, means that
they make a steep angle with the retinal surface. Note that ray bundles always leave the eye
parallel to the direction that they entered it. (b) The occlusible tapetum in elasmobranchs.
In dark adaptation (right ) the pigment cells are withdrawn but in light adaptation (left ) they
migrate over the surface of the reflecting plates ( hatched ) cutting off the reflection. The retina
itself lies above the figure with light coming from the top of the page. Both figures from Nicol
(1989).

It might seem an advantage, from a construction point of view, for an
eye to have an inverse design. The curious—and seemingly misguided—
arrangement in the vertebrate retina, in which the receptors lie at the back
of the retina behind the neural layers (Fig. 4.8), makes it possible to ‘lay’ a
single reflecting carpet behind the retina, unencumbered by the need for
the axons of the retinal cells to pass through. In support of this, tapeta are
certainly uncommon in the eyes of cephalopod molluscs, which have right-
way-round (everse) retinae. Spiders, however, have solved this problem.
Many crepuscular spiders have tapeta, usually green-reflecting, made in
many cases of multilayers of guanine crystals. Some of the most beautiful
are in the wolf spiders and their relatives where the tapetum has a ‘gridi-
ron’ structure, with strips of reflector underlying each row of receptors (Fig.
6.9, Plate 3). In other spiders such as the huntsmen (Sparassidae) the recep-
tor axons simply penetrate the continuous tapetum. Tapeta are found in
compound eyes as well, especially in the refracting superposition eyes of
insects, particularly moths, and the reflecting superposition eyes of deca-
pod crustaceans such as the shrimps, crayfish, and lobsters. In these cases
the eyes produce a characteristic eyeglow when viewed from the direction
of illumination. The reason is the same as the glow from a cat’s or a spi-
der’s eye, in spite of the very different basic optical systems of these eyes
(see Chapter 8).

Fig. 6.9 Tapeta in the secondary eyes of lycosid spiders. (a) Diagram of the retina of a lycosid
spider, showing how the rhabdomeric (photoreceptive) region of each receptor ‘sits’ on a strip of
reflecting tapetum (modified from Baccetti and Bedini 1964). (b) The tapetum of a lycosid relative
(the ctenid Cupiennius salei ), photographed through the eye’s own lens. The tapetal strips are
clearly visible, as is the ‘join’ in the centre of the retina. Each small division of the tapetal strip
corresponds to a receptor, and the angle between receptors is about 1°, corresponding to a
physical width of about 8 μm. Another lycosid tapetum is shown in Plate 3.

Butterfly eyes also contain reflectors, but here the mirror is a tiny device
formed from the chitinous ridges of a tracheole (Fig. 6.13d), and is situated
immediately below each rhabdom (the photoreceptor structure). The colour,
which varies across the eye, can be seen transiently when the eye is illumi-
nated from the direction of viewing. The function of these mirrors, like the
tapeta of cats and moths, is to redirect light back through the photoreceptors.

Reflecting sunshades
Some animals have mirrors on the outside of the eye, whose function is to
keep out direct sunlight that would otherwise scatter within the eye. Some
shallow water fishes, such as rays, have a pigmented flap or operculum
across the top of the eye, acting as a sun-shade (Plate 1d). Cephalopods such
as cuttlefish have an iris with a similar function. However, many other fish
have a different solution, which is to use a multilayer mirror instead. These
mirrors generally have a green iridescence, and although they appear simi-
lar in different species, they are constructed in at least six different ways,
implying multiple origins (Lythgoe 1979). The mirrors are organized so that
light from above reaches the iridescent layers at a high angle of incidence,
which provides a very high reflectance. However, light from objects in the

Fig. 6.10 Nipple array covering the cornea of the eye of the butterfly Morpho rhetenor. Scale bar
1 μm. (Scanning electron micrograph courtesy of Pete Vukusic.)

surroundings arrives at close to normal incidence to the layers (that is, at right angles to them), and is
only weakly reflected. Thus the fish can see out, but sunlight can’t get in.

Anti-reflection coatings
The corneas of the eyes of terrestrial animals have a much higher refrac-
tive index than air, and so about 4 per cent of the incident light is reflected
from them. Not only is this light unavailable for photoreception, it can
often be seen as a highlight that would make the eye visible to a potential
predator. Lenses of optical instruments are routinely coated with a layer of
low-refractive index material (typically magnesium fluoride, MgF2), whose
optical thickness is an odd multiple of a quarter of a wavelength. This allows for a degree
of destructive interference between reflections from the air-coating and
coating-glass interfaces, and so increases the light available for transmis-
sion. Eyes of many insects, particularly lepidopterans, have evolved a dif-
ferent technique. The surface of each eye facet is covered with a hexagonal
array of tapered elements, known as corneal nipples (Fig. 6.10). These have
a height of 20–230 nm, and are separated by 180–240 nm. They are smaller
than the wavelength of light, and so do not affect refraction at the surface,
but they do affect reflection. Their effect is to produce a gradient of refrac-
tive index between air and the cuticle, effectively abolishing the interface.
The reduction in reflection, for the taller paraboloid nipples, is almost to
zero (Stavenga et al. 2006). This represents a gain in transmission, which
presumably aids vision, and it also means that the eye will not act as a
bright point in sun or moonlight, thus improving camouflage. Nipple arrays
are also found on the transparent wings of certain hawkmoths, making
them almost invisible even when fluttering.

The physical optics of animal reflectors


Reflectors found in nature are not metallic, but are made of structures in
which light is reflected from or scattered by arrays of elements of vary-
ing complexity. If the dimensions of these elements are comparable with
the wavelength of light, interference occurs, typically resulting in enhanced
reflection for some wavelengths and reduced reflection for others. The col-
ours that result often vary with the angle from which the surfaces are viewed,
giving rise to iridescence. The resulting colours are said to be structural, as
opposed to pigmentary. In recent years the word ‘photonic’ has come to be
applied to such structures. They can be classified according to their com-
plexity (Fig. 6.11). By analogy with crystal lattices, they can be said to have
periodicities that are one-, two-, and three-dimensional.
The commonest natural reflectors are one-dimensional multilayers (Fig.
6.11a). Their operation is closely related to the better-known phenomenon of
the brightly coloured reflections we see in soap bubbles and oil films, and
we will deal with these first. In thin films, some light is reflected from the
upper surface, and some from the lower surface. If, on re-emerging through
the upper surface, the light from the lower surface is in phase with the light
reflected directly from the upper surface (that is, the highs and lows of
the waves coincide), then the two beams reinforce each other, and the film
appears bright; if they are out of phase it appears dark (Fig. 6.12a).
In a soap bubble, the thinnest part of the film is always black. As the
thickness increases the film becomes a bright white, then passes through
a series of colours (known as Newton’s series, but not the same one as the
rainbow colours) that start off very vivid and slowly decrease in saturation
until the film becomes white again, when it is a few micrometres thick. It
is the first white band that is of particular interest from the mirror-making
standpoint, as it gives a high reflection over a broad spectral range. It occurs
when the optical thickness of the film is a quarter wavelength, i.e.

Fig. 6.11 Photonic structures. (a) One-dimensional multilayer made of plates of material with
different refractive indices. (b) Two-dimensional array of rods. (c) Three-dimensional array of
spheres.

Fig. 6.12 (a) Reflection at a single thin film, for example an oil film or soap bubble. The reflected
light from both surfaces will interfere constructively, and the surface appear bright, if the optical
thickness of the film (nt) is equal to 1/4 of the wavelength of the incident light. (b) In a multilayer
structure, maximum constructive interference occurs when all the plates and spaces in the stack
have an optical thickness of 1/4 wavelength; i.e. n1t1 = n2t2 = λ/4.

nt = λ/4 (6.3)

where n is the refractive index, t the actual thickness, and λ the wave-
length. The reason for dealing with optical thickness here is that light
slows down when the refractive index is high, and the wavelength short-
ens by a factor of n, so that when distances need to be measured in num-
bers of wavelengths, this has to be taken into account. Why a quarter
wavelength? One might think that it should be half a wavelength, on
the grounds that if the light reflected at the lower surface has been twice
through the film before emerging, it will have gone through a whole
extra wavelength, and so will come out in phase with the light from the
top surface. There is, however, a complication. Due to a piece of physics
that is lamentably hard to understand, light reflected from a low-to-high
refractive index interface (the top surface) automatically undergoes a half-
wavelength phase change, whereas at a high-to-low interface (the bottom
surface) it does not (Fig. 6.12a). This means that constructive interference
is achieved if the light from the bottom surface travels a total optical dis-
tance of only half a wavelength, meaning that the optical thickness of
the film should be λ/4. When the film becomes vanishingly thin the light
from the two surfaces travels the same distance, but the upper surface still
imposes its λ/2 phase change, so the interference is destructive, and the
film is black. In fact, high reflectances occur at optical thicknesses of
all odd multiples of λ/4 (i.e. 3λ/4, 5λ/4), and low reflectances at even
multiples.
Films that reflect in this way are indeed very thin. Blue-green light
has a wavelength of 0.5 μm (500 nm), and a quarter of this is 0.125 μm. If
the film is mainly water (refractive index 1.33) then the actual thickness
is 0.094 μm—several times smaller than the resolution limit of the light
microscope.
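That thickness is just t = λ/(4n), a direct rearrangement of eqn 6.3. A two-line check, also giving the corresponding figure for a guanine plate (n = 1.83):

    def quarter_wave_thickness_nm(wavelength_nm, n):
        """Actual thickness of a quarter-wave layer: nt = lambda/4 (eqn 6.3)."""
        return wavelength_nm / (4.0 * n)

    print(quarter_wave_thickness_nm(500, 1.33))   # watery film:   ~94 nm (0.094 um)
    print(quarter_wave_thickness_nm(500, 1.83))   # guanine plate: ~68 nm

Both come out in the region of 0.1 μm, the same scale as the repeat units in the natural multilayers described below.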
A single thin film only reflects a few percent of the incident light.
However, it is possible to increase this to very close to 100 per cent by
adding more films stacked one above the other (Fig. 6.12b). The trick here
is to make not only the films themselves a quarter-wavelength thick, but
also the spaces between them, thus ensuring that light from every inter-
face interferes constructively at the top of the stack. Figure 6.13 shows
electron micrographs of several different natural multilayers, and the
alternating pattern is very clear. In each case the thicknesses of the lay-
ers and spaces are all in the region of 0.1 μm, which is what one would
expect for quarter-wavelength interference reflectors. A variety of materi-
als is employed: guanine and water in fish (Fig. 6.13a), protein and cyto-
plasm in Octopus (Fig. 6.13b), and chitin and air in many insect structures
(Fig. 6.13c and d).
As mentioned earlier, the fraction of light energy (r) reflected at each
interface depends on the refractive index difference, according to Fresnel’s
formula:
r = (na − nb)² / (na + nb)² (6.4)

where na and nb are the refractive indices of the plates and spaces respec-
tively. If the difference between them is big, then so is the reflectance. As
layers are added the reflectance of the whole stack (R) increases dramati-
cally. With k interfaces the equivalent formula to 6.4 becomes:
R = (naᵏ − nbᵏ)² / (naᵏ + nbᵏ)² (6.5)

The effect, for a guanine–water multilayer with different numbers of inter-
faces, is shown in Fig. 6.14a. The figure also shows the result for a stack of
otherwise similar ‘thick films’, so much thicker than the wavelength of light
that interference no longer occurs (for example, rolls of clingfilm, or adhe-
sive tape). Enlisting constructive interference is clearly well worthwhile:
the quarter-wavelength stack reaches 99 per cent reflectance after about 20
interfaces (or 10 high index plates), but the ‘thick’ stack only achieves 30 per
cent. The reflectance of quarter-wave stacks of other combinations
of common biological materials is shown in Fig. 6.14b. The number of lay-
ers required to achieve a high reflectance depends mainly on the refractive
index difference.
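Equation 6.5 is simple enough to tabulate directly, and doing so reproduces the behaviour described above for Fig. 6.14 (only the quarter-wave case is sketched; the ‘thick film’ curve needs the incoherent treatment, which is not given here):

    def stack_reflectance(n_a, n_b, k):
        """Peak reflectance of an ideal quarter-wave stack with k interfaces (eqn 6.5)."""
        return ((n_a ** k - n_b ** k) / (n_a ** k + n_b ** k)) ** 2

    for k in (2, 10, 20, 40):
        print(k, round(stack_reflectance(1.83, 1.33, k), 3))   # guanine/water

    # About 20 interfaces give ~0.99, as stated above; by the same formula the
    # lower-contrast protein/water pair (1.56 against 1.33) needs roughly twice as many.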
Besides a high reflectance, the other important feature of multilayer
reflectors is their colour. A multilayer structure is tuned to reflect best at
a wavelength four times the optical thickness of the films and spaces. At
half this wavelength each layer is a half-wavelength thick, interference is
destructive, and little or no light is reflected. Thus these structures are wavelength
selective, and so inevitably coloured, which makes them potentially use-
ful in display. It is also possible, by varying the spacing of the plates,
to produce white reflectors, which are of more value in various types of
camouflage.
The term ‘structural colour’ is applied to any system that generates col-
our using interference of light waves rather than pigment. This includes
not only multilayer reflectors but also diffraction gratings and scattering
structures, which typically produce rather subdued blue and green colours.
Other examples of such structural colours are given in books by D.L. Fox
(1953) and H.M. Fox and Vevers (1960), and a review by Parker (2000).

Fig. 6.13 Electron micrographs of natural multilayer mirrors. (a) Tapetum of a bay anchovy
Anchoa mitchilli, in which a guanine crystal multilayer (g) surrounds each rod outer segment (ros).
From Nicol et al. (1973). (b) Part of a reflecting cell in the skin of Octopus, with proteinaceous
platelets separated by cytoplasm (rs, reflectosome). From Brocco and Cloney (1980). (c) Section of
a wing-scale from the brightly coloured day-flying moth Urania ripheus from Madagascar. This is
an orange-reflecting scale. The structure is a multilayer of six chitin layers with air spaces between
them. See also Plate 2. (d) Chitin-air multilayer in the tapetum behind the receptor cells in the eye
of the white peacock butterfly ( Anartia sp.). The tapetum is formed by the extended taenidial
ridges of a respiratory trachaeole, and reflects a bluish colour. From Miller and Bernard (1968). (e)
Cornea of a green-reflecting facet from the eye of a horsefly Hybomitra lasiophthalma (Diptera:
Tabanidae). The distinct layers consist of chitin of different densities, probably indicating different
degrees of hydration. From Bernard and Miller (1968). See also Plate 3. (f) Tapetum of a cat, made
of a multilayer of rods of a zinc-containing protein (see Fig. 6.11b). The layers of rods behave in
much the same way as plates in a conventional multilayer. From Pedler (1963). The scale bar is 1
μm on all the figures; note that it covers 4–5 repeat units of the pattern in all cases.

Fig. 6.14 Performance of multilayer mirrors at their wavelength of maximum reflectance. (a)
A quarter-wave stack of materials with the refractive indices of guanine and water reaches 90
per cent reflectance after only 10 interfaces, whereas a thick film stack (nt > 5λ) only reaches 20
per cent. (b) The reflectance for a given number of interfaces in a quarter-wave stack increases
with the difference between the refractive indices of the component layers. The three curves
correspond to a chitin-air stack (left; see Fig. 6.13c and d), a guanine/water stack (centre; see Fig.
6.13a), and a protein or chitin and water stack (right; see Fig. 6.13b). Both from Land (1972).

Box 6.1 Spectral reflectance of multilayer mirrors


There is a particularly simple formula for working out the spectral dis-
tribution of reflectance of an infinite stack of quarter-wave plates. This
will give a useful guide to the way that a stack with a finite number
of plates will perform at different wavelengths. The reflectance is given
by:
R = 1 − √(1 − r/cos²φ) (6.6)
where r is the reflectance of a single interface, from 6.4, and ϕ is
the amount by which the phase of the light is delayed by each plate or
space (the ‘phase retardation’), given by:
φ = 2πnt/λ (radians). (6.7)
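These two expressions are easy to evaluate numerically. The sketch below treats wavelengths inside the high-reflectance band (where cos²ϕ falls below r and the square root would otherwise become imaginary) as totally reflecting, which is how the infinite-stack formula is read; the function and variable names are mine, not the book’s:

    import numpy as np

    def infinite_stack_reflectance(wavelength_nm, n_hi, n_lo, lam_max_nm):
        """Spectral reflectance of an infinite ideal quarter-wave stack (eqns 6.4, 6.6, 6.7).
        Each layer has optical thickness lam_max/4, so phi = (pi/2) * lam_max / lambda."""
        r = ((n_hi - n_lo) / (n_hi + n_lo)) ** 2                                   # eqn 6.4
        phi = 0.5 * np.pi * lam_max_nm / np.asarray(wavelength_nm, dtype=float)   # eqn 6.7
        c2 = np.cos(phi) ** 2
        arg = 1.0 - r / np.maximum(c2, 1e-12)
        return np.where(arg <= 0.0, 1.0, 1.0 - np.sqrt(np.maximum(arg, 0.0)))     # eqn 6.6

    wavelengths = np.linspace(400, 700, 7)
    print(infinite_stack_reflectance(wavelengths, 1.56, 1.00, 560))   # chitin/air: broad band
    print(infinite_stack_reflectance(wavelengths, 1.56, 1.33, 560))   # chitin/water: narrow band

With λmax set to 560 nm this reproduces the pattern of Fig. 6.15a: the air–chitin band is roughly three times wider than that of the 1.56/1.33 pair.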
The spectral reflectance of infinite multilayers made of different
common combinations of materials is shown in Fig. 6.15a. The most
obvious difference between the curves is the spectral bandwidth, which
depends on r, and hence on the refractive index difference between the
high and low index layers. An air–chitin multilayer, of the kind found
in iridescent insect wings, for example, reflects light over a range of

Fig. 6.15 Spectral reflectance distribution of quarter-wave multilayers. (a) Spectral
reflectance for multilayers of air-chitin (outer), guanine/water (middle), and protein/water
(inner) for stacks with infinite numbers of layers. Note the decrease in bandwidth with
decrease in the refractive index difference. The upper abscissa scale assumes that λmax =
4nt = 560 nm (i.e. yellow-green). The lower abscissa is independent of λmax. (b) Spectral
reflectance for guanine-water stacks with small numbers of interfaces. The maximum
reflectance rises, the bandwidth decreases, and the number of sidebands increases as the
numbers of interfaces (k) increases. Both from Land (1972).

wavelengths nearly three times wider than that of the water–protein or water–
chitin multilayers found in reflecting surfaces in the skin of cephalopods
such as squid. Thus high refractive index differences produce relatively
unsaturated colours, which is good for making mirrors.
It is a little more complicated to work out spectral distributions for
stacks with a finite number of layers (see Land 1972) but the basic result
is that with small numbers of layers the peak reflectance is lower (as in
Fig. 6.14b), the bandwidth is somewhat broader, and the central peak has
‘sidebands’ that get closer together the more layers there are (Fig. 6.15b).
Two other results are also important. First, the colour of the reflected light
changes with the angle of incidence. At normal incidence (at right angles
to the surface) the wavelength of maximum reflectance has its highest
value, given by eqn (6.3) for a quarter-wave stack (λmax = 4nt), but as the angle between
incident beam and the normal increases this wavelength becomes shorter.
Thus iridescent structures with a multilayer construction often change
colour with viewing direction. Second, the saturation of the reflected col-
our varies with the relative optical thicknesses of the high and low refrac-
tive index layers. If one is somewhat greater than λ/4 and the other less
than λ/4, so that the sum comes to λ/2, the peak reflected wavelength is
the same as an all quarter-wave (‘ideal’) stack, but the bandwidth of the
reflected light gets narrower. Because these ‘non-ideal’ stacks tend to be
more highly coloured, they are particularly useful in display.
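The blue shift with viewing angle described above follows from the standard thin-film condition for constructive interference at oblique incidence, λpeak = 2(n1t1cosθ1 + n2t2cosθ2), with the angle inside each layer given by Snell’s law. This relation is not written out in the text, so the sketch below is offered as a standard-optics illustration rather than as the book’s own formula:

    import math

    def peak_wavelength_nm(theta_deg, n1, t1_nm, n2, t2_nm, n_outside=1.0):
        """First-order reflectance peak of a two-material multilayer viewed at an angle
        theta from the normal (in the outside medium); angles inside the layers come
        from Snell's law. Standard thin-film optics, not a formula from the text."""
        s = n_outside * math.sin(math.radians(theta_deg))
        cos1 = math.sqrt(1.0 - (s / n1) ** 2)
        cos2 = math.sqrt(1.0 - (s / n2) ** 2)
        return 2.0 * (n1 * t1_nm * cos1 + n2 * t2_nm * cos2)

    # A chitin/air quarter-wave stack tuned to 560 nm at normal incidence:
    # chitin plates 560/(4*1.56) = ~90 nm thick, air gaps 560/4 = 140 nm.
    for angle in (0, 30, 60):
        print(angle, round(peak_wavelength_nm(angle, 1.56, 560 / (4 * 1.56), 1.0, 140.0)))

The peak moves from 560 nm at normal incidence to about 510 nm at 30° and into the violet–ultraviolet by 60°, which is the change of colour with viewing direction described above.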

Uses of photonic reflectors in structures other than eyes
Display
Multilayer mirrors are ideal for increasing an animal’s conspicuousness in
social or sexual contexts. Their silveriness catches the sun, and their colour
can be used to specify identity. Birds such as peacocks, birds of paradise,
and silver pheasants use coloured multilayer mirrors in display, and more
modestly iridescent feathers are part of the plumage of pigeons, starlings
and many ducks. The structures involved in bird mirrors are typically two-
dimensional photonic structures (Fig. 6.11b), usually melanin rodlets
(which have a refractive index of around 2) embedded in the keratin of the
feather. In birds of paradise the rodlets are inflated by nitrogen into flat-
tened plates, with the gas providing a higher refractive index difference.

Multilayer reflectors are also found as display colours in fish. Examples
are the coloured adornments of the dragonet (Callionymus lyra), and the
blue stripe along the body of neon tetra (Paracheirodon innesi). The latter
has the intriguing property that it can be ‘turned off’ at night, apparently
by an unknown mechanism that decreases the spacing between the gua-
nine platelets. This moves the reflectance peak into the violet–ultraviolet
region of the spectrum where it becomes almost invisible (Lythgoe and
Shand 1989). The Australian paradise whiptail (Pentapodus paradisius) also
has iridescent stripes on its face and body which can change colour from
blue to red and back again in a few seconds, again by varying the spacing
between the platelets (Mäthger et al. 2003). This is no doubt a signal, but its
meaning is unknown. There are many other examples of reflectors in fish,
and in cephalopods, but the function of most of these is camouflage rather
than display (see below).
Amongst insects the iridescence of the wings of some butterflies and
moths is particularly striking. The colours of most butterfly wings are pig-
mentary, but the blues, especially, tend to be structural, and depend on con-
structive interference. Males of the genus Morpho have intensely blue wings
which were much admired by Victorian collectors. These have scales with
long ridges, from each side of which protrude plates separated by air spaces
providing quarter-wave multilayers. Equally brilliant are the wings of the
day-flying Madagascan moth Urania (= Chrysiridia) ripheus (Plate 2). In this
species (also much collected) the colours span the range from blue-green,
through yellow to red, and then into the next spectrum through purple
and blue to green again. Structurally, the chitin layers in the scales increase
in optical thickness from 1/4 to 3/4 of a wavelength, whilst the air gaps
remain 1/4 wavelength thick (Fig. 6.13c).
Other examples of multilayer reflectors in insects include the strikingly
coloured corneas of horseflies, deer-flies, and long-legged (Dolichopodid)
flies. Here the colours, which disappear after death, are due to alternating
chitin layers with different degrees of hydration, and hence different refrac-
tive indices (Fig. 6.13e and Plate 3). Their function is unknown.
Many spiders, particularly jumping spiders (Salticidae), have silvery scales
on their face, legs, and bodies. In the SE Asian jumping spider Cosmophasis
umbratica the males, but not the females, have scales that reflect ultravio-
let light. This is seen by the females, and encourages courtship behaviour.
The scales involved have an unusual structure, with two layers of chitin
(285 nm thick) separated by a narrow air gap (150 nm). This produces a
principal reflectance peak in the orange at 600 nm, and a smaller peak at
385 nm (Land et al. 2007). The females too use ultraviolet light, but not for
reflection; their palps fluoresce green in ultraviolet light, and this too is a
courtship signal (Lim et al. 2007).

Fig. 6.16 Structure of part of a green-reflecting scale from the butterfly Parides sesostris. This has
a three-dimensional structure (see Fig. 6.11c) in which light scattered from the lattice of spherical
holes interferes to produce a similar colour in all directions. Scale bar 1 μm. (Scanning electron
micrograph courtesy of Pete Vukusic.)

Simple multilayer structures give intense reflections, but only over a nar-
row angular range. In some butterflies such as Papilio palinurus the multi-
layers form small, deeply concave pits about 5 μm across, and these allow
for both single reflection (yellow) from the base of each pit, and double
reflection (blue because of the oblique angle of incidence) from the sides. This
combination creates a somewhat diffuse green colour which
blends with foliage. Other butterflies, for example Parides sesostris, employ
three-dimensional structures (Fig. 6.16) which, because they manipulate the
flow of light in all directions, produce a constant colour (green in P. sesos-
tris) over a wide angle. The structure in this case is composed of spherical
holes in a matrix of cuticle, arranged in a diamond lattice (Vukusic and
Sambles 2003).

Reflecting camouflage
Mirrors can also have the opposite function to display: that of rendering an
animal invisible. In a beautiful series of papers in the 1960s, Denton and
Nicol showed how the silvery sides of fish provide a form of camouflage
which, in the context of the open ocean, makes their bodies very difficult
to see (Denton and Nicol 1965; Denton 1970). The principle is simple, and
relies on the fact that as sunlight penetrates below the sea surface, wave
refraction and scattering diffuse the light so that it becomes nearly sym-
metrical around a vertical axis (Fig. 6.17a). At any particular angle to the
vertical light coming from, say, the north is similar to that from the south,
and this is more or less independent of the sun’s angle relative to the sur-
face. In this situation, a vertical plane mirror becomes invisible, because
the light reflected from it is identical in intensity to the light that would
have passed through it (Fig. 6.17b). The efficacy of this can be judged from
the photograph of a silvery fish (Fig. 6.18), which is almost invisible until it
tilts out of the vertical and reflects light from above. Lythgoe (1979, fig. 7.1)
shows a similar photograph in which the most visible feature of the fish is
the black pupil of the eye. This cannot be disguised because it must absorb
light if vision is to work. Divers occasionally report being passed by shoals
of black dots, and regret the excesses of the previous evening.
To make this reflecting strategy work, fish have had to solve two prob-
lems. First, fish are not flat-sided—they bulge—and the camouflage trick
will not work unless the mirror is fairly accurately flat. Second, the mir-
rors must be white, not coloured, as a simple quarter-wave stack made of
guanine and water would be. Denton and Nicol showed that the bulge
problem was dealt with in a most ingenious way. Although the sides of
most fish are convex, the reflecting platelets in the scales themselves are

Fig. 6.17 Reflecting camouflage in the sea. (a) At a depth greater than a few tens of metres the
distribution of light around the vertical becomes symmetrical. (b) A vertical plane mirror becomes
invisible in the sea because the light reflected at any angle has the same intensity as the light that
would have passed through. (c) The orientation in the vertical plane of reflecting platelets around
the body of a herring. Note that they conform much more closely to the vertical than to the body
surface. (d) The overlap of reflecting scales in the herring. Each scale has regions each reflecting a
different 1/3 of the spectrum (Plate 2), and when three overlap each other the reflection is white.
All based on Denton (1970).

not parallel to the body surface, but are tilted so that they are aligned
much more closely with the vertical (Fig. 6.17c). Thus the side of the fish
behaves optically as a plane vertical mirror, independent of its real shape.
As mentioned earlier, a white reflector can be made by varying the thick-
ness of the layers within a multilayer stack. In fish like the herring and
sprat this is done in a very neat way. Within each scale the colour reflected
by the multilayer varies, so that approximately a third of the scale is
blue-green, a third red-purple, and a third orange-yellow (Plate 2). The
scales overlie each other rather like roof tiles, so that they are three deep
at any one point, with the differently coloured multilayers one on top of
the other (Fig. 6.17d; see Denton and Nicol 1965). Since multilayer reflec-
tors transmit what they do not reflect, each scale is able to reflect its own
one-third of the spectrum unhindered in transmission by the other two

Fig. 6.18 A silvery fish (a permit, Trachynotus falcatus) oriented vertically in the sea (top) and
tilted (bottom) so that it reflects light from above. (Photographs by Justin Marshall.)

scales, and the net result is an impressively white reflection, as the display
on the fish counter attests. It is only when silvery fish become damaged,
or are simply not as fresh as they might be, that they lose scales and start
to become colourful.
An additional component of the camouflage strategy of many mid-water
fish, cephalopods, and crustaceans is ‘counter-illumination’ in which rows
of downward-pointing luminescent photophores (usually also involving
multilayer reflectors) are used to disguise the silhouette of the animal when
viewed from directly below (Herring 1994, 2002).
On land the mirror strategy usually won’t work because light is much
more directional. There is one setting, however, that has a light environ-
ment a little like the ocean. This is the deep forest. Here light is diffuse, and
the background in one direction looks much like that in any other. Pupae
of certain danaine butterflies, for example Euploea core from Sri Lanka, have
evolved brilliant gold-reflecting multilayer cuticles (Plate 2), whose surfaces
reflect the details of the surrounding forest undergrowth (Steinbrecht et al.
1985). This is perfect camouflage; the intensity and texture match the sur-
roundings, and invisibility is assured.

Summary

1. A small number of eyes employ concave mirrors, rather than lenses, as
image-forming structures. The most impressive of these is in the scallop
Pecten, where the image in each of the 60–100 eyes provides a means of
detecting movement. An image-forming mirror with a stepped construc-
tion is found in the downward-pointing secondary eyes of the deep-sea
spookfish Dolichopteryx.
2. The deep-sea ostracod Gigantocypris has a pair of eyes with parabolic
reflectors. These provide high sensitivity but poor resolution.
3. Reflecting tapeta that do not form images, but which double the effective
light path through the retina, are common in vertebrate eyes (e.g. cat)
and also in compound eyes of some insects and crustaceans.
4. Some insects, particularly lepidoptera, have anti-reflection coatings on
their eyes that consist of an array of minute nipples. These serve to mini-
mize the refractive index transition from air to chitin.
5. Reflecting structures can be classified according to their structure as
one-dimensional (plates), two-dimensional (rods), or three-dimensional
(solid or hollow spheres).
6. Most animal mirrors employ the principle of multilayer interference from
stacks of plates of alternating refractive index. Light is reflected from
each surface in the stack, and if the interfaces are separated by a quarter-
wavelength, or an odd multiple of this, constructive interference occurs
and a high reflectance is produced.
7. Materials involved in biological multilayers include guanine and cyto-
plasm (fish scales) and chitin and air (insect wings). The highest reflec-
tion is produced when the refractive index difference is high.
8. Because the reflectance of a multilayer is a function of wavelength, most
biological reflectors are coloured. This makes them useful in display, for
example, in the iridescent feathers of some birds, and the wings of some
butterflies and moths.
9. The special light conditions in the ocean make it possible to use mirrors
as an effective form of camouflage. The silvery scales disguise the sides
of the fish, by reflecting light that is close in brightness to the back-
ground.
7 Apposition compound eyes

Origins
Judging from the numbers of individuals that possess them, compound eyes
are by far the most popular devices for imaging an animal’s surroundings.
Built as convex structures around the outside of an animal’s head, they are
fundamentally different from the concave structure of single chamber eyes.
In spite of this major topological difference, however, the jobs of the two
kinds of structure are the same—to break up the incoming light according
to its direction of origin (Fig. 7.1). The other great difference between the
two kinds of eye is, of course, that compound eyes employ multiple optical
systems compared with the single optical system of so-called ‘simple’ eyes.
This does not necessarily mean that compound eyes form multiple images,
however. In apposition eyes, such as those of most diurnal insects, each of
the lenses does form a tiny image (although this is not what the animal
actually sees). But in superposition eyes, more commonly found in noctur-
nal insects and deep-water crustaceans, the lenses (or sometimes mirrors)
operate in concert to form a single deep-lying image. Because the optical
mechanisms involved are very different from each other we have split our
discussion of compound eyes into two: this chapter deals with apposition
eyes and their variants, and the next with superposition eyes.
Compound eyes first appear at the time of the Cambrian radiation event
(see Chapter 1). Several of the more peculiar animals of the Burgess Shale,
such as Anomalocaris, had large convex eyes, and from their shape these
must have been compound eyes, even though the facet structure has not
been preserved (Conway-Morris 1998). The arthropod sub-phyla Crustacea
and Chelicerata, which go back to the Cambrian, were equipped mainly
with compound eyes, as were the first insects which appeared later, in
the Devonian. An animal almost unchanged from that early period is the
horseshoe crab, Limulus (Chelicerata), whose famous compound eyes pro-
vided visual physiologists with one of their best preparations from the
1930s onwards. More recent chelicerate groups, such as the scorpions and
spiders are thought to have converted the ancestral compound eyes to
simple eyes by some process of coalescence. Some of the best preserved
fossil eyes of any animal group are those of the Trilobites, whose history
begins in the Cambrian and ends in the Permian, 300 million years later.
The calcite in the exoskeletons of these arthropods has preserved not just
the external structure of these eyes, but to some extent the optics too,
allowing us a tantalizing glimpse into visual systems half a billion years
old (Levi-Setti 1993). Trilobite eyes are discussed later in the chapter (see
Fig. 7.21).
All the animals mentioned so far belong to the Arthropoda, and they
probably originated from a worm-like ancestor that already possessed
a rudimentary compound eye—possibly a loose collection of eyespots.
Subsequently, arthropod eyes evolved independently along two separate
lineages, in crustaceans and insects, and in myriapods and chelicerates
(Nilsson and Kelber 2007). Independent of the arthropods, two small unre-
lated groups also evolved compound eyes (Fig. 7.2). The ark shells (Arca
and Pectunculus) are bivalve molluscs with an array of compound and sim-
ple eye structures around the edge of the mantle. They fulfil much the
same function as the mirror eyes of scallops (Chapter 6), namely as ‘burglar
alarms’ for the detection of moving predators (Nilsson 1994). Unlike arthro-

Fig. 7.1 The underlying similarity of function in apposition and simple (camera-type) eyes. The
sampling angle Δϕ (inter-ommatidial or inter-receptor angle) is D/r in an apposition eye and s/f in
a simple eye. D is the facet diameter, r the radius of curvature (centre C), f the simple eye focal
length, and s the receptor separation. N is the nodal point of the simple eye.

pod compound eyes these are lens-less, with the acceptance angles of the
receptors constrained simply by the shadowing effect of the pigmented
tubes around them. Sabellid tubeworms are annelids that filter-feed with
tentacles which project from a tube half buried in mud, and like the ark
shells they need early-warning of approaching predators. In Branchiomma
the two compound eyes are borne at the tips of specially modified tentacles,

Fig. 7.2 (a) Primitive compound eyes in sabellid worms. From left: Hypsicomus, Protula, and
Sabella. (b) Well-developed compound eye in the sabellid Branchiomma. Longitudinal and
transverse sections (scale 100 μm) and a single element, in which the receptive part is a stack of
ciliary discs (scale 10 μm). (c) Mantle eyes of the bivalve mollusc Arca. Section through a single eye
on the right (scale 100 μm). Land (1981) from various sources.

and although they do have lenses they are more like the eyes of Arca than
those of arthropods (Nilsson 1994).
Amongst other groups, some starfish have rather loosely organized
compound eye-like structures at the ends of the arms, but they seem to
be more a collection of small eye-cups of a basic kind, rather than a sin-
gle eye. Their receptors certainly respond to light, but the ability of the
structure as a whole to resolve an image is uncertain. An intriguing array
of lens-like calcite structures has also been found in the armoured dorsal
arm plates of certain brittle stars (Aizenberg et al. 2001), but their possible
function as photoreceptor structures is not well established. Sea urchins
have distributed dermal photosensitivity, but the shadowing effect of the
spines limits the angle viewed by each region to less than 10º, allowing
the detection of dark objects (Yerramilli and Johnsen 2010). Other animals
which have arrays of small eyespots include the fresh-water flatworm
Polycelis, which has a line of about 30 eyespots around the head region,
and chitons (coat-of-mail shell molluscs) which have numerous photore-
ceptors, or in some species small lens eyes, embedded in the grooves of
their shell plates (Speiser et al. 2011). Although in chitons these are indi-
vidually sensitive to dimming or even movement, it is unclear whether
these structures are capable of providing any kind of unified view of the
light distribution in the surroundings.

A little history: apposition and neural superposition


The facets of compound eyes of insects are just too small to be resolved
with the naked eye, and it required the invention of the microscope in the
seventeenth century before they could be properly depicted. The process of
working out how compound eyes functioned took more than two centuries
from Robert Hooke’s first drawing of ‘The Grey Drone Fly’ (probably a male
horse-fly) in his Micrographia of 1665, to the essentially modern account by
Sigmund Exner in 1891. The first person to look through the optical array of
an insect eye was Antoni van Leeuwenhoek, and his observations caused a
controversy that was not fully resolved until the 1960s. The following quo-
tation comes from Wehner (1981) and is from a letter from Leeuwenhoek to
the Royal Society of London, which was published in 1695.
Last summer I looked at an insect’s cornea through my micro-
scope. The cornea was mounted at some larger distance from the
objective as it was usually done when observing small objects.
Then I moved the burning flame of a candle up and down at such
a distance from the cornea that the candle shed its light through it.
What I observed by looking into the microscope were the inverted
images of the burning flame: not one image, but some hundred
images. As small as they were, I could see them all moving.
Evidently, each facet of the eye (at least in apposition eyes) does produce
an inverted image (see Chapter 8, Fig. 8.2), even though the geometry of
the eye as a whole dictates that the overall image is erect (Fig. 7.1). What,
then does the insect see? Do the receptors (typically eight) beneath each
lens resolve the inverted images (as Hollywood would like us to believe),
or do they just indicate the average intensity across the field of view of the
ommatidium? (An ommatidium is the ‘unit’ of a compound eye, consisting
of the lens, receptors, and associated structures. See Fig. 7.3.)
Remarkably, the answer depends on the animal. By the 1870s, histologi-
cal studies had shown that in most apposition eyes the eight receptor cells
in each ommatidium contribute to a single radial structure, known as a
rhabdom (Greek for rod; Figs. 7.3 and 7.4). Much later, in the 1950s, this
material was found to be made up of photoreceptive membrane covering
large numbers of long narrow microvilli, but even by the time that Exner
wrote his monograph in 1891 it was clear that the rhabdom was the struc-
ture sensitive to light. Optically, each ommatidium works as follows. The
inverted image that Leeuwenhoek saw is focused onto the distal tip of the
rhabdom. Having a slightly higher refractive index than its surroundings,
the rhabdom behaves as a light guide, so that the light that enters its distal
tip travels down the structure, trapped by total internal reflection. Any spa-
tial information in the image that enters the rhabdom tip is lost, scrambled
by the multiple reflections within the light guide, so that the rhabdom itself
acts as a photocell that averages all the light that enters it. Its field of view
is defined, in geometric terms, by the angle that the tip subtends at the
nodal point of the corneal lens (see Fig. 7.6), and in a typical apposition eye

Fig. 7.3 Basic structure of an apposition eye, showing its construction from ommatidial elements.
Modified from Duke-Elder (1958; Fig. 134).

this acceptance angle (Δρ) is approximately the same as the angle between the
ommatidial axes (the inter-ommatidial angle, Δϕ). Thus the field of view of
one rhabdom abuts (or ‘apposes’, hence the name) the field of its neighbour,
so producing an overall erect image made up of a mosaic of adjacent
fields of view.
Although the eight receptors that contribute to the rhabdom share the
same visual field, that does not mean that they supply the same informa-
tion. The labels UV, B, and G on the cross-section of a bee rhabdom in
Fig. 7.4b indicate the regions of the spectrum that the cells respond to best.
Most insects have trichromatic colour vision, just as we do, although their
visible spectrum is shifted towards shorter wavelengths compared with
ours (Menzel 1979; Chittka 1996). Some butterflies and dragonflies have

Fig. 7.4 Optical comparison of an apposition eye (a, b) and a neural superposition eye (c, d). In
an apposition eye each rhabdom (hatched) views light from a slightly different direction (arrows),
and the rhabdoms (b), although made up from eight receptors, have a fused structure that acts
as a single light-guide. UV, B, and G indicate the receptor elements that respond to ultraviolet,
blue, and green in an ommatidium from the eye of a worker bee. In neural superposition eyes,
light from a single direction is imaged onto different rhabdomeres in adjacent ommatidia (c).
The axons from all receptors imaging the same point collect together in the first synaptic layer
(the lamina) so that here the image has the same structure as in an ordinary apposition eye. The
section (d) shows the arrangement of the separated rhabdomeres in an ommatidium from a fly
(see also Fig. 7.10c). The six outer rhabdomeres (1–6) all send axons to different adjacent laminar
‘cartridges’, as in (c). The central pair (7 overlying 8) bypass the lamina and go straight to the
next ganglion, the medulla.

four-colour vision, and so does the water flea Daphnia—rather implausi-
bly given that it only has 22 ommatidia. Most other crustaceans are di- or
trichromatic. An amazing exception is the mantis shrimp Odontodactylus
(Stomatopoda) which has 12 visual pigments in a specialized band across
the eye (see Chapter 9 and Plate 4). The second feature of the bee rhab-
dom (Fig. 7.4b) is that the microvilli making up the structure are arranged
in orthogonal sets. It has been known since the work of Karl von Frisch
in the 1940s that bees can navigate using the pattern of polarized light in
the sky. This capacity arises from the way the photoreceptor molecules are
arranged on the microvilli (see Chapter 2). A geometric consequence of the
cylindrical shape of the microvilli is that there will be twice as many light-
sensitive chromophore groups of the rhodopsin molecules aligned parallel
to the long axis of each microvillus as at right angles to it. This in turn
means that the receptors respond best to light polarized parallel to this
axis. In fact bees use a special dorsal region of the eye (the POL area) to
analyse sky polarization; in the rest of the eye the receptors are twisted
to abolish polarization sensitivity, so that it does not interfere with colour
vision (Rossel 1989; Wehner 1987). Polarization vision is also used by some
insects, such as the water bug Notonecta, to detect water surfaces, which
polarize light strongly (Schwind 1983, 1991).
The description of apposition optics given above holds for most diur-
nal insects and crustaceans (bees, grasshoppers, water fleas, crabs, etc.)
but it does not apply to the true (two-winged) flies. Ever since 1879, when
Grenacher observed that the receptors in fly ommatidia have separate pho-
toreceptive structures (rhabdomeres) that do not contribute to a common
rhabdom, there had been suspicions that flies might actually be resolving
the Leeuwenhoek images. In the focal plane of the lens of a fly ommatid-
ium the distal tips of the rhabdomeres are separated from each other and
form a characteristic pattern (Fig. 7.4d, see also Fig. 7.10c) which resolves
the image into seven parts (there are eight receptors, but the central pair lie
one above the other). This raises the obvious question: how are these seven-
pixel inverted images welded together to form the overall erect image, if
indeed that is what occurs? Kuno Kirschfeld finally solved this conundrum
in 1967. It turns out that the angle between the fields of view of adjacent
rhabdomeres within an ommatidium (about 1.5° in a blowfly) is identical to
the angle between neighbouring ommatidial axes. Furthermore, the fields of
each of the six peripheral rhabdomeres in one fly ommatidium are aligned,
in the space around the fly, with the field of the central rhabdomere of
one of the neighbouring ommatidia (Fig. 7.4c). Thus each point in space
is viewed by seven rhabdomeres in seven adjacent ommatidia. What does
this complicated and seemingly redundant arrangement achieve? To answer
this it is necessary to know what happens to the signals from the seven
receptors that view the same point, and that turns out to be the most aston-
ishing part of the story. Beneath each ommatidium the emerging receptor
axon bundle undergoes a 180° twist before the individual neurons disperse
to nearby regions of the first optic ganglion (the lamina) that correspond
to the adjacent ommatidia. The net result of this impressive feat of neural
knitting (indicated in Fig. 7.4c) is that all the axons that ‘look at’ the same
point in space finish up making connections with the same cells in the
lamina. Thus, as far as the lamina is concerned, the image is exactly the
same as it would be in a conventional apposition eye, except that the signal,
in terms of photon captures, is seven times stronger. One advantage of the
extra signal is that it provides flies with a short period at dawn and dusk
when they can see well, but the eyesight of their predators and competitors
is less sensitive and so less effective at detecting small objects.
Kirschfeld called this arrangement ‘neural superposition’, because, as in
optical superposition (Chapter 8), the contributions of a number of omma-
tidia are superimposed in the final image. One might ask: could the sig-
nal not have been made stronger simply by increasing the diameter of the
rhabdom in a conventional apposition eye? Indeed it could, but that would
mean increasing the rhabdom acceptance angle (Δρ) at the same time, which
in turn would mean a loss of resolution for the eye as a whole. The beauty
of the fly solution, and undoubtedly the reason why it evolved, is that it
involves no increase in acceptance angle, provided the rhabdomeres are
properly aligned. There are strong hints that something like neural super-
position occurs in other insect groups (some beetles, earwigs, water bugs
and craneflies; Nilsson and Ro 1994) but it is only in the advanced flies
and some diurnal mosquitoes that the perfect nearest-neighbours arrange-
ment is known to be achieved (Land et al. 1999).
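
The photon advantage is easy to quantify with a toy calculation: a sketch under the simplifying
assumption that each rhabdomere behaves as an independent Poisson photon counter viewing the
same point in space. Summing the seven counts in a lamina cartridge multiplies the mean signal
by seven and improves the signal-to-noise ratio by about √7 ≈ 2.6, with no change in acceptance
angle. The mean photon count is an assumed value.

import numpy as np

rng = np.random.default_rng(1)
mean_photons = 20.0   # assumed mean photon catch per rhabdomere per integration time
n_pool = 7            # rhabdomeres in adjacent ommatidia sharing one visual axis
trials = 100_000

single = rng.poisson(mean_photons, size=trials)                        # one rhabdomere alone
pooled = rng.poisson(mean_photons, size=(trials, n_pool)).sum(axis=1)  # lamina cartridge sum

for name, counts in (("single rhabdomere", single), ("pooled (x7)", pooled)):
    print(f"{name:17s}  mean = {counts.mean():6.1f}  SNR = {counts.mean() / counts.std():.1f}")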

Basic optics
Most of the optical theory given in Chapters 3 and 5 applies to apposition
compound eyes, but there are some differences from camera-type eyes. In
this section we outline the major points again, and make comparisons with
other types of eye.

Imaging mechanisms
The structures that form the images in the ommatidia of apposition eyes
are quite varied (Fig. 7.5). In terrestrial insects, as in terrestrial vertebrates
(Chapter 5), the simplest way to produce an image is to make the cor-
nea curved (Fig. 7.5a). Ordinary spherical-surface optics then apply (see
Fig. 7.5 Five mechanisms of image formation in apposition eyes. (a) Corneal lens (bee, fly). (b)
Multisurface lens (water-bugs). (c) Graded-index lens-cylinder (Limulus). (d) Lens-cylinder with
light-guide (Phronima, Amphipoda). (e) Lens/lens-cylinder afocal combination (butterflies). Details
in text.

Chapter 5), and an image is formed about 4 radii of curvature behind the
front face. In aquatic insects such as the water bug Notonecta the external
surface of the cornea has little power, because of the reduction in refrac-
tive index difference (Fig. 7.5b). It is augmented by two other surfaces, the
rear of the lens, and an unusually curved interface in the centre of the lens
whose function may be to correct spherical aberration, as has been pro-
posed for some trilobite eyes (Levi-Setti 1993).
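
The 'about 4 radii of curvature' figure follows from single-surface refraction (Chapter 5): for a
distant object the image lies a distance n2r/(n2 − n1) behind the surface. A one-line check,
assuming air outside and a watery medium of refractive index about 1.34 behind the cornea:

def image_distance_in_radii(n_outside=1.0, n_inside=1.34):
    """Image distance behind a single refracting spherical surface, in units of its
    radius of curvature r, for an object at infinity: v/r = n2 / (n2 - n1)."""
    return n_inside / (n_inside - n_outside)

print(f"v/r = {image_distance_in_radii():.1f}")   # ~3.9, i.e. about 4 radii
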
The horseshoe crab Limulus lives mainly in the sea, but comes ashore to
lay eggs. It has a flat cornea, but behind this lie a series of inward-pointing
conical projections which form images at their proximal tips (Fig. 7.5c). Exner
(1891) worked out that, in the absence of any optically useful interfaces,
these structures must operate as graded-index devices, forming images by
continuous ray-bending much as occurs in the spherical Matthiessen lenses
of fish (Chapter 4). He called these structures ‘lens cylinders’, and his
assumption that they must have internal refractive index gradients has been
repeatedly confirmed in recent years. Particularly interesting lens cylinders
are found in hyperiid amphipods (deep-sea cousins of the more familiar
sand-hoppers). The most impressive of these, Phronima, has a double eye in
which the upper part covers the dorsal surface of the head, and the lower
part is a small ventrally situated tear-drop-like structure (see Fig. 7.19a). The
upper eye has graded index lenses not unlike those of Limulus, but instead
of imaging directly onto the rhabdom tip they focus into the mouth of a
long light guide, 15 μm wide and with a refractive index of 1.39 (Figs. 7.5d
and 7.19b), which conveys the light from the image 5 mm to the retina, situ-
ated ventrally next to the retina of the lower eye (Land 1981b). The function
of this peculiar arrangement seems to be camouflage, to keep the eye as
transparent as possible (Nilsson 1989), and other mid-water hyperiids have
similar arrangements.
The eyes of butterflies, which resemble ordinary apposition eyes in
nearly all respects, have an optical system that is subtly different from
the arrangement in Fig. 7.5a. Instead of forming an image at the rhabdom
tip, as in the eye of a bee or locust, the image lies within the crystalline
cone. The proximal part of the cone contains a very powerful lens cylinder
which makes the focused light parallel again, so that it reaches the rhab-
dom as a beam that just fits the rhabdom (Fig. 7.5e). This arrangement,
known as afocal apposition because there is no external focus, has much
in common with the superposition optical system of moths (Chapter 8), to
which butterflies are closely related.

Resolution
As discussed in Chapter 3, for any eye the resolution of the image seen by
the brain is determined by the sampling frequency of the eye (νs) and by
the optical quality, represented by the spatial cut-off frequency (νco). In an
apposition eye the sampling unit is the rhabdom in a single ommatidium.
Although the eight receptors that contribute to each rhabdom usually have
different spectral and polarization responses, they all share a common field
of view. Thus it is the angle between ommatidia (Δϕ) that determines how
the overall image is sampled (Fig. 7.1), where νs = 1/(2Δϕ). [In a hexagonal
array the exact definition of Δϕ can become quite complicated (see Fig. 7.15),
but for now we take it to mean the average of the angle measured along
each of the three axes of the array.] In the central region of a bee eye, Δϕ
is about 1.7°.
The neural superposition eyes of dipterans have an additional constraint,
namely that the separation of the tips of the rhabdomeres must match the
inter-ommatidial angle, i.e. Δϕ = s/f, where s is the separation and f the
focal length of the facet lens. If Δϕ is 2° (0.035 radians) and f is 70 μm,
then the tip separation must be 2.4 μm. This doesn’t leave a great deal of
room. Because narrow light-guides, such as rhabdomeres, tend to be ‘leaky’,
with a substantial fraction of the light energy outside the guide itself (Fig.
3.7), there needs to be an adequate gap between one rhabdomere and the
next to prevent ‘cross-talk’. In flies there is a 1-μm gap between adjacent
rhabdomeres, which means that the rhabdomeres themselves must be very
narrow. They have a distal tip diameter which is also about 1 μm, making
them amongst the narrowest photoreceptors in any animal. In most other
respects, however, neural superposition eyes are optically similar to other
apposition eyes.
As in the human eye (Chapter 5) one would expect that apposition eyes
would show a rough match between the inter-ommatidial angle and the
acceptance angle (Δρ) of a single rhabdom, the argument being that no indi-
vidual rhabdom can resolve detail finer than Δρ, so there is no point spacing
the directions of view of ommatidia closer than this angle. Just as in other
eyes, geometrical (ray) optics and physical (wave) optics both contribute to
Δρ (Fig. 7.6). Geometrically Δρray is the angle subtended by the rhabdom tip
at the nodal point of the facet lens, i.e. the rhabdom diameter divided by
the focal length (d/f radians). Typical values (for a bee) are 2 μm for d and
60 μm for f, which makes Δρray 0.033 radians, or 1.9°. In wave optics the
limit to image quality is set by diffraction, specifically by the angle sub-
tended by the Airy disc, and this (see Chapter 3) is given by λ/D radians.
If the wavelength (λ) is 0.5 μm and the facet diameter (D) is 25 μm, then
Δρwave is 0.02 radians, or 1.1°. To obtain the final value for Δρ, Δρray and
Δρwave have to be combined, and unfortunately the proper way of doing this
(convolution, taking the waveguide properties of the rhabdom into account,
see van Hateren 1989) is very complicated. Snyder (1979) provides a simple
approximation:
Δρ² = Δρray² + Δρwave²    (7.1)

Unfortunately this equation neglects waveguide effects which are particu-
larly important with narrow rhabdomeres and rhabdoms (Stavenga 2003),
and tends to overestimate Δρ. According to Stavenga, in general a better
approximation is provided by the geometrical value (Δρray = d/f in Fig. 7.6).
Using the argument that works for humans, namely that the optical cut-
off frequency (1/Δρ) should match the sampling frequency 1/(2Δϕ), we would
expect the ratio of Δρ to Δϕ to be 2:1. In fact, it is only about 1.3:1 in the bee, and
this is fairly typical of diurnal insects (Land 1997). It implies that apposi-
tion eyes tend to under-sample the image slightly, or put another way, they
operate at levels of contrast in the image considerably higher than those
experienced by the human eye at its resolution limit.
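
These numbers are easy to check. The sketch below uses the bee-like values quoted above
(d = 2 μm, f = 60 μm, D = 25 μm, λ = 0.5 μm, Δϕ = 1.7°) and combines the geometrical and
diffraction terms with Snyder's approximation (eqn 7.1), bearing in mind that this tends to
overestimate Δρ:

import math

d, f = 2e-6, 60e-6       # rhabdom tip diameter and facet focal length (m)
D, lam = 25e-6, 0.5e-6   # facet diameter and wavelength (m)
dphi = 1.7               # inter-ommatidial angle (degrees)

drho_ray = math.degrees(d / f)          # geometrical term, d/f
drho_wave = math.degrees(lam / D)       # diffraction term, lambda/D (Airy disc width)
drho = math.hypot(drho_ray, drho_wave)  # Snyder's approximation, eqn (7.1)

print(f"drho_ray = {drho_ray:.1f} deg, drho_wave = {drho_wave:.1f} deg, drho = {drho:.1f} deg")
print(f"sampling frequency 1/(2 dphi) = {1 / (2 * dphi):.2f} c/deg, "
      f"optical cut-off ~1/drho = {1 / drho:.2f} c/deg")
print(f"drho/dphi = {drho / dphi:.1f}")  # ~1.3, as quoted for the bee
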
Although diffraction imposes severe limitations on the performance of
compound eyes (see next section), in many other respects they are excellent
instruments. Optical defects other than diffraction tend to have a greater
impact on resolution as eyes get bigger (Land 1981a). The very short focal
Fig. 7.6 The acceptance angle (∆ρ) of an ommatidium results from a combination of the Airy
diffraction pattern (point-spread function) given by λ/D (right), and the geometrical angular width
of the rhabdom (d/f ) at the nodal point of the lens (left).

length of the facet lenses of compound eyes, 100 μm or less, ensures that
such defects as spherical and chromatic aberration, which are troublesome
in camera-type eyes, are negligible in compound eyes. Similarly the depth
of field is enormous, extending to infinity from as close as an insect ever
needs to see.

Diffraction and eye size


In a short and remarkable paper on ‘Insect sight and the defining power of
compound eyes’, published over a century ago, Henry Mallock, an optical
instrument maker, described insect vision in these terms:
The best of the eyes . . . would give a picture about as good as if
executed in rather coarse wool-work and viewed at a distance of a
foot (Mallock, 1894).
Why is insect vision so poor? The problem, as Mallock recognized for the
first time, is diffraction. Compound eyes have very small lenses compared
with the lenses of camera-type eyes. As we have seen, a 25-μm diameter
facet produces a diffraction blur circle (Airy disc) that is just over 1° wide
in angular terms, and cannot resolve spatial frequencies higher than 1
cycle per degree. 1° is about the size of a finger-nail at arm’s length, so
one can imagine a bee’s world made up of pixels of about that size. In
terms of the acuity of our own eyes (about 60 c/deg), this is not very good
at all.
Mallock’s paper goes on to discuss what a compound eye with human
resolution would look like, and he came to the astonishing conclusion that
it would need to be more than 20 metres in diameter—bigger than a house
(Fig. 7.7a). The reason for this is clear. The human eye achieves 60 c/deg
resolution by having a daylight pupil diameter of 2 mm, 80 times the diam-
eter of a bee lens. For a bee to have the same resolution, diffraction requires
that all its lenses would need to have this diameter, and to exploit all the
detail in the scene they would need to be spaced at 0.5 arc-min angular
intervals, the same as the receptors in our fovea. In a spherical eye, the
inter-ommatidial angle (Δϕ) is the angle subtended by one lens diameter at
the centre of the eye (D/r radians, where r is the eye radius; Fig. 7.1), which
gives r = D/Δϕ. With Δϕ = 0.5 minutes of arc, (0.000145 radians), and D = 2
mm, the radius of curvature will be 13.8 m, and the diameter twice this.
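
Mallock's arithmetic takes only a few lines to reproduce (a sketch using the same assumed
values, D = 2 mm and Δϕ = 0.5 arc-min):

import math

def eye_radius(facet_diameter, delta_phi_rad):
    """Spherical apposition eye: r = D / delta_phi (see Fig. 7.1)."""
    return facet_diameter / delta_phi_rad

D = 2e-3                        # facet diameter needed for 60 c/deg resolution (m)
dphi = math.radians(0.5 / 60)   # 0.5 arc-min in radians (~0.000145)
r = eye_radius(D, dphi)
print(f"radius = {r:.1f} m, diameter = {2 * r:.1f} m")   # ~13.8 m radius, over 27 m across

# For comparison, a bee-like eye with D = 25 um and dphi = 1.7 degrees:
print(f"bee-like eye radius = {eye_radius(25e-6, math.radians(1.7)) * 1e3:.2f} mm")
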
Kirschfeld (1976) has pointed out that this calculation is a little unfair.
Resolution in the human eye falls off dramatically away from the fovea, to
a tenth of its maximum value at 20° from the fovea, and even less further
out. Taking this into account the ‘human’ compound eye can be shrunk in
size considerably, to an irreducible 1-m diameter (Fig. 7.7b). This still looks
silly, however, and would certainly be hard to fly with. The serious point

Fig. 7.7 The sizes of compound eyes with human-like resolution. Left: a compound eye with
1 minute resolution everywhere. Right: a compound eye with 1 minute resolution in the fovea, but
falling off with eccentricity as in the human eye. Both figures from Kirschfeld (1976).
is that because of diffraction compound eyes are stuck up an evolution-
ary blind alley. For a single-lens camera-type eye only one lens needs to
be made larger to improve resolution, but for a compound eye all have to
be enlarged and the numbers have to increase correspondingly. The net
result is that the size of camera eyes increases linearly with resolution, but
compound eye size increases as the square of resolution. Dragonflies seem
to approach the limit of what is possible. Their eyes are 8 mm or more
in diameter, have up to 30 000 facets each, and resolve about 0.25° in their
most acute region. This is still poor compared with what is achievable by
any camera-type eye of the same diameter.
The outcome of this discussion is that it is very hard for an apposition
eye to improve its resolution—it simply gets too big. Space is thus at a pre-
mium; a little extra resolution here must be bought by a bit less there, and
for this reason the different visual priorities of arthropods with different
life styles show up in the distribution of inter-ommatidial angles, and often
facet sizes, across the eye. We will return to this point later when discuss-
ing the various ecological adaptations of compound eyes.

Sensitivity
The sensitivity of an apposition eye is calculated in the same way as for a
camera-type eye. The formula for sensitivity was derived in Chapter 3:
S = 0.62 D² Δρ² Pabs    (7.2)

where D is the lens diameter, Δρ the rhabdom acceptance angle (Fig. 7.6),
and Pabs the proportion of photons absorbed, which for simplicity we assume
here to be 1. Although D is roughly 100 times greater in a human eye than
in a bee ommatidium, Δρ is about 100 times smaller (approximately 0.015°
compared with 1.5°), so that the value of S is very similar in the two species.
Thus the range of illumination conditions over which an insect with an appo-
sition eye can operate is similar to that of a mammal using its cone system.
Mammals can also see at much lower intensities, by pooling the responses
of rods over quite large retinal areas. This process involves a serious loss of
resolution, and although pooling may occur in some arthropods (Warrant
et al. 1996) the numbers of ommatidia involved are relatively small.
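
Equation (7.2) makes such comparisons straightforward. The sketch below takes the approximate
values from the text (the human daylight pupil about 100 times wider than a bee facet, here
assumed to be 20 μm, and the bee acceptance angle about 100 times wider than a human cone's),
with Pabs set to 1:

import math

def sensitivity(D, drho_deg, p_abs=1.0):
    """Eqn (7.2): S = 0.62 * D**2 * drho**2 * P_abs, with drho in radians.
    Only ratios of S are used here, so absolute units do not matter."""
    return 0.62 * D ** 2 * math.radians(drho_deg) ** 2 * p_abs

bee = sensitivity(D=20e-6, drho_deg=1.5)      # apposition ommatidium (assumed facet diameter)
human = sensitivity(D=2e-3, drho_deg=0.015)   # daylight pupil and cone acceptance angle
print(f"S(bee) / S(human) = {bee / human:.2f}")   # ~1: similar daylight sensitivity

# Widening the rhabdom, and hence drho, is the cheap route to sensitivity, at a
# steep cost in resolution: the 47 deg vs 2 deg acceptance angles of the Cirolana
# and Callinectes example below contribute a factor of (47/2)**2 by themselves.
print(f"(47/2)^2 = {(47 / 2) ** 2:.0f}")
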
Apposition eyes are generally suited to daylight conditions, unlike the
superposition eyes discussed in Chapter 8 which provide much brighter
images. Nevertheless a number of cases are known of animals with appo-
sition eyes that are active at night. The most impressive of these are the
Panamanian sweat bee (Megalopta genalis; Greiner et al. 2004) and the Indian
carpenter bee (Xylocopa tranquebarica; Somanathan et al. 2009). Compared to
related diurnal bees, these nocturnal species have a number of optical adap-
tations that increase sensitivity, notably larger corneal diameters and much
wider rhabdoms. These result in lower F-numbers and wider acceptance
angles (Δρ), and give an overall sensitivity gain, using eqn (7.2), of about 27
times. Similar optical adaptations have also been found in diurnally and
nocturnally foraging Myrmecia ants (Greiner et al. 2007), and even between
different castes of the same species (Narendra et al. 2011). The modest gain
in sensitivity that optical enhancements can supply is, however, far short
of the factor of 10⁵ between dim daylight and a moonless night when the
bees still fly and ants forage. Other factors such as longer integration times
and spatial pooling must also be involved, and have indeed been found
(Frederiksen et al. 2008).
When discussing sensitivity, ‘adaptation’ can have two meanings.
Different eyes may be adapted in the evolutionary sense to work perma-
nently in conditions of high or low illumination: night or day, deep-sea or
surface. Alternatively, the same eye can be said to be light- or dark-adapted
via reversible and temporary changes in its optical anatomy. In both cases
eqn (7.2) is the key to interpreting changes and differences. Figure 7.8

Fig. 7.8 Ommatidia from a shallow-water blue crab (Callinectes) and a deep-water isopod
(Cirolana). The differences in the dimensions of the components mean that the Cirolana eye is
about 4000 times more sensitive than that of Callinectes. The ommatidial acceptance angle,
however, is more than 20 times greater. c, cornea; cc, crystalline cone; r, receptor cell;
rh, rhabdom; b, basement membrane; rf, reflecting material.

shows ommatidia from eyes of two crustaceans, a shallow-water blue crab
Callinectes and a deep-sea isopod Cirolana, to illustrate the extent of perma-
nent adaptation to two extremes of lighting conditions. Values for D and
Δρ derived from the figure indicate that Cirolana is about 4000 times more
sensitive than Callinectes, the main effect coming from the wide rhabdom
acceptance angle in Cirolana which results from the massive rhabdom diam-
eter. The cost of high sensitivity, in terms of decreased resolution, is very
great: Δρ is about 47° in Cirolana compared with 2° in Callinectes. As with
other types of eye, sensitivity and resolution are in conflict, and to excel in
both requires an eye of prohibitive size.

Light and dark adaptation


Temporary light and dark adaptation mechanisms take a number of forms
in apposition eyes (see Autrum 1981). Some are illustrated in Fig. 7.9, and
include the following: (a) An iris mechanism just above the distal tip of the
rhabdom which restricts the effective value of Δρ in eqn (7.2). In the case of
craneflies (Tipulidae), which have an arrangement of six outer and two cen-
tral rhabdomeres, the iris cuts off the outer six in the light leaving only the
central pair. (b) A ‘longitudinal pupil’ consisting of large numbers of very
small pigment granules which move into the region immediately around
the rhabdom in the light and withdraw in the dark. The main effect of this
is to absorb the wave guided light that travels just outside the rhabdom
(see Fig. 3.7). This is replaced from light within the rhabdom, and this is
absorbed in turn, so that light is progressively ‘bled’ out of the rhabdom.

Fig. 7.9 Dark and light adaptation in apposition ommatidia. (a) Variable pupil in front of a
rhabdom (tipulid flies, water bugs). (b) Radial migration of pigment granules in the retinula cells
(flies, butterflies). (c) Changes in rhabdom size and shape (crabs, orthopteran insects). (d) Changes
in lens focal length ( Artemia). Nilsson (1989).

This mechanism is particularly important in higher Diptera (houseflies, etc.)
and in butterflies, and it can work in a matter of seconds. (c) The rhabdom
dimensions may themselves change, usually over a period of hours. This
mechanism may involve the resynthesis of photoreceptive membrane in the
dark, and its sequestration in the light. (d) Other photomechanical changes
include movements of the rhabdom towards or away from the lens, and
in the case of the small crustacean Artemia there is a change in the focal
length of the lens itself, which shortens in the dark and so increases Δρ in
eqn (7.2). In addition to these changes there are electrical and enzymatic
changes in the receptors themselves, that alter the gain of transduction and
increase response time in the dark.

The pseudopupil
Before describing the different ways that the resolution of apposition eyes
is matched to behaviour and ecology, it will be helpful to discuss an optical
curiosity known as the pseudopupil. This optical phenomenon provides us
with a powerful and non-invasive technique for studying the way resolution
varies across a compound eye. Insects and crustaceans with light-coloured
apposition eyes have an easily visible dark spot which has the alarming
property that it moves across the eye as the observer rotates around the
animal (Fig. 7.10). It seems as though one is being watched. In fact this is a
passive optical phenomenon that has nothing to do with the visual process
itself. Whatever the background colour of the eye, the region that images
the observer must look dark because it absorbs photons from the observer’s
direction. The dark spot (the pseudopupil) moves with the viewer because
different parts of the eye image different directions in space; it is almost
disappointingly simple. Often, however, the pseudopupil is more than a
dark spot, and has a pattern to it which on careful inspection turns out to
be an enlarged and slightly fuzzy image of the various structures around
the rhabdom tip, in the focal plane of each facet lens.
Figure 7.11 is an attempt to explain this. When one views the eye from a
close distance, rays (dark ones!) joining the tips of several rhabdoms to the
eye or microscope form a cone that appears to originate in the centre of the
eye. Similarly, rays leaving points just to the left of each rhabdom seem to
come from a point just to the left of the centre of curvature. Thus the ‘deep
pseudopupil’ has the same geometry as the structures in each focal plane,
but is composed of the superimposed contributions from many omma-
tidia. Using the principle of similar triangles, it can be seen that the deep
pseudopupil is enlarged relative to the original structures in the ommatid-
ium by a factor of r/f (eye radius divided by facet-lens focal length).

Fig. 7.10 Appearance of pseudopupils in eyes of insects and a crustacean. (a) The Australian bee
Amegilla photographed at 20° intervals around the eye (front at left). The pseudopupil appears
to move round the eye as the head is rotated. It is elongated dorso-ventrally, implying greater
vertical resolution than horizontal, and it becomes narrower with increasing angle from the front
meaning that the horizontal resolution decreases from front to side. (b) Appearance of a butterfly
eye ( Junonia villida) with a complex pseudopupil that shows the arrangement of the different
types of pigment cells in the plane of focus of the ommatidial lens system (see Figs. 7.5e and 7.11).
The darkest dot in the centre is the image of the rhabdom itself. (c) Antidromic deep pseudopupil
of the fly Drosophila melanogaster. The pseudopupil has the same geometry as the seven-
rhabdomere structure in Fig. 7.4d, with the same characteristic asymmetry. (d) Extreme vertical
elongation of the pseudopupil in the ghost crab Ocypode, related to increased vertical resolution
around the horizon.

Fig. 7.11 Explanation of the pseudopupil. Seen from outside, rays emerging from the centre of
each of several ommatidia appear to come from a single enlarged ommatidium situated at the
centre of curvature of the eye (local radius r). Other regions of the ommatidial focal plane
superimpose in the same deep region to give a pattern of width y that resembles that in each
ommatidium (width x). A good example of such a pattern is seen in Fig. 7.10b.

We can learn a great deal from the pseudopupil (Stavenga 1979). Its form
reveals structures in the ommatidium, without recourse to histology. For
example, in the butterfly pseudopupil (Fig. 7.10b) the pattern of pigment
cells surrounding the rhabdom tip is impressively displayed. The over-
all shape of the pseudopupil (whether it is elongated in one direction or
another) indicates asymmetries in the eye’s resolution. For example, a verti-
cally elongated pseudopupil generally implies a larger radius of curvature
for the vertical than the horizontal plane, which in turn means that more
ommatidia sample a given vertical angle than a horizontal one. An extreme
example of this is seen in ocypodid crabs (ghost crabs, fiddler crabs; Fig.
7.10d), but a similar pattern is found in many insects (Fig. 7.10a). However,
perhaps the most useful feature of the pseudopupil is that one can use it
to measure inter-ommatidial angles. If one rotates an insect’s head through
a degrees, and the pseudopupil appears to move across b facets, then the
inter-ommatidial angle is a/b degrees. Variations in inter-ommatidial angle
in different planes, and in different regions of the eye, can be mapped in
this way, revealing how the eye is organized to make the most of its limited
acuity (Horridge 1978).
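
Both uses of the pseudopupil come down to simple ratios; a minimal sketch, with hypothetical
numbers:

def deep_pseudopupil_magnification(eye_radius, focal_length):
    """The deep pseudopupil magnifies focal-plane structures by r/f."""
    return eye_radius / focal_length

def interommatidial_angle(rotation_deg, facets_crossed):
    """Rotating the head through a degrees while the pseudopupil crosses b facets
    gives a local inter-ommatidial angle of a/b degrees."""
    return rotation_deg / facets_crossed

print(f"magnification ~ {deep_pseudopupil_magnification(1000, 60):.0f}x")   # r = 1 mm, f = 60 um
print(f"inter-ommatidial angle ~ {interommatidial_angle(10, 6):.1f} deg")   # 10 deg over 6 facets
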
Often eyes are so dark that no pseudopupil is visible. This is the case
in many dipteran and hymenopteran insects. A useful technique is then
to try to obtain an antidromic pseudopupil. This method, pioneered by
Nicholas Franceschini, involves shining a light up through the base of the
head, illuminating the proximal ends of the rhabdoms or rhabdomeres. If
it works, light passes up the light-guiding structures and emerges from
their distal tips to give a luminous pattern that has the same geome-
try as a conventional (or orthodromic) pseudopupil, and can be exploited
in the same way. In flies this works particularly well (Fig. 7.10c), and
shows up the arrangement of rhabdomeres in the focal plane very clearly
(Franceschini 1975).

Ecological variations in apposition design


As we have seen, the optical design of apposition eyes means that there
is no spare room on the head surface, and what there is needs to be used
as efficiently as possible. This in turn means that the disposition of the
optical axes of the ommatidia in space will be matched to the visual needs
of the animal—to its ecology (Land 1989, 1999). A survey of the apposi-


Fig. 7.12 Three reasons why there should be differences in resolution across compound eyes. (a)
During forward flight close to vegetation the relative angular velocities of objects in the flow field
are greatest to the side. The resulting blur is matched by lowered horizontal resolution. (b) Pursuit
behaviour requires increased resolution, usually in the dorso-frontal quadrant. (c) Close to flat
surfaces most objects of interest are near the equator of the eye, where there is often a strip of
high vertical resolution.
tion eyes of insects and crustaceans leads to the conclusion that there are
three main patterns of acuity distribution that one can identify fairly eas-
ily. These are: (1) patterns related to the velocity flow-field encountered in
forward locomotion, especially flight; (2) ‘acute zones’ associated with pre-
dation or sex, these zones sometimes developing into separate components
of a double eye; and (3) horizontal strips of high resolution in animals
living in environments such as water surfaces and sand-flats where almost
all important activity takes place around the horizon. Figure 7.12 illustrates
these situations.

The forward flight pattern


When an animal is moving through the world, the objects in it appear to
move backwards across the eye, in a pattern that has become known as a
velocity flow-field (discussed in more detail in Chapter 9). Objects to the
sides move faster than those in front, and there is a point in the direction of
the animal’s travel (the ‘focus of expansion’) where there is no image motion
(see Fig. 9.7). Objects further away move more slowly than near objects. The
geometry of image motion is shown in Fig. 7.13, and the relation between
motion, position, and distance is summed up in the following expression:

θ̇ = V sin θ / s    (7.3)

where θ̇ is the angular speed of the image on the retina, V the animal’s
actual velocity, θ the angle between the particular object and the animal’s
heading direction, and s the object’s distance.

Fig. 7.13 The relation of retinal angular velocity (flow) to distance and angle. An animal moving
with velocity V will see an object at distance s moving across its retina at an angular velocity of θ̇
when the angle from the front is θ. From the geometry of the figure, θ̇ is given by (V sin θ)/s.

Clearly, near objects to the side are likely to move so fast across the ret-
ina as to cause blurring, and if this is the case it would be economical to
employ fewer receptors there, as high resolution is not usable. A butterfly
or bee spends much time flying past foliage, and reasonable values for s
might be 0.5 m, with V about 2 m s⁻¹. If θ is 90°, then eqn (7.3) gives the
speed across the retina as 4 radians per second, or 229° per second. A typical response
time of a light-adapted insect photoreceptor is 10 ms, which means that in
one response time the image will have moved 229° × 0.01, giving a blur
streak about 2.3° long. It follows that there is little point in having lateral-
pointing receptors closer together than 2–3°, however good the resolution at

Fig. 7.14 The organization of ommatidial receptive fields in a butterfly. The circles represent the
acceptance angles (Δρ) of a light-adapted butterfly. In Heteronympha this has an almost constant
value of 1.9° across the eye. Two trends are clear. Going from anterior (A) to posterior (P) the axes
of the ommatidia separate, so that by 120° from the front Δϕh (see Fig. 7.15) has roughly doubled.
From dorsal (D) to ventral (V) the ommatidial axes come together in the region of the eye's
equator, and then separate again. These two trends are seen in most flying insects.

the front of the eye may be. This seems to be borne out in practice. In the
butterfly Heteronympha merope, for example, the horizontal inter-ommatidial
angle increases from 1.4° in front to 2.6° at the side (Fig. 7.14).
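
The blur-streak arithmetic generalizes readily. The sketch below applies eqn (7.3) with the values
used above (V = 2 m s⁻¹, s = 0.5 m, a 10 ms photoreceptor response time) at several angles from
the heading direction:

import math

def retinal_angular_velocity(V, s, theta_deg):
    """Eqn (7.3): image speed in deg/s for an animal moving at V (m/s) past an
    object at distance s (m), at angle theta from the heading direction."""
    return math.degrees(V * math.sin(math.radians(theta_deg)) / s)

V, s, t_response = 2.0, 0.5, 0.01   # m/s, m, s
for theta in (10, 45, 90):
    omega = retinal_angular_velocity(V, s, theta)
    print(f"theta = {theta:2d} deg: {omega:6.1f} deg/s, "
          f"blur streak ~ {omega * t_response:.1f} deg per response time")
# At 90 deg this reproduces the ~229 deg/s and ~2.3 deg streak quoted above;
# towards the front the flow, and hence the blur, falls towards zero.
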
In describing acute zones it is helpful to indicate how densely the omma-
tidial array samples different regions of the surroundings (Fig. 7.15a). The
measure adopted here is the number of ommatidial axes per square degree.
This is easily calculated from the partial inter-ommatidial angles Δϕh and
Δϕv as defined in Fig. 7.15b (see Stavenga 1979). The axis density is then
1/(2Δϕh Δϕv), or 1/(√3Δϕ²/2) if the array is symmetrical.
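Both expressions are easily evaluated. The short sketch below (our own illustration of the two formulae; the 1° lattice at the end is hypothetical and included only to show the units) returns the density of ommatidial axes per square degree.

    import math

    def axis_density(dphi_h_deg, dphi_v_deg):
        # Ommatidial axes per square degree, from the partial angles of Fig. 7.15b.
        return 1.0 / (2.0 * dphi_h_deg * dphi_v_deg)

    def axis_density_symmetric(dphi_deg):
        # The same measure for a symmetrical hexagonal array.
        return 1.0 / (math.sqrt(3.0) * dphi_deg ** 2 / 2.0)

    # Purely illustrative: a symmetrical lattice with 1 degree spacing.
    print(axis_density_symmetric(1.0))    # about 1.15 axes per square degree
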
Bees, butterflies, and acridid grasshoppers are flying insects, and their
eyes all show increasing horizontal inter-ommatidial angles from front to
rear, consistent with these ideas (Fig. 7.10a). Non-flying insects, for example
many tettigoniid grasshoppers, have more or less spherical eyes, without
this gradient. In all the flying groups there is another, separate gradient
of vertical inter-ommatidial angles (Figs. 7.14 and 7.16a); they are smallest
around the eye’s equator, and increase towards both dorsal and ventral
poles. This results in a band around the equator with enhanced vertical
acuity (Horridge 1978; Land 1999). The most likely reason for this vertical
gradient is that the region around the eye’s equator contains the highest
density of information important to the animal, especially if it is an insect
that feeds on flowers. The need for higher acuity is obviously greatest
in that part of the field. Vertebrates that live in open landscapes (rabbits,
cheetahs) show a related pattern of increased acuity, but there it takes the
form of elongated regions of high ganglion cell density corresponding to


Fig. 7.15 (a) Representation of resolution as the number of ommatidial axes (black dots) in a given
(conical) solid angle in the space around an insect. In Figs. 7.16 and 7.17 the contours represent
equal numbers of ommatidial axes per square degree. (b) Convention adopted by Stavenga (1979)
for describing inter-ommatidial angles in an array where vertical and horizontal angles differ. This
array is of the bee/grasshopper type (hexagons on their points) but the system applies just as well
to dipteran fly/butterfly arrays (hexagons on their sides).

[Fig. 7.16 panels: (a) Locusta migratoria, front and side views; (b) Gerris lacustris, front view. Contour values give ommatidial axes per square degree.]

Fig. 7.16 Distribution of resolution, expressed as ommatidial axes per square degree (see
Fig. 7.15), for a typical flying insect (locust) and a ‘flatland’ insect that lives on the surface film
(water strider). In Locusta the resolution decreases from front to back, and from the equator to
the dorsal and ventral poles, as in butterflies (Fig. 7.14). In Gerris the equatorial streak is very
pronounced, and as the insert shows it is due to an extreme distortion of the lattice of axes, giving
much greater vertical resolution than horizontal. The pattern of facets on the eye itself does not
show this distortion.

the horizon, and known as ‘visual streaks’ (see Fig. 5.13, see also Hughes
1977). More extreme versions of the vertical gradient are found in animals
from really flat environments such as beaches and water surfaces (Fig.
7.16b). These are discussed in more detail later.
The combined effect of these two gradients on the overall density of
ommatidial axes is shown for a locust in Fig. 7.16a. Worker honey bees,
butterflies (Fig. 7.14), and female blowflies (Calliphora) show a similar
pattern, although in male flies and drone honey bees, this pattern is dis-
torted to give a more pronounced acute zone concerned with mate capture
(see Fig. 7.17a and b).

Acute zones concerned with prey capture and mating


Many insects and crustaceans have a forward or upward-pointing region of
high acuity, related either to the capture of other insect prey, or to the pur-
suit in flight of females by males. Where both sexes have the specialization
(mantids, dragonflies, robber-flies, hyperiid amphipods) predation is the
reason, but more commonly it is only the male that has the acute zone
(simuliid midges, hoverflies, mayflies, drone bees) indicating a role in
sexual pursuit. The acute zones vary considerably. In male houseflies and
blowflies (Fig. 7.17a) they may involve little more than a local increase in the

[Fig. 7.17 panels: (a) Calliphora erythrocephala (male); (b) Apis mellifera (drone); (c) Anax junius; (d) Phrosina semi-lunata. Contour values give ommatidial axes per square degree.]

Fig. 7.17 Distribution of resolution, expressed as the number of ommatidial axes per square
degree (see Fig. 7.15), in the eyes of four arthropods with acute zones concerned with capture
(Fig. 7.12b). (a) Male blowfly. Here the frontal acute zone appears to be an enhancement of the
‘forward flight’ pattern (cf. Fig. 7.16a) which is present in both sexes. (b) The pattern in drone bees
is quite different from workers, with a dorso-frontal acute zone in which the axis density is more
than three times that in the worker eye. (c) Some dragonflies have a weaker acute zone in the
direction of flight, and another band across the fronto-dorsal region, in which the axis density
(5 deg⁻²) is higher than in any other insect (see Fig. 7.18c). (d) Like a dragonfly, the mid-water
amphipod Phrosina has high upward resolution. This is in the dorsal part of a divided eye
(Fig. 7.19c), which has a very small field compared with the smaller-faceted ventral eye.

acuity of the ‘forward flight’ acute zone common to both sexes (see above).
However, in other insects and many crustaceans the acute zone may be
in a separate eye, as is the case with the dorsal eyes of male bibionid flies
(Fig. 7.18b), or the upper eyes of hyperiid amphipods (Fig. 7.19). In these
more extreme double eyes, the upward-pointing part is often specialized for
detecting other small animals against the sky, or—in the sea—against the
residual downwelling daylight.
Good examples of forward-directed acute zones are found in the praying
mantids, predators in which both sexes ambush prey. The eyes have large,
binocularly overlapping acute zones which are used to centre potential prey
before it is struck with the spiked forelegs. Mantids provide the only known
example in insects where prey distance is determined by binocular triangu-
lation. The inter-ommatidial angle (Δϕ) in Tenodera australasiae varies from
0.6° in the acute zone centre, to 2.5° laterally. Facet diameters decrease from
50 μm in the acute zone to 35 μm peripherally, but this is less of a decrease
than would be expected from diffraction considerations alone (Rossel 1979).

Fig. 7.18 Eyes in which variations in resolution are reflected in the sizes of the facets. (a) Male
Syritta pipiens, a small hoverfly in which the male has a region of enlarged facets in the dorso-
frontal region. It uses this to ‘shadow’ females which have no such acute zone (see Fig. 9.6).
(b) A male bibionid fly (Dilophus sp) with a divided eye in which the upper part provides higher
resolution for sighting and tracking females against the sky. The females lack the upper eye
altogether. (c) Upper part of the eye of the dragonfly Aeschna multicolor, showing the wedge of
enlarged facets corresponding to the line of enhanced resolution seen in Fig. 7.17c. This is present
in both sexes. (Photograph by Truman Sherk.) (d) An empid fly (Hilara sp.). Like Gerris (Fig. 7.16b)
this is a water surface-feeding insect and has a region of large facets and high resolution around
the equator. (Photograph by Jochen Zeil.)

Amongst crustaceans there are few known examples of frontal acute zones
concerned with predation, but perhaps the best documented is in the car-
nivorous water flea Polyphemus, which uses its single fused compound eye
to locate and track swimming prey. The Polyphemus eye has 130 ommatidia,
and includes a distinct acute zone of 22 ommatidia, where inter-ommatidial

angles are as small as 2°, which is remarkable in a 0.2-mm eye. The struc-
ture of the eye also indicates the use of polarized light in prey capture.
An extraordinary example of a predatory arthropod whose eyes have
enlarged frontal facets comes from an unnamed fossil found in the Cambrian
deposits of the Emu Bay Shale of South Australia (515 mya). The largest
lenses are 150 μm in diameter and 2.5 times wider than lenses in the periph-
ery. The facet size suggests that this was a dim-light hunter (Lee et al. 2011).
In many male dipteran flies an acute zone is associated with sexual pursuit,
and is typically situated 20–30° above the flight direction (Fig. 7.17a). In
Calliphora it is characterized by a low value for Δϕ of 1.07° compared with
1.28° in the female. The facet size is also larger, as expected from diffraction
considerations: 37 μm compared with 29 μm in the female. In houseflies and
probably in other flies there are also anatomical differences at the receptor
level that suggest that this region (the ‘love spot’ as it has been called) is
specifically adapted for improved sensitivity. This is no doubt due to the
very fast response times required for high-speed chasing. Male flies also
have a number of ‘male specific’ interneurons in the optic ganglia, which
are undoubtedly involved in the organization of pursuit behaviour.
In the small hoverfly Syritta pipiens the sex difference is particularly
striking. In the male’s acute zone Δϕ is about 0.6°, roughly a third of its value
elsewhere in the eye, or anywhere in the female eye (Fig. 7.18a). Drone
bees have a similar antero-dorsal acute zone, where the density of omma-
tidial axes is three to four times greater than anywhere in the female eye
(Fig. 7.17b). They use this region when they chase the queen, and can be
induced to chase a dummy queen on a string subtending only 0.32°, much
smaller than the ommatidial acceptance angle of 1.2°. This implies that the
trigger for pursuit is a brief decrease of about 6 per cent in the intensity
received by single rhabdoms.
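The size of that intensity decrease can be estimated crudely by comparing the solid angle of the target with that of the receptive field, treating both as uniform discs. The sketch below is our own back-of-envelope check, not the authors’ calculation; because the real acceptance function is roughly Gaussian rather than a sharp-edged disc, the simple ratio (about 7 per cent) only confirms that the quoted figure of about 6 per cent is of the right order.

    def fractional_dimming(target_deg, acceptance_deg):
        # Crude disc-on-disc estimate: ratio of the solid angles subtended by the
        # dark target and by the rhabdom's receptive field.
        return (target_deg / acceptance_deg) ** 2

    # Drone bee: a 0.32 degree dummy queen seen by a rhabdom with a
    # 1.2 degree acceptance angle.
    print(fractional_dimming(0.32, 1.2))   # about 0.07
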
An increase in the detectability of small objects can be achieved either
by reducing the rhabdom acceptance angle (Δρ, Fig. 7.6) so that a small tar-
get causes a large change in the signal on the rhabdom that images it, or
by increasing the numbers of photons available to the rhabdoms, thereby
reducing the noise against which the signal must be detected. Either course
of action requires a larger facet diameter D. In most of the examples dis-
cussed it seems that the increased facet diameter in the acute zone is ‘spent’
on reducing Δρ, but in one well-documented case that is not so. The male
blowfly Chrysomyia megalocephala has a ‘bright zone’ rather than an acute
zone, where Δρ is similar to the rest of the eye, but the photon catch per
rhabdomere is enhanced by an increase in both facet and rhabdomere diam-
eter (van Hateren et al. 1989). This increase, compared with elsewhere in the
eye, is about tenfold. It is not known why this fly has taken this particular
route, but one would guess that it mates in dim conditions.

There is an interesting exception to the rule that it is always the males
that have the acute zone. In pipunculid flies of the genus Chalarus the
females have greatly enlarged ommatidia in the fronto-dorsal region. These
flies parasitize leafhoppers (Homoptera) and the females have to locate
these on the undersides of leaves in order to lay their eggs. The males have
no equivalent need for keen eyesight.
Most of the animals just discussed have to detect their prey or mates
against a background of foliage. This is a far from easy task, as the tar-
get usually only differs from the background by virtue of its motion, not
because of static qualities such as brightness or texture. However, many
insects and crustaceans have simplified the problem by using the sky as a
background, against which any non-luminous object becomes a dark spot.
Thus one finds not only upward-pointing acute zones, but also double
eyes with one component directed skyward—or in the ocean, towards the
surface.
Dragonflies hunt other insects on the wing, and have acute zones
with a variety of configurations. Many in fact have two acute zones, one
forward-pointing, and presumably concerned with forward flight as dis-
cussed above, and another directed dorsally and used to detect prey (Figs.
7.17c and 7.18c). The migratory, fast-flying aeschnids have the largest eyes
and most impressive acute zones. 28 672 ommatidia have been counted in
one eye of Anax junius, which has the smallest inter-ommatidial angles
of any insect (0.24° in the dorsal acute zone), and facets of corresponding
size (62 μm). The dorsal acute zone takes the form of a narrow band of
high resolution extending across the upper eye along a great circle, 50–60°
up from the forward direction (Fig. 7.17c). The axis density (5 per square
degree) is twice that in the forward acute zone, and five times higher than
in a male blowfly. The dorsal acute zone is easily visible as a wedge of
enlarged facets (Fig. 7.18c). Presumably the great high-acuity stripe in Anax
is used to trawl through the air, picking out insects against the sky much
as the scan line on a radar set picks up aircraft. ‘Perching’ dragonflies such
as libellulids detect their prey from a stationary position on the ground or
vegetation, rather than on the wing. They then fly up on an interception
course, with the head and eyes tracking the prey independently of the
motion of the main body (Olberg et al. 2007). In Sympetrum species there
is a dorsally-directed acute zone with high acuity but low sensitivity, sur-
rounded by a field of low resolution but high sensitivity. This seems to be
an ideal combination for detecting fast-moving objects (with the sensitive
periphery) and then fixating and tracking them with the acute central zone
(Labhart and Nilsson 1995).
Male simuliid flies have divided eyes, and use the upper part to detect
potential mates against the sky. They can do this at a distance of 0.5 m,

when a female subtends an angle of only 0.2°. As in drone bees, this is a small
fraction of an acceptance angle. The eyes of male bibionid flies are similarly
divided (Fig. 7.18b) with larger facets and smaller inter-ommatidial angles in
the dorsal eye (1.6° compared with 3.7°, in Bibio marci). The upper eyes are
used exclusively for the detection of females; movement of stripes around the
lower eye evokes a strong optomotor turning response—the almost universal
visual behaviour used by insects to prevent involuntary rotation—but the
dorsal eye is quite unresponsive to this kind of stimulus (Zeil 1983).
Amongst the crustaceans, mid-water representatives of three groups
have specialized in double eyes: the hyperiid amphipods with apposition
eyes (Fig. 7.19), and the euphausiids and mysids with superposition eyes
(see Chapter 8). The hyperiid eyes present an extraordinary range of eye
anatomy from surface-living forms with single eyes, to mid-water species
with double eyes of various kinds (Phrosina, Phronima, and Streetsia are
illustrated in Fig. 7.19), and finally to the deep-living Cystisoma, a large and
very transparent animal which only has the upward-pointing component
of the eye. The logic of this trend seems to be that the deeper an animal
lives, the more important it becomes to devote as much photon-catching
power as possible to the residual downwelling light, because it is against
this dim background that potential food can be sighted from the silhouette
it casts. It is interesting in this context that a great many mid-water ani-
mals disguise their silhouettes in various ways. Hyperiid amphipods do
this by being transparent, whereas others such as the euphausiids (krill),
many fishes, and some squid use ventrally directed photophores to substi-
tute for the light blocked by the silhouette. In many cases the brightness
of this bioluminescence can be adjusted to match the background light. In
all the double-eyed hyperiids the upper part has larger facets and smaller
inter-ommatidial angles than the lower part. Phronima sedentaria, a remark-
able animal that protects itself with a transparent barrel hollowed out from
the body of a salp, is probably the most extreme in this respect. The inter-
ommatidial angle (Δϕ) is only 0.25° in the dorsal eye, compared with 10° in
the ventral. Interestingly, the acceptance angles of ommatidia in the dorsal
eye are about nine times greater than the Δϕ values, which in an ordinary
terrestrial eye would imply huge over-sampling of the image. However,
where the problem is to detect single dot-like objects and not to resolve
texture, it can be shown that this is not a real mismatch, because the appar-
ent contrast loss can be recovered by neural pooling later on. Phronima is
also unique in that the light focused by the lenses of the dorsal eye is con-
veyed the 5 mm distance to the ventrally-situated retina by light-guides 18
μm wide (Figs. 7.5d and 7.19b). The function of the lower eyes in Phrosina
and probably other double-eyed hyperiids appears to be to detect and track
luminous objects, such as bioluminescent animals (Land 2000).

Fig. 7.19 Remarkable apposition eyes of mid-water hyperiid amphipods. (a) Head of Phronima
sedentaria, from in front. The lenses of the upper eye cover the whole of the top of the head, and
send light via 5-mm long light guides to the inner pair of dot-like retinae near the jaws. The outer
pair of retinae serve the much smaller tear-drop-shaped lower eyes. (b) Lenses with attached light
guides dissected from the upper eye of Phronima (see also Fig. 7.5d). (c) Phrosina semi-lunata, a
relative of Phronima, also has a divided eye with separate retinae, but the two halves are not physically separate.
Total eye height 2.4 mm. Note the larger facets of the upper eye. The resolution distribution is
shown in Fig. 7.17d. In both species the eyes are very transparent, and the retina is condensed to
minimize its visibility to potential predators. (d) Cylindrical eye of Streetsia challengeri. Like the other
two species, the optical elements are transparent lens cylinders, but here arranged asymmetrically
around the retina, which is the dark sausage-shaped structure in the ventral region of the eye.
Curiously, the eye (right) has no field of view in the direction of the forward-pointing spike (left).
The eye is 7 mm long, and the body, off the page to the right, is unremarkably shrimp-like.

Horizontal acute zones


As we have seen, many flying insects have a zone of increased vertical acu-
ity around the horizon, no doubt reflecting the visual importance of this
part of the surroundings. The visual field of the locust in Fig. 7.16 shows this
clearly. There are environments where this region is even more important.
Sand and mud flats are good examples, and many of the crabs that inhabit
them have a narrow band of high vertical acuity around the equators of the
eyes (Zeil et al. 1989). In the ghost crab, Ocypode ceratophthalmus (Fig. 7.10d)

this band is about 30° high, with vertical inter-ommatidial angles as low
as 0.5°; by contrast the horizontal inter-ommatidial angles are four times
larger. There are interesting differences in this respect between crabs of the
flat beach and those of the rocky upper shore. The former tend to have tall
eyes close together, with a very pronounced equatorial band, whereas the
latter have rounder eyes far apart, with a weakly developed band. Zeil et al.
(1989) suggest that the tall-eyed crabs measure distance by the angle down
from the horizon to the feet or base of the object they are looking at (a strat-
egy which will only work on a flat surface) whereas the upper shore crabs
with their wide-spread eyes use some form of binocular stereopsis. Another
feature of the horizon is that objects that penetrate above it are necessarily
larger than the crab itself, and are thus likely to be predators. Fiddler crabs
(Uca pugilator) react defensively to moving objects above the horizon, but not
to objects of similar angular size or speed below it (Layne 1998).
Insects that fly over water have a similarly narrow equatorial field of
interest. Empid flies hunt close to the surfaces of ponds, again looking for
stranded insects, and they have a horizontal acute zone that can be recog-
nized by a linear region of enlarged facets around the eye (Fig. 7.18d). In
Rhamphomyia tephraea, vertical inter-ommatidial angles are only 0.5° in this
15° high region, rising to 2° above and below it.
Water surfaces themselves provide a similarly constrained field of view,
and water-striders (Gerris), which hunt prey stranded in the surface film,
have a narrow acute band imaging this region, as shown in Fig. 7.16b. This
has a height of only about 10°, centred on the horizon, and within this the
vertical inter-ommatidial angle in the frontal region is only 0.55°, which is


Fig. 7.20 Vision from below the surface film. Left : a floating object can be seen both above and
below the surface. Right distribution of resolution, expressed here as both inter-ommatidial angle
(∆ϕ) and acuity (νs = 1/(2∆ϕ)), around the eye of the back-swimmer Notonecta. The eye has two
regions of elevated resolution, corresponding to the upper and lower faces of the water surface.

close to the diffraction limit, and impressive in an eye with only 920 omma-
tidia (Dahmen 1991). The backswimmer, Notonecta, is in some ways even
more remarkable. Living just below the surface film, it looks up at the water
surface, and can view potential prey in two ways. With the ventral part of
the eye (Notonecta hangs upside down) it can look through the water surface
to view the top of the prey in air, and it can look below the surface to see
the bottom of the same prey through the water (Fig. 7.20). The two views
are separated by about 30° in the sagittal plane. It turns out that there are
actually two acute bands in Notonecta as there are in the fish Aplocheilus (see
Fig. 4.10), each imaging one of the views of the surface (Schwind 1980).

The anomalous eyes of strepsipterans and trilobites


We end this chapter with a discussion of a type of eye that seems to break all
the rules of compound eye design, and which comes close to straddling the
gap between simple and compound eyes. Strepsipterans are tiny parasites
of wasps and other insects, in which the males take to the wing for a few
hours in order to find mates. In Xenos peckii each eye of the male has about
50 lenses (Fig. 7.21) compared with 700 in the similarly sized Drosophila mela-
nogaster, but the lenses are large (65 μm diameter), and beneath each there
is a ‘retina’ of about 100 receptors (Buschbeck et al. 1999). Thus each facet
of this compound structure is quite unlike the ommatidia of other apposi-
tion compound eyes, and is actually a complete little eye (eyelet) in its own
right, with a field of view of about 30°. Within each eyelet the inter-receptor
angle is about 4°, which is comparable with other insects of similar size. A
problem with an eye with this design is that the many inverted images do
not join up. The problem is similar to that presented by the neural super-
position eyes of Diptera (Fig. 7.4c), and the cure is the same: the images
need to be re-inverted by an appropriate crossing over of the axons joining
the receptors to the lamina—the first optic ganglion. Buschbeck et al. (1999)
found that these crossings over (chiasms) do indeed exist, so the machinery
is present for producing a single erect image, as in a conventional compound
eye. There are no obvious ecological reasons why this odd group of insects
should have evolved so strange an eye. Whilst it is tempting to think that
each eyelet is resolving an image and that the sampling unit in this eye is
a single rhabdom within the eyelet, the available evidence does not support
this. Pix et al. (2000) made a comprehensive study of the optomotor response
of these animals to moving patterns (see Fig. 4.3) and concluded that the
sampling unit for this type of behaviour was the whole eyelet, not the indi-
vidual receptor. If this is true for other behaviours (it may not be) then why
should an eye with this design exist at all? The enigma continues.

Fig. 7.21 Eyes of a strepsipteran insect ( Xenos peckii ) which has a small retina behind each facet
(left), and a phacopid trilobite (Hollardops mesocristata) which probably had the same ‘eyelet’
design (right). The Xenos eye is about 0.3 mm in diameter (scanning electron micrograph by Elke
Buschbeck and Birgit Ehmer); the Hollardops eye is 9 mm long (trilobite kindly identified by Pierre
Morzadek). Notice the difference in size, number, and packing of the lenses compared with the
conventional eyes shown in Fig. 7.18.

It seems likely that eyes with this peculiar design have occurred once
before, in the extinct phacopid trilobites, best known from the genus Phacops
(Fig. 7.21). Like Xenos their eyes have small numbers of large (0.5 mm) some-
what separated facets, and are known as ‘schizochroal’ eyes as opposed to
the ‘holochroal’ eyes of the majority of trilobites (Levi-Setti 1993). The latter
seem to be ordinary apposition eyes, but the huge lenses of the schizo-
chroal eyes would only make sense as conventional compound eyes if the
animals had lived in conditions of near darkness. The more likely explana-
tion is that these were ‘eyelet’ eyes, like those of strepsipterans.

Summary

1. Apposition compound eyes are made up of ommatidia, in which each
receptor group receives an inverted image from its own lens. In con-
ventional apposition eyes the receptive rod (rhabdom) in each omma-
tidium does not resolve detail within each image, but acts as a detector
that measures the average brightness of a small region of space, typically
about 1° across. The overall erect image seen by the animal is the mosaic
formed by these adjacent fields of view.
2. In dipteran flies the situation is slightly different: the inverted image in
each ommatidium is resolved by seven separate receptors. However, the
responses of these are combined in the lamina (first synaptic layer) in
a way that pools their signals, giving enhanced sensitivity without loss
of resolution. As far as the fly is concerned the form and resolution of

the overall image is the same as in a conventional apposition eye. This
arrangement has been called ‘neural superposition’.
3. Because individual facet lenses are very small the images they produce
are severely limited by diffraction, so that the minimum resolvable angle
is rarely better than 1°. To improve on this requires larger lenses as well
as more of them, and the size of the eye rapidly becomes unsupportable.
Arthropods do achieve enhanced resolution, however, by having local
regions of enlarged facets and closer ommatidial axes, at the expense of
resolution elsewhere.
4. Much can be learnt about the way that apposition eyes sample the sur-
roundings from a study of the pseudopupil: this is the small dark spot
that appears to move across the eye as the observer moves around it.
5. Acute zones are found frontally in many flying insects; dorsally or dorso-
frontally in insects that capture other insects on the wing, either to mate
with them or to eat them; and around the horizon in arthropods that live
in a flat environment, such as crabs on a beach, or bugs that hunt in the
surface film of ponds.
8 Superposition eyes

Introduction—the nature of superposition imagery


From the outside, apposition and superposition eyes are almost indistin-
guishable. Both are convex structures with facets of similar dimensions,
and are clearly variants of the same general design. But there the resem-
blance ends. Internally there are several crucial anatomical differences: the
retina is a single sheet, not broken up into discrete ommatidial units as in
apposition eyes, and it lies deep in the eye, typically about halfway between
the centre of curvature and the cornea. Between the retina and the optical
structures beneath the cornea there is a zone with very little in it, the clear
zone, across which rays are focused—the equivalent of the vitreous space
in a camera-type eye (Fig. 8.1). The optical devices themselves are various;
as we shall see, they may be refracting telescopes, mirrors, or lens-mirror
combinations, although to a cursory examination most do not look very
different from the lens structures of apposition eyes.
The real surprise is optical. All superposition eyes produce a single
deep-lying erect image in the vicinity of the retina. Not only does this dis-
tinguish them from apposition eyes, which have multiple inverted images,
but also from camera-type eyes where the image is inverted. Clearly we are
dealing here with something quite out of the ordinary. Around the turn
of the twentieth century there were a number of successful attempts to
photograph these images. There is one in Exner’s monograph of 1891, and
a delightful portrait taken by H.E. Eltringham of his friend Sir Edward
Poulton ‘taken through the eye of a glow-worm’ and reproduced in Imms’
Insect natural history (1956, Plate VIIb). Our recent attempt to recreate this
photographic feat, in a firefly eye, is shown in Fig. 8.2 (right), where the


Fig. 8.1 Section through the refracting superposition eye of a nocturnal dung beetle, Onitis
westermanni, showing the cornea (c), the outer row of crystalline cones (cc), the wide optically
unencumbered clear zone (cz), and the convex rhabdom layer (rh) about halfway out from the
eye’s centre of curvature. (Photograph by Dr S. Caveney.)

Fig. 8.2 Apposition and superposition images. The photograph on the left shows the multiple
inverted images of a candle flame, taken through the facets of the eye of a robber fly. The erect
image on the right, of an influential nineteenth-century naturalist, was taken through the cleaned
cornea of the eye of a firefly, Photuris sp. Both were taken with the cornea in air, and the region
behind the optics in physiological saline.

single erect image is contrasted with the multiple inverted images of an eye
of the apposition type. It turns out that it is important to use a beetle (such
as a firefly) for this. Other insects, in particular moths, have superposition
eyes, as do crustaceans such as krill (euphausiids), but there the optical
structures that create the image are not joined to the cornea, and they are
swept away when the eye is cleaned to make a lens for photography. In

beetles, however, the optical elements are continuous with the cornea and
so survive the removal of the eye’s internal structures.
The credit for the discovery and elucidation of this remarkable piece of
optics is due to Sigmund Exner, who worked on the problem throughout the
1880s and published his complete findings in 1891. Exner showed that the
only way an erect image could be formed was for the optical elements to
behave in a rather strange way, as shown in Fig. 8.3a. Basically what each has
to do is not to form an image from a parallel beam as in a conventional lens,
but to redirect light back across the element’s axis, to form another parallel
beam on the same side of the axis (Fig. 8.3b). Exner realized that although
a single lens wouldn’t do the job, a two-lens telescope would, and he went
on to demonstrate (as well as he could with the technology of the time) that
such structures were indeed present in the superposition eyes of insects. In
the 1950s and 1960s Exner’s ideas ran into difficulties, when, armed with
a new device called an interference microscope, scientists started to look
for high refractive index structures that would make Exner’s telescopes a
reality. Unluckily, some of the first studies were on crayfish eyes (certainly
superposition eyes by their clear-zone anatomy) but with nothing that could
serve as a lens or lens combination. What should have been the optical ele-
ments appeared to be squarish blobs of low-refractive index jelly, with no
promising optical properties (see Fig. 8.13c). A plethora of unsatisfactory

Fig. 8.3 (a) Exner’s diagram of 1891 showing the ray paths needed to produce an image (B–B)
in a superposition eye. Note the ‘dog-leg’ way in which the rays must be bent by each optical
element. (b) Ray bending by a conventional lens, and by an element in a superposition eye.
Ordinary lenses cannot perform the required task.

Fig. 8.4 Three arrangements capable of ‘dog-leg’ ray bending (see Fig. 8.3). From left : A two-
lens telescope, a plane mirror, and a combination of lens and curved mirror. These correspond to
the elements in the three known types of superposition eye: refracting, reflecting, and parabolic
superposition.

theories arose as to how these eyes might work, but as it turned out they
were not necessary. In 1975 Klaus Vogt discovered that crayfish and their
relatives use a system that works with mirrors, not lenses. However, for all
the other eyes to which Exner had addressed his attentions the refracting
telescope mechanism was, and still is, the correct one. Later still, a third
mechanism was discovered in certain crabs that combined both refracting
and reflecting elements (Nilsson 1988). These three ways of achieving the
‘dog-leg’ ray-paths required for superposition imagery (refracting, reflect-
ing, and mixed or parabolic superposition) are shown in Fig. 8.4. In the
sections that follow they are discussed in turn.

Refracting superposition
Telescopes and lens cylinders
In a lens-based superposition eye the optical elements need to act as simple
inverting telescopes which redirect the entering beam of light back across
the axis, as shown in Fig. 8.3b. The most straightforward way to do this is
to have two lenses separated by the sum of their individual focal lengths,
with an image plane between them (Fig. 8.4). Exner realized that, given
plausible refractive indices and the curvatures of the structures revealed
by histology, there was not enough ray-bending power in each element of a

Fig. 8.5 Lens cylinders. (a) and (b). Exner’s diagrams of lens cylinders capable of producing a simple
inverted image, as in the apposition eye of Limulus, and dog-leg ray redirection as in the superposition
eyes of moths and beetles. Essentially, the single lens structure in (a) turns into (b) if its length is
doubled so that rays producing the first image are brought parallel again (i.e. re-collimated). Note
that (b) is analogous to the two-lens telescope in Fig. 8.4. (c) Right: two versions of the refractive
index gradient (n) from centre to periphery of a lens cylinder, required to produce the kind of imaging
shown in (a) and (b). The abscissa is the radial distance from the axis divided by twice the focal length.
The parabolic and hyperbolic secant gradients differ only slightly in their optical properties. Left:
recent measurements made by interference microscopy of the refractive index gradients in a variety
of lens cylinder structures in compound eyes. (a, euphausiid; b, firefly; c, moth; d, skipper butterfly; e,
Limulus). The measured and theoretical estimates of the gradient agree very well.

beetle eye to make this possible. He came up with an idea which was simi-
lar to Matthiessen’s solution for the fish lens (Chapter 4), namely, that the
structures must have an internal refractive index gradient. The result would
be that most of the ray bending would occur within the tissue, rather than
at its external surfaces. The pure form of this structure, a flat-ended cyl-
inder with a radial parabolic refractive index gradient, Exner called a lens
cylinder. He showed that, depending on its length, it could act as a single
lens (Fig. 8.5a) as in the apposition eye of Limulus (Chapter 7), or as a pair
of lenses making up an inverting telescope of the kind required for super-
position optics (Fig. 8.5b). Although Exner did not have the means in his
time of establishing whether beetles and moths had optical elements with
the required refractive index gradient, numerous studies since the advent of
interference microscopy have shown that his brilliant conjecture was correct
(Kunze 1979; Nilsson 1989). Figure 8.5c gives a selection of these measure-
ments. Interestingly, lens cylinder structures were invented de novo in the
1970s, manufactured from both glass and plastic by processes that provided
radial refractive index gradients. They are now used in optical fibre coupling
devices, and the lenses of some CD players.

Resolution and sensitivity


The geometrical optics of a superposition eye are shown in Fig. 8.6. The
peculiarities of this type of image formation mean that the nodal point of
the eye (the point through which rays pass undeviated) is at the centre of
curvature, and the focal length is the distance out from the centre to the
image. This conforms to the general definition of focal length ( f ) given in
eqn (3.1):

O/U = α = I/f

where O and I are object and image sizes, U is the (large) object distance,
and α is the angle in radians subtended by object or image at the nodal
point. The inter-rhabdom angle (Δϕ) is s/f, where s is the rhabdom sepa-
ration, just as in a camera-type eye. As in apposition eyes, the rhabdom
acceptance angle is a combination of the geometrical subtense of a rhabdom
(d/f ), and the width of the blur circle provided by the optics (see the discus-
sion in Chapter 7, and Fig. 7.6).
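These geometrical relations are simple enough to evaluate directly. The sketch below (our own illustration; the numbers are the superposition values adopted later in Table 8.1, namely a focal length of 0.5 mm and a rhabdom separation of 17.5 μm) returns the 2° inter-rhabdom angle of that worked example.

    import math

    def inter_receptor_angle_deg(s_um, f_um):
        # Inter-rhabdom angle: delta-phi = s / f, converted to degrees.
        return math.degrees(s_um / f_um)

    # Superposition values used later in Table 8.1: f = 0.5 mm, s = 17.5 um.
    print(inter_receptor_angle_deg(17.5, 500.0))    # about 2 degrees
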
In the past there has been a belief that superposition eyes suffer from
poor resolution, mainly because of the difficulty of conceiving how the
large numbers of ray bundles contributing to a single point on the image
could be directed there with sufficient accuracy. However, this reputation
seems not to be justified, except perhaps in extreme cases. In a careful

Fig. 8.6 Optical definitions that apply to superposition eyes. A, diameter of the superposition
pupil. D, facet diameter (note that A replaces D in the sensitivity eqn 3.6); f, focal length; d,
rhabdom diameter; ∆ϕ, inter-receptor angle ~ d/f; L, rhabdom length.

study of the eyes of dung beetles that fly at different times of the day and
night, McIntyre and Caveney (1998) found that in the day-flying Onitis belial
about 50 optical elements (the effective superposition aperture) contributed
to the image at any one point, and in the nocturnal O. aygulus the number
was close to 300. O. belial had a calculated rhabdom acceptance angle (Δρ)
of 2.2°, which is comparable with values from many apposition eyes, and in
O. aygulus Δρ was somewhat larger, 3.0°, which is still quite impressive for
an eye with such a huge aperture. These modelling studies have since been
confirmed by electrophysiological recordings from single receptors. In the
Australian day-flying moth Phalanoides tristifica the image quality has been
measured directly with an ophthalmoscopic method which uses the eye’s
own optics to view the retina and images on it (Fig. 8.7). The result was
that Δρ, the acceptance angle of a rhabdom when viewing a point in space,
was 1.58°, of which the optical point-spread function contributed only 1.28°.
This is itself only slightly larger than the half-width of the Airy diffraction
image from a single facet. Thus a superposition eye in which 140 elements
contribute to a point image has optics that are as good as in an apposition
eye with similar sized facets. Other day-flying moths (including skipper
butterflies) show similar excellent image quality.
Trying to provide a theoretical estimate of the resolving power of a
superposition eye is not straightforward. The ray bundles from different
facets in the superposition aperture travel different optical distances to

Fig. 8.7 Image quality in a refracting superposition eye. Left: the rhabdom mosaic of the eye
of the day-flying moth Phalanoides tristifica, viewed through the eye’s own optics using an
ophthalmoscope (Land 1984). Each rhabdom is enclosed in a reflecting sheath and so appears
light compared to the brown surrounding pigment. The inter-receptor angle (∆ ϕ) is 1.9° and the
physical separation of the rhabdoms is 16 μm. Right: image on the moth’s retina of an object (the
year the photograph was taken) in the moth’s field of view. This gives a good impression of the
image quality in a superposition eye. The figures are 17° high.

the image (unlike the rays in a single large well-corrected lens) and so do
not, in general, interfere constructively at the image point. Thus one might
expect that there is no improvement in image quality compared with that
provided by the Airy disc diameter of a single facet. However, Stavenga
(2006) has pointed out that the superposition pupil can be thought of as
composed of a series of rings, or annuli, of facets, and that within
each annulus the distance to the image will be the same. Hence the light
from each such annulus will be coherent. This should produce an overall
diffraction pattern considerably narrower than the single-facet Airy disc.
Optically measured resolution is not as good as this prediction suggests,
however, and Stavenga attributes this difference to focusing errors inher-
ent in the optical design of superposition eyes, somewhat analogous to
spherical aberration in a single lens eye.
Size for size, superposition eyes are more sensitive than apposition eyes,
which is why they are most commonly encountered in animals such as
moths and fireflies that are active at night, or in marine crustaceans from
the mid-water depths where the light regime is similar to moonlight on
the surface. To quantify the sensitivity difference we should consider eyes
of similar size, and the same resolution (the same Δϕ). The calculation is
given in Table 8.1. The additional assumptions are made that the effective
pupil in the superposition eye is 10 facets wide, and that the focal length
of the apposition ommatidium is 0.1 mm. These are both realistic values.
The result is that the superposition eye is a hundred times more sensi-

Table 8.1 Sensitivity calculation for apposition and superposition eyes of the same size and resolution

Parameter                                             Apposition       Superposition
Radius (r)                                            1 mm             1 mm
Inter-receptor or inter-ommatidial angle (∆ϕ)         2° (0.035 rad)   2° (0.035 rad)
Focal length (f)                                      0.1 mm           0.5 mm
Receptor separation (superposition: s = f∆ϕ)          –                17.5 μm
Receptor diameter
  (superposition: d = s)                              –                17.5 μm
  (apposition: s = f∆ρ, where ∆ρ = ∆ϕ)                3.5 μm           –
Aperture
  (apposition: D = r∆ϕ)                               35 μm            –
  (superposition: A = 10 facet diameters)             –                350 μm
Sensitivity (eqn 3.6), S = 0.62D²∆ρ² or 0.62A²∆ρ²     0.93 μm²         93 μm²

tive than a similar sized apposition eye, and in truly nocturnal moths and
beetles, which have even larger superposition pupils, the sensitivity can be
ten times higher again. Because rays entering the outer parts of the super-
position pupil are less effective than central rays, these figures somewhat
overestimate the sensitivity gain of superposition eyes, but not by more
than a factor of two.
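The hundredfold figure follows directly from eqn (3.6) with the values in Table 8.1. The short sketch below is simply a check of that arithmetic (our own, written for this comparison).

    def sensitivity_um2(aperture_um, delta_rho_rad):
        # Eqn (3.6): S = 0.62 * A**2 * delta_rho**2 (aperture in um, angle in rad).
        return 0.62 * aperture_um ** 2 * delta_rho_rad ** 2

    # Apertures from Table 8.1: a single 35 um facet versus a 350 um
    # superposition pupil, both with delta_rho = delta_phi = 2 deg = 0.035 rad.
    s_app = sensitivity_um2(35.0, 0.035)     # about 0.93 um^2
    s_sup = sensitivity_um2(350.0, 0.035)    # about 93 um^2
    print(s_sup / s_app)                     # a factor of 100
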

Fig. 8.8 Many superposition eyes show eye-glow when observed from the same direction as
the illuminating beam. Parallel light is focused to a spot on the retina, and then reflected back
by the tapetum to emerge through the same superposition pupil that it originally entered. Left:
dark adapted reflecting superposition eye of a decapod shrimp (Leander); right: light adapted eye
in which rays from the outer zones of the pupil have been cut off (Fig. 8.9) leaving a small dark
central pseudopupil.

Eye-glow and the superposition pupil


Some decapod crustaceans with reflecting superposition eyes, and most
moths, have a reflecting layer (tapetum) behind the rhabdoms. Its function is
the same as the tapetum in the eye of a cat: to double the light path through
the photoreceptors and so improve their photon catch. In some diurnal moths
a reflector also surrounds each rhabdom, optically isolating it from its neigh-
bours. In dark-adapted eyes the tapetum causes the eye to glow when viewed
from the same direction as the illuminating beam. In some diurnal moths the
glow is always visible (Macroglossum, Plate 3). The mechanism is similar to
that in a cat’s eye. The optical system forms a point image of the light source
on the tapetum, or close to it, and this point acts as an emitter of light which,
on passing through the optics again, emerges as a roughly parallel beam.
If the optics are good, that is to say they really do bring a parallel beam
to a point in the image, then the patch of glow seen at the surface of the eye
will have the same diameter as the beam that entered the eye. This is the
superposition pupil—the amount of eye surface from which rays contribute
to each point on the image (Fig. 8.8). Eye-glow can also provide a useful
test of image quality. If the glow can only be seen over a narrow angle (a
few degrees) from the direction of the illuminating beam, then the retinal
image must itself be very small. On the other hand, if the glow can be seen
over a wide angle (as is the case with many deep-sea shrimps, for example),
this indicates either that there is a large blur circle on the retina, or that the
tapetum is situated a long way from the focus.

Light and dark adaptation


The high sensitivity of most superposition eyes means that they must pro-
tect their visual pigment in daylight, and so need adaptation mechanisms
that can reduce image brightness by several orders of magnitude. The main
mechanism of light adaptation in superposition eyes consists of pigment
movements that result in the progressive interception of rays from the outer
zones of the superposition pupil (Fig. 8.9). This reduction may ultimately
result in light from only a single facet reaching a single point in the image,
which is essentially the apposition condition.
The eye-glow (Fig. 8.8) provides a means of monitoring the process of light
and dark adaptation. As oblique rays across the clear-zone are cut off during
light adaptation (Fig. 8.9), so the brilliance of the glow and the size of the
patch reduce, often disappearing completely. In the dark they slowly return.
In insects with refracting superposition eyes the main pigment movement
is a longitudinal inward migration of granules in both the primary pigment
cells (around the crystalline cones) and secondary pigment cells (extend-
ing from cornea to basement membrane; see Autrum 1981). In the dark the

Fig. 8.9 Light adaptation in a superposition eye. The most common light adaptation mechanism
involves the inward migration of dark pigment from between the crystalline cones. This
progressively cuts off more oblique rays from the outer zones of the superposition pupil and so
reduces the light flux on the retina.

granules are bunched up between the crystalline cones, and with the onset
of light they extend inwards, over a matter of minutes, to occupy much of
the clear zone. In many crustaceans, especially decapods such as crayfish
with reflecting superposition eyes, there is also an outward movement of
pigment in the proximal pigment cells. In the dark the pigment is held
beneath the basement membrane, but in the light it moves up between the
rhabdoms, preventing rays from entering them obliquely and thus reducing
the width of the cone of light that each rhabdom can accept.
Interestingly, the trigger for pigment migration in some moths is not pro-
vided by photoreception in the rhabdoms themselves. In the crepuscular sph-
ingid moth Deilephila, Nilsson et al. (1992) showed that a region immediately
beneath each crystalline cone initiates pigment migration, when illuminated
with ultraviolet light, and that the much deeper-lying rhabdoms are not
involved. However, in the owl-fly Ascalaphus, a day-flying neuropteran with
double superposition eyes, the pigment movements can be triggered from
both the region below the cones, and also from the rhabdoms themselves.

Single and double eyes


In superposition eyes major departures from spherical symmetry are
rare because the geometry of the eye is constrained by the shared optics.
However, in euphausiids (krill) bi-lobed eyes are common (Fig. 8.10a and b).
The two components are usually optically separate structures, so that each

can be regarded as a separate eye with its own symmetry. The focal length
of the dorsal eye is usually longer than that of the ventral eye, and this
results in smaller inter-ommatidial angles: in Stylocheiron maximum Δϕ is 1.2°
in the dorsal eye compared with 2.6° in the ventral. In Nematobrachion boopis
the lower eye is almost absent—a parallel with the amphipod Cystisoma
(Chapter 7). It has been shown in the double-eyed Nematoscelis atlantica that
the dorsal eye is always kept pointing upwards, towards the daylight, while
the animals themselves swim at various angles to the vertical (Land 1980).
Thus it seems very likely that the role of the dorsal eyes is to look for dark
silhouettes against the downwelling light, as suggested for the hyperiid
amphipods. The function of the wide angle, low resolution ventral eye is
less clear, but given that it images the dark of the abyss its most likely role
is the detection of bioluminescent objects.
In the genus Stylocheiron some species show a reduction in the numbers
of facets in the dorsal eye, and this seems to be related to the depths at
which the animals swim. In deeper-living species there are typically several
hundred facets altogether. In S. elongatum, which lives at 180–420 m in day-
time, each row contains 13–16 facets, but in S. affine (40–140 m) this reduces
to 4–8, and in S. suhmi (0–50 m) there are only 3. This appears to be a crude
but effective way of ensuring that the different eyes provide similar retinal
illumination levels. The ultimate reduction of the superposition eye design,
to only one facet, is seen in a tropical shallow-water mysid Dioptromysis pau-
cispinosa (Fig. 8.10c). Nilsson and Modlin (1994) describe this shrimp as ‘car-
rying binoculars’, which is an almost exact analogy. The eyes are double:
the main part is a conventional superposition eye, but the accessory region
is quite different. It has only one large facet with a single giant crystalline
cone, beneath which is a retina of 120 rhabdoms, compared with 800–900 in
the rest of the eye. This accessory eye is thus a unique example of a single-
lens superposition eye—effectively a simple eye but with erecting rather
than inverting optics. The accessory eye functions as an acute zone, with a
minimum separation of rhabdom axes (Δϕ) of 0.64°—impressive in an eye of
this small size. The 44 μm diameter of the giant facet means that the diffrac-
tion limit is much lower than in the rest of the eye, which has 16 μm facets.
The most peculiar feature of these already strange eyes is that the giant facet
and its acute zone normally point backwards! Nilsson and Modlin (1994)
found that occasionally the eyes are rotated, directing the acute zones for-
wards, where they are presumably used for a higher resolution scrutiny of
potential food or mates. Unlike other double-eyed arthropods, there are no
good reasons for thinking that these special eyes normally point upwards.
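The claim about the diffraction limit can be made concrete using the λ/D half-width of the Airy diffraction image introduced earlier in the book. The sketch below is our own illustration, and assumes green light of 0.5 μm wavelength, which is not stated in the text.

    import math

    def diffraction_halfwidth_deg(wavelength_um, facet_um):
        # Half-width of the Airy diffraction image, lambda / D, in degrees.
        return math.degrees(wavelength_um / facet_um)

    # Dioptromysis: the 44 um giant facet versus the 16 um facets elsewhere,
    # assuming 0.5 um light.
    print(diffraction_halfwidth_deg(0.5, 44.0))   # ~0.65 deg, close to the 0.64 deg delta-phi
    print(diffraction_halfwidth_deg(0.5, 16.0))   # ~1.8 deg
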
Bi-lobed eyes with superposition optics are uncommon amongst insects.
As mentioned earlier, owl-flies (Ascalaphus) have double superposition eyes.
Male mayflies have a pair of dorsal superposition eyes (Plate 3), which

Fig. 8.10 Double superposition eyes. (a) Double eye of a living mid-water euphausiid
Nematoscelis megalops, eye height 2.5 mm. (b) Section through the eye of Nematoscelis atlantica,
height 0.9 mm. Note the two separate retinae (r) and the much larger clear zone in the upper
eye. The structure at the bottom right is a photophore, whose function is to disguise the eye’s
silhouette by counter illumination. (c) The eyes of the mysid Dioptromysis paucispinosa seen from
behind, showing the single giant crystalline cone (44 μm) surrounded by the (16 μm) crystalline
cones of the conventional eye. (d) Horizontal section of the eye in (c) showing the ordinary
superposition eye (compare with Fig. 8.1) and the giant crystalline cone ( gc), with its own higher-
resolution retina, or acute zone (az). From Nilsson and Modlin (1994).

they use for sighting females against the sky, in a similar way to bibio-
nid flies (see Fig. 7.18b). However, the lower eyes, present in both sexes
and responsible for other visual activities, are of the apposition type. Like
the euphausiid eyes the field of view of the dorsal eye is small, and it is
adjusted to the environmental circumstances of the species: those swarm-
ing in woods with small gaps in the canopy have the narrowest fields.
The hummingbird hawkmoth, Macroglossum stellatarum, a spectacular
fast-flying diurnal nectar feeder, does seem to have overcome the spheri-
cal constraints of classical superposition optics (Warrant et al. 1999). It has
a visibly non-spherical eye across which there are considerable variations
in resolution (Plate 3). These are not reflected in the pattern of facet sizes,
as is often the case in apposition eyes (Chapter 7) but in the spacing of the
retinal receptors, and also in variations in focal length across the eye. The
effect is to produce an anterior acute zone coupled with a horizon streak,
which is very similar to the pattern of resolution across the eye of a but-
terfly (see Fig. 7.14). How the optical variation is achieved without com-
promising image quality (which is excellent throughout the eye) is not yet
known, but presumably this entails a systematic variation across the eye of
the angular magnifications of the crystalline cones themselves. So far this
is the only known superposition eye that does depart substantially from a
spherical shape, without actually becoming divided.

Superposition and afocal apposition: the eyes of butterflies

Butterflies and moths are classified together in the Lepidoptera, and are
undoubtedly very closely related. Most butterflies (skippers (Hesperiidae)
are the exception) have eyes that behave in most respects as apposition
eyes. They have long narrow rhabdoms abutting the bases of the crys-
talline cones, no clear zone, and complex pseudopupils (Fig. 7.10). Many
moths, on the other hand, have refracting superposition eyes with wide,
deep-lying rhabdoms, clear zones and eye-glow. Transitions between the
eye types must have occurred a number of times within the moths, as well
as between moths and butterflies. A very similar picture emerges in the
beetles, most of which have apposition eyes, but a substantial number of
nocturnal and crepuscular groups, including the dung beetles and the fire-
flies, have superposition optics.
It is not very easy to see how it is possible to get from one type of eye
to the other, without going through an intermediate which doesn’t work.
Apposition eyes use simple lenses and superposition eyes two-lens tel-
escopes (or the equivalent lens cylinder devices), and there does not seem
much room for compromise. In the case of butterflies we do know the
answer: their apposition eyes actually have an extreme form of superposi-
tion optics in the ommatidia, in which the proximal lens in each telescopic
pair has become not weaker, as one might have guessed, but extremely
powerful (Nilsson et al. 1988).
The way this works is shown in Fig. 8.11. As in a normal superposition
eye a combination of the curved cornea and a weak lens cylinder in the
distal region of each crystalline cone results in the formation of an image
within the crystalline cone, about 10 μm in front of its proximal tip. This
focused light then encounters a lens with an extraordinarily short focal
length—about 5 μm. This lens has thus a power (1/f, where f is in metres)
of 200 000 dioptres, or 10⁵ times the power of a pair of reading glasses.
The discovery of this lens involved taking thin frozen sections from the
tiny region at the base of the crystalline cone, and examining their image-
forming properties. Figure 8.12a, where a crystalline cone is seen against a
wing scale, gives some idea of the size of the structures involved. To our
delight the last 10 μm of the cone produced excellent images (Fig. 8.12b),
from whose size we could work out the optical power (Nilsson et al. 1988).
The effect of this second lens is to bring the light focused by the first (dis-
tal) lens back into a parallel beam (Fig. 8.11b and c), again just as in a super-
position eye. The essential difference is that, whereas in a superposition eye
the magnification of the telescopic pair of lenses rarely exceeds −2, here it
is much greater. The large difference in the focal length of the distal and


Fig. 8.11 Optics of butterfly eyes. (a) Anatomy of a single ommatidium of a butterfly. C, cornea;
CP, corneal process; CC, crystalline cone; SPC, secondary pigment cell; PPC, primary pigment cell;
Rh, rhabdom; RC, receptor cell. (b) The ommatidial optics are here represented by three lenses: the
cornea using surface refraction (nodal point at N), a weak lens representing the distal part of the
crystalline cone, and a strong lens in the proximal stalk (see Fig. 8.12b). Their combined effect is to
produce a system with an internal focus (I) and a parallel output beam that matches the diameter
of the rhabdom (Rh). (c) The same system with an input beam off axis by 2°. The resulting output
beam is 12.8° off axis, close to the limit that the rhabdom can accept by internal reflection. (d)
The optical system has the secondary property that the rhabdom tip is imaged on the cornea. This
explains why waveguide modes produced by interference in the rhabdom can be seen, magnified,
when the cornea is viewed by light reflected back through the rhabdom by the mirror at its base
(Fig. 8.12c and Plate 3).

proximal lenses gives an overall magnification of −6.4, in the nymphalid butterfly Heteronympha merope.
This high magnification has two important consequences, illustrated
in Fig. 8.11. The first is that the beam that emerges from the proximal
tip makes an angle with the axis that is 6.4 times greater than the beam
that entered the facet from outside. A ray making an angle of 1° with the
facet axis emerges at 6.4°, and similarly a beam 3° wide at the cornea
emerges into the rhabdom as a 19.2° wide beam. The significance of this
is that a rhabdom with a refractive index of 1.36 will just contain (by total
internal reflection) a beam 22° wide, which in turn means that the accept-
ance angle of the ommatidium will be limited to just over 3°: light making
higher angles with the rhabdom wall will escape and be absorbed by the
surrounding pigment. Thus in this kind of eye the ommatidial acceptance
angle is limited principally by the refractive index of the rhabdom, not (as
in a conventional apposition eye) by its diameter (Fig. 7.6). The second effect


Fig. 8.12 Butterfly eyes. (a) A single crystalline cone from the eye of a small blue butterfly (Zizina
labradus) seen against a wing scale. Cone length is 40 μm. (b) The image of a letter F produced
by a 5μm-thick slice from the proximal part of a crystalline cone from the butterfly Heteronympha
merope. At this point the cone is only 5 μm wide. (c) The images of 2 lines, 10° apart, seen in the
corneal facets of Heteronympha. The images result from light that has entered the rhabdoms, been
reflected from the mirrors at the base of each rhabdom, and re-emerged from the rhabdom tip.

of the magnification is to reduce the diameter of the beam leaving the base
of the crystalline cone by a factor of about 9 (angular magnification × refrac-
tive index), compared with that entering the facet. The entering beam is
limited by the facet diameter, typically about 20 μm. The beam leaving the
crystalline cone and entering the rhabdom is squashed down to a diameter
of 2.1 μm, which is indeed close to the diameter of a butterfly rhabdom.
Thus rhabdom diameter and facet diameter are related, and between them
determine the effective aperture of the ommatidium, and hence its sensitiv-
ity. Bright-light butterflies tend to have smaller facets (20 μm) and narrow
rhabdoms (1.5–2 μm), whereas the crepuscular Australian butterfly Melanitis
leda has 35-μm facets and 5-μm rhabdoms.
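The geometry of the last two paragraphs can be collected into a few lines of arithmetic. The sketch below (again Python, used only as a scratch-pad; it is a crude geometric estimate based on the rounded numbers quoted above for Heteronympha, not a full optical model) reproduces the off-axis angle, the acceptance-angle limit, and the beam diameter at the rhabdom tip:

# Consequences of the high angular magnification of the afocal apposition eye,
# using the rounded figures quoted in the text (a rough geometric estimate only).
m = 6.4                 # angular magnification of the two-lens 'telescope'
n_rhabdom = 1.36        # refractive index of the rhabdom
contained_beam = 22.0   # degrees: widest beam held in by total internal reflection
facet = 20.0            # micrometres: facet (aperture) diameter

print(m * 2.0)                   # a ray 2 degrees off axis emerges at 12.8 degrees
print(contained_beam / m)        # acceptance-angle limit: about 3.4 degrees
print(facet / (m * n_rhabdom))   # beam diameter at the rhabdom: about 2.3 micrometres

With these rounded inputs the beam diameter comes out at about 2.3 μm, in reasonable agreement with the 2.1 μm quoted above.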
Sadly, the story is yet more complicated. The small dimensions of the
rhabdoms mean that diffraction and waveguide mode effects slightly alter
the geometric story of the last few paragraphs. As this kind of apposition
eye does things almost the opposite way round from a conventional appo-
sition eye, none of the discussion in Chapter 7 is strictly appropriate. The
effects of these wave-optic phenomena are discussed in some detail else-
where (Nilsson et al. 1988) and here we will only mention one, the appear-
ance of waveguide modes at the cornea. As Fig. 8.11d shows, and as we
have seen in the last paragraph, the facet lens is imaged onto the rhab-
dom tip, reduced by a factor of 9. Light paths are reversible, so it is also
true that the rhabdom tip is imaged, magnified, in the plane of the facet
lens. Thus if there are interesting optical phenomena in the region of the

rhabdom tip we should see a magnified version of them at the cornea. This
turns out to be true in a rather spectacular way (Plate 3). In Chapter 3 there
was some discussion of waveguide modes, the patterns that result from the
interference of light trapped within a fibre such as a rhabdom. These are
seen beautifully in the facets of butterflies. The other feature of butterfly
eyes that makes this possible is the mirror-like tapetum at the base of each
rhabdom (Fig. 6.10d). A narrow beam of light directed down the axis of a
facet is guided through the rhabdom to the mirror, is reflected back up the
rhabdom, and the small proportion that has not been absorbed emerges
from the tip. This beam, with its mode structure, is displayed in the facet.
The appearance of the mode patterns depends on the butterfly. All have
the simplest (first-order) pattern which is bright in the middle and dimmer
towards the edges. Larger butterflies (most of the nymphalids) with wider
rhabdoms also show the more complex bi-lobed second-order mode (shown
in Plate 3), and the crepuscular Melanitis, with the widest rhabdoms, has
higher-order modes that give a pattern that is almost uninterpretable.
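Why wider rhabdoms support more modes can be seen from the standard step-index waveguide parameter V = πd·NA/λ, where the numerical aperture NA = √(n1² − n2²); roughly speaking, only the fundamental mode propagates when V is below about 2.4, and further modes appear as V rises. The sketch below is indicative only: the rhabdom index of 1.36 comes from this chapter, but the surrounding index (1.34) and the wavelength (500 nm) are our assumptions for illustration.

import math

def v_number(d_um, n_core=1.36, n_surround=1.34, wavelength_um=0.5):
    # Step-index waveguide parameter; the surround index and wavelength are assumed values.
    numerical_aperture = math.sqrt(n_core**2 - n_surround**2)
    return math.pi * d_um * numerical_aperture / wavelength_um

for d in (1.5, 2.0, 5.0):   # narrow, typical, and Melanitis-sized rhabdoms, in micrometres
    print(d, round(v_number(d), 1))
# With these assumed values V comes out at roughly 2.2, 2.9, and 7.3: the narrowest
# rhabdoms sit near the single-mode limit, while the widest support several higher-order modes.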
Interestingly, these mode patterns change as the eyes dark and light
adapt. As in dipteran flies (Chapter 7), butterflies have dark pigment in the
region around the rhabdom, and this moves into contact with the rhabdom
wall at high light levels (Fig. 7.9b). This absorbs the portion of the modal
light which travels outside the rhabdom (Fig. 3.7), and there is more of
this extra-rhabdom light in the higher-order modes. These then disappear,
leaving only the first-order mode. As the higher-order modes are wider
(in angular spread) than the first-order mode, their loss has the beneficial
effect of reducing the acceptance angle of the ommatidium in bright light,
thus improving acuity. In Melanitis, with the widest rhabdoms, the effect is
a halving of the acceptance angle from 3° to 1.5° (Land and Osorio 1990).
Another consequence of the presence of a mirror at the base of each
rhabdom is that the apposition image can actually be seen at the eye sur-
face (Fig. 8.12c). This is because the pattern displayed across the facets is an
attenuated version of the light that has entered each rhabdom, traversed it
twice and re-emerged from its distal tip. This image can only be seen if the
eye is illuminated from a wide source, and it fades in a few seconds as the
pupil mechanism bleeds light out of the rhabdoms.
What we have seen is that butterfly eyes behave as apposition eyes,
because light entering a single facet is received by a single rhabdom. They
are called ‘afocal’ because light is not focused on the rhabdom tip as in
most apposition eyes, but enters the rhabdom as a parallel beam. In their
fundamental optical design, however, these ommatidia remain of the super-
position type, constructed from two-lens telescopes. This makes it easy to
understand how different lepidopteran groups managed to switch read-
ily from the diurnal (apposition) version of the afocal eye to the nocturnal

(superposition) version. To become nocturnal, the powers of the distal and proximal lenses must become more equal, the receptor layer moves to a
deeper location, and gradually more and more facets contribute to the
image. There are no blind intermediaries.
There still remains a problem of origins. By common consent, the first
compound eyes in the wormy ancestors of the arthropods had to be of the
orthodox, focal, apposition, type (Nilsson 1989). The butterfly apposition eye
helps us to understand the relationship between apposition and superposi-
tion optical types, but where did it come from? Could it have originated
from an ordinary apposition eye by a gradual increase in the refractive
index of the proximal part of the crystalline cone? This need have had no
deleterious consequences to image formation; indeed the ‘afocal’ type of
apposition eye can resolve marginally better than the ordinary ‘focal’ type.
Or it may be that butterflies developed their unique type of eye from the
superposition moth eye, which had already acquired superposition optics
by some other route. The problem remains unsolved.

Reflecting superposition
There was a period, between about 1955 and 1975, when shrimps and their
relatives couldn’t see. The use of interference microscopy in the 1950s had
shown that the optical structures that should have been producing the images
in these eyes had none of the required qualifications. Instead of being lens
cylinders with high refractive indices and a radial gradient, they were square
structures of low refractive index, made of more or less homogeneous jelly
(Fig. 8.13c). This is hardly a good basis for any kind of optical system. The


Fig. 8.13 Reflecting superposition eyes. (a) Eye of the decapod shrimp Palaemonetes varians.
Note the square facet array, the silvery appearance, and the dark central facets of the region
contributing to the image in the light adapted state. (b) Distal tips of the mirror boxes in the eye of
a living crayfish. (c) Tapered mirror box in a shrimp (Palaemon squilla) drawn by Grenacher in 1879.
The structure is 63 μm deep and 30 μm along each top edge.

solution to this enigma was first provided by Klaus Vogt in 1975, working on
crayfish eyes (a full account is provided in Vogt 1980). He found that the jelly
blobs were silvered, and that they were not lenses at all, but mirror boxes (Fig.
8.13b). Shortly afterwards the same mechanism was found in a shrimp, and it
now appears that this reflecting system is the rule throughout the long-bodied
decapod crustaceans—the shrimps, prawns, lobsters, crayfish, and the anomu-
ran squat lobsters. The hermit crabs and the true crabs (Brachyura), however,
have either apposition or parabolic superposition eyes (see below). The reflect-
ing mechanism does not occur outside the Decapoda; even the euphausiids,
sister group to the decapods, have refracting superposition eyes that resemble
the eyes of moths much more closely than those of decapod shrimps.
In essence the reflecting superposition mechanism is extremely simple.
In 1975 Vogt wrote:
‘Rays from an object point entering through different facets are su-
perimposed not by refracting systems as in other superposition
eyes but by a radial arrangement of orthogonal reflecting planes
which are formed by the sides of the crystalline cones and the
purine layers surrounding them’.
As Fig. 8.14 shows, the mirrors direct light to a common focus. Mirrors
are inverters, just like the telescopes in refracting superposition eyes (Fig.
8.3b), and so the ray-bending that the two kinds of optical element perform
is almost identical. However, problems start to arise when one tries to work
out what will happen to rays that are not in the idealized central plane
shown in Fig. 8.14b. In general, rays in oblique planes will not encounter just
one side of each mirror box, but two. What happens to such rays? Do they,
like the singly reflected rays in Fig. 8.14, all reach a common focus?
It turns out that the square arrangement of the facet array (almost unique
to the decapod crustaceans) is crucial here. The principle is that of the ‘cor-
ner reflector’. Corner reflectors—two mirrors at right angles—are occasion-
ally encountered in hairdressers and clothes shops, where they have the


Fig. 8.14 Comparison of ray paths in a refracting (a) and reflecting (b) superposition eye. Both
redirect the rays as required by Fig. 8.3.


Fig. 8.15 (a) The principle of the corner reflector. Whatever the angle of incidence, incoming rays
are reflected through two right angles, and so emerge parallel to their original path. (b) In the
mirror box of a shrimp, the incident and reflected rays are not in the same horizontal plane, but
viewed along the axis of the structure (below) the corner reflector behaviour is evident. (c) Ray
paths for rays that are not in the central plane of the eye (unlike Fig. 8.14b). The mirror boxes act as
‘corner reflectors’ in which rays reflected from two sides emerge in the same plane as the incident
rays to reach a focus at F. This is the condition for obtaining a clear image over a wide angular
field. See text for further explanation. C is the eye’s centre of curvature.

disconcerting property that wherever you move they continue to reflect back your image. The reason for this peculiar property is shown in Figure
8.15a. A ray reflected from the two mirrors must be rotated through a total
of two right angles, which means that it will return parallel to its original
direction, no matter what angle the ray initially makes with the mirror pair. In

other words, apart from a slight lateral displacement of the reflected ray,
a corner mirror behaves as though it were a single mirror, but one that is
always at right angles to the incoming ray. This property turns out to be
very useful, for example in radar reflectors for ships and buoys, and it is
also the property that makes reflecting superposition possible. Radar reflec-
tors reflect in three dimensions, and thus require three surfaces mutually at
right angles, rather than two.
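The 'returns parallel' property is easy to confirm with elementary vector reflection: bouncing a ray off two perpendicular mirrors negates both of its in-plane components, so the ray leaves anti-parallel to the way it came in. A minimal sketch (the two mirrors are simply placed along the x and y axes for the purpose of the example):

# Reflect a two-dimensional ray direction in two mirrors set at right angles.
def reflect(v, normal):
    # Standard mirror reflection: v' = v - 2(v.n)n, with n a unit normal.
    dot = v[0] * normal[0] + v[1] * normal[1]
    return (v[0] - 2 * dot * normal[0], v[1] - 2 * dot * normal[1])

incoming = (0.8, -0.6)                  # any oblique ray direction
once = reflect(incoming, (0.0, 1.0))    # first mirror, lying along the x-axis
twice = reflect(once, (1.0, 0.0))       # second mirror, at right angles to the first
print(twice)                            # (-0.8, 0.6): the ray returns parallel to its original path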
Consider first an arrangement for producing a point image by reflec-
tion that does not involve corner reflectors (Fig. 8.15c, left-hand side). This
consists of a series of concentric ‘saucer rims’, each angled to direct rays to
a common focus; Fig. 8.14b would then be any radial section through this
array. The problem here is that such an array of bands has a single axis, and
only rays nearly parallel to that axis form an image; other rays are reflected
chaotically around the stack. The alternative is to replace the single reflect-
ing bands with an array of corner reflectors—two sides of each mirror box
(Fig. 8.15c, right-hand side). This substitution is possible because each cor-
ner behaves as though it were a single, appropriately oriented, mirror. Rays
behave almost as though they had encountered a mirror strip in the saucer
rim array. However, the beauty of the corner-reflector arrangement is that
the orientation of each mirror pair is no longer important, unlike the situ-
ation in the single mirror array. Thus, the structure as a whole no longer
has a single axis and can be used to make a wide-angle eye. Clearly, this
mirror-box design only works with right-angle corners and not hexagons,
which accounts for the square facets (Fig. 8.13).
Various other features of these eyes are important for their function. The
mirror boxes must be the right depth, two to three times the width, so that
most rays are reflected from two of the faces, but not more. Rays that pass
straight through are intercepted by the unsilvered ‘tail’ of the mirror boxes,
and Vogt (1980) showed that its refractive index decreases in such a way that
appropriate critical angle reflection continues to occur through the clear zone.
Finally, there is a weak lens in the cornea of the crayfish. This lens ‘pre-focuses’
the light that enters the mirror box, thus giving a narrower beam at the retina.
All these features provide an image generally comparable in quality to that
produced by refracting superposition optics (Bryceson and McIntyre 1983),
although it does seem that rays which make too many or too few reflections
contribute to measurable stray light (glare) in the image on the retina.
Given the ancient origin of the decapods, the reflecting superposition
mechanism presumably evolved within that group back in the Cambrian.
Interestingly, the larval stages of decapod shrimps have apposition eyes
with hexagonal facets, which change at metamorphosis into superposi-
tion eyes with square facets (Nilsson 1989). This transformation strongly
suggests that the apposition eye is ancestral, and that the development of

reflecting superposition occurred as a brighter image was needed for dimmer, benthic, life. The retention of apposition eyes into adult life by the
brachyuran crabs, normally regarded as ‘advanced’ decapods, no doubt
reflects the crabs’ littoral or semi-terrestrial environment, in which light
levels are generally high.

Parabolic superposition
This final type of eye is the most recently discovered (Nilsson 1988) and
the most difficult to understand. From an evolutionary viewpoint, it is also
the most interesting because it has some characteristics of apposition eyes,
as well as both other types of superposition eye (Fig. 8.16). It was first dis-
covered in a swimming crab (Macropipus = Portunus). Each optical element
consists of a corneal lens, which on its own focuses light close to the proxi-
mal tip of the crystalline cone, as in an apposition eye. Rays parallel to the

Fig. 8.16 Parabolic superposition. Left: rays are focused by the cornea to a point near the bottom
of the crystalline cone. However, oblique rays are intercepted by the silvered walls of the cone and
redirected back across the axis to form a beam contributing to the superposition image (see Figs.
8.3 and 8.4). Right: view from above. In this plane rays are focused more strongly onto the wall of
the crystalline cone, and then brought parallel again by the same cylindrical lens. In both planes a
parallel input beam emerges as a parallel output beam.

axis of the cone enter a light-guiding structure that links the cone to the
deep-lying rhabdom. Oblique rays, however, encounter the side of the cone,
which has a reflecting coating and a parabolic profile. The effect of this mir-
ror surface is to recollimate (make parallel) the partially focused rays, so that
they emerge as a parallel beam that crosses the eye’s clear-zone, as in other
superposition eyes. This relatively straightforward mechanism is complicated
because rays in the orthogonal plane (perpendicular to the page) encounter
rather different optics. For these rays, the cone behaves as a cylindrical lens,
thus creating a focus on the surface of the parabolic mirror. The same cylin-
drical lens then recollimates the rays on their reverse passage through the
cone (Fig. 8.16, right). This mechanism has more in common with refract-
ing superposition. Thus, this eye uses lenses and mirrors in both apposition
and superposition configurations and it would be the ideal ancestor of most
kinds of compound eye. Sadly, the evidence is against this, as all the eyes of
this kind so far discovered in crustaceans are from the brachyuran crabs or
the anomuran hermit crabs, neither of which is an ancestral group to other
crustaceans (Nilsson 1989). However, this eye does demonstrate the possi-
bility of mixing mirrors and lenses, thus providing a viable link between
the refracting and reflecting superposition types. This is important because
such transitions do appear to have occurred. The decapod shrimp Gennadas,
for example, has a perfectly good refracting superposition eye, whereas its
ancestors presumably had reflecting optics as in related shrimps (Nilsson
1990). A variant of parabolic superposition, which uses square mirror boxes
with parabolically tapering sides rather than cylindrical lenses, occurs in
Xanthid crabs and Atalophlebid mayflies. The latter provide the only known
example of a mirror-based superposition mechanism in insects.

Summary

1. Superposition eyes produce real, erect images on a retina separated from the optical elements by a clear zone.
2. In refracting superposition eyes the optical elements may be lens cylin-
ders or corneal lens/lens cylinder combinations. These act as inverting
telescopes.
3. Resolution can be as good as in an apposition eye with similar-sized fac-
ets, and the sensitivity is usually much greater than in an apposition eye
of the same size. Double eyes, with different resolution in the two parts,
occur in both insects and crustaceans.
4. Superposition eyes often exhibit eye glow, when they are illuminated from
the viewing direction. This results from a reflecting tapetum behind the
retina.

5. Butterflies have afocal apposition eyes. This system is closely related to refracting superposition, except that the telescopic elements have a much
higher magnification than those of moth superposition eyes. Light enters
the rhabdom as a parallel beam, rather than as a focused image as in
ordinary apposition eyes.
6. Shrimps, crayfish, and lobsters have superposition eyes in which the
optical elements are not lenses but mirrors. The reflecting surfaces are
at right angles to the eye surface, and form a square array. Most rays
encounter two faces of each square, and this corner-reflector configura-
tion makes it possible for the eye to form an image over a wide field of
view.
7. A third mechanism, parabolic superposition, makes use of a lens/mirror
combination to form the dog-leg ray path necessary for superposition
imagery. This is found in certain crabs.
9 Movements of the eyes

Sampling the world in space and time


Most of the chapters in this book have been concerned with the ways that
eyes produce images, and how these images are sampled by the retina. This
approach gives the impression that eyes are static devices which register the
scenes in front of them rather like surveillance cameras, recording the posi-
tions and motions of objects within a fixed field of view. This, however, is
a very misleading picture of the way that eyes deal with the world, because
most animals with good eyesight have mobile eyes and images that change
moment by moment. They move either because the animal they are attached
to moves in the world, or because the head moves on the body, or because the
eyes move in the head. In humans and many other animals all three kinds of
motion contribute to eye movement, and the field of view of the eyes is rarely
still for more than a few tenths of a second at a time. The eyes are not simply
dragged around by the platform they are attached to. In primates particularly,
vision is a very active process; our eyes search the surroundings for informa-
tion rather than simply absorbing it. To complete our account of the way that
eyes work we need to explore this dynamic aspect of their operation. Eyes
sample in time, as well as space. Before considering the roles of eye movements
in animals with advanced spatial vision, we will briefly consider how vision
and behaviour are related in animals with the simplest of visual systems.

The simplest forms of visual guidance


As soon as photoreceptors evolved they became associated with the control
of locomotion. As an animal moves or turns in its environment the pattern


of light and shade on receptors on the head or body changes, and as a result
the receptors and the locomotory musculature become involved in a feedback relationship, with movement affecting photoreception, and vice versa.
This relationship takes a number of forms. These were much studied in the
late nineteenth and early twentieth century and were classified, notably by
Fraenkel and Gunn (1940) whose book forms the basis for our commentary
here. Other useful accounts are given by Carthy (1958), Schöne (1984), and
Nilsson (2009). The behaviours described here correspond to classes 1 and
2 in terms of the visual tasks outlined in Chapter 1.
The simplest forms of locomotory control (kineses) involve no more than
the speeding up or slowing down of body movement, and can be achieved
by photoreceptors with no optical specializations at all. The teleology is
straightforward: if the environment is favourable it makes sense to go slowly
or stop, but if it is not then it is better to speed up and move to somewhere
better. For many animals this will often mean seeking out a cool moist
environment, and typically this is in a dark part of the surroundings. In
orthokinesis it is forward locomotion that is controlled; in klinokinesis it is the
rate of turning. In this case turning more tends to keep an animal in the
same place, and turning less causes it to move on.
Taxes are more sophisticated, and do require the photoreceptors to be
directional, though not necessarily capable of spatial vision, as defined in
Chapter 1. In the simplest type, klinotaxis, a single photoreceptor or receptor
cluster, shielded from behind, is swung left and right by movements of the
head as the animal moves forward (Fig. 9.1a). This kind of progression is
typical of fly larvae. To move to a dimmer region the animal would need
to turn more to the left if it encounters a higher intensity when the head
swings to the right, and continue in this way until right and left swings
produce the same low intensity. Other animals, such as the protist Euglena,
perform something functionally similar while rotating around their long


Fig. 9.1 (a) Klinotaxis. The organism swings its single directional detector from side to side, and
then swings further to the less illuminated side. (b) Tropotaxis. An animal with two symmetrical
detectors can turn directly towards the less or the more illuminated side.

axis. The other common form of taxis, tropotaxis, involves the simultaneous
stimulation of symmetrically placed directional receptors on the two sides
of the head (Fig. 9.1b). Here it is not necessary to swing the head, but sim-
ply monitor the relative intensities on the two sides, and turn towards the
dimmer or the brighter side depending on which environment is preferred
(negative and positive tropotaxis). When stimulation is symmetrical the
animal stops turning. One test for tropotaxis is that a unilaterally blinded
animal in uniform light should continue turning indefinitely.
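The decision rule behind tropotaxis can be written in a single line: turn by an amount proportional to the difference between the two detectors, with the sign of the response deciding whether the taxis is positive or negative. The toy function below (the gain and the intensity values are invented for the example, and a positive result is taken to mean 'turn towards the side of the left detector') also reproduces the unilateral-blinding test, since a permanently dark detector leaves a constant imbalance and hence endless turning.

def tropotaxis_turn(left, right, sign=+1, gain=1.0):
    # sign = +1: turn towards the brighter side (positive tropotaxis)
    # sign = -1: turn towards the dimmer side (negative tropotaxis)
    return sign * gain * (left - right)

print(tropotaxis_turn(0.8, 0.3, sign=-1))   # brighter on the left, negative taxis: turn right
print(tropotaxis_turn(0.5, 0.5, sign=-1))   # balanced stimulation: no turning
print(tropotaxis_turn(0.0, 0.4, sign=-1))   # left detector 'blinded': a constant turning bias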
Although they are operationally well defined, in practice it is often dif-
ficult to distinguish cleanly between the different types of behaviour. In
particular, flatworms orienting to light can show combinations of kineses
and taxes that defy simple analysis. The presence of a pair of very simple
eyes does not preclude an animal from going faster or slower, or swinging
its head. In the early twentieth century these confusions led to a great deal
of debate, and it came to be recognized that one simply cannot make state-
ments like ‘Planaria is a tropotactic animal’.
The term telotaxis was used by Fraenkel and Gunn to describe a
fixation-like process, in which an object (not necessarily a light source) is
kept aligned in a particular direction, as defined by a particular eye region.
Here we are in quite different territory. Eyes capable of supporting such
behaviour are necessarily capable of spatial vision: the resolution of features
of the surroundings according to their direction of origin. We are now in
the world of what would ordinarily be called ‘seeing’, with all the complex-
ity that implies (visual tasks 3 and 4 in the classification of Chapter 1). At
this point the kinesis-taxis scheme starts to lose its utility, and we need a
different vocabulary to deal with the ways that resolved images are used to
provide the information required for the organization of behaviour.

Eye movements in animals with spatial vision


In animals such as ourselves or insects, whose eyes produce well-resolved
images, it becomes possible to determine both the identity and location of
objects in the surroundings. There are, however, problems in obtaining this
information when the eye is itself moving. Motion blur, it turns out, is a
major problem, and the better the resolution of the eye the more destructive
it becomes. Thus eye movements are not just concerned with shifting gaze
direction around a scene, and a major component of the eye movement
strategies of most animals is gaze stabilization. For example, in humans a
clockwise head movement is typically accompanied by an anti-clockwise
eye movement, so that although the eyes are seen to move relative to the
head, what they are really doing is keeping the image on the retina station-
ary, despite motion of the head or body. Thus, paradoxically, eye movements

are just as concerned with keeping gaze still as they are with changing its
direction. The underlying reason for this problem is that the photoreceptors
themselves are quite slow: it takes 10 milliseconds or more for a receptor to
respond fully to a change in light intensity, and this means that changes in
the image that occur faster than this are lost. Just as in photography where
it is important to avoid blur by keeping the camera still, so with eyes.
In what follows we will first examine the nature of our own eye move-
ments and ask how they contribute to vision. This is followed by a brief
survey of the eye movements of other animals, to see whether there are
common patterns across the animal kingdom. It turns out that there are,
and this leads to the next question: why should this be so? Finally, we dis-
cuss some interesting exceptions to these rules, animals with scanning eyes
that sweep their gaze across the scene in a manner that our eye-movement
system simply prohibits. Why are they doing this, and how do they get
away with it?

How humans acquire visual information


When we look at a scene we have the impression that it is stationary with
respect to our viewing point, and that we see all parts of it with full clar-
ity and resolution. If something or someone moves within the scene we see
that quite appropriately as motion in the world ‘out there’, but we see very
little evidence of motion brought about by our own movements: of eyes, or
head, or body. We may be dimly aware that we ‘pan’ gently around a scene:
‘She let her eyes wander over his . . .’. But this is an illusion. Our eyes take
in the scene before us in a staccato barrage of saccades. These are the brief
fast eye movements that convey the centre of gaze from point to point with
a frequency of up to three per second. Our eyes do not wander at all: they
jump around! Although these movements have been known about for well
over a century, it was not really until a famous series of illustrations of eye
movements across scenes were published by Alfred Yarbus (1967) in his
book Movements of the eyes that this quite counterintuitive notion of how we
view the world became compelling. Between saccades we have periods of
nearly stationary viewing—fixations—that last for about 300 ms, or longer
if our attention is caught, and this ‘fixate and saccade’ strategy seems to be
our main way of doing visual business with the world.
Figure 9.2 shows how this strategy works in practice. When doing real
tasks (in this case filling a kettle prior to making a cup of tea) saccades
are aimed at points in the surroundings from which visual information is
needed to execute the job in hand. So the eyes go to the kettle, then the
sink, the kettle lid, the taps, and then the water stream, as the changes
in the task require as it progresses. Notice that this is not a ‘random

walk’. Behind the scenes the parts of the brain responsible for eye move-
ments ‘know’ just what is required of them by the motor program the
brain is trying to execute. Further examples of eye movement strategies
in tasks of everyday life can be found in Looking and Acting (Land and
Tatler 2009).
Intriguing though the question is, we will not discuss here the rea-
sons why we do not see our own saccadic eye movements. That problem
has remained essentially unresolved for more than a century. Instead
we will look at why it is that we have this saccade and fixate strategy
in the first place. An idea that comes to mind immediately is that the
fovea, being small (in angular terms it is only about 1° across, the sub-
tense of a thumbnail at arm’s length) has to be redirected from place to
place in order to give us the high resolution information we need from
different parts of the scene. Whilst this is certainly crucial for primate
vision, we have to remember that most vertebrates do not have foveas,
and yet they too use the same saccade and fixate strategy. It was Gordon
Walls, famous for his book The vertebrate eye who first pointed out that
the reason why our early fishy ancestors adopted this strategy was to
keep gaze still during locomotion, so that surroundings could be seen
without motion blur.
Their origin (eye movements) lies in the need to keep an image
fixed on the retina, not in the need to scan the surroundings (Walls
1962, p. 69).
Very early in the vertebrate lineage the powerful vestibulo-ocular and opto-
kinetic reflexes evolved, whose function was to stabilize the eye against
movements of the head (see Figs. 9.2b, 9.3, and Box 9.1). However, as an
animal turns as it moves through the environment, stabilization alone is
not enough; the eyes must move to re-centre gaze from time to time or they
will finish up in one or other extreme position. Saccades are the means of
achieving this. Their impressive speed reflects the need to keep the time
they blur the image to a minimum. Humans spend about 10 per cent of
their waking hours engaged in saccades, during which vision is degraded,
either through blur or ‘saccadic suppression’ when vision is actively sup-
pressed. Amazingly, this amounts to about one and a half hours of near
blindness each day.
Humans and other primates, but probably not many other vertebrates,
have the ability to track objects smoothly, provided they do not move too
fast or too unpredictably. Smooth tracking, or pursuit, is more than just a
fixation on a moving target. Numerous studies have shown that the pursuit
mechanism has a sophisticated control system capable of anticipating the
motion of objects, when this is at all predictable. The system also has the


Fig. 9.2 The human saccade and fixate strategy in action. (a) Record showing the first 25 fixations
made by the author when filling a kettle prior to making a cup of tea. Dots are fixations, lines are
the paths of saccades. From Land et al. (1999). (b) The roles of eye and head movements in fixation
sequences. The records show horizontal eye rotations relative to the head (eye), head rotation
in space (head), and gaze rotation in space (gaze = head + eye) for the final series of fixations in
Fig. 9.2a, from the kettle lid to the water stream. Notice that the eye record contains both fast
saccades and slow movements that are the exact opposite of the head movements (the vestibulo-
ocular reflex). The result is that gaze fixations are steady, and unaffected by head movements.
Dashed line shows the straight ahead direction of the eyes; the other two traces have been
arbitrarily displaced on the ordinate for clarity.

capacity to separate moving foreground objects from the stationary background. Indeed, it is necessary to suppress the effect of background motion,
as this normally contributes to the optokinetic response whose function is
to prevent relative motion of retina and image. In smooth tracking that
clamp has to be removed.

Box 9.1 Reflexes that stabilize the eye


In vertebrates, the eyes are prevented from involuntary rotation relative
to the surroundings by two reflexes, the vestibulo-ocular reflex (VOR)
and the opto-kinetic reflex (OKR). They both cause the eyes to counter-
rotate in their sockets when the head rotates, thus keeping the retinal
image more or less stationary.
The VOR is driven by the rotation detectors in the semicircular canals
of the inner ear (Fig. 9.3). When the head rotates the fluid in these canals
lags behind the surrounding structures. Mechanosensitive hair cells, at-
tached to a jelly-like cupula protruding into the fluid, are bent by the rela-
tive motion of the fluid, and fire action potentials in proportion to head
rotational velocity. This signal is received by the vestibular nuclei and
passed on to the oculomotor nuclei that innervate the eye muscles, with
the result that the eye moves in the opposite direction to the head rota-
tion. The six eye muscles, operating in antagonistic pairs, provide stabili-
zation about all three axes. Although the signal provided by the vestibular
system is one of velocity, the signal to the eye muscles is essentially one
specifying the rotational position of the head. This means that somewhere
in the system the signal is integrated from velocity to position, and the
exact location of this integrator has been a subject of much interest. VOR
is not a feedback system; the eye movements themselves do not affect the
semicircular canals. This means that (like throwing a dart) the system
needs to be well calibrated, and to have a gain (eye-movement size/head-
movement) of exactly −1. Interestingly, divers have problems because
their visual world, seen through an air-filled mask, moves faster across
the retina than the speed at which the head rotates (by a factor of 1.33, the
refractive index of water). They find difficulty in living with two gains for
the VOR, one for land and another for under water.
The OKR is a feedback loop which uses signals from motion detectors
in the retina to activate the eye muscles which thus ‘null out’ any residual
movement between image and retina (Fig. 9.3). If the image moves clock-
wise across the retina, a clockwise movement of the eye itself will de-
crease the relative motion. (Note that an anti-clockwise rotation of the
head will cause the same image movement). The real function of this
reflex is to clamp the eye to the visual world, which (barring earthquakes)
can reasonably be supposed to be stationary. Typically, however, this re-
sponse is studied by placing the subject (human or animal) inside a rotat-
ing striped drum (see Fig. 4.2). The eyes (and/or head and body) then
follow the stripe pattern, and this generally leads to a pattern of eye move-
ments known as nystagmus. The eyes follow the stripe pattern for a while,


Fig. 9.3 Diagrams showing how the vestibulo-ocular reflex and the optokinetic reflex
stabilize the eyes. In the vestibulo-ocular reflex the semicircular canals (SSC) measure
rotational head velocity (h). This signal is relayed via the vestibular nuclei (VN) to the
oculomotor nuclei (ON) which innervate the eye muscles. The result is a movement of the
eyes (e) equal and opposite to the head movement. In the optokinetic reflex any movement
of the image, whether brought about by movement in the world (w), or by movement of the
head, is detected by ganglion cells in the retina. The signal passes to the nucleus of the optic
tract (NOT) and thence back to the eye muscles via the oculomotor nuclei. The result is a
cancellation of the original image motion. Note that, in this feedback loop, what the eye sees
is an error signal, the ‘slip’ speed across the retina (w–e). This has to be amplified in the brain
to provide an eye speed comparable with the original external disturbance.

and then flick back to a more central position before resuming the follow-
ing motion. The resulting sawtooth-like behaviour is said to have fast
(resetting) and slow (following) phases. Because of the way the behaviour
is evoked, it appears that its function is to cause the eye to pursue moving
targets, and this has led, historically, to considerable confusion. Under
normal circumstances the velocity of the surroundings is zero, and as the
system operates to minimize the velocity of the image on the retina, the
result will normally be a stationary eye. To confuse matters further,
humans and other primates do have true pursuit behaviour, but this oper-
ates only for small foveated targets; the optokinetic response involves the
whole image, and actually has to be over-ridden when a small target is to
be tracked. The two stabilizing reflexes operate over different velocity
ranges. OKR is slow, and for oscillating backgrounds is only effective up
to about 1 Hz. VOR, on the other hand, operates up to 10 Hz (it is quite



difficult to dislodge gaze by shaking your head) but fails at very low fre-
quencies. Between them the two reflexes keep the image almost station-
ary over the whole range of rotational speeds likely to be encountered by
a moving animal.
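The complementary action of the two reflexes can be caricatured in a few lines of simulation. In the sketch below every number (the gains, the 0.2 s optokinetic time constant, and the 0.5 Hz head oscillation) is invented for illustration; the point is simply that the VOR cancels most of the head rotation feed-forward, and the OKR then feeds back on whatever retinal slip remains.

import math

def peak_retinal_slip(vor_gain, okr_gain, freq_hz=0.5, duration_s=4.0, dt=0.001):
    # With a stationary world, retinal slip = -(head velocity + eye velocity), in deg/s.
    okr_velocity = 0.0
    worst = 0.0
    for i in range(int(duration_s / dt)):
        head_velocity = 20.0 * math.sin(2 * math.pi * freq_hz * i * dt)
        eye_velocity = -vor_gain * head_velocity + okr_velocity
        slip = -(head_velocity + eye_velocity)
        # OKR: sluggish feedback that drives the eye to null residual slip (0.2 s time constant).
        okr_velocity += (okr_gain * slip - okr_velocity) * dt / 0.2
        worst = max(worst, abs(slip))
    return worst

print(peak_retinal_slip(vor_gain=0.0, okr_gain=0.9))    # OKR alone: substantial residual slip
print(peak_retinal_slip(vor_gain=0.95, okr_gain=0.9))   # both reflexes: slip greatly reduced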

Are other animals like us?


We may ask whether the ‘saccade and fixate’ pattern of eye-movement
behaviour, just outlined, is merely a phylogenetic quirk of the vertebrate
lineage, or whether the same reasons for keeping the image still apply to
vision in all animals with good eyesight. Table 9.1 gives a brief classification
of human eye movements that can serve as a basis for making comparisons
across phyla (an excellent text is provided by Carpenter (1988)).
Figure 9.4 shows recordings of the eyes of a fish and a crab, both of
which are engaged in locomotion that involves some rotation. Taking
the goldfish first, the records show the head rotating relatively smoothly
through about 100º. The record of eye movements relative to the head (R
EYE/HEAD), however, is quite different. There are a number of fast sac-
cadic movements in the same direction as the head movement, and between
these the eyes rotate smoothly in the opposite direction to the head. The
sum of the head-in-space and eye-in-head movements gives the movements
of the eye-in-space, i.e. movements of gaze: these are shown in the top trace.
The result is a series of periods of stationary gaze, with fast saccades that
shift the gaze direction from time to time through angles of 10–30°. This
is the ‘saccade and fixate’ strategy mentioned above in connection with

Table 9.1 Types and roles of human eye movements

FAST (saccades)
1. 'Voluntary' relocation of the direction of gaze
2. Targeting new stimuli in the periphery
3. Fast phases of optokinetic nystagmus (re-centring movements)

SLOW
1. Compensatory movements that stabilize gaze (vestibulo-ocular reflex and optokinetic reflex)
2. Foveal tracking of small targets
3. Vergence movements (tracking in depth)

(Microsaccades, drift, and tremor also occur. These small movements of a few minutes of arc may serve to prevent the image from fading. However, it seems that there is always enough residual image motion from other sources to keep the image 'refreshed'.)

Fig. 9.4 Eye, head, and gaze movements during curvilinear locomotion in a goldfish and a crab. In
both cases gaze is actively stabilized against movements of the body, except during fast (saccadic)
gaze changes. Goldfish record from Easter et al. (1974); crab record from Paul et al. (1990).

our own vision. Perhaps it is no surprise to find that we are rather like
fish; after all we share a common ancestry. The rock crab’s vision, how-
ever, evolved quite independently from that of vertebrates, and crabs have
a quite different design of eye (apposition, see Chapter 7). Nevertheless, the
pattern of head, eye, and gaze movements is remarkably similar to that of
the goldfish. Again, the head (part of the body in a crab) rotates relatively
smoothly, whilst the eyes execute both fast saccades and counter-rotations
(the angular scale is magnified here) and the resulting gaze movements are
fast refixations alternating with stationary periods.
In insects the eyes are physically part of the head and do not move
relative to it. A head movement for a fly is thus an eye movement as
well. This is shown in Fig. 9.5a in which a stalk-eyed fly (chosen because
the stalked eyes make the head movements more easily visible) rotates
through 90° on a glass plate. Notice that the body movement is continu-
ous, but that the head and eyes rotate in a series of saccade-like turns, one
after 120 ms, and another after 380 ms. Between these saccades the head
is actually counter-rotating relative to the body. Even without independ-
ently mobile eyes, the fly is doing exactly the same thing as the fish and
the crab: keeping gaze still as the body rotates. In a remarkable study
Schilstra and van Hateren (1998) attached tiny search coils, which generate
an electric current when they rotate in a magnetic field, to the head and
thorax of blowflies which were then allowed to fly relatively freely. They
found that even in flight the flies make saccadic movements of both head
and body, but because of the counter-rotation movements of the neck the
head rotates faster than the body, thus minimizing the periods of blurred
vision (Fig. 9.5 b). Honeybees make very similar head and body saccades
(Boeddeker et al. 2010).

Fig. 9.5 (a) A stalk-eyed fly turning through 90° on a pane of glass. Notice that the body rotates
continuously and smoothly, but the head makes the turn in two saccadic jumps, one at 120 ms and
the second at 380 ms. Between the saccades the head counter-rotates relative to the body. (Data
from a film by W. Wickler and U. Seibt.) (b) Top: three head and body saccades in the yaw plane
made by a flying blowfly, measured with search coils attached to both head and thorax. The single
saccade below shows that the head first moves against the direction of the thorax, then with it,
and then against it again. The result is that the movement of the head-in-space (Head) only takes
about 10 ms to complete, compared to the thorax movement which takes about 25 ms. Redrawn
from Schilstra and van Hateren (1998).

In another study of free-flight head movements, this time using high-speed video, Zeil et al. (2007) found that, when leaving their nest, wasps
(Cerceris) back away in a series of arcs of increasing radius (Fig. 9.6). This
is achieved by flying sideways with the head redirected back towards the
nest in a series of distinct saccades, and with minimal rotation between
the saccades. The suggestion is that in the stable periods between saccades
the wasp is picking up information about the locations and distances of
landmarks around the nest to facilitate its subsequent return.
Rather like insects, birds also make head saccades, and these are a very
obvious feature of their visual behaviour. Birds do have eye movements,
but their main function seems to be to ‘sharpen up’ the head saccades. As
the head turns the eyes counter-rotate briefly, then flick to the new position
and again counter-rotate until the head completes its saccade. But the eyes
do not seem to make saccades on their own. Head saccades can also occur
in humans. A particularly interesting case arose recently of a woman who
has no eye movements, due to a rare fibrosis of the eye muscles, and yet is
able to read fluently and conduct her life more or less normally (Gilchrist
et al. 1997). In reading she makes rapid head movements that, although

Fig. 9.6 Top: Cerceris is a small ground-nesting wasp. The record shows that
it makes a series of expanding arcs as
it backs away from the nest. The lines
attached to the head show the orientation
of the head axis, and indicate that, like
the flies in Fig. 9.5, the head direction
changes saccadically (arrows), with
intervening periods of flight in which
there is no change in orientation. Times
in seconds. Redrawn and modified from
Zeil et al. (2007). Bottom: Syritta is a
small hoverfly. The figure shows a single
video record of the body axes of a pair of
flies, seen from above. The flight of the
female is essentially saccadic, with long
periods in which the body orientation
does not change, punctuated by rapid
turns. The male, who is shadowing her at a
constant distance of about 10 cm, rotates
smoothly, maintaining her image within
5º of his body axis. Corresponding times
are numbered every 0.4s. Modified from
Collett and Land (1975).

slower than typical eye saccades, serve the same function, and when she is
engaged in everyday tasks these give an extraordinarily bird-like impres-
sion. This reinforces the view that the saccade and fixate strategy is of cru-
cial importance, however it is achieved.

Insect flight behaviour seen as eye movement


For a light insect not attached to the ground there is nothing to prevent it
using body manoeuvres to move its eyes. This is what happens with hov-
erflies (Syrphidae), whose superb mobility makes even neck movements
superfluous. A good example is the small hoverfly Syritta pipiens. Female
flies hover around flowers, feeding on nectar, whilst the males spend much
of their time in stealthy pursuit of the females. The males have an advantage

in that they have an ‘acute zone’ in the front-facing part of the compound
eye, where the resolution is about three times better than anywhere in the
female eye (Chapter 7, Fig. 7.18a). Thus the males can shadow the females
around until they land, whilst remaining effectively out of sight. Figure 9.6
shows an example of this. It is clear that the flight behaviour of the female
(above) and male (below) are not the same. Although the female’s flight is
continuous, her turning is not. She makes rotational saccades from time to
time (e.g. just before 3, just after 5) and between these the body does not
rotate, even though translational flight (non-rotational movements of the
whole body) may occur in any direction. The flight of non-tracking males
is similar. As soon as they begin to track, however, the pattern changes
dramatically. Throughout the 3.6-s period shown in Fig. 9.6 the male points
directly towards the female, tracking smoothly, and keeping her within the
± 5° forward sector containing his acute zone. Notice, too, that he maintains
a roughly constant distance of about 10 cm, which is important if he is to
remain undetected. Interestingly, if the female moves fast he switches to a
saccadic mode of tracking, just as we do. Amongst insects, Syritta shares
an ability to track both smoothly and saccadically with praying mantids
(Rossel 1980), although in the latter the tracking is mainly achieved with
movements of the head, rather than the body.
The examples in the last two sections make it clear that the human ocu-
lomotor system is far from unique in the way it samples the world around
us. Many, perhaps most, animals with good eyesight do something similar,
although this may not always present itself as movements of the eyes in
the head.

Translational saccades: head-bobbing in birds


We have emphasized in the preceding sections that the main function of
the ubiquitous saccade and fixate strategy is to minimize the blur caused
by gaze rotation and that in general no attempt is made to prevent trans-
lational motion (i.e. linear motion through space). One could imagine that
preventing translation might stop an animal from moving at all! There
are situations, however, when it would be useful to prevent movement of
the lateral field of view without impeding locomotion. The best examples
are seen in ground feeding birds, which need a clear field of view to the
side while foraging, in order to recognize small items of food. Birds achieve
this by ‘head-bobbing’ in which the head is thrust forward, and then held
still in space by a backward movement of the neck while the body continues
to move forward underneath (Fig. 9.7). In pigeons the two phases—thrust
and hold—each lasts about 0.1 s during normal walking (Frost 1978). Frost
showed that when pigeons walked on a treadmill, during which there was

Fig. 9.7 (a) Walking demoiselle crane showing stretched and retracted head at beginning and
end of a ‘hold’ phase. Vertical line is fixed in space. (Drawn from photographs in Necker 2007.)
(b) Record of relative positions of head and body of a walking pigeon. (Redrawn from Frost, 1978.)

no background motion, head-bobbing ceased, showing that it is residual image slip that drives the compensatory motion of the neck, much as in
the rotational optokinetic reflex. Not all birds head-bob. Smaller birds often
hop, which also involves stationary and fast-moving phases. Ducks, geese,
and many other water birds do not head-bob, although herons, storks, and
cranes do. Raptors such as eagles, hawks and owls, with more frontally
placed eyes do not head-bob (Necker 2007). Even among ground-feeders
head-bobbing only occurs during walking, and not when they run or fly.
Many other animals, from insects to mammals, adopt a style of move-
ment in which they run in short fast bursts and then freeze, and it could
be argued that the effect of this is functionally similar to head-bobbing:
providing interludes of clear sight between periods when vision is compro-
mised by motion. Here the reason seems to be defence against predators
rather than food seeking, allowing periods of vigilance and relative invis-
ibility between movements in which they are more visible and vulnerable
(McAdam and Kramer 1998).

Why not let the eyes wander? Some consequences of image motion
What are the arguments against allowing continuous image motion, and
in favour of sampling via a series of more or less stationary images? Three
seem the most persuasive:

A. Resolution is lost if motion blurs the image.
B. It is easier to see motion of foreground objects if the retinal image of the background is stationary.
C. It is easier to obtain heading and distance information from the pattern
of motion on the retina that results from locomotion, if the rotational
motion has already been removed.
We will consider each of these in turn.

A. Motion blur
There are good reasons why fast motion must degrade an image, and they
are to do with the rather slow rate at which photoreceptors respond to
changes in intensity. Figure 9.8a shows the response of a locust photorecep-
tor to a brief flash of light. When light-adapted the response takes about
20 ms (milliseconds) to reach its peak, and it has a duration of about 30
ms, if we ignore the end of the tail. By ‘cumulating’ the curve (adding it to
itself but shifted in time) we can work out how large the response would
be to sustained light pulses of different durations. As Fig. 9.8b shows, the
response is very small for short pulses of light (lowest insert), but reaches
95 per cent of its maximum, i.e. the plateau value for a sustained stimulus,
when the pulse is 25 ms long. No further increase is seen for pulse dura-
tions much longer than 30 ms. Thus this receptor needs about 25–30 ms of
light if it is to signal the change in intensity fully.
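The 'cumulating' operation is just a running sum: the response to a pulse is obtained by adding together impulse responses started at every instant the light is on, which is the same as convolving the impulse response with the pulse. The sketch below uses a made-up impulse response, a smooth waveform peaking near 20 ms, as a stand-in for the measured locust data, so the exact percentages it prints are only illustrative.

import math

dt_ms = 1.0
# Stand-in impulse response: smooth, peaking at 20 ms, essentially over by about 40 ms.
impulse = [(t / 20.0) ** 8 * math.exp(8 * (1 - t / 20.0)) for t in range(80)]

def peak_response(pulse_ms):
    # Response to a pulse = sum of impulse responses started at every millisecond the light is on.
    n = int(pulse_ms / dt_ms)
    response = [sum(impulse[i - k] for k in range(n) if 0 <= i - k < len(impulse))
                for i in range(len(impulse) + n)]
    return max(response)

plateau = peak_response(200)   # a very long pulse gives the sustained (plateau) level
for pulse in (5, 10, 20, 30, 40):
    print(pulse, 'ms:', round(100 * peak_response(pulse) / plateau), '% of plateau')
# Short pulses evoke only a fraction of the full response; with this waveform the
# response is close to saturation by about 30 ms.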
Now suppose that the change is not brought about by turning on a sta-
tionary light, but by a moving bright band in the environment, imaged onto
the receptor (Fig. 9.8c). The band we have chosen has the same angular
width as the receptor’s acceptance angle, because this is the narrowest band
that will provide a full signal to the receptor when it is stationary; there is
no degradation due to the receptor’s diameter alone. If the stripe moves so
fast that it illuminates the receptor for only, say, 5 ms, then from Fig. 9.8b
the response it will evoke is only about 30 per cent of the maximum value.
On the other hand if it illuminates the receptor for 30 ms it will produce a
full response. If the receptor’s field of view is 1° across, this means that the
maximum speed the stripe can move, and still generate a full response, is
1° in 30 ms, i.e. 33°s–1. Notice that if the acceptance angle and the stripe had
both been 5° wide instead of 1°, but the response time stayed the same, then
a full response would be generated up to speeds of 5° in 30 ms, i.e. 167°s–1.
In other words, poorly resolving systems with large receptor acceptance
angles (high Δρ) can tolerate higher velocities than better resolving eyes.
The loss of response to images that move faster than the permitted limit is
what we know as motion blur. It shows itself first as a loss of contrast in


Fig. 9.8 The blurring of the image that results from its motion. (a) The small electrical response of
a light-adapted locust photoreceptor to a brief flash of light. It takes about 20 ms to reach a peak
and 40 ms to complete (from Howard et al. 1984). (b) Main graph shows how the response to a
pulse of light increases in amplitude as the pulse lengthens, reaching 95 per cent of its maximum
value when the pulse is 25 ms long. Insets show the responses to pulses of 2, 9, and 20 ms. (c)
A light stripe moving across the field of view of a receptor provides the receptor with a pulse of
light whose duration depends on the stripe’s speed. If this pulse is shorter than 25 ms [see (b)], its
intensity will not be accurately signalled, and the result will be seen as a blurred image.

the highest spatial frequencies in the image (see Land 1999). We can gener-
alize the results in this paragraph into a useful rule of thumb, as follows.
Significant blurring of an image occurs at angular velocities that exceed one recep-
tor acceptance angle per response time. We will refer to this as the ‘blur rule’
in the rest of this chapter.
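In symbols, with Δρ the receptor acceptance angle and Δt its response time (both defined in the list
of symbols at the end of the book), the rule says that a full response is only possible for image
velocities up to roughly Δρ/Δt. For the locust receptor above this is 1°/30 ms ≈ 33°s–1, and with a
5° acceptance angle the same ratio gives about 167°s–1, as calculated earlier.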
In humans the acceptance angle of foveal cones is about 1 arc minute,
and the response time about 20 ms, which implies that blurring will occur
at image speeds of close to 1°s–1 for high spatial frequencies. This fits well
with psychophysical studies showing that when a pattern moves at only
3°s–1 across the retina all high spatial frequency information (greater than
about 8 cycles/deg) is lost. 3°s–1 is not very fast, and this explains why we
need to stabilize vision so tightly, using the vestibulo-ocular and optoki-
netic reflexes. Interestingly, vision also fades when the image is kept very
well stabilized, and so some low-speed image motion is actually desirable
(see the comment on Table 9.1). Very few species have spatial resolution as
good as ours. In particular, insects have much wider receptor acceptance
angles than we do (~1° rather than 1 minute), so they can be much more
tolerant of image motion. Their receptors tend to be somewhat faster too
(response time down to less than 10 ms for a fly in daylight), making the
blur rule limit about two orders of magnitude greater than in humans.
Thus insects such as bees and flies should be able to tolerate angular
velocities of at least 100°s–1 without significant resolution loss. Again, this
conclusion is well supported by both behavioural and electrophysiological
evidence.

B. Movement detection
Many arthropod species do not allow their gaze to drift to anything like
the extent that the blur rule suggests. Hoverflies, for example, do not allow
their bodies to rotate by even a few degrees per second when they are hover-
ing in wait for passing females. Bombyliid flies and some solitary bees have
a similar capacity for remaining rotationally still. Layne et al. (1997) found
that walking crabs stabilize their eyes about 10 times better than they need
to in order to preserve vision from blur, and this is also likely to be true of
many insects (Fig. 9.6).
Hoverflies in particular need to detect small moving objects, usually
against a background of foliage, and it is easy to imagine that this is a par-
ticularly demanding task. To our knowledge there is no theoretical account
of why a stationary eye can detect motion better than a moving one, but
there are some human psychophysical studies that bear on this. Relative
motion between two surfaces is easy to detect when the scene as a whole is
nearly stationary, but detection becomes rapidly more difficult when com-
mon image motion is added, even though the difference in velocity remains
the same. Nakayama (1981) found that there is no impairment of relative
motion detection up to common motion speeds of about 0.3°s–1, but above
this the threshold rises rapidly. This is only one-tenth of the velocity (3°s–1)
at which acuity starts to be lost due to blur. Nakayama points out that
compensatory eye movements greatly reduce common image motion, and
comments:
It is possible that the main selective pressure on the evolution of
eye fixation and stabilization reflexes is not to ensure good visual
acuity (Walls 1962) but rather to ensure the optimal pickup of
motion parallax information (Nakayama 1981, p. 1482).
Keeping the eyes totally still should thus be a good way to ensure maxi-
mum detection of objects that move. Interestingly, this may be even more
effective than one might expect because, in humans at least, a totally
still image fades in a few seconds, and when this has occurred the only
detectable objects are those that move. We cannot, in normal life, keep
our eyes still enough to lose the image of the overall scene, but it may
well be that other animals can. It is an attractive idea that when rabbits
or squirrels hold their heads and eyes still they are able to see movement
but nothing else, and similarly for toads, snakes, or wolf spiders. It sim-
plifies the world enormously if the ‘high-pass filter’ nature of the early
visual process can be used to restrict what can be detected to just those
things that are of vital importance: those that move. There is, however,
an unresolved paradox here in that the very stability of gaze, especially
in an animal like a hoverfly trying to keep still in three dimensions, is
itself dependent on optokinetic reflexes, and so in some sense the animal
is still ‘seeing’. The reflexive aspects of motion vision may operate by
different pathways and with different rules from those concerned with
the detection of behaviourally significant objects, but we have little real
information to go on.

C. Disambiguating the flow-field


The third reason for preventing motion of the image brought about by rota-
tion of head or body concerns the use of information in the retinal motion
pattern (flow-field) generated by body movement. This is of two kinds (see
Gibson 1979). When our eyes rotate, the whole field moves across the retina
in a uniform way (Fig. 9.9b), but when we translate (i.e. move in a straight
line) the pattern of motion is much more interesting (Fig. 9.9a). The point
towards which we are moving is stationary on the retina, and motion radi-
ates from it, making it the ‘focus of expansion’ on the retina. The motion
pattern reaches its maximum velocity to the side, and then, if we could
see it, it contracts again behind us. This basic pattern is modulated by the
distances of objects. When an animal’s eye moves through space, near fea-
tures move faster across the retinal image than distant ones, and locomotion
can thus generate distance information. This was discussed in Chapter 7 in
connection with the design of apposition compound eyes, and the principle
is explained in Fig. 7.13. If the animal has an estimate of its velocity, then
the pattern of angular velocities on its retina converts quite simply into a
map of inverse object distances.
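The conversion can be made explicit with a little geometry (a standard motion-parallax relation, not
a formula given here in the text): for an eye translating at speed v, a stationary object at distance
d and at an angle θ to the direction of travel sweeps across the retina at an angular velocity of
about v sinθ/d, so that, once v is known, each measured retinal velocity gives the inverse distance
of the object responsible for it. Objects straight ahead (θ = 0) produce no image motion, which is
why the heading point is a stationary focus of expansion, and image velocity is greatest to the side
(θ = 90°), as described above.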
However, a serious problem is that animals rarely move in straight lines.
There is always a certain amount of rotation, and as Fig. 9.9c shows this
will corrupt the translational flow-field, making it difficult to interpret. The
focus of expansion is no longer a point but a blurred line that cannot be
used by the animal to determine its heading, and all the retinal vectors are
distorted, making distance judgements harder. An effective cure for this is
simply not to let the eye rotate, by applying gaze stabilizing reflexes (vesti-
bulo-ocular and optokinetic). Then, between saccades at least, the eye will
see an almost undistorted translational flow field. With rotation out of the
way, retinal velocities can be read as distances.

Fig. 9.9 Velocity flow-fields on an animal’s retina resulting from pure translation (a), rotation
(b), and a combination (c). Arrows represent velocity vectors, and the variation in length of the
translational vectors indicates the presence of objects at different distances. In the combined flow-
field lengths and directions of vectors are distorted, and the stationary pole (dot) which gives the
animal’s heading direction has become an indeterminate line.

An obvious question is whether or not there is any evidence that animals


do actually use flow-field information to measure distance. One imagines
that they must. Although animals have a great many ways of measuring
the distances of objects (see Collett and Harkness 1982), few are available
to a moving insect, which is essentially monocular and has a fixed focus
eye. Simply not bumping into things requires a fast and accurate means of
determining the three-dimensional layout of objects in the field ahead, and
the translational flow-field is ideal for providing such information. In fact,
there are surprisingly few studies showing that flow-field information is
used for distance judgement, but there is one very convincing demonstra-
tion in honey bees.
Lehrer et al. (1988) trained bees to fly over an artificial ‘meadow’ with
black flowers of various sizes and of different heights. The bees were con-
strained to fly above the flowers at a constant height from the ground, which
meant that the only available clue to the relative heights of the flowers came
from retinal motion, and not from apparent size. The bees learned to asso-
ciate a food reward with either high, low, or intermediate height flowers,
showing that they had learned distances from the image velocity pattern
generated during flight. Further evidence that this involved motion vision
came from the finding that the system the bees were using was colour-blind
and sensitive only to green contrast. The motion-detecting system in bees is
known to be sensitive only to green contrasts, unlike the trichromatic sys-
tem usually involved in pattern recognition. Thus bees make explicit use of
the distance information contained in the locomotor flow-field.
In the bee’s case the ability to measure distance in this way comes as
a useful by-product of normal locomotion, but other animals make much
more deliberate parallax-generating movements. Before they jump, locusts
frequently make side-to-side movements of their heads (peering). These
movements are pure lateral translations, without rotation, which makes them
ideal for registering the distances of objects by their image motion. Sobel
(1990) found that moving the target as the locust peered generated artificial
parallax, and that the locust’s jump length was related to the image
velocity, and no longer to the actual distance. It is likely that this method of
estimating distance may be quite a common strategy. Young mantids make very
similar peering movements, often with a remarkable display of callisthenics
(Fig. 9.10), which enable them to choose the nearest vertical object to jump
to. During their approach to a feeder bees also make side to side peering
movements at a frequency of about 7 Hz, brought about by roll movements
of the thorax. As in locusts and mantids the head is stabilized visually by
counteracting lateral image motion (Boeddeker and Hemmi 2010). Amongst
vertebrates gerbils make range-finding movements, but in this case using
vertical head-bobs (Goodale et al. 1990). A prerequisite for this behaviour is
that the animal has the ability to keep track of particular edges in a clut-
tered environment, which implies quite sophisticated visual processing.
Of the three mechanisms discussed in this section, it seems that avoid-
ance of blur (A) is the most basic because it applies to all animals with
good vision, and the greater the acuity the better the stabilization must
be. Detecting relative motion (B) is important for animals that need to see
small moving objects, but is perhaps not a general requirement, and the use
of the translational flow field for distance measurement (C) can perhaps be
best thought of as a useful consequence of having the machinery for dealing
with the first two. Whatever the mixture of reasons, however, one thing
is clear: no animal should make smooth rotational eye movements, except
in the special (and not very common) circumstance of tracking a moving
target. Nevertheless, there are a few animals that break this taboo, scanning
their eyes across the stationary environment. As we shall see, however, it
seems that they all succeed in moving at speeds that are just below the
maximum permitted by the ‘blur rule’, given earlier.

Fig. 9.10 Side-to-side scanning movements of an early instar praying mantis (Sphodromantis
lineola). Note that the body moves in such a way that the head travels along a line that is almost
exactly perpendicular to the animal’s forward direction of view, and that the head does not rotate
relative to the surroundings during a scan.

Exceptions: rotational scanning by one-dimensional retinae
In this section we will examine eyes that actively rotate in order to acquire
information. Eyes of this kind are uncommon, but they do occur in several
animal groups. The examples given here represent a series of increasing
complexity. In all cases the relevant retinae are long and narrow, and oper-
ate more in the manner of industrial line-scan cameras than conventional
cameras with two-dimensional images.

Prey detection by the sea snail Oxygyrus


The carnivorous planktonic sea-snail Oxygyrus is perhaps the most straight-
forward example of a scanning eye (Fig. 9.11). It has been known for a cen-
tury that this group of molluscs have peculiarly narrow retinae, but how
an eye with such a reduced field of view could be of use to the animal
has only recently become apparent. Oxygyrus has a lens eye not unlike a
fish eye, except that the retina is only three receptors wide by about 410
receptors long, and covers a field of about 3° by 180°. The one-dimensional
structure of the retina would make very little sense unless it moved in
some way, and indeed the eyes do scan (Fig. 9.11c).
The eyes move so that the retina sweeps through a 90° arc at right angles
to its long dimension. The scanning pattern is a sawtooth, and the slower
upward component has a velocity of 80°s−1. The eye scans through the dark
field below the animal, and the presumption is that it is searching for food
particles glinting against the dark of the abyss.

Colour scanning in the mantis shrimp Odontodactylus


The mantis shrimps (Crustacea: Stomatopoda) are quite large crustaceans,
only very distantly related to the more familiar decapod shrimps. Like their
insect namesakes they are ambush predators, with a legendary ability to
destroy their prey with smashing or spearing appendages (Plate 4). Their
eyes are basically compound eyes of the ordinary apposition type, which
[Fig. 9.11, panels (a)–(d); panel (c) plots eye angle (lateral to ventral, a 90° range) against time
(0–10 seconds).]

Fig. 9.11 Scanning with a linear receptor array. (a) Photograph of the sea-snail Oxygyrus shown
with its eye pointing downwards (the snail does swim ‘inverted’ like this). (b) The appearance
of the eye when directed laterally, a fraction of a second after (a). (c) The time course of 5 scan
cycles. The eyes move downwards very fast, and more slowly upwards. (d) Diagram showing the
visual field of the eye during a scanning movement, and its probable role in detecting plankton.
The retina is about 400 receptors long and 3 wide, giving a linear field of view which scans slowly
upwards. Mainly from Land (1982).

provide an erect two-dimensional image. However, stretching more or less
horizontally across each eye is a band of enlarged facets, six rows wide (Fig.
9.12a and b). This mid-band, which has a field of view only a few degrees
in width, contains the animals’ extraordinary colour vision system. This
consists of four of the mid-band rows (the other two subserve polarization
vision; see Chapter 2) and in each row the receptors are in three tiers. Each
of these 12 tiers contains a different visual pigment, giving the animal the
potential for dodeca-chromatic colour vision, with eight pigments covering the
visible spectrum, and a further four in the ultraviolet (Marshall et al. 1999).
In adopting this impressive system, however, the mantis shrimps have set
their eye movement system a daunting task. The outer parts of the eye oper-
ate as normal compound eyes—and are subject to the kinds of image stabil-
ity considerations discussed earlier. The mid-band, however, has to move
or it will not be able to register the colour of objects in the environment
outside a very narrow strip. The result of this visual schizophrenia is a rep-
[Fig. 9.12, panels (a)–(d): the rotation axes V, H, and T are marked on the eye in (a); panel (c)
shows vertical, horizontal, and torsional traces with 30° scale bars and a 1 s time bar.]

Fig. 9.12 An eye that scans for colour and polarization. (a) The mantis shrimp Odontodactylus
has a band of ommatidia containing its colour vision system (black line) running across the centre
of each apposition compound eye (see also Plate 4). The vertical field of view of the band is very
narrow—a few degrees. (b) Close up photograph of an eye showing the six-row midband, and
the three pseudopupils which indicate that there are three separate regions of the eye directed
towards the camera. (c) Record of a series of small scanning movements. Notice that the eye
rotates about all three axes [V, H, T on (a)], but by differing amounts. (d) Four photographs
showing the eyes in a variety of positions. The eye movements are nearly always independent. The
top two photographs show one or other eye with the ‘acute zone’ directed towards the camera, as
shown by the wide pseudopupils (see Chapter 7). Mainly from Land et al. (1990).

ertoire of eye movements unlike anything else in the animal kingdom (Land
et al. 1990). In addition to ‘normal’ eye movements (fast saccades, tracking
and optokinetic stabilizing movements) there is a special class of frequent,
small (c. 10°) and relatively slow (40°s−1) movements (Fig. 9.12c), which give
the animal a strange inquisitive appearance, perhaps because they resem-
ble human saccades in their frequency of occurrence. They are, however,
not saccades, which are much faster. These movements are typically at right
angles to the band, and the only plausible explanation is that they are the
scanning movements the animal uses to pick out relevant coloured or polar-
ized features in the surroundings. See Chapter 2.

Pattern recognition by jumping spiders


The remarkable eyes of jumping spiders were described in Chapter 5
(Figs. 5.16–5.18, Plate 4). The secondary eyes (antero- and postero-lateral
pairs) and large forward-pointing principal eyes have different roles in
behaviour. The secondary eyes are fixed to the carapace and act only as
motion detectors. If something moves in the surroundings these eyes ini-
tiate a turn, which results in the target being acquired by the principal
eyes (Fig. 9.13). These eyes have narrow retinae shaped like boomerangs
(Fig. 5.18), subtending about 20° vertically by 1° horizontally in the central
region, which is only about six receptor rows wide. The high resolution
was discussed in Chapter 5. Of interest here is the fact that the retinae of
the principal eyes are moveable (the lenses themselves do not move). They
can move horizontally and vertically by as much as 50°, and they can also
rotate about the optic axis (torsion) by a similar amount (see Land 1985).
When presented with a novel target, the eyes scan it in a stereotyped way
moving slowly from side-to-side at speeds between 3 and 10°s−1, and rotat-
ing through ±25° as they do so (Fig. 9.13). We actually know what they are
looking for: legs! In the 1950s Oscar Drees showed that jumping spiders
are relatively indifferent to the appearance of potential prey so long as
it moves, but males are quite particular in what they regard as potential
mates. Drawings consisting of a central dot with leg-like markings on the
sides, however, will elicit courtship displays. Whatever its other functions
may be, scanning in these spiders really seems to be concerned with fea-
ture extraction, the procedure itself apparently designed to detect the pres-
ence and orientation of linear structures in the target.

Scanning without eye movements: the larvae of water beetles


The larvae of dytiscid diving beetles have six simple single-chambered
ocelli on each side of the head. In some species, such as Thermonectus
marmoratus, two of these are directed forwards and slightly upwards, and
are greatly enlarged (Fig. 9.14a and b). The retinas of these eyes are linear
strips, with horizontal fields of view of 30º (upper eye E1) and 50º (lower
eye E2), and vertical fields of as little as 2º (Buschbeck et al. 2007). Both
eyes point upwards at an angle of about 35º to the longitudinal plane of the
head, and appear to have overlapping visual fields. Each eye has a complex
double retina, but it is the deeper proximal retina that has the higher reso-
lution: in E1 there are about 180 rhabdoms in each of two lateral rows, with
a horizontal resolution of about 1º (Mandapaka et al. 2006). Remarkably, it
seems that both retinas receive a focussed image: the lenses of E1 and E2
are bifocal with two focal planes separated by about 100 μm, roughly the
distance between the centres of the retinal layers (Stowasser et al. 2010).
The two images are also slightly displaced vertically relative to each other,
which should mean that in-focus and out of focus images degrade each
other less.
Fig. 9.13 The jumping spider Phidippus, showing the large movable principal eyes, and smaller
fixed antero-lateral eyes (see also Plate 4 and Fig. 5.19). Below is a diagram and record of the
movements of the boomerang-shaped retinae of the two principal eyes while scanning a novel
target. These movements are conjugate, and consist of a stereotyped pattern of horizontal
oscillations and slower torsional rotations. This scanning pattern apparently allows the narrow
retinae to determine the angular pattern of edges in the target, and thus enables the spider to
distinguish other jumping spiders from potential prey. From Land (1969). [The record shows
horizontal movement, torsional movement, and the stimulus; scale bars 10˚ (horizontal), 50˚
(torsion), and 10 s.]

Such eyes make no sense unless they scan vertically, but no eye mus-
cles have been found, and no eye movements have been seen. However,
during pursuit of prey the whole head moves vertically, pivoting around
the neck and the thorax-abdomen junction (Fig. 9.14 c and d). This pro-
vides a vertical scanning motion for the eyes with an average amplitude
of 23º, but occasionally there are scans of up to 50º. The upward scans are
faster (average 74ºs−1) than downward scans (49ºs−1). Typically the animal
will approach a prey such as a mosquito larva, scanning across its entire
vertical extent in a fairly unsystematic way, before striking from a distance
of a few millimetres. There remain many intriguing questions about this
behaviour. Why are there two eyes on each side with similar fields of view?
Do the eyes on either side cooperate to measure distance? What is the func-
tion of the double retina? What do the other eight eyes do? A comparative
study of other species may answer some of these questions.

Scanning eyes: conclusions


In the first of these examples we have seen a one-dimensional retina used as
a simple detector, rather like the scan line on a radar set. The second two are

Fig. 9.14 Scanning by the water beetle Thermonectus marmoratus. (a) Head of a larva showing
the two large ocelli on each side. The head is about 3 mm long. Photo from Stowasser et al.
(2010), courtesy of Elke Buschbeck. (b) Vertical and horizontal sections of eye 1 showing fields of
view. The eye has a total length of about 0.6 mm. (c) Larva scanning by pivoting around the neck
and thorax-abdomen joint. (d) Record of scanning behaviour during the approach to a vertical
mosquito larva. (b)–(d) redrawn from Buschbeck et al. (2007).

more interesting, however, because they combine more or less conventional
two-dimensional eyes with special purpose line scanners, determining local
colour in the mantis shrimp, and aspects of pattern geometry in the jumping
spider. One cannot help feeling that the mantis shrimp solution, with both
types of eye incorporated in the same structure, is an inherently clumsy
one, because it forces the brain to time-share between the two systems. The
eyes of the mysid shrimp Dioptromysis (Chapter 8, Fig. 8.10c), where highly
‘foveate’ vision alternates with wide-field vision, pose similar problems. In
contrast, the dual system of fixed and moveable eyes of jumping spiders,
with one set acting as target finder and the other as analyser, does seem to
have much to commend it.
An obvious question raised by all these eyes is the extent of blurring that
the scanning movements cause. Do they violate the blur rule limit, or not?
It looks as though they are all just slow enough to stay within the rule. We
do not know the response time of the receptors, but we know from other
animals that 20 ms is a typical value for light-adapted eyes (Fig. 9.8a).

Table 9.2 Scanning eyes: speed, receptor acceptance angle and estimated response time

Animal                          Scan speed (s), °s−1   Acceptance angle (Δρ), °   Response time (Δt), ms
Labidocera (copepod)                  219                     3.5                        16
Oxygyrus (mollusc)                     80                     1.1                        15
Thermonectus (diving beetle)           49                     0.9                        18
Odontodactylus (stomatopod)            40                     1.0                        25
Metaphidippus (spider)                  6.2                    0.15                       24

If we make the assumption that the eyes are working at the blur limit,
we can calculate a value for the response time (Δt) from the scan rate (s)
and the receptor subtense (Δρ), both of which are known (Δt = Δρ/s). This
has been done in Table 9.2 for the four species mentioned above, plus
the copepod Labidocera which also has a scanning eye (see Fig. 4.5c). The
table shows that the calculated values for Δt are all in the range 15–25
ms, i.e. close to the expected value, which means that the scanning rates
are all close to the blur limit, but do not exceed it. Thus the animals are
scanning as fast as they can without losing spatial information, which
is presumably the optimal way to scan. The other interesting point is
the inverse relationship, predicted by the blur rule, between resolution
(Δρ) and scanning speed. Labidocera, with poor eyesight, scans fastest,
but the jumping spider Metaphidippus, with excellent eyesight, scans very
slowly.
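As a check, the arithmetic behind Table 9.2 is easily reproduced. The sketch below (in Python,
purely illustrative) applies Δt = Δρ/s to the scan speeds and acceptance angles listed in the table;
small differences from the tabulated response times simply reflect rounding of the source values.

```python
# Estimated response time Delta-t = Delta-rho / s for the scanning eyes of Table 9.2
# (scan speed s in degrees per second, acceptance angle Delta-rho in degrees).
scanners = {
    "Labidocera (copepod)":         (219.0, 3.50),
    "Oxygyrus (mollusc)":           (80.0,  1.10),
    "Thermonectus (diving beetle)": (49.0,  0.90),
    "Odontodactylus (stomatopod)":  (40.0,  1.00),
    "Metaphidippus (spider)":       (6.2,   0.15),
}

for animal, (scan_speed, acceptance_angle) in scanners.items():
    response_time_ms = 1000.0 * acceptance_angle / scan_speed   # Delta-t in milliseconds
    print(f"{animal:<30} {response_time_ms:5.1f} ms")
```

Run as written, the calculated values all fall between about 14 and 25 ms, in line with the 15–25 ms
range quoted above.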

Summary
1. Animals with limited spatial vision use a variety of strategies (kineses
and taxes) to navigate towards appropriate light environments.
2. Nearly all animals with good vision have a repertoire of eye movements.
The majority show a pattern of stable fixations with fast saccades that
shift the direction of gaze. These movements may be made by the eyes
themselves, or the head, or in some insects the whole body.
3. The main reason for keeping gaze still during fixations is the need to
avoid the blur that results from the long response time of the photore-
ceptors. Blur begins to degrade the image at a retinal velocity of about 1
receptor acceptance angle per response time.
4. Some insects (e.g. hoverflies) stabilize their gaze much more rigidly than
this rule implies, and it is suggested that the need to see the motion of
small objects against a background imposes even more stringent condi-
tions on image motion.
5. A third reason for not allowing rotational image motion is to prevent
contamination of the translational flow-field, by which a moving animal
can judge its heading and the distances of objects.
6. Some animals do let their eyes rotate smoothly, and these include some
heteropod molluscs, copepods, mantis shrimps, jumping spiders, and
water beetle larvae, all of which have narrow linear retinas which scan
across the surroundings. None rotates so fast that they incur motion
blur.
Principal symbols used in the text

A aperture diameter of a superposition eye


C contrast
D diameter of lens or ommatidial facet
d diameter of a photoreceptor or rhabdom
Δϕ inter-receptor or inter-ommatidial angle
Δρ acceptance angle (object space) of a photoreceptor or rhabdom
f focal length (posterior nodal distance)
F-number f/D
I, O sizes of image and object
i1, i2 angles of incidence, refraction
k absorption coefficient (of photopigment in a receptor)
L length of photoreceptor
λ wavelength of light
n refractive index or number of photons
n1, n2 refractive indices
ν frequency
νco cut-off frequency of the optical system (usually D/λ)
νs sampling frequency of receptor mosaic (usually 1/2Δϕ)
P power of an optical system (1/f )
r radius of curvature or reflectance of a surface
S sensitivity of an eye (Eqn 3.6)
s separation of the centres of adjacent receptors
U, V distances of object and image from nodal point
u, v distances of object and image from principal plane (Fig. 5.3)

References

General reading
We include in this section a number of the most important books in the
history of the comparative study of animal eyes from the late nineteenth
century to the present. Others can be found in the ‘Further reading’ linked
to specific chapters. In the present section the books are given in chrono-
logical order.
Grenacher, H. (1879). Untersuchungen über das Sehorgan der Arthropoden,
insbesondere der Spinnen, Insecten und Crustaceen. Vanderhoeck und
Ruprecht, Göttingen.
[Impressively accurate account of the microanatomy of arthropod eyes.]
Exner, S. (1891). The physiology of the compound eyes of insects and crustaceans.
English edition (1988) translated and annotated by R.C. Hardie. Springer,
Berlin.
[Classic account of compound eye function.]
Hesse, R. (1908). Das Sehen der niederen Tiere. Fischer, Jena.
[The culmination of many years of excellent anatomical studies of a wide
range of eyes.]
Walls, G.L. (1942). The vertebrate eye and its adaptive radiation. Cranbrook
Institute, Bloomington Hills. Reprinted (1967) Hafner, New York.
[Unequalled book on the eyes of all vertebrates.]
Rochon-Duvigneaud, A. (1943). Les yeux et la vision des vertébrés. Masson et Cie,
Paris.
[Comprehensive account of vertebrate eye anatomy and histology.]
Duke-Elder, S. (1958). The eye in evolution. Henry Kimpton, London.

[Comprehensive, beautifully illustrated account of eyes of all kinds.
Sourcebook for the older literature.]
Wolken, J.J. (1971). Invertebrate photoreceptors: a comparative analysis. Academic
Press, New York.
[Account of invertebrate eye structure at the electron microscope level.
Complemented by the fine chapter by R.M. Eakin (1972). Structure of
invertebrate photoreceptors. In: Handbook of sensory physiology, Vol. VII/1
(ed. Dartnall, H.J.A.) pp. 625–84. Springer, Berlin.]
Recent compilations with a wide coverage are:
Sinclair, S. (1985). How animals see. Croom Helm, London.
[Notable for its impressive photographs of eyes, less so the text.]
Wolken, J.J. (1995). Light detectors, photoreceptors, and imaging systems in nature.
Oxford University Press, Oxford.
[Wide-ranging book covering subjects from phototaxis to bio-mimetic engi-
neering].
Ings, S. (2007). The eye: a natural history. Bloomsbury, London.
[Written for the layman, this includes a history of ideas about the human
eye, as well as its evolution, structure and function.]

Further reading for each chapter


The material in this section is intended to complement and extend the mate-
rial in the chapter itself.

Chapter 1
Conway-Morris, S. (1998). The crucible of creation. Oxford University Press,
Oxford.
[Account of the Burgess Shale fauna, by a geologist who studied it.]
Eldredge, G., and Eldredge N. (eds.) (2008). The evolution of eyes. Evolution:
Education and Outreach 1, 351–559.
[Series of papers on evolution in different phyla by knowledgeable
authors.]
Nilsson, D.E., and Arendt, D. (2008). Eye evolution: the blurry beginning.
Current Biology 18, R1096–8.
[Brief article discussing early metazoan eye origins.]
Nilsson, D.-E. (2009). The evolution of eyes and visually guided behav-
iour. Philosophical Transactions of the Royal Society of London B 364,
2833–47.
[General account of eye evolution, and an introduction to the evolution of
visually guided behaviour.]

Chapter 2
Hecht, E. (2001). Optics (4th edn). Addison Wesley, Reading, MA.
[A good all-purpose optics text.]
Horváth, G., and Varjú, D. (2003). Polarized light in animal vision: polarization
patterns in nature. Springer, Berlin.
Kelber, A., Osorio, D. (2010). From spectral information to animal colour
vision: experiments and concepts. Proceedings of the Royal Society of London
B 277, 1617–25.
Lythgoe, J.N. (1979). The ecology of vision. Clarendon Press, Oxford.
[Aspects of vision in different environments.]
Rodieck, R.W. (1998). The first steps in seeing. Sinauer, Sunderland, MA.
[Particularly good account of the importance of photons for vision.]
Valberg, A. (2005). Light, vision and color. John Wiley, Chichester.
Wehner, R., and Labhart, T. (2006). Polarisation vision. In: Invertebrate vision
(eds. Warrant E.J., Nilsson, D.-E.) pp. 291–348. Cambridge University
Press, Cambridge.

Chapter 3
Charman, W.N. (1991). The vertebrate dioptric apparatus. In: Vision and visual
dysfunction, Vol. 2 (eds. Cronly-Dillon, J.R., and Gregory, R.L.) pp. 82–117.
Macmillan, Basingstoke.
[Concise account of the optics of vertebrate eyes.]
Pirenne, M.H. (1967). Vision and the eye. Chapman & Hall, London.
[Classic book by one of the discoverers of the photon limit in vision.]
Snyder, A.W. (1979). Physics of vision in compound eyes. In: Handbook of sen-
sory physiology, Vol. VII/6A (ed. Autrum, H.) pp. 225–313. Springer, Berlin.
[Thorough account of compound eye resolution and sensitivity.]
Warrant, E.J. (1999). Seeing better at night: life style, eye design and the
optimal strategy of spatial and temporal summation. Vision Research 39,
1611–30.
[Good discussion of the adaptations of eyes to dim conditions.]
Warrant, E.J., and McIntyre, P.D. (1993). Arthropod eye design and the physi-
cal limits to spatial resolving power. Progress in Neurobiology 40, 413–61.
[The resolving power of compound eyes.]

Chapter 4
Messenger, J.B. (1991). Photoreception and vision in molluscs. In: Vision and
visual dysfunction, Vol. 2 (eds. Cronly-Dillon, J.R.E., and Gregory, R.L.),
pp. 364–97. Macmillan, Basingstoke.

Nicol, J.A.C. (1989). The eyes of fishes. Oxford University Press, Oxford.
Walls, G.L. (1967). The vertebrate eye and its adaptive radiation, Haffner, New
York. (Reprint of 1942 edition published by the Cranbrook Institute of
Science, Bloomington Hills, MI.)
[The classic reference book on vision in vertebrates.]

Chapter 5
Charman, W.N. (1991). [As Chapter 3: compact but wide-ranging account of
vertebrate optics.]
Hughes, A. (1977). The topography of vision in mammals of contrasting life
style: comparative optics and retinal organization. In: Handbook of sensory
physiology, Vol. VII/5 (ed. Crescitelli, F.), pp. 613–756. Springer, Berlin.
Oyster, C. (2000). The human eye. Sinauer, Sunderland, MA.
[A good modern text on the human eye, with a comparative introduction.]
Walls, G.L. (1967). [As Chapter 4.]

Chapter 6
Denton, E.J. (1970). On the organization of reflecting surfaces in some
marine animals. Philosophical Transactions of the Royal Society of London B
258, 285–313.
Fox, H.M., and Vevers, G. (1960). The nature of animal colours. Sidgwick and
Jackson, London.
[Old, but still very useful.]
Herring, P.J. (1994). Reflective systems in aquatic animals. Comparative
Biochemistry & Physiology 109A, 513–46.
[Modern account of structure and function.]
Land, M.F. (2000). Eyes with mirror optics. Journal of Optics A: Pure & Applied
Optics 2, R44–R50.
[Review.]
Parker, A.R. (1998). The diversity and implications of animal structural col-
ours. Journal of Experimental Biology 201, 2343–7.

Chapters 7 and 8
Exner, S. (1891). The physiology of the compound eyes of insects and crustaceans.
English edition (1988) translated and annotated by R.C. Hardie. Springer,
Berlin.
[Classic account of compound eye function.]
Land, M.F. (1999). Compound eye structure: matching eye to environment.
In: Adaptive mechanisms in the ecology of vision (eds. Archer, S.N., Djamgoz,
M.B.A, Loew, E.R., Partridge, J.C., and Vallerga, S.), pp. 51–71. Kluwer,
Dordrecht.
[An ecological account of compound eye vision.]
Snyder, A.W. (1979). Physics of vision in compound eyes. In: Handbook of
sensory physiology. Vol. VII/6A (ed. Autrum, H.), pp. 225–313. Springer,
Berlin.
[Tough, but thorough.]
Stavenga, D.G., and Hardie, R.C. (eds.) (1989). Facets of vision. Springer, Berlin.
[Articles by most of the authors active in the field at the time.]
Warrant, E.J. (2006). Invertebrate vision in dim light. In: Invertebrate vision
(eds. Warrant E.J., and Nilsson, D.-E.), pp. 83–126. Cambridge University
Press, Cambridge.
[Discussion of factors that increase sensitivity, mainly in compound eyes.]
Wehner, R. (1981). Spatial vision in arthropods. In: Handbook of sensory physi-
ology, Vol. VII/6C (ed. Autrum, H.), pp. 287–616. Springer, Berlin.
[Encyclopaedic account of behaviour and its relation to visual function.]

Chapter 9
Carpenter, R.H.S. (1988). Movements of the eyes (2nd edn). Pion,
London.
[Excellent readable textbook on human eye movements.]
Fraenkel, G.S., and Gunn, D.L. (1961). The orientation of animals. Dover, New
York.
[Reprint of the 1940 account of early work on animal orientation mecha-
nisms.]
Land, M.F. (1999). Motion and vision: why animals move their eyes. Journal
of Comparative Physiology A 185, 341–52.
[Recent review of eye movements across the animal kingdom.]
Land, M.F., and Tatler, B.W. (2009). Looking and acting. Oxford University
Press, Oxford.
[Account of the roles of vision and eye movements in natural human behav-
iour.]
Srinivasan, M.V., and Venkatesh, S. (eds.) (1997). From living eyes to seeing
machines. Oxford University Press, Oxford.
[Contains a number of useful articles on ‘active’ vision.]
Walls, G.L. (1962). The evolutionary history of eye movements. Vision
Research 2, 69–80.
[Early wisdom on the functions of eye movements in animals.]
Yarbus, A. (1967). Movements of the eyes. Plenum Press, New York.
[Classic book on how we use our eyes.]

References
This section contains books and papers quoted in the text. These have been
selected from the vast potential pool of literature as being either seminal
papers (with a bias to the more recent), useful reviews, or papers from
which our figures were derived (as cited in the figure captions).
Aizenberg, J., Tkachenko, A., Weiner, S., Addadi, L., Hendler, G. (2001).
Calcitic microlenses as part of a photoreceptor system in brittlestars.
Nature 412, 819–22.
Arendt, D., Tessmar-Raible, K., Snyman, H., Dorresteijn, A.W., and Wittbrodt,
J. (2004). Ciliary photoreceptors with a vertebrate-type opsin in an inver-
tebrate brain. Science 306, 869–71.
Autrum, H. (1981). Light and dark adaptation in invertebrates. In: Handbook
of sensory physiology, Vol. VII/6C (ed. Autrum, H.), pp. 1–91. Springer,
Berlin.
Baccetti, B., and Bedini, C. (1964). Research on the structure and physiology
of the eyes of a lycosid spider I. Microscopic and ultramicroscopic struc-
ture. Archives Italiennes de Biologie 102, 97–122.
Bernard, G.D., and Miller, W.H. (1968). Interference filters in the corneas of
Diptera. Investigative Ophthalmology 7, 416–34.
Berry, R.P., Stange, G., and Warrant, E.J. (2007). Form vision in the insect dor-
sal ocelli: and anatomical and optical analysis of the dragonfly median
ocellus. Vision Research 47, 1394–409.
Blest, A.D. and Land, M.F. (1977). The physiological optics of Dinopis sub-
rufus L. Koch: a fish lens in a spider. Proceedings of the Royal Society of
London B 196, 198–222.
Blest, A.D. (1985). The fine structure of spider photoreceptors in relation
to function. In: Neurobiology of arachnids (ed. Barth, F.G.), pp. 79–102.
Springer, Berlin.
Bobkova, M.V., Gál, J., Zhukov, I.P., Shepeleva, V.V., and Meyer-Rochow, V.B.
(2004). Variations in the retinal designs of pulmonate snails (Mollusca,
Gastropoda): squaring phylogenetic backgrounds and ecophysiological
needs (I). Invertebrate Zoology 123, 101–15.
Boeddeker, N., and Hemmi, J.M. (2010). Visual gaze control during peering
manoeuvres in honeybees. Proceedings of the Royal Society of London B 277,
1209–17.
Boeddeker, N., Dittmar, L., Stürzl, W., and Egelhaaf M. (2010). The fine
structure of honeybee head and body yaw movements in a homing task.
Proceedings of the Royal Society of London B 277, 1899–906.
Bowmaker, J.K. (2008). Evolution of vertebrate visual pigments. Vision
Research 48, 2022–41.
Brady, P., and Cummings, M. (2010). Differential response to circularly polar-
ized light by the jewel scarab beetle Chrysina gloriosa. American Naturalist
175, 614–20.
Briggs, D.E.G. (1991). Extraordinary fossils. American Scientist, 79, 130–41.
Brocco, S.L., and Cloney, R.A. (1980). Reflector cells in the skin of Octopus
dofleini. Cell & Tissue Research 205, 167–86.
Bryceson, K.P., and McIntyre, P. (1983). Image quality and acceptance angle in
a reflecting superposition eye. Journal of Comparative Physiology 151, 367–80.
Burns, M.E., Lamb, T.D. (2004). Visual transduction by rod and cone pho-
toreceptors. In: The visual neurosciences (eds. Chalupa, L.M., and Werner,
J.S.), pp. 215–33. MIT Press, Cambridge, MA.
Buschbeck, E., Ehmer, B., and Hoy, R. (1999). Chunk versus point sampling:
visual imaging in a small insect. Science 286, 1178–80.
Buschbeck, E., Sbita, S.J., and Morgan, R.C. (2007). Scanning behavior by lar-
vae of the predaceous diving beetle, Thermonectus marmoratus (Coleoptera:
Dytiscidae) enlarges visual field prior to prey capture. Journal of
Comparative Physiology A 193, 973–82.
Carpenter, R.H.S. (1988). Movements of the eyes (2nd edn.). Pion, London.
Carthy, J.D. (1958). An introduction to the behaviour of invertebrates. George
Allan & Unwin, London.
Chamberlain, S.C. (2000). Vision in hydrothermal vent shrimp. Philosophical
Transactions of the Royal Society of London 355, 1151–4.
Charman, W.N. (1991). The vertebrate dioptric apparatus. In: Vision and visual
dysfunction, Vol. 2. (eds. Cronly-Dillon, J.R., and Gregory, R.L.), pp. 82–117.
Macmillan, Basingstoke.
Chaudhuri, A., Hallett, P.E., and Parker, J.A. (1983). Aspheric curvatures,
refractive indices and chromatic aberration for the rat eye. Vision Research
23, 1351–64.
Chittka, L. (1996). Does bee color vision predate the evolution of flower
color? Naturwissenschaften 83, 136–8.
Chiou, T-H., Kleinlogel, S., Cronin, T., Caldwell, R., Loeffler, B., Siddiqi, A.,
Goldizen, A., and Marshall, J. (2008). Circular polarization vision in a
stomatopod crustacean. Current Biology 18, 429–34.
Collett, T.S., and Land, M.F. (1975). Visual control of flight behaviour in the
hoverfly Syritta pipiens L. Journal of Comparative Physiology 99, 1–66.
Collett, T.S., and Harkness, L. (1982). Depth vision in animals. In: Analysis
of visual behavior (eds. Ingle, D.J., Goodale, M.A., and Mansfield, R.J.),
pp. 111–76. MIT Press, Cambridge, MA.
Collett, T.S., and Lehrer, M. (1993). Looking and learning: a spatial pattern
in the orientation flight of the wasp Vespula vulgaris. Proceedings of the
Royal Society of London B 252, 129–34.
Collin, S.P., and Pettigrew, J.D. (1988). Retinal topography in reef teleosts.
I & II. Brain Behavior & Evolution 31, 269–95.
Collin, S.P., Hoskins, R.V., and Partridge, J.C. (1998). Seven retinal specializa-
tions in the tubular eye of the pearleye, Scopelarchus michaelsarsi: a case
study in visual optimization. Brain Behavior & Evolution 51, 291–314.
Conway-Morris, S. (1998). The crucible of creation. Oxford University Press,
Oxford.
Conway-Morris, S., and Whitington, H.B. (1985). Fossils of the Burgess Shale.
A national treasure in Yoho National Park, British Columbia. Geological
Survey of Canada, Miscellaneous Reports, 43, 1–31.
Cronin, T.W., and Marshall, J. (2004). The unique visual world of mantis
shrimps. In: Complex worlds from simpler nervous systems (ed. Prete, F.R.),
pp. 239–68. MIT Press, Cambridge MA.
Dacke, M., Nilsson, D.-E., Warrant, E.J., Blest, A.D., Land, M.F., and O’Carroll,
D.C. (1999). A new compass organ in spiders, using built-in polarizers.
Nature 401, 470–2.
Dahmen, H. (1991). Eye specialization in water striders: an adaptation to life
in a flat world. Journal of Comparative Physiology A 169, 623–32.
Denton, E.J. (1970). On the organization of reflecting surfaces in some
marine animals. Philosophical Transactions of the Royal Society of London B
258, 285–313.
Denton, E.J. (1990). Light and vision at depths greater than 200m. In: Light
and life in the sea (eds. Herring, P.J., Campbell, A.K., Whitfield, M., and
Maddock, L.), pp. 127–48. Cambridge University Press, Cambridge.
Denton, E.J., and Nicol, J.A.C. (1965). Reflexion of light by external surfaces
of the herring, Clupea harengus. Journal of the Marine Biological Association
of the UK 45, 711–38.
Duke-Elder, S. (1958). The eye in evolution. Henry Kimpton, London.
Easter, S.S., Johns, P.R., and Heckenlively, D. (1974). Horizontal compensa-
tory eye movements in goldfish (Carrassius auratus). I. The normal animal.
Journal of Comparative Physiology 92, 23–35.
Eisner, T., Silberglied, R.E., Aneshansley, D., Carrel, J.E., and Howland H.C.
(1969). Ultraviolet video viewing: the television camera as an insect eye.
Science 166, 1172–4.
Exner, S. (1891). The physiology of the compound eyes of insects and crusta-
ceans. Translated from the German by R.C. Hardie (1989), republished by
Springer-Verlag.
Forster, L. (1985). Target discrimination in jumping spiders. In: Neurobiology
of arachnids (ed. Barth, F.G.), pp. 249–74. Springer, Berlin.
Fox, D.L. (1953). Animal biochromes and structural colours. Cambridge University
Press, Cambridge.
Fox, H.M., and Vevers, G. (1960). The nature of animal colours. Sidgwick &
Jackson, London.
Fraenkel, G.S., and Gunn, D.L. (1940). The orientation of animals. Oxford
University Press. Reprinted (1961). Dover, New York.
Franceschini, N. (1975). Sampling of the visual environment by the com-
pound eye of the fly: fundamentals and applications. In: Photoreceptor
optics (eds. Snyder, A.W., and Menzel, R.), pp. 98–125. Springer, Berlin.
Franze, K., Grosche, J., Skatchov, S.N., Schinkinger, S., Foja, C., Schild, D.,
Uckermann, O., Travis, K., Reichenbach, A., and Guck, J. (2007). Müller
cells are living optical fibers in the vertebrate retina. Proceedings of the
National Academy of Sciences of the USA 104, 8287–92.
Frederiksen, R., Wcislo, W.T., and Warrant, E.J. (2008). Visual reliability and
information rate in the retina of a nocturnal bee. Current Biology 18, 349–53.
Friederichs, H.F. (1931). Beiträge zur Morphologie und Physiologie der
Sehorgane der Cicindeliden (Col.). Zeitschrift für Morphologie und Ökologie
der Tiere 21, 1–172.
Frost, B.J. (1978). The optokinetic basis of head-bobbing in pigeons. Journal of
Experimental Biology 74, 187–95.
Garm, A., O’Connor, M., Parkefelt, L., and Nilsson, D.-E. (2007). Visually
guided obstacle avoidance in the box jellyfish Tripedalia cystophora and
Chiropsella bronzie. Journal of Experimental Biology 210, 3616–23.
Garm, A., Oskarsson, M., and Nilsson, D.-E. (2011). Box jellyfish use terres-
trial visual cues for navigation. Current Biology 21, 798–803.
Gehring, W.J., and Ikeo, K. (1999). Pax-6: mastering eye morphogenesis and
eye evolution. Trends in Genetics 15, 371–7.
Gibson, J.J. (1979). The ecological approach to visual perception. Houghton-
Mifflin, Boston, MA.
Gilchrist, I.D., Brown, V., and Findlay, J.M. (1997). Saccades without eye
movements. Nature 390, 130–1.
Gomez, M., and Nasi, E. (2000). Light transduction in invertebrate hyperpo-
larizing photoreceptors: possible involvement of a Go-regulated guanylate
cyclase. Journal of Neuroscience 20, 5254–63.
Goodman, L.J. (1981). Organization and physiology of the insect dorsal ocel-
lar system. In: Handbook of sensory physiology, Vol. VII/6C (ed. Autrum, H.),
pp. 201–86. Springer, Berlin.
Goodale, M.A., Ellard, C.G., and Booth, L. (1990). The role of image size and
retinal motion in the computation of absolute distance by the mongolian
gerbil (Meriones unguiculatus). Vision Research 30, 399–413.
Görner, P., and Claas, B. (1985). Homing behavior and orientation in the
funnel-web spider, Agalena labyrinthica Clerk. In: Neurobiology of arachnids
(ed. Barth, F.G), pp. 275–97. Springer, Berlin.
Gregory, R.L. (1991). Origins of eyes – with speculations on scanning eyes.
Vision and visual dysfunction, Vol. 2 (eds. Cronly-Dillon, J.R., and Gregory,
R.L.), pp. 52–9. Macmillan, Basingstoke.
Greiner, B., Ribi, W.A., and Warrant, E.J. (2004). Retinal and optical adapta-
tions for nocturnal vision in the halictid bee Megalopta genalis. Cell and
Tissue Research 316, 429–37.
Greiner, B., Narendra, A., Reid, S.F., Dacke, M., Ribi, W.A., and Zeil, J. (2007).
Eye structure correlates with distinct foraging-bout timing in primitive
ants. Current Biology 17, R879–80.
Greuet, C. (1982). Photorécepteurs et phototaxie des flagellés et des stades
unicellulaires d’organismes inférieures. Annales de Biologie 21(2), 98–141.
Hanke, F.D., and Dehnhardt, G. (2009). Aerial visual acuity in harbor
seals (Phoca vitulina) as a function of luminance. Journal of Comparative
Physiology A, 195, 643–50.
Hanlon, R.T., and Messenger, J.B. (1996). Cephalopod behaviour. Cambridge,
Cambridge University Press.
Hardie, R.C. (2006). Phototransduction in invertebrate photoreceptors.
In: Invertebrate vision (eds. Warrant, E., and Nilsson, D.-E.), pp. 43–82.
Cambridge University Press, Cambridge.
Hardy, A. (1956). The open sea. Collins, London.
Harkness, L. (1977). Chamaeleons use accommodation cues to judge dis-
tance. Nature 267, 346–51.
Hateren, J. van (1989). Photoreceptor optics, theory and practice. In: Facets of
vision (eds. Stavenga, D.G., and Hardie, R.C.), pp. 74–89. Springer, Berlin.
Hateren, J.H. van, Hardie, R.C., Rudolph, A., Laughlin, S.B., and Stavenga,
D.G. (1989). The bright zone, a specialised dorsal eye region in the male
blowfly Chrysomyia megalocephala. Journal of Comparative Physiology A 164,
297–308.
Hecht, E. (2001). Optics (4th edn). Addison Wesley, Reading, MA.
Herring, P.J. (1994). Reflective systems in aquatic animals. Comparative
Biochemistry & Physiology 109A, 513–46.
Herring, P.J. (2002). The biology of the deep ocean. Oxford, Oxford University
Press.
Hesse, R. (1908). Das Sehen der niederen Tiere. Fischer, Jena.
Horridge, G.A. (1978). The separation of visual axes in apposition compound
eyes. Philosophical Transactions of the Royal Society of London B 285, 1–59.
Howard, J., Dubs, A., and Payne, R. (1984). The dynamics of photo-
transduction in insects. A comparative study. Journal of Comparative
Physiology A 154, 707–18.
Hughes, A. (1977). The topography of vision in mammals of contrasting
life style: comparative optics and retinal organization. In: Handbook of
sensory physiology, Vol. VII/5 (ed. Crescitelli, F.), pp. 613–756. Springer,
Berlin.
Hunt, D.M., Carvalho, L.S., Cowing, J.A., and Davies, W.L. (2009). Evolution
and spectral tuning of visual pigments in birds and mammals. Philosophical
Transactions of the Royal Society of London 364, 2941–55.
Imms, A.D. (1956). Insect natural history (2nd edn). Collins, London.
Jagger, W.S. (1992). The optics of the spherical fish lens. Vision Research 32,
1271–84.
Kelber, A. (2006). Invertebrate colour vision. In: Invertebrate vision. (eds.
Warrant E., and Nilsson D.-E.), pp. 250–90. Cambridge University Press,
Cambridge.
Kirschfeld, K. (1976). The resolution of lens and compound eyes. In: Neural
principles in vision (eds. Zettler, F., and Weiler, R.), pp. 354–70. Springer,
Berlin.
Kleinlogel, S., and Marshall, N.J. (2009). Ultraviolet polarisation sensitivity in
the stomatopod crustacean Odontodactylus scyllarus. Journal of Comparative
Physiology A 195, 1153–62.
Kolb, G. (1987). Behavioral experiments on the visual processing of color
stimuli in Pieris brassicae L. (Lepidoptera). Journal of Comparative Physiology
A 160, 645–56.
Kröger, R.H.H., Campbell, M.C.W., Fernald, R.D., and Wagner, H.-J. (1999).
Multifocal lenses compensate for chromatic defocus in vertebrate eyes.
Journal of Comparative Physiology A 184, 361–9.
Kunze, P. (1979). Apposition and superposition eyes. In: Handbook of sensory
physiology, Vol. VII/6A (ed. Autrum, H.), pp. 442–502. Springer, Berlin.
Labhart, T., and Nilsson, D.-E. (1995). The dorsal eye of the dragonfly
Sympetrum: specializations for prey detection against the blue sky. Journal
of Comparative Physiology A 176, 437–53.
Lamb, T.D., Collin, S.P., and Pugh, E.N. (2007). Evolution of the vertebrate eye:
opsins, photoreceptors, retina and eye cup. Nature Reviews Neuroscience 8,
960–75.
Land, M.F. (1965). Image formation by a concave reflector in the eye of the
scallop, Pecten maximus. Journal of Physiology (London) 179, 138–53.
Land, M.F. (1969). Movements of the retinae of jumping spiders in response
to visual stimuli. Journal of Experimental Biology 51, 471–93.
Land, M.F. (1972). The physics and biology of animal reflectors. Progress in
Biophysics & Molecular Biology 24, 77–106.
Land, M.F. (1978). Animal eyes with mirror optics. Scientific American 239(6),
126–34.
Land, M.F. (1980). Eye movements and the mechanism of vertical steering in
euphausiid Crustacea. Journal of Comparative Physiology 137, 255–65.
Land, M.F. (1981a). Optics and vision in invertebrates. In: Handbook of sensory
physiology, Vol. VII/6B (ed. Autrum, H.), pp. 471–592. Springer, Berlin.
Land, M.F. (1981b). Optics of the eyes of Phronima and other deep-sea amphi-
pods. Journal of Comparative Physiology 145, 209–26.
Land, M.F. (1982). Scanning eye movements in a heteropod mollusc. Journal
of Experimental Biology 96, 427–30.
Land, M.F. (1984). The resolving power of diurnal superposition eyes meas-
ured with an ophthalmoscope. Journal of Comparative Physiology A 154,
515–33.
Land, M.F. (1984). Crustacea. In: Photoreception and vision in invertebrates (ed.
Ali, M.A.), pp. 401–38. Plenum Press, New York.
Land, M.F. (1985). The morphology and optics of spider eyes. In: Neurobiology
of arachnids (ed. Barth, F.G.), pp. 53–78. Springer, Berlin.
Land, M.F. (1988). The functions of the eye and body movements in Labidocera
and other copepods. Journal of Experimental Biology 140, 381–91.
Land, M.F. (1989). Variations in the structure and design of compound eyes.
In: Facets of vision (eds. Stavenga, D.G., and Hardie, R.C.), pp. 90–111.
Springer, Berlin.
Land, M.F. (1997). Visual acuity in insects. Annual Review of Entomology 42,
147–77.
Land, M.F. (1999). Motion and vision: why animals move their eyes. Journal
of Comparative Physiology A 185, 341–52.
Land, M.F. (1999). Compound eye structure: matching eye to environment.
In: Adaptive mechanisms in the ecology of vision (eds. Archer, S.N., Djamgoz,
M.B.A, Loew, E.R., Partridge, J.C., and Vallerga, S.), pp. 51–71. Kluwer,
Dordrecht.
Land, M.F. (2000). On the functions of double eyes in midwater animals.
Philosophical Transactions of the Royal Society of London B 355, 1147–50.
Land, M.F. (2002). The spatial resolution of the pinhole eyes of giant clams
(Tridacna maxima). Proceedings of the Royal Society of London B 270, 185–8.
Land, M.F. (2008). Biological optics: circularly polarised crustaceans. Current
Biology 18, R348–9.
Land, M.F., and Nilsson, D.-E. (1990). Observations on the compound eyes
of the deep-sea ostracod. Macrocypridina castanea. Journal of Experimental
Biology 148, 221–33.
Land, M.F., and Nilsson, D.-E. (2006). General purpose and special purpose
visual systems. In Invertebrate vision (eds. Warrant, E.J., and Nilsson, D.-E.),
pp. 167–210. Cambridge University Press, Cambridge.
Land, M.F., and Osorio, D. (1990). Waveguide modes and pupil action in
the eyes of butterflies. Proceedings of the Royal Society of London B 241,
93–100.
Land, M.F., and Tatler, B.W. (2009). Looking and acting. Oxford University
Press, Oxford.
Land, M.F., Gibson, G., Horwood, J., and Zeil, J. (1999). Fundamental dif-
ferences in the optical structure of the eyes of nocturnal and diurnal
mosquitoes. Journal of Comparative Physiology A 185, 91–103
Land, M.F, Marshall, N.J., Brownless, D., and Cronin, T. (1990). The eye
movements of the mantis shrimp Odontodactylus scyllarus (Crustacea:
Stomatopoda). Journal of Comparative Physiology A 167, 155–66.
Land, M.F, Mennie, N., and Rusted, J. (1999). The roles of vision and eye move-
ments in the control of activities of daily living. Perception 28, 1311–28.
Land, M.F., Horwood, J., Lim, M.L.M., and Li, D. (2007). Optics of the
ultraviolet-reflecting scales of a jumping spider. Proceedings of the Royal
Society of London B 274, 1583–9.
Larusso, N.D., Ruttenberg, B.E., Singh, A.K., and Oakley, T.H. (2008). Type
II opsins: evolutionary origin by internal domain duplication? Journal of
Molecular Evolution 66, 417–23.
Layne, J.E. (1998). Retinal location is the key to identifying predators in fid-
dler crabs (Uca pugilator). Journal of Experimental Biology 201, 2253–61.
Layne, J.E., Wicklein, M., Dodge, F.A., and Barlow, R.B. (1997). Prediction of
maximum allowable retinal slip in the fiddler crab, Uca pugilator. Biological
Bulletin of Woods Hole 193, 202–3.
Lee, M.S.J., Jago, J.B., Garcia-Bellido, D.C., Edgecombe, G.D., Gehling,
J.G., and Paterson, J.R. (2011). Modern optics in exceptionally preserved eyes
of Early Cambrian arthropods from Australia. Nature 474, 631–4.
Lehrer, M., Srinivasan, M.V., Zhang, S.W., and Horridge, G.A. (1988). Motion
cues provide the bee’s visual world with a third dimension. Nature 332,
356–7.
Levi-Setti, R. (1993). Trilobites. Chicago University Press, Chicago, IL.
Li, J., Zhang, Z., Liu, Q., Gan, W., Chen, M.L.M., and Li, D. (2008). UVB-
based mate-choice cues used by females of the jumping spider Phintella
vittata. Current Biology 18, 699–703.
Lim, M.L.M., Land, M.F., and Li, D. (2007). Sex-specific UV and fluorescence
signals in jumping spiders. Science 315, 481.
Locket, N.A. (1977). Adaptations to the deep-sea environment. In: Handbook
of sensory physiology, Vol. VII/5 (ed. Crescitelli, F.), pp. 67–192. Springer,
Berlin.
Lythgoe, J.N. (1979). The ecology of vision. Clarendon Press, Oxford.
Lythgoe, J.N., and Shand, J. (1989). The structural basis for iridescent colour
changes in dermal and corneal iridophores in fish. Journal of Experimental
Biology 141, 313–25.
McIntyre, P., and Caveney, S. (1998). Superposition optics and the time of
flight of onitine dung beetles. Journal of Comparative Physiology A 183,
45–60.
Mallock, A. (1894). Insect sight and the defining power of composite eyes.
Proceedings of the Royal Society of London B 55, 85–90.
Malmström, T., and Kröger, R.H.H. (2006). Pupil shapes and lens optics in the
eyes of terrestrial vertebrates. Journal of Experimental Biology 209, 18–25.
Mandapaka, K., Morgan, R.C., and Buschbeck, E.K. (2006). Twenty-eight retinas
but only twelve eyes: an anatomical analysis of the larval visual system
of the diving beetle Thermonectus marmoratus (Coleoptera: Dytiscidae).
Journal of Comparative Neurology 497, 166–81.
Marshall, N.B. (1979). Developments in deep-sea biology. Blandford, Poole.
Marshall, N.J., and Oberwinkler, J. (1999). The colourful world of the mantis
shrimp. Nature 401, 873–4.
Marshall, J., Cronin, T.W., Shashar, N., and Land, M. (1999). Behavioural evi-
dence for polarization vision in stomatopods reveals a potential channel
for communication. Current Biology 9, 755–8.
Martin, G.R. (1983). Schematic eye models in vertebrates. Progress in Sensory
Physiology 4, 43–81.
Martin, G.R. (1985). Eye. In: Form and function in birds, Vol. 3, pp. 311–73.
Academic, London.
Martin, G. (1999). Optical structure and visual fields of birds: their relation-
ship with foraging behaviour and ecology. In: Adaptive mechanisms in the
ecology of vision (eds. Archer, S.N., Djamgoz, M.B.A, Loew, E.R., Partridge,
J.C., and Vallerga, S.), pp. 485–508. Kluwer, Dordrecht.
Martin, G.R. (2009). What is binocular vision for? A birds’ eye view. Journal
of Vision 9, 1–19.
Mäthger, L.M., Land, M.F., Siebeck, U.E., and Marshall, N.J. (2003).
Rapid colour changes in multilayer reflecting stripes in the paradise
whiptail, Pentapodus paradiseus. Journal of Experimental Biology 206,
3607–13.
Mazza, C.A., Izaguirre, M.M., Curiale, J., and Ballaré, C.G. (2010). A look
into the invisible: ultraviolet-B sensitivity in an insect (Caliothrips phaseoli)
revealed through a behavioural action spectrum. Proceedings of the Royal
Society of London B 277, 367–73.
McAdam, A.G., and Kramer, D.L. (1998). Vigilance as a benefit of intermit-
tent locomotion in small mammals. Animal Behaviour 55, 109–17.
Menzel, R. (1979). Spectral sensitivity and color vision in invertebrates. In:
Handbook of sensory physiology, Vol. VII/6A (ed. Autrum, H.), pp. 503–80.
Springer, Berlin.
Messenger, J.B. (1981). Comparative physiology of vision in molluscs. In:
Handbook of sensory physiology, Vol. VII/6C (ed. Autrum, H.), pp. 93–200.
Springer, Berlin.
Messenger, J.B. (1991). Photoreception and vision in molluscs. In: Vision and
visual dysfunction, Vol. 2 (eds. Cronly-Dillon, J.R.E., and Gregory, R.L.),
pp. 364–97. Macmillan, Basingstoke.
Miller, W.H., and Bernard, G.D. (1968). Butterfly glow. Journal of Ultrastructure
Research 24, 286–94.
Millodot, M., and Sivak, J.G. (1979). Contribution of the cornea and lens to
the spherical aberration of the eye. Vision Research 19, 685–7.
Mollon, J.D., and Sharpe, L.T. (eds.) (1983). Colour vision: physiology and psy-
chophysics. Academic Press, London.
Motani, R., Rothschild, B.M., and Wahl, W. Jr. (1999). Large eyeballs in div-
ing ichthyosaurs. Nature 402, 747.
Mueller, K.P., and Labhart, T. (2010). Polarizing optics in a spider eye. Journal
of Comparative Physiology A 196, 335–48.
Munk, O. (1970). On the occurrence and significance of horizontal and
band-shaped retinal areae in teleosts. Videnskabelige Meddelelser fra Dansk
Naturhistorisk Forening 133, 85–120.
Muntz, W.R.A., and Raj, U. (1984). On the visual system of Nautilus pompil-
ius. Journal of Experimental Biology 109, 253–63.
Nakayama, K. (1981). Differential motion hyperacuity under conditions of
common image motion. Vision Research 21, 1475–82.
Narendra, A., Reid, S.F., Greiner, B., Peters, R.A., Hemmi, J.M., Ribi, W.A.,
and Zeil, J. (2011). Caste-specific visual adaptations to distinct daily activ-
ity schedules in Australian Myrmecia ants. Proceedings of the Royal Society
of London B 278, 1141–9.
Necker, R. (2007). Head-bobbing of walking birds. Journal of Comparative
Physiology A 193, 1177–83.
Newell, G.E. (1965). The eye of Littorina littorea. Proceedings of the Zoological
Society of London 144, 75–86.
Nicol, J.A.C. (1989). The eyes of fishes. Oxford University Press, Oxford.
Nicol, J.A.C., Arnott, H.J., and Best, A.C.G. (1973). Tapeta lucida in bony
fishes. Canadian Journal of Zoology 51, 69–81.
Nilsson, D.-E. (1988). A new type of imaging optics in compound eyes.
Nature 332, 76–8.
Nilsson, D.-E. (1989). Optics and evolution of the compound eye. In: Facets of
vision (eds. Stavenga, D.G., and Hardie, R.C.), pp. 30–73. Springer, Berlin.
Nilsson, D.-E. (1990). Three unexpected cases of refracting superposition
eyes in crustaceans. Journal of Comparative Physiology A 167, 71–8.
Nilsson, D.-E. (1994). Eyes as optical alarm systems in fan worms and ark clams.
Philosophical Transactions of the Royal Society of London B 346, 195–212.
Nilsson, D.-E. (1996). Eye ancestry—old genes for new eyes. Current Biology
6, 39–42.
Nilsson, D.-E. (2009). The evolution of eyes and visually guided behaviour.
Philosophical Transactions of the Royal Society of London B 364, 2833–47.
Nilsson, D.-E., and Kelber, A. (2007). A functional analysis of compound eye
evolution. Arthropod Structure and Development 36, 373–85.
Nilsson, D.-E., and Pelger, S. (1994). A pessimistic estimate of the time
required for an eye to evolve. Proceedings of the Royal Society of London B
256, 53–8.
Nilsson, D.-E., and Modlin, R.F. (1994). A mysid shrimp carrying a pair of
binoculars. Journal of Experimental Biology 189, 213–36.
Nilsson, D.-E., and Ro, A.-I. (1994). Did neural pooling for night vision
lead to the evolution of neural superposition eyes? Journal of Comparative
Physiology A 175, 289–302.
Nilsson, D.-E., Land, M.F., and Howard, J. (1988). Optics of the butterfly eye.
Journal of Comparative Physiology A 162, 341–66.
Nilsson, D.-E., Hamdorf, K., and Höglund, G. (1992). Localization of the pupil
trigger in insect superposition eyes. Journal of Comparative Physiology A
170, 217–26.
Nilsson, D.-E., Gislén, L., Coates, M.M., Skogh, C., and Garm, A. (2005).
Advanced optics in a jellyfish eye. Nature 435, 201–5.
O’Connor, M., Garm, A., and Nilsson, D.-E. (2009). Structure and optics of the
eyes of the box jellyfish Chiropsella bronzie. Journal of Comparative Physiology A 195, 557–69.
Olberg, R.M., Seaman, R.C., Coats, M.I., and Henry, A.F. (2007). Eye move-
ments and target fixation during dragonfly prey-interception flights.
Journal of Comparative Physiology A 193, 685–93.
Ott, M. (2006). Visual accommodation in vertebrates: mechanisms, physiologi-
cal response and stimuli. Journal of Comparative Physiology A 192, 97–111.
Ott, M., and Schaeffel, F. (1995). A negatively powered lens in the chamaeleon.
Nature 373, 692–4.
Packard, A. (1972). Cephalopods and fish: the limits of convergence. Biological
Reviews 47, 241–307.
Parker, A.R. (2000). 515 million years of structural colour. Journal of Optics A:
Pure and Applied Optics 2, R15–R28.
Partridge, J.C., and Douglas, R.H. (1995). Far-red sensitivity of dragon fish.
Nature 375, 21–2.
Paul, H., Nalbach, H.-O., and Varjú, D. (1990). Eye movements in the rock
crab Pachygrapsus marmoratus walking along straight and curved paths.
Journal of Experimental Biology 154, 81–97.
Paulus, H.F. (1979). Eye structure and the monophyly of the Arthropoda. In:
Arthropod phylogeny (ed. Gupta, A.P.), pp. 299–383. Van Nostrand Reinhold,
New York.
Pedler, C. (1963). The fine structure of the tapetum cellulosum. Experimental
Eye Research 2, 189–95.
Peirson, S.N., Halford, S., and Foster R.G. (2009). The evolution of irradiance
detection: melanopsin and the non-visual opsins. Philosophical Transactions
of the Royal Society of London B 364, 2849–65.
Pettigrew, J.D., Collin, S.P., and Ott, M. (1999). Convergence of specialized
behaviour, eye movements and visual optics in the sandlance (Teleostei)
and the chameleon (Reptilia). Current Biology 9, 421–4.
Piatigorsky, J. (2007). Gene sharing and evolution. Harvard University Press,
Cambridge, MA.
Pirenne, M.H. (1967). Vision and the eye. Chapman & Hall, London.
Pix, W., Zanker, J.M., and Zeil, J. (2000). The optomotor response and spatial
resolution in the male Xenox vesparum (Strepsiptera). Journal of Experimental
Biology 203, 3397–409.
Pumphrey, R.J. (1961). Concerning vision. In: The cell and the organism (eds.
Ramsay, J.A., and Wigglesworth, V.B.), pp. 193–208. Cambridge University
Press, Cambridge.
Purnell, M.A. (1995). Large eyes and vision in conodonts. Lethaia 28, 187–8.
Reymond, L. (1985). Spatial visual acuity of the eagle Aquila audax: a behav-
ioural, optical and anatomical investigation. Vision Research 25, 1477–91.
Rochon-Duvigneaud, A. (1943). Les yeux et la vision des vertébrés. Masson et
Cie, Paris.
Rodieck, R.W. (1998). The first steps in seeing. Sinauer, Sunderland, MA.
Rossel, S. (1980). Foveal fixation and tracking in the praying mantis. Journal
of Comparative Physiology A 139, 307–31.
Rossel, S. (1989). Polarization sensitivity in compound eyes. In: Facets of vision
(eds. Stavenga, D.G., and Hardie, R.C.), pp. 298–316. Springer, Berlin.
Salvini-Plawen, L. von, and Mayr, E. (1977). On the evolution of photorecep-
tors and eyes. Evolutionary Biology 10, 207–63.
Schaeffel, F., Glasser, A., and Howland, H.C. (1988). Accommodation, refrac-
tive error and eye growth in chickens. Vision Research 28, 639–57.
Scherer, C., and Kolb, G. (1987). Behavioral experiments on the visual
processing of color stimuli in Pieris brassicae L. (Lepidoptera). Journal of
Comparative Physiology A 160, 645–56.
Schilstra, C., and van Hateren, J.H. (1998). Stabilizing gaze in flying blowflies.
Nature 395, 654.
Schmitz, H., and Bleckmann, H. (1998). The photomechanic infrared recep-
tor for the detection of forest fires in the beetle Melanophila acuminata.
Journal of Comparative Physiology A 182, 647–57.
Schöne, H. (1984). Spatial orientation: the spatial control of behavior in animals
and man. Princeton University Press, Princeton, NJ.
Schwind, R. (1980). Geometrical optics of the Notonecta eye: adaptations to
optical environment and way of life. Journal of Comparative Physiology 140,
59–69.
Schwind, R. (1983). A polarization-sensitive response of the flying water
bug Notonecta glauca to UV light. Journal of Comparative Physiology A 150,
87–91.
Schwind, R. (1991). Polarization vision in water insects and insects living on
a moist substrate. Journal of Comparative Physiology A 169, 531–40.
Shu, D.-G., Conway-Morris, S., Han, J., Zhang, Z.-F., Yasui, K., Janvier, P., Chen,
L., Zhang, X.-L., Liu, J.-N., Li, Y., and Liu, H.-Q. (2003). Head and backbone of
the Early Cambrian vertebrate Haikouichthys. Nature 421, 526–9.
Sivak, J.G. (1976). The accommodative significance of the ‘ramp’ retina in the
eye of the stingray. Vision Research 16, 945–50.
Sivak, J.G., Hildebrand, T., and Lebert, C. (1985). Magnitude and rate of accom-
modation in diving and non-diving birds. Vision Research 25, 925–33.
Snyder, A.W. (1979). Physics of vision in compound eyes. In: Handbook of
sensory physiology, Vol. VII/6A (ed. Autrum, H.), pp. 225–313. Springer,
Berlin.
Snyder, A.W., and Miller, W.H. (1978). Telephoto lens system of falconiform
eyes. Nature 275, 127–9.
Sobel, E.C. (1990). The locust’s use of motion parallax to measure distance.
Journal of Comparative Physiology A 167, 579–88.
Somanathan, H., Kelber, A., Borges, R.M., Wallén, R., and Warrant, E.J. (2009).
Visual ecology of Indian carpenter bees II: adaptations of eyes and ocelli
to nocturnal and diurnal lifestyles. Journal of Comparative Physiology A 195,
571–83.
Speiser, D.I., Eernisse, D.J., and Johnsen, S. (2011). A chiton uses aragonite
lenses to form images. Current Biology 21, 665–70.
Stange, G. (1981). The ocellar component of flight equilibrium control in
dragonflies. Journal of Comparative Physiology A 141, 335–47.
Stavenga, D.G. (1979). Pseudopupils of compound eyes. In: Handbook of
sensory physiology, Vol. VII/6A (ed. Autrum, H.), pp. 357–439. Springer,
Berlin.
Stavenga, D.G. (2003). Angular and spectral sensitivity of fly photorecep-
tors. I. Integrated facet lens and rhabdomere optics. Journal of Comparative
Physiology A 189, 1–17.
Stavenga, D.G. (2006). Partial coherence and other delicacies of lepidop-
teran superposition eyes. Journal of Experimental Biology 209, 1904–13.
Stavenga, D.G., Foletti, D., Palasantzas, G., and Arikawa, K. (2006). Light on
the moth-eye nipple array of butterflies. Proceedings of the Royal Society of
London B 273, 661–7.
Steinbrecht, R.A., Mohren, W., Pulker, H.K., and Schneider, D. (1985).
Cuticular interference reflectors in the golden pupae of danaine butter-
flies. Proceedings of the Royal Society of London B 226, 367–90.
Stowasser, A., Rapaport, A., Layne, J.E., Morgan, R.C., and Buschbeck, E.
(2010). Biological bifocal lenses with image separation. Current Biology 20,
1482–6.
Talbot, C.M., and Marshall, J. (2010a). Polarization sensitivity of two species
of cuttlefish – Sepia plangon (Gray 1849) and Sepia mestus (Gray 1849) –
demonstrated with polarized optomotor stimuli. Journal of Experimental
Biology 213, 3364–70.
Talbot, C.M., and Marshall J. (2010b). Polarization sensitivity and retinal
topography of the striped pyjama squid (Sepioloidea lineolata – Quoi/
Gaimard 1832). Journal of Experimental Biology 213, 3371–7.
Temple, S., Hart, N.S., Marshall, N.J., and Collin, S.P. (2010). A spitting image:
specializations in archerfish eyes for vision at the interface between air
and water. Proceedings of the Royal Society of London B 277, 2607–13.
Timney, B., and Keil, K. (1992). Visual acuity in the horse. Vision Research
32, 2289–93.
Vogt, K. (1980). Die Spiegeloptik des Flusskrebsauges. The optical system of
the crayfish eye. Journal of Comparative Physiology 135, 1–19.
Vukusic, P., and Sambles, J.R. (2003). Photonic structures in biology. Nature 424,
852–5.
Wagner, H-J., Douglas, R.H., Frank, T.M., Roberts, N.W., and Partridge, J.C.
(2009). Dolichopteryx longipes, a deep-sea fish with a bipartite eye using
both refractive and reflective optics. Current Biology 19, 108–14.
Walls, G.L. (1942). The vertebrate eye and its adaptive radiation. Cranbrook
Institute, Bloomfield Hills. Reprinted (1967) Hafner, New York.
Walls, G.L. (1962). The evolutionary history of eye movements. Vision
Research 2, 69–80.
Warrant, E.J. (1999). Seeing better at night: life style, eye design and the
optimum strategy of spatial and temporal summation. Vision Research 39,
1611–30.
Warrant, E.J., and McIntyre, P.D. (1990). Screening pigment, aperture and
sensitivity in the dung beetle superposition eye. Journal of Comparative
Physiology A 167, 805–15.
Warrant, E.J., and Nilsson, D.-E. (1998). The absorption of white light by
photoreceptors. Vision Research 38, 195–207.
Warrant, E., Bartsch, K., and Günther, C. (1999). Physiological optics in the
hummingbird hawkmoth: a compound eye without ommatidia. Journal of
Experimental Biology 202, 497–511.
Warrant, E.J., Collin, S.P., and Locket, N.A. (2003). Eye design and vision in
deep-sea fishes. In: Sensory processing in aquatic environments (eds. Collin,
S.P., and Marshall, N.J.), pp. 303–22. Springer, New York.
Warrant, E., Porombka, T., and Kirchner, W.H. (1996). Neural image enhance-
ment allows honeybees to see at night. Proceedings of the Royal Society of
London B 263, 1521–6.
Wehner, R. (1981). Spatial vision in arthropods. In: Handbook of sensory physi-
ology, Vol. VII/6C (ed. Autrum, H.), pp. 287–616. Springer, Berlin.
Wehner, R. (1987). ‘Matched filters’ – neural models of the external world.
Journal of Comparative Physiology A 161, 511–31.
Wilkens, L.A. (1984). Ultraviolet sensitivity in hyperpolarizing receptors of
the giant clam Tridacna. Nature 309, 446–8.
Williams, D.S., and McIntyre, P. (1980). The principal eyes of a jumping spi-
der have a telephoto component. Nature 288, 578–80.
Xianguang, H., and Bergström, J. (1997). Arthropods of the Lower Cambrian
Chengjiang fauna, southwest China. Fossils & Strata 45, 1–116.
Yarbus, A.L. (1967). Eye movements and vision. Plenum Press, New York.
Yerramilli, D., and Johnsen, S. (2010). Spatial vision in the purple sea urchin
Strongylocentrotus purpuratus. Journal of Experimental Biology 213, 249–55.
Young, J.Z. (1964). A model of the brain. Oxford University Press, Oxford.
Zeil, J. (1983). Sexual dimorphism in the visual system of flies: the free flight
behaviour of male Bibionidae (Diptera). Journal of Comparative Physiology
150, 359–412.
Zeil, J. (1993). Orientation flights of solitary wasps (Cerceris; Sphecidae;
Hymenoptera). Journal of Comparative Physiology A 172, 189–222.
Zeil, J., Nalbach, G., and Nalbach, H.-O. (1989). Spatial vision in a flat world:
optical and neural adaptations in arthropods. In: Neurobiology of sensory
systems (eds. Singh, R.N., and Strausfeld, N.J.), pp. 123–37. Plenum Press,
New York.
Zeil, J., Boeddeker, N., Hemmi, J.M., and Stürzl, W. (2007). Going wild:
towards an ecology of visual information processing. In: Invertebrate neu-
robiology (eds. North, G., and Greenspan, R.J.), pp. 381–403. Cold Spring
Harbor Laboratory Press, New York.
Zurek, D.B., Taylor, A.J., Evans, C.S., and Nelson, X.J. (2010). The role of the
anterior lateral eyes in the vision-based behaviour of jumping spiders.
Journal of Experimental Biology 213, 2373–8.
Index

Page numbers in bold refer to figures and italic to tables.

Abalone, eye 72–73 Anomuran hermit crab 213 Barbatia 2


Absorption coefficient 61 Anti-reflection coatings 142 Bathylychnops 89 –90
Aberrations 57 Ant-lion, larval ocelli 125 –6 Bee
Acanthopleura 119 Aplocheilus lineatus, visual corneal lens 165
Acceptance angle 162, 168 streaks 87– 8 drone
Accommodation 108–10, 118–19 Apposition eye 157–64 resolution
lens function 108 acuity distribution distribution 181
reptiles and birds 108 patterns 180–1 sexual pursuit 183
vertebrate mechanisms 109 afocal 204–5 pseudopupil 174
Acute zones 177, 181 ancestral 157–8 rhabdom 162
Adaptation double 185– 6 spectral sensitivity
light and dark ecological variations curves 33
apposition eyes 172 176–88 Beetle, superposition eyes 192
superposition eyes 200–1 function 158 Bibio marci 185
Aeschna multicolor, eye 182 image formation Bibionid fly
Afocal apposition 165, mechanisms 165 –6 male eyes 182, 185
204–5 light-dark adaptation Binocular field,
Airy diffraction pattern 53 172–3 vertebrates 117
origins 54 –5 nocturnal 170–1 Bioluminescence 21
diameter 53 optical comparisons 158, Birds
Amegilla, pseudopupil 174 192, 199 accommodation 108
Amphibious eyes 117–19 resolution 166–8 eye movements 225
Amphioxus 72 size 169 head bobbing 227– 8
Amphitretus pelagicus 90 structure 161 Blowfly see Calliphora
Anableps, ovoid lens 118 –19 Aptychotrema rostrata, Blur
Anadara 19 sunshade Plate 1 avoidance 217–18
Anartia sp, tapetum Aquatic eyes 72–6 circle 56–7
multilayer 146 Arca 158–9 image degradation 229
Anax junius 181, 184 Archer fish 87 motion 217, 229–30
Anchoa mitchilli, tapetum 146 Arctosa variana 123 rule 230
Anchovy see Anchoa mitchilli Aristostomias 90 Box jellyfish 2, 77; see also
Anemone Plate 1 Ark shells see Arca Cubozoa
Angular velocities 177–80 Artemia 172–3 Bragg, W.L. 26
Animal groups, evolutionary Ascalaphus, pigment Branchiomma compound eye 159
relationships 17 migration 201 Brittle stars 160
Anomalocaris 157 Atalophlebid mayflies 213 Brücke’s muscle 108–9

Burgess shale 2, 3 –4 Clydagnathus 5 Dinoflagellates, single cell


Butterflies Cockle see Cardium eye 2, 9
colours 151 Cod 84 Dinopis 121–2
eye 165, 204 Colour 32–9 Dioptromysis , double
afocal 204–5 structural 147 eyes 202–203
mode patterns 206, Colour vision 33–9 Diptera, neural
Plate 3 dodeca-chromatic 37, 236 superposition 163
optical system 205–6 minimum Dog, eye 106
reflectors 146 requirements 35 Dolichopodid, colours 151
ommatidial receptive spectral sensitivities 36 Dolichopteryx longipes 135–6
fields 178, 207 Conodont 5 Dragonet see Callionymus lyra
pupa, reflecting Contrast 31–2 Dragonfly
multilayer 155, Plate 2 transfer function 51–2, 65 eye size 170, 184
resolution pattern 178 Copepod 90–2 dorsal ocelli 128
Copilia 91–2 hunting 184
Callinectes 171–2 Cornea resolution
Callionymus lyra, reflector 151 insect eyes 125–8 distribution 181–2
Calliphora lens 17 Drassodes, polarizers 125
eye 180 nipple array 142 Dromedary, eye
ocellus 127 optics 94–7 structure 106
resolution distribution 181 shape and spherical Drosophila melanogaster
sexual pursuit 183 correction 110 pseudopupil 174
Cambrian explosion 2–6 Corner reflector 209–10 Duck, diving 118
Camouflage Cougar, eye 106 Dung beetle eye 192, 197
reflecting 152–5 Counter-illumination 155
in sea 153–4 Crabs Einstein, Albert 24, 29
Cardium 137 ecological variations 187 Elephant seal 105
Cat, tapetum 146 eye movements 224 Empid fly 182
ganglion cell pattern 116 Xanthid 213 Emu Bay Shale
pupil shape 112 Crayfish arthropod 183
Catfish pupil shape 112 mirror box eyes 209 Eriococcus 128
Centroptilum sp. Plate 3 superposition eyes 208 Erythropsidinium 2
Cephalopholis eye Plate 1 Crystallins 8–9 Euphausiid
ganglion cell pattern 87–8 Cubozoa 2, 12, 76–7 double eyes 203
Cephalopods 83–6 Cupiennius tapetum 141 refractive index
Cerceris, head Cuttlefish, polarization gradient 195
orientation 225– 6 vision 41 superposition eyes 192
Chalarus 184 Cypselurus heterurus, eye 118 Euploea core, pupa 155,
Chamaeleon, eye Cystisoma, eyes 185 Plate 2
structure 105–6, 108 Euroleon, larval ocelli 125–6
China, Chengjiang fauna 3 Daphnia, colour vision 163 Exner, Sigmund 193
Chiropsella bronzie 2 Dark adaptation Eye
Chitin, in reflectors 145–9 mechanisms 172, 201 apposition compound
Chiton, ocelli 119, 160 Decapod shrimp, 17, 160
Chromatic aberration 57, superposition eye 209 axis direction 117
81–2, 113 Deilephila, pigment basic compound 17, 159
Chromophore 37–8 migration 201 binocular field 117
Chrysina gloriosa 42 Dialommus fuscus 118 development 18
Chrysomyia, bright zone 183 Diffraction 47, 52 diurnal and nocturnal,
Cicindela, larval eyes 126 –7 apposition eyes 168–9 differences 69, 106
Cirolana 171–2 and image 47, 54 diversity 2, 17
Clam, pigment-pit eyes 19 limit 52 evolution 1–18
Clupea harengus, scale 153, pattern 53 course and pace 15 –16
Plate 2 Dilophus sp 182 focal length 98
mirror optics 17, 130 locomotor 233 Heteronympha merope


movements velocity, on retina 177 178–9, 205–6, Plate 3
birds 225 Flowers, spectral Hilara sp 182
human 218–20, 223 reflectances 33, Plate 1 Hippocampus 87
optical types 17 Fly see Calliphora; Diptera Histioteuthis 83
pinhole 17, 73 Focal length 49–50, 96–8 Hollardops mesocristata 189
pit 17, 72 definition 98 Holochroal eyes 189
purpose 10–15 equivalent 102 Homoptera 128
reflecting Focal powers 99 Horse
superposition 17, 194 Four-eyed fish see Anableps behavioural resolution 115
refracting Fovea 115 eye size 105
superposition 17, 208 Fresnel’s formula 145 pupil shape 112
scanning systems 235 Horsefly, cornea 146,
sequential Gadus 84 Plate 3
modifications 15 Ganglion cells, Horseshoe crab see Limulus
single chambered distribution 87– 8, Housefly, sexual pursuit 183;
cornea 17, 94 115–16 see also Diptera
lens 17, 79 Gaze Hoverfly
size 21, 86, 104, 106, movements 220, 224 object detection 231
124, 169 stabilisation 217 tracking behaviour 226
under-focused 76 stabilising reflexes 221–2 Human cornea 110
see also Amphibious Gecko, Tokay, eye 105, 112 Human eye 106
eyes; Apposition Genes, eye development 7 model 100–3
eye; Aquatic eyes; Gennadas 213 movements, types and
Superposition eye Gerbil, head-bob 234 roles 218, 223
Eye-glow Gerris 180, 187 optic nerve 116
spider eyes, Plate 3 Ghost crab Human rods, absorption
superposition eyes 199, acuity band 186 spectra 33
Plate 3 pseudopupil 174 Hummingbird
Giant squid 69, 86 hawkmoth 203, Plate 3
Fiddler crab 187 Gigantocypris, reflector Huygens-Fresnel
Firefly eye eyes 137–8 scheme 24, 25
corneal image 192 Goldfish, eye Hypericum Plate 1
refractive index movements 223– 4 Hyperiid amphipod,
gradient 195 Guanine 145–9 apposition eyes 185–6
Fish Gullstrand model 100, 103 Hybomitra lasiophthalma,
camouflage 152–5, Plate 2 cornea 146
eye 79, 84 Habronattus americanus Hypsicomus, compound
amphibious 118 –19 Plate 4 eyes 156
and environment 87–91 Haematopota pluvialis Plate 3 Hyrax, pupil shape 112
focussing Haliotis 73
mechanisms 109 Haikouichthys 5 Ichthyosaur 86
size 86 Hawk Illuminance 28–30
sunshade 141 resolution 113 Image formation
tubular 89 telephoto optics 114 apposition 165
multilayer mirrors 145– 6 Head bobbing in birds apposition and
Fixation 218–20 227– 8 superposition 192
Flatworm, planarian 12, 217 Head movements curved cornea 96–7
Flight bees 224 lens 80
forward, pattern 177–9 birds 225– 8 lens-cornea combination
past vegetation 176 flies 224–5 100–3
Flow-field 176–7, 232–3 wasps 225–6 mirror 131, 134
distance measurement Herring see Clupea harengus superposition 193–4
232–4 Helix 73, 76 Image motion 228–9
Insects sizes and shapes 106 Mammal


compound eyes 160–4, Lethrinus, ganglion cell accommodation 109
191–4 pattern 87– 8 eye structure 106
flight behaviour as eye Light pupil 112
movement 226 absorption by spectral sensitivity 36 –7
ocelli, adult 127–8 photoreceptors 60 Man, eye structure 106
ocelli, dorsal 125–7 adaptation Mantis shrimp Plate 4
Inter-ommatidial angle 162 mechanisms 138–9 colour scanning 236–7
Irradiance 28–30 distribution in Airy colour vision 37
Ipnops murrayi 89 disc 53 polarization vision 41–2, 44
Isia, larval ocelli 125 –6 environmental 27–8 Mating, acute zones 180–4
intensity 26–31 Matthiessen gradient 80
Jellyfish measurement 28–31 Matthiessen lens 80
box, eyes 2, 13, 76–7 photometric Matthiessen’s ratio 79–81
cubomedusan, spherical measurement Maxwell, James Clerk 24–5, 79
lens 77 system 30 Mayflies
Junonia villida, radiometric atalophlebid 213
pseudopupil 174 measurement eye glow Plate 3
Jumping spider 120 –3, system 30 Megalopta genalis 2, 170
Plate 4 interference 25 Melanitis leda 207
fields of view 121 low level effect 48, 63 Melanophila, infra-red
pattern recognition 237–9 nature of 24–6 radiation detection 32
principal eyes 122 polarization 23, 39–44 Merganser,
ultraviolet vision 32 polarized, reception 40 accommodation 118
propagation 25 Mesonychoteuthis hamiltoni 86
Kineses 216 quantum theory 24–5 Metaphidippus, scanning
Krill see Euphausiid refraction 25 speed 241
spectrum 33 Microcebus murinus 105
Labidocera 83 ultraviolet 32 Microvillous receptors 7,
scanning eye 241 wavefronts 25 12, 40
Leander, superposition Limnichthyes fasciatus Mirounga leonina 105
eye 199 86–7, 92 Mirror boxes 209, 210 –11
Leeuwenhoek, Antoni Limpet 73 Mirrors
van 160 Limulus 125 in eyes 130–41
Lens 79–83 apposition eye 120, 158 multilayer 144
in accommodation 109 lens-cylinder 165 colour 147
aquatic 17 refractive index display use 150–2
cephalopod 83 gradient 195 reflectance 145– 8
evolution 15, 78–80 Littorina 73 in reflecting
fish 83 Lobelia Plate 1 camouflage 152–5
land eyes 94–5 Locust spectral reflectance
optics 79–83 dorsal ocelli 127 148–50
paths of rays 80 peering 234 structure 146
refractive index resolution distribution 180 reflecting
gradients 79–81 Luminance 27, 30 sunshades 141
spherical 79–84 Lycosid spider, eye, physical optics 143–50
structure 85 tapeta 141, Plate 3 Mnierpes macrocephalus 118
Lens cylinder 165, 195 –6 Lycosidae 122–4 Moths
Limulus 165 Lymnea 82, 86 eye-glow 200, Plate 3
Phronima 165 Lynx, eye structure 106–7 image quality 197– 8
Lens-pad 89 refractive index
Lens/cornea combination Macroglossum eye 200, 203, gradient 195
land vertebrates 104–7 Plate 3 superposition eyes 192
optics 95–103 Macropipus 212 wing-scale 146
Motion blur 217, 229–30 Optokinetic reflex 75, Phronima sedentaria


Mouse, eye structure 106 221–2, 223 eye 185– 6
Mouse lemur 105 Ostracod, reflector eyes 137 lens cylinder 165
Multifocal lens 81–2, 113 Owl, eye size 106 Phrosina semi-lunata,
Müller cells 60 Owl-fly see Ascalaphus divided eye 186
Multilayer interference Oxygyrus resolution distribution 181
144–50 prey detection 235– 6 Pigment migration 172,
Myopia, lower field 110 spherical lens eye 82 200–1
Myrmecia 171 Pigment cup eye 12, 17, 73
Mysid shrimp 202–3 Palaemonetes varians, eye 209 Pigeon
Papilio palinurus, wing eye structure 106
Nautilus 73–4 scales 152 ganglion cell pattern 116
optomotor response 75 Parabolic superposition 212–13 Pipunculid fly, female 184
pinhole eye 73–5 Paracheirodon, reflectors 151 Planck, Max 25
Nematobrachion boopis, Pardosa prativaga, Plate 3 Polarization 39–44
double eyes 202 Patella 73 linear 43
Nematoscelis atlantica, double Pattern recognition, jumping natural 41
eyes 202–3 spider 237–9 navigation aid 39–40
Nematoscelis megalops, double Pecten see Scallop eye circular 42–4
eyes 203 Pectunculus 158 Polarization vision 39–44
Neon tetra, reflectors’ 151 Pelargonium Plate 1 Polyphemus 182
Newton, Isaac 24, 32,131 Perga, larval ocelli 125 –6 Pontella, ventral eye 90 –1
Newton’s series, colours 143 Periwinkle 73 Portia fimbriata 120–1, 123
Nodal point 49–50, 97–8 Phacops 189 Portunus 176
Notodromas, reflector Phalanoides tristifica, Praying mantis
eyes 137 image 197– 8 acute zone 181
Notonecta 41, 163 Phidippus, eye movements 227
resolution distribution 187 eyes 123 head scanning 234
Nystagmus 221, 223 scanning 239 Precambrian 4–5
Phoca vitulina 107 Prey capture 180–6
Ocellus 12, 125–8 Phoebus rurina, UV Procavia, pupil shape 112–13
Octopus markings 34 Protula, compound eyes 159
colour blindness 37, 85 Photometric units 30 Pseudopupil 174
eye 84, Plate 1 Photons 25–6, 29 antidromic 175
eye muscles 85 available numbers 27 apposition eyes 173, 174 –6
reflecting cell 146 low numbers 48, 62–5 explanation 175
Oculomotor nuclei statistics and contrast Pterotrachea, spherical lens
221–2 detection 64 –5 eye 82–3
Ocypode, pseudopupil 174 Photonic reflectors 143 Pupil
Odontodactylus Plate 4 Photophores, luminescent 155 diameter and
circular polarization 44 Photopigment 36 –8 resolution 111
colour scanning 223–7 ratios of stimulation 35 form and function 111–13
colour vision 36, 163 Photopigment proteins 6, 8 longitudinal 172
Ogre-faced spider see Photoreceptors 7–8 shapes in vertebrates 112
Dinopis absorption by 60 slit 112–13
Ommatidium 161–4 ciliary 7–8, 20 superposition 200
Onitis aygulus 197 directional 11 Pursuit behaviour 176,
Onitis belial 197 microvillar 7–8, 20 180–5
Onitis westermanni 192 optics 58–60
Opossum, eye structure 106 response time 218, 230 Rabbit, ganglion cell
Opsins see Photopigment rod 38 pattern 116
proteins signals 37 Radiance 29–30
Optical cut-off Photuris sp, superposition Radiometric units 30
frequency 51–52 image 192 Ramp retina 108
Rat eye Scanning eyes Spectral sensitivities 33


Resolution 115 colour, mantis invertebrates, table 36
ganglion cell pattern 116 shrimp 235–7 vertebrates, table 36
Reflectance pattern recognition, Spherical aberration 56 –7,
formulae 145, 148 jumping spiders 79–80, 110
spectral, multilayers 149 237–9 Sphodromantis, scanning
Reflection 130 planktonic predation, sea movements 234
law of 131 snail 235– 6 Spider eyes 120–5
Reflectors see Mirrors speed and resolution 241 Squalus acanthias, tapetum 140
Refractive index gradient diving beetle larvae Squid 76
79–81, 165, 195 238– 40 giant 69, 86
Resolution 47, 49–60, 113–15 Schistocerca gregaria, dorsal Japanese firefly 85
animal eyes, table 51 ocelli 127 mid-water, eyes 83
apposition eye 166–8, 179 Schizochroal eyes 189 Stabilisation reflexes 221–2
and contrast loss 65 Schmidt corrector plate 134 Stalk-eyed fly 225
and eye design 61–2 Scopelarchus 89 Stomatopoda see mantis
limits 48 Scypholanceola, mirror shrimp
loss in motion 228–31 eye 138 Streetsia challengeri,
superposition eye 196– 8 Sea, deep 27, 89 cylindrical eye 186
Retina Sea snail see Oxygyrus Strepsipterans, anomalous
adaptation mechanisms Sea urchin 160 eyes 188–9
68–70, 111, 112–13 Seals, cornea and lens Stylocheiron spp, double
area centralis 115 105, 107 eyes 202
ganglion cell Semi-circular canals 222 Sun, brightness 30
distribution 88, 116 Sensitivity 47, 62–9 Sunshade
one-dimensional, adaptations for 67 hyrax 112
scanning 235–41 animal eyes, table 68 ray Plate 1
organization 84 apposition eye 170–2 reflecting 141
sampling frequency 49–51 calculation for apposition Superposition eye 192– 4
Rhabdom 161–2 and superapposition apposition comparison
Rhabdomeres 162 eyes 199 192, 199
Rhamphomyia tephraea 187 definition 67 double 201–203
Rhodopsin 38; see also increasing 66–7 eye glow 199 –200, Plate 3
Photopigment range 68–9 neural 162–4
Rhopalium 77 spectral 36 parabolic 212–13
Rock-pool fish see Mnierpes superposition eye 198–9 reflecting 208–212
macrocephalus Sepia 2 ray paths 210
Rose-de Vries law 63, 65 Shrimp refracting 194–9
decapod 208 ray paths 193, 195
Sabella, compound eyes 159 mirror box eyes 209 resolution 196–8
Saccade and fixate ray paths 210 sensitivity 198–9
strategy 218–220, 223–4 Simuliid fly 184 Swimming crab see
Salticidae see Jumping spider Size see Eye Macropipus
Sampling frequency 50 Skipper butterfly Sympetrum sp 184
Sandlance, see Limnicthyes image quality 197 Syritta pipiens
fasciatus refractive index acute zone 182–3
Sapphirina 91–2 gradient 195 tracking behaviour 226 –7
Sawfly, larval ocelli 125 Snakes, infra-red
Scallop eye 131–2, Plate 1 wavelengths 32 Tapetum lucidum 124,
image formation 133–4 Snell’s law 25 139–41
image-forming Sparassidae 122 Taxes 216 –17
reflector 131–5 eye, tapeta 140 Tegenaria eyes 120
images 133 Spatial frequency 52 Telescopes, superposition
lens 134 Spatial summation 68 eyes 193–4
Temporal summation 68 Vergence movements 223 Water beetle see


Tenodera australasiae 181 Vertebrate rod, diagram 38 Thermonectus
Thermonectus Vestibulo-ocular reflex Water-flea, colour vision 163
marmoratus 127, 221–2, 223 Water strider see Notonecta
238– 40 Vision Waveguide modes 59, 207,
Thin film 143– 4 motion 229–30 Plate 3
Thomisidae 122 non-directional 10–13, Wavelength 32–7
Thrips 32 216 specific behaviours 38–9
Tiger beetle, larval eyes 126–7 spatial 13 –14 White peacock butterfly see
Trachynotus, reflecting Visual information, Anartia
camouflage 154 human 218–20 Wolf spider, vision 123–4
Tridacna 37, 74 Visual pigments 36, 38
Trilobites, anomalous Visual streaks 88, 116 Xanderella 3
eyes 189 Visual tasks evolution 10–15, Xenos peckii, anomalous
Tripedalia cystophora 77 216–17 eyes 188–9
Tubular eyes 89 Xylocopa tranquebarica 170
Walcott, Charles 2
Uca pugilator 187 Walls, Gordon 219 Yarbus, Alfred 212
Ultraviolet 32, 34, 151 Wasp, orientation flights Young, Thomas, slit
Urania ripheus 226 experiment 25 –6
colours 151 Watasenia scintillans 85
wing-scale 146, Plate 2 Water, light polarization 41 Zizina labradus 206
Plate 1 (a) Four flowers—Hypericum, Anemone, Pelargonium, and Lobelia—whose spectral
reflectance curves are shown in Fig. 2.3b. (b) Eye of a coral cod (Cephalopholis), with an aphakic
space that permits forward vision. See Fig. 4.9. (c) Eye of an octopus, showing the horizontal slit
pupil. See also Fig. 5.11. (d) Eye of a shovel-nosed ray (Aptychotrema rostrata) with an expanded
‘sun-shade’ operculum. (e) Two eyes of a scallop (Pecten), each about 1 mm across. The images of
the light sources can be seen in the eye. See Figs. 6.2–6.4.
Plate 2 (a) Photograph of a single scale from a herring (Clupea harengus) showing the different
colour zones of the reflecting platelets. (Photograph by Eric Denton.) See Fig. 6.17d. (b) The
underside of the hindwing of the Madagascan moth Urania ripheus. The colours result from
constructive interference of light reflected from layers of chitin and air. See Fig. 6.13c. (c) The
golden pupa of the danaid butterfly Euploea core. The quality of the mirror can be judged from
the reflection of the animal’s name on the left hand side. (Photograph by Rudolph Steinbrecht.)
Plate 3 (a) Appearance of a lycosid spider when illuminated from the direction of view. The large
postero-median eyes glow from light reflected from the tapetum (eye diameter 0.49 mm). See
Fig. 5.16. (b) Retina of the postero-median eye of the lycosid spider Pardosa prativaga showing
individual receptors on strips of tapetum. (Ophthalmoscope photograph by David O’Carroll.) See
also Fig. 6.9. (c) Eye of a horsefly (Haematopota pluvialis) with multilayer interference colours.
See Fig. 6.13e. (d) Red and green reflections from the ommatidia of a butterfly Heteronympha
merope. The reflections come from multilayer mirrors at the base of each rhabdom. See Fig. 6.13d.
Waveguide modes originating in the rhabdoms are also visible as lines and dots (Fig. 3.7). (e) Blue
light reflected from the tapetum of the eye of the hummingbird hawk moth (Macroglossum). The
light zone corresponds to the superposition pupil (Fig. 8.6). (Photograph by Justin Marshall.)
(f) Superposition eye of the male mayfly Centroptilum sp. The yellow colour is not tapetal, but
caused by the scattering of long wavelengths by screening pigment in the retina.
Plate 4 (a) Male jumping spider (Habronattus americanus) showing colourful adornments of
the palps and face. See Figs. 5.19 and 9.13. (Photograph by Wayne Maddison.) (b) Mantis shrimp
(Odontodactylus scyllarus), displaying highly coloured appendages. Note the strip through the eye
which contains the colour vision system. See Fig. 9.12. (Photograph by Justin Marshall.)
