
Microprocessors and Microsystems 31 (2007) 408–419

www.elsevier.com/locate/micpro

Bio-inspired optic flow sensors based on FPGA: Application to Micro-Air-Vehicles
F. Aubepart *, N. Franceschini

Biorobotics Laboratory, Movement and Perception Institute, CNRS & University of the Mediterranean, 163 Avenue Luminy,
CP 938, F-13288 Marseille cedex 09, France
Available online 27 February 2007

Abstract
Tomorrow's Micro-Air-Vehicles (MAVs) could be used as scouts in many civil and military missions without any risk to human life. MAVs have to be equipped with sensors of several kinds for stabilization and guidance purposes. Many recent findings have shown, for example, that complex tasks such as 3-D navigation can be performed by insects using optic flow (OF) sensors, although insects' eyes have a rather poor spatial resolution. At our Laboratory, we have been performing electrophysiological, micro-optical, neuroanatomical and behavioral studies for several decades on the housefly's visual system, with a view to understanding the neural principles underlying OF detection and establishing how OF sensors might contribute to performing basic navigational tasks. Based on these studies, we developed a functional model for an Elementary Motion Detector (EMD), which we first transcribed into electronic terms in 1986 and subsequently used onboard several terrestrial and aerial robots. Here we present a Field Programmable Gate Array (FPGA) implementation of an EMD array, which was designed for estimating the OF in various parts of the visual field of a MAV. FPGA technology is particularly suitable for applications of this kind, where a single Integrated Circuit (IC) can receive inputs from several photoreceptors of similar (or different) shapes and sizes located in various parts of the visual field. In addition, the remarkable characteristics of present-day FPGAs (their high clock frequency, large number of system gates, embedded RAM blocks and Intellectual Property (IP) functions, small size, light weight, low cost, etc.) make for the flexible design of a multi-EMD visual system and its installation onboard MAVs with extremely low permissible avionic payloads.
© 2007 Elsevier B.V. All rights reserved.

Keywords: Optic flow sensor; Elementary Motion Detector; Field Programmable Gate Array; Micro-Air-Vehicle; Biorobotics

1. Introduction
One recent trend in the field of Unmanned Air Vehicle
(UAV) and robotic aircraft design has been the development of Micro-Air-Vehicles (MAVs) in the 15 cm size

Abbreviations: ASF, Angular Sensitivity Function; AWHH, Angular Width at Half Height; EMD, Elementary Motion Detector; FOV, Field of View; FPAA, Field Programmable Analog Array; FPGA, Field Programmable Gate Array; IC, Integrated Circuit; IP, Intellectual Property; LUT, Look-Up Table; MAV, Micro-Air-Vehicle; µC, Micro-Controller; OF, optic flow; UAV, Unmanned Air Vehicle; VHDL, Very High speed Integrated Circuit Hardware Description Language; VLSI, Very Large Scale Integration.
* Corresponding author. Tel.: +33 491 28 94 52; fax: +33 491 28 94 03.
E-mail addresses: fabrice.aubepart@univ-cezanne.fr (F. Aubepart),
nicolas.franceschini@univmed.fr (N. Franceschini).
0141-9331/$ - see front matter © 2007 Elsevier B.V. All rights reserved.
doi:10.1016/j.micpro.2007.02.004

range. MAVs could be used as scouts in many dangerous


civil and military missions without any risk to human life,
and they also have many potential industrial applications
such as plant supervision, power line [1] and construction
site inspection, pollution and weather monitoring, forest
fire and disaster control, etc. Missions of this kind require
reactive vehicles equipped with onboard sensors and flight
control systems capable of performing the lowly tasks of attitude stabilization, obstacle sensing and avoidance, terrain following and automatic landing [1,2]. The ability to
perform these tasks would give MAVs some degree of decision-making autonomy, while relieving ground operators
of the arduous task of constantly piloting and guiding a
particularly agile craft that is invisible most of the time.
One lesson we have learned from insects is that they are
able to sense and avoid obstacles and to navigate swiftly

through the most unpredictable environments without any need for sonars or laser range-finders. Insects' visually guided behaviour depends on optic flow (OF) sensing processes. The optic flow perceived by a moving animal, human or robot is a vector field that gives the angular speed (direction in degrees; magnitude in rad/s) at which any contrasting object in the environment is moving past the eye. Measuring this angular speed is not a trivial task. Onboard insects such as the fly, this angular speed is not given directly but is computed locally by a neuron called an Elementary Motion Detector (EMD), which is driven by at least two photoreceptors facing in different directions. The fly's eye has long been known to be equipped with a whole array of these smart sensors, which contribute to assessing the OF [3,4]. The fly's eye is therefore one of the best animal models available for studies on motion detecting neurons [5]. Our EMD model was inspired by the results of studies in which microelectrode recordings were performed while applying microstimulation to individual photoreceptor cells on the fly's retinal mosaics [5,6].
Psychophysical studies on motion detection in humans and neurobiological studies on motion detection in various animals have led to the development of two main kinds of models for directionally selective motion detectors. These models are based on what is known as intensity-based schemes (correlation techniques and gradient methods) and token-matching schemes [7,8]. Our fly-inspired electronic EMDs are of the second kind. We have been using them for 20 years onboard various mobile robots.
Our biorobotic approach consists in building terrestrial and aerial robots [9–13] based on optic flow sensing techniques. The robot-fly ("le robot-mouche") started off as a small, completely autonomous robot equipped with a compound eye and 114 electronic EMDs implemented in analog technology using Surface Mounted Devices (SMD). This robot was able to steer its way through an unknown field full of obstacles at a relatively high speed (50 cm/s) [10,13]. During the last 10 years, we have further used EMDs for the visual guidance of other miniature (mass <1 kg) terrestrial [14] and aerial [15–19] robots called SCANIA, FANIA, OSCAR, OCTAVE and LORA, respectively.
In the latter robots, the EMD principle inspired by the fly's EMDs was initially implemented using conventional analog technologies such as SMD and FPAA. Later on, we turned to digital technology, using a Micro-Controller (µC) [20]. The µC deals with just two photoreceptor inputs and carries out a single task.
Onboard the MAV, visual sensors have to be installed in various parts of the Field of View (FOV), and each of them requires a given number of EMDs. Arrays of adjacent EMDs can be needed in some parts to increase the MAV's guidance accuracy [10] (Fig. 1).
However, neither FPAA nor µC devices provide sufficient resources for carrying out the signal processing tasks which arise when dealing with a whole array or mosaic of EMDs. One solution consists in using Very-Large-Scale-


Fig. 1. Several visual sensors covering various Fields of View for the
guidance of a Micro-Air-Vehicle.

Integrated (VLSI) circuits. Several electronic EMDs, such as those based on the Reichardt correlation sensor [21,22], Franceschini et al.'s velocity sensor [23] or Barrows' design [24], were recently developed in the form of smart VLSI circuits, which include an integrated photoreceptor array forming their front-end. In this design, however, the size, number and physical characteristics of the photoreceptors are fixed once and for all, which means that it may be necessary to obtain a dedicated chip for each specific application or each specific (indoor or outdoor) environment. In addition to this lack of flexibility, another disadvantage of VLSI circuits is the relatively long design process involved: the chips are costly and they cannot be obtained quickly because the silicon brokers' deadlines tend to be rather long. A more flexible solution consists of using a single, off-the-shelf Integrated Circuit (IC) to process the signals from several photoreceptors, which may or may not all have the same physical properties and the same size. We decided to use Field Programmable Gate Array (FPGA) technology in this hybrid approach [25]. The remarkable characteristics of current FPGAs, such as their large number of gates, the use of several simultaneous clock frequencies, the presence of embedded RAM blocks, embedded multipliers and embedded Intellectual Property functions, and their small size and light weight, make it possible to design a flexible multi-EMD system and mount it onboard a MAV with a very low avionic payload [26].
In the next section, we present the bio-inspired visual system and the principles underlying EMD operation. The photodiode configuration and its use with a linear array are explained. Section 3 presents the specific top-down method used for FPGA integration of the EMDs using the Matlab (The MathWorks) and ISE (Xilinx) software programs. In Section 4, details of the design specifications (sampling frequency limits, digital techniques, architecture) are explained. Lastly, in Section 5, we describe the hardware implementation and present the experimental results obtained on a real test bed on which various contrasting patterns were made to cross the visual field of an EMD array at various speeds.


2. Bio-inspired visual system
The principle of the Elementary Motion Detector was originally based on the results of experiments in which a combined electrophysiological and micro-optical approach was used. The activity of large field motion detecting neurons in the housefly's eye was recorded with a microelectrode while applying optical microstimuli to a single pair of photoreceptor cells located behind a single facet [5,27]. Based on the results of these experiments, a principle was drawn up for designing an artificial EMD capable of measuring the angular speed Ω of a contrasting object [28,29].
Our original EMD functional scheme [28,29] consists of five processing steps giving Ω_EMD (Fig. 2):


2.1. Spatial sampling and spatial filtering

The visual systems of advanced creatures include arrays of motion detecting neurons, which compute the relative motion of any contrasting objects that cross their visual fields [27,30]. In the physical model that we have constructed, an array of artificial photosensors is connected to an array of neuromorphic Elementary Motion Detectors. Each single EMD is driven by two neighbouring receptors and can only detect movements occurring within the narrow visual field corresponding to these two photoreceptors. The latter are mounted slightly defocused behind a lens, which creates a bell-shaped Angular Sensitivity Function (ASF) for each of them [31]. The ASF, which is often modeled in the form of a truncated Gaussian curve [11], is characterized by the acceptance angle Δρ, i.e. the Angular Width at Half Height (AWHH). The ASF plays an important role in the visual processing chain, because it serves as an effective low-pass anti-aliasing spatial filter.
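The anti-aliasing role of the overlapping Gaussian ASFs can be sketched numerically. The following is a minimal Python model assuming the Δφ = 1.05° and Δρ = 1.65° values measured in Section 2.3; the function name and normalization are illustrative, not part of the original design:

```python
import math

DELTA_PHI = 1.05   # inter-receptor angle (degrees)
DELTA_RHO = 1.65   # acceptance angle, i.e. AWHH (degrees)

# A Gaussian's full width at half maximum is 2*sqrt(2*ln 2)*sigma, so the
# AWHH fixes the standard deviation of the modeled ASF.
SIGMA = DELTA_RHO / (2.0 * math.sqrt(2.0 * math.log(2.0)))

def asf(angle_deg, axis_deg):
    """Gaussian Angular Sensitivity Function of one photoreceptor whose
    optical axis points at axis_deg (peak sensitivity normalized to 1)."""
    return math.exp(-0.5 * ((angle_deg - axis_deg) / SIGMA) ** 2)

# Two adjacent photoreceptors, axes separated by the inter-receptor angle:
left, right = 0.0, DELTA_PHI

# At half the acceptance angle off-axis, sensitivity is 0.5 by construction:
print(round(asf(DELTA_RHO / 2.0, left), 3))   # 0.5

# Because DELTA_RHO > DELTA_PHI, the two ASFs overlap appreciably at the
# midpoint between the axes; this overlap is what provides the low-pass
# anti-aliasing spatial filtering discussed above.
print(round(asf(DELTA_PHI / 2.0, left), 3),
      round(asf(DELTA_PHI / 2.0, right), 3))
```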
2.2. Elementary Motion Detector (EMD)
In each of the EMDs forming the array, the lens/photoreceptor combination transforms the motion of a contrasting object into two successive photoreceptor signals
separated by a delay Δt:

Δt = Δφ / Ω    (1)

where Δφ is the inter-receptor angle and Ω is the relative angular speed (the optic flow, OF). An electronic device based on some linear and nonlinear functions estimates the angular speed Ω_EMD:

Ω_EMD = K · e^(−Δt/τ) ≈ K′ · Δφ/Δt = K′ · Ω    (2)
1. A first-order high-pass temporal filter (fc = 20 Hz) produces a transient response whenever a contrasting border crosses the photoreceptors' visual field. This filter enhances the contrast information while eliminating the DC components of the photoreceptor signals. In addition, it makes a distinction between "ON" and "OFF" contrasting edges (i.e., dark-to-light and light-to-dark edges, respectively).
2. A higher order low-pass temporal filter (fc = 30 Hz) attenuates any high frequency noise, as well as any interference brought about by the artificial indoor lighting (100 Hz) used.
3. A hysteresis thresholding step separates "ON" and "OFF" transitions and normalizes the signals in each channel.
4. A time delay circuit is triggered by one channel and stopped by the neighbouring channel. This function measures the delay time Δt elapsing between similar ("ON" or "OFF") transitions occurring in two adjacent photoreceptors.
5. A converter translates the measured delay Δt into a monotonic function that approximates the angular speed Ω_EMD. A simple inverse exponential function makes for a relatively large dynamic range (Eq. (2)).
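The five steps above can be sketched as a minimal Python simulation. This is deliberately simplified: first-order filters only, a plain threshold in place of the hysteresis comparator, and K = 1 in Eq. (2); all signal values and thresholds are illustrative:

```python
import math

FS = 2500.0                 # sampling frequency (Hz), the nominal value
DT = 1.0 / FS
TAU = 0.030                 # time constant of the inverse-exponential converter (s)

def highpass(x, fc=20.0):
    """Step 1: first-order high-pass -- transient response to contrast edges."""
    a = 1.0 / (1.0 + 2.0 * math.pi * fc * DT)
    y, prev_x, prev_y = [], 0.0, 0.0
    for v in x:
        prev_y = a * (prev_y + v - prev_x)
        prev_x = v
        y.append(prev_y)
    return y

def lowpass(x, fc=30.0):
    """Step 2: low-pass noise attenuation (a first-order stand-in for the
    higher-order filter of the actual design)."""
    a = (2.0 * math.pi * fc * DT) / (1.0 + 2.0 * math.pi * fc * DT)
    y, prev = [], 0.0
    for v in x:
        prev += a * (v - prev)
        y.append(prev)
    return y

def first_crossing(x, threshold=0.05):
    """Step 3 (simplified): index of the first sample exceeding the threshold."""
    for i, v in enumerate(x):
        if v > threshold:
            return i
    return None

def emd_speed(ch1, ch2):
    """Steps 4-5: measure the delay between matching transitions on two
    adjacent channels, then convert it into an angular-speed estimate."""
    t1 = first_crossing(lowpass(highpass(ch1)))
    t2 = first_crossing(lowpass(highpass(ch2)))
    delay = (t2 - t1) * DT
    return math.exp(-delay / TAU)   # monotonic proxy for the OF (Eq. (2), K = 1)

# A contrast step crossing the two photoreceptors with two different delays:
def step(n0):
    return [0.0] * n0 + [1.0] * (250 - n0)

out_slow = emd_speed(step(50), step(50 + 13))   # ~5.2 ms delay
out_fast = emd_speed(step(50), step(50 + 4))    # ~1.6 ms delay
print(out_fast > out_slow)   # faster motion -> shorter delay -> larger output
```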

2.3. Photoreceptor configuration

In an embedded system such as a Micro-Air-Vehicle, which has to be as lightweight as possible, the number of components, the Printed Circuit Board (PCB), and the size and mass of all the electronic devices are crucial parameters. Yet some functions, such as the current-voltage converter, the gain control, the anti-aliasing filter, the Analog-to-Digital Converter (ADC) and the analog or digital multiplexer, can be advantageously implemented outside the FPGA when using a photoreceptor array.
A configuration involving photodiodes in the current-integrator mode was used (Fig. 3) [32]. The photodiode model is approximately equivalent to a current generator IG in parallel with a junction capacitance CJ. The current
Fig. 2. Functional scheme of the Elementary Motion Detector (EMD) principle (adapted from [20,28,29]).


TINT. A refresh pulse lasting a few nanoseconds closes the analog switch K, thus temporarily dumping the charges of the photodiode junction capacitance, while starting the down-counting process. The comparator output stops the down-counter when the voltage at the anode reaches Vth. The down-counter bit number N is defined by:

TINT ≤ 2^N · TDC ≤ TS    (3)
Fig. 3. Photoreceptor in the Current-Integrator mode.

generated is the sum of the signal current (which depends on the number of photons detected), the leakage current and the noise current. The photodiode junction capacitance depends on the depth of the depletion layer and on the reverse bias voltage (CJ ≈ 5 pF at 3.3 V in the case of the 12-photodiode linear array with the reference code Centronic LD12A-5T). The load capacitance caused by the circuit routing and by the comparator input has to be taken into account to obtain the brightness sensitivity. The value of the threshold voltage Vth relative to the photodiode anode voltage can be used to adjust the sensitivity so as to be able to cope with the widest possible illuminance range. A Digital-to-Analog Converter (DAC), managed via an I2C bus from the FPGA, delivers a controlled DC voltage Vth of around 1 V, for example.
The digital data corresponding to the photoreceptor signal originating from each channel is obtained from the output delivered by a down-counter integrated into the FPGA
(where high intensities correspond to high digital values).
These data are recovered at the end of an Integration Time

where TDC is the down-counter clock period and TS is the


sampling time. Since each photoreceptor drives its own
down-counter, we took N = 12 bits, thus keeping the
down-counter size to a minimum in the FPGA. In addition, the sampling time TS was taken to be equal to the
integration time TINT with a view to obtaining the optimum sampling frequency.
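Under these choices (N = 12 bits and TS = TINT = 1/fS), Eq. (3) pins down the admissible down-counter clock. A quick numerical check, with the clock figure being a derived illustration rather than a value quoted in the text:

```python
# Sizing the photoreceptor down-counter from Eq. (3): T_INT <= 2**N * T_DC <= T_S,
# with N = 12 bits and T_S = T_INT = 1/f_S as stated in the text.

F_S = 2500.0            # sampling frequency (Hz)
N = 12                  # down-counter bit number
T_S = 1.0 / F_S         # sampling time (s), equal to the integration time here

# Largest admissible down-counter clock period (slowest clock) that still
# spans the full integration time with 2**N counts:
t_dc_max = T_S / 2 ** N
f_dc_min = 1.0 / t_dc_max

print(round(t_dc_max * 1e9, 2))   # ~97.66 ns
print(round(f_dc_min / 1e6, 2))   # ~10.24 MHz minimum down-counter clock
```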
By using the photodiode array in the current-integrator
mode, we limit the number of ancillary components on the
PCB (each photodiode corresponds to just one switch and
one comparator). Moreover, we avoid the problems arising
with commercial CMOS cameras, where timing constraints
are imposed on both the photosensor and the FPGA.
Although CMOS cameras are available nowadays in small,
inexpensive packages, they do not have high frame rates
because they still need to scan every pixel internally [25,33].
Fig. 4 shows the Angular Sensitivity Function (ASF)
obtained from two neighbouring photodiodes equipped
with a lens with a focal length of 30 mm. The ASF of the
lens/photoreceptor system was determined by moving a
point light source across the FOV of two neighbouring
photoreceptors while measuring their relative output voltages with a Digital-to-Analog Converter (DAC) in the current-integrator mode. Defocusing the lens by +2.25 mm

Fig. 4. The Angular Sensitivity Functions (ASF) of two adjacent photoreceptors. The inter-receptor angle is Δφ = 1.05°, and the acceptance angle (Angular Width at Half Height: AWHH) is 1.65°.


yielded appropriate Gaussian ASFs with Δφ ≈ 1.05° and Δρ ≈ 1.65°.
3. Top-down methodology
The digital signal processing methods used and the linked framework required a top-down methodology adapted to the hardware implementation of Elementary Motion Detectors (Fig. 5). This methodology simplifies the integration problems by using detailed descriptions [34,35].
First, the functional approach involves dividing the system into elementary functional blocks. In our case, these functions are defined by the functional scheme presented in Fig. 2, to which the lens/photoreceptor model has been added. This approach can be described at two levels, in terms of the system model, which validates the principle of the visual sensor (lens/photoreceptors/EMD), and the high level behavioural model, which defines the computation sequences and timing data (sampling frequency, etc.).
With each function, the operating approach identifies the data type and the procedures used in the algorithm. At this stage, the algorithm implementation model takes into account some factors, such as the binary format, in order to optimize the digital calculations.
The logical approach introduces hardware constraints on the blocks that are to be integrated. These are studied in the architectural model, which defines one or several implementation architectures that comply with the optimized digital algorithm. During this stage, the processing time required by the algorithm is analyzed. Also, the architecture can be optimized as far as the hardware parameters (the number of basic logic cells) are concerned. Finally, a suitable compromise must be made between the processing speed and the hardware functions.
The integration stage involves the physical implementation in the FPGA. Two description levels are applied here: that of the logical model (RTL model) and that of the electrical model. The logical model describes the architecture as a netlist of interconnected basic logic cells after logic synthesis. The electrical model is a low level hardware description obtained after the placing and routing of cells in the FPGA. In the framework of this approach, a digital simulation deals with the electrical and timing problems caused by the physical implementation.
At the end, a file is set up for the hardware configuration of the FPGA, which will be used to perform tests in a physical environment. The Matlab software program was used to study the functional approach and operating approach models presented above. However, this software is not suitable for use in the integration stage. We therefore chose the ISE platform of Xilinx CAD tools for this purpose.
The logic approach stage was validated using stimuli
obtained from the Matlab environment and the graphical
facilities provided by this software were used to plot the
output signals. The low level hardware descriptions were
simulated only with digital stimuli.
4. EMD implementation
4.1. Sampling frequency
The system model was easy to develop because the various functional blocks of an EMD were defined twenty years ago [28,29]. With the high level behavioural model, the sampling frequency has to be carefully chosen because several parameters in the EMD design, such as the digital filter coefficients and the number of possible EMD channels that can be integrated into the FPGA, will depend on the sampling time.
In aerial robotic applications, the sampling time must comply with the requirements imposed on the EMD so that the Micro-Air-Vehicle (MAV) can be controlled throughout its flight envelope. The maximum sampling time TSMAX will depend on the minimum delay Δtmin encountered by the MAV's EMDs during the fastest maneuvers in the most critical applications. One example of a fast maneuver is automatic terrain-following, which is performed by measuring the optic flow in the downward direction [10,11,15–17]. When an eye-bearing MAV is flying in pure translation at speed vx and height h over an unknown terrain, the image of the terrain underneath slips at an angular speed Ω that depends on both vx and h:

Fig. 5. Top-down method of designing and simulating the EMD principle with a view to implementing it in FPGA.

Ω = vx / h    (4)

If we take an extreme case where the MAV is allowed to fly at the minimum height h = 0.5 m at the maximum speed vx = 10 m/s, Eqs. (1) and (4) show that the downward-oriented EMD onboard the MAV, with its inter-receptor angle Δφ = 1.05° (Fig. 4), will be subject to a minimum delay Δtmin ≈ 0.9 ms. Accordingly, the sampling frequency fSmin will have to be set at values of at least 1 kHz.
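This worst case can be checked numerically from Eqs. (1) and (4):

```python
import math

# Worst-case delay seen by the downward-looking EMD (Eqs. (1) and (4)):
# OMEGA = v_x / h, and Delta_t = Delta_phi / OMEGA.
V_X = 10.0                       # maximum forward speed (m/s)
H = 0.5                          # minimum height (m)
DELTA_PHI = math.radians(1.05)   # inter-receptor angle (rad)

omega = V_X / H                  # optic flow (rad/s): 20 rad/s here
dt_min = DELTA_PHI / omega       # minimum delay between the two channels

print(round(dt_min * 1e3, 2))    # ~0.92 ms
print(round(1.0 / dt_min))       # ~1091 Hz: the sampling rate must exceed this
```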
The maximum sampling frequency fSMAX must be in keeping with the timing specifications to which the lens/photoreceptor devices are subject, especially in the case of CMOS sensors equipped with digital outputs [22] or photoreceptors using the current-integrator mode [23]. On the other hand, the maximum sampling frequency fSMAX is limited by the lower end of the illuminance range over which the sensor is intended to operate, because at low illuminance levels the integration of the photoreceptor signal takes a relatively long time and the sampling procedure has to wait for this integration process to be completed (Eq. (3)). Taking the range [100–2000 lux] to be a reasonable working illuminance range for the MAV, this gives fS = 2.5 kHz, which we call the nominal sampling frequency. At twice this sampling frequency (5 kHz), the MAV would still operate efficiently in the [200–2000 lux] range, but it would then be difficult for it to detect low contrasts under artificial indoor lighting conditions (see Section 5).
4.2. Digital specifications

The digital specifications were defined during the filter design. Due to the low values of the high-pass and low-pass filter corner frequencies (fCHP = 20 Hz, fCLP = 30 Hz) in comparison with the sampling frequencies (fS = 2.5 kHz or 5 kHz), it was not possible to obtain a digital band-pass filter meeting the Bode specifications. The high-pass filter section and low-pass filter section were therefore designed separately and cascaded.
Infinite Impulse Response (IIR) filters were synthesized (see Eq. (5) below) because they require far fewer coefficients than Finite Impulse Response (FIR) filters, given the low cut-off frequencies and short sampling times involved:
y_n = Σ (i=1 to n) b_i · x_i − Σ (i=1 to n−1) a_i · y_i    (5)

A Direct-Form II structure was used because this structure reduces the number of delay cells and decreases the quantization errors. Ripples in the low-pass filter temporal response were prevented by using a 4th-order Butterworth filter, the phase of which was linearized over the frequency range of interest. The filters require 17 coefficients in all (4 coefficients for the 1st-order high-pass section, 12 for the 4th-order low-pass section, and 1 for the adjustment between the two filters). Three Direct-Form II filters suffice in fact to perform all the filtering, including that carried out by the two cascaded 2nd-order low-pass filters.
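As an illustration of the high-pass section, a first-order digital high-pass can be derived by bilinear transform with frequency pre-warping. The paper does not state its exact design procedure, so the derivation below is an assumption; only the 4-coefficient count and the corner frequency are taken from the text:

```python
import math, cmath

F_S = 2500.0    # sampling frequency (Hz)
F_C = 20.0      # high-pass corner frequency (Hz)

# Bilinear transform of the analog first-order high-pass H(s) = s / (s + wc),
# with frequency pre-warping so the -3 dB point lands exactly at F_C.
wc = 2.0 * F_S * math.tan(math.pi * F_C / F_S)    # pre-warped corner (rad/s)
k = 2.0 * F_S
b0 = k / (k + wc)
b1 = -b0
a1 = (wc - k) / (k + wc)

# Four numbers define the section (b0, b1, a0 = 1, a1), consistent with the
# "4 coefficients for the 1st-order high-pass section" quoted in the text:
coeffs = [b0, b1, 1.0, a1]

def gain(f):
    """Magnitude response of the digital section at frequency f (Hz)."""
    z = cmath.exp(2j * math.pi * f / F_S)
    return abs((b0 + b1 / z) / (1.0 + a1 / z))

print(round(gain(0.0), 6))   # 0.0 -> DC is fully rejected
print(round(gain(F_C), 3))   # ~0.707 -> -3 dB at the corner frequency
```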


A specific binary format was developed and used to prevent offset and stabilization problems. A two's-complement fixed-point binary format, denoted [s, mI, mD], was defined. The bit numbers of the integer part, mI, and the decimal part, mD, were defined so as to ensure maximum accuracy and to eliminate overflow from the filter calculations. Based on the results of a study carried out with the Filter Design and Analysis and Fixed-Point Blockset tools from The MathWorks, 6 bits were selected for the integer part mI and 29 bits for the decimal part mD. The large mD bit number is due to the low values required to make the coefficients of the low-pass filter section comply with a Bode template characterized by a low cut-off frequency at high sampling frequencies.
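A minimal model of the [s, mI, mD] format illustrates the resolution and range it provides; the `quantize` helper is illustrative, not the actual hardware rounding:

```python
# Model of the [s, mI, mD] two's-complement fixed-point format used for the
# filter arithmetic: 1 sign bit, mI = 6 integer bits, mD = 29 fractional bits
# (36 bits in all).

M_I, M_D = 6, 29
SCALE = 1 << M_D
LO = -(1 << (M_I + M_D))         # most negative representable code
HI = (1 << (M_I + M_D)) - 1      # most positive representable code

def quantize(x):
    """Round a real value to the nearest representable code, saturating on
    overflow, and return the value that code encodes."""
    code = max(LO, min(HI, round(x * SCALE)))
    return code / SCALE

# Resolution: one least-significant bit is 2**-29, fine enough for the very
# small low-pass coefficients mentioned in the text.
print(2.0 ** -M_D)               # ~1.86e-09

# Dynamic range of the integer part: roughly [-64, +64).
print(quantize(100.0))           # saturates just below 64
print(quantize(0.1))             # ~0.1, quantization error below one LSB
```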
Other digital specifications were defined as regards (i) the bit number of the counter output giving the delay time Δt, and (ii) the inverse exponential function giving the angular speed Ω_EMD. The delay time Δt is measured in terms of a count number at a given clock period. The minimum delay to be measured determines the minimum clock period (200 µs for fS = 5 kHz, or 400 µs for fS = 2.5 kHz). The maximum delay to be measured is taken to be 100 ms, which is compatible with the wide range of angular speed values encountered by the MAV (Eq. (4)): Ω ≈ 10°/s to 5000°/s, for Δφ = 1.05°. Using a 9-bit counter at fS = 5 kHz or an 8-bit counter at fS = 2.5 kHz gives a maximum measurable delay of 102.4 ms.
The measured angular speed Ω is a hyperbolic function of Δt (Eq. (1)), but we used a function that decreases more slowly: an inverse exponential function with a time constant τ = 30 ms. A Look-Up Table (LUT) was used to convert the delay Δt into an output that decreases monotonically (exponentially) with the delay and therefore approximately reflects the angular speed Ω_EMD (Eq. (2)). The Look-Up Table features an 8-bit input resolution (at fS = 2.5 kHz), or a 9-bit input resolution (at fS = 5 kHz), and a 12-bit output resolution for memorizing the results of the conversion.
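The LUT contents for the fS = 2.5 kHz case can be generated directly from the inverse exponential. This is a sketch; the actual table values are not given in the paper:

```python
import math

# Delay-to-speed Look-Up Table: an 8-bit count of 400-us clock periods
# (f_S = 2.5 kHz case) is mapped through exp(-Delta_t / tau), tau = 30 ms,
# onto a 12-bit output.

TAU = 0.030                 # time constant (s)
T_CLK = 400e-6              # counter clock period at f_S = 2.5 kHz
N_IN, N_OUT = 8, 12         # input and output resolutions (bits)
FULL_SCALE = (1 << N_OUT) - 1

lut = [round(FULL_SCALE * math.exp(-(count * T_CLK) / TAU))
       for count in range(1 << N_IN)]

# The table spans delays up to (2**8 - 1) * 400 us = 102 ms, matching the
# ~102.4 ms maximum delay quoted in the text.
print(round((len(lut) - 1) * T_CLK, 4))   # 0.102
print(lut[0])                             # 4095 at zero delay
```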
An algorithm implementation model using the Matlab language was used to check the digital specifications. Fig. 6 presents the results of simulations carried out with this model when the EMD, with its field of view (FOV as defined in Fig. 4) oriented vertically downwards, was travelling horizontally at a constant speed of 2 m/s above a gently rising terrain (Fig. 6a) covered with a randomly contrasting texture (Fig. 6b). The final curve (Fig. 6i) is a plot of the estimated angular speed Ω_EMD, which is reminiscent of the hilly relief shown in Fig. 6a.
Fig. 7 gives a magnification of the signals from two adjacent photoreceptors and their filter outputs between distances 0.8 and 2 m. The "ON" dark-to-light edges and "OFF" light-to-dark edges are highlighted.
4.3. Architecture
Fig. 8 shows the system architecture described by the
architecture model and designed for the processing of each
EMD. This architecture has several important features,


Fig. 6. Simulation results of the EMD algorithm implementation (fS = 2.5 kHz, vx = 2 m/s, h = 1 m, 36-bit fixed-point binary format). (a) Shallow relief (from 0 to 0.6 m), (b) one-dimensional ground texture consisting of randomly distributed, variously contrasting rectangles, (c) and (d) output signals from two adjacent photoreceptors, (e) and (f) band-pass filtered outputs, (g) and (h) outputs from the hysteresis comparators, (i) relative angular speed Ω_EMD estimated by the EMD facing vertically downwards over the terrain shown in (a) while translating at a constant speed.

Fig. 7. Zoom on the signals from two adjacent photoreceptors (see Fig. 6c and d) and their band-pass filtered versions (see Fig. 6e and f).

such as the optimization of the digital filters, the simplicity of its design thanks to the use of Intellectual Property (IP) cores, and the flexibility of the circuit design. Special care was taken to restrict the space taken up by the digital filter implementation, in order to maximize the possible number of EMDs in the FPGA.


Fig. 8. Elementary Motion Detector architecture.

Fig. 9. Filter Compute Unit architecture.

A single structure called the Filter Compute Unit was developed, with which high speed sequential processing can be performed, as shown in Fig. 9. This unit consists of just one multiplier, one adder, one Read Only Memory (ROM), one Random Access Memory (RAM), two multiplexers, three registers and two binary transformation functions. These components are Xilinx Intellectual Property (IP) blocks or synthesizable VHDL (Very High speed Integrated Circuit Hardware Description Language) descriptions. The ROM contains the 17 filter coefficients obtained at a sampling frequency of fS = 2.5 kHz or fS = 5 kHz. These coefficients can be quickly and easily changed using CAD tools (Xilinx ISE) as required, in order to run tests at other sampling frequencies. The RAM is used to store the intermediate values computed. The multiplexers minimize the number of operators.
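The time-shared multiply-accumulate idea behind the Filter Compute Unit can be sketched behaviourally. The coefficients are placeholders and the cycle labels are illustrative, not the actual RTL schedule:

```python
# Behavioural sketch: one multiplier and one adder, time-shared over the
# coefficients of a Direct-Form II second-order section, with the
# coefficients in a "ROM" list and the two delay cells in a "RAM" list.

ROM = {"b": [0.2, 0.4, 0.2], "a": [1.0, -0.3, 0.1]}   # placeholder coefficients
RAM = [0.0, 0.0]                                       # w[n-1], w[n-2]

def df2_step(x):
    """One sampling period: every product below is one pass through the
    shared multiply-accumulate operator."""
    acc = x                                 # feedback accumulation first
    acc -= ROM["a"][1] * RAM[0]             # MAC cycle 1
    acc -= ROM["a"][2] * RAM[1]             # MAC cycle 2
    w = acc
    acc = ROM["b"][0] * w                   # MAC cycle 3 (feed-forward)
    acc += ROM["b"][1] * RAM[0]             # MAC cycle 4
    acc += ROM["b"][2] * RAM[1]             # MAC cycle 5
    RAM[1], RAM[0] = RAM[0], w              # shift the delay line
    return acc

# Unit-step response settles to the DC gain sum(b)/sum(a) = 0.8/0.8 = 1:
out = [df2_step(1.0) for _ in range(200)]
print(round(out[-1], 6))   # -> 1.0
```

Scaling this scheme up, one such unit serves all the filter sections of an EMD channel in sequence, which is what keeps the per-channel resource count low.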
In this unit, each photoreceptor signal is processed during the sampling time TS. When the processing has been completed, the digital values are memorized in a register and the intervals between the excitation of two neighbouring photoreceptor channels start to be measured. A hysteresis comparator determines the instant at which the band-pass filtered signal from each photoreceptor channel reaches the threshold value. The resulting logical signal is used to trigger the measurement of the delays Δt in question. The logical signal delivered by channel i starts a counter which is stopped by the logical signal delivered by the neighbouring channel i + 1.
One of the most interesting facts learned from our studies, in which electrophysiological analysis of the fly EMD was combined with single photoreceptor microstimulation, has been that the motions of "ON" and "OFF" contrasting edges are detected separately by the nervous system and measured by two separate neural circuits operating in parallel [27]. These "ON" and "OFF" signals are not necessarily redundant and actually improve the refresh rate of motion detection while alleviating the correspondence problem. Each EMD channel was therefore split into two parallel channels, one devoted to measuring the motion of "ON" contrasting edges, and the other to measuring the motion of "OFF" contrasting edges. This required multiplying the number of comparators and counters by two, as shown in Fig. 8.
Useful information about the delay Δt, and hence about the angular speed Ω, can be obtained by suitably merging the count data corresponding to the various intervals


measured. The data fusion procedure consisted here in increasing the measurement accuracy by arithmetically averaging the 14 "ON" and "OFF" channel intervals measured by a linear array of 7 EMDs (8 neighbouring photoreceptors) covering a total visual field of 8 × 1.05° = 8.4°.
The final piece of the architecture is the inverse exponential function with a time constant τ = 30 ms, which allows for a wide range of delays ranging up to 102.4 ms. This component was implemented in the form of a Look-Up Table (IP block in the FPGA) that was used to convert the fused delay data into an estimated angular speed Ω_EMD.
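The fusion step can be sketched as follows; the delay values are invented for illustration, and only the 14-interval averaging and the τ = 30 ms conversion come from the text:

```python
import math

# Sketch of the delay-fusion step: the 7 EMDs of the linear array each
# provide an "ON" and an "OFF" delay measurement, and the 14 values are
# arithmetically averaged before the inverse-exponential conversion.

TAU = 0.030   # time constant of the inverse-exponential converter (s)

on_delays  = [5.1e-3, 5.3e-3, 4.9e-3, 5.0e-3, 5.2e-3, 5.1e-3, 5.0e-3]
off_delays = [5.2e-3, 5.0e-3, 5.1e-3, 4.8e-3, 5.3e-3, 5.0e-3, 5.1e-3]

all_delays = on_delays + off_delays
fused = sum(all_delays) / len(all_delays)
omega_emd = math.exp(-fused / TAU)   # Eq. (2) with K = 1

print(len(all_delays))               # 14 intervals, as in the text
print(round(fused * 1e3, 3))         # fused delay in ms
```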
5. Experimental results

5.1. Multi-EMD hardware integration

Among the members of the Virtex2 Xilinx FPGA family, we selected the XC2V250 type for the hardware implementation phase because its small size (12 × 12 mm) and small mass (0.5 g) meet the stringent payload constraints involved in designing Micro-Air-Vehicles, while it provides 250,000 possible system gates for the computation. In addition, the hardware implementation phase was carried out using the convenient ISE Xilinx CAD tools on a dedicated evaluation board, which made the FPGA design relatively quick, easy and flexible.
Table 1 shows the working characteristics of the device obtained after the logical synthesis of a 7-EMD architecture (based on 8 photoreceptor inputs). The processing performed by each EMD is carried out in 163 clock cycles. Final calculations after the placing and routing steps showed that a maximum FPGA frequency as high as 100 MHz would be acceptable. This is compatible with processing 245 EMD channels in the FPGA at a sampling frequency of 2.5 kHz.
Fig. 10 shows the two Printed Circuit Boards (PCBs) which we designed to test the visual sensor. A 12-photoreceptor linear array (Centronic LD12A-5T) was mounted onto a miniature circular PCB (diameter 24 mm, total mass 1.25 g) incorporating the 12 SMD current-integrating amplifiers and a DAC. This circular board was mounted behind a lens (f = 30 mm) and finely shifted axially (by means of a micrometer) to obtain the defocus described in Section 2.3. The second board (measuring 33 × 60 mm and weighing 5.5 g) consists of a Xilinx FPGA, a Read Only Memory for the configuration architecture, and voltage regulators. Power consumption is 150 mW at low illuminance levels (indoor environments with artificial light) and reaches 400 mW at high illuminance levels (sunny outdoor environments).
Table 1
Working characteristics of the XC2V250 device

  Slices             924 out of 1536    (60%)
  Slice flip-flops   988 out of 3072    (32%)
  Four-input LUTs    1563 out of 3072   (50%)
  Bonded IOBs        33 out of 92       (35%)
  BRAMs              4 out of 24        (16%)
  MULT18×18s         9 out of 24        (37%)
  GCLKs              1 out of 16        (6%)

Fig. 10. Photoreceptor linear array board (left) and electronic board (right) with the 12 × 12 mm FPGA.
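The 245-channel figure follows from a simple timing budget check: one 2.5 kHz sampling period contains 100 MHz / 2.5 kHz = 40,000 clock cycles, and each EMD channel takes 163 of them (sequential scheduling of the channels is assumed):

```python
CLOCK_HZ = 100_000_000   # maximum FPGA clock after place-and-route
F_S = 2500               # sampling frequency (Hz)
CYCLES_PER_EMD = 163     # clock cycles to process one EMD channel

# Cycles available between two consecutive samples, and the number of
# EMD channels that fit in that budget if processed back to back.
cycles_per_sample = CLOCK_HZ // F_S
max_emds = cycles_per_sample // CYCLES_PER_EMD
```

This gives 40,000 // 163 = 245 channels, matching the figure quoted in the text.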

5.2. Experimental test-bench


The experimental test-bench used to assess the performance of the experimental eye is shown in Fig. 11. The
eye consists here of a lens (focal length 30 mm) and the
linear photoreceptor array LD12A-5T with its current-integrator electronics. The eye is placed at a distance D = 1 m
from a white wall, in front of which a contrasting pattern (a
strip of black or grey cardboard) is moved. This pattern is
mounted onto the pen-holder arm of an analog plotter
which moves it linearly at a constant speed v0. Data from
each of the comparators in the photoreceptor board
(Fig. 3) are sent to the FPGA.
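For a pattern moving at linear speed v0 at distance D, the angular speed seen near the optical axis is approximately Ω ≈ v0/D (in rad/s). A small helper, our own illustration based on this small-angle approximation (the test-bench geometry is not fully specified here), converts this to °/s:

```python
import math

def angular_speed_deg(v0, d):
    """Approximate angular speed (deg/s) of a pattern moving at linear
    speed v0 (m/s) at distance d (m), for motion near the optical axis:
    omega = v0 / d in rad/s, converted to degrees."""
    return math.degrees(v0 / d)
```

Under this approximation, at D = 1 m an angular speed of 58.2°/s corresponds to a plotter speed v0 of roughly 1.02 m/s.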
Fig. 12 shows the signals delivered by the two photoreceptor channels of an EMD when a dark stripe with contrast m = 0.8 crossed their visual field at an angular speed Ω = 58.2°/s. The contrast m = 0.8 was determined from the relative luminance of the contrasting pattern (I1) and that of the white background (I2), as follows:

m = (I2 − I1) / (I1 + I2)
The apparent noise (amplitude ≈ 60 mV peak-to-peak) in the photodiode outputs (Fig. 12a and b) is caused by the 100-Hz fluctuation due to the artificial light. Noise in the band-pass outputs (Fig. 12c and d) is mainly quantization noise due to the binary adaptation between the digital filter output signal and the 12-bit DAC (36 bits → 12 bits) used to monitor the signals.
Fig. 13 gives the normalized EMD output with respect to the delay time Δt, which was estimated from the various speeds ΩC at which the moving pattern was presented. The following parameters were used in this experimental test: maximum delay range 102.4 ms, sampling frequency fS = 2.5 kHz, contrast m ≈ 0.8, measured illuminance 420 lux (corresponding to the usual illuminance of indoor environments). The curve (solid line) is the theoretical inverse exponential and the circles are the output measurement points at each angular speed ΩC. The slight mismatch errors are mainly due to inaccurate estimates of the actual delays, which were deduced from the linear speed v0 at

F. Aubepart, N. Franceschini / Microprocessors and Microsystems 31 (2007) 408419

417

Fig. 11. Experimental test-bench.

Fig. 12. (a and b) Real signals from the 4th and 5th photoreceptors in the array, corresponding to the down-counter (Fig. 3); (c and d) band-pass filtered outputs; (e, f, g and h) ON and OFF comparator outputs (e and f correspond to the ON and OFF transitions in the 4th photoreceptor; g and h correspond to the ON and OFF transitions in the 5th photoreceptor). The measured interval was Δt = 18.04 ms, which corresponds to the angular speed Ω = 58.2°/s of the moving stripe presented.

which the analog plotter (used to displace the pattern) was driven. In this configuration, the smallest angular speed detected was approximately 8.4°/s and the highest angular speed measured was 82°/s (the speed was limited by the plotter used to move the stripe).
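The delay and the angular speed are linked by Δt = Δφ/Ω, where Δφ is the angle between two neighbouring visual axes. Taking Δφ = 1.05° (consistent with the 8.4° field covered by the 8 photoreceptors), a quick sketch (the helper name and constant are ours, for illustration) reproduces the Δt = 18.04 ms measured at Ω = 58.2°/s in Fig. 12:

```python
DELTA_PHI = 1.05   # inter-receptor angle (deg), from the 8.4 deg field

def delay_ms(omega_deg_s):
    """Expected EMD delay (ms) for an edge crossing two neighbouring
    visual axes at angular speed omega (deg/s): dt = delta_phi / omega."""
    return 1000.0 * DELTA_PHI / omega_deg_s
```

At Ω = 58.2°/s this gives Δt ≈ 18.04 ms, the interval reported in the Fig. 12 caption.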

Similar experiments were carried out with various grey-scale patterns to test the robustness of the digital EMD to contrast. Table 2 indicates the minimum angular speed detected by the digital EMD with 3 contrast values at a mean illuminance of 420 lux, at the two sampling

418

F. Aubepart, N. Franceschini / Microprocessors and Microsystems 31 (2007) 408419

Fig. 13. Normalized EMD output versus delay Dt (circles). This delay is equal to the time taken by an edge to cross the visual axes of two neighbouring
photoreceptors. Dt was inferred from the speed of the analog plotter arm moving the dark pattern (Fig. 11). The continuous curve shows that there was a
good match with the inverse exponential curve.

Table 2
Minimum angular speeds measured, depending on the contrast m and the sampling frequency fS (illuminance = 420 lux)

            fS = 2.5 kHz   fS = 5 kHz
  m = 18%   40.8°/s        No detection
  m = 50%   15.2°/s        32.5°/s
  m = 80%   8.4°/s         24°/s

frequencies selected: fS = 2.5 kHz and fS = 5 kHz. As can be seen from this table, the motion of highly contrasting patterns is measurable down to lower speeds than the motion of slightly contrasting patterns. This table also shows that reducing the sampling frequency to 2.5 kHz helps to detect the motion of low-contrast objects.
6. Conclusion

In this paper, we presented a bio-inspired visual system based on optic flow (OF) measurements. The OF sensors (Elementary Motion Detectors) are implemented in a miniature FPGA, which makes them small enough to be embedded into miniature autonomous systems such as Micro-Air-Vehicles, where they can be used for obstacle avoidance, terrain following and stabilization purposes.

A top-down methodology was used to determine how the EMDs would function at each stage of development and to integrate and optimize the architecture. Our specific EMD architecture is integrated into a Virtex2 Xilinx FPGA (XC2V250), which is only 12 × 12 mm in size, while featuring no less than 250,000 system gates. This device was tested successfully using an experimental electro-optical test-bench. Using this same FPGA at a 100 MHz clock frequency, it would be possible to implement up to 245 Elementary Motion Detectors on a less-than-one-gram piece of integrated digital electronics that requires only a few external components.

The maximum sampling frequency of 5 kHz makes it possible to operate in a relatively large illuminance range. The most suitable sampling frequency was found to be 2.5 kHz when the Centronic LD12A-5T photoreceptor linear array was used with a lens with a focal length of 30 mm. These high sampling frequencies are compatible with the fast dynamics of Micro-Air-Vehicles.

The FPGA solution is highly versatile, as it can accommodate photoreceptor arrays of various sizes and shapes associated with lenses of various focal lengths covering various fields of view, in much the same way as the eyes of many arthropods are able to do.
References

[1] M. Williams, D.I. Jones, G.K. Earp, Obstacle avoidance during aerial inspection of power lines, Aircraft Engineering and Aerospace Technology 73 (5) (2001) 472–479.
[2] V.H.L. Cheng, B. Sridhar, Technologies for automating rotorcraft nap-of-the-earth flight, Journal of the American Helicopter Society (1993) 78–87.
[3] W. Reichardt, T. Poggio, Visual control of orientation behavior in the fly, Quarterly Reviews of Biophysics 9 (3) (1976) 311–375.
[4] K. Hausen, The lobula complex of the fly: structure, function and significance in visual behaviour, in: M.A. Ali (Ed.), Photoreception and Vision in Invertebrates, New York, 1984, pp. 523–559.
[5] N. Franceschini, Early processing of color and motion in a mosaic visual system, Neuroscience Research (Suppl. 2) (1985) 17–49.
[6] N. Franceschini, Combined optical, neuroanatomical, electrophysiological and behavioural studies on signal processing in the fly compound eye, in: C. Taddei-Ferretti (Ed.), Biocybernetics of Vision: Integrative Mechanisms and Cognitive Processes, World Scientific, London, pp. 341–361.
[7] S. Ullman, Analysis of visual motion by biological and computer systems, IEEE Computer 14 (1981) 57–67.
[8] S.S. Beauchemin, J. Barron, The computation of optical flow, ACM Computing Surveys 27 (3) (1995) 433–467.
[9] J.-M. Pichon, C. Blanes, N. Franceschini, Visual guidance of a mobile robot equipped with a network of self-motion sensors, in: W. Wolfe, W. Chun (Eds.), Mobile Robots, SPIE, vol. 1195, Bellingham, USA, 1989, pp. 44–53.
[10] N. Franceschini, J.-M. Pichon, C. Blanes, From insect vision to robot vision, Philosophical Transactions of the Royal Society of London B (337) (1992) 283–294.
[11] T. Netter, N. Franceschini, A robotic aircraft that follows terrain using a neuromorphic eye, in: Proceedings of the IEEE International Conference on Intelligent Robots and Systems (IROS), Lausanne, Switzerland, 2002, pp. 129–134.
[12] F. Ruffier, S. Viollet, N. Franceschini, Visual control of two aerial micro-robots by insect-based autopilots, Advanced Robotics 18 (8) (2004) 771–786.
[13] N. Franceschini, J.-M. Pichon, C. Blanes, Bionics of visuomotor control, in: T. Gomi (Ed.), Evolutionary Robotics: From Intelligent Robots to Artificial Life, AAAI Books, Ottawa, 1997, pp. 49–67.
[14] F. Mura, N. Franceschini, Obstacle avoidance in a terrestrial mobile robot provided with a scanning retina, in: M. Aoki, I. Masaki (Eds.), Intelligent Vehicles, vol. II, MIT Press, Cambridge, 1996, pp. 47–52.
[15] T. Netter, N. Franceschini, Neuromorphic optical flow sensing for nap-of-the-earth flight, in: Proceedings of the SPIE Conference on Mobile Robots XIV, vol. 3838, Boston, USA, 1999, pp. 208–216.
[16] F. Ruffier, N. Franceschini, OCTAVE, a bio-inspired visuo-motor control system for the guidance of Micro-Air-Vehicles, in: Proceedings of the SPIE Conference on Bioengineered and Bioinspired Systems, vol. 5119, Bellingham, USA, 2003, pp. 1–12.
[17] F. Ruffier, N. Franceschini, Optic flow regulation: the key to aircraft automatic guidance, Journal of Robotics and Autonomous Systems (50) (2005) 177–194.
[18] S. Viollet, N. Franceschini, Aerial minirobot that stabilizes and tracks with a bio-inspired visual scanning sensor, in: B. Webb, T. Consi (Eds.), Biorobotics, MIT Press, Cambridge, 2001, pp. 67–83.
[19] J. Serres, F. Ruffier, N. Franceschini, Two optic flow regulators for speed control and obstacle avoidance, in: Proceedings of the First IEEE International Conference on Biomedical Robotics and Biomechatronics (BioRob), Pisa, Italy, 2006, pp. 750–757.
[20] F. Ruffier, S. Viollet, S. Amic, N. Franceschini, Bio-inspired optical flow circuits for the visual guidance of Micro-Air-Vehicles, in: Proceedings of the IEEE International Symposium on Circuits and Systems, vol. III, Bangkok, Thailand, 2003, pp. 846–849.
[21] R.R. Harrison, C. Koch, A robust analog VLSI motion sensor based on the visual system of the fly, Autonomous Robots 7 (3) (1999) 211–224.
[22] S.C. Liu, A. Usseglio-Viretta, Fly-like visuomotor responses of a robot using aVLSI motion-sensitive chips, Biological Cybernetics 85 (6) (2001) 449–457.
[23] J. Kramer, R. Sarpeshkar, C. Koch, Pulse-based analog VLSI velocity sensors, IEEE Transactions on Circuits and Systems II (44) (1997) 86–101.
[24] G. Barrows, C. Neely, Mixed-mode VLSI optic flow sensors for in-flight control of a Micro Air Vehicle, Proceedings of SPIE 4109 (2000) 52–63.
[25] H. Yamada, T. Tominaga, M. Ichikawa, An autonomous flying object navigated by real-time optical flow and visual target detection, in: Proceedings of the IEEE International Conference on Field-Programmable Technology, Tokyo, 2003.
[26] D.S. Katz, R.R. Some, NASA advances robotic exploration, IEEE Computer 36 (1) (2003) 52–61.
[27] N. Franceschini, A. Riehle, A. Le Nestour, Directionally selective motion detection by insect neurons, in: D.G. Stavenga, R.C. Hardie (Eds.), Facets of Vision, Springer, Berlin, 1989, pp. 360–390.
[28] N. Franceschini, C. Blanes, L. Oufar, Passive non-contact optical velocity sensor, Dossier ANVAR/DVAR Nb 51,549, Paris, 1986.
[29] C. Blanes, Appareil visuel élémentaire pour la navigation à vue d'un robot mobile autonome, DEA thesis (in French), Univ. Aix-Marseille, 1986.
[30] K. Hausen, M. Egelhaaf, Neural mechanisms of visual course control in insects, in: Facets of Vision, Springer, Berlin, 1989, pp. 391–424.
[31] R.C. Hardie, Functional organisation of the fly retina, in: D. Ottoson (Ed.), Progress in Sensory Physiology, vol. 5, Berlin, pp. 1–79.
[32] F. Aubépart, M. El Farji, N. Franceschini, FPGA implementation of Elementary Motion Detectors for the visual guidance of Micro-Air-Vehicles, in: Proceedings of the IEEE International Symposium on Industrial Electronics, Ajaccio, France, 2004, vol. 1, pp. 71–76.
[33] J.C. Zufferey, A. Beyeler, D. Floreano, Vision-based navigation from wheels to wings, in: Proceedings of the IEEE International Conference on Intelligent Robots and Systems, Las Vegas, USA, 2003, pp. 2968–2973.
[34] F. Aubépart, N. Franceschini, Optic flow sensors for robots: Elementary Motion Detectors based on FPGA, in: Proceedings of the IEEE International Workshop on Signal Processing Systems, Athens, Greece, 2005, pp. 182–187.
[35] C. Browy, G. Gullikson, M. Indovina, A top-down approach to IC design, <www.indovina.us/~mai/a_top_down_approach_to_ic_design.pdf>.

Fabrice Aubépart was born in Chaumont, France. In 1999, he obtained his Ph.D. degree in Microelectronics at the Université Louis Pasteur in Strasbourg, France. He joined the Biorobotics Laboratory at the Movement and Perception Institute, CNRS and University of the Mediterranean in Marseille, France, in 2001. His research focuses mainly on algorithms and architectures for robotics, computer vision and VLSI design.

Nicolas Franceschini was born in Mâcon, France. He graduated in Electronics and Control Theory at the National Polytechnic Institute in Grenoble before studying Biophysics, Neurophysiology and behavioural analysis at the University and Max-Planck Institute for Biological Cybernetics in Tübingen (Germany). He obtained his PhD at the National Polytechnic Institute in Grenoble in 1972 and spent nine years as a research worker at the Max-Planck Institute. He then settled in Marseille, where he set up a Neurocybernetics Research Group at the National Centre for Scientific Research (C.N.R.S.). He is now a C.N.R.S. Research Director and Head of the Biorobotics Laboratory at the Movement and Perception Institute, CNRS and University of the Mediterranean, Marseille, France.
