
REALTIME TRAFFIC CONGESTION CONTROL

WAQAS ALTAF
ZAFAR KHAN
FAHAD ALI KHAN

BCE-SP09-031
BCE-SP09-039
BCE-SP09-044

SPRING 2013
Department of Electrical Engineering

COMSATS INSTITUTE OF INFORMATION TECHNOLOGY


ABBOTTABAD PAKISTAN

Acknowledgements
In the Name of ALLAH, the Most Kind and Most Merciful
We are grateful to ALLAH Almighty, who provides us with resources of every kind so that we may put them to proper use for the benefit of mankind. May He keep providing us with these resources, and with the guidance to keep serving humanity.
We would like to thank our parents, who backed us at all times, both financially and morally. We would also like to thank our supervisor, Mr. Umair Ashiq, for his guidance and for encouraging us to work hard and smart. We found him very helpful while discussing the optimization issues in this dissertation, and his critical comments on our work certainly made us think of new ideas and techniques in the fields of optimization and software simulation.
We are also very thankful to all the lab attendants who helped us with access to the labs for our experiments, especially Mr. Saeed, the attendant of the VLSI lab, who cooperated with us very well.

Table of Contents
1 Introduction
1.1 Background
1.2 Overview of Implementation
1.3 Objectives and Goals
1.4 Block Diagram of System
1.5 Scope of Thesis
2 Technical Approaches
2.1 Video
2.2 Field Programmable Gate Array (FPGA)
2.2.1 Spartan 3E
2.3 Image Processing
2.3.1 Edge detection
2.3.2 Hough Transform
2.4 Hardware Description Language
2.4.1 Verilog
2.4.2 VHDL
2.5 User Constraint File
2.6 Xilinx ISE
2.7 Programming FPGA
2.8 FPGA Applications
2.8.1 Video/Image Processing
2.8.2 Control systems
2.8.3 Embedded systems
3 Design and Implementation
3.1 Methodology
3.2 Design Description (Algorithm 1)
3.2.1 Video Input
3.2.2 Pre-processing
3.2.3 Background Removing
3.2.4 Vehicle Area Detection
3.2.5 Line detection
3.2.6 Result
3.3 Pre-processing
3.3.1 RGB to Gray Conversion
3.3.2 Canny edge detection
3.3.2.1 Image Smoothing
3.3.2.2 Calculation of the Strength and Direction of Edges
3.3.2.3 Non-maximum Suppression
3.3.2.4 Thresholding
3.4 Background removing
3.5 Vehicle Area Detection
3.6 Result
3.7 FPGA Implementation
3.7.1 Connecting PC and FPGA
3.7.2 Storing the Received Image
3.7.3 Image Processing
3.7.4 Sending the Result Back to the PC
4 Conclusions & Future Work
4.1 Conclusion
4.2 Limitations
4.3 Future Work
5 References
6 Appendices A, B, C: VHDL Code

List of Figures
Figure 2.1 Spartan 3E board photo
Figure 3.0 RGB to gray conversion
Figure 3.1 Canny algorithm steps
Figure 3.2 Canny result image
Figure 3.3 Images and their edges
Figure 3.4 Background
Figure 3.5 Vehicle extraction
Figure 3.6 Image with background
Figure 3.7 Background removing
Figure 3.8 Line detection result
Figure 3.9 (1-23) Results photos
Figure 3.10 Hardware model

List of Flow Diagrams

Flow diagram 1:1 System Block Diagram

1 Introduction

1.1 Background
As population grows, traffic grows with it, but the deployed traffic-control systems are still old fashioned, which results in frequent congestion. Since traffic is increasing day by day, we have to cope with the congestion problem. The systems currently in use are timer based, and in some developed countries sensor-based systems have been installed. Timer-based systems perform poorly because they respond only to preset timings regardless of the actual conditions: if one route is empty and another is full, the system still switches the signals according to the timer rather than according to the traffic. Sensor-based systems work well, but they are expensive, complex and need a lot of maintenance. We therefore adopted a currently researched methodology: using some image-processing techniques we detect congestion and generate signals according to the current traffic conditions (congested or not).
We first built the software and then tried to implement it on an FPGA. In the first part the system runs on a PC (MATLAB), and in the second part on an FPGA (Spartan 3E).
The overall system therefore consists of a video camera, a PC and an FPGA; some other components and their usage are discussed later.

1.2 Overview of Implementation


The project consists of an implementation of real-time traffic congestion control in MATLAB. First, frames are taken from the live video camera with a delay of a few seconds and sent to a PC, where MATLAB processes the image: edge detection is applied to the current frame (to keep only the minimum number of useful pixels), and the road portion is extracted from the frame using some simple techniques, since we have no concern with the background.
The area occupied by vehicles is then calculated. If it is greater than a threshold value (the threshold is calculated from different test images and can be varied for different roads), a "congestion" signal is sent; otherwise a "no congestion" signal is sent to the server room or directly to the road. Its different uses are discussed later in the thesis.
We also tried to implement the system on an FPGA. In that version the video camera sends frames to the PC and the PC forwards them to the FPGA, where edge detection is applied to the image and a rectangle-detection technique is used to detect vehicles. The number of vehicles is counted to decide whether the road is congested or not.

1.3 Objectives and Goals


For system (1), implemented in MATLAB:
Canny edge detection
Background removal
Vehicle detection
Finding the area of vehicles
Generation of the signal (result)

For system (2), implemented on the FPGA:
Canny edge detection
Rectangle detection
Generation of the signal (result)

1.4 Block Diagram of System

Flow diagram 1:1 System Block Diagram

1.5 Scope of Thesis


Chapter 1 is the introduction of the project: what is actually going to be implemented. An overview of the implementation is also given, and a block diagram is drawn to make the project easier to understand.
Chapter 2 gives a sketch of the literature review. The technical terms are explained, and the software and hardware used are described.
Chapter 3 covers the framework of the system. It is the main chapter, because the algorithms implemented are explained in it step by step, together with the results obtained; block diagrams are included to make the material clearer.
Chapter 4 concludes the work, lists its limitations, and points out things that can be done further in the project.

2 Technical Approaches
2.1 Video
Video is the process of electronically capturing, storing, transmitting and reconstructing a sequence of still images that represent a scene in motion.

2.2 Field Programmable Gate Array (FPGA)


FPGA is the acronym for Field Programmable Gate Array. An FPGA is an integrated circuit that consists of many configurable cells of digital logic. Each logic cell contains combinational logic (gates) and state-dependent logic (flip-flops and latches). The user can configure the device and combine the circuitry of many different cells. With an FPGA it is possible to build almost any digital circuit, from a simple logical AND up to a very complex processor, just by reconfiguring the device. The FPGA is therefore a powerful tool for prototyping and for applications with constantly changing requirements. Limitations in the implementation of designs are set by the type of FPGA, that is, by the number of logic cells and by the internal timing constraints.

2.2.1 Spartan 3E
The Spartan-3E Starter Board provides a powerful and highly advanced self-contained development platform for designs targeting the Spartan-3E FPGA from Xilinx. It features a 500K-gate Spartan-3E FPGA with a 32-bit RISC processor and DDR interfaces.
The board also features a Xilinx Platform Flash, USB and JTAG parallel programming interfaces, and numerous FPGA configuration options via the onboard Intel StrataFlash and ST Microelectronics serial flash. The board is fully compatible with all versions of the Xilinx ISE tools, including the free WebPACK. It ships with a power supply and a USB cable for programming, so designs can be implemented immediately with no hidden costs. The Spartan-3E Starter Board is also compatible with the MicroBlaze Embedded Development Kit (EDK) and PicoBlaze from Xilinx.

Figure 2.1 Spartan 3E board photo

2.3 Image Processing


Image processing is the process of converting an image signal into a physical image or into information about the image. The image signal can be either digital or analog, and the output can be an actual physical image or a set of characteristics of the image. The most common type of image processing is photography: an image is captured using a camera to create a digital or analog image, and in order to produce a physical picture it is processed using the technology appropriate to the input source type.

2.3.1 Edge detection


Edge detection is the name for a set of mathematical methods that aim at identifying points in a digital image at which the image brightness changes sharply or, more formally, has discontinuities. The points at which image brightness changes sharply are typically organized into a set of curved line segments termed edges. The same problem of finding discontinuities in a 1D signal is known as step detection, and the problem of finding signal discontinuities over time is known as change detection. Edge detection is a fundamental tool in image processing, machine vision and computer vision, particularly in the areas of feature detection and feature extraction.

2.3.2 Hough Transform


The Hough Transform was invented by Paul Hough in 1962 and patented by IBM. It has become a standard tool in the domain of computer vision for the recognition of straight lines, circles and ellipses. The Hough Transform is particularly robust to missing and contaminated data, and it can conduct a search in parameter space for any number of lines (or other geometric objects that have parametric descriptions).
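For straight lines, the parametric description usually used is rho = x*cos(theta) + y*sin(theta): every edge pixel votes for all (rho, theta) pairs of lines passing through it, and peaks in the resulting vote accumulator correspond to the lines present in the image.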

2.4 Hardware Description Language


A Hardware Description Language (HDL) is a computer language for describing the design of electronic circuits. Verilog and VHDL are the two common hardware description languages.
An HDL is a text-based notation of the temporal behaviour and structure of an electronic system. It is also used for simulation, for example in test benches run with the Xilinx tools.

2.4.1 Verilog
Verilog is a hardware description language: a textual format for implementing a design and then checking it through test-bench simulation. With Xilinx used as the editor, we can implement our design using Verilog instructions.
Verilog HDL is IEEE standard number 1364. Its first version was published in 1995 and a revised version came in 2005. IEEE Std 1364 also defines the programming language interface (PLI).

2.4.2 VHDL
VHDL is also a hardware description language and can be used to model a digital or electronic system design. It stands for VHSIC (very high speed integrated circuit) hardware description language. VHDL can describe the behaviour, structure and timing of a design.

2.5 User Constraint File


The User Constraint File (UCF) is a file in which we define the signal pinout, switching levels, slew rates and drive strengths for the FPGA board.

2.6 Xilinx ISE


Xilinx ISE is a fully featured front-to-back FPGA design solution for Linux, Windows XP and Windows 7. ISE WebPACK is the ideal downloadable solution for FPGA and CPLD design, offering HDL synthesis and simulation, implementation, device fitting, and JTAG programming.

2.7 Programming FPGA


The platform used to program the FPGA is Xilinx ISE. First a project is created; using the hardware description language VHDL we design our system, check the syntax and the RTL schematic of the design, add the user constraints file, and finally synthesize the design, during which mapping takes place and the connections are declared. At the end the programming file is generated. Then, with the help of Xilinx iMPACT, we enter FPGA mode and initialize the USB cable; the system automatically downloads its drivers and a connection is built between the computer and the FPGA. Now we can download the bitstream to the FPGA, and through the VGA port we can check the FPGA output on an LCD.

2.8 FPGA Applications


Image-processing applications of FPGAs are nowadays very popular because of their high processing speed and performance, for example in robots, in systems on chip (SoC) and in security systems.
Some applications of FPGAs are the following:

2.8.1 Video/Image Processing


A lot of processing power is needed to process video; MATLAB on a PC is the usual example of doing this in software. An FPGA can instead be used to implement a specific algorithm directly in hardware.

2.8.2 Control systems


Low-latency control operations can make use of an FPGA instead of a microcontroller.

2.8.3 Embedded systems


After a project has been implemented on an FPGA, a chip including the peripherals used on the FPGA can be designed through a fabrication process, and the computer can then easily be replaced with that small chip.

3 Design and Implementation


3.1 Methodology
3.2 Design Description (Algorithm 1)
3.2.1 Video Input
The input is the real-time video coming from the static camera fixed above the road.

3.2.2 Pre-processing
The image is converted from RGB to grey scale, the noise is removed, and finally the edges are detected.

3.2.3 Background Removing
Removing the background is the next step; the algorithm used for it is described in Section 3.4.

3.2.4 Vehicle Area Detection
Once the background is removed, the next step is vehicle area detection: we count the numbers of ones and zeros in the image (Algorithm 1).

3.2.5 Line detection
Alternatively, lines are detected using the Hough transform (Algorithm 2).

3.2.6 Result
Finally the result is displayed (this step is common to both algorithms).

3.3 Pre-processing
It has two main steps
(1) RGB to gray conversion.
(2) Canny edge detection

3.3.1 RGB to Gray Conversion


MATLAB provides a built-in function (rgb2gray), so it is easy to convert an RGB image to grey scale. The result is an 8-bit image with possible values from 0 to 255. In the real-time system, the image taken from the camera is also an RGB image, but no such function is available on the FPGA, so there the image is converted to grey scale using the following general formula:

Grey Scale = 0.3*R + 0.59*G + 0.11*B .. eq (3.1)
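For illustration, a minimal MATLAB sketch of both conversions is given below; the file name and the variable names (rgbFrame, grayFrame, grayManual) are only placeholders for this example.

% Read a colour frame (placeholder file name)
rgbFrame = imread('frame1.jpg');

% Built-in conversion (Image Processing Toolbox)
grayFrame = rgb2gray(rgbFrame);

% Manual conversion using eq (3.1): 0.3*R + 0.59*G + 0.11*B
R = double(rgbFrame(:,:,1));
G = double(rgbFrame(:,:,2));
B = double(rgbFrame(:,:,3));
grayManual = uint8(0.3*R + 0.59*G + 0.11*B);   % 8-bit result, values 0..255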

RGB and the converted grey scale image is shown below

Figure 3.0 rgb to gray conversion

3.3.2 Canny edge detection


Canny edge detection is one of the basic algorithms used in shape recognition. The
algorithm uses a multi-stage process to detect a wide range of edges in images. The stages, as
enumerated by Rao and Venkatesan [7] are:
a. Image smoothing. This is done to reduce the noise in the image.
b. Calculating edge strength and edge direction.
c. Directional non-maximum suppression to obtain thin edges across the image.
d. Invoking threshold with hysteresis [1] to obtain only the valid edges in an image.

A block diagram of the Canny edge detection algorithm is shown in Figure 3.1. The input to the detector can be either a color image or a grayscale image; the output is an image containing only those edges that have been detected.

Figure 3.1 canny algorithm steps

3.3.2.1 Image Smoothing


Image smoothing is the first stage of Canny edge detection. The pixel values of the input image are convolved with a predefined operator to create an intermediate image. This process reduces the noise within the image and produces a less pixelated image.
Image smoothing is performed by convolving the input image with a Gaussian filter [8]. A Gaussian filter is a discrete version of the 2-dimensional function shown in equation (2.1):

G(x, y) = (1 / (2*pi*sigma^2)) * exp(-(x^2 + y^2) / (2*sigma^2)) .. eq (2.1)

In (2.1), sigma is the standard deviation of the Gaussian filter, which describes the narrowness of the peaked function, and x and y are spatial coordinates.
An example of the conversion of (2.1) into a 2-dimensional 5x5 discrete Gaussian filter is equation (2.2); that filter is obtained for sigma = 1.4 by substituting integer values for x and y and renormalizing [8]. If A is the 2-dimensional array of input pixel values and B is the output image, smoothing with an MxM block filter yields an image in which each pixel value of B is a weighted average of the surrounding pixel values of A, with the pixel value at the center weighted much more strongly than those at nearby points.
Image smoothing is done primarily to suppress noise and to obtain a continuous edge contour during the non-maximum suppression process. The output is a blurred intermediate image, which is used by the next block to calculate the strength and direction of the edges.
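A minimal MATLAB sketch of this smoothing stage is given below, assuming the Image Processing Toolbox is available and that grayFrame is the grey-scale image from Section 3.3.1.

% 5x5 Gaussian kernel with sigma = 1.4, a discrete version of eq (2.1)
gKernel = fspecial('gaussian', [5 5], 1.4);    % kernel values sum to 1

% Convolve the image with the kernel to obtain the blurred image B
B = imfilter(double(grayFrame), gKernel, 'replicate', 'conv');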

3.3.2.2 Calculation of the Strength and Direction of Edges


In this stage, the blurred image obtained from the image smoothing stage is convolved with a 3x3 Sobel operator [1]. The Sobel operator is a discrete differential operator that generates a gradient image. Horizontal and vertical Sobel operators are used to calculate the horizontal gradient Gx and the vertical gradient Gy, respectively.
The edge strength is the magnitude of the gradient:

G = sqrt(Gx^2 + Gy^2)

As an example, for a pixel with Gx = 20 and Gy = 80 the edge strength is sqrt(20^2 + 80^2) = 82.46. The edge direction is then

A = arctan(Gy/Gx) = arctan(80/20) = arctan(4) = 75.96 degrees (rounded off to 90 degrees for non-maximum suppression).

The edge direction and the edge strength calculated here are used to obtain the image with thin edges, as discussed in the next section.
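The following MATLAB sketch shows this step, assuming B is the blurred image from the previous stage; the 3x3 kernels are the standard Sobel operators.

% Standard 3x3 Sobel operators
Sx = [-1 0 1; -2 0 2; -1 0 1];       % horizontal gradient operator
Sy = [ 1 2 1;  0 0 0; -1 -2 -1];     % vertical gradient operator

Gx = conv2(B, Sx, 'same');           % horizontal gradient
Gy = conv2(B, Sy, 'same');           % vertical gradient

strength  = sqrt(Gx.^2 + Gy.^2);     % edge strength (82.46 for Gx = 20, Gy = 80)
direction = atan2(Gy, Gx);           % edge direction in radians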

3.3.2.3 Non-maximum Suppression


Non-maximum suppression (NMS) [2] is normally used in edge detection algorithms. It is a process in which all pixels whose edge strength is not a local maximum are marked as zero within a certain local neighborhood. This local neighborhood can be a linear window of length 5 pixels oriented at different directions [3]. The linear window is chosen in accordance with the edge direction of the pixel under consideration [3], as shown in the figure below.

(Figure: linear window at an angle of (a) 135, (b) 90, (c) 45, (d) 0 degrees)

Non-maximum suppression suppresses all image information that is not a local maximum. For each pixel, the edge strengths of the neighbors in the linear window are compared, and pixels that are not the local maximum are set to zero; only the local maximum is kept. A dynamic block [4] with a linear window of size 5 pixels is used to scan the image, and the thin edges obtained throughout the image are the result of non-maximum suppression.
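A simplified MATLAB sketch of non-maximum suppression is given below; it compares each pixel only with its two neighbours along the quantized edge direction (a 3-pixel window rather than the 5-pixel window described above), using strength and direction from the previous stage.

% Quantize the edge direction to 0, 45, 90 or 135 degrees and keep only
% pixels that are local maxima along that direction.
[rows, cols] = size(strength);
nms = zeros(rows, cols);
ang = mod(direction*180/pi, 180);            % fold directions into 0..180 degrees
for r = 2:rows-1
    for c = 2:cols-1
        a = ang(r, c);
        if (a < 22.5) || (a >= 157.5)        % ~0 degrees: left/right neighbours
            n1 = strength(r, c-1);   n2 = strength(r, c+1);
        elseif a < 67.5                      % ~45 degrees: one diagonal
            n1 = strength(r-1, c+1); n2 = strength(r+1, c-1);
        elseif a < 112.5                     % ~90 degrees: up/down neighbours
            n1 = strength(r-1, c);   n2 = strength(r+1, c);
        else                                 % ~135 degrees: other diagonal
            n1 = strength(r-1, c-1); n2 = strength(r+1, c+1);
        end
        if strength(r, c) >= n1 && strength(r, c) >= n2
            nms(r, c) = strength(r, c);      % keep the local maximum
        end
    end
end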

3.3.2.4 Thresholding
Thresholding is the last stage of Canny edge detection; it is used to eliminate spurious points and non-edge pixels from the result of non-maximum suppression. The input image for thresholding has already gone through image smoothing, calculation of edge strength and direction, and the non-maximum suppression stage to obtain thin edges. The result of this stage should contain only the valid edges of the image, which is achieved by using two threshold values, T1 (high) and T2 (low), on the edge strength of each pixel. A pixel with edge strength greater than T1 is considered a definite edge. A pixel with edge strength less than T2 is set to zero. A pixel with edge strength between T2 and T1 is kept only if there is a path from it to a pixel with edge strength above T1, and that path must run entirely through pixels with edge strength of at least T2. This process reduces the probability of streaking.
Since the edge strength depends on the intensity of the image pixels, the thresholds T1 and T2 also depend on it. Hence T1 and T2 are calculated by Canny edge detectors using adaptive algorithms, or we can set them by running certain tests.
Thus all the edges in an image are detected using Canny edge detection.
The figure below shows the image after passing through the Canny edge detection block.

Figure 3.2 canny result image
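The complete Canny pipeline, including hysteresis thresholding, can also be run with MATLAB's built-in edge function; in the sketch below the normalized thresholds 0.1 (T2) and 0.3 (T1) are only placeholders and would in practice be chosen by the tests mentioned above.

% Built-in Canny detector with explicit hysteresis thresholds [T2 T1]
E = edge(grayFrame, 'canny', [0.1 0.3]);     % E is a logical edge map
imshow(E);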

3.4 Background removing


Since we are only concerned with the vehicles, we have to eliminate the background. Certain algorithms exist for this; some are efficient but very complex. We use a new algorithm which is very simple but has a few drawbacks: two frames are used to extract the background, which is then subtracted from each new frame. Let us see how it works.
Take two frames from the video with some delay between them and perform the Canny edge detection algorithm on each, as shown in the figure below.

Figure 3.3 images and their edges


Then perform a simple AND operation on them; the result is the background. In MATLAB:

>> background = and(image1, image2);

Figure 3.4 background


Take a new frame, perform the Canny operation on it, and subtract the background from it:

>> newimage - background

Figure 3.5 vehicle extraction
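The whole step can be written compactly in MATLAB as sketched below, assuming frame1, frame2 and newframe are grey-scale frames already grabbed from the video.

% Build the background from two frames and subtract it from a new frame
e1 = edge(frame1, 'canny');          % edges of the first frame
e2 = edge(frame2, 'canny');          % edges of the second frame
background = and(e1, e2);            % edges present in both frames = static background
e3 = edge(newframe, 'canny');        % edges of the frame under test
vehicles = e3 - background;          % moving objects (vehicles) remain
vehicles(vehicles < 0) = 0;          % clip negative values left by the subtraction
imshow(vehicles);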


Another point to consider is that, when the camera is mounted centrally above the road, only a triangular portion of the image is useful to us.

Figure 3.6 image with background

The MATLAB code that removes these triangular portions is given below.

%Code: mask out the triangular regions outside the road in the binary image i1
% Left-hand triangle: for rows 49 to 115, blank columns 1 to g-1,
% narrowing the blanked strip by 2 columns on every row.
g = 125;
for k = 49:115
    i1(k, 1:g-1) = 0;
    g = g - 2;
end

% Right-hand triangle: for rows 49 to 130, blank columns inc to g-1,
% shifting the start of the blanked strip right by 1 column per row.
inc = 200;
g = 300;
for k = 49:130
    i1(k, inc:g-1) = 0;
    inc = inc + 1;
end

After applying this algorithm to a real-time image, the results are shown below.

Figure 3.7 background removing

3.5 Vehicle Area Detection


We simply count the number of ones and zeros in the image and store the share of ones, as a percentage, in a variable. The MATLAB code is below:

[totalrows, totalcoloumns] = size(i1);   % i1 is the binary image after background removal
ones_count = 0;
zeros_count = 0;
for i = 1:totalrows
    for j = 1:totalcoloumns
        if i1(i, j) == 0
            zeros_count = zeros_count + 1;
        else
            ones_count = ones_count + 1;
        end
    end
end
percnt_o = 100 * ones_count / (ones_count + zeros_count);   % percentage of ones

The line detection step (Algorithm 2) is done with the Hough transform; its MATLAB code is below.

>> I = imread('gantrycrane.png');
G = rgb2gray(I);
E = edge(G, 'canny');
imshow(E);
[H, theta, rho] = hough(E);
figure, imshow(H, []);
peaks = houghpeaks(H, 50, 'Threshold', 30);
figure, imshow(G, []), hold on
lines = houghlines(E, theta, rho, peaks, 'FillGap', 5, 'MinLength', 15);
for k = 1:length(lines)
    xy = [lines(k).point1; lines(k).point2];
    plot(xy(:,1), xy(:,2), 'LineWidth', 1, 'Color', 'r');
end

Figure 3.8 line detection result

3.6 Result
The total number of ones is stored in a variable and converted to a percentage. We then have to set a threshold value for congestion; after a number of experiments we chose 2.689. Any value above this threshold is treated as congestion, and any value below it is not.
The MATLAB code for the congestion decision is below:

>> hold on
if (percnt_o <= 2.689)
    text(10, 10, '\color{green}NO Congestion Found:')
else
    text(10, 10, '\color{red}Congestion Found:')
end
title('Checking whether congestion occurs or not');

The output images for the test frames (1) to (23) are shown below.

Figure 3.9 (1-23) Results photos

As we have not finished the FPGA implementation, what we have tried so far is explained in the figure below.

Figure 3.10 hardware model

3.7 FPGA Implementation


FPGA stands for field programmable gate array. FPGAs have a very fast processing speed because the processing takes place in parallel. Internally, an FPGA contains look-up tables (LUTs), with the help of which the desired digital design can be implemented.

3.7.1 Connecting PC and FPGA


As we are sending images from the computer to the FPGA for processing, we need to design a UART for PC-to-FPGA communication (the complete code of the UART is in Appendix A, along with the MATLAB code). We have used an RS-232 cable for the connection.
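A minimal sketch of the PC side in MATLAB using the legacy serial interface is given below; the COM port name and the 9600 baud, 8-N-1 settings are assumptions and must match the UART described in Appendix A.

% Open the serial link to the FPGA board (port name and settings assumed)
s = serial('COM1', 'BaudRate', 9600, 'DataBits', 8, 'StopBits', 1, 'Parity', 'none');
fopen(s);

% Send the grey-scale image as a stream of bytes (one per pixel)
img = uint8(grayFrame);
fwrite(s, img(:), 'uint8');

% Read back the one-byte congestion result from the FPGA
result = fread(s, 1, 'uint8');

fclose(s);
delete(s);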

3.7.2 Storing the Received Image


After the image is received by the FPGA it has to be stored in the FPGA memory (the complete code of the memory is in Appendix B).

3.7.3 Image Processing


For this part we have used cores (cores have built-in functions); the cores must first be installed in Simulink before this will work (the complete code is in Appendix C).

3.7.4 Sending the Result Back to the PC


We can send the result back to the PC using the UART, or light an LED on the FPGA.

4 Conclusions & Future Work


4.1 Conclusion
The goal of our project was to develop a system that detects traffic congestion from a real-time video, and we have achieved this goal in MATLAB. First, images taken from the live video are converted to grey scale and then to binary edge maps using the Canny algorithm. Our algorithm removes the background of the frame, detects the vehicles, counts the number of ones and zeros, and compares the result against a threshold value before showing the output.
The overall accuracy of our algorithm is 90% for motorways and 85% for old roads. The size of the image is not an issue for our algorithm, because the image is resized before processing. We have also tested the algorithm on the FPGA for a still image.

4.2 Limitations
The image/frame should not be blurred; the image must be clear.
The algorithm fails to detect traffic congestion in dark or very foggy conditions.
The camera screen (lens) must be cleaned weekly.
Camera quality and position are very important (for different cameras the threshold point has to be set accordingly).

4.3 Future Work


This is only the beginning of the project, and a lot of improvement is still needed before it can be used in a real-time application. We tried to implement our algorithm on an FPGA, but due to time limits and complexity this is not yet complete. We look forward to completing it on the FPGA so that, in the end, a small chip can be fabricated and embedded in a camera to make a complete working system.

5 References
[1] R. C. Gonzalez and R. E. Woods, Digital Image Processing, Second Edition. Upper Saddle River, NJ: Pearson Education Inc., 2002.
[2] J. Canny, "A computational approach to edge detection," IEEE Trans. Pattern Anal. Mach. Intell., vol. 8, pp. 679-714, Nov. 1986.
[3] C. Sun and P. Vallotton, "Fast linear feature detection using multiple directional non-maximum suppression," in Proc. Int. Conf. Pattern Recognition, Hong Kong, China, 2006, pp. 288-291.
[4] G. M. Schuster and A. K. Katsaggelos, "Robust circle detection using a weighted MSE estimator," in Proc. Int. Conf. Image Processing, Singapore, 2004, pp. 2111-2114.
All material regarding the Hough transform was taken from Prof. William Hoff, EGGN 512 Computer Vision, Colorado School of Mines (Engineering Division).

APPENDIX A
library IEEE, STD;
use IEEE.Std_Logic_1164.all;
use IEEE.Numeric_Std.all;

package UART_Def is
  -- Converts an unsigned Std_Logic_Vector to Integer, leftmost bit is MSB
  -- Error message for unknowns (U, X, W, Z, -), converted to 0
  -- Verifies whether the vector is too long (> 16 bits)
  function ToInteger (
    InVector : in Unsigned(3 downto 0))
    return Integer;
end UART_Def;
-- ==================== End of package header ====================

package body UART_Def is
  function ToInteger (
    InVector : in Unsigned(3 downto 0))
    return Integer is
    constant HeaderMsg   : String         := "To_Integer:";
    constant MsgSeverity : Severity_Level := Warning;
    variable Value       : Integer        := 0;
  begin
    for i in 0 to 3 loop
      if (InVector(i) = '1') then
        Value := Value + (2**i);
      end if;
    end loop;
    return Value;
  end ToInteger;
end UART_Def;
-- ==================== End of package body ====================

-- =====================================================================
-- S Y N T H E S I Z A B L E   miniUART   C O R E
-- Design units : miniUART core for the OCRP-1
-- File name    : TxUnit.vhd
-- Purpose      : Implements a miniUART device for communication purposes
--                between the OR1K processor and the host computer through
--                an RS-232 communication protocol.
-- Library      : uart_lib.vhd
-- Dependencies : IEEE.Std_Logic_1164
-- Description  : Entity for the Tx Unit
-- =====================================================================
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;
library work;
use work.Uart_Def.all;

-- Transmitter unit
entity TxUnit is
port (
Clk : in Std_Logic; -- Clock signal
Reset : in Std_Logic; -- Reset input
Enable : in Std_Logic; -- Enable input
Load : in Std_Logic; -- Load transmit data
TxD : out Std_Logic; -- RS-232 data output
TRegE : out Std_Logic; -- Tx register empty
TBufE : out Std_Logic; -- Tx buffer empty
DataO : in Std_Logic_Vector(7 downto 0));
end entity;
-- ==================== End of entity ====================

-- Architecture for TxUnit
architecture Behaviour of TxUnit is
  -- Signals
  signal TBuff    : Std_Logic_Vector(7 downto 0); -- transmit buffer
  signal TReg     : Std_Logic_Vector(7 downto 0); -- transmit register
  signal BitCnt   : Unsigned(3 downto 0);         -- bit counter
  signal tmpTRegE : Std_Logic;
  signal tmpTBufE : Std_Logic;
begin
  -- Implements the Tx unit
  process(Clk, Reset, Enable, Load, DataO, TBuff, TReg, tmpTRegE, tmpTBufE)
variable tmp_TRegE : Std_Logic;
constant CntOne : Unsigned(3 downto 0):="0001";
begin

    if Rising_Edge(Clk) then
      if Reset = '0' then
        tmpTRegE <= '1';
        tmpTBufE <= '1';
        TxD      <= '1';
        BitCnt   <= "0000";
      elsif Load = '1' then
        TBuff    <= DataO;
        tmpTBufE <= '0';
      elsif Enable = '1' then
        if (tmpTBufE = '0') and (tmpTRegE = '1') then
          TReg     <= TBuff;
          tmpTRegE <= '0';
          tmpTBufE <= '1';
        end if;
        if tmpTRegE = '0' then
          case BitCnt is
            when "0000" =>                               -- start bit
              TxD    <= '0';
              BitCnt <= BitCnt + CntOne;
            when "0001" | "0010" | "0011" | "0100" |
                 "0101" | "0110" | "0111" | "1000" =>    -- eight data bits
              TxD    <= TReg(0);
              TReg   <= '1' & TReg(7 downto 1);
              BitCnt <= BitCnt + CntOne;
            when "1001" =>                               -- stop bit
              TxD      <= '1';
              TReg     <= '1' & TReg(7 downto 1);
              BitCnt   <= "0000";
              tmpTRegE <= '1';
            when others =>
              null;
          end case;
        end if;
      end if;
    end if;
  end process;
  TRegE <= tmpTRegE;
  TBufE <= tmpTBufE;
end Behaviour;
-- =================== End of architecture ===================

-- =====================================================================
-- S Y N T H E S I Z A B L E   miniUART   C O R E
-- Design units : miniUART core for the OCRP-1
-- File name    : RxUnit.vhd
-- Purpose      : Implements a miniUART device for communication purposes
--                between the OR1K processor and the host computer through
--                an RS-232 communication protocol.
-- Library      : uart_lib.vhd
-- Dependencies : IEEE.Std_Logic_1164
-- Description  : Implements the receive unit of the miniUART core.
--                Samples the RxD line 16 times and retains the value in
--                the middle of the time interval.
-- =====================================================================
-- Entity for Receive Unit - 9600 baud rate
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;
library work;
use work.UART_Def.all;

-- Receive unit
entity RxUnit is
port (
Clk : in Std_Logic; -- system clock signal
Reset : in Std_Logic; -- Reset input
Enable : in Std_Logic; -- Enable input
RxD : in Std_Logic; -- RS-232 data input
RD : in Std_Logic; -- Read data signal
FErr : out Std_Logic; -- Status signal
OErr : out Std_Logic; -- Status signal
DRdy : out Std_Logic; -- Status signal
DataIn : out Std_Logic_Vector(7 downto 0));
end entity;
-- ==================== End of entity ====================

-- Architecture for receive Unit
architecture Behaviour of RxUnit is
  -- Signals
  signal Start     : Std_Logic;                    -- syncro signal
  signal tmpRxD    : Std_Logic;                    -- RxD buffer
  signal tmpDRdy   : Std_Logic;                    -- data ready buffer
  signal outErr    : Std_Logic;
  signal frameErr  : Std_Logic;
  signal BitCnt    : Unsigned(3 downto 0);
  signal SampleCnt : Unsigned(3 downto 0);         -- samples-on-one-bit counter
  signal ShtReg    : Std_Logic_Vector(7 downto 0);
  signal DOut      : Std_Logic_Vector(7 downto 0);
begin
  -- Receiver process
  RcvProc : process(Clk, Reset, Enable, RxD)
variable tmpBitCnt : Integer range 0 to 15;
variable tmpSampleCnt : Integer range 0 to 15;
constant CntOne : Unsigned(3 downto 0):="0001";
begin
if Rising_Edge(Clk) then
tmpBitCnt := ToInteger(BitCnt);
tmpSampleCnt := ToInteger(SampleCnt);
if Reset = '0' then
BitCnt <= "0000";
SampleCnt <= "0000";
Start <= '0';
tmpDRdy <= '0';
frameErr <= '0';
outErr <= '0';
ShtReg <= "00000000";
DOut   <= "00000000";
else
if RD = '1' then
tmpDRdy <= '0'; -- Data was read
end if;
if Enable = '1' then
if Start = '0' then
if RxD = '0' then -- Start bit,
SampleCnt <= SampleCnt + CntOne;
Start <= '1';
end if;
else
if tmpSampleCnt = 8 then -- reads the RxD line
tmpRxD <= RxD;
SampleCnt <= SampleCnt + CntOne;
elsif tmpSampleCnt = 15 then
case tmpBitCnt is
when 0 =>
if tmpRxD = '1' then -- Start Bit
Start <= '0';
else
BitCnt <= BitCnt + CntOne;
end if;
SampleCnt <= SampleCnt + CntOne;
when 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 =>
BitCnt <= BitCnt + CntOne;
SampleCnt <= SampleCnt + CntOne;
ShtReg <= tmpRxD & ShtReg(7 downto 1);
when 9 =>
if tmpRxD = '0' then -- stop bit expected
frameErr <= '1';
else
frameErr <= '0';
end if;
if tmpDRdy = '1' then
outErr <= '1';
else
outErr <= '0';
end if;
tmpDRdy <= '1';
DOut <= ShtReg;
BitCnt <= "0000";
Start <= '0';
when others =>
null;
end case;
else
SampleCnt <= SampleCnt + CntOne;
end if;
end if;
end if;

end if;
end if;
end process;
DRdy <= tmpDRdy;
DataIn <= DOut;
FErr <= frameErr;
OErr <= outErr;
end Behaviour; --==================== End of architecture ====================--

APPENDIX B
library IEEE;
use IEEE.std_logic_1164.all;
use IEEE.std_logic_unsigned.all;
USE ieee.std_logic_arith.all;
entity usrf_ram is
  GENERIC (add_w  : INTEGER := 4;
           data_w : INTEGER := 8);
  port (
    clk_WR : in  STD_LOGIC;
    clk_RD : in  STD_LOGIC;
    rst    : in  STD_LOGIC;
    rc_srs : in  STD_LOGIC := '0';
    WR     : in  STD_LOGIC;
    RD     : in  STD_LOGIC;
    D      : in  STD_LOGIC_VECTOR (data_w-1 downto 0);
    Q      : out STD_LOGIC_VECTOR (data_w-1 downto 0);
    empty  : out STD_LOGIC;
    q_fu   : out STD_LOGIC;
    h_fu   : out STD_LOGIC;
    a_fu   : out STD_LOGIC;
    full   : out STD_LOGIC);
end entity;
architecture a of usrf_ram is
component binary2gray IS
GENERIC (size: INTEGER := 8);
PORT(
B : IN STD_LOGIC_VECTOR(size-1 DOWNTO 0);
G : out STD_LOGIC_VECTOR(size-1 DOWNTO 0)
);
end component;
component gray2binary IS
GENERIC (size: INTEGER := 8);
PORT(

G : IN STD_LOGIC_VECTOR(size-1 DOWNTO 0);


B : out STD_LOGIC_VECTOR(size-1 DOWNTO 0)
);
end component;
type mi_ram_mem_type is array (2**add_w-1 downto 0)
of STD_LOGIC_VECTOR (data_w-1 downto 0);
signal ram_mem : mi_ram_mem_type := (others => (others => '0'));
signal mi_iempty        : STD_LOGIC;
signal dmi_iempty       : STD_LOGIC;
signal mi_ifull         : STD_LOGIC;
signal mi_mi_add_WR_CE  : std_logic;
signal mi_add_WR        : std_logic_vector(add_w downto 0);
signal mi_add_WR_GC     : std_logic_vector(add_w downto 0);
signal imi_add_WR_GC    : std_logic_vector(add_w downto 0);
signal n_mi_add_WR      : std_logic_vector(add_w downto 0);
signal mi_add_WR_RS     : std_logic_vector(add_w downto 0);
signal mi_mi_add_RD_CE  : std_logic;
signal mi_add_RD        : std_logic_vector(add_w downto 0);
signal mi_add_RD_GC     : std_logic_vector(add_w downto 0);
signal imi_add_RD_GC    : std_logic_vector(add_w downto 0);
signal mi_add_RD_GCwc   : std_logic_vector(add_w downto 0);
signal imi_add_RD_GCwc  : std_logic_vector(add_w downto 0);
signal iimi_add_RD_GCwc : std_logic_vector(add_w downto 0);
signal n_mi_add_RD      : std_logic_vector(add_w downto 0);
signal mi_add_RD_WS     : std_logic_vector(add_w downto 0);
signal srs_w            : STD_LOGIC;
signal isrs_w           : STD_LOGIC;
signal srst_r           : STD_LOGIC;
signal isrs_r           : STD_LOGIC;
signal iWR              : STD_LOGIC;
signal cc_mi_add_RD     : std_logic_vector(add_w downto 0);
signal cc_mi_add_WR     : std_logic_vector(add_w downto 0);
signal cs_add           : std_logic_vector(add_w downto 0);
signal cs_zero          : std_logic_vector(add_w downto 1);
begin
iWR <= '1' when ((WR = '1') and (mi_ifull = '0')) else '0';
process (clk_WR)
begin
if (rising_edge(clk_WR)) then
if (iWR = '1') then

ram_mem(CONV_INTEGER(mi_add_WR(add_w-1 downto 0))) <= D;
end if;
end if;
end process;
process (clk_RD)
begin
if (rising_edge(clk_RD)) then
Q <= ram_mem(CONV_INTEGER(mi_add_RD(add_w-1 downto 0)));
end if;
end process;
mi_mi_add_WR_CE <= '0' when (mi_ifull = '1') else
'0' when (WR = '0') else
'1';
n_mi_add_WR <= mi_add_WR + "01";
U1 : binary2gray
generic map (size => add_w+1)
port map(
B => n_mi_add_WR,
G => imi_add_WR_GC
);
process (clk_WR,rst)
begin
if (rst = '1') then
mi_add_WR <= (others => '0');
mi_add_RD_WS(add_w downto add_w-1) <= "11";
mi_add_RD_WS(add_w-2 downto 0) <= (others => '0');
mi_add_WR_GC <= (others => '0');
elsif (rising_edge(clk_WR)) then
mi_add_RD_WS <= mi_add_RD_GCwc;
if (srs_w = '1') then
mi_add_WR <= (others => '0');
mi_add_WR_GC <= (others => '0');
elsif (mi_mi_add_WR_CE = '1') then
mi_add_WR <= n_mi_add_WR;
mi_add_WR_GC <= imi_add_WR_GC;
else
mi_add_WR <= mi_add_WR;
mi_add_WR_GC <= mi_add_WR_GC;
end if;

end if;
end process;
full <= mi_ifull;
mi_ifull <= '0' when (mi_iempty = '1') else
'0' when (mi_add_RD_WS /= mi_add_WR_GC) else
'1';
mi_mi_add_RD_CE <= '0' when (mi_iempty = '1') else
'0' when (RD = '0') else
'1';
n_mi_add_RD <= mi_add_RD + "01";
U2 : binary2gray
generic map (size => add_w+1)
port map(
B => n_mi_add_RD,
G => imi_add_RD_GC
);
iimi_add_RD_GCwc <= (not n_mi_add_RD(add_w)) & n_mi_add_RD(add_w-1
downto 0);
U3 : binary2gray
generic map (size => add_w+1)
port map(
B => iimi_add_RD_GCwc,
G => imi_add_RD_GCwc
);
process (clk_RD,rst)
begin
if (rst = '1') then
mi_add_RD <= (others => '0');
mi_add_WR_RS <= (others => '0');
mi_add_RD_GC <= (others => '0');
mi_add_RD_GCwc(add_w downto add_w-1) <= "11";
mi_add_RD_GCwc(add_w-2 downto 0) <= (others => '0');
dmi_iempty <= '1';
elsif (rising_edge(clk_RD)) then
mi_add_WR_RS <= mi_add_WR_GC;
dmi_iempty <= mi_iempty;
if (srst_r = '1') then
mi_add_RD <= (others => '0');

mi_add_RD_GC <= (others => '0');


mi_add_RD_GCwc(add_w downto add_w-1) <= "11";
mi_add_RD_GCwc(add_w-2 downto 0) <= (others => '0');
elsif (mi_mi_add_RD_CE = '1') then
mi_add_RD <= n_mi_add_RD;
mi_add_RD_GC <= imi_add_RD_GC;
mi_add_RD_GCwc <= imi_add_RD_GCwc;
else
mi_add_RD <= mi_add_RD;
mi_add_RD_GC <= mi_add_RD_GC;
mi_add_RD_GCwc <= mi_add_RD_GCwc;
end if;
end if;
end process;
empty <= dmi_iempty;
mi_iempty <= '1' when (mi_add_WR_RS = mi_add_RD_GC) else
'0';
U4 : gray2binary
generic map (size => add_w+1)
port map(
G => mi_add_RD_GC,
B => cc_mi_add_RD
);
U5 : gray2binary
generic map (size => add_w+1)
port map(
G => mi_add_WR_RS,
B => cc_mi_add_WR
);
cs_add <= (cc_mi_add_WR - cc_mi_add_RD);
q_fu <= '0' when (mi_iempty = '1') else
'0' when (cs_add(add_w downto add_w-2) = "000") else
'1';
h_fu <= '0' when (mi_iempty = '1') else
'0' when (cs_add(add_w downto add_w-1) = "00") else
'1';
cs_zero(add_w) <= '0';
cs_zero(add_w-1 downto 1) <= (others => '1');

a_fu <= '0' when (mi_iempty = '1') else


'0' when (cs_add(add_w downto 1) < cs_zero) else
'1';

process (clk_WR,rst)
begin
if (rst = '1') then
srs_w <= '0';
isrs_r <= '0';
elsif (rising_edge(clk_WR)) then
srs_w <= isrs_w;
if (srs_w = '1') then
isrs_r <= '1';
elsif (srs_w = '0') then
isrs_r <= '0';
end if;
end if;
end process;
process (clk_RD,rst)
begin
if (rst = '1') then
srst_r <= '0';
isrs_w <= '0';
elsif (rising_edge(clk_RD)) then
srst_r <= rc_srs;
if (rc_srs = '1') then
isrs_w <= '1';
elsif (isrs_r = '1') then
isrs_w <= '0';
end if;
end if;
end process;
end architecture;
APPENDIX C
library ieee,xilinxcorelib,unisim;
use ieee.std_logic_unsigned.all;
use ieee.numeric_std.all;
use ieee.std_logic_1164.all;
use ieee.img_edge.all;
entity edge is

end edge;
architecture IMG_ARCHITECTURE of edge is
component detection
port(
clk : in std_logic;
reset : in std_logic;
imgEdge : in std_logic;
no_edge : in std_logic_vector(1 downto 0);
Mono : in std_logic;
ImgColumns : in std_logic_vector(9 downto 0);
ImgRows : in std_logic_vector(8 downto 0);
edge_formating : out std_logic;
ProcessRGB : in std_logic;
ProcessingRGB : out std_logic;
imgRed : in std_logic_vector(7 downto 0);
imgGreen : in std_logic_vector(7 downto 0);
imgBlue : in std_logic_vector(7 downto 0);
address: out std_logic_VECTOR(15 downto 0);
datain: out std_logic_VECTOR(7 downto 0);
we_tool: out std_logic);
end component;
signal clk : std_logic;
signal reset : std_logic;
signal imgEdge : std_logic;
signal no_edge : std_logic_vector(1 downto 0);
signal Mono : std_logic;
signal ImgColumns : std_logic_vector(9 downto 0);
signal ImgRows : std_logic_vector(8 downto 0);
signal ProcessRGB : std_logic;
signal imgRed : std_logic_vector(7 downto 0);
signal imgGreen : std_logic_vector(7 downto 0);
signal imgBlue : std_logic_vector(7 downto 0);
signal edge_formating : std_logic;
signal ProcessingRGB : std_logic;
signal address : std_logic_vector(15 downto 0);
signal datain : std_logic_vector(7 downto 0);
signal we_tool : std_logic;
type ByteT is (
  c0,   c1,   c2,   c3,   c4,   c5,   c6,   c7,   c8,   c9,   c10,  c11,  c12,  c13,  c14,  c15,
  c16,  c17,  c18,  c19,  c20,  c21,  c22,  c23,  c24,  c25,  c26,  c27,  c28,  c29,  c30,  c31,
  c32,  c33,  c34,  c35,  c36,  c37,  c38,  c39,  c40,  c41,  c42,  c43,  c44,  c45,  c46,  c47,
  c48,  c49,  c50,  c51,  c52,  c53,  c54,  c55,  c56,  c57,  c58,  c59,  c60,  c61,  c62,  c63,
  c64,  c65,  c66,  c67,  c68,  c69,  c70,  c71,  c72,  c73,  c74,  c75,  c76,  c77,  c78,  c79,
  c80,  c81,  c82,  c83,  c84,  c85,  c86,  c87,  c88,  c89,  c90,  c91,  c92,  c93,  c94,  c95,
  c96,  c97,  c98,  c99,  c100, c101, c102, c103, c104, c105, c106, c107, c108, c109, c110, c111,
  c112, c113, c114, c115, c116, c117, c118, c119, c120, c121, c122, c123, c124, c125, c126, c127,
  c128, c129, c130, c131, c132, c133, c134, c135, c136, c137, c138, c139, c140, c141, c142, c143,
  c144, c145, c146, c147, c148, c149, c150, c151, c152, c153, c154, c155, c156, c157, c158, c159,
  c160, c161, c162, c163, c164, c165, c166, c167, c168, c169, c170, c171, c172, c173, c174, c175,
  c176, c177, c178, c179, c180, c181, c182, c183, c184, c185, c186, c187, c188, c189, c190, c191,
  c192, c193, c194, c195, c196, c197, c198, c199, c200, c201, c202, c203, c204, c205, c206, c207,
  c208, c209, c210, c211, c212, c213, c214, c215, c216, c217, c218, c219, c220, c221, c222, c223,
  c224, c225, c226, c227, c228, c229, c230, c231, c232, c233, c234, c235, c236, c237, c238, c239,
  c240, c241, c242, c243, c244, c245, c246, c247, c248, c249, c250, c251, c252, c253, c254, c255);
subtype Byte is ByteT;
type ByteFileType is file of Byte;
file infile : ByteFileType open read_mode is "1.tif";

function int2bit_vec(A: integer; SIZE: integer) return BIT_VECTOR is


variable imgRESULT: BIT_VECTOR(SIZE-1 downto 0);
variable TMP: integer;
begin
TMP:=A;
for i in 0 to SIZE-1 loop
if TMP mod 2 = 1 then imgRESULT(i):='1';
else imgRESULT(i):='0';
end if;
TMP:=TMP / 2;
end loop;
return imgRESULT;
end;
begin
UUT : detection
port map (
clk => clk,
reset => reset,
imgEdge => imgEdge,
edge_formating => edge_formating,
no_edge => no_edge,
Mono => Mono,
ImgColumns => ImgColumns,
ImgRows => ImgRows,
ProcessRGB => ProcessRGB,
ProcessingRGB => ProcessingRGB,
imgRed => imgRed,
imgGreen => imgGreen,
imgBlue => imgBlue,
address => address,
datain => datain,
we_tool => we_tool
);
Clocket : process --40 MHz -> T = 25 ns
begin
clk <= '1';

wait for 10.5 ns;


clk <= '0';
wait for 10.5 ns;
end process;
reset <= '1', '0' after 10 ns;
imgEdge <= '1' , '0' after 45 ns;
no_edge <= "10"; --"01";
Mono <= '0';
Data : process(clk)
variable Clk_Up : std_logic := '1';
variable Prev_Process : std_logic;
variable imgBloque : std_logic_vector(511 downto 0) :=
X"303537A3E404504040353C3E414441421413B40454843414141444634745454444444445464747404040464646464542424247474648474242424747464648434343";
variable Row : std_logic_vector(8 downto 0);
variable Column : std_logic_vector(8 downto 0);
variable Pixelxa : integer range 0 to 511;
variable Pixelx : Byte;
variable Template : std_logic_vector(7 downto 0);
variable JHeader : std_logic := '0';
variable FinImg : std_logic;
begin
if reset = '1' then
imgRed <= (others => '1');
imgGreen <= (others => '1');
imgBlue <= (others => '1');
Prev_Process := '1';
Pixelxa := 0;
ProcessRGB <= '0';
FinImg := '0';
if JHeader = '0' then
for i in 0 to 53 loop
read(infile, Pixelx);
case i is
when 18 =>
Column(7 downto 0) := To_Stdlogicvector(int2bit_vec(ByteT'pos(Pixelx),8));
when 19 =>
Template := To_Stdlogicvector(int2bit_vec(ByteT'pos(Pixelx),8));
Column(9 downto 8) := Template(1 downto 0);
when 22 =>
Row(7 downto 0) := To_Stdlogicvector(int2bit_vec(ByteT'pos(Pixelx),8));
when 23 => --2nd byte of Height
Template := To_Stdlogicvector(int2bit_vec(ByteT'pos(Pixelx),8));
Row(8) := Template(0);
when 24 =>
ImgColumns <= Column - 1;

ImgRows <= Row - 1;


when others =>
null;
end case;
end loop;
JHeader := '1';
end if;
Row := (others => '0');
Column := (others => '0');
elsif (clk = '1' and clk'event) then
if Prev_Process = '1' and ProcessingRGB = '0' then
read(infile,Pixelx);
imgBlue <= To_Stdlogicvector(int2bit_vec(ByteT'pos(Pixelx),8));
read(infile,Pixelx);
imgGreen <= To_Stdlogicvector(int2bit_vec(ByteT'pos(Pixelx),8));
read(infile,Pixelx);
imgRed <= To_Stdlogicvector(int2bit_vec(ByteT'pos(Pixelx),8));
ProcessRGB <= '1';
if Column = ImgColumns then
Column := (others => '0');
if Row = ImgRows then
Row := (others => '0');
else
Row := Row + 1;
end if;
else
Column := Column + 1;
end if;
elsif ProcessingRGB = '1' then
ProcessRGB <= '0';
end if;
if FinImg = '0' then
Prev_Process := ProcessingRGB;
else
Prev_Process := '0';
ProcessRGB <= '0';
File_Close(infile);
end if;
assert not (FinImg='1' and edge_formating = '0')
report "no_edge Completed"
severity FAILURE;
if Row = 0 and Column = 0 then
FinImg := '1';

end if;
end if;
end process Data;
end IMG_ARCHITECTURE;
configuration TESTBENCH_FOR_gs of edge is
  for IMG_ARCHITECTURE
    for UUT : detection
      use entity work.detection;
    end for;
  end for;
end TESTBENCH_FOR_gs;
