
The Journal of China Universities of Posts and Telecommunications
February 2015, 22(1): 50−56
www.sciencedirect.com/science/journal/10058885
http://jcupt.xsw.bupt.cn

Traffic light detection and recognition for autonomous vehicles


Guo Mu, Zhang Xinyu, Li Deyi, Zhang Tianlei, An Lifeng

Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China

Abstract
Traffic light detection and recognition is essential for autonomous driving in urban environments. A camera-based algorithm for real-time, robust traffic light detection and recognition, designed especially for autonomous vehicles, was proposed. Although current traffic light recognition algorithms work reliably, most of them are designed for detection from a fixed position, and their effectiveness on autonomous vehicles under real-world conditions remains limited. Some methods achieve high accuracy on autonomous vehicles, but they cannot work without the aid of a high-precision prior map. The authors present a camera-based algorithm for the problem. The image processing flow is divided into three steps: pre-processing, detection and recognition. First, the red-green-blue (RGB) color space is converted to hue-saturation-value (HSV) as the main content of pre-processing. In the detection step, a transcendental color threshold method is used for initial filtering, and prior knowledge is applied to scan the scene and quickly establish candidate regions. For recognition, histogram of oriented gradients (HOG) features and a support vector machine (SVM) are used to recognize the state of the traffic light. The proposed system was evaluated on our autonomous vehicle. With voting schemes, the proposed algorithm provides sufficient accuracy for autonomous vehicles in urban environments.
Keywords  autonomous vehicle, traffic light detection and recognition, histogram of oriented gradients

1 Introduction
Over the past few decades, many attempts have been made to develop autonomous vehicles. Nowadays, highway driving with autonomous vehicles has become more and more reliable [1], while fully autonomous driving in real urban environments is still a tough and challenging task [2]. Robust detection and recognition of traffic lights is essential for an autonomous vehicle to take appropriate actions at intersections in urban environments. However, robust detection of traffic lights is not easy to achieve: an image may contain a mess of objects whose colors are similar to those of the traffic lights, and the shape of a traffic light is so simple that it is hard to extract sufficient features [3]. Worse still, traffic lights come in a variety of types: some are arranged horizontally while others are vertical,

Received date: 14-10-2014
Corresponding author: Guo Mu, E-mail: guom08@gmail.com
DOI: 10.1016/S1005-8885(15)60624-0

and some are composed only of circles while others include arrows. In cities like Tianjin, China, the traffic lights look like progress bars, with their color indicating stop or go and their dynamic length indicating the remaining time. Different types of traffic lights are shown in Fig. 1.

Fig. 1  Various types of traffic lights

Human vision seems to be the only way to detect the state of traffic lights, and almost all experimental autonomous vehicles for urban environments are equipped with cameras to detect and recognize them. Some traffic light recognition algorithms have been proposed in recent years. Ref. [4] used spot light detection and template matching to identify traffic lights,

and their result is fairly accurate, but the method is designed for non-real-time applications. Refs. [5−6] also described methods for detecting traffic lights, but these methods are based on fixed cameras; when applied to autonomous vehicles, where the on-vehicle camera moves as the vehicle goes, they lose efficiency. Ref. [7] used statistical information in HSV color space to obtain color thresholds, used these thresholds for image segmentation, and then applied a machine learning algorithm for classification. Ref. [8] proposed a method that can detect traffic lights at long distance: centers of traffic lights are searched with a Gaussian mask, and candidates are verified using a suggested existence-weight map. However, these two methods can only detect and recognize circular traffic lights. Besides, the attempts mentioned above detect and classify traffic lights from onboard images alone, without annotated prior maps. Ref. [9] designed methods for automatically mapping the three-dimensional positions of traffic lights, and mapped more than four thousand traffic lights to create a prior map. Ref. [10] presented a passive camera-based pipeline for traffic light detection and recognition, which is used by Google driverless cars and has passed several tests in real urban environments. Both methods achieve high accuracy based on prior knowledge of traffic light locations; their main drawback is that they require vehicle localization and prior acquisition of traffic light positions. In this article, the authors propose a new approach to detect and recognize traffic lights in vertical arrangement; both circular traffic lights and those with arrows are handled. The approach is designed for autonomous vehicles, so all processing is done in real time with an on-vehicle camera.
The rest of this article is organized as follows. In Sect. 2, the system architecture of the adopted autonomous vehicle is described. The main steps of the detection and recognition system are detailed in Sect. 3. Experimental results are shown in Sect. 4. Conclusions and future work are given in Sect. 5.

2 System and vehicle


In the competition of Future Challenge 2012 in China, two types of contests were held: a 7 km urban environment course and a 16 km rural road course. The autonomous vehicle we employed won both contests. On November 24th, 2012, we successfully completed the Beijing-Tianjin highway test without any manual intervention. The total length of the highway test section is about 112 km and the test took 85 min, with an average speed of 79.06 km/h. During the test, the autonomous vehicle completed 12 overtaking maneuvers and 36 lane changes, and reached a maximum speed of 105 km/h.
2.1 Hardware implementation

The vehicle used in this research is a modified Hyundai Tucson, shown in Fig. 2. The throttle, shifter, brake and steering system were rebuilt so that they can be controlled by computer. We can also operate the turn and brake signals by command to make the vehicle act like a human driver. An emergency button is provided for manual takeover, so the driver can take control of the vehicle at any time.

Fig. 2  Autonomous vehicle of our team

Currently, three lidars, one radar and three cameras are mounted on the vehicle for environment perception. Two SICK LD-LRS lidar scanners are set up at the front and on top of the vehicle to detect other vehicles, pedestrians and other obstacles ahead. One IBEO Lux-4 lidar is mounted on top of the vehicle; we use this lidar to detect the boundary of the road. One millimeter-wave radar is mounted at the rear of the vehicle to detect obstacles in the rear area. Three cameras are mounted at the front of the car, from left to right; they are all fixed inside the car to avoid dust, rain and other interference. We use these cameras to detect traffic lights, traffic signs, lane marks and the movement trends of other vehicles. Sensor mounting positions, functions and coverage areas are shown in Table 1.
Three on-board 4-core i7 computers provide sufficient processing power: one runs our vision algorithms while the others handle the lidar and radar algorithms, decision-making, control and low-level communication.

Electrical power is provided by two groups of lead-acid cells.
Table 1  Sensor mounting positions, functions and coverage areas

Sensor type       Mounting position  Function              Coverage
Camera1           Front              Traffic lights/signs  30 m~90 m, 24°
Camera2           Front              Lane-marks            4 m~30 m, 72°
Camera3           Front              Obstacles             4 m~120 m, 72°
SICK lidar1       Front              Front obstacles       0 m~50 m, 180°
SICK lidar2       Top                Front obstacles       0 m~50 m, 180°
IBEO Lux-4        Top                Road boundary         0 m~200 m, 72°
Millimeter radar  Rear               Rear obstacles        0 m~200 m, 72°

2.2 Software architecture

The development of autonomous vehicles involves mechanical and electrical reconstruction, environment perception, decision making and automatic control. Whereas the mechanical and electrical reconstruction is the foundation of the project and belongs to the hardware portion, the other three sections are linked serially and constitute the software architecture. The environment perception section includes information gathering, image processing, and radar and lidar signal processing. The decision making section comprehensively analyzes the results provided by environment perception and makes appropriate decisions. The last section, automatic control, fulfills the decisions made by the previous section by adjusting the posture of the vehicle with appropriate control methods. Fig. 3 shows the information processing flow of the autonomous vehicle.

Fig. 3  Information processing flow

Traffic light detection and recognition is designed as part of the image processing module in the environment perception section, alongside the detection of lane marks and traffic signs.

3 Proposed algorithm
The authors used an off-the-shelf camera (AVT Pike F-100C) as the vision sensor for detecting traffic lights. This camera is mounted behind the windshield of our vehicle.
We labeled about 5 000 sample images of 13 categories for offline training. Each sample image is converted to a high-dimensional feature vector by a HOG descriptor. These samples are then used to train a hierarchical SVM classifier. For online processing, the pipeline is divided into three phases: pre-processing, detection and recognition. The pipeline of the proposed algorithm is shown in Fig. 4.

Fig. 4 Processing pipeline of proposed algorithm

3.1 Pre-processing

The most significant feature of traffic lights is their color. However, the colors of traffic lights vary under different lighting conditions caused by weather, time of day or other factors, and the RGB color space is sensitive to light intensity. The color space should therefore be converted in order to normalize this drift in colors. The HSV color space is much less affected by lighting changes, and its hue component H describes the color itself. Thus, we first convert the image from RGB to HSV.
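
As a concrete illustration, the following is a minimal sketch of this conversion using OpenCV (the library the article later names for SVM training); the input file name is hypothetical. Converting a float image makes OpenCV report hue in the [0°, 360°) range used in Sect. 3.2.

import cv2
import numpy as np

frame = cv2.imread('frame.png')                 # hypothetical input image (BGR)
frame_f = frame.astype(np.float32) / 255.0      # float image gives full-range hue
hsv = cv2.cvtColor(frame_f, cv2.COLOR_BGR2HSV)  # H in [0, 360), S and V in [0, 1]
h, s, v = cv2.split(hsv)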
3.2 Detection

The goal of the detection phase is to extract candidates for traffic lights. We first do initial filtering using color information. We manually selected about 2 000 images containing traffic lights, all taken by the traffic detection camera under different environment and weather conditions. Then we semi-automatically segmented the colors of the red, yellow and green lights and used them as three training sets. We assume the distribution of each traffic light color to be Gaussian and trained red, yellow and green separately. Note that the hue component of red is discontinuous near 0° and 360°, while the hue components of yellow and green are less than 200°, so we apply a one-to-one mapping to aid the training. Let $H_m$ be the new mapped variable:
$H_m = \begin{cases} H + 60, & H \leqslant 300 \\ H - 300, & H > 300 \end{cases}$  (1)

The three color distributions obtained by Gaussian fitting are:
$H_{m\_\mathrm{R}} \sim N(54.2, 12.4),\quad H_{m\_\mathrm{Y}} \sim N(112.6, 23.5),\quad H_{m\_\mathrm{G}} \sim N(194.7, 21.6)$  (2)
We assume that if a pixel satisfies certain conditions, it belongs to the corresponding traffic light color. So we set the following transcendental color thresholds (each hue interval is the fitted mean plus or minus one standard deviation) to do color-based segmentation:
$\mathrm{pixel} = \begin{cases} \mathrm{red}, & H_m \in [41.8, 66.6],\ S \geqslant 100,\ V \geqslant 50 \\ \mathrm{yellow}, & H_m \in [89.1, 136.1],\ S \geqslant 100,\ V \geqslant 50 \\ \mathrm{green}, & H_m \in [173.1, 216.3],\ S \geqslant 100,\ V \geqslant 50 \\ \mathrm{other}, & \mathrm{otherwise} \end{cases}$  (3)
where S is the saturation component and V is the value component. After color segmentation, if the detected pixels form connected regions, we can compute their bounding boxes.
Then, we apply prior knowledge to screen the bounding boxes, so as to establish candidate regions for recognition. Three necessary conditions must be satisfied. First, whether the traffic lights are circular or arrow shaped, the colored pixels must account for more than 70% of the area of the bounding box. Second, for a camera in a fixed position on a vehicle, there is a certain relationship between the area and the position of a bounding box. As the camera is mounted on the vehicle and is always below the traffic light, the larger the distance between the camera and the traffic light, the smaller the area of the bounding box; likewise, the smaller the distance, the larger the area becomes. This phenomenon is shown in Fig. 5. We analyzed a group of images and determined the relationship between the area and position of the bounding box; this relationship is used for further filtering. Finally, there are basic templates for traffic lights. For example, a red region R has a bounding box B with width l and height h. Shifting B down by h and 2h creates new bounding boxes B1 and B2. Intuitively, if R is a candidate for a traffic light, B1 and B2 should be mainly composed of dark pixels; otherwise the region is not a qualified candidate. Similar restrictions can be applied to the yellow and green cases. After this triple filtering, we may obtain several candidates. Before we go to the recognition phase, however, we have to extend the candidate regions using the last condition mentioned, so that the extended regions contain the entire traffic light.
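
A minimal sketch of the color segmentation of Eqs. (1)−(3), assuming OpenCV and NumPy and the float HSV image from the pre-processing sketch above (so S and V lie in [0, 1] and the 8-bit thresholds of Eq. (3) are rescaled); the function and variable names are ours:

import cv2
import numpy as np

# Hue intervals from Eq. (3); S and V thresholds rescaled from [0, 255] to [0, 1].
HUE_RANGES = {'red': (41.8, 66.6), 'yellow': (89.1, 136.1), 'green': (173.1, 216.3)}
S_MIN, V_MIN = 100 / 255.0, 50 / 255.0

def remap_hue(h):
    """Eq. (1): fold the red hue wrap-around near 0/360 degrees."""
    return np.where(h <= 300.0, h + 60.0, h - 300.0)

def color_masks(hsv):
    """Per-color binary masks from a float HSV image (H in [0, 360))."""
    h, s, v = cv2.split(hsv)
    hm = remap_hue(h)
    lit = (s >= S_MIN) & (v >= V_MIN)
    return {name: ((hm >= lo) & (hm <= hi) & lit).astype(np.uint8)
            for name, (lo, hi) in HUE_RANGES.items()}

def bounding_boxes(mask):
    """Bounding boxes (x, y, w, h) of the connected regions of one mask."""
    n, _, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    return [tuple(stats[i, :4]) for i in range(1, n)]   # row 0 is the background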

Fig. 5  Relationship between the area and position of traffic lights
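
The first and third screening conditions translate directly into code; a sketch for the red case follows. The second condition, the fitted area-position relationship, is omitted because the article does not give the fitted values, and the darkness thresholds here are our assumptions:

def is_red_candidate(mask_red, v, box, fill_min=0.70, dark_frac=0.75,
                     v_dark=50 / 255.0):
    """Screen a red bounding box using conditions one and three."""
    x, y, w, h = box
    # Condition 1: colored pixels must cover more than 70% of the box.
    if mask_red[y:y + h, x:x + w].mean() <= fill_min:
        return False
    # Condition 3: boxes B1 and B2, shifted down by h and 2h, must be mainly
    # dark (assumed: at least 75% of pixels below an assumed value threshold).
    for k in (1, 2):
        patch = v[y + k * h:y + (k + 1) * h, x:x + w]
        if patch.size == 0:                      # shifted box left the image
            return False
        if (patch < v_dark).mean() < dark_frac:  # not dark enough
            return False
    return True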

We need all three conditions above to filter the candidate regions for recognition when no high-precision integrated navigation system (INS) is available. With the aid of a high-precision INS, however, we can do some pre-mapping to obtain the precise locations of traffic lights. The high-precision global positioning system (GPS) provides an accurate location and the compass provides the absolute heading angle of the host vehicle. Based on this, we may collect images at all intersections and label the traffic light pixel regions aligned with the location and heading angle of the host vehicle. When we come to the same intersection again, accurate regions for the traffic lights can be obtained by linear interpolation based on the real-time data from the high-precision INS.
3.3 Recognition

Once the candidates for traffic lights are found, we use an SVM trained with HOG features to confirm the state of the traffic light. HOG features, proposed by Dalal et al. [11] for pedestrian detection, have become popular for recognition. We chose this feature for traffic light recognition because of its scale invariance, local contrast normalization, coarse spatial binning and weighted gradient orientations. Samples are classified into 13 categories: circular, right arrow, forward arrow and left arrow shapes in red, yellow and green, plus false candidates. All samples were labeled manually.
We use a 9-bin histogram in the experiment. The traffic light training samples are rescaled to 40×80 pixel images, as are the non-traffic-light samples. We set the sizes of the block and cell to 10×10 and 5×5 pixels, respectively, and the block stride to 5×5. The size of the HOG feature vector is therefore 3 780. We use these features to train a linear hierarchical SVM classifier using OpenCV. The parameters of the SVM classifier are optimized using cross-validation results. The trained SVM classifiers are saved to a file for recognition.
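
A minimal training sketch with these parameters, using OpenCV's Python bindings. A single multiclass linear SVM stands in for the hierarchical classifier, whose structure the article does not detail; the sample data and file name are placeholders:

import cv2
import numpy as np

# HOG with the stated parameters: 40x80 window, 10x10 block,
# 5x5 block stride, 5x5 cell, 9 orientation bins.
hog = cv2.HOGDescriptor((40, 80), (10, 10), (5, 5), (5, 5), 9)
assert hog.getDescriptorSize() == 3780      # matches the feature size above

def train_svm(samples, labels):
    """samples: list of 40x80 grayscale uint8 crops; labels: 13 class ids."""
    feats = np.array([hog.compute(img).ravel() for img in samples],
                     dtype=np.float32)
    svm = cv2.ml.SVM_create()
    svm.setType(cv2.ml.SVM_C_SVC)
    svm.setKernel(cv2.ml.SVM_LINEAR)
    # trainAuto runs the cross-validated parameter search mentioned above.
    svm.trainAuto(feats, cv2.ml.ROW_SAMPLE, np.array(labels, np.int32))
    svm.save('traffic_light_svm.xml')       # placeholder file name
    return svm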
We use the extended regions provided by the detection phase for recognition. Note that after applying the prior knowledge, an entire traffic light is usually detected in a region with very little other content, which is excellent for recognition.

4 Experiments

We ran the proposed algorithm on our autonomous vehicle and tested it in a real urban environment. The computer is equipped with an Intel i7 quad-core processor and 8 GB memory. The images are captured at 1 000×1 000 pixels and 30 frames/s.
The experiments were done without the high-precision INS, at different times of the day; the traffic light images therefore include varying illumination, scale and pose. The intersection state (go/stop) detection rate is shown in Fig. 6.

Fig. 6  Detection rate at different times of day

At ranges of less than 60 m, the detection rate is higher than 90% at every time we tested. Detection of the stop status (red) is better than that of go, because the red color is easier to recognize than the green one. Also, results in the afternoon are not as good as those in the morning and at night: we encountered some backlighting cases in the afternoon, which make it hard for the camera to capture correct colors.
Besides the detection rate of the intersection status, we also need to distinguish arrow lights from circular ones. The detection rate of arrows at different times is shown in Fig. 7.

Fig. 7  Detection rate of arrows at different times of day


It can be concluded that at distances of less than 40 m, we can distinguish arrow lights from circular ones at a detection rate higher than 80%. With a voting scheme, this accuracy is sufficient for the autonomous vehicle to make decisions. At distances beyond 50 m, there is no distinct difference between circular and arrow traffic lights in the image.
As suggested above, traffic lights can be detected at up to 120 m, but the average guaranteed detection distance is about 40 m, within which we can detect the status of the intersection and distinguish arrows from circles with relatively high accuracy.
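
The article does not specify the voting scheme; one plausible minimal form, assumed here, is a majority vote over the per-frame classifications in a short sliding window:

from collections import Counter, deque

class StateVoter:
    """Majority vote over recent per-frame states (window size assumed)."""
    def __init__(self, window=9):
        self.history = deque(maxlen=window)

    def update(self, frame_state):
        """Feed one per-frame classification; return the smoothed state."""
        self.history.append(frame_state)
        state, votes = Counter(self.history).most_common(1)[0]
        # Commit only when one state holds a strict majority of the window.
        return state if votes > len(self.history) // 2 else 'unknown'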
At distances of less than 40 m, the detection results of the proposed method over all times of day are shown in Table 2. The precision and recall for the stop and go statuses are shown in Fig. 8 and Fig. 9, respectively.
Table 2  Detection and recognition results of the proposed method within 40 m

Type                 Correct  Missing  False alarm
Circular red         684      35       43
Circular green       921      64       14
Left arrow red       102      17       1
Left arrow green     184      22       0
Forward arrow red    84       12       2
Forward arrow green  204      19       1
Right arrow red      34       6        3
Right arrow green    264      31       2

Fig. 8  Precision and recall in stop status

Fig. 9  Precision and recall in go status
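
Reading Table 2 with the standard definitions (assumed here), precision = correct / (correct + false alarm) and recall = correct / (correct + missing); for example, circular red gives a precision of 684/727 ≈ 94.1% and a recall of 684/719 ≈ 95.1%.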

5 Conclusions and future work


A camera-based algorithm for real-time, robust traffic light detection and recognition was proposed. This algorithm is designed mainly for autonomous vehicles. Experiments show that our algorithm performs well in accurately detecting targets and in determining the distance and time to those targets. However, the method proposed here does have some drawbacks. First, it performs well in the daytime but not as well at night; the false alarm rate increases at night due to greater light interference. Second, while the method can detect both circular traffic lights and those with arrows, only the classical suspended, vertical traffic lights were detected. Detection and recognition of more types of traffic lights will be an important area for future work.
Acknowledgements
This work was supported by the Natural Basic Research Program of China (91120306, 61203366).

References
1. Luettel T, Himmelsbach M, Wuensche H J. Autonomous ground vehicles: Concepts and a path to the future. Proceedings of the IEEE, 2012, 100(Special Centennial Issue): 1831−1839
2. Levinson J, Askeland J, Becker J, et al. Towards fully autonomous driving: Systems and algorithms. Proceedings of the 2011 IEEE Intelligent Vehicles Symposium (IVS'11), Jun 5−9, 2011, Baden-Baden, Germany. Piscataway, NJ, USA: IEEE, 2011: 163−168
3. Buch N, Velastin S A, Orwell J. A review of computer vision techniques for the analysis of urban traffic. IEEE Transactions on Intelligent Transportation Systems, 2011, 12(3): 920−939
4. de Charette R, Nashashibi F. Real time visual traffic lights recognition based on spot light detection and adaptive traffic lights templates. Proceedings of the 2009 IEEE Intelligent Vehicles Symposium (IVS'09), Jun 3−5, 2009, Xi'an, China. Piscataway, NJ, USA: IEEE, 2009: 358−363
5. Chung Y C, Wang J M, Chen S W. A vision-based traffic light detection system at intersections. Journal of Taiwan Normal University: Mathematics, Science and Technology, 2002, 47(1): 67−86
6. Yung N H C, Lai A H S. An effective video analysis method for detecting red light runners. IEEE Transactions on Vehicular Technology, 2001, 50(4): 1074−1084
7. Gong J W, Jiang Y H, Xiong G M, et al. The recognition and tracking of traffic lights based on color segmentation and CAMSHIFT for intelligent vehicles. Proceedings of the 2010 IEEE Intelligent Vehicles Symposium (IVS'10), Jun 21−24, 2010, San Diego, CA, USA. Piscataway, NJ, USA: IEEE, 2010: 431−435
8. Hwang T H, Joo I H, Cho S I. Detection of traffic lights for vision-based car navigation system. Advances in Image and Video Technology: Proceedings of the 1st Pacific Rim Symposium (PSIVT'06), Dec 10−13, 2006, Hsinchu, China. LNCS 4319. Berlin, Germany: Springer, 2006: 682−691
9. Fairfield N, Urmson C. Traffic light mapping and detection. Proceedings of the 2011 IEEE International Conference on Robotics and Automation (ICRA'11), May 9−13, 2011, Shanghai, China. Piscataway, NJ, USA: IEEE, 2011: 5421−5426
10. Levinson J, Askeland J, Dolson J, et al. Traffic light mapping, localization, and state detection for autonomous vehicles. Proceedings of the 2011 IEEE International Conference on Robotics and Automation (ICRA'11), May 9−13, 2011, Shanghai, China. Piscataway, NJ, USA: IEEE, 2011: 5784−5791
11. Dalal N, Triggs B. Histograms of oriented gradients for human detection. Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05): Vol 1, Jun 20−26, 2005, San Diego, CA, USA. Los Alamitos, CA, USA: IEEE Computer Society, 2005: 886−893

(Editor: Zhang Kexin)
