
2015 International Conference on Circuit, Power and Computing Technologies [ICCPCT]

Vehicle Detection and Attribute Based Search of Vehicles in Video Surveillance System

Bashirahamad F. Momin
Department of Computer Science & Engineering
Walchand College of Engineering, Sangli
Maharashtra, India
bfmomin@yahoo.com

Tabssum M. Mujawar
Department of Computer Science & Engineering
Walchand College of Engineering, Sangli
Maharashtra, India
tmujawar66@gmail.com

Abstract— Vehicle detection is important in traffic monitoring and control. Traditional methods based on license plate recognition or vehicle classification may not be effective for low-resolution cameras or when the number plate is not visible. Likewise, vehicle detection in urban scenarios based on traditional methods such as background subtraction fails. To overcome these limitations, this paper presents a co-training based approach for vehicle detection [1]. The feature selected for detection is Haar: a classifier is trained on Haar features, and AdaBoost is used to obtain a strong classifier. Once vehicles are detected, the next step is to search for particular vehicles based on their description. Searching for suspicious vehicles is important in criminal investigation. The search framework allows the user to search for vehicles based on attributes such as colour, date and time, speed, and direction of travel. An example of an attribute-based vehicle search query is "Search for yellow cars moving in the horizontal direction from 5.30 pm to 8 pm". The output of a search query is a display of reduced-size versions of the detected vehicles.

Index Terms— vehicle detection, vehicle tracking, attribute extraction, vehicle search

Fig. 1. Model for Vehicle Detection and Attribute Based Search

I. INTRODUCTION
Automatically searching for suspicious vehicles is important in criminal investigation. Vehicle detection in urban scenes is more challenging due to the high volume of data, varying weather conditions, and varying lighting conditions such as shadows. In these situations, traditional approaches such as background subtraction fail. This paper presents a method for vehicle detection that works well under these conditions and completes an end-to-end system for vehicle retrieval based on semantic attributes. The semantic attributes selected for vehicle search are speed, date and time, colour of the detected vehicle, and direction of travel. This paper presents a training based approach, called the co-training method, for detection. To deal with different types of vehicles such as buses, cars, trucks and heavy vehicles, a deformable aspect-ratio sliding window is used.

Fig. 2. a. Shape-free appearance space; b. Changing sliding window [1]
II. RELATED WORK of object parts proposed by Schneiderman and kanade[7].
Most surveillance systems use background modelling for vehicle detection, but it fails in crowded scenes: when multiple vehicles are close to each other they are detected as a single blob. Appearance based vehicle detection includes the work of Viola and Jones [2], detection using edgelet features [3], and detection using strip features [4]. Support vector machines with histograms of oriented gradients have also been used for vehicle detection [5][6]. Vehicle detection based on statistical learning of object parts was proposed by Schneiderman and Kanade [7]. These methods show good performance, but they require a large amount of labeled training samples and run below 15 frames/second.




III. VEHICLE DETECTION

The proposed model for vehicle detection and attribute based search is shown in Fig. 1. This section describes the procedure for forming the training dataset, dealing with multiple vehicle types, and automatic detection of vehicles in urban scenes.
A. Data Collection For Training
We collected videos from our campus using traffic surveillance cameras; by capturing data over several months, we obtained data in a variety of weather and illumination conditions. From the collected videos, training samples are obtained by the following steps (a sketch of this procedure is given below):
• Specify a Region of Interest (ROI).
• In the specified region of interest, background subtraction [8] is used to obtain foreground blobs, i.e. vehicles.
• Collect vehicle samples by analyzing the shape of each foreground blob.
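To make this concrete, here is a minimal OpenCV sketch of the sample-collection loop. The MOG2 subtractor merely stands in for the foreground analysis of [8], and the video name, ROI coordinates and blob-shape thresholds are illustrative assumptions, not values from the paper.

import cv2

# Illustrative ROI and blob-shape thresholds (assumptions, not from the paper)
ROI = (100, 200, 600, 280)           # x, y, width, height of region of interest
MIN_AREA, MAX_ASPECT = 400, 3.0      # reject tiny blobs and implausible shapes

subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=True)

cap = cv2.VideoCapture("campus_traffic.avi")     # hypothetical input video
samples = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    x, y, w, h = ROI
    roi = frame[y:y + h, x:x + w]                # step 1: restrict to the ROI
    mask = subtractor.apply(roi)                 # step 2: background subtraction
    mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)[1]   # drop shadow pixels
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:                           # step 3: shape-based filtering
        bx, by, bw, bh = cv2.boundingRect(c)
        if cv2.contourArea(c) >= MIN_AREA and bw / float(bh) <= MAX_ASPECT:
            samples.append(roi[by:by + bh, bx:bx + bw])   # candidate vehicle sample
cap.release()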
B. Multiple Vehicle Types
To handle different and multiple types of vehicles, all training samples are resized to 24 by 24 so that they have the same aspect ratio. Here we deal only with appearance rather than the size of the vehicle; since shape rather than size is considered, this approach is called shape-free appearance models [9], shown in Fig. 2. Training images containing multiple vehicle types are resized to have the same aspect ratio, and during testing of a video for object detection the aspect ratio of the sliding window is changed to detect various types of vehicles, as shown in Fig. 2. Changing the sliding window allows detection of various types of vehicles such as buses, trucks and cars; a sketch is given below.
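A minimal sketch of the shape-free resizing and of the deformable aspect-ratio scan follows; the window base size, the ratio list and the stride are assumptions chosen for illustration.

import cv2

def to_shape_free(samples):
    """Warp every training sample to 24x24, discarding its aspect ratio."""
    return [cv2.resize(s, (24, 24)) for s in samples]

def sliding_windows(frame, base=48, ratios=(0.75, 1.0, 1.5), stride=12):
    """Scan a frame with windows of several aspect ratios so that buses,
    trucks and cars are all covered; every window is resized back to
    24x24 before classification."""
    H, W = frame.shape[:2]
    for r in ratios:                             # deformable aspect ratio
        w, h = int(base * r), base
        for y in range(0, H - h + 1, stride):
            for x in range(0, W - w + 1, stride):
                yield (x, y, w, h), cv2.resize(frame[y:y + h, x:x + w], (24, 24))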
C. Vehicle Detection
The basis for the learning algorithm is the framework given by Viola and Jones [11]. It consists of a cascade of AdaBoost classifiers in which the weak classifiers are thresholds on Haar-like features. Each stage of the cascade classifier minimizes false negatives. Haar features alone lead to false alarms; to minimize them, motion features are used. The detection result is improved by combining Haar-like and motion features: with this combined approach, the vehicle detector shows on average a 5% lower false alarm rate. Motion features are used to generate candidate regions, and Haar-like features are then used to detect the vehicles within those regions, as the sketch below illustrates.
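A sketch of this two-stage combination, assuming a cascade already trained on the samples of Section III-A (the file name vehicle_cascade.xml is hypothetical):

import cv2

cascade = cv2.CascadeClassifier("vehicle_cascade.xml")   # hypothetical trained cascade

def detect_vehicles(frame, motion_mask):
    """Run the Haar cascade only inside the motion candidate regions."""
    detections = []
    contours, _ = cv2.findContours(motion_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:                           # motion gives candidate regions
        x, y, w, h = cv2.boundingRect(c)
        gray = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
        # Haar-like features confirm or reject each candidate
        for (dx, dy, dw, dh) in cascade.detectMultiScale(gray, scaleFactor=1.25):
            detections.append((x + dx, y + dy, dw, dh))
    return detections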
1) Motion Feature [13]: The algorithm to obtain the motion image is shown in Fig. 3. Inter-frame difference is used to get motion pixels. The first step of the algorithm is to convert each frame to grayscale using (1):

Gray value = 0.2125R + 0.7154G + 0.0721B (1)

The difference of two gray frames is used to generate a difference image, and Canny edge detection is applied to get the edges of moving vehicles. After applying a dilation operation, the motion image is formed.

Fig. 3. Motion image [13]
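This procedure transcribes almost directly; the Canny thresholds and the 3x3 dilation kernel are assumptions, everything else follows (1) and the steps above.

import cv2
import numpy as np

def motion_image(frame_prev, frame_curr):
    """Build the motion image of Fig. 3 from two consecutive BGR frames."""
    coeff = np.array([0.0721, 0.7154, 0.2125])   # equation (1), BGR channel order
    g1 = (frame_prev @ coeff).astype(np.uint8)
    g2 = (frame_curr @ coeff).astype(np.uint8)
    diff = cv2.absdiff(g1, g2)                   # inter-frame difference
    edges = cv2.Canny(diff, 50, 150)             # edges of moving vehicles
    kernel = np.ones((3, 3), np.uint8)
    return cv2.dilate(edges, kernel)             # dilation gives the motion image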
2) Haar-like feature: A rectangle is denoted by the 5-tuple r = {x1, y1, w1, h1, α}, where (x1, y1) is the top-left point, w1 and h1 are the width and height of the rectangle, and α is the inclination of the rectangle. The rectangle feature is shown in Fig. 4.

Fig. 4. Haar feature

3) Motion and Haar Feature Based Detection: A connected component analysis algorithm is applied to the motion image to get bounding rectangles around objects; the object pixels lie within these rectangles. This output is used to determine the area of vehicles, and the region selected in this step is then used for Haar feature calculation. The classification methodology used is boosting: it reweights the training data and combines the outputs of weak classifiers to obtain a strong classifier.
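A sketch of the connected component step, with an assumed minimum-area filter:

import cv2

def candidate_regions(motion_img, min_area=200):
    """Bounding rectangles of connected components in the motion image."""
    n, labels, stats, _ = cv2.connectedComponentsWithStats(motion_img)
    boxes = []
    for i in range(1, n):                        # label 0 is the background
        x, y, w, h, area = stats[i]
        if area >= min_area:
            boxes.append((x, y, w, h))           # region for Haar feature calculation
    return boxes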

A cascade classifier is like a decision tree: at each stage, a classifier is trained to detect vehicles. If 30 stages are used for training, the detection rate is 99.9% and the false alarm rate is 50% at each stage.
1. Start with weights Wi = 1/N, i = 1, ..., N.
2. Repeat for m = 1, 2, ..., M:
   a. Fit the regression function fm(x) by weighted least squares of yi to xi with weights Wi.
   b. Update F(x) <- F(x) + fm(x).
   c. Update Wi <- Wi exp(-yi fm(xi)) and renormalize.
3. Output the classifier sign[F(x)] = sign[Σm fm(x)].

Fig. 6. AdaBoost training algorithm [14]
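Fig. 6 transcribes into NumPy as follows. Regression stumps serve here as the weighted least-squares learners fm; that choice of weak learner is ours for illustration and is not prescribed by the paper.

import numpy as np

def fit_stump(X, y, w):
    """Weighted least-squares regression stump f(x) = a if x_j > t else b."""
    best = None
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            idx = X[:, j] > t
            wp, wn = w[idx].sum(), w[~idx].sum()
            if wp == 0 or wn == 0:
                continue
            a = np.sum(w[idx] * y[idx]) / wp     # weighted mean where x_j > t
            b = np.sum(w[~idx] * y[~idx]) / wn   # weighted mean elsewhere
            err = np.sum(w * (y - np.where(idx, a, b)) ** 2)
            if best is None or err < best[0]:
                best = (err, j, t, a, b)
    _, j, t, a, b = best
    return lambda Z: np.where(Z[:, j] > t, a, b)

def adaboost_train(X, y, M=50):
    """y in {-1, +1}; returns the list of fitted fm whose sum is F(x)."""
    w = np.full(len(y), 1.0 / len(y))            # step 1: uniform weights
    fs = []
    for _ in range(M):                           # step 2
        f = fit_stump(X, y, w)                   # 2a: weighted least-squares fit
        fs.append(f)
        w *= np.exp(-y * f(X))                   # 2c: up-weight misclassified cases
        w /= w.sum()                             # renormalize
    return fs

def adaboost_predict(fs, X):
    return np.sign(sum(f(X) for f in fs))        # step 3: sign of the additive model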
A feature of the learning algorithm is that it uses multiple feature planes for feature selection, as shown in Fig. 5. The feature planes considered are red, green and blue, gradient magnitude, and local binary patterns, and the final detector combines them with Haar-like features. The AdaBoost algorithm used for training is shown in Fig. 6. Boosting works sequentially: it applies a classification algorithm to reweighted versions of the training data and takes a weighted majority vote of the sequence of classifiers produced. The training data are (x1, y1), ..., (xN, yN), where xi is a vector-valued feature and fm(x) is a classifier. The AdaBoost procedure trains the fm(x) classifiers on weighted versions of the training sample, giving higher weight to cases that are currently misclassified.

Fig. 5. Multiple feature planes used to select features [1]
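The feature planes named above could be assembled as follows; the local binary pattern used here is a simplified 8-neighbour variant, so this is a sketch rather than the paper's exact implementation.

import cv2
import numpy as np

def feature_planes(frame_bgr):
    """R, G, B, gradient magnitude and LBP planes for feature selection."""
    b, g, r = cv2.split(frame_bgr)
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    grad = np.clip(cv2.magnitude(gx, gy), 0, 255).astype(np.uint8)
    lbp = np.zeros_like(gray)
    neighbours = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                  (1, 1), (1, 0), (1, -1), (0, -1)]
    for k, (dy, dx) in enumerate(neighbours):    # 8-bit LBP code per pixel
        shifted = np.roll(np.roll(gray, dy, axis=0), dx, axis=1)
        lbp |= (shifted >= gray).astype(np.uint8) << k
    return [r, g, b, grad, lbp]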
IV. ATTRIBUTE EXTRACTION

For each vehicle track, attributes are extracted and stored in a backend database. A web form is generated that allows the user to specify the parameters by which vehicles are searched. The reference framework used is the IBM Smart Surveillance Solution [11].
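As an illustration of the backend store and of the kind of query the web form generates (the SQLite schema below is entirely an assumption; the paper builds on the IBM Smart Surveillance Solution [11]):

import sqlite3

con = sqlite3.connect("vehicle_tracks.db")       # hypothetical backend database
con.execute("""CREATE TABLE IF NOT EXISTS tracks (
    id        INTEGER PRIMARY KEY,
    begin_ts  TEXT,    -- date/time the track begins
    end_ts    TEXT,    -- date/time the track ends
    direction TEXT,    -- one of the 4 quantized directions
    colour    TEXT,    -- one of the six colour names
    speed     REAL     -- estimated speed
)""")

# e.g. the query "yellow cars moving east between 5.30 pm and 8 pm"
rows = con.execute(
    "SELECT id FROM tracks WHERE colour = ? AND direction = ? "
    "AND begin_ts BETWEEN ? AND ?",
    ("yellow", "E", "2015-01-05 17:30:00", "2015-01-05 20:00:00")).fetchall()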
We currently extract the following attributes.

1) Date and Time:
We store the date and time indicating the beginning, end and duration of each track. For this, an srt (SubRip) file can be used. An srt file consists of four parts:
1. A number indicating the subtitle's position in the sequence.
2. The time at which the subtitle should appear on screen and then disappear.
3. The subtitle itself.
4. A blank line indicating the start of a new subtitle.
For each frame in which a vehicle is detected, its time is extracted and stored, indicating the beginning, end and duration of travel; a parsing sketch is given below.
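One way to recover the begin and end times from such a file (the timestamp pattern below is the standard SubRip format):

import re

SRT_TIME = re.compile(r"(\d{2}:\d{2}:\d{2},\d{3}) --> (\d{2}:\d{2}:\d{2},\d{3})")

def track_times(path):
    """Yield (begin, end) timestamps, one pair per subtitle block."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            m = SRT_TIME.match(line.strip())
            if m:
                yield m.group(1), m.group(2)     # begin/end of one detection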
2) Direction of Travel:
Optical flow is the distribution of the apparent velocities of objects in an image. By calculating the optical flow between frames of a video, the velocities of objects in the video can be measured. Optical flow is used to determine the direction of travel of a vehicle by quantizing it into four directions. In this paper, the pyramidal Lucas-Kanade method is used to compute the optical flow, as in the sketch below.
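A sketch of this attribute, assuming grayscale frames and the centroid of the vehicle's bounding rectangle as the tracked point; the window size and pyramid depth are illustrative.

import cv2
import numpy as np

LK_PARAMS = dict(winSize=(15, 15), maxLevel=3)   # pyramidal Lucas-Kanade

def travel_direction(prev_gray, curr_gray, centroid):
    """Quantize the optical-flow displacement of one point into 4 directions."""
    p0 = np.array([[centroid]], dtype=np.float32)
    p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, p0, None,
                                             **LK_PARAMS)
    if not status[0][0]:
        return None                              # the point could not be tracked
    dx, dy = (p1 - p0)[0, 0]
    if abs(dx) >= abs(dy):                       # dominant axis decides
        return "E" if dx > 0 else "W"
    return "S" if dy > 0 else "N"                # image y grows downwards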
3) Color:
Allows the user to search for vehicles based on six colours: red, green, blue, yellow, black and white. The dominant colour is computed by converting each input video frame into a hue-based perceptual colour space and quantizing the hue into the six colours by computing hue-angle cut-offs. A histogram with six bins is built over the vehicle images belonging to a specific track, and the colour corresponding to the bin that receives the majority of votes is assigned as the dominant colour of the track. The conversion algorithm, shown in Fig. 7, is as follows (a code sketch is given below):
1. RGB values are transformed into XYZ values of the CIE (X, Y, Z) colour space:
X = 0.607R + 0.174G + 0.201B (2)
Y = 0.299R + 0.587G + 0.114B (3)
Z = 0.000R + 0.066G + 1.117B (4)
2. L*u*v* values of the (L*, u*, v*) colour space are obtained:
L* = 25(100Y/Y0)^(1/3) - 16 (5)
u* = 13L*(u' - u0) (6)
v* = 13L*(v' - v0) (7)
3. HSI values are calculated from L*, u*, v*:
I = L* (8)
H = arctan(v*/u*) (9)
S = [(u*)^2 + (v*)^2]^(1/2) (10)

Fig. 7. Color detection algorithm [12]

Y0, u0 and v0 depend on the characteristics of the display device and the sensitivity of the human eye; their values are 1.000, 0.201 and 0.461 respectively. H represents hue: angle 222 represents pure red, angle 318 represents pure green, and angle 80 represents pure blue.
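Equations (2)-(10) transcribe directly. The chromaticities u' and v' are not defined in the paper, so the standard CIELUV definitions are assumed below.

import numpy as np

Y0, U0, V0 = 1.000, 0.201, 0.461                 # constants given in the text

def rgb_to_hsi(r, g, b):
    """Fig. 7: RGB (in [0, 1]) -> CIE XYZ -> L*u*v* -> (H, S, I)."""
    X = 0.607 * r + 0.174 * g + 0.201 * b        # equation (2)
    Y = 0.299 * r + 0.587 * g + 0.114 * b        # equation (3)
    Z = 0.000 * r + 0.066 * g + 1.117 * b        # equation (4)
    d = X + 15 * Y + 3 * Z                       # CIELUV chromaticities (assumed)
    u_p, v_p = 4 * X / d, 9 * Y / d
    L = 25 * (100 * Y / Y0) ** (1 / 3) - 16      # equation (5)
    u = 13 * L * (u_p - U0)                      # equation (6)
    v = 13 * L * (v_p - V0)                      # equation (7)
    I = L                                        # equation (8)
    H = np.degrees(np.arctan2(v, u)) % 360       # equation (9), quadrant-safe arctan
    S = np.hypot(u, v)                           # equation (10)
    return H, S, I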
4) Speed:
The vehicle's motion is detected and tracked along the frames using an optical flow algorithm. Optical flow is the distribution of apparent velocities of movement of brightness patterns in an image and can arise from the relative motion of objects. For every detected vehicle in a video frame, we compute the centroid of the rectangle drawn around the detected vehicle. The distance travelled by the vehicle is calculated using the movement of the centroid over the frames, and the speed of the vehicle is estimated.
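Speed estimation from the centroid track might be sketched as follows; the metres-per-pixel calibration is deployment specific and assumed here.

import numpy as np

METRES_PER_PIXEL = 0.05                          # calibration constant (assumption)

def estimate_speed(centroids, fps):
    """Average speed of one vehicle from its per-frame centroids."""
    pts = np.asarray(centroids, dtype=float)     # shape (T, 2)
    steps = np.linalg.norm(np.diff(pts, axis=0), axis=1)   # pixels moved per frame
    distance_m = steps.sum() * METRES_PER_PIXEL
    duration_s = (len(pts) - 1) / fps
    return distance_m / duration_s               # metres per second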

V. EXPERIMENTS

Fig. 8. Detection result on traffic video

Fig. 9. Detection result in college campus video

A set of experiments was carried out on a database obtained from the college campus and on videos collected from a toll booth to evaluate the performance of the proposed algorithm. In each experiment, the training samples were randomly selected and the remainder was used for testing.

With detection by Haar-like features only and the general AdaBoost algorithm, cascade classifiers trained with a rescaling factor of 1.25 and 20 stages give 80% accurate detection with 120 false alarms. The performance results are shown in Table 1 and Fig. 10. Detection by motion and Haar-like features combined gives a 5% reduction in false alarms.

The detection algorithm applied to an urban video is shown in Fig. 8; it reports the time at which each vehicle is detected along with its direction of motion. The robustness of our vehicle detection method to crowded scenes and different lighting conditions comes from the data itself, and large-scale feature selection improves accuracy. Several false detections still occur, so adding them to the negative training set can improve the result, and colour estimation can be improved by learning different lighting conditions.

Table 1. Performance of proposed system

Method                           Detection rate   False alarm   Date and time estimation   Direction of travel estimation   Colour estimation
Detection with Haar only         80%              10%           100%                       90%                              70%
Detection with Haar and motion   82%              5%            100%                       92%                              72%

The results of this work are summarized in Table 1.

Fig. 10. Performance of proposed system

VI. CONCLUSION

This paper presents a method for vehicle detection and attribute based search of vehicles in a video surveillance system. For vehicle detection, motion and Haar features are used, and the training algorithm is AdaBoost. Once a vehicle is detected, attributes are extracted in order to search for vehicles based on their semantic attributes. The attributes selected are date and time and direction of travel; for more accuracy, further attributes such as the colour and speed of the vehicle can be selected.

ACKNOWLEDGMENT

I would like to express my sincere thanks to all those who helped me directly or indirectly in this esteemed work. This work is part of a project sponsored under the Research Promotion Scheme (RPS), AICTE, New Delhi, India (2013-16).

REFERENCES

[1] R. S. Feris, B. Siddiquie, J. Petterson, Y. Zhai, A. Datta, L. M. Brown, and S. Pankanti, "Large-scale vehicle detection, indexing, and search in urban surveillance videos," IEEE Trans. Multimedia, Feb. 2012.
[2] P. Viola and M. Jones, "Robust real-time object detection," Int. J. Comput. Vis., vol. 57, no. 2, pp. 137–154, 2004.
[3] B. Wu and R. Nevatia, "Cluster boosted tree classifier for multi-view, multi-pose object detection," in Proc. ICCV, 2007.
[4] W. Zheng and L. Liang, "Fast car detection using image strip features," in Proc. CVPR, 2009.
[5] N. Dalal and B. Triggs, "Histograms of oriented gradients for human detection," in Proc. CVPR, 2005.
[6] P. Felzenszwalb, R. B. Girshick, and D. McAllester, "Cascade object detection with deformable part models," in Proc. CVPR, 2010.
[7] H. Schneiderman and T. Kanade, "A statistical approach to 3D object detection applied to faces and cars," in Proc. CVPR, 2000.
[8] Y. Tian, M. Lu, and A. Hampapur, "Robust and efficient foreground analysis for real-time video surveillance," in Proc. CVPR, 2005.
[9] T. Cootes, G. Edwards, and C. Taylor, "Active appearance models," IEEE Trans. Pattern Anal. Mach. Intell., vol. 23, no. 6, pp. 681–685, Jun. 2001.
[10] Y. Tian, M. Lu, and A. Hampapur, "Robust and efficient foreground analysis for real-time video surveillance," in Proc. CVPR, 2005.
[11] P. Viola and M. Jones, "Robust real-time object detection," Int. J. Comput. Vis., vol. 57, no. 2, pp. 137–154, 2004.
[12] D.-C. Tseng and C.-H. Chang, "Color segmentation using UCS perceptual attributes," Proc. Nat. Sci. Council: Part A, vol. 18, pp. 305–314, 1994.
[13] H. Bai, T. Wu, and C. Liu, "Motion and Haar-like features based vehicle detection," IEEE, 2006.
[14] J. Friedman, T. Hastie, and R. Tibshirani, "Additive logistic regression: A statistical view of boosting," The Annals of Statistics, vol. 28, no. 2, pp. 337–407, 2000.
