I. INTRODUCTION
Automatically searching for suspicious vehicles is important in criminal investigation. Vehicle detection in urban scenes is especially challenging due to the high volume of data and the varying weather and lighting conditions, such as shadows. Under these conditions, traditional approaches such as background subtraction fail. This paper presents a method for vehicle detection that works well under these conditions, together with a complete end-to-end system for vehicle retrieval based on semantic attributes. The semantic attributes selected for vehicle search are speed, date and time, the colour of the detected vehicle, and the direction in which the vehicle is travelling. The paper presents a training-based approach, called the co-training method, for detection. To deal with different types of vehicles, such as buses, cars, trucks, and heavy vehicles, a deformable-aspect-ratio sliding window is used.

II. RELATED WORK

Most surveillance systems use background modelling for vehicle detection, but it fails in crowded scenes where multiple moving objects merge. Sliding-window detection was introduced by Viola and Jones [2], followed by detection using edgelet features [3] and strip features [4]. Support vector machines with histograms of oriented gradients have also been used for vehicle detection [5], [6]. Vehicle detection based on statistical learning of object parts was proposed by Schneiderman and Kanade [7]. These methods show good performance, but they require large numbers of labelled training samples and run below 15 frames/second.

Fig. 2. a. Shape-free appearance space; b. changing sliding window [1]
2. Repeat for m = 1, 2, ..., M:
   a. Fit the regression function fm(x) by weighted least squares of yi on xi with weights wi.
   b. Update F(x) <- F(x) + fm(x).
   c. Update wi <- wi exp(-yi fm(xi)) and renormalize.
3. Output the classifier sign[F(x)] = sign[sum_m fm(x)].

Fig. 6. AdaBoost training algorithm [14]

A feature of the learning algorithm is that it uses multiple feature planes for feature selection, as shown in Fig. 5. The feature planes considered are red, green, blue, gradient magnitude, and local binary patterns, and the final detector combines them with Haar-like features. The AdaBoost algorithm used for training is shown in Fig. 6.

Boosting works sequentially: it applies a classification algorithm to reweighted versions of the training data and takes a weighted majority vote of the sequence of classifiers produced. The training data are (x1, y1), ..., (xN, yN), where each xi is a vector-valued feature and each fm(x) is a classifier. The AdaBoost procedure trains the classifiers fm(x) on weighted versions of the training sample, giving higher weight to cases that are currently misclassified.

IV. ATTRIBUTE EXTRACTION

For each vehicle track, attributes are extracted and stored in a backend database. A web form is generated which allows the user to specify the parameters on which the vehicle search is based. The reference framework used is the IBM Smart Surveillance Solution [11]. We currently extract the following attributes.

1) Date and Time:
We store the date and time indicating the beginning, end, and duration of each track. For this an srt (SubRip) file can be used. An srt file consists of four parts:
1. A number indicating the subtitle's position in the sequence.
2. The times at which the subtitle should appear on screen and then disappear.
3. The subtitle itself.
4. A blank line indicating the start of a new subtitle.
For the frame in which a vehicle is detected, its time is extracted and stored, indicating the beginning, end, and duration of travel.

2) Direction of Travel:
Optical flow is the distribution of the apparent velocities of objects in an image. By calculating the optical flow between frames of a video, we can measure the velocities of the objects in it. Optical flow is used to determine the direction of travel of a vehicle by quantizing it into four directions. In this paper, the pyramidal Lucas-Kanade method is used to compute the optical flow.

3) Colour:
1. RGB values are transformed into XYZ values of the CIE (X, Y, Z) colour space as follows:
X = 0.607R + 0.174G + 0.201B (2)
Y = 0.299R + 0.587G + 0.114B (3)
Z = 0.000R + 0.066G + 1.117B (4)
2. The L*u*v* values of the (L*, u*, v*) colour space are computed as follows:
L* = 25(100Y/Y0)^(1/3) - 16 (5)
u* = 13L*(u' - u0) (6)
v* = 13L*(v' - v0) (7)
3. HSI values are calculated from L*u*v* as follows:
I = L* (8)
H = arctan(v*/u*) (9)
S = [(u*)^2 + (v*)^2]^(1/2) (10)

Fig. 7. Colour detection algorithm [12]

Colour names are assigned by computing hue-angle cut-offs. A histogram with six bins is built over the vehicle images belonging to a specific track, and the colour corresponding to the bin which receives the majority of votes is assigned as the dominant colour. The colour detection algorithm is shown in Fig. 7. Y0, u0, and v0 depend on the characteristics of the display device and the sensitivity of the human eye; their values are 1.000, 0.201, and 0.461 respectively. H represents hue: an angle of 222 represents pure red, 318 represents pure green, and 80 represents pure blue.

4) Speed:
The vehicle motion is detected and tracked along the frames using an optical flow algorithm. Optical flow is the distribution of the apparent velocities of movement of the brightness pattern in an image, and it can arise from the relative motion of objects. For every vehicle detected in a video frame, we compute the centroid of the rectangle drawn around it. The distance travelled by the vehicle is calculated from the movement of the centroid over the frames, and from this the speed of the vehicle is estimated.
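As a concrete illustration of the boosting procedure in Fig. 6 [14], here is a minimal Python sketch. The regression function fm(x) is realized as a weighted least-squares linear fit, and the toy 1-D data, the choice of linear fm, and the number of rounds are illustrative assumptions, not the paper's configuration.

```python
import math

def weighted_linear_fit(xs, ys, ws):
    # Step 2a of Fig. 6: fit fm(x) = a*x + b by weighted least squares.
    sw = sum(ws)
    mx = sum(w * x for w, x in zip(ws, xs)) / sw
    my = sum(w * y for w, y in zip(ws, ys)) / sw
    cov = sum(w * (x - mx) * (y - my) for w, x, y in zip(ws, xs, ys))
    var = sum(w * (x - mx) ** 2 for w, x in zip(ws, xs))
    a = cov / var if var > 0 else 0.0
    b = my - a * mx
    return lambda x: a * x + b

def adaboost_train(xs, ys, rounds=10):
    n = len(xs)
    ws = [1.0 / n] * n                        # start from uniform weights
    fs = []
    for _ in range(rounds):                   # step 2: repeat for m = 1..M
        fm = weighted_linear_fit(xs, ys, ws)  # step 2a
        fs.append(fm)                         # step 2b: F(x) <- F(x) + fm(x)
        # Step 2c: upweight currently misclassified / low-margin cases.
        ws = [w * math.exp(-y * fm(x)) for w, x, y in zip(ws, xs, ys)]
        s = sum(ws)
        ws = [w / s for w in ws]              # renormalize
    # Step 3: final classifier sign[F(x)] = sign[sum_m fm(x)].
    return lambda x: 1 if sum(f(x) for f in fs) >= 0 else -1

# Toy 1-D data: negatives near 0, positives near 1 (labels are +/-1).
xs = [0.1, 0.2, 0.3, 0.8, 0.9, 1.0]
ys = [-1, -1, -1, 1, 1, 1]
clf = adaboost_train(xs, ys)
```

The exponential reweighting in step 2c is what makes later rounds concentrate on the examples closest to the decision boundary.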
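The four-part srt structure used for the date-and-time attribute is simple enough to parse in a few lines. The sketch below, including the sample subtitle text, is illustrative and not the paper's implementation; it assumes the standard SubRip "HH:MM:SS,mmm --> HH:MM:SS,mmm" time line.

```python
def parse_srt(text):
    """Parse SubRip text into (sequence, start, end, caption) tuples."""
    entries = []
    for block in text.strip().split("\n\n"):    # part 4: blank line separates subtitles
        lines = block.strip().split("\n")
        seq = int(lines[0])                     # part 1: sequence number
        start, end = lines[1].split(" --> ")    # part 2: appear / disappear times
        caption = "\n".join(lines[2:])          # part 3: the subtitle itself
        entries.append((seq, start.strip(), end.strip(), caption))
    return entries

# Hypothetical captions marking when a detected vehicle enters and leaves.
sample = """1
00:00:01,000 --> 00:00:04,200
Vehicle 17 detected

2
00:00:05,000 --> 00:00:09,500
Vehicle 17 leaves frame
"""
tracks = parse_srt(sample)
```

The start and end timestamps of the blocks covering a track give its beginning, end, and duration.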
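The colour conversion of Eqs. (2)-(10) can be sketched directly in Python. Note two assumptions in this sketch: the chromaticity coordinates u' and v' are not defined in the text, so the standard CIE 1976 formulas u' = 4X/(X+15Y+3Z) and v' = 9Y/(X+15Y+3Z) are used here, and the hue from Eq. (9) is reported in degrees. The constants Y0 = 1.000, u0 = 0.201, v0 = 0.461 are taken from the text, and RGB values are assumed to be in [0, 1].

```python
import math

Y0, U0, V0 = 1.000, 0.201, 0.461  # display/eye-dependent constants from the text

def rgb_to_hsi(r, g, b):
    # Eqs. (2)-(4): RGB -> CIE XYZ
    x = 0.607 * r + 0.174 * g + 0.201 * b
    y = 0.299 * r + 0.587 * g + 0.114 * b
    z = 0.000 * r + 0.066 * g + 1.117 * b
    # Standard CIE 1976 chromaticities (assumed; not given in the text).
    d = x + 15 * y + 3 * z
    up, vp = 4 * x / d, 9 * y / d
    # Eqs. (5)-(7): XYZ -> L*u*v*
    lstar = 25 * (100 * y / Y0) ** (1 / 3) - 16
    ustar = 13 * lstar * (up - U0)
    vstar = 13 * lstar * (vp - V0)
    # Eqs. (8)-(10): L*u*v* -> HSI, hue as an angle in degrees
    i = lstar
    h = math.degrees(math.atan2(vstar, ustar)) % 360
    s = math.hypot(ustar, vstar)
    return h, s, i

h, s, i = rgb_to_hsi(0.8, 0.1, 0.1)  # a strongly red pixel
```

In the full pipeline, each pixel's hue angle would then be binned into the six-bin histogram and the majority bin taken as the dominant colour of the track.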
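Once per-frame centroids of the detection rectangle are available, the direction and speed attributes reduce to simple geometry. The sketch below quantizes a net displacement into the four directions and estimates speed from centroid movement; the frame rate and the metres-per-pixel calibration are illustrative assumptions, not values from the paper.

```python
import math

def quantize_direction(dx, dy):
    # Quantize a displacement vector into four directions. Image coordinates
    # are assumed, i.e. y grows downward, so negative dy means "up".
    if abs(dx) >= abs(dy):
        return "right" if dx >= 0 else "left"
    return "down" if dy >= 0 else "up"

def estimate_speed(centroids, fps, metres_per_pixel):
    # Sum centroid-to-centroid distances over consecutive frames (pixel path
    # length), convert to metres, and divide by the elapsed time.
    dist_px = sum(math.dist(a, b) for a, b in zip(centroids, centroids[1:]))
    elapsed = (len(centroids) - 1) / fps
    return dist_px * metres_per_pixel / elapsed   # metres per second

# Centroids of one vehicle's detection rectangle over 5 frames, moving right.
track = [(100.0, 240.0), (110.0, 240.0), (120.0, 240.0),
         (130.0, 240.0), (140.0, 240.0)]
dx = track[-1][0] - track[0][0]
dy = track[-1][1] - track[0][1]
direction = quantize_direction(dx, dy)
speed = estimate_speed(track, fps=25, metres_per_pixel=0.05)
```

In practice the displacement (dx, dy) would come from the pyramidal Lucas-Kanade flow averaged over the vehicle region rather than from the raw centroids alone.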
2015 International Conference on Circuit, Power and Computing Technologies [ICCPCT]
V. EXPERIMENTAL RESULTS

A set of experiments was carried out on a database obtained from a college campus and on videos collected from a toll booth to evaluate the performance of the proposed algorithm.

Fig. 9. Detection result on the college campus video

Detection using Haar-like features and the general AdaBoost algorithm alone, with a rescaling factor of 1.25 and a trained 20-stage cascade of classifiers, gives 80% accurate detection with 120 false alarms. The performance results are shown in Table 1 and Fig. 10. Detection combining motion and Haar-like features gives a 5% reduction in false alarms. In each experiment, the training samples were randomly selected and the remainder was used for testing. The detection algorithm applied to an urban video is shown in Fig. 8; it records the time at which each vehicle is detected along with its direction of travel.

The robustness of our vehicle detection method to crowded scenes and different lighting conditions comes from the data itself, and large-scale feature selection improves accuracy. Several false detections still occur, so adding them to the negative training set can improve the results. Colour estimation can be improved by learning the different lighting conditions.

Table 1. Performance of the proposed system

Method                         | Detection rate | False alarm | Date and time estimation | Direction of travel | Colour estimation
Detection with Haar only       | 80%            | 10%         | 100%                     | 90%                 | 70%
Detection with Haar and motion | 82%            | 5%          | 100%                     | 92%                 | 72%

The results of this work are summarized in Table 1.

ACKNOWLEDGMENT

I would like to express my sincere thanks to all those who helped me directly or indirectly in this work. This work is part of a project sponsored under the Research Promotion Scheme (RPS), AICTE, New Delhi, India (2013-16).

REFERENCES

[1] R. S. Feris, B. Siddiquie, J. Petterson, Y. Zhai, A. Datta, L. M. Brown, and S. Pankanti, "Large-scale vehicle detection, indexing, and search in urban surveillance videos," IEEE Trans. Multimedia, Feb. 2012.
[2] P. Viola and M. Jones, "Robust real-time object detection," Int. J. Comput. Vis., vol. 57, no. 2, pp. 137-154, 2004.
[3] B. Wu and R. Nevatia, "Cluster boosted tree classifier for multi-view, multi-pose object detection," in Proc. ICCV, 2007.
[4] W. Zheng and L. Liang, "Fast car detection using image strip features," in Proc. CVPR, 2009.
[5] N. Dalal and B. Triggs, "Histograms of oriented gradients for human detection," in Proc. CVPR, 2005.
[6] P. Felzenszwalb, R. B. Girshick, and D. McAllester, "Cascade object detection with deformable part models," in Proc. CVPR, 2010.
[7] H. Schneiderman and T. Kanade, "A statistical approach to 3D object detection applied to faces and cars," in Proc. CVPR, 2000.
[8] Y. Tian, M. Lu, and A. Hampapur, "Robust and efficient foreground analysis for real-time video surveillance," in Proc. CVPR, 2005.
[9] T. Cootes, G. Edwards, and C. Taylor, "Active appearance models," IEEE Trans. Pattern Anal. Mach. Intell., vol. 23, no. 6, pp. 681-685, Jun. 2001.
[10] Y. Tian, M. Lu, and A. Hampapur, "Robust and efficient foreground analysis for real-time video surveillance," in Proc. CVPR, 2005.
[11] P. Viola and M. Jones, "Robust real-time object detection," Int. J. Comput. Vis., vol. 57, no. 2, pp. 137-154, 2004.
[12] D.-C. Tseng and C.-H. Chang, "Color segmentation using UCS perceptual attributes," Proc. Nat. Sci. Council, Part A, vol. 18, pp. 305-314, 1994.
[13] H. Bai, T. Wu, and C. Liu, "Motion and Haar-like features based vehicle detection," IEEE, 2006.
[14] J. Friedman, T. Hastie, and R. Tibshirani, "Additive logistic regression: a statistical view of boosting," Ann. Statist., vol. 28, no. 2, pp. 337-407, 2000.