VOLUME 4, ISSUE 1, MARCH 2014
I. INTRODUCTION
Unconcealed weapon detection is a modern research area in
the field of Computer Vision. It is an application of smart
surveillance, which aims to provide a reliable, cost-efficient,
and accurate security system. Weapon detection as such is not
a new research field, because much work has already been done
on concealed weapon detection; the interesting point is that
unconcealed weapon detection is a relatively new field of
research.
This section examines the related work on HOG and Haar-like features.
Haar-like features have been used extensively for face
detection, where they have proved successful, as in [7]. The
well-known HOG features have been used successfully for human
and pedestrian detection in [9]. The research does not stop
there: researchers are also striving to compare the two
feature types across many other applications.
Figure 1: Haar-like features. (a) Two-rectangle features, (b) three-rectangle features, and (c) four-rectangle features.
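The rectangle features in Figure 1 are typically evaluated with an integral image, so that each rectangle sum costs at most four array lookups. The sketch below shows one two-rectangle feature computed this way; it is a minimal illustration with our own function names, not the classifier used in this paper.

```python
import numpy as np

def integral_image(img):
    """ii[y, x] holds the sum of all pixels above and to the
    left of (y, x), inclusive."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, top, left, h, w):
    """Sum of the h x w rectangle with top-left corner (top, left),
    using at most four lookups into the integral image."""
    total = ii[top + h - 1, left + w - 1]
    if top > 0:
        total -= ii[top - 1, left + w - 1]
    if left > 0:
        total -= ii[top + h - 1, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total

def two_rect_feature(img, top, left, h, w):
    """Two-rectangle Haar-like feature (Figure 1a): difference of the
    sums of two horizontally adjacent h x w rectangles."""
    ii = integral_image(img.astype(np.int64))
    return rect_sum(ii, top, left, h, w) - rect_sum(ii, top, left + w, h, w)

# Toy 4x4 "image": bright left half, dark right half.
img = np.array([[9, 9, 1, 1]] * 4)
print(two_rect_feature(img, 0, 0, 4, 2))  # 72 - 8 = 64
```

A strong response like this, at an edge between bright and dark regions, is exactly what the cascade in [7] thresholds on.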
B. HOG FEATURES
HOG features were introduced by Dalal and Triggs in [6].
HOG stands for Histogram of Oriented Gradients, and the name
describes the features themselves: they are histograms of the
gradient orientations of the pixels. The directional change in
the intensity or color of an image is known as the image
gradient, and image gradients are used to extract information
from images. HOG is suitable where classification is required
on the basis of shape: because it works on the edges and the
direction of pixels in an image, HOG captures the overall
appearance, shape, and silhouette of an object. In our case, to
detect unconcealed guns, we need to focus on their shape, which
is the main reason for using HOG features.
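The core of the descriptor can be sketched in a few lines: compute gradients, quantise the unsigned orientations into bins, and accumulate magnitude-weighted histograms per cell. This is a simplified sketch (block normalisation is omitted, and the function name is ours), not the exact extractor used in the paper.

```python
import numpy as np

def hog_cell_histograms(img, cell_size=8, n_bins=9):
    """Minimal HOG-style descriptor: one magnitude-weighted histogram
    of unsigned gradient orientation per cell_size x cell_size cell.
    Block normalisation is left out for brevity."""
    img = img.astype(np.float64)
    gy, gx = np.gradient(img)                       # gradients along y, x
    mag = np.hypot(gx, gy)                          # gradient magnitude
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0    # unsigned angle [0, 180)
    n_cy, n_cx = img.shape[0] // cell_size, img.shape[1] // cell_size
    hist = np.zeros((n_cy, n_cx, n_bins))
    bin_width = 180.0 / n_bins
    for cy in range(n_cy):
        for cx in range(n_cx):
            m = mag[cy*cell_size:(cy+1)*cell_size, cx*cell_size:(cx+1)*cell_size]
            a = ang[cy*cell_size:(cy+1)*cell_size, cx*cell_size:(cx+1)*cell_size]
            bins = np.minimum((a / bin_width).astype(int), n_bins - 1)
            for b in range(n_bins):
                hist[cy, cx, b] = m[bins == b].sum()
    return hist.ravel()

# 32x32 image with a vertical edge: all gradient energy is horizontal.
img = np.zeros((32, 32))
img[:, 16:] = 255.0
feat = hog_cell_histograms(img)
print(feat.shape)  # (4 * 4 * 9,) = (144,)
```

Because a vertical edge produces purely horizontal gradients, all of the histogram mass lands in the 0-degree bin, which is how the descriptor encodes an object's silhouette.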
The test dataset contains the best cases (i.e. images with a
clear view of the gun and little occlusion) as well as the
worst cases (i.e. images in which the gun is not clearly
visible). The images are taken under different lighting
conditions.
Figure 4: HOG features. (a) Input image, (b) HOG features using cell size 8x8, (c) HOG features using cell size 32x32.
VI. METHODOLOGY
i. Image Preprocessing
In this step, all the training images are converted into
gray-scale images. The images are then sharpened and blurred
to obtain a variety of training images; in this way, we can
extend our image dataset.
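This preprocessing step can be sketched with standard luminance weights and small sharpen/blur kernels. The kernels and function names below are illustrative choices, not necessarily the ones used in the paper.

```python
import numpy as np

def filter2d(img, kernel):
    """Naive same-size 2D filtering (cross-correlation, which equals
    convolution for the symmetric kernels used here), edge-padded."""
    kh, kw = kernel.shape
    pad = np.pad(img, ((kh // 2,), (kw // 2,)), mode="edge")
    out = np.zeros_like(img, dtype=np.float64)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            out[y, x] = (pad[y:y + kh, x:x + kw] * kernel).sum()
    return out

def augment(rgb):
    """Gray-scale conversion plus sharpened and blurred variants,
    mirroring the dataset-extension step described above."""
    gray = rgb @ np.array([0.299, 0.587, 0.114])   # ITU-R luminance weights
    blur_k = np.full((3, 3), 1.0 / 9.0)            # box blur
    sharpen_k = np.array([[ 0, -1,  0],
                          [-1,  5, -1],
                          [ 0, -1,  0]], dtype=np.float64)
    return gray, filter2d(gray, sharpen_k), filter2d(gray, blur_k)

rgb = np.random.default_rng(0).uniform(0, 255, size=(16, 16, 3))
gray, sharp, blurred = augment(rgb)
print(gray.shape, sharp.shape, blurred.shape)  # (16, 16) three times
```

Each input image thus yields three training images, tripling the effective dataset size at negligible cost.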
ii. ROI Selection
VII. EXPERIMENTAL RESULTS

Evaluation Metrics      HOG Based Classifier    HAAR Based Classifier
True Positive Rate      81.14%                  84.28%
False Positive Rate     4%                      100%
True Negative Rate      96%                     0%
False Negative Rate     18.86%                  15.71%
Precision               95.30%                  45.73%
Accuracy                88.57%                  42.14%
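All of the reported quantities follow from the four confusion-matrix counts. The helper below computes them in the standard way; the counts passed in are hypothetical round numbers for illustration, not the paper's raw data.

```python
def classifier_metrics(tp, fp, tn, fn):
    """Standard confusion-matrix metrics, as percentages."""
    return {
        "true_positive_rate": 100.0 * tp / (tp + fn),
        "false_positive_rate": 100.0 * fp / (fp + tn),
        "true_negative_rate": 100.0 * tn / (fp + tn),
        "false_negative_rate": 100.0 * fn / (tp + fn),
        "precision": 100.0 * tp / (tp + fp),
        "accuracy": 100.0 * (tp + tn) / (tp + fp + tn + fn),
    }

# Hypothetical counts for illustration only.
m = classifier_metrics(tp=81, fp=4, tn=96, fn=19)
print(round(m["accuracy"], 2))  # 88.5
```

Note how the false positive rate dominates accuracy here: a classifier that fires on nearly every negative sample, as the Haar-based one does in the table, loses both precision and accuracy even with a slightly higher true positive rate.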
VIII. CONCLUSION
This paper has proposed a security system focused around
Unconcealed Gun Detection. To detect any object, the two
main issues are: i) feature selection and ii) classification.
There are a number of feature types available, and each type
has its own importance and applications. Not every feature
suits every application, so the most challenging task is to
identify the feature type which best suits the application. For
that purpose we studied various feature types and found that
HOG and Haar-like features have greatly contributed to the
field of Pattern Recognition. Most of the recent object
recognition algorithms extensively use both of these feature
types.
FUTURE WORK
The future work is:
-

REFERENCES
[3] Rosten, E., and Drummond, T., "Fusing Points and Lines for High Performance Tracking", Proceedings of the IEEE International Conference on Computer Vision, Vol. 2, October 2005, pp. 1508-1511.
[7] Viola, P., and Jones, M., "Rapid Object Detection using a Boosted Cascade of Simple Features", Proceedings of the 2001 IEEE Computer Society Conference, Vol. 1, 15 April 2001, pp. 511-518.
[13] Peleshko, D., and Soroka, K., "Research of Usage of Haar-like Features and AdaBoost Algorithm in Viola-Jones Method of Object Detection", 12th International Conference on the Experience of Designing and Application of CAD Systems in Microelectronics (CADSM), 2013, pp. 284-286.