CANDIDATES' DECLARATION
We hereby declare that the work presented in this project report, entitled Unattended Object Detection in Surveillance Video, submitted towards completion of the major project in the 8th semester of B.Tech. (CSE) at Modinagar Institute of Technology, Modinagar, Ghaziabad, is an authenticated record of our original work carried out from February 2013 to April 2013 under the guidance of Mr Sachin Goel and Mr Shivam Sharma. Due acknowledgements have been made in the text to all other material used.
This is to certify that the above statements made by the candidates are correct to the best of my knowledge.
Date:
Place: Modinagar
Mr Sachin Goel
&
Mr Shivam Sharma
ABSTRACT
This project describes a novel approach for detecting unattended objects in surveillance video. Unlike the traditional approach of simply detecting stationary objects in monitored scenes, our approach detects unattended objects based on accumulated knowledge about human and non-human objects obtained from continuous object tracking and classification. We design different reasoning rules for detecting different scenarios of unattended-object events. In the case where an object is explicitly left unattended by a single person, a rule using human activity recognition is introduced to decide the object's ownership. In the case where a suspicious object is dropped by a group of humans or under heavy occlusion, a rule based on historic tracking and classification information is proposed. Furthermore, an additional rule is given to reduce the false alarms that may occur with traditional stationary object detection methods.
CONTENTS
Introduction
Motivation
Literature Survey
Methodology
Tools Used
Conclusion
Future Scope
Bibliography
Introduction
The project Unattended Object Detection in Surveillance Video is a framework for detecting stationary packages and recognising the activity of the humans around them. The system monitors the activity of an object's owner within the frame and keeps track of whether the owner disappears. If the object has no owner in the frame, the object is declared unattended. Time then comes into play: if the object remains unattended for longer than a threshold value, an alarm is raised to indicate the presence of an unusual activity. This application therefore has a wide role for security purposes.
Motivation
Our primary motivation is to build a visual surveillance system that draws heavily upon human reasoning. A visual surveillance system helps to ensure safety and security by detecting the occurrence of activities of interest within an environment. This typically requires the capacity to robustly track individuals not only when they are within the field of regard of the cameras, but also when they disappear from view and later reappear.
It can be widely used in the following areas:
Railway stations
Airports
Hospitals
Red lights, for monitoring traffic
Theatres and malls
Literature Survey
The literature survey is a brief documentation of a comprehensive review of published and unpublished work from secondary sources in the areas related to the classification of moving objects for tracking and surveillance. Reviewing the literature on this topic helped us focus more meaningfully on aspects found to be important and helpful in the published studies, even those that had not surfaced earlier.
The following are some works based on various journals and other sources of data.
A System for Video Surveillance and Monitoring, by R.T. Collins et al., May 2000, Carnegie Mellon University.
They used an adaptive background subtraction model that works on grey-scale video imagery from a static camera. The model initializes a reference background from the first few frames of the video input. It then subtracts the intensity value of each pixel in the current image from the corresponding value in the reference background image. The difference is filtered with an adaptive, per-pixel threshold. The reference background image and the threshold values are updated with an IIR filter to adapt to dynamic scene changes. A sample foreground-region detection is illustrated as follows: the first image is the estimated reference background of the monitored site; the second image, captured at a later step, contains two foreground objects (two people); the third image shows the foreground pixel map detected by background subtraction. [1]
Voting-Based Simultaneous Tracking of Multiple Video Objects, by A. Amer, January 2003, Santa Clara, USA.
The aim of the tracking method used in this work is to establish a correspondence between objects, or object parts, in consecutive frames and to extract temporal information about each object, such as trajectory, posture, speed and direction. The approach makes use of object features such as size, centre of mass, bounding box and colour histogram, extracted in previous steps, to establish a matching between objects in consecutive frames. It can further detect occlusion and distinguish object information.
Tracking detected objects frame by frame in video is a significant and difficult task. Without it, the system could not extract cohesive temporal information about objects, and higher-level behaviour analysis steps would not be possible. On the other hand, inaccurate foreground-object segmentation due to shadows, reflectance and occlusions makes tracking a difficult research problem. [2]
Moving Target Classification and Tracking from Real-Time Video, by A.J. Lipton, H. Fujiyoshi and R.S. Patil, 1998.
The approach presented in this work makes use of the object silhouette's contour length and area information to classify detected objects into three groups: human, vehicle and other. The method depends on the assumption that humans are, in general, smaller than vehicles and have complex shapes. Dispersedness is used as the classification metric; it is defined in terms of the object's area and contour length (perimeter) as follows:

Dispersedness = (Perimeter * Perimeter) / Area

Classification is performed at each frame, and tracking results are used to improve temporal classification consistency. [3]
Methodology
Left luggage
Left luggage is defined as an item that has been abandoned by its owner.
Unattended Rules:
The luggage item is unattended if the owner is further than b metres (b >= a) away from the luggage, where a is the distance within which the luggage is regarded as attended. The region between a and b metres is referred to as the warning zone, where the luggage is neither attended to nor left unattended.
Connected-component labelling of the foreground mask proceeds in two passes.
On the second pass:
1. Iterate through each element of the data by column, then by row.
2. If the element is not the background, relabel the element with the lowest equivalent label.
After the first pass, provisional labels have been generated; a total of 7 labels are produced in accordance with the conditions highlighted above. When the merging of labels is carried out, the smallest label value for a given region "floods" throughout the connected region, giving two distinct labels and hence two distinct regions. The final result, shown in colour, makes the two different regions found in the array clearly visible.
The trajectory post-processing module performs a blob-trajectory smoothing function. This module is optional and can be excluded from a specific pipeline.
Module descriptions:
CvFGDetector
This is a virtual class describing the interface of the FG/BG detection module.
Input data: image of the current frame.
Output data: FG/BG mask of the current frame.
CvBlobDetector
This is a virtual class describing the interface of the blob entering detection module.
Input data: FG/BG mask of the current frame; list of existing blobs.
Output data: list of newly entered blobs.
CvBlobTracker
This is a virtual class describing the interface of the blob tracking module.
Input data: BGR image of the current frame; FG/BG mask of the current frame.
Output data: blobs (id, position, size) on the current frame.
CvBlobTrackGen
This is a virtual class describing the interface of the trajectory generator module. The purpose of this module is to save the whole trajectory to a specified file. The module can also calculate some features (using the original image and FG mask) for each blob and save them too.
Input data: blobs on the current frame.
Output data: none (trajectories are written to the specified file).
CvBlobTrackPostProc
This is a virtual class describing the interface of the trajectory post-processing module. The purpose of this module is to perform a filtering operation on blob trajectories; for example, the module can be a Kalman filter or another smoothing filter.
Input data: blobs on the current frame.
Output data: blobs (id, position, size) on the current frame.
Tools Used
Development Tool: MS Visual Studio 2008
Languages Used: C++ with the OpenCV library
Platform: Windows 7
Conclusion
This concludes our project, Unattended Object Detection in Surveillance Video. Developing this system took a great deal of effort from us, and we think it gave all of us a lot of satisfaction. No task in this development field can ever be called perfect, and further improvement of this system is certainly possible. We learned many things and gained a lot of knowledge about the development field, and we hope this will prove fruitful to us.
Future Scope
This project will help the police in detecting unattended objects. It is developed so that it can view a wide area through multiple cameras, typically in public areas that are liable to be crowded. Tracking people in a crowded place is a hard problem due to occlusion, so the system also helps in preventing theft. It can help prevent bomb explosions or other terrorist activities in public and crowded places, and it is easy to maintain in future.
References
[1] Robert T. Collins, Alan J. Lipton, Takeo Kanade, Hironobu Fujiyoshi, David Duggins, Yanghai Tsin, David Tolliver, Nobuyoshi Enomoto, Osamu Hasegawa, Peter Burt and Lambert Wixson. A System for Video Surveillance and Monitoring. CMU-RI-TR-00-12.
[2] Aishy Amer. Voting-Based Simultaneous Tracking of Multiple Video Objects. Concordia University, Electrical and Computer Engineering, Montreal, Quebec, Canada.
[3] Xiaotong Yuan and Stan Z. Li. Learning Feature Extraction and Classification for Tracking Multiple Objects: A Unified Framework. Center for Biometrics and Security Research & National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, China, 100080.
[4] I. Haritaoglu, D. Harwood, and L.S. Davis. W4: A Real-Time System for Detecting and Tracking People. In Computer Vision and Pattern Recognition, pages 962-967, 1998.
[5] http://opencv.sourcearchive.com/documentation/1.0.0-6.1/cvbgfg__gaussmix_8cpp-source.html
[6] http://read.pudn.com/downloads99/sourcecode/graph/text_recognize/405572/cvGaussModel.cpp.htm
[7] http://mmlab.disi.unitn.it/wiki/index.php/Mixture_of_Gaussians_using_OpenCV
[8] http://en.wikipedia.org/wiki/Connected-component_labeling
[9] Stephen Johnson (2006). Stephen Johnson on Digital Photography. O'Reilly.
[10] http://books.google.com/books?id=0UVRXzF91gcC&pg=PA17&dq=grayscale+black-andhite-continuous-tone&ei=XlwqSdGVOILmkwTalPiIDw
[11] "Conversion to Greyscale", http://gimp-savvy.com/BOOK/node54.html
[12] Liyuan Li and Weimin Huang. Foreground Object Detection from Videos Containing Complex Background. Institute for Infocomm Research, 21 Heng Mui Keng Terrace, Singapore, 119613.
[13] L. Wang, W. Hu, and T. Tan. Recent Developments in Human Motion Analysis. Pattern Recognition, 36(3):585-601, March 2003.
[14] C. Stauffer and W. Grimson. Adaptive Background Mixture Models for Real-Time Tracking. In Proc. of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 246-252, 1999.