
Modinagar Institute of Technology, Modinagar

A MAJOR PROJECT REPORT


On

Unattended Object Detection In Surveillance Video

Submitted By: Rajesh Patel (0938010029) Shashikant (0938010038)

Under the Guidance of: Mr Sachin Goel & Mr Shivam Sharma

CANDIDATES' DECLARATION

We hereby declare that the work presented in this project report, entitled Unattended Object Detection In Surveillance Video, submitted towards the completion of the major project in the 8th semester of B.Tech. (CSE) at Modinagar Institute of Technology, Modinagar, Ghaziabad, is an authenticated record of our original work carried out from February 2013 to April 2013 under the guidance of Mr Sachin Goel and Mr Shivam Sharma. Due acknowledgements have been made in the text to all other material used.

Place: Modinagar
Date:

Rajesh Patel (0938010029) Shashikant (0938010038)

This is to certify that the above statements made by the candidates are correct to the best of my knowledge.

Date:
Place: Modinagar

Mr Sachin Goel & Mr Shivam Sharma

ABSTRACT
This project describes a novel approach to detecting unattended objects in surveillance video. Unlike the traditional approach of simply detecting stationary objects in monitored scenes, our approach detects unattended objects based on knowledge about human and non-human objects accumulated from continuous object tracking and classification. We design different reasoning rules for detecting different scenarios of unattended-object events. For the case where an object is explicitly left unattended by a single person, a rule using human activity recognition is introduced to decide object ownership. For the case where a suspicious object is dropped by a group of people or under heavy occlusion, a rule based on historic tracking and classification information is proposed. Furthermore, an additional rule is given to reduce the false alarms that may occur with traditional stationary object detection methods.

CONTENTS

Introduction
Motivation
Literature Survey
Methodology
Tools Used
Conclusion
Future Scope
References

Introduction
The project Unattended Object Detection In Surveillance Video is a framework for detecting stationary packages and performing activity recognition on the humans around them. The system monitors the activity of an object's owner with respect to the frame and keeps track of whether the owner disappears. If the object has no owner in the frame, it is declared unattended. Time then comes into play: if the object remains unattended for longer than a threshold value, an alarm is raised indicating the presence of an unusual activity. This application therefore has a wide role in security.
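To make the timing rule above concrete, here is a minimal C++ sketch of the grace-period alarm logic; the structure, names, and the 30-second threshold are illustrative assumptions, not the project's actual code.

```cpp
// Minimal sketch of the alarm rule: once an object has no owner in the
// frame, a timer runs; exceeding the grace period raises an alarm.
#include <chrono>
#include <iostream>

struct TrackedObject {
    bool hasOwnerInFrame = false;
    std::chrono::steady_clock::time_point unattendedSince{};
    bool timerRunning = false;
};

void checkUnattended(TrackedObject& obj,
                     std::chrono::seconds gracePeriod = std::chrono::seconds(30)) {
    auto now = std::chrono::steady_clock::now();
    if (obj.hasOwnerInFrame) {
        obj.timerRunning = false;              // owner visible: reset the timer
        return;
    }
    if (!obj.timerRunning) {                   // owner has just disappeared
        obj.timerRunning = true;
        obj.unattendedSince = now;
    } else if (now - obj.unattendedSince > gracePeriod) {
        std::cout << "ALARM: unattended object detected\n";
    }
}
```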

Motivation
Our primary motivation is to build a visual surveillance system that draws heavily upon human reasoning. A visual surveillance system helps to ensure safety and security by detecting the occurrence of activities of interest within an environment. This typically requires the capacity to robustly track individuals not only when they are within the field of regard of the cameras, but also when they disappear from view and later reappear.

Some of the key applications are:

- It is not possible for police and security staff to keep an eye on everyone in a crowded place, but people's activities can be traced with the system's help. Video surveillance is commonly used in security systems, and the increasing need for security in public places calls for detecting suspicious events as they occur rather than merely recording them.
- Surveillance applications are developed to view a wide area through multiple cameras, typically in public areas that are susceptible to crowding. Tracking people in a crowded place is a hard problem due to occlusion, so such a system helps in preventing theft.
- Combating terrorism: the system can help prevent bomb explosions or other terrorist activities at public or crowded places. It raises an alarm when an unattended object lies in a public place for longer than a grace period, and it keeps track of the owner and his physical details, which helps in tracing him.
- Monitoring traffic: it can monitor traffic by detecting stopped vehicles and other obstacles.

It can be widely used in the following areas: railway stations, airports, hospitals, red lights (for monitoring traffic), and theatres and malls.

Literature Survey
The literature survey is a brief documentation of a comprehensive review of published and unpublished work from secondary sources in areas related to the classification of moving objects for tracking and surveillance. Reviewing the literature on this topic helped us focus more meaningfully on aspects found to be important and helpful in published studies, even where these had not surfaced earlier.

The following are some works drawn from various journals and other sources of data.

A system for video surveillance and monitoring, by R. T. Collins et al., May 2000, Carnegie Mellon University.
They used an adaptive background subtraction model that works on gray-scale video imagery from a static camera. This background subtraction method initialises a reference background with the first few frames of video input. It then subtracts the intensity value of each pixel in the current image from the corresponding value in the reference background image, and the difference is filtered with an adaptive per-pixel threshold. The reference background image and the threshold values are updated with an IIR filter to adapt to dynamic scene changes. In a sample foreground-region detection, the first image is the estimated reference background of the monitored site, the second image is captured at a later step and contains two foreground objects (two people), and the third image shows the detected foreground pixel map produced by background subtraction. [1]
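As a rough illustration of this kind of adaptive background subtraction, the sketch below uses modern OpenCV; the learning rate, the fixed threshold, the input file name, and the masked-update policy are our own simplifying assumptions, not the exact VSAM implementation.

```cpp
// Sketch: reference background initialised from the first frame, per-pixel
// differencing and thresholding, and an IIR update of the background.
#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cap("surveillance.avi");   // hypothetical input file
    cv::Mat frame, gray, background, bg8u, diff, fgMask;
    const double alpha  = 0.05;                 // IIR learning rate (assumed)
    const double thresh = 25.0;                 // global threshold (assumed)

    while (cap.read(frame)) {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        if (background.empty())
            gray.convertTo(background, CV_32F); // initialise reference background
        background.convertTo(bg8u, CV_8U);
        cv::absdiff(gray, bg8u, diff);          // |current - reference|
        cv::threshold(diff, fgMask, thresh, 255, cv::THRESH_BINARY);
        // IIR update: the background slowly adapts to dynamic scene changes,
        // here only at pixels currently classified as background.
        cv::accumulateWeighted(gray, background, alpha, ~fgMask);
        cv::imshow("foreground", fgMask);
        if (cv::waitKey(30) == 27) break;       // Esc to quit
    }
}
```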

Voting-based simultaneous tracking of multiple video objects, by A. Amer, January 2003, Santa Clara, USA.
The aim of the tracking method used in this work is to establish a correspondence between objects, or object parts, in consecutive frames and to extract temporal information about each object, such as trajectory, posture, speed and direction. The approach makes use of object features such as size, centre of mass, bounding box and colour histogram, extracted in previous steps, to establish a matching between objects in consecutive frames. It can further detect occlusion and distinguish object information.
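A minimal sketch of such feature-based frame-to-frame matching is given below; it is only in the spirit of the voting approach, and the feature set, weights and gating threshold are illustrative assumptions rather than Amer's actual scheme.

```cpp
// Each blob from the previous frame is matched to the nearest blob in the
// current frame under a weighted feature distance (position + size).
#include <opencv2/opencv.hpp>
#include <cmath>
#include <vector>

struct Blob {
    cv::Point2f centroid;   // centre of mass
    float area;             // size feature
    cv::Rect bbox;          // bounding box
};

// Smaller = better match; the 0.01 area weight is an assumption.
float featureDistance(const Blob& a, const Blob& b) {
    float dPos  = std::hypot(a.centroid.x - b.centroid.x,
                             a.centroid.y - b.centroid.y);
    float dArea = std::fabs(a.area - b.area);
    return dPos + 0.01f * dArea;
}

// Returns the index of the best match in `current`, or -1 if none is close.
int matchBlob(const Blob& prev, const std::vector<Blob>& current,
              float maxDist = 50.0f) {          // gating threshold (assumed)
    int best = -1;
    float bestDist = maxDist;
    for (size_t i = 0; i < current.size(); ++i) {
        float d = featureDistance(prev, current[i]);
        if (d < bestDist) { bestDist = d; best = static_cast<int>(i); }
    }
    return best;
}
```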

Tracking detected objects frame by frame in video is a significant and difficult task. Without it, the system could not extract cohesive temporal information about objects, and higher-level behaviour analysis steps would not be possible. On the other hand, inaccurate foreground-object segmentation due to shadows, reflectance and occlusion makes tracking a difficult research problem. [2]

Moving target classification and tracking from real-time video, by A. J. Lipton, H. Fujiyoshi and R. S. Patil, 1998.
The approach presented in [3] makes use of an object's silhouette contour length and area information to classify detected objects into three groups: human, vehicle and other. The method depends on the assumption that humans are, in general, smaller than vehicles and have complex shapes. Dispersedness, defined in terms of the object's area and contour length (perimeter) as

Dispersedness = (Perimeter * Perimeter) / Area,

is used as the classification metric. Classification is performed at each frame, and tracking results are used to improve temporal classification consistency. [3]
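The metric can be computed directly from an OpenCV contour, as in the hedged sketch below; the classification cut-off values are illustrative assumptions, since the cited work does not fix them here.

```cpp
// Dispersedness = perimeter^2 / area, computed from a blob's contour.
#include <opencv2/opencv.hpp>
#include <string>
#include <vector>

std::string classifyBySilhouette(const std::vector<cv::Point>& contour) {
    double area      = cv::contourArea(contour);
    double perimeter = cv::arcLength(contour, /*closed=*/true);
    if (area <= 0.0) return "other";
    double dispersedness = (perimeter * perimeter) / area;
    // Humans have complex (dispersed) silhouettes; vehicles are compact.
    if (dispersedness > 60.0) return "human";    // threshold assumed
    if (dispersedness < 20.0) return "vehicle";  // threshold assumed
    return "other";
}
```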

Real-time periodic motion detection, analysis and applications, 2000.

This work is based on the temporal self-similarity of a moving object. As an object that exhibits periodic motion evolves, its self-similarity measure also shows periodicity, and the method exploits this cue for classification. Optical flow analysis is also useful to distinguish rigid and non-rigid objects: it is expected that non-rigid objects such as humans will present a high average residual flow, whereas rigid objects such as vehicles will present little residual flow. Also, the residual flow generated by human motion will have a periodicity. Using this cue, human motion, and thus humans, can be distinguished from other objects such as vehicles. [4]

Methodology
Left luggage
Left luggage is defined as an item that has been abandoned by its owner.

Rules for attended and unattended objects:

Attended rules:
The luggage is owned and attended to by the person who enters the scene with it, until the point at which the luggage is no longer in physical contact with that person. From this point on, the luggage is attended to by the person as long as the person is less than a metres from the luggage.

Unattended rules:
The luggage item is unattended if the owner is further than b metres (b >= a) away from the luggage. The region between a and b metres is referred to as the warning zone, where the luggage is neither attended to nor left unattended.
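A minimal sketch of this three-state rule follows; the concrete values of a and b, and the assumption that positions are already ground-plane coordinates in metres, are illustrative.

```cpp
// Attended / warning / unattended decision from the owner-luggage distance.
#include <opencv2/opencv.hpp>
#include <cmath>

enum class LuggageState { Attended, Warning, Unattended };

LuggageState classifyLuggage(const cv::Point2f& owner,
                             const cv::Point2f& luggage,
                             float a = 1.0f,   // attended radius in metres (assumed)
                             float b = 3.0f) { // unattended radius in metres (assumed)
    float d = std::hypot(owner.x - luggage.x, owner.y - luggage.y);
    if (d < a)  return LuggageState::Attended;
    if (d <= b) return LuggageState::Warning;   // neither attended nor unattended
    return LuggageState::Unattended;
}
```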

Algorithm: CONNECTED COMPONENT LABELLING

Connected component labelling (alternatively, connected component analysis) is an algorithmic application of graph theory in which subsets of connected components are uniquely labelled based on a given heuristic. We work on binary digital images, which simplifies the task. The image comprises a grid of pixels, and each pixel carries two things: its colour and its intensity. So we have a graph in which each vertex is a pixel and the edges represent the neighbours of a vertex. The goal is to find all disconnected objects in a video with respect to a certain reference frame extracted from the video.

A raster-scanning algorithm for connected-region extraction is presented below. A compact implementation follows the steps.

On the first pass:
1. Iterate through each element of the data by column, then by row (raster scanning).
2. If the element is not the background:
   1. Get the neighbouring elements of the current element.
   2. If there are no neighbours, uniquely label the current element and continue.
   3. Otherwise, find the neighbour with the smallest label and assign it to the current element.
   4. Store the equivalence between neighbouring labels.

On the second pass:
1. Iterate through each element of the data by column, then by row.
2. If the element is not the background:
   1. Relabel the element with the lowest equivalent label.
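The sketch below implements this two-pass scheme in C++ with 4-connectivity, using a union-find structure to store the equivalences; it is an illustrative implementation, not the project's code.

```cpp
#include <algorithm>
#include <vector>

// Find the lowest equivalent label of x, with path halving.
static int findRoot(std::vector<int>& parent, int x) {
    while (parent[x] != x) { parent[x] = parent[parent[x]]; x = parent[x]; }
    return x;
}

// Two-pass labelling of a binary image (0 = background), 4-connectivity.
std::vector<std::vector<int>> twoPassLabel(const std::vector<std::vector<int>>& img) {
    int rows = (int)img.size(), cols = (int)img[0].size();
    std::vector<std::vector<int>> labels(rows, std::vector<int>(cols, 0));
    std::vector<int> parent{0};            // index 0 is the background sentinel
    int next = 1;

    // First pass: provisional labels and recorded equivalences.
    for (int r = 0; r < rows; ++r) {
        for (int c = 0; c < cols; ++c) {
            if (img[r][c] == 0) continue;                  // background
            int up   = (r > 0) ? labels[r - 1][c] : 0;     // labelled neighbours
            int left = (c > 0) ? labels[r][c - 1] : 0;
            if (up == 0 && left == 0) {                    // no neighbours: new label
                parent.push_back(next);
                labels[r][c] = next++;
            } else if (up != 0 && left != 0) {             // both: smallest label wins
                labels[r][c] = std::min(up, left);
                int a = findRoot(parent, up), b = findRoot(parent, left);
                parent[std::max(a, b)] = std::min(a, b);   // store equivalence
            } else {
                labels[r][c] = (up != 0) ? up : left;
            }
        }
    }
    // Second pass: relabel each element with its lowest equivalent label.
    for (int r = 0; r < rows; ++r)
        for (int c = 0; c < cols; ++c)
            if (labels[r][c] != 0)
                labels[r][c] = findRoot(parent, labels[r][c]);
    return labels;
}
```

In practice, OpenCV's cv::connectedComponents provides an equivalent ready-made routine for binary images.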

Graphical example of the two-pass algorithm

1. The array from which connected regions are to be extracted is given below.

2. After the first pass, the following labels are generated. Note that a total of 7 labels are generated in accordance with the conditions highlighted above.

3. The array generated after the merging of labels is carried out. Here, the label value that was the smallest for a given region "floods" throughout the connected region, giving two distinct labels and hence two distinct regions.

4. The final result, shown in colour, clearly reveals the two different regions that have been found in the array.

General Description of Blob Tracking


The blob tracking system includes five modules, as described below.
The FG/BG Detection module performs foreground/background segmentation for each pixel. The Blob Entering Detection module uses the result (the FG/BG mask) of the FG/BG Detection module to detect new blob objects entering the scene in each frame. The Blob Tracking module is initialised by the Blob Entering Detection results and tracks each newly entered blob. The Trajectory Generation module performs a saving function: it collects all blob positions and saves each whole blob trajectory to disk once it is finished (for example, when tracking is lost).

The Trajectory Post-Processing module performs a blob trajectory smoothing function. This module is optional and may be omitted from a particular pipeline.

Module descriptions:

CvFGDetector — FG/BG Detection module (input: frames; output: FG mask).
This is a virtual class describing the interface of the FG/BG Detection module. Input data: image of the current frame. Output data: FG/BG mask of the current frame.

CvBlobDetector — Blob Entering Detection module (input: frames, FG mask; output: new blobs (position, size)).
This is a virtual class describing the interface of the Blob Entering Detection module. Input data: FG/BG mask of the current frame; list of existing blobs. Output data: list of newly detected blobs.

CvBlobTracker — Blob Tracking module (input: frames, FG mask, new blob positions; output: blobs (id, position, size)).
This is a virtual class describing the interface of the Blob Tracking module. Input data: BGR image of the current frame; FG/BG mask of the current frame. Output data: blobs (id, position, size) in the current frame.

CvBlobTrackGen — Trajectory Generation module (input: blobs (id, position, size), frames, FG mask; output: saved trajectory list).
This is a virtual class describing the interface of the Trajectory Generation module. Its purpose is to save each whole trajectory to a specified file; it can also calculate some features for each blob (using the original image and the FG mask) and save them as well. Input data: blobs in the current frame. Output data: saved trajectory list.

CvBlobTrackPostProc — Trajectory Post-Processing module (input and output: blobs (id, position, size)).
This is a virtual class describing the interface of the Trajectory Post-Processing module. Its purpose is to perform a filtering operation on blob trajectories; for example, this module can be a Kalman filter or another smoothing filter. Input data: blobs in the current frame. Output data: blobs in the current frame.
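The Cv* classes above belong to OpenCV's legacy blob-tracking framework, which is deprecated in current releases. As a hedged sketch, the first two pipeline stages (FG/BG detection and blob entering detection) can be approximated with modern OpenCV APIs; the input file name, shadow threshold and minimum blob area below are illustrative assumptions.

```cpp
// FG/BG detection with a mixture-of-Gaussians model, then blob detection
// via connected components on the foreground mask.
#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cap("surveillance.avi");               // hypothetical input
    auto fgDetector = cv::createBackgroundSubtractorMOG2(); // FG/BG Detection stage
    cv::Mat frame, fgMask, labels, stats, centroids;

    while (cap.read(frame)) {
        fgDetector->apply(frame, fgMask);                   // FG mask for this frame
        // MOG2 marks shadows with value 127; keep only confident foreground.
        cv::threshold(fgMask, fgMask, 200, 255, cv::THRESH_BINARY);
        // Blob detection: connected components on the FG mask.
        int n = cv::connectedComponentsWithStats(fgMask, labels, stats, centroids);
        for (int i = 1; i < n; ++i) {                       // label 0 = background
            if (stats.at<int>(i, cv::CC_STAT_AREA) < 200) continue; // drop noise
            cv::Rect box(stats.at<int>(i, cv::CC_STAT_LEFT),
                         stats.at<int>(i, cv::CC_STAT_TOP),
                         stats.at<int>(i, cv::CC_STAT_WIDTH),
                         stats.at<int>(i, cv::CC_STAT_HEIGHT));
            cv::rectangle(frame, box, cv::Scalar(0, 0, 255), 2);
        }
        cv::imshow("blobs", frame);
        if (cv::waitKey(30) == 27) break;                   // Esc to quit
    }
}
```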

Tools Used
Development Tool: MS Visual Studio 2008
Languages Used: C++ with OpenCV
Platform: Windows 7

Conclusion
This concludes our project, Unattended Object Detection In Surveillance Video. Developing this system took a great deal of effort, and it gave all of us much satisfaction. No task in this field can ever be called perfect, and further improvement of this system is certainly possible. We learned many things and gained considerable knowledge about the development field, and we hope this will prove fruitful to us.

Future Scope
This project will help the police in detecting unattended objects. It is developed so that it can view a wide area through multiple cameras, typically in public areas that are susceptible to crowding. Tracking people in a crowded place is a hard problem due to occlusion, so the system helps in preventing theft. It can help prevent bomb explosions or other terrorist activities at public or crowded places, and it is easy to maintain in future.

References
[1] Robert T. Collins, Alan J. Lipton, Takeo Kanade, Hironobu Fujiyoshi, David Duggins, Yanghai Tsin, David Tolliver, Nobuyoshi Enomoto, Osamu Hasegawa, Peter Burt and Lambert Wixson. A System for Video Surveillance and Monitoring. CMU-RI-TR-00-12.

[2] Aishy Amer. Voting-based simultaneous tracking of multiple video objects. Concordia University, Electrical and Computer Engineering, Montreal, Quebec, Canada.

[3] Xiaotong Yuan and Stan Z. Li. Learning Feature Extraction and Classification for Tracking Multiple Objects: A Unified Framework. Center for Biometrics and Security Research & National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, China, 100080.

[4] I. Haritaoglu, D. Harwood, and L. S. Davis. W4: A real-time system for detecting and tracking people. In Computer Vision and Pattern Recognition, pages 962-967, 1998.

[5] http://opencv.sourcearchive.com/documentation/1.0.0-6.1/cvbgfg__gaussmix_8cpp-source.html
[6] http://read.pudn.com/downloads99/sourcecode/graph/text_recognize/405572/cvGaussModel.cpp.htm
[7] http://mmlab.disi.unitn.it/wiki/index.php/Mixture_of_Gaussians_using_OpenCV
[8] http://en.wikipedia.org/wiki/Connected-component_labeling
[9] Stephen Johnson (2006). Stephen Johnson on Digital Photography. O'Reilly. ISBN 059652370X.

[10] http://books.google.com/books?id=0UVRXzF91gcC&pg=PA17&dq=grayscale+black-andhite-continuous-tone&ei=XlwqSdGVOILmkwTalPiIDw
[11] "Conversion to Greyscale", http://gimp-savvy.com/BOOK/node54.html
[12] Liyuan Li and Weimin Huang. Foreground Object Detection from Videos Containing Complex Background. Institute for Infocomm Research, 21 Heng Mui Keng Terrace, Singapore 119613.

[13] L. Wang, W. Hu, and T. Tan. Recent developments in human motion analysis. Pattern Recognition, 36(3):585-601, March 2003.
[14] C. Stauffer and W. Grimson. Adaptive background mixture models for real-time tracking. In Proc. of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 246-252, 1999.
