
IPASJ International Journal of Mechanical Engineering (IIJME)
Web Site: http://www.ipasj.org/IIJME/IIJME.htm
Email: editoriijme@ipasj.org
Volume 7, Issue 6, June 2019    ISSN 2321-6441

IRIS SCAN BASED EMERGENCY SYSTEM


Pathik Desai, Hitarth Mehta
B.Tech Student, Dept of Mechanical Engineering, Charusat University, Anand, Gujarat
B.E Student, Dept of Mechanical Engineering, L.J.I.T (GTU), Ahmedabad, Gujarat

ABSTRACT
This project presents a method for real-time detection of the human face and outlines a novel approach for the real-time detection of car driver drowsiness. A number of accidents take place due to driver fatigue and drunk driving. A computer scanner and an eye-detection application are combined into an embedded system to achieve this goal. Real-time face and eye detection is performed using a Haar feature-based cascade classifier and a 68-point facial landmark shape predictor. The
proposed system is realized with a laptop camera and a computer with Python (IDLE) and OpenCV installed.
Keywords: Human face, Drowsiness, Embedded System, Python-IDLE

1. INTRODUCTION
With the increase in the number of road accidents due to driver drowsiness and drunk driving, there is a need for a real-time embedded system that warns the driver when he/she is not paying attention to the road. This embedded system combines electronic systems with the Python programming language. It can also be extended and linked with a mechanical system (e.g., a disc brake) to apply automatic braking. The main objective of this project is to warn the driver about drowsiness while he/she is driving and thereby prevent road accidents.

2. PAST WORK AND FIGURES


We had detected the eyes and face using various Python modules and the Haar cascade classifier before we came to know about the shape predictor for facial landmarks.
2.1 Module
A module is a Python object with arbitrarily named attributes that one can bind and reference. Put simply, a module is a file consisting of Python code, comparable to a code library: a file containing a set of functions one wants to include in one's application. There are numerous modules available on the internet. Listed below are some of those which have been imported in this project.
• OpenCV
• NumPy
• dlib
• distance (from scipy.spatial)
• VideoStream (from imutils.video)
• face_utils (from imutils)
• playsound
• imutils
• time
2.2 Haar-Cascade classifier
Object detection using Haar feature-based cascade classifiers is an effective object detection method proposed by Paul Viola and Michael Jones in their 2001 paper, "Rapid Object Detection using a Boosted Cascade of Simple Features".
It is a machine learning based approach where a cascade function is trained from a large number of positive and negative images. It is then used to detect objects in other images. It is well known for being able to detect faces and body parts in an image, but can be trained to identify almost any object.
The algorithm has four stages:


I. Haar feature selection
II. Creating integral images
III. AdaBoost training
IV. Cascading classifiers
Here we will work with face detection. Initially, the algorithm needs a large number of positive images (images of faces) and negative images (images without faces) to train the classifier. Then we need to extract features from them. The first step is to collect the Haar features. A Haar feature considers adjacent rectangular regions at a specific location in a detection window, sums up the pixel intensities in each region and calculates the difference between these sums. For this, the Haar features shown in the image below are used. Each feature is a single value obtained by subtracting the sum of pixels under the white rectangle from the sum of pixels under the black rectangle.

Fig. 2.1 Concept of Haar-Cascade
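
As a toy numerical illustration of such a feature value (the 4x4 patch and its pixel values are invented for this example), it can be computed directly with NumPy:

import numpy as np

# A made-up 4x4 grayscale patch: dark left half, bright right half
patch = np.array([[10, 10, 200, 200],
                  [10, 10, 200, 200],
                  [10, 10, 200, 200],
                  [10, 10, 200, 200]])

# Two-rectangle edge feature: sum of pixels under the black (left)
# rectangle minus sum of pixels under the white (right) rectangle
black = patch[:, :2].sum()   # 8 pixels * 10  = 80
white = patch[:, 2:].sum()   # 8 pixels * 200 = 1600
feature = black - white      # -1520: a large magnitude flags a vertical edge
print(feature)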

But among all these features we calculate, most are irrelevant. For example, consider the image below. The top row shows two good features. The first feature selected seems to focus on the property that the region of the eyes is often darker than the region of the nose and cheeks. The second feature selected relies on the property that the eyes are darker than the bridge of the nose. But the same windows applied to the cheeks or any other region are irrelevant. So how do we select the best features out of 160,000+ features? This is achieved by AdaBoost.

Fig. 2.2 Haar cascade features

Thus, we apply each and every feature on all the training images. For each feature, it finds the best threshold which will classify the faces as positive or negative. Obviously, there will be errors or misclassifications. We select the features
with the minimum error rate, which means they are the features that most accurately classify the face and non-face images. (The process is not as simple as this. Each image is given an equal weight in the beginning. After each classification, the weights of misclassified images are increased. Then the same process is repeated: new error rates are calculated, along with new weights. The process continues until the required accuracy or error rate is achieved or the required number of features is found.) The final classifier is a weighted sum of these weak classifiers. They are called weak because each one alone cannot classify the image, but together with the others it forms a strong classifier. Their final setup had around 6,000 features. (Imagine a reduction from 160,000+ features to 6,000 features. That is a big gain.) So now you take an image, take each 24x24 window, apply the 6,000 features to it, and check whether it is a face or not. Isn't that a little inefficient and time consuming? Yes, it is. The authors' solution is the cascade: the features are grouped into stages of classifiers which are applied one by one, so that a window which fails an early stage is discarded immediately instead of being tested against all remaining features.

2.3 PAST WORK (Using Haar Cascade Classifier)


The first step of the programming is to import external sources such as libraries. As we used the NumPy and OpenCV libraries, we imported them with the import command, as shown in the figure below.

Fig. 2.3 Importing necessary packages
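
A minimal sketch of those import commands:

import cv2           # OpenCV
import numpy as np   # NumPy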

After importing the libraries, we have to load the cascade classifiers used in the particular application. In this case, two cascade classifiers are used: one for the face and one for the eyes. One important thing to keep in mind is that the cascade classifier files must be kept in the same folder as the program file. As the cascade classifier is called through OpenCV, one has to use cv2.CascadeClassifier, as shown below.
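
A sketch of this step, assuming the standard cascade file names that ship with OpenCV:

# Both .xml files must sit in the same folder as this script
face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
eye_cascade = cv2.CascadeClassifier('haarcascade_eye.xml')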

Now, we have to open the laptop camera and start continuously streaming from it. For that, we have to use a while loop. If we do not use a while loop, it will not give a video stream, and all we can see is a single frame captured at the instant the laptop camera is activated. Inside the while loop, we have to use a for loop, after defining the cascade classifiers, for the detection of faces and eyes. When the camera is activated, we want that streaming frame to be shown on the laptop

display. For that, we have to use the following command:


cv2.imshow('file_name', img)

So the whole program will look like the following:

Fig. 2.5 Program for eyes and face detection (By Haar cascade)
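
A minimal sketch of such a program, following the steps described above (the window name and the 'q' quit key are our assumptions):

import cv2
import numpy as np   # listed in Fig. 2.3, though not strictly needed here

# Load the face and eye cascade files (kept in the script's folder)
face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
eye_cascade = cv2.CascadeClassifier('haarcascade_eye.xml')

# Open the built-in laptop camera (index 0)
cap = cv2.VideoCapture(0)

while True:
    # Grab one frame; without this loop we would only see a single capture
    ret, img = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Detect faces, then search for eyes only inside each face region
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    for (x, y, w, h) in faces:
        cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)
        roi_gray = gray[y:y + h, x:x + w]
        roi_color = img[y:y + h, x:x + w]
        for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(roi_gray):
            cv2.rectangle(roi_color, (ex, ey), (ex + ew, ey + eh), (0, 255, 0), 2)

    # Show the streaming frame on the laptop display
    cv2.imshow('img', img)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()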
After running the program, the result is:

Fig. 2.6 Result by Haar Cascade classifier


So that was our past work. But we did not use the Haar cascade classifier when detecting drowsiness. Why? As we saw, the Haar cascade classifier sums up the pixel intensities in each region and calculates the difference between these sums. In our case, when a person with a Tilak on his forehead faces the laptop camera while the program is running,

the classifier takes that Tilak as an eye and shows it in a square box. To remove this problem, one has to use a shape predictor in place of the Haar cascade classifier.

3. PROJECT WORK AND OUTPUT


3.1 The drowsiness detector algorithm
The general flow of our drowsiness detection algorithm is fairly straightforward. First, we will set up a camera that monitors a stream for faces. We have used the built-in laptop camera.

Fig. 3.1 Look for faces in input video stream


If a face is found, we apply facial landmark detection and extract the eye regions:

Fig. 3.2 Facial landmark localization to extract the eye regions

Now that we have the eye regions, we can compute the eye aspect ratio (EAR) to determine whether the eyes are closed:
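
Labelling the six landmarks of one eye as p1 through p6 (p1 and p4 at the corners, p2 and p3 on the upper lid, p6 and p5 on the lower lid), the eye aspect ratio as defined by Soukupová and Čech, which this detector follows, is

\[ \mathrm{EAR} = \frac{\lVert p_2 - p_6 \rVert + \lVert p_3 - p_5 \rVert}{2\,\lVert p_1 - p_4 \rVert} \]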


Fig. 3.3 Compute the eye aspect ratio to determine if eyes are closed
If the eye aspect ratio indicates that the eyes have been closed for a sufficiently long amount of time, we'll sound an alarm to wake up the driver:

Fig. 3.4 Sound an alarm if the eyes are closed for a sufficiently long time

Now, we'll implement the drowsiness detection algorithm detailed above using OpenCV, dlib and Python.


3.2 Building the drowsiness detector with OpenCV


To start our implementation, open up a new file, name it detect_drowsiness.py, and insert the following code:
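
A sketch of those imports, consistent with the modules listed in Section 2.1:

# detect_drowsiness.py -- package imports
from scipy.spatial import distance as dist   # Euclidean distances for the EAR
from imutils.video import FileVideoStream    # threaded file video stream
from imutils.video import VideoStream        # threaded camera stream
from imutils import face_utils               # facial landmark helpers
from threading import Thread                 # play the alarm off the main thread
import numpy as np
import playsound
import argparse
import imutils
import time
import dlib
import cv2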

Lines 2-12 import our required Python packages. Next, we need to define our sound_alarm function, which accepts a path to an audio file residing on disk and then plays the file:
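
A minimal sketch:

def sound_alarm(path):
    # Play the alarm sound located at the given path on disk
    playsound.playsound(path)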

We also need to define the eye_aspect_ratio function, which computes the ratio of the distances between the vertical eye landmarks to the distance between the horizontal eye landmarks:
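
A sketch, with the landmark ordering of the formula in Section 3.1:

def eye_aspect_ratio(eye):
    # Euclidean distances between the two pairs of vertical eye
    # landmarks ((p2, p6) and (p3, p5))
    A = dist.euclidean(eye[1], eye[5])
    B = dist.euclidean(eye[2], eye[4])
    # Euclidean distance between the horizontal eye landmarks (p1, p4)
    C = dist.euclidean(eye[0], eye[3])
    # Compute and return the eye aspect ratio
    return (A + B) / (2.0 * C)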

The return value of the eye aspect ratio will be approximately constant while the eye is open. The value will then rapidly decrease towards zero during a blink.
If the eye is closed, the eye aspect ratio will again remain approximately constant, but will be much smaller than the ratio when the eye is open.
To visualize this, consider the following figure:


On the top-left we have an eye that is fully open, with the eye's facial landmarks plotted. On the top-right we have an eye that is closed. The bottom then plots the eye aspect ratio over time. As we can see, the eye aspect ratio is constant (indicating the eye is open), then rapidly drops to zero, then increases again, indicating that a blink has taken place.
In our drowsiness detector case, we'll be monitoring the eye aspect ratio to see if the value falls but does not increase again, thus implying that the person has closed their eyes.
Next, let's parse our command line arguments:
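
A sketch of the argument parsing (the short option letters are our assumption):

ap = argparse.ArgumentParser()
ap.add_argument("-p", "--shape-predictor", required=True,
                help="path to dlib's facial landmark predictor")
ap.add_argument("-a", "--alarm", type=str, default="",
                help="path to an alarm .WAV file (optional)")
ap.add_argument("-w", "--webcam", type=int, default=0,
                help="index of the webcam on the system")
args = vars(ap.parse_args())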

Our drowsiness detector requires one command line argument followed by two optional ones, each of which is detailed below:
--shape-predictor: the path to dlib's pre-trained facial landmark detector.
--alarm: optionally, the path to an input audio file to be used as an alarm.
--webcam: an integer controlling the index of your built-in webcam/USB camera.
Now that our command line arguments have been parsed, we need to define a few important variables:
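
A sketch of these definitions (the line references below refer to the original listing; TOTAL is our assumed addition for the blink counting used later in this section):

# EAR threshold below which we treat the eyes as closed (Line 48)
EYE_AR_THRESH = 0.3
# Number of consecutive below-threshold frames before the alarm (Line 49)
EYE_AR_CONSEC_FRAMES = 48

# Consecutive-frame counter and alarm state flag (Lines 53-54)
COUNTER = 0
ALARM_ON = False
# Running total of detected blinks (assumed)
TOTAL = 0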

Line 48 defines the EYE_AR_THRESH. If the eye aspect ratio falls below this threshold, we'll start counting the number of frames the person has closed their eyes for.
If the number of frames the person has closed their eyes for exceeds EYE_AR_CONSEC_FRAMES (Line 49), we'll sound an alarm.
Experimentally, we've found that an EYE_AR_THRESH of 0.3 works well in a variety of situations (although you may need to tune it yourself for your own application).
We've also set EYE_AR_CONSEC_FRAMES to 48, meaning that if a person has closed their eyes for 48 consecutive frames, we'll play the alarm sound. You can make the drowsiness detector more sensitive by decreasing EYE_AR_CONSEC_FRAMES; similarly, you can make it less sensitive by increasing it.
Line 53 defines COUNTER, the total number of consecutive frames where the eye aspect ratio is below EYE_AR_THRESH.
If COUNTER exceeds EYE_AR_CONSEC_FRAMES, then we'll update the Boolean ALARM_ON (Line 54).
The dlib library ships with a Histogram of Oriented Gradients (HOG) based face detector along with a facial landmark predictor; we instantiate both of these in the following code block:
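
A sketch:

# Initialize dlib's HOG-based face detector, then load the facial
# landmark predictor from the --shape-predictor path
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor(args["shape_predictor"])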

The facial landmarks produced by dlib are an indexable list:


Fig. 3.6 Visualizing the 68 facial landmark coordinates.


We can therefore determine the starting and ending array slice index values for extracting (x, y)-coordinates for both
the left and right eye below:
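
A sketch, using the lookup table provided by imutils' face_utils:

# Start/end slice indexes of the left and right eye landmarks
(lStart, lEnd) = face_utils.FACIAL_LANDMARKS_IDXS["left_eye"]
(rStart, rEnd) = face_utils.FACIAL_LANDMARKS_IDXS["right_eye"]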

Next, we need to decide if we are working with a file-based video stream or a live USB/webcam/Raspberry Pi camera
video stream:
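
A sketch of this step; note that the file-based branch assumes a --video argument that is not among the three arguments listed above:

# Default: read a video file from disk (assumes a --video argument)
vs = FileVideoStream(args["video"]).start()
fileStream = True
# Line 62: uncomment for a built-in webcam / USB camera
# vs = VideoStream(src=args["webcam"]).start()
# Line 63: uncomment for a Raspberry Pi camera module
# vs = VideoStream(usePiCamera=True).start()
# Line 64: uncomment with either line above (no file on disk)
# fileStream = False
time.sleep(1.0)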

If you’re using a file video stream, then leave the code as is. Otherwise, if you want to use a built-in webcam or USB
camera, uncomment Line 62.
For a Raspberry Pi camera module, uncomment Line 63.
If you have uncommented either Line 62 or Line 63, then uncomment Line 64 as well to indicate that you are not
reading a video file from disk.
Finally, we have reached the main loop of our script:
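
A sketch of the top of the loop:

# Loop over frames from the video stream (Line 68)
while True:
    # For a file stream, stop when no frames remain (Lines 71-72)
    if fileStream and not vs.more():
        break

    # Grab the next frame, resize it, and convert it to grayscale
    # (Lines 77-79)
    frame = vs.read()
    frame = imutils.resize(frame, width=450)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Detect faces in the grayscale frame (Line 82)
    rects = detector(gray, 0)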


On Line 68 we start looping over frames from our video stream.


If we are accessing a video file stream and there are no more frames left in the video, we break from the loop (Lines 71
and 72).
Line 77 reads the next frame from our video stream, followed by resizing it and converting it to grayscale (Lines 78 and
79).
We then detect faces in the grayscale frame on Line 82 via dlib's built-in face detector.
We now need to loop over each of the faces in the frame and then apply facial landmark detection to each of them:
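
A sketch of the per-face processing:

    # Loop over the detected faces
    for rect in rects:
        # Determine the facial landmarks for the face region, then
        # convert the (x, y)-coordinates to a NumPy array (Lines 89-90)
        shape = predictor(gray, rect)
        shape = face_utils.shape_to_np(shape)

        # Extract the left and right eye coordinates (Lines 94-95)
        leftEye = shape[lStart:lEnd]
        rightEye = shape[rStart:rEnd]

        # Compute the eye aspect ratio for each eye, then average
        # the two together (Lines 96-97)
        leftEAR = eye_aspect_ratio(leftEye)
        rightEAR = eye_aspect_ratio(rightEye)
        ear = (leftEAR + rightEAR) / 2.0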

Line 89 determines the facial landmarks for the face region, while Line 90 converts these (x, y)-coordinates to a NumPy
array.
Using our array slicing techniques from earlier in this script, we can extract the (x, y)-coordinates for both the left and right eye, respectively (Lines 94 and 95).
From there, we compute the eye aspect ratio for each eye on Lines 96 and 97. Our next code block simply handles
visualizing the facial landmarks for the eye regions themselves:
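
A sketch:

        # Draw the convex hull around each eye to visualize the
        # detected eye regions
        leftEyeHull = cv2.convexHull(leftEye)
        rightEyeHull = cv2.convexHull(rightEye)
        cv2.drawContours(frame, [leftEyeHull], -1, (0, 255, 0), 1)
        cv2.drawContours(frame, [rightEyeHull], -1, (0, 255, 0), 1)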

At this point we have computed our (averaged) eye aspect ratio, but we haven't actually determined whether a blink has taken place. This is taken care of in the next section:
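
A sketch of that check (in the drowsiness variant, reaching EYE_AR_CONSEC_FRAMES would instead set ALARM_ON and start sound_alarm on a background Thread rather than counting a blink):

        # Below the blink threshold: count this frame (Lines 111-112)
        if ear < EYE_AR_THRESH:
            COUNTER += 1
        # Otherwise the EAR is back above the threshold (Line 116)
        else:
            # Enough consecutive low-EAR frames means a completed
            # blink (Lines 119-120)
            if COUNTER >= EYE_AR_CONSEC_FRAMES:
                TOTAL += 1
            # Reset the consecutive-frame counter (Line 123)
            COUNTER = 0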


Line 111 makes a check to see if the eye aspect ratio is below our blink threshold; if it is, we increment the number of consecutive frames that indicate a blink is taking place (Line 112).
Otherwise, Line 116 handles the case where the eye aspect ratio is not below the blink threshold.
In this case, we make another check on Line 119 to see if a sufficient number of consecutive frames contained an eye aspect ratio below our pre-defined threshold. If the check passes, we increment the TOTAL number of blinks (Line 120). We then reset the consecutive-frame COUNTER (Line 123).
Our final code block simply handles drawing the number of blinks on our output frame, as well as displaying the
current eye aspect ratio:
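
A sketch of the drawing code and the end of the loop:

        # Draw the blink total and the current EAR on the frame
        cv2.putText(frame, "Blinks: {}".format(TOTAL), (10, 30),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)
        cv2.putText(frame, "EAR: {:.2f}".format(ear), (300, 30),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)

    # Show the frame; press 'q' to quit
    cv2.imshow("Frame", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

# Clean up
cv2.destroyAllWindows()
vs.stop()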

3.3 Blink detection results:


So, after giving the input arguments and running the program, if the eyes of the person facing the camera are closed, it shows the detection result on the output frame.
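
A typical invocation (the model and audio file names are assumptions):

python detect_drowsiness.py --shape-predictor shape_predictor_68_face_landmarks.dat --alarm alarm.wav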

