
Intelligent Memory System

Final Year Project Report


Presented
by
Salsabeel Kiani
CIIT/SP15-BET-056/ISB

Shaina Laraib
CIIT/SP15-BET-091/ISB

M. Naufal Mansoor
CIIT/SP15-BET-078/ISB

In Partial Fulfillment
of the Requirement for the Degree of

Bachelor of Science in Electrical (Telecommunication) Engineering


DEPARTMENT OF ELECTRICAL ENGINEERING

COMSATS INSTITUTE OF INFORMATION TECHNOLOGY,


ISLAMABAD
Jan 2018
Declaration
We hereby declare that this project, neither as a whole nor in part, has been copied from any source. It is further declared that we have developed this project and the accompanying report entirely on the basis of our personal efforts, made under the sincere guidance of our supervisor. No portion of the work presented in this report has been submitted in support of any other degree or qualification at this or any other university or institute of learning; should this be found to be the case, we shall stand responsible.

Signature:______________
Name: Salsabeel Kiani

Signature:______________
Name: Shaina Laraib

Signature:______________
Name: M. Naufal Mansoor

COMSATS INSTITUTE OF INFORMATION TECHNOLOGY,


ISLAMABAD
Jan 2018
Intelligent Memory System
An Undergraduate Final Year Project Report submitted to the
Department of
ELECTRICAL ENGINEERING

As a Partial Fulfillment for the award of Degree


Bachelor of Science in Electrical (Telecommunication) Engineering
by
Name Registration Number
Salsabeel Kiani CIIT/SP15-BET-056/ISB

Shaina Laraib CIIT/SP15-BET-091/ISB

M. Naufal Mansoor CIIT/SP15-BET-078/ISB

Supervised by

Dr. Amir Rashid Chaudhary


Assistant Professor,
Department Of Electrical Engineering
CIIT Islamabad

COMSATS INSTITUTE OF INFORMATION TECHNOLOGY,


ISLAMABAD
Jan 2018
Final Approval
This Project Titled
Intelligent Memory System

Submitted for the Degree of


Bachelor of Science in Electrical (Telecommunication) Engineering
by
Name Registration Number
Salsabeel Kiani CIIT/SP15-BET-056/ISB

Shaina Laraib CIIT/SP15-BET-091/ISB

M. Naufal Mansoor CIIT/SP15-BET-078/ISB

has been approved for

COMSATS INSTITUTE OF INFORMATION TECHNOLOGY,


ISLAMABAD

_____________________
Supervisor
Dr. Amir Rashid Chaudhary,
Assistant Professor

______________________ ______________________
Internal Examiner-1 Internal Examiner-2
Dr. Moazzam Tiwana Adeel Qureshi
Assistant Professor Lecturer

______________________
External Examiner
Dr. Saleem Aslam
Associate Professor

_____________________
Head
Department of Electrical Engineering
Dedication
In the name of Allah, the Most Gracious and the Most Merciful. Firstly, we would like to express our profound appreciation to our supervisor and to the people around us during the work on our bachelor's thesis. Secondly, we would like to show our gratitude to our parents for their encouraging attitude throughout the entire span of our bachelor program and for their moral as well as financial support. We are and will remain grateful to them for their love.
Acknowledgements
We are sincerely grateful to all our teachers, especially our supervisor Dr. Amir Rashid
Chaudhary and consultant Sir Saleh Usman. Without their assistance it would have been
almost impossible to complete our project. We are very thankful for their boundless
guidance, direction, inspiration and motivation throughout the period of this project.

Salsabeel Kiani
Shaina Laraib
M. Naufal Mansoor
Table of Contents
1. Introduction 1
1.1 Objectives………………………………………………..……....1

1.2 Introduction…………………………………………………..…..1

1.3 Literature Review……………………………………………..…..2


1.3.1 Tile…………...………………………………………………...……...2
1.3.2 PixiePoint…..………………….……………………………….…….3
1.3.3 TrackR…………………………………………………...……………3
1.3.4 Chipolo………………………………………………………………..4

1.4 Project Description………………………………………..…..….5

1.5 Purpose…………………………………………………….……...5

1.6 Why Our Project Is Best………………………………………….6

2. System Analysis 6
2.1 Android Studio……………………………………………….....................6
2.1.1 Features of Android Studio…………………………………….…….6
2.1.2 Project Structure……………………………………………………...7
2.1.3 The User Interface……………………………………………………8
2.1.4 Tool Windows……………………………..…………………………9
2.1.5 Code Completion…...…………………………………………..…..10
2.1.6 Style and Formatting.…………………………………………..…...11
2.1.7 Gradle Build System…………………………………………..…….12
2.1.8 Code Inspection…………………………………………..…………12

2.2 Tools and Description……………………………………….…13


2.2.1 Aspects……………………………………………………………….13
2.2.1.1 Software Aspect…………………………………………….13
2.2.2 Technology Platforms……………………………………...……...…13
2.2.2.1 Android Studio……………………………………………...13
2.2.3 Techniques……………………………………………………….…...13
2.2.3.1 Digital Image Processing……………………………………13
2.2.3.2 Template Matching………………………………………….15
2.2.4 Library………………………………………………………………..16
2.2.4.1 Open-cv……………………………………………………...16
2.2.4.2 TensorFlow………………………………………………….17
2.2.4.3 LabelImg……………………………………………………17
2.2.5 Hardware…………………………………………………………….18
2.2.5.1 Raspberry Pi3……………………………………………….....18
2.2.5.2 Generations of Raspberry Pi……...……………………………18

3. Software Features 21
3.1 Software Features……………………………………………….21
3.1.1 Blinds Disability………………………………………………….....21
3.1.2 Deaf Disability……………………………………………………...22
3.1.3 Dumb Disability…………………………………………………….23
3.1.4 Gait Disability……………………………………………………....23
3.1.4.1 Diplegic Gait………………………………………………25
3.1.4.2 Hemiplegic Gait…………………………………………...25
3.1.4.3 Steppage Gait……………………………………………...26
3.1.4.4 Waddling Gait……………………………………………..27
3.1.4.5 Stomping Gait……………………………………………..27

4. Outcomes of Blind Section 28
4.1 Main Display of Application…………………………………...28

4.2 Display of Blind Section……………………………………….28


4.2.1 Weather Update…………………………………………………......29
4.2.2 Vehicle Detection………………………………………………..….30
4.2.3 Human Detection……………………………………………………30
4.2.4 Color Detection…………………………………………………..…31
4.2.5 Map Update……………………………………………………..…..32
4.2.6 OCR…………………………………………………………………32
4.2.7 Object Guidance…………………………………………………….33
4.2.8 Currency Detection………………………………………………….33

5. Outcomes of Deaf and Dumb Section 34


5.1 Outcomes of Deaf Section…………………………………….34
5.1.1 Text To Speech………………………………………………………34
5.1.2 Text To Sign………………………………………………………….35
5.1.3 Speech To Text………………………………………………………35
5.1.4 Speech to Sign……………………………………………………….35

5.2 Outcomes of Dumb Section…………………………………..36


5.2.1 Text To Speech………………………………………………………37

6. Outcomes of Gait Disability Section 38


6.1 Display of Gait Section………………………………………...38
6.1.1 Diplegic Gait………………………………………………….………39
6.1.2 Hemiplegic Gait………………………………………………..……..39
6.1.3 Steppage Gait………………………………………………………....40
6.1.4 Waddling Gait………………………………………………………...41
6.1.5 Stomping Gait………………………………………………………...42

7. Conclusion And Future Work 43


7.1 Conclusion………………………………………………………43
7.2 Novelty…………………………………………………………..43
7.3 Initial Defined Risk……………………………………………..44
7.4 Commercial Aspects…………………………………………..44
7.5 Limitations………………………………………………………45
7.6 Future Work…………………………………………………….45

Appendix-A 47
Bibliography 51
List of Acronyms

IMS……………………………………………..Intelligent Memory System


IDE……………………………………………..Integrated Development Environment
(S&S)……………………………………..…….Security and Surveillance
List of Figures
Figure 1 Vision Smart Glasses …………...…………………………………….3
Figure 2 Finger Reader………………………………………………………….4
Figure 3 Uhear App……………………………………………………………..4
Figure 4 Otosense App………………………………………………………….5
Figure 5 Talk - Text To Voice Free App………………………………………..6
Figure 6 Rifton Gait Trainer……………………………………………………6
Figure 7 Ambulatory Devices…………………………………………………..7
Figure 8 Legsys…………………………………………………………………7
Figure 9 Problem View Of Files Of Project…………………………………...11
Figure 10 Main Windows Of Android Studio…………………………………..11
Figure 11 Code Without Format……………………………..…………………14
Figure 12 Code With Format…..……………………………………………….14
Figure 13 Lint Inspection Result………………………………………………..15
Figure 14 Flow Chart Of Image Processing…………………………………….18
Figure 15 Flow Chart Of Disability……………………………………………..21
Figure 16 Flow Chart Of Blind Disability……………………………………….22
Figure 17 Flow Chart Of Deaf Disability……………………………………….23
Figure 18 Flow Chart Of Dumb Disability……………………………………...23
Figure 19 Flow Chart Of Gait Disability………………………………………..24
Figure 20 Flow Chart Of Working Of Gait Application………………………..24
Figure 21 Diplegic Gait Different Patterns……………………………………...25
Figure 22 Hemiplegic Gait Posture…………………………………………......26
Figure 23 Steppage Gait Foot Drop Position……………………………………26
Figure 24 Waddling Gait Walking Posture……………………………………...27
Figure 25 Main Display Of DMCS..…………………………………………….28
Figure 26 Display Of Blind Section…………………………………………….29
Figure 27 Weather Display Result………………………………………………29
Figure 28 Vehicle Detection Result……………………………………………..30
Figure 29 Human Detection Result……………………………………………..31
Figure 30 Color Detection Result……………………………………………….31
Figure 31 Map Update Result…………………………………………………...32
Figure 32 OCR Result………………………………………………………...…32
Figure 33 Object Guidance Result………………………………………………33
Figure 34 Currency Detection Result…………………………………………...33
Figure 35 Deaf Section Result…………………………………………………..34
Figure 36 Youtube Display For Sign Language………………………………...35
Figure 37 Dumb Section Result…………………………………………………36
Figure 38 Display Of Gait Disability…………………………………………….37
Figure 39 The Angle Detection Using Different Colored Ribbons……………..38
Figure 40 Angle Between Color Bands…………………………………………38
Figure 41 Analysis Of The Foot Angle………………………………………39
Figure 42 Graphical Result Of Diplegic Gait…………………………………...39
Figure 43 Analysis Of The Front View……………………………………...40
Figure 44 Graphical Result Of Hemiplegic Gait………………………………..40
Figure 45 Analysis Of Inner Knee Angle…………………………………...40
Figure 46 Graphical Result Of Steppage Gait…………………………………..41
Figure 47 Analysis Of The Back View………………………………………41
Figure 48 Graphical Result Of Waddling Gait………………………………….41
Figure 49 Analysis Of The Side View……………………………………….42
Figure 50 Graphical Result Of Stomping Gait………………………………….42
Figure 51 Flow Chart Of Hardware Work………………………………………46
List of Tables
Table 1 Keyboard shortcuts for code completion.…...………………………13
Abstract

Losing valuables is commonplace, but finding them again can feel like a treasure hunt.
The main aim of our project is to assist people who misplace their possessions
frequently, to aid mentally impaired patients, and to simultaneously provide security
and surveillance. Through this project we aim to locate everyday valuables (such as
house, office and car keys, glasses, and wallets) without human exertion. Our system
searches scenes from the live stream of a camera fixed in a room to find lost valuables.
The IMS (Intelligent Memory System) is capable of maintaining, manipulating and storing
data structures captured from the live feed. It then sends the relevant information to
the application's database, which processes it further and provides the last known
location of the object the user is trying to find. IMS supports the memory of patients
suffering from Alzheimer's disease or dementia by reducing the time and physical effort
spent searching for their belongings.
Chapter-1
Introduction
1.1 Objectives
Major objectives of our project are:
● To save time and energy
● To avoid frustration
● To help Alzheimer's patients
● To secure individuals' valuables
● To provide better surveillance and security

1.2 Introduction
Over the years, the popularity of mobile phones has increased enormously. Almost
every other person carries a mobile phone, and dependence on these devices has grown
dramatically, which gave the incentive for an Intelligent Memory System (IMS). In this
day and age people tend to be in a hurry and own more possessions than they care to
keep track of, with the result that those possessions are misplaced or stolen. The
Intelligent Memory System is unique compared with other products that serve a similar
purpose: the number of items it can locate is significantly higher, and it offers longer
battery life and a longer range of connectivity with the user's Android device. Through
the Intelligent Memory System we aim to make people's lives easier and simpler by
taking advantage of the relationship between a person and their machine.

The Intelligent Memory System is a combination of sophisticated hardware and software.
Its main objectives are to reduce the human exertion involved in finding missing items,
to increase security and surveillance, and to aid mentally impaired patients. A video
camera interfaced with the Raspberry Pi captures images for an algorithm written in
Python that performs the video processing, and the IMS Android application then presents
the location in the simplest manner to avoid any confusion. The working process of the
system can be divided into two primary tasks along a standard procedure: tracking and
registering. Tracking is usually performed against a known pattern stored in a
database/memory.
Figure 1: GPS location

Figure 1 shows that a mobile phone is used to obtain the last known location of the lost
object.

The technique most commonly used to achieve successful tracking relies on an optical
device (a camera or mobile phone) and image processing. With smartphones being so
common nowadays, almost everyone will be able to use IMS, which should lead to broad
adoption of this project.

1.3 Literature Review


Previous work on similar projects is summarized below.
1.3.1 Tile [1]

Figure 2: Tile

Tile is an American product that allows its users to locate their lost items over a
Bluetooth 4.0 connection. The device comes with a companion mobile application available
for Android and iOS. It has a built-in speaker, which plays an important role in tracking
valuables, and a working range of approximately 150 feet. The battery lasts about one
year and is neither replaceable nor rechargeable.
1.3.2 Pixie point [2]

Figure 3: Pixie point

Pixie Point can guide its users and help them locate their lost valuables (such as keys,
wallets, passports and remote controls). It uses the Pixie app, which is only compatible
with iOS devices. It has a battery life of one year; the battery is not replaceable, so
the user has to buy a new unit.

1.3.3 TrackR [3]

Figure 4: TrackR
The device contains a lithium battery that must be changed by the user about once a
year. The device's current location is communicated over Bluetooth 4.0 to an iOS or
Android smartphone or tablet with the TrackR app installed and running. Devices report
the location of any and all TrackR devices, including those not owned or registered by
the user. If the app is not installed on a nearby Bluetooth-enabled device, the location
cannot be relayed.
1.3.4 Chipolo [4]

Figure 5: Chipolo

Chipolo works via a Bluetooth connection. It is most effective within a 30 ft (10 m)
range, although the Bluetooth range can extend up to 200 ft (60 m) when there are no
obstacles in its path. The user attaches the Chipolo to items they do not want to lose.
An associated app lets the user find the location of a lost item. It also has a built-in
speaker and a built-in battery.

1.4 Project Description


This project will make people's lives easier and save their valuable time at the moments
of the day when time is of the essence, while also fulfilling the purpose of a security
and surveillance system. The project is based on a mobile application, which makes it
user friendly and portable.

The idea for the project occurred to us when we observed that people tend to lose, or
are unable to find, everyday valuables such as keys (car, house, office or shop),
glasses and wallets. Another motivation came from the fact that installed security
cameras are not smart enough: they do not alert the user by themselves, so someone must
constantly watch the video feed.

Many applications exist to solve these issues, but they address them individually from
different sources. Our project is a complete package, so it can solve these problems
without human exertion while also being comparatively less expensive. It can also help
patients suffering from Alzheimer's disease or dementia, conditions that take a
significant toll on a patient's memory. In short, our project aids different people in
making their lives easier.
1.5 Purpose
The core objective of our project is to detect and recognize objects that hold value to
people. Almost every adult carries such valuables and cannot leave the house without
them without disrupting the whole day; these everyday items, such as keys, wallets and
glasses, can be substituted but not easily replaced. A further purpose of our project is
to enhance security and surveillance.

Using this system, two issues are solved simultaneously. People with busy and hectic
routines can maintain their schedule with the help of our project by always knowing
where to find their valuable property with less effort and time, and they can have a
smart security system inside their houses or offices.

1.6 Why Our Project Is Best


Our project stands out in the following ways:
• It is user friendly.
• It is easy to use.
• It runs on a portable mobile phone.
• It is fast and accurate.
• It includes real-time video feed analysis.

Chapter-2

System Analysis

2.1 Android Studio


Android Studio is Android's official Integrated Development Environment (IDE), used
to build high-quality apps for virtually every type of Android device. For the software
part we use Android Studio as our technical platform. It offers tools custom-tailored
for Android developers, including rich code editing, debugging, testing and profiling
tools [7].

2.1.1 Features of Android Studio


Android Studio offers a number of features that are helpful in developing
Android apps, including:
● A flexible Gradle-based build system.
● A fast and feature-rich emulator.
● Elimination of tedious, repetitive tasks.
● A unified environment in which one can develop for all Android devices.
● Instant Run, to push changes to the running app without building a new APK.
● Code templates and GitHub integration that help build common app features and
import sample code.
● Extensive testing tools and frameworks.
● Lint tools to catch performance, usability, version-compatibility and other
problems.
● C++ and NDK support.
● Built-in support for the Google Cloud Platform, making it easy to integrate Google
Cloud Messaging and App Engine.

2.1.2 Project Structure

Each project in Android Studio contains one or more modules with source code files and
resource files [7]. Types of modules include:

● Android app modules

● Library modules

● Google App Engine modules

Android Studio displays the project files in the Android project view, as shown in the
figure below. This view is organized by modules to provide quick access to the project's
key source files.

All build files are visible at the top level under Gradle Scripts, and each app module
contains the following folders:

● manifests: contains the AndroidManifest.xml file.

● java: contains the Java source code files, including JUnit test code.

● res: contains all non-code resources, such as bitmap images.

The view of the files can be customized to focus on specific aspects of app
development. For example, selecting the Problems view of the project displays links to
the source files containing any recognized coding and syntax errors, such as a missing
XML closing tag in a layout file.

Figure 6: Problem View Of The Files Of Project

2.1.3 The User Interface


The main window of Android Studio is made up of several logical areas, identified in
the figure below:
Figure 7: Main Window Of Android Studio

1. The toolbar lets us carry out a wide range of actions, including running the app and
launching Android tools.

2. The navigation bar helps us navigate through the project and open files for editing.
It provides a more compact view of the structure visible in the Project window.

3. The editor window is where we create and modify code. The editor changes depending on
the current file type; for example, when viewing a layout file, the editor shows the
Layout Editor.

4. The tool window bar runs around the outside of the IDE window and contains the
buttons that expand or collapse individual tool windows.

5. The tool windows give us access to specific tasks such as search, version control,
project management and more. We can expand and collapse them.

6. The status bar displays the status of the project and the IDE itself, as well as any
warnings or messages.

We can organize the main window to provide more screen space by hiding or moving tool
windows and toolbars, and we can use keyboard shortcuts to access most IDE features.

We can search across source code, databases, actions, elements of the user interface and
more at any time by pressing the Shift key twice or by clicking the magnifying glass in
the upper-right corner of the Android Studio window. This can be very useful when trying
to locate a particular IDE action that we have forgotten how to trigger.

2.1.4 Tool Windows


Instead of using preset perspectives, Android Studio follows the current context and
automatically brings up the relevant tool windows as we work. By default, the most
commonly used tool windows are pinned to the tool window bar at the edge of the
application window.

● To expand or collapse a tool window, click the tool's name in the tool window bar.
Tool windows can also be dragged, pinned, unpinned, attached and detached.

● To return to the default tool window layout, click Window and then Restore Default
Layout; the default layout can also be customized via Window and then Store Current
Layout.

● To show or hide the entire tool window bar, click the window icon in the bottom
left-hand corner of the Android Studio window.

● To locate a specific tool window, click the window icon and then select the tool
window from the menu.

2.1.5 Code Completion

Android Studio provides three types of code completion, which can be accessed through
keyboard shortcuts.
Table 1: Keyboard Shortcuts For Code Completion
We can also apply quick fixes and show intention actions by pressing Alt+Enter.

2.1.6 Style and Formatting


As we edit, Android Studio automatically applies formatting and styles as specified in
the code style settings. The code style settings can be customized per programming
language, including conventions for tabs and indents, spaces, wrapping, braces and
blank lines. To customize the code style settings, click File, then Settings, then
Editor, then Code Style (Android Studio > Preferences > Editor > Code Style on a Mac).

Although the IDE automatically applies formatting as we work, we can also explicitly
call the Reformat Code action by pressing Control+Alt+L (Opt+Command+L on a Mac), or
auto-indent all lines by pressing Control+Alt+I (Control+Option+I on a Mac).

Figure 8: Code Without Format


Figure 9: Code With Format

2.1.7 Gradle Build System


Android Studio uses Gradle as the foundation of its build system, with additional
Android-specific capabilities provided by the Android plugin for Gradle. This build
system runs as an integrated tool from the Android Studio menu and also independently
from the command line. We can use the features of the build system to:

● Customize, configure and extend the build process.

● Create multiple APKs with different features using the same project and modules.

● Reuse code and resources across source sets.

By employing the flexibility of Gradle, we can achieve all of this without modifying the
app's core source files. Android Studio build files are named build.gradle. They are
plain text files that use Groovy syntax to configure the build with the elements
provided by the Android plugin for Gradle. Each project has one top-level build file for
the whole project and separate module-level build files for each module. When importing
an existing project, Android Studio automatically generates the necessary build files.

2.1.8 Code Inspections


Whenever we compile the program, Android Studio automatically runs the configured Lint
checks and other IDE inspections that help us easily identify and correct problems with
the structural quality of the code.
The Lint tool checks the Android project source files for potential bugs and
optimization improvements covering correctness, security, performance, usability,
accessibility and internationalization.

Figure 10: Lint Inspection Results

In addition to Lint checks, Android Studio performs IntelliJ code inspections and
validates annotations to streamline the coding workflow.

2.2 Tools and System Description

2.2.1 Aspects
Our project has a software aspect.

2.2.1.1 Software Aspect


In software, we are developing an app that helps blind users with person, vehicle and
obstacle detection. To fulfill security requirements, the same application can perform
object, weapon and multiple-person detection. For people with memory-related diseases
such as Alzheimer's, the app allows them to locate the objects they might be looking for
through object detection.

2.2.2 Technology Platforms


We are using Android Studio as the platform for our project.
2.2.2.1 Android Studio
For the development of this application we are using Android Studio as our
programming platform, along with its vision and other libraries. Android Studio is
Android's official IDE. Its purpose is to accelerate development and help us build
high-quality apps for every Android device. It offers tools custom-tailored for Android
developers, including rich code editing, debugging, testing and profiling tools.

2.2.3 Techniques
We use a few techniques in the software part of our project within Android Studio.
2.2.3.1 Image processing
Image processing is a method of analyzing and manipulating digitized images, usually in
order to improve their quality. It typically treats images as two-dimensional signals
and applies standard signal-processing methods to them.
Image processing basically comprises the following three steps.
● Importing the image with an optical scanner or through digital photography.
● Analyzing and manipulating the image, which includes data compression, image
enhancement and spotting patterns that are not visible to the human eye, as in
satellite photographs.
● Producing the output, which can be an altered image or a report based on the image
analysis.

The purposes of image processing can be divided into five groups:


1. Visualization – observing objects that are not directly visible.
2. Image sharpening and restoration – creating a better image.
3. Image retrieval – searching for an image of interest.
4. Measurement of pattern – measuring the various objects present in an image.
5. Image recognition – distinguishing the objects in an image.

Analog and digital image processing are the two types of methods used for image
processing.
1. Analog or visual image processing techniques are used for hard copies such as
printouts and photographs. Image analysts apply various fundamentals of interpretation
when using these techniques; the processing is not limited to the area under study but
also depends on the knowledge of the analyst. Association is another important tool in
visual image processing. Analysts therefore combine personal knowledge and collateral
data when processing images.

2. Digital processing techniques manipulate digital images by means of computers. Raw
data from image sensors on satellite platforms contains deficiencies, and to overcome
such flaws and recover the original information, the data passes through several
processing phases. When using digital techniques, all types of data go through three
general phases: pre-processing, enhancement and display, and information extraction.

Figure 11: Flow Chart Of Image Processing
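As a brief illustration of these three stages, the following Python sketch imports an image, manipulates it (grayscale conversion, smoothing and edge detection) and writes the output. It assumes OpenCV is installed and that a captured frame named frame.jpg is available; the file names and filter parameters are illustrative choices, not values from our implementation.

import cv2

# 1. Importation: read a frame captured by the camera (hypothetical file name)
image = cv2.imread("frame.jpg")
if image is None:
    raise FileNotFoundError("frame.jpg could not be read")

# 2. Analysis and manipulation: grayscale conversion, noise reduction, edge detection
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)   # suppress sensor noise
edges = cv2.Canny(blurred, 50, 150)           # highlight object boundaries

# 3. Output: save the processed result for later stages (e.g. detection)
cv2.imwrite("frame_edges.jpg", edges)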

2.2.3.2 Template matching


Template matching is a technique in digital image processing that finds small parts of
an image (pixels or frames) which match a template image. There are two main
approaches: feature-based and template-based. If the template image has strong
features, a feature-based approach may be considered; this approach can prove more
useful when the match in the search image might be transformed in some way [8].

For templates without strong features, or when most of the template image constitutes
the matching image, a template-based approach may be effective. Since template-based
matching may require sampling a large number of points, the number of sampling points
can be reduced by lowering the resolution of the search and template images by the same
factor and performing the operation on the resulting downsampled images
(multiresolution, or pyramid, matching), by providing a search window of data points
within the search image so that the template does not have to examine every feasible
data point, or by a combination of both.
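A minimal OpenCV template-matching sketch in Python is shown below. The image file names and the acceptance threshold are placeholders chosen for illustration, not values from our implementation.

import cv2

# Search image (e.g. a camera frame) and the template of the object to locate.
# File names are placeholders for illustration only.
scene = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("keys_template.jpg", cv2.IMREAD_GRAYSCALE)
h, w = template.shape

# Slide the template over the scene and score each position.
result = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(result)

# Accept the best match only if its score clears a chosen threshold.
if max_val > 0.8:
    top_left = max_loc
    bottom_right = (top_left[0] + w, top_left[1] + h)
    print("Template found at", top_left, "score", round(max_val, 2))
else:
    print("No confident match in this frame")

For pyramid matching, the same call would simply be repeated on downscaled copies of both images before refining the best location at full resolution.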

2.2.4 Library
For our project, we are using OpenCV and TensorFlow. We have imported OpenCV into
Android Studio and are using its various libraries.

2.2.4.1 Open-CV
OpenCV (Open Source Computer Vision Library) is released under a BSD license, which
makes it free for both academic and commercial use. It has C++, Python and Java
interfaces and supports Linux, macOS, Windows, Android and iOS. OpenCV was designed for
computational efficiency with a strong focus on real-time applications. Written in
optimized C/C++, the library can take advantage of multi-core processing, and it can
exploit the hardware acceleration of the underlying heterogeneous compute platform.
OpenCV is an open-source computer vision and machine learning software library. It was
built to provide a common infrastructure for computer vision applications and to
accelerate the use of machine perception in commercial products. Being BSD-licensed,
OpenCV makes it easy for businesses to utilize and modify the code.
OpenCV leans mostly toward real-time vision applications and takes advantage of MMX and
SSE instructions where available. Full-featured CUDA and OpenCL interfaces are being
actively developed. There are over 500 algorithms and about 10 times as many functions
that compose or support those algorithms. OpenCV is written natively in C++ and has a
templated interface that works seamlessly with STL containers.
The library contains more than 2500 optimized algorithms, including a comprehensive set
of both classic and state-of-the-art computer vision and machine learning algorithms.
These algorithms can be used to detect and recognize faces, identify objects, classify
human actions in videos, track camera movements, track moving objects, extract 3D
models of objects, produce 3D point clouds from stereo cameras, stitch images together
to produce a high-resolution picture of an entire scene, find similar images in a
database, remove red eyes from images taken with flash, follow eye movements, and
recognize scenery and establish markers to overlay it with augmented reality. OpenCV has
a user community of more than 47 thousand people and an estimated number of downloads
exceeding 14 million. The library is used extensively by companies, research groups and
governmental bodies [9].
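As one small, concrete example of the library in use, the sketch below thresholds a frame in HSV space in the manner of a colour-detection feature. The input file name and the HSV range used for "blue" are illustrative assumptions; the real feature would hold one tuned range per colour it reports to the user.

import cv2
import numpy as np

frame = cv2.imread("frame.jpg")            # placeholder input frame
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# Illustrative HSV range for a blue-ish colour.
lower = np.array([100, 150, 50])
upper = np.array([130, 255, 255])

mask = cv2.inRange(hsv, lower, upper)      # white where the colour matches
coverage = cv2.countNonZero(mask) / mask.size

if coverage > 0.05:                        # colour covers at least 5 % of the frame
    print("Blue object detected")
else:
    print("No blue object in this frame")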

2.2.4.2 TensorFlow
TensorFlow is Google's second-generation machine learning system (version 1.0 was
released in 2017). It can run on multiple CPUs and GPUs, with optional CUDA extensions
for general-purpose computation. It is an open-source library that uses data-flow
graphs to perform operations. It is compatible with Linux, macOS, Windows and mobile
computing platforms such as Android. The architecture of TensorFlow is flexible, so it
can easily be deployed on multiple platforms.
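The sketch below illustrates how a trained detection model can be loaded and run with the TensorFlow 1.x Python API. The frozen-graph path and the tensor names follow the conventions of models exported with the TensorFlow Object Detection API; they are assumptions for illustration, not details of our specific model.

import numpy as np
import tensorflow as tf
import cv2

# Path to a frozen graph exported with the TensorFlow Object Detection API
# (placeholder path; the real model is the one trained on the IMS objects).
GRAPH_PATH = "frozen_inference_graph.pb"

graph = tf.Graph()
with graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(GRAPH_PATH, "rb") as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name="")

image = cv2.imread("frame.jpg")                          # placeholder frame
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)           # model expects RGB
image_expanded = np.expand_dims(image, axis=0)           # batch of one

with tf.Session(graph=graph) as sess:
    boxes, scores, classes = sess.run(
        ["detection_boxes:0", "detection_scores:0", "detection_classes:0"],
        feed_dict={"image_tensor:0": image_expanded})

# Keep only confident detections.
for box, score, cls in zip(boxes[0], scores[0], classes[0]):
    if score > 0.6:
        print("class", int(cls), "score", round(float(score), 2), "box", box)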

2.2.4.3 LabelImg

LabelImg is a graphical image annotation tool. It is used to manually draw bounding
boxes around the desired objects of interest in an image. The tool saves annotations for
JPEG/PNG images as XML files in PASCAL VOC format, which are then further converted
from XML to CSV.
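A minimal sketch of that XML-to-CSV conversion step is given below, assuming the PASCAL VOC files produced by LabelImg are collected in one folder; the folder name and the output file name are placeholders.

import csv
import glob
import xml.etree.ElementTree as ET

rows = []
for xml_file in glob.glob("annotations/*.xml"):          # placeholder folder
    root = ET.parse(xml_file).getroot()
    filename = root.find("filename").text
    for obj in root.findall("object"):
        name = obj.find("name").text
        box = obj.find("bndbox")
        rows.append([
            filename, name,
            int(box.find("xmin").text), int(box.find("ymin").text),
            int(box.find("xmax").text), int(box.find("ymax").text),
        ])

with open("labels.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["filename", "class", "xmin", "ymin", "xmax", "ymax"])
    writer.writerows(rows)

The resulting CSV file is the format typically used to generate training records for the detection model.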
2.2.5 Hardware

One of the best features of the IMS is that its hardware consists of just two major
components:

1) Raspberry Pi 3B (processor)

2) Camera

The IMS is compatible with any type of camera, but we have used the Pi Camera; the
significant reason is that the processor we are using is the Raspberry Pi 3, so the Pi
Camera gives us the best frames per second.

2.2.5.1 Raspberry pi 3B

The Raspberry Pi is essentially a miniature computer that serves as a processor. It has
all the essential features present in a computer, such as RAM, storage, an Ethernet
port, Wi-Fi, an SD card slot and a power connector. The Raspberry Pi is much slower
than a desktop or laptop processor, but it is still preferred because of its low cost
and small size. It is widely used to learn different programming skills and to implement
a variety of hardware projects.

Figure 13: Raspberry Pi 3


2.2.5.2 Generations of Raspberry pi

So far, three generations of the Raspberry Pi have been released: Raspberry Pi 1, 2 and
3. The most frequently used model nowadays is the Raspberry Pi 3, which is also the
model we used for our project.

2.2.5.2.1 Generation 1

The first generation of the Raspberry Pi was released in 2012. It is the simplest of all
and has the lowest cost. It includes the models listed below:

● Model A

● Model B

● Model B plus

Model A

This model is the simplest one, with the fewest features. It has a single USB port and
no Ethernet port. It provides 26 general-purpose input/output (GPIO) pins in total and
256 MB of RAM. It was designed for projects related to robotics and embedded systems.

Model B

This model is slightly more capable than Model A, with some additional features. It has
an extra USB port, making two USB ports in total, and a 10/100 Ethernet port. The number
of GPIO pins is the same.

Model B plus

This model provides 40 GPIO pins, of which the first 26 work in the same way as in the
previously discussed models. It also increases the number of USB ports to four.

2.2.5.2.2 Generation 2

The second generation of the Pi was released in 2015, three years after the release of
the Raspberry Pi 1. Only one model is available, Model B, which is explained below.
Model B

This model has a number of extra features compared with the Raspberry Pi 1. The number
of general-purpose input/output pins provided is the same, i.e. 40. A separate slot for
the camera is provided as the CSI port, and a screen display connector is also provided
as the DSI port, so a camera can be connected to the Pi quickly. The RAM has been
increased to 1 GB, and the board keeps the 10/100 Ethernet port. It provides four USB
ports, two more than the original Model B.

2.2.5.2.3 Generation 3

The third-generation Raspberry Pi does not differ greatly from the previous two
generations. Physically the boards look much the same, but the newer models carry
additional features. The third generation is broadly similar to the second, but the
features listed below make it distinct:

● 3.5 mm audio/video jack

● Full-size HDMI port for audio and video

● Quad-core 1.2 GHz central processing unit (CPU)

● Wi-Fi and Bluetooth connectivity

● A more powerful central processing unit

2.2.5.3 Interfacing of Raspberry pi with pi-camera

To start working with the Raspberry Pi 3, we need to supply it with 5 V of power. After
connecting it to a display, the Pi camera has to be interfaced: the Raspberry Pi camera
module is installed by inserting its ribbon cable into the camera connector on the
board. With the camera connected, we obtain a live feed from the Pi camera, which is
shown on the display. Up to four adapters can be connected to obtain multiple camera
feeds, which makes the product more useful.
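As a rough illustration, the following Python sketch grabs frames from the Pi camera using the picamera library. The resolution, frame rate and single-frame loop are illustrative choices rather than our exact configuration; in the full system each frame would be passed to the detection pipeline or streamed to the server.

import io
import time
from picamera import PiCamera

camera = PiCamera()
camera.resolution = (640, 480)
camera.framerate = 24
time.sleep(2)                       # let the sensor settle after power-up

stream = io.BytesIO()
# Grab JPEG frames continuously from the live feed.
for _ in camera.capture_continuous(stream, format="jpeg", use_video_port=True):
    frame_bytes = stream.getvalue()
    print("captured frame of", len(frame_bytes), "bytes")
    stream.seek(0)
    stream.truncate()
    break                           # single frame is enough for this illustration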

Chapter-3

Features

3.1 Features

The IMS has different features for its medical and security portions. The medical
portion covers blind, Alzheimer's and cognitive-disorder users, while the security
portion includes person tracking and weapon detection. Each of the features is described
below.

3.1.1 Medical Features

Figure 14: Flow Chart Of Medical Features (Disability → Alzheimer, Blind, Cognitive Disorder)

3.1.1.1 Blind Disability

The blind section includes several features, such as human detection, vehicle detection
and obstacle detection. A flow chart of the features of the blind section is given in
the figure below.

Figure 15: Flow Chart Of Blind Disability (Blind → human detection, object detection, vehicle detection)

Human Detection helps detect the presence of humans by detecting their faces. Vehicle
Detection detects incoming vehicles. Obstacle and object detection help the blind find
their way around a premises; furthermore, they can also locate an item they might need.
3.1.1.2 Alzheimer

People with memory issues will also benefit from our project, as the IMS locates the
objects they might be looking for.

Figure 16: Flow Chart Of Alzheimer (Alzheimer → object detection)

3.1.2 Security and Surveillance (S&S)

The IMS is capable of providing state-of-the-art security and surveillance. We have
trained the processor attached to the camera to detect multiple humans simultaneously
and to recognize weapons such as knives and guns. There is no need for a person to
monitor the cameras constantly; the system alerts the user if any weapon or unknown
person is detected within the premises.

Figure 17: Flow Chart Of Security And Surveillance (human detection, object detection)
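A minimal sketch of the alerting logic described above is given here. The class labels, score threshold and list-of-detections format are assumptions for illustration and do not reflect our trained model's exact labels.

# Hypothetical post-processing of detector output for the S&S portion:
# raise an alert when a weapon class or more than one person is detected.
WEAPON_CLASSES = {"knife", "gun"}          # illustrative class names

def check_frame(detections, score_threshold=0.6):
    """detections: list of (label, score) pairs from the object detector."""
    confident = [(label, score) for label, score in detections
                 if score >= score_threshold]
    weapons = [label for label, _ in confident if label in WEAPON_CLASSES]
    persons = [label for label, _ in confident if label == "person"]

    alerts = []
    if weapons:
        alerts.append("Weapon detected: " + ", ".join(weapons))
    if len(persons) > 1:
        alerts.append(str(len(persons)) + " people detected on the premises")
    return alerts

# Example frame: one person holding a knife.
print(check_frame([("person", 0.91), ("knife", 0.84)]))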


Chapter-4

Outcomes of IMS

4.1 Human Detection


A blind user can be guided about humans through the IMS. After opening the application,
the user simply holds the mobile phone with its camera facing the front view. The camera
detects a human face if one is present in front of the user, and the feature then tells
the user through speech that a human has been detected. Using this feature, a blind user
can easily know whether there are people in the surroundings or whether he or she is
alone [12]. Results of human detection are shown below.

Figure 18: Human Detection Result
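A simplified stand-in for this detection step is sketched below, using OpenCV's bundled frontal-face Haar cascade on a single frame. In the application the result is spoken aloud to the user; the print statement here merely represents that announcement, and the input file name is a placeholder.

import cv2

# OpenCV ships this cascade file; cv2.data.haarcascades gives its folder.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_cascade = cv2.CascadeClassifier(cascade_path)

frame = cv2.imread("frame.jpg")                    # placeholder camera frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

if len(faces) > 0:
    # In the app this message is converted to speech for the blind user.
    print("Human detected:", len(faces), "face(s) in view")
else:
    print("No human in view")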

4.2 Vehicle Detection


A blind user can likewise be guided about vehicles through the IMS. After opening the
application, the user holds the mobile phone with its camera facing the front view. The
camera detects a vehicle if one is present in front of the user, and the feature then
announces through speech that a vehicle has been detected [11]. Results of vehicle
detection are shown below.
Figure 19: Vehicle Detection Result

4.3 Object Guidance


A blind user can also be guided about obstacles through the IMS. After opening the
application, the user holds the mobile phone with its camera facing the front view. The
camera detects obstacles present in the path and front view, and the feature then tells
the user through speech which obstacle has been detected. Results of object guidance are
shown below.
Figure 20: Object Guidance Result
Chapter-5

Conclusion
5.1 Conclusion
To help people, we have developed a product named IMS, comprising software and hardware,
whose core is object detection: image processing is applied to different objects so that
they can be detected. Alongside object detection, future versions of the project will
also include recognition of facial features. The IMS serves different purposes such as
security, helping disabled people and reducing frustration among people. The IMS
consists of a Raspberry Pi and a camera module: the Raspberry Pi works as the server and
the camera module works as the image sensor. Using this system, disabled people can more
easily live their lives like everyone else; they can communicate, navigate, see, listen,
talk, walk and read. Blind users can easily receive guidance to avoid obstacles and
hurdles, as they are notified of what is coming their way.

5.2 Novelty
What makes this idea unique is that such work has not previously been done in Pakistan
using image processing. Similar products are available on the market, but they use RFID
and Bluetooth technology; we implement the same goal through object detection.

5.3 Initial Defined Risk


Every project carries a few risks; the sooner they are recognized, the sooner they can
be assessed and measured. For this reason, risks should be identified in the definition
phase of the project. Our project involves the following risks:
● Processor failure
● Mobile failure (the application runs on an Android phone)
● Problems with the internet connection
● Disturbance in the electricity supply
5.4 Commercial Aspects
Commercially, the project has considerable value because in the market, and especially
the Pakistani market, there is nothing available similar to our project that aids blind
users and Alzheimer's patients at the same time. It is also a cost-effective project.
Separate products exist on the market, but there is no single product offering a
complete package for these different purposes; our project provides such a complete
package to assist blind users and Alzheimer's patients. It can also be commercially
deployed in homes, offices, industries and hospitals for security purposes.

5.5 Limitations
There are a few limitations of our project:
● The IMS only detects objects when the system has a stable connection.
● Detection degrades under varying lighting conditions.
● Detection is possible only when the objects are clearly visible to the attached
camera.

5.6 Future Work


Our future work concerns the hardware aspect, which includes the design of a device for
path and map generation. The project will use a Raspberry Pi. Areas of specific focus
include image processing, navigation and integrated circuits. We will also design the
GUI for this part using Android. The goal of this work is to establish a map and path
for blind users indoors. The system will use a Microsoft Kinect sensor, which generates
a 3-D point cloud used to create the path for the blind person and to store the map in
the database. A flow chart of the hardware work is shown below.
Figure 21: Flow Chart Of Hardware Work

The Kinect camera will capture pictures of the environment and obstacles and then send
the scanned images and videos, as a point cloud, to the Raspberry Pi. ROS (Robot
Operating System), installed on the Pi, will create the map of the indoor environment
and send it to the server. The server will then send the database to the Android phone,
and the real-time map and path will be displayed on the mobile phone through the Android
application.
Appendix
