Shaina Laraib
CIIT/SP15-BET-091/ISB
M. Naufal Mansoor
CIIT/SP15-BET-078/ISB
In Partial Fulfillment
of the Requirement for the Degree of
Signature:______________
Name: Salsabeel Kiani
Signature:______________
Name: Shaina Laraib
Signature:______________
Name: M. Naufal Mansoor
Supervised by
_____________________
Supervisor
Dr. Amir Rashid Chaudhary,
Assistant Professor
______________________
Internal Examiner-1
Dr. Moazzam Tiwana
Assistant Professor

______________________
Internal Examiner-2
Adeel Qureshi
Lecturer
______________________
External Examiner
Dr. Saleem Aslam
Associate Professor
_____________________
Head
Department of Electrical Engineering
Dedication
In the name of Allah, the Most Gracious and the Most Merciful. Firstly, we would like to
express our profound appreciation to our supervisor and to the people around us during the work
on our bachelor's thesis. Secondly, we would like to show our gratitude to our parents for
their encouraging attitude throughout our bachelor's program and for their moral
as well as financial support. We are, and will remain, grateful to them for their love.
Acknowledgements
We are sincerely grateful to all our teachers, especially our supervisor Dr. Amir Rashid
Chaudhary and our consultant Sir Saleh Usman. Without their assistance, it would have been almost
impossible to complete our project. We are very grateful for their boundless guidance,
direction, inspiration, and motivation throughout the period of this project.
Salsabeel Kiani
Shaina Laraib
M. Naufal Mansoor
Table of Contents
1. Introduction
   1.1 Objectives
   1.2 Introduction
   1.5 Purpose
3. Software Features
   3.1 Software Features
      3.1.1 Blind Disability
      3.1.2 Deaf Disability
      3.1.3 Dumb Disability
      3.1.4 Gait Disability
         3.1.4.1 Diplegic Gait
         3.1.4.2 Hemiplegic Gait
         3.1.4.3 Steppage Gait
         3.1.4.4 Waddling Gait
         3.1.4.5 Stomping Gait
Appendix-A
Bibliography
List of Acronyms
Abstract
Losing your valuables is easy, but finding them can feel like a treasure hunt.
The main aim of our project is to assist people who lose their possessions
frequently, to aid mentally handicapped patients, and simultaneously to provide security
and surveillance. Through this project, our expectation is to find basic valuables (such as
house, office, and car keys, glasses, wallets, etc.) without human exertion. Our system searches
scenes from the live stream of a camera fixed in a room to find lost valuables.
IMS (Intelligent Memory System) is capable of maintaining, manipulating, and storing
data structures captured from the live feed. It then sends the relevant
information to the application's database, which processes it further and provides the last
location of the object the user is trying to find. IMS augments the memory
of mentally handicapped patients, such as those suffering from
Alzheimer's disease or dementia, reducing the time and physical effort spent searching.
Chapter-1
Introduction
1.1 Objectives
Major objectives of our project are:
● To save time and energy
● To avoid frustration
● To help Alzheimer's patients
● To secure individuals' valuables
● To provide better surveillance and security
1.2 Introduction
Over the years, the popularity of mobile phones has increased enormously. Almost
every other person has a mobile phone in his or her possession, and dependency on
mobile phones has therefore also increased dramatically; this gave the
incentive for an Intelligent Memory System (IMS). In this day and age, people tend to be
in a hurry and have a habit of carrying more possessions than they care to think about,
which results in those possessions being misplaced and/or stolen. The Intelligent Memory
System is unique compared to other products that serve a similar purpose:
the quantity of items that can be located is significantly higher, and it offers a longer
battery life and a longer range of connectivity with the user's Android device. Through the
Intelligent Memory System, we aim to make people's lives easier and simpler in some
aspects by taking advantage of the relationship between a person and their machine.
The Intelligent Memory System is a combination of sophisticated hardware and software.
The main objective of this product is to reduce human exertion when it
comes to finding missing items, to increase security and surveillance, and to aid
mentally handicapped patients. The video camera interfaced with the Raspberry Pi captures
images for the algorithm, which is written in Python and performs video processing;
finally, the IMS Android application provides the location in the simplest manner to
avoid any confusion. The working process of the system can be divided into two
primary tasks along a standard procedure: tracking and registering. Tracking is usually
performed by detecting a known pattern stored in a database/memory.
Figure 1: GPS location
Figure 1 shows that a mobile phone is used to obtain the last location of the lost
object.
The technique most commonly used to achieve successful tracking is based on an
optical device (a camera or mobile phone) and is achieved through image processing. With
smartphones being so common nowadays, almost everyone will be able to use IMS, which
should lead to wide adoption of this project.
1.3.1 Tile [1]
Figure 2: Tile
Tile is an American product that allows its users to locate their lost items using a
Bluetooth 4.0 connection. The device comes with a mobile application that has to be
installed on Android or iOS. The device has a built-in speaker, which plays an important
role in tracking valuables, and has a working range of approximately 150 feet. The
battery life of the device is one year, and the battery is neither replaceable nor rechargeable.
1.3.2 Pixie point [2]
Pixie Point has the capability to guide its users and help them locate their lost
valuables (such as keys, wallets, passports, remote controls, etc.). It uses the Pixie app,
which is only compatible with iOS devices. It has a battery life of one year, and the
battery is not replaceable, forcing the user to buy a new unit.
1.3.3 TrackR [3]
Figure 4: TrackR
The device contains a lithium battery that has to be changed about once a year by the user.
The device's current location is communicated through Bluetooth 4.0
to an iOS or Android smartphone or tablet with the TrackR app installed and running.
Devices report the location of any and all TrackR devices, including those that are not
owned or registered by the user. If the app is not installed on the nearby Bluetooth-
enabled device, the location cannot be relayed.
1.3.4 Chipolo [4]
Figure 5: Chipolo
Chipolo works via a Bluetooth connection. It is most effective at a range of 30 ft (10
m), though its Bluetooth range can extend up to 200 ft (60 m) when there are no
obstacles in its path. The user attaches the Chipolo to the items they don't want to lose. It
has an associated app that the user accesses to find the location of the lost item. It also has
a built-in speaker and a built-in battery.
Chapter-2
System Analysis
● It provides a unified environment in which we can develop for all Android
devices.
● It provides code templates and GitHub integration that guide us in building
common app features and help us import sample code.
● It provides Lint tools to catch performance, usability, version-compatibility, and
other problems.
● It has built-in support for Google Cloud Platform, making it easy to integrate
Google Cloud Messaging and App Engine.
In Android Studio, each project contains one or more modules with source code files
and resource files [7]. A few of these modules are:
Android Studio displays our project files in the Android project view, which
can be seen in the figure below. This view is organized by modules to provide quick
access to our project's key source files.
All build files are visible at the top level under Gradle Scripts.
Each app module contains the following folders:
● java: Contains the Java source code files, including JUnit test code.
1. The toolbar lets us carry out a diverse range of actions, including running the
app and launching Android tools.
2. The navigation bar helps us navigate through the project and open files
for editing. It provides a more compact view of the structure that is visible in
the Project window.
3. The editor window is where we create and modify code. The editor can
change depending on the current file type. For example, when we view a layout file, the
editor displays the Layout Editor.
4. The tool window bar runs around the outside of the IDE window and contains the
buttons that allow us to expand or collapse individual tool windows.
5. The tool windows give us access to specific tasks such as search, version control,
project management, and more. We can expand and collapse them.
6. The status bar displays the status of our project and of the IDE itself,
as well as any warnings or messages.
We can organize the main window to give ourselves more screen space by hiding
or moving tool windows and toolbars. We can also use keyboard
shortcuts to access most IDE features.
At any time, we can search across our source code, databases, actions, elements of the
user interface, and more, by pressing the Shift key twice or by clicking the
magnifying glass in the upper right-hand corner of the Android
Studio window. This can be very useful, for example, when we are trying to locate a
particular IDE action that we have forgotten how to trigger.
● To expand or collapse a tool window, we can click on the tool's name in the tool
window bar. We can also drag, pin, unpin, attach, and detach tool
windows.
● To return to the default tool window layout, we can click
Window > Restore Default Layout. We can also customize our
default layout by clicking Window > Store Current Layout.
● To show or hide the entire tool window bar, we can click the window icon at the
bottom left-hand corner of the Android Studio window.
● To locate a specific tool window, we can hover over the window icon and select
the tool window from the menu.
Three types of code completion are available in Android Studio, which we can access
using keyboard shortcuts.
Table 1: Keyboard Shortcuts For Code Completion
We can also apply quick fixes and show intention actions by pressing Alt+Enter.
The IDE automatically applies formatting as we work, and we can explicitly call the
Reformat Code action by pressing Control+Alt+L (Opt+Command+L on a Mac), or
auto-indent lines by pressing Control+Alt+I (Control+Option+I on a Mac).
We can accomplish all of this without modifying our app's core source files, thanks to
the flexibility of Gradle. Build files in Android Studio are named build.gradle. They
are plain text files that use Groovy syntax to configure the build with the elements
provided by the Android plugin for Gradle. Each project has one top-level
build file for the whole project and separate module-level build files for each
module. When we import an existing project, Android Studio
automatically generates the necessary build files.
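As an illustration only (the IMS app's actual build files are not shown in this document, so every value below is a placeholder), a minimal module-level build.gradle configured by the Android plugin for Gradle might look like this:

```groovy
// Hypothetical module-level build.gradle sketch; the package name
// and version numbers are examples, not the project's real values.
apply plugin: 'com.android.application'

android {
    compileSdkVersion 27

    defaultConfig {
        applicationId "com.example.ims"   // illustrative application id
        minSdkVersion 19
        targetSdkVersion 27
        versionCode 1
        versionName "1.0"
    }
}

dependencies {
    // Pull in any local .jar libraries placed in the module's libs folder.
    implementation fileTree(dir: 'libs', include: ['*.jar'])
}
```
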
In addition to Lint checks, Android Studio can run IntelliJ code inspections and
validate annotations to streamline our coding workflow.
2.2.1 Aspects
Our project has software aspects.
2.2.3 Techniques
We are using a few techniques for the software part of our project in Android Studio.
2.2.3.1 Image processing
It is a method of analyzing and manipulating digitized images, usually in order to
improve their quality. Image processing typically treats images as two-dimensional
signals and applies standard signal-processing methods to them.
Image processing basically comprises the following three steps:
● Importing the image with an optical scanner or through digital photography.
● Analyzing and manipulating the image, which includes data compression,
image enhancement, and spotting patterns that are not visible to human eyes, as in
satellite photographs.
● The last stage of this technique is the output, in which the result can be an altered
image or a report based on the image analysis.
Analog and digital image processing are the two types of methods used for
image processing.
1. Analog or visual image-processing techniques are applied to hard copies such as
printouts and photographs. Image analysts use various fundamentals of
interpretation while applying these techniques. The image processing is not limited to
the area under study but also relies on the knowledge of the analyst. Association is another
important tool in image processing through visual techniques. Thus,
analysts apply a combination of personal knowledge and collateral data to image
processing.
2. Digital processing techniques allow the manipulation of digital images by means
of computers. Raw data from the image sensors of satellite platforms contains deficiencies;
to overcome these faults and recover the original information, the data passes through
several phases of processing. While using the digital technique, all types of data undergo
three general phases: pre-processing; enhancement and display; and information extraction.
For templates without strong features, or when the bulk of the template image
constitutes the matching image, a template-based method may be effective. Since
template-based matching may require sampling a large number of points, it is
possible to reduce the number of sampling points by downscaling the search
and template images by the same factor and performing the
matching on the resulting reduced images (multiresolution, or pyramid, matching), by
providing a search window of data points within the search image so that the template does
not have to examine every feasible data point, or by a combination of both.
2.2.4 Library
For our project, we are using OpenCV and TensorFlow. We have
imported OpenCV into Android Studio and are using its various libraries.
2.2.4.1 Open-CV
OpenCV stands for Open Source Computer Vision Library. It is released
under a BSD license, which makes it free for both academic and
commercial use. It has C++, Python, and Java interfaces and supports Linux, macOS,
Windows, Android, and iOS. OpenCV was designed for computational efficiency, with a
strong emphasis on real-time applications. Written in optimized C/C++, the library can
take advantage of multi-core processing, and it can exploit the hardware acceleration of
the underlying heterogeneous compute platform. OpenCV was built to provide a common
infrastructure for computer vision applications and to accelerate the use of machine
perception in commercial products. Being BSD-licensed, OpenCV makes it easy for
businesses to utilize and modify the code.
OpenCV focuses mainly on real-time vision applications and takes advantage of MMX
and SSE instructions where available. Full-featured OpenCL and CUDA interfaces are
being actively developed. There are over 500 algorithms and about 10 times as
many functions that compose or support those algorithms. OpenCV is written
natively in C++ and has a templated interface that works seamlessly with STL
containers.
The library comprises more than 2500 optimized algorithms, including a comprehensive
set of both classic and state-of-the-art computer vision and machine learning algorithms.
These algorithms can be used to detect and recognize faces, identify objects, classify
human actions in videos, track camera movements, track moving objects, extract 3D
models of objects, produce 3D point clouds from stereo cameras, stitch images together
to produce a high-resolution image of an entire scene, find similar images in an image
database, remove red eyes from images taken with flash, follow eye movements,
recognize scenery, and establish markers to overlay it with augmented reality. OpenCV
has a user community of more than 47 thousand people and an estimated number of
downloads exceeding 14 million. The library is used extensively by companies,
research groups, and governmental bodies [9].
2.2.4.2 TensorFlow
TensorFlow is Google's second-generation machine learning system; its 1.0 release
came in 2017. It is capable of running on multiple CPUs and GPUs, with optional CUDA
extensions for general-purpose computation. It is an open-source library that uses data-flow
graphs to perform operations. It is compatible with Linux, macOS, Windows, and mobile
computing platforms such as Android. The architecture of TensorFlow is flexible,
so it can easily be deployed on multiple platforms.
2.2.4.3 LabelImg
LabelImg is a graphical image annotation tool used to label training images for object
detection.
2.2.5 Hardware
The hardware of the IMS product comprises two major components:
1) Raspberry Pi 3B (processor)
2) Camera
The IMS is compatible with any type of camera, but we have used the
Pi Camera; the main reason is that our processor is the Raspberry Pi 3, with which the
Pi Camera gives the best frames per second.
2.2.5.1 Raspberry pi 3B
To date, three generations of the Raspberry Pi have been released: Raspberry Pi 1,
2, and 3. The most frequently used model nowadays is the Raspberry Pi 3, which is also
the model we used for our project.
2.2.5.2.1 Generation 1
The first generation of the Raspberry Pi was released in 2012. It is the simplest of all and
has a low cost. It includes the different models listed below:
● Model A
● Model B
● Model B Plus
Model A
This model is the simplest one, with very few features. It consists of a single USB port
and an external port. It has 26 general-purpose input/output (GPIO) pins in total and
256 MB of random-access memory. It was designed for projects
related to robotics and embedded systems.
Model B
This model is a bit more complex than Model A, as it has some additional
features. It has an additional USB port, making two USB ports in total, and 100
external ports. The number of GPIO pins is the same.
Model B Plus
This model consists of 40 GPIO pins, of which the first 26 work the same way as in
the previously discussed models. The number of USB ports is the same as in Model B, i.e., two.
2.2.5.2.2 Generation 2
The second generation of the Pi was released in 2015, three years after the release of
the Raspberry Pi 1. Only one model is available, Model B, explained below.
Model B
This model has a number of extra features compared to the Raspberry Pi 1. The
number of general-purpose input/output pins provided is 40, the same as before. In this
model a separate slot for the camera is provided as the CSI port; in addition, a screen
display port is provided as the DSI port. The RAM has been increased to 1 GB. There are
100 external ports, of which 10 can be used to connect the Pi to a camera quickly.
The number of USB ports has also been increased by two.
2.2.5.2.3 Generation 3
The third-generation model of the Raspberry Pi does not differ much from
the previous two generations. Physically they all look the same, but in terms of
features the newer models offer more. The third generation is largely the
same as the second, but the features stated below make it distinct:
Chapter-3
Features
3.1 Features
IMS has different features for its medical and security portions, i.e., the blind, Alzheimer's,
and cognitive-disorder portions. The security portion includes person tracking and weapon
detection. Each of the features is described below.
3.1.1 Blind Disability
The blind section includes many features, such as human detection, vehicle
detection, and obstacle detection. A flow chart of all the features of the blind section is
given in the figure below.
Figure: Flow chart of the blind section (blind → human detection, object detection, vehicle detection)
Human Detection helps to detect the presence of humans by detecting their faces.
Vehicle Detection detects incoming vehicles. Obstacle and object detection help the
blind find their way around the premises; furthermore, they can also find an item they
might need.
3.1.1.2 Alzheimer
People with memory issues will also benefit from our project, using the IMS to locate the
objects they might be looking for.
Figure: Flow chart of the Alzheimer's section (Alzheimer's → object detection)
The IMS is capable of providing state-of-the-art security and surveillance. We have
trained the processor attached to the camera to detect multiple humans
simultaneously and to recognize weapons such as knives and guns. There is no need for a
person to constantly monitor the cameras; the system will alert the user if any weapon or
unknown person is detected within the premises.
Figure: Flow chart of the security and surveillance section (security and surveillance → human detection, object detection)
Outcomes of IMS
Conclusion
5.1 Conclusion
To help people, we have developed a product named IMS, consisting of software and
hardware, whose core is object detection: image processing is applied to
different objects so they can be detected. In future, along with object detection, our
project will also include recognition of facial features. IMS serves different purposes,
such as providing security, helping disabled people, and reducing
frustration among people. IMS consists of a Raspberry Pi and a camera module;
the Raspberry Pi works as the server and the camera module works as the image detector.
Using this system, disabled people can more easily live their lives like everyone else: they
can communicate, navigate, see, listen, talk, walk, and read. Blind users can easily
receive guidance to avoid obstacles and hurdles, as they will be notified of what is coming
their way.
5.2 Novelty
The uniqueness of this idea is that this work has not been done before in Pakistan using
image processing. Similar products are available in the market, but they use RFID and
Bluetooth technology; we are implementing ours through object detection.
5.5 Limitations
There are a few limitations of our project:
● IMS only detects objects when the system has a stable connection.
● Due to variations in lighting, detection may not be accurate.
● Detection is possible only when the objects are clearly visible to the attached
camera.
The Kinect camera will capture pictures of the environment and obstacles and then send
the scanned images and videos as a point cloud to the Raspberry Pi. ROS (Robot Operating
System), installed on the Pi, will create a map of the indoor environment and send it
to the server. The server will then send the database to the Android phone; thus, the
real-time map and path will be displayed on the mobile phone through the Android application.
Appendix