
1. INTRODUCTION
Heart Rate
Heart rate is the speed of the heartbeat measured by the number of
contractions of the heart per minute (bpm). The heart rate can vary according to the
body's physical needs, including the need to absorb oxygen and excrete carbon
dioxide. It is usually equal to or close to the pulse measured at any peripheral point.
Activities that can provoke change include physical exercise, sleep, anxiety, stress,
illness, ingestion of food, and drugs.

Fig 1.1 Human heart


Heart rate, also known as pulse, is the number of times a person's heart beats
per minute. A normal heart rate depends on the individual, age, body size, heart
conditions, whether the person is sitting or moving, medication use and even air

temperature. Even emotions can have an impact on heart rate; for example,
getting excited or scared can increase the heart rate.

Table 1.1: Major factors affecting heart rate and force of contraction

Factor                     Effect
Cardioaccelerator nerves   Release of norepinephrine
Proprioreceptors           Increased rates of firing during exercise
Chemoreceptors             Decreased levels of O2; increased levels of H+, CO2, and lactic acid
Baroreceptors              Decreased rates of firing, indicating falling blood volume/pressure
Limbic system              Anticipation of physical exercise or strong emotions
Catecholamines             Increased epinephrine and norepinephrine
Thyroid hormones           Variation in T3 and T4
Calcium                    Variation in Ca2+
Potassium                  Variation in K+
Sodium                     Variation in Na+
Body temperature           Increased body temperature
Nicotine and caffeine      Stimulants, increasing heart rate

Measurement

1. Manual measurement
Heart rate is measured by finding the pulse of the heart. This pulse rate can
be found at any point on the body where the artery's pulsation is transmitted to the
surface by pressing it with the index and middle fingers; often it is compressed
against an underlying structure such as bone. A good area is on the neck, under the
corner of the jaw.
The radial artery is the easiest to use to check the heart rate. However, in
emergency situations the most reliable arteries to measure heart rate are carotid
arteries.
Possible points for measuring the heart rate are:
1. The ventral aspect of the wrist on the side of the thumb (radial artery).
2. The ulnar artery.
3. The neck (carotid artery).
4. The inside of the elbow, or under the biceps muscle (brachial artery).
5. The groin (femoral artery).
6. Behind the medial malleolus on the feet (posterior tibial artery).

7. Behind the knee (popliteal artery).


8. Over the abdomen (abdominal aorta).

9. The chest (apex of the heart), which can be felt with one's hand or fingers. It
is also possible to auscultate the heart using a stethoscope.
10. The temple (superficial temporal artery).
11. The lateral edge of the mandible (facial artery).
12. The side of the head near the ear (posterior auricular artery).

2. Electronic measurement
In obstetrics, heart rate can be measured by ultrasonography; however, a more
precise method of determining heart rate involves the use of an electrocardiograph,
or ECG. An ECG generates a pattern based on electrical activity of the heart,
which closely follows heart function. Continuous ECG monitoring is routinely
done in many clinical settings, especially in critical care medicine. On the ECG,
instantaneous heart rate is calculated using the R wave-to-R wave (RR) interval
and multiplying/dividing in order to derive heart rate in heartbeats/min.
Multiple methods exist:
HR = 1,500/(RR interval in millimeters)
HR = 60/(RR interval in seconds)
HR = 300/number of "large" squares between successive R waves.
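As a minimal MATLAB illustration (the 0.80 s R-R interval is an assumed example value, not one taken from this report), the three rules above give the same result:

rr_seconds = 0.80;                 % example R-R interval in seconds
rr_mm      = rr_seconds * 25;      % same interval in mm at 25 mm/s paper speed
n_large_sq = rr_mm / 5;            % one "large" square = 5 mm = 0.2 s
hr1 = 60   / rr_seconds;           % 75 bpm
hr2 = 1500 / rr_mm;                % 75 bpm
hr3 = 300  / n_large_sq;           % 75 bpm
fprintf('HR = %.0f bpm\n', hr1);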
Heart rate monitors used during sport consist of a chest strap with electrodes. The
signal is transmitted to a wrist receiver for display. Alternative methods of
measurement include pulse oximetry and seismocardiography.

Fig 1.2 ECG instrument

Fig 1.3 ECG wave form

Heart Rate Variability


Heart rate variability (HRV) is the physiological phenomenon of variation in
the time interval between heartbeats. It is measured by the variation in the beat-to-beat
interval. Methods used to detect beats include ECG, blood pressure,
ballistocardiograms, and the pulse wave signal derived from a photoplethysmograph
(PPG). ECG is considered superior because it provides a clear waveform, which makes
it easier to exclude heartbeats not originating in the sinoatrial node. The main inputs
are the sympathetic and the parasympathetic nervous system (PSNS) and humoral
factors. Factors that affect the input are the baroreflex, thermoregulation, hormones,
sleep-wake cycle, meals, physical activity, and stress.

HRV analysis
The most widely used methods can be grouped under time-domain and
frequency-domain. Other methods have been proposed, such as non-linear
methods.

1. Time-domain methods
These are based on the beat-to-beat or NN intervals, which are analysed to give
variables such as SDNN (the standard deviation of NN intervals), RMSSD (root
mean square of successive differences), SDSD (standard deviation of successive
differences), and EBC (estimated breath cycle).
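The sketch below (not taken from the report; the NN series is a hypothetical example) shows how the listed time-domain statistics can be computed in MATLAB from NN intervals expressed in seconds:

nn    = [0.82 0.80 0.85 0.79 0.83 0.81 0.84];   % hypothetical NN intervals (s)
dnn   = diff(nn);                               % successive differences
sdnn  = std(nn);                                % SDNN
rmssd = sqrt(mean(dnn.^2));                     % RMSSD
sdsd  = std(dnn);                               % SDSD
fprintf('SDNN=%.3f s  RMSSD=%.3f s  SDSD=%.3f s\n', sdnn, rmssd, sdsd);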

2. Frequency-domain methods
Frequency domain methods assign bands of frequency and then count the
number of NN intervals that match each band. The bands are typically high
frequency (HF) from 0.15 to 0.4 Hz, low frequency (LF) from 0.04 to 0.15 Hz, and
the very low frequency (VLF) from 0.0033 to 0.04 Hz.
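One common way to obtain these band powers, sketched below under the assumption that the Signal Processing Toolbox is available, is to resample the NN series onto a uniform grid and integrate a Welch PSD estimate over each band; the synthetic NN series is purely illustrative:

nn  = 0.8 + 0.05*sin(2*pi*0.25*(0.8*(1:300)));    % synthetic NN intervals (s)
t   = cumsum(nn);                                 % beat times (s)
fs  = 4;                                          % uniform resampling rate (Hz)
tu  = t(1):1/fs:t(end);
nnu = interp1(t, nn, tu, 'spline');               % evenly sampled NN series
[pxx, f] = pwelch(detrend(nnu), [], [], [], fs);  % Welch PSD estimate
bandpow  = @(lo, hi) trapz(f(f>=lo & f<hi), pxx(f>=lo & f<hi));
vlf = bandpow(0.0033, 0.04);
lf  = bandpow(0.04,   0.15);
hf  = bandpow(0.15,   0.40);
fprintf('VLF=%.2e  LF=%.2e  HF=%.2e  LF/HF=%.2f\n', vlf, lf, hf, lf/hf);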

Changes of HRV related to specific pathologies


A reduction of HRV has been reported in several cardiovascular and
noncardiovascular diseases.

Myocardial infarction

Depressed HRV after MI may reflect a decrease in vagal activity directed to


the heart. HRV in patients surviving an acute MI reveals a reduction in total and in

the individual power of spectral components. The presence of an alteration in


neural control is also reflected in a blunting of day-night variations of RR interval.

Diabetic neuropathy
In neuropathy associated with diabetes mellitus characterized by alteration in
small nerve fibers, a reduction in time domain parameters of HRV seems not only
to carry negative prognostic value but also to precede the clinical expression of
autonomic neuropathy.

Myocardial dysfunction
A reduced HRV has been observed consistently in patients with cardiac
failure. In this condition characterized by signs of sympathetic activation such as
faster heart rates and high levels of circulating catecholamines, a relation between
changes in HRV and the extent of left ventricular dysfunction was reported. In
particular, in most patients with a very advanced phase of the disease and with a
drastic reduction in HRV, an LF component could not be detected despite the
clinical signs of sympathetic activation. This reflects that, as stated above, the LF
may not accurately reflect cardiac sympathetic tone.

Liver cirrhosis
Liver cirrhosis is associated with decreased HRV. Decreased HRV in patients
with cirrhosis has a prognostic value and predicts mortality. Loss of HRV is also

associated with higher plasma pro-inflammatory cytokine levels and impaired


neurocognitive function in this patient population.

Average resting respiratory rates by age are:

birth to 6 weeks: 30–60 breaths per minute
6 months: 25–40 breaths per minute
3 years: 20–30 breaths per minute
6 years: 18–25 breaths per minute
10 years: 17–23 breaths per minute
Adults: 12–18 breaths per minute
Elderly (65 years and older): 12–28 breaths per minute.

1.1 Need For Proposed System


Traditional methods for calculating HR, HRV and RR include ECG and pulse-oximetry
techniques. An ECG heart rate monitor requires the patient to wear
adhesive gel patches or chest straps, which may cause allergic reactions in patients
with skin diseases. Pulse-oximetry methods involve attaching pulse-oximetry
sensors (in the form of spring-loaded clips) to the patient's fingertips or earlobes,
which may cause pain. Both of these techniques involve hardware RLC circuits
and ADC conversion. Resistors are available only in certain standard values; if
the required resistance does not match an available value, it is approximated to a
nearby value, introducing small errors that are resolved manually nowadays.
Further, the analog-to-digital conversion (using an ADC with 8 input pins and 3
selection pins) takes only 3 inputs from the sensors, which leaves nearly 4 cycles
unused and leads to wastage of bandwidth.

1.2 Description of Proposed System


The underlying source signal of interest is the BVP that propagates throughout the
body. During the cardiac cycle, volumetric changes in the facial blood vessels
modify the path length of the incident ambient light such that the subsequent
changes in amount of reflected light indicate the timing of cardiovascular events.
By recording a video of the facial region with a webcam, the red, green, and blue
(RGB) color sensors pick up a mixture of the reflected plethysmographic signal
along with other sources of fluctuations in light due to artifacts.
Given that hemoglobin absorptivity differs across the visible and near-infrared spectral range, each color sensor records a mixture of the original source
signals with slightly different weights. These observed signals from the RGB color
sensors are denoted by y1 (t), y2 (t), and y3 (t), respectively, which are the
amplitudes of the recorded signals at time point t. We assume three underlying
source signals, represented by x1 (t), x2 (t), and x3 (t).
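The following MATLAB fragment is only an illustrative simulation of this observation model (all numbers are assumptions, not measured values): three synthetic sources, one pulse-like, one respiration-like and one noise-like, are mixed with unknown channel weights to produce the observed y1(t), y2(t) and y3(t):

fs = 15;  t = 0:1/fs:60-1/fs;        % 60 s at 15 fps, matching the recordings
x1 = sin(2*pi*1.2*t);                % pulse-like source (~72 bpm)
x2 = sin(2*pi*0.3*t);                % respiration-like source
x3 = 0.5*randn(size(t));             % noise / illumination fluctuations
X  = [x1; x2; x3];                   % underlying sources x1(t), x2(t), x3(t)
A  = [0.9 0.3 0.2;                   % assumed mixing weights for R, G, B
      0.6 0.5 0.3;
      0.4 0.2 0.7];
Y  = A * X;                          % observed y1(t), y2(t), y3(t) as rows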

Capturing Video

The experiments were conducted indoors and with a varying amount of


ambient sunlight entering through windows as the only source of illumination.
Participants were seated at a table in front of a laptop at a distance of
approximately 0.5 m from the built-in webcam. During the experiment,
participants were asked to keep still, breathe spontaneously, and face the webcam

while their video was recorded for one minute. All videos were recorded in color
(24-bit RGB with three channels, 8 bits/channel) at 15 frames per second (fps)
with a pixel resolution of 640 × 480 and saved in AVI format on the laptop.

Recovery of BVP from Webcam Recordings


All the video and physiological recordings were analyzed offline using
custom software written in MATLAB. This section provides an overview of the stages
involved in our approach to recover the BVP from the webcam videos. The
coordinates of the face location (a bounding box) were identified automatically in the
first frame of the video recording, and we selected the center 60% of the box width
and its full height as the region of interest (ROI) for our subsequent calculations.
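A possible MATLAB sketch of this stage is given below. It assumes the Computer Vision Toolbox face detector (vision.CascadeObjectDetector); the report does not state which detector was used, and the file name is a placeholder:

v        = VideoReader('subject1.avi');             % recorded webcam video
frame1   = readFrame(v);
detector = vision.CascadeObjectDetector();          % frontal-face detector
bbox     = step(detector, frame1);                  % [x y w h]; take the first face
x = bbox(1,1) + 0.2*bbox(1,3);   w = 0.6*bbox(1,3); % centre 60% of the box width
y = bbox(1,2);                   h = bbox(1,4);     % full box height
rows = max(1,round(y)) : min(size(frame1,1), round(y+h));
cols = max(1,round(x)) : min(size(frame1,2), round(x+w));

v = VideoReader('subject1.avi');                    % reopen to start from frame 1
r_sig = []; g_sig = []; b_sig = [];
while hasFrame(v)
    f   = readFrame(v);
    roi = double(f(rows, cols, :));
    r_sig(end+1) = mean(mean(roi(:,:,1)));          % spatial average per channel
    g_sig(end+1) = mean(mean(roi(:,:,2)));
    b_sig(end+1) = mean(mean(roi(:,:,3)));
end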

1.3 Benefits of proposed system


To achieve a robust evaluation, ensemble empirical mode decomposition of
the Hilbert–Huang transform is used to acquire the primary heart rate signal while
reducing the effect of ambient light changes. The proposed approach is found to
outperform the current state of the art, providing greater measurement accuracy
with smaller variance and is shown to be feasible in real-world environments.


1.4 Organization of Project Report


The next chapter deals with the literature survey, followed by the specifications
needed for the system to run the software. The fourth chapter deals with the
architectural design, data flow diagram and activity diagram. The fifth chapter
deals with testing; it discusses the taxonomy of testing and the testing used for this project.

2. LITERATURE SURVEY


3. SYSTEM SPECIFICATION
The System Requirements Specification (SRS) document describes all data,
functional and behavioral requirements of the software under production or
development. It is produced at the culmination of the analysis task. The function
and performance allocated to software as part of system engineering are refined by
establishing a complete information description, a functional representation of
system behavior, an indication of performance requirements and design constraints,
and appropriate validation criteria.
HARDWARE REQUIREMENT SPECIFICATION

Processor           : Intel Pentium III or later
Main Memory (RAM)   : 256 MB
Cache Memory        : 512 KB
Monitor             : 17 inch Color Monitor
Keyboard            : 108 Keys
Mouse               : Optical Mouse
Hard Disk           : 160 GB

SOFTWARE REQUIREMENT SPECIFICATION


Front End/Language  : MATLAB
Back End/Database   : Nil
Operating System    : Windows XP Service Pack 2 / Windows Vista / Windows 7 / Windows 8

4. SYSTEM ANALYSIS AND DESIGN


Introduction
Architectural design values make up an important part of what influences
architects and designers when they make their design decisions. However,
architects and designers are not always influenced by the same values and
intentions. Values and intentions differ between different architectural movements.

4.1 Architectural Design


Software application architecture is the process of defining a structured
solution that meets all of the technical and operational requirements, while
optimizing common quality attributes such as performance, security, and
manageability. It involves a series of decisions based on a wide range of factors,
and each of these decisions can have considerable impact on the quality,
performance, maintainability, and overall success of the application.
Architectural design is a creative process so the process differs depending on
the type of system being developed. However, a number of common decisions span
all design processes. The architectural model of a system may conform to a generic
architectural model or style. An awareness of these styles can simplify the problem
of defining system architectures. However, most large systems are heterogeneous
and do not follow a single architectural style.
A description of the behavior of each component is part of the architecture.
In box-and-line diagrams, readers imagine the behavior of each component by
interpreting the labels of the boxes and lines. One must document the extent to which a
component's behavior influences how another component must be written to
interact with it. Structures are important because they boil away details about the
software that are independent of the concern reflected by the abstraction. Each
structure provides a useful perspective of the system. Sometimes the term view is used
instead of structure.
Software architectures are represented as graphs where nodes represent
components:

Procedures
Modules
Processes
Tools
Databases

and edges represent connectors:

Procedure calls
Event broadcasts
Database queries
Pipes

The design process starts by decomposing the software into components.
The decomposition should be done top-down, based on the functional decomposition in
the logical model. Correctness at each level can only be confirmed after
demonstrating feasibility of the next level down. Such demonstrations may require
prototyping. Designers rely on their knowledge of the technology, and experience
of similar systems, to achieve a good design in just a few iterations. This is the
lowest level of the task hierarchy, and is the stage at which the control flow has
been fully defined. It is usually unnecessary to describe the architecture down to
the module level. However, some consideration of module-level processing is
usually necessary if the functionality at higher levels is to be allocated correctly.

Fig 4.1 System Architecture


Figure 4.1 represents the system architecture, where a video clip of the patient's face
is converted into frames. The refined frames are then converted into RGB format, and
the green signal is separated using ICA (Independent Component Analysis). Further,
noise is eliminated and the required parameters are extracted using the JADE algorithm.
The extracted HR, HRV and RR are then validated by comparing them with ECG results.

4.2 Data Flow Diagram


A Data Flow Diagram (DFD) is a two-dimensional diagram that explains how
data is processed and transferred between different processes in a system. It is a
graphical technique that depicts information flow and the transforms that are
applied as data move from input to output. It provides a simple, intuitive method
for describing business processes without focusing on the details of computer systems.
The graphical depiction identifies each source of data and how it interacts with
other data sources to reach a common output. DFDs are an attractive technique because
they show what users do rather than what computers do.

Components of DFD
DFDs are constructed using four major components
1.External entities- represent the source of data as input to the system.
They are also the destination of system data. External entities can be called data
stored outside the system. These are represented by squares.
2. Data stores - represent stores of data within the system, for example,
computer files or databases. An open-ended box represents a data store, which implies
stored data at rest or a temporary repository of data.
3. Processes - represent activities in which data is manipulated by being
stored, retrieved or transferred in some way. In other words, we can say that a
process transforms the input data into output data. Circles stand for a process that
converts data into information.
4. Data flow - represents the movement of data from one component to
the other. An arrow identifies data flow, i.e. data in motion. It is a pipeline
through which information flows. Data flows are generally shown as one-way only.
Data flows between external entities are shown as dotted lines (---->).
Table 4.1 shows various symbols used for drawing DFD diagrams. A Data
Flow Diagram(DFD) is a graphical representation of the flow of data through an
information system, modelling its process aspects. A DFD is often used as a
preliminary step to create an overview of the system, which can later be elaborated.


Table 4.1 DFD Symbols

Name              Description
External Entity   Source or destination that lies outside the system boundaries
Data Store        Repository for data that are not moving
Process           Depicts the transform of an incoming flow to an outgoing flow
Data Flow         Shows the flow of data in the system
Data flow diagrams are either logical or physical.

Logical DFD-This type of DFD concentrates on the system process and


flow of data in the system. For example in a banking system, how data is moved
between different entities.

Physical DFD- This type of DFD shows how the data flow is actually
implemented in the system. It is more specific and close to the implementation.
Logical DFDs offer the following advantages:
Better communication with users
More stable systems
Better understanding of the business by analysts
Flexibility and maintenance
Elimination of redundancies and better creation of the physical model


Physical DFDs offer the following advantages:
Clarifying which processes are manual and which are automated
Describing processes in more detail than logical DFDs
Sequencing processes that have to be done in a particular order
Identifying temporary data stores
Specifying actual names of files and printouts
Adding controls to ensure the processes are done properly

Levels of DFD
Level 0 - The highest abstraction level DFD is known as the Level 0 DFD, which depicts
the entire information system as one diagram, concealing all the underlying details.
Level 0 DFDs are also known as context-level DFDs.
Level 1 - The Level 0 DFD is broken down into a more specific Level 1 DFD. The Level
1 DFD depicts the basic modules in the system and the flow of data among the various
modules. The Level 1 DFD also mentions basic processes and sources of information.
Higher-level DFDs can be transformed into more specific lower-level DFDs with a
deeper level of understanding, until the desired level of specification is achieved.

Level 0 DFD
Figure 4.2.1 depicts that the image is the input given to the system. A Level 0 DFD,
also called a fundamental system model or a context model, represents the entire
software element as a single bubble with input and output data indicated by incoming
and outgoing arrows, respectively. It shows how the system is divided into
sub-systems (processes), each of which deals with one or more of the data flows to or
from an external agent, and which together provide all of the functionality of the
system as a whole.

Fig 4.2.1 level 0 DFD

Level 1 DFD


Fig 4.2.2 Level 1 DFD


Figure 4.2.2 shows the Level 1 DFD, in which the three processes of the system are
explained and the flow between them is represented. The hybrid segmentation process is divided into
three processes. Initially the input CT image is preprocessed to reduce noise and
the refined image is segmented by detecting visceral and pleural space through
initialization. On further iteration the pleural space grows by the edges to provide
segmented pleural space. The pleural liquid level will be determined based on the
segmented pixels. Finally, a set of segmented images are used for 3D deformable
surface.

4.3 Activity Diagram


Activity diagrams are graphical representations of workflows of stepwise


activities and actions with support for choice, iteration and concurrency. In the
Unified Modeling Language, activity diagrams are intended to model both
computational and organizational processes (i.e. workflows). Activity diagrams
show the overall flow of control.
Activity diagrams are constructed from a limited number of shapes, connected
with arrows.
Arrows run from the start towards the end and represent the order in which
activities happen. Activity diagrams may be regarded as a form of flowchart.
Typical flowchart techniques lack constructs for expressing concurrency. However,
the join and split symbols in activity diagrams only resolve this for simple cases;
the meaning of the model is not clear when they are arbitrarily combined with
decisions or loops.
Table 4.2 shows the various symbols used for drawing activity diagram.
Activity diagrams are as simple to make as an ordinary flowchart. Each symbol has
a meaning and context where its use is appropriate. It focuses on the flow of
activities involved in a single process. The Activity diagram shows how these
single-process activities depend on one another.


Table 4.2 Activity symbols

NAME                DESCRIPTION
Action              The task that needs to be done
Decision            Conditional flow of control
Split or Merge Bar  Merges concurrent transitions into a single target, or splits a single transition into concurrent targets
Initial State       Pseudo state that represents the start of the event
Final State         End of state transitions


[Fig 4.3 node labels: Face Reflectance; Red/Green/Blue Channels; Red/Green/Blue Signals; Transform the Signals; Separated Sources 1/2/3]

Fig.4.3 Activity Diagram


Figure 4.3 represents the activity diagram: the video of the human face is
recorded and split into separate frames, from which the ROI is taken. The three
channels are then determined from the corresponding frames. Any error present is
eliminated using the JADE algorithm. Finally, the human heart rate and respiratory
rate are evaluated.


4.4 Implementation
Implementation is the stage of the project when the theoretical design is
turned into a working system. It can thus be considered the most critical
stage in achieving a successful new system and in giving the user confidence that
the new system will work and be effective. The implementation stage involves
careful planning, investigation of the existing system and its constraints on
implementation, designing of methods to achieve changeover, and evaluation of
changeover methods.
Each program is tested individually at the time of development using test
data, and it is verified that the programs link together in the way specified in the
program specification. The computer system and its environment are tested to the
satisfaction of the user, and so the system is going to be implemented very soon. A
simple operating procedure is included so that the user can understand the different
functions clearly and quickly.
Initially the desired tool is selected, then designing the system to get
required output. The final stage is to document the entire system which provides
components and the operating procedures of the system.
In this project, the human face video is first recorded and the frames are separated
using the ROI. The ROI is then separated into the three RGB channels and spatially
averaged over all pixels in the ROI to yield a red, blue, and green measurement
point for each frame, forming the raw signals. Each trace is 1 min long. The three
signals are then examined to determine which is the best signal for calculating the
heart rate variation; most probably the green signal is the best one for determining the
difference in signal propagation. To remove the environmental noise, ensemble
empirical mode decomposition is used, and then the JADE algorithm is applied to find
the HR, HRV and RR rates.

Modules used
Capturing module
BVP recovery module
Quantification of physiological parameters module (HR, HRV, RR)

4.4.1 CAPTURING MODULE


The experiments were conducted indoors and with a varying
amount of ambient sunlight entering through windows as the only source of
illumination. Participants were seated at a table in front of a laptop at a distance of
approximately 0.5 m from the built-in webcam. During the experiment,
participants were asked to keep still, breathe spontaneously, and face the webcam
while their video was recorded for one minute. All videos were recorded in color
(24-bit RGB with three channels, 8 bits/channel) at 15 frames per second (fps)
with a pixel resolution of 640 × 480 and saved in AVI format.
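A hedged sketch of the acquisition step is shown below. It assumes the Image Acquisition Toolbox with a 'winvideo' webcam adaptor; the adaptor name, format string and file name are assumptions, and holding 60 s of raw frames in memory is only practical for short recordings like this one:

vid = videoinput('winvideo', 1, 'RGB24_640x480');   % 640x480 RGB webcam stream
vid.FramesPerTrigger   = 15 * 60;                   % 15 fps for 60 s
vid.ReturnedColorspace = 'rgb';
start(vid);
wait(vid, 70);                                      % wait (with margin) for acquisition
frames = getdata(vid);                              % H x W x 3 x N uint8 array
delete(vid);

vw = VideoWriter('subject1.avi');                   % save for offline analysis
vw.FrameRate = 15;
open(vw);
writeVideo(vw, frames);
close(vw);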

4.4.2 BVP RECOVERY MODULE


All the video and physiological recordings were analyzed offline
using custom software written in MATLAB. This section provides an overview of the
stages involved in our approach to recover the BVP from the webcam videos. The
coordinates of the face location were identified automatically in the first frame of the
video recording, and we selected the center 60% of the box width and its full height as
the region of interest (ROI) for our subsequent calculations.
The ROI was then separated into the three RGB channels and
spatially averaged over all pixels in the ROI to yield a red, blue, and green
measurement point for each frame, forming the raw signals y1(t), y2(t), and y3(t),
respectively. Each trace was 1 min long. The raw traces were detrended using a
procedure based on a smoothness priors approach with the smoothing parameter
λ = 10 (cutoff frequency of 0.89 Hz) and normalized. ICA was then applied to perform
motion-artifact removal by separating the fluctuations caused predominantly by the BVP
from the observed raw signals.
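A sketch of the detrend/normalise/ICA stage follows, continuing from the per-frame channel averages r_sig, g_sig and b_sig described above. MATLAB's detrend is used here as a simple stand-in for the smoothness-priors detrending, z-score normalisation is our assumption, and jadeR refers to Cardoso's freely available JADE implementation (the "JADER" routine mentioned in the algorithm below), assumed to be on the MATLAB path:

zr = detrend(r_sig);  zr = (zr - mean(zr)) / std(zr);   % detrend + z-score
zg = detrend(g_sig);  zg = (zg - mean(zg)) / std(zg);
zb = detrend(b_sig);  zb = (zb - mean(zb)) / std(zb);

Yobs = [zr; zg; zb];          % 3 x N matrix of normalised observations
B    = jadeR(Yobs, 3);        % demixing matrix estimated by JADE
S    = B * Yobs;              % separated source signals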

4.4.3 QUANTIFICATION OF PHYSIOLOGICAL PARAMETER MODULE
The separated source signal was smoothed using a five-point moving
average filter and band-pass filtered (128-point Hamming window, 0.7–4 Hz). To
refine the BVP peak fiducial point, the signal was interpolated with a cubic spline
function at a sampling frequency of 256 Hz. We developed a custom algorithm to
detect the BVP peaks in the interpolated signal and applied it to obtain the Inter-Beat
Intervals (IBIs).
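Continuing the sketch (Signal Processing Toolbox assumed; the choice of source row and the 0.7-4 Hz pass band are assumptions on our part):

fs   = 15;                                   % video frame rate
bvp  = S(2, :);                              % assume the second source carries the BVP
bvp  = filter(ones(1,5)/5, 1, bvp);          % five-point moving average
b    = fir1(128, [0.7 4]/(fs/2), 'bandpass', hamming(129));
bvp  = filtfilt(b, 1, bvp);                  % zero-phase band-pass filtering

t    = (0:numel(bvp)-1)/fs;
fsi  = 256;                                  % interpolation frequency (Hz)
ti   = 0:1/fsi:t(end);
bvpi = interp1(t, bvp, ti, 'spline');        % cubic-spline interpolation

[~, locs] = findpeaks(bvpi, 'MinPeakDistance', round(0.4*fsi));  % BVP peaks
ibi = diff(ti(locs));                        % inter-beat intervals (s)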


(i) HR DETECTION
HR detection is performed by selecting the green signal
among the three signals. To avoid inclusion of artifacts, such as ectopic beats or
motion, the IBIs were filtered using a non-causal variable-threshold algorithm
with a tolerance of 30%. HR was calculated from the mean of the IBI time series as
60/IBI.

(ii) HRV DETECTION
Analysis of HRV was performed by power spectral density (PSD)
estimation using the Lomb periodogram. The low-frequency (LF) and high-frequency
(HF) powers were measured as the area under the PSD curve
corresponding to 0.04–0.15 and 0.15–0.4 Hz, respectively, and quantified in
normalized units (n.u.) to minimize the effect of changes in total power on the values.
The LF component is modulated by baroreflex activity and includes both
sympathetic and parasympathetic influences. The HF component reflects
parasympathetic influence on the heart through efferent vagal activity and is
connected to respiratory sinus arrhythmia (RSA), a cardiorespiratory phenomenon
characterized by IBI fluctuations that are in phase with inhalation and exhalation.
We also calculated the LF/HF ratio, considered to mirror sympathovagal balance
or to reflect sympathetic modulations.
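A sketch of this estimation, assuming the plomb function from the Signal Processing Toolbox (R2014b or later) and the IBI series from the previous step:

t_beats  = cumsum(ibi);                          % beat times (s)
[pxx, f] = plomb(detrend(ibi), t_beats);         % Lomb periodogram of the IBIs
bandpow  = @(lo, hi) trapz(f(f>=lo & f<hi), pxx(f>=lo & f<hi));
lf    = bandpow(0.04, 0.15);
hf    = bandpow(0.15, 0.40);
lf_nu = lf/(lf + hf);   hf_nu = hf/(lf + hf);    % normalised units
lfhf  = lf/hf;                                   % LF/HF ratio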

(iii) RR DETECTION
Since the HF component is connected with breathing, the RR can be
estimated from the HRV power spectrum. When the frequency of respiration
changes, the center frequency of the HF peak shifts in accordance with RR
[20]. Thus, we calculated RR from the center frequency of the HF peak, fHFpeak, in
the HRV PSD derived from the webcam recordings as 60·fHFpeak. The
respiratory rate measured using the chest belt sensor was determined from the
frequency corresponding to the dominant peak, fresppeak, in the PSD of the
recorded respiratory waveform as 60·fresppeak.
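Continuing the sketch, the centre frequency of the HF peak is taken from the Lomb PSD computed above:

hf_band   = (f >= 0.15) & (f < 0.40);
[~, k]    = max(pxx .* hf_band);     % index of the dominant HF peak
f_HFpeak  = f(k);                    % centre frequency of the HF peak (Hz)
resp_rate = 60 * f_HFpeak;           % respiratory rate in breaths per minute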

ALGORITHM:
Step 1: Start.
Step 2: Convert the given video into .avi format.
Step 3: Calculate totalframe, totaltime and framerate for the given format.
Step 4: Separate the three different frames with the 3 signals (red, blue, green).
Step 5: Crop the image to a pixel resolution that covers only the face, and calculate the mean value of the 3 signals for the cropped image.
Step 6: Find the detrended value for each separated signal:
detr_r=detrend(r_sig)./sr;
detr_g=detrend(g_sig)./sg;
detr_b=detrend(b_sig)./sb;
Step 7: Plot the values.
Step 8: Combine the detr_(r,g,b) signals and apply the JADER algorithm.
Step 9: Apply a Hamming-window band-pass filter to the combined signal as per the formula
b = fir1(N, [Fc1 Fc2]/(Fs/2), 'bandpass', win, flag);


Step 10: Find the peak values of the signal.
Step 11: Calculate HR and RR using the formulas below:
resp_rate=60*fpeak
heart_rate=60/mean(ibi)
Step 12: Display the corresponding values in the figure.
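Steps 2-3 can be sketched with VideoReader as follows (the file name is a placeholder; older MATLAB releases also expose the frame count directly through the NumberOfFrames property):

v          = VideoReader('subject1.avi');    % the converted .avi input
framerate  = v.FrameRate;                    % frames per second
totaltime  = v.Duration;                     % total duration in seconds
totalframe = floor(framerate * totaltime);   % approximate total number of frames
fprintf('%d frames, %.1f s at %.1f fps\n', totalframe, totaltime, framerate);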

IMPLEMENTATION PROCEDURE:
BLOCK DIAGRAM OF THE ENTIRE SYSTEM:

[Block diagram components: MATLAB, MAX232, Easy Pulse Sensor, 16 x 2 LCD, PIC16F877A with ADC, Power Supply]

Fig 4.4 Block diagram for implementation


In this design, the PIC16F877A microcontroller is used. The controller gets its input
from the ADC and displays the values on the LCD.

Introduction to PIC16F877A Microcontroller:


The PIC16F877A is one of the PIC Micro family of microcontrollers that is popular
at the moment, from beginners through to professionals, because the PIC16F877A is
very easy to use and employs FLASH memory technology, so it can be written and
erased up to a thousand times. The superiority of this RISC microcontroller over other
8-bit microcontrollers lies especially in its speed and its code compression. The
PIC16F877A has 40 pins with 33 I/O paths.
The PIC16F877A perfectly fits many uses, from automotive industries and
controlling home appliances to industrial instruments, remote sensors, electrical
door locks and safety devices. It is also ideal for smart cards as well as for battery-supplied
devices because of its low consumption.
EEPROM memory makes it easier to apply microcontrollers to devices
where permanent storage of various parameters is needed (codes for transmitters,
motor speed, receiver frequencies, etc.). Low cost, low consumption, easy handling
and flexibility make the PIC16F877A applicable even in areas where microcontrollers
had not previously been considered (for example: timer functions, interface
replacement in larger systems, coprocessor applications, etc.). The In-System
Programmability of this chip (along with using only two pins for data transfer)
makes possible the flexibility of a product after assembling and testing have been
completed.

SENSORS:
A sensor is a device that detects and responds to some type of input from the
physical environment. The specific input could be light, heat, motion, moisture,
pressure, or any one of a great number of other environmental phenomena. The
output is generally a signal that is converted to a human-readable display at the
sensor location or transmitted electronically for reading or further processing.

Fig 4.4.1 Easy pulse sensor

The Easy Pulse sensor is based on the principle of photoplethysmography


(PPG) which is a non-invasive method of measuring the variation in blood volume
in tissues using a light source and a detector. Since the change in blood volume is
synchronous to the heart beat, this technique can be used to calculate the heart rate.
Transmittance and reflectance are the two basic types of photoplethysmography. In
transmittance PPG, light is emitted into the tissue and a light detector is
placed on the opposite side of the tissue to measure the resultant light. Because of
the limited penetration depth of the light through organ tissue, transmittance
PPG is applicable only to a restricted set of body parts, such as the finger or the ear lobe.
In reflectance PPG, however, the light source and the light detector are both
placed on the same side of a body part. The light is emitted into the tissue and the
reflected light is measured by the detector. As the light does not have to penetrate
the body, reflectance PPG can be applied to any part of the human body. In either
case, the detected light reflected from or transmitted through the body part will
fluctuate according to the pulsatile blood flow caused by the beating of the heart.
The HRM-2511E sensor is manufactured by Kyoto Electronic Co., China,
and operates in transmission mode. The sensor body is built with a flexible silicone
rubber material that helps to hold the sensor tightly to the finger. Inside the
sensor case, an IR LED and a photodetector are placed on two opposite sides,
facing each other. When a fingertip is plugged into the sensor, it is illuminated
by the IR light coming from the LED. The photodetector diode receives the
light transmitted through the tissue on the other side. More or less light is transmitted
depending on the tissue blood volume. Consequently, the transmitted light intensity
varies with the pulsing of the blood with each heart beat. A plot of this variation against
time is referred to as a photoplethysmogram or PPG signal. The
following picture shows a basic transmittance PPG probe setup to extract the pulse

Fig 4.4.2 HRM-2511E as a transmission PPG probe


The PPG signal consists of a large DC component, which is attributed to the total
blood volume of the examined tissue, and a pulsatile (AC) component, which is
synchronous to the pumping action of the heart. The AC component, which carries
vital information including the heart rate, is much smaller in magnitude than the
DC component. A typical PPG waveform is shown in the figure below (not to
scale).

Fig 4.4.3 PPG components

The two maxima observed in the PPG are called the Systolic and Diastolic peaks, and
they can provide valuable information about the cardiovascular system (this topic
is outside the scope of this report). The time duration between two consecutive
Systolic peaks gives the instantaneous heart rate.
Here are the features of the Easy Pulse V1.1 sensor module:
Uses the HRM-2511E transmission PPG sensor for stable readings
MCP6004 op-amp with rail-to-rail output capability for maximum signal swing
Separate analog and digital outputs
Potentiometer gain control for the analog output
Pulse width control for the digital output
Additional test points on board for analyzing signals at different stages of instrumentation

ADC:
Basic analog-to-digital converter terminology will be covered first, followed
by configuration of the analog-to-digital converter peripheral. Next, information on
the usage of the peripheral will be presented, initially focusing on the 8-bit
analog-to-digital converter. Then the differences between the 8-bit and the 10- or 12-bit
converters will be discussed. Finally, some additional reference resources will be
highlighted.
Microcontrollers are very efficient at processing digital numbers, but they
cannot handle analog signals directly. An analog-to-digital converter, converts an
analog voltage level to a digital number. The microcontroller can then efficiently

process the digital representation of the original analog voltage. By definition,


digital numbers are non-fractional whole numbers.
In this example, an input voltage of 2.343 volts is converted to 87. The
user's software can use the value 87 as the representation of the original input
voltage. At this point, the number 87 is only used for discussion purposes as a
typical output.
The analog-to-digital converter is only capable of performing an accurate
conversion if the analog input voltage is within the valid input range of the
converter. If the input voltage falls outside this range, the conversion value will be


inaccurate. The input range is set by high and low voltage references. These define
the upper and lower limits of the valid input range. In many cases, the high and
low voltage references are selected as the microcontroller supply voltage and
ground, at other times an external reference or references are used.In addition,
some devices have internal voltage references that can be used. The source or
sources for these voltage references are a configuration option when setting up the
analog-to-digital converter in the PICmicro microcontroller (MCU). Note that
there are restrictions on the voltage reference levels, for example: the reference
voltages generally shouldn't be less than Vss or greater than VDD. There is also a
minimum difference that is required between the high and low reference voltages.
Please consult your data sheet for the voltage reference requirements.
The output of an analog-to-digital converter is a quantized representation of
the original analog signal. The term quantization refers to subdividing a range into
small but measurable increments. The total allowable input range is divided into a

finite number of regions with a fixed increment. The analog-to-digital converter


determines the appropriate region to assign the given input voltage.
In this example, the step or increment is one-tenth of a volt and the input
voltage is 2.343 volts. The appropriate result would be assigned as a digital value
of 87, because 2.343 volts fits between the quantization limits of 2.3 volts and 2.4
volts. Any input voltage between the 2.3 and 2.4 volt quantization limits will be
assigned a digital value of 87.
The process of quantization has the potential to introduce an inaccuracy
known as quantization error, which can be viewed as being similar to a rounding
error. In the above example, the 2.343 volt input is in effect rounded to the nearest


tenth of a volt. The maximum quantization error in this case would be five
hundredths of a volt or one-half of the increment size. It should be noted that the
minimum quantization error for the analog-to-digital converter peripheral in the
PICmicro devices is 500 micro volts. Therefore, the smallest step size for each
state cannot be less than one milli-volt.
Resolution defines the number of possible analog-to-digital converter output
states. As previously discussed, the result is a digital or whole number, so for an
8-bit converter the possible states will be: zero, one, two, three and so on, with 255
as the maximum state. A 10-bit converter will have 1023 as the maximum state,
and a 12-bit converter will have 4095 as the maximum state. If the input range
remains constant, a higher resolution converter will have less quantization error
because the range is divided into smaller steps. This is similar in concept to the
process of rounding a number to the nearest hundredths, having potentially less
error than rounding to the nearest tenths.
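The effect of resolution on step size and worst-case error can be sketched in MATLAB for an ideal converter (the 0-5 V reference range is an assumption, so the codes differ from the 0.1 V worked example above):

vin = 2.343;  vref = 5.0;                        % assumed input and reference
for nbits = [8 10 12]
    levels = 2^nbits;
    step   = vref / levels;                      % quantisation increment
    code   = min(floor(vin / step), levels - 1); % ideal digital output code
    fprintf('%2d-bit: code=%4d, step=%.4f V, max error=%.4f V\n', ...
            nbits, code, step, step/2);
end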
Acquisition time is the amount of time required to charge the holding capacitor
on the front end of an analog-to-digital converter. The holding capacitor must be
given sufficient time to settle to the analog input voltage level before the actual
conversion is initiated. If sufficient time is not allowed for acquisition, the
conversion will be inaccurate. The required acquisition time is based on a number
of factors, two of them being the impedance of the internal analog multiplexer and
the output impedance of the analog source.

LCD:
LCD (Liquid Crystal Display) screen is an electronic display module that
finds a wide range of applications. A 16x2 LCD display is a very basic module and is
very commonly used in various devices and circuits. These modules are preferred
over seven segments and other multi segment LEDs. The reasons being: LCDs are


economical; easily programmable; have no limitation of displaying special &


even custom characters (unlike in seven segments), animations.
A 16x2 LCD means it can display 16 characters per line and there are 2 such
lines. In this LCD each character is displayed in a 5x7 pixel matrix. This LCD has
two registers, namely, Command and Data. The command register stores the
command instructions given to the LCD. A command is an instruction given to the
LCD to do a predefined task like initializing it, clearing its screen, setting the
cursor position, controlling the display etc. The data register stores the data to be
displayed on the LCD. The data is the ASCII value of the character to be displayed
on the LCD. The LCD panel's Enable and Register Select lines are connected to the
Control Port. The Control Port is an open collector / open drain output. By
incorporating two 10K external pull-up resistors, the circuit is made portable for a
wider range of computers. The R/W line of the LCD panel is hard-wired into the
write mode, which will not cause any bus conflicts on the data lines. Hence the
LCD's internal Busy Flag cannot tell if the LCD has accepted and finished processing
the last instruction or not. The 10k potentiometer controls the contrast of the LCD panel.
Table 4.3 Pin Details of LCD

Pin      Symbol     Description
1        GND        Ground
2        Vcc        Supply Voltage +5V
3        VEE        Contrast adjustment
4        RS         Register select: 0 -> Control input, 1 -> Data input
5        R/W        Read/Write
6        EN         Enable
7 to 14  D0 to D7   I/O Data pins

POWER SUPPLY:
Introduction:
The input to the circuit is applied from the regulated power supply. The a.c.
input, i.e., 230V from the mains supply, is stepped down by the transformer to 12V and
is fed to a rectifier. The output obtained from the rectifier is a pulsating d.c voltage.
So in order to get a pure d.c voltage, the output voltage from the rectifier is fed to a
filter to remove any a.c components present even after rectification. Now, this
voltage is given to a voltage regulator to obtain a pure constant dc voltage.

Block Diagram:


Fig 4.4.4 Block diagram for power supply

Transformer:
Usually, DC voltages are required to operate various electronic equipment
and these voltages are 5V, 9V or 12V. But these voltages cannot be obtained
directly. Thus the a.c input available at the mains supply i.e., 230V is to be brought

down to the required voltage level. This is done by a transformer. Thus, a step
down transformer is employed to decrease the voltage to a required level.

Rectifier:
The output from the transformer is fed to the rectifier. It converts A.C. into
pulsating D.C. The rectifier may be a half-wave or a full-wave rectifier. In this
project, a bridge rectifier is used because of its merits like good stability and full
wave rectification.

Filter:


Capacitive filter is used in this project. It removes the ripples from the output of
rectifier and smoothens the D.C. Output received from this filter is constant until
the mains voltage and load is maintained constant. However, if either of the two is
varied, D.C. voltage received at this point changes. Therefore a regulator is applied
at the output stage.

Voltage Regulator:
As the name itself implies, it regulates the input applied to it. A voltage regulator
is an electrical regulator designed to automatically maintain a constant voltage
level. In this project, power supply of 5V and 12V are required. In order to obtain
these voltage levels, 7805 and 7812 voltage regulators are to be used. The first
number 78 represents positive supply and the numbers 05, 12 represent the
required output voltage levels.

5. TESTING
Testing is a set of activities that can be planned in advance and conducted
systematically. For this reason a template for software testing, a set of steps into
which we can place specific test case design techniques and testing methods should
be defined for the software process. Testing often accounts for more effort than any
other software engineering activity. If it is conducted haphazardly, time is wasted,
unnecessary effort is expended, and even worse, errors sneak through undetected.
It would therefore seem reasonable to establish a systematic strategy for testing
software.


Testing is a process of executing a program with the intent of finding an


error. A good test case is one that has a high probability of finding an as yet
undiscovered error. A successful test is one that uncovers an as yet undiscovered
error. System testing is the stage of implementation, which is aimed at ensuring
that the system works accurately and efficiently as expected before live operation
commences. It verifies that the whole set of programs hangs together. System
testing requires a test plan that consists of several key activities and steps for program,
string and system testing, and it is important in adopting a successful new system. This is the last
chance to detect and correct errors before the system is installed for user
acceptance testing.
The software testing process commences once the program is created and the
documentation and related data structures are designed. Software testing is
essential for correcting errors. Otherwise the program or the project is not said to
be complete. Software testing is the critical element of software quality assurance
and represents the ultimate review of specification, design and coding. Testing

is the process of executing the program with the intent of finding the error. A good
test case design is one that has a high probability of finding an as-yet-undiscovered error.
Testing is generally described as a group of procedures carried out to
evaluate some aspects of a piece of software. It can be described as a process used
for revealing defects in the software, and for establishing that the software has
attained a specific degree of quality with respect to selected attributes. It is an
investigation which is conducted to provide stakeholders with information about
the quality of the product or service under test. Testing can also provide an


objective, independent view of the software to allow the business to appreciate and
understand the risks of the software implementation.
Testing is more than just debugging. The purpose of testing can be quality
assurance, verification, and validation, or reliability estimation. Testing can be used
as a generic metric as well. Correctness testing and reliability are the two major
areas of testing. Software testing is a trade-off between budget, time and quality.
Poor quality software that can cause loss of life or property is no longer acceptable
to society. Failures can result in catastrophic losses. Conditions demand software
development staff with interest and training in the areas of software product and process
quality. Highly qualified staff ensures that software products are built on time,
within budget, and are of the highest quality with respect to attributes such as
reliability, correctness, usability and the ability to meet all user requirements.
Testing helps in verifying and validating the software to see if it is working as it is
intended to be working. Test techniques include, but are not limited to, the process
of executing a program or application with the intent of finding software bugs
(errors or other defects).

Software must definitely be tested before it is delivered to the users as


untested software may contain faults, errors or failures. Hence, it is seen that
testing is an essential part of the process of developing software or a software
project. The necessity to test the software and hence, the necessity to test the
project (need for testing), the taxonomy of testing, the types of testing, the levels of
testing and the test case design for the project are elucidated in this chapter.

5.1 NEED OF TESTING


When something is done, we need to know why it is being done in order to


perform the process in a thorough and satisfactory manner. From this it is inferred
that knowing what testing is and what it does is not enough; the need for testing also should be
known. A primary purpose of testing is to detect software failures so that defects
may be discovered and corrected. Testing cannot establish that a product functions
properly under all conditions but can only establish that it does not function
properly under specific conditions. The scope of software testing often includes
examination of code as well as execution of that code in various environments and
conditions as well as examining the aspects of code such as whether it does what it
is supposed to do and whether it does what it needs to do. The user will appreciate
it if a system is tested before it is delivered. It is good practice to include testing as
part of the development process in order to minimize the efforts prior to
implementation.
It is for this reason that a user representative is recommended to be on the
development team, so that they can test the system at its various stages of development.
This also assists with user training.

While testing, care must be taken to not fall into the trap of rewriting large
parts of the system unnecessarily or even adding new coding. This comes about
when it is obvious that not all of the required functionality has been implemented. It
can also happen when the user introduces new functionality which they had
omitted from the original specifications. Testing should, therefore, simply be
ensuring that the systems meets its original specifications and accurately performs
to that specification. Testing is not an easy phase of system development and
should not be treated lightly. Some organizations employ staff specifically to carry


out the testing of the products prior to release to the user. During this outcome it is
required to:
1. Implement a test plan using a defined strategy: Maintain test
documentation recording both the expected results of the test data and the actual
results. The bank of test data should be sufficient to thoroughly test the
implemented solution in scope and range.
2. Evaluate the results of test runs: Amend coding as necessary: where there
are discrepancies between the expected results and the actual results, the
application and documentation must be amended and corrected accordingly.
3. Testing is usually performed for the following purposes:

To improve quality
As computers and software are used in critical applications, the outcome of a
bug can be severe. Bugs can cause huge losses. Bugs in the critical systems have
caused airplane crashes, allowed space shuttle missions to go awry, halted trading
on the stock market, and worse. Bugs can kill. Bugs can cause disasters. Quality is

the conformance of the specified design requirement. Being correct, the minimum
requirement of quality, means performing as required under specified conditions.
Debugging, a narrow view of software testing, is performed heavily to find out
design defects by the programmer. The imperfection of human nature makes it
almost impossible to make a moderately complex program correct the first time.
Finding problems and getting them fixed is the purpose of debugging in the
programming phase.


For verification and validation (V&V)


Another important purpose of testing is verification and validation (V&V).
Testing can serve as metrics. It is heavily used as a tool in the V&V process.
Testers can make claims based on interpretations of the testing results: either
the product works under certain situations, or it does not work. We can also
compare the quality among different products under the same specifications,
based on results from the same test. We cannot test quality directly, but we can test
related factors to make quality visible. Quality has three sets of factors:
functionality, engineering and adaptability. These three sets of factors can be
thought of as dimensions in the software quality space. Each dimension may be
broken down into its component factors and considerations at successively lower
level of detail.
Good testing provides measures for all relevant factors. The importance of
any particular factor varies from application to application. Any system where
human lives are at stake must place an extreme emphasis on reliability and
integrity. In the typical business system usability and maintainability are the key
factors, while for a one-time scientific program neither may be significant. Our
testing, to be fully effective, must be geared to measuring
each relevant factor and thus forcing quality to become tangible and visible.

For reliability estimation


Software reliability has important relations with many aspects of the
software, including the structure, and the amount of testing it has been subjected
to.


5.2 Testing Objective


The main testing objectives are:
1. Testing is a process of executing a program with the intent of finding an error.
2. A good test case is one that has a high probability of finding an undiscovered error.
3. A successful test is one that uncovers an as-yet-undiscovered error.

5.3 Types of Testing


Fig 5.1 Testing types

White Box Testing


White Box Testing is testing in which the software tester has knowledge of
the inner workings, structure and language of the software, or at least its purpose. It
is used to test areas that cannot be reached from a black-box level. Designing test
cases using this inner structure of the software requires knowledge of that structure;
the code, or a suitable pseudo-code-like representation, must be available. These testing


methods are especially useful for revealing design and code based control, logic
and sequence defects, initialization defects and data flow defects.
A major White box testing technique is Code Coverage analysis. Code
Coverage analysis, eliminates gaps in a test case suite. It identifies areas of a
program that are not exercised by a set of test cases. Once gaps are identified, you
create test cases to verify untested parts of code, thereby increase the quality of the
software product. There are automated tools available to perform Code coverage
analysis. Below are a few coverage analysis techniques
Statement Coverage: This technique requires every possible statement in the
code to be tested at least once during the testing process
Branch Coverage: This technique checks every possible path (if-else and other
conditional loops) of a software application.
Apart from the above, there are numerous coverage types such as Condition
Coverage, Multiple Condition Coverage, Path Coverage, Function Coverage
etc. Each technique has its own merits and attempts to test (cover) all parts of the
software code. Using Statement and Branch coverage you generally attain 80–90% code coverage, which is sufficient.

Black Box Testing


Black Box Testing is testing of the software without any knowledge
of the inner workings, structure or language of the module being tested. Black-box
tests, like most other kinds of tests, must be written from a definitive source
document, such as a specification or requirements document. It is testing in which the
software under test is treated as a black box: you cannot see into it. The test provides
inputs and responds to outputs without considering how the software works. It
exploits specifications to generate test cases in a methodical way to avoid redundancy
and to provide better coverage.
By applying black-box techniques, we derive a set of test cases that satisfy
the following criteria: (1) test cases that reduce, by a count that is greater than one,
the number of additional test cases that must be designed to achieve reasonable
testing and (2) test cases that tell us something about the presence or absence of
classes of errors, rather than an error associated only with the specific test at hand.

Graph-Based Testing:
The first step in black-box testing is to understand the objects that are
modeled in software and the relationships that connect these objects. Once this has
been accomplished, the next step is to define a series of tests that verify all objects
have the expected relationship to one another [BEI95]. Stated in another way,
software testing begins by creating a graph of important objects and their
relationships and then devising a series of tests that will cover the graph so that
each object and relationship is exercised and errors are uncovered.

Equivalence Partitioning:
It is a black-box testing method that divides the input domain of a program
into classes of data from which test cases can be derived. An ideal test case single-handedly uncovers a class of errors (e.g., incorrect processing of all character data)
that might otherwise require many cases to be executed before the general error is
observed. Equivalence partitioning strives to define a test case that uncovers

50

classes of errors, thereby reducing the total number of test cases that must be
developed.
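
For instance, the age of the recorded subject is one input to this project; a hedged sketch of how its domain could be partitioned is shown below (the class boundaries are illustrative assumptions, not taken from the actual test plan):

% Hypothetical equivalence classes for the age input (illustrative only)
classes = {'child (5-15)', 'adult (16-40)', 'older adult (41-70)', 'invalid (<5 or >70)'};
representatives = [10, 25, 55, 2];            % one test value chosen per class
for k = 1:numel(classes)
    fprintf('Class %-22s -> test with age %d\n', classes{k}, representatives(k));
end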

Boundary Value Analysis:


For reasons that are not completely clear, a greater number of errors tend to
occur at the boundaries of the input domain rather than in the "center." It is for this
reason that boundary value analysis (BVA) has been developed as a testing
technique. Boundary value analysis leads to a selection of test cases that exercise
bounding values. Boundary value analysis is a test case design technique that
complements equivalence partitioning. Rather than selecting any element of an
equivalence class, BVA leads to the selection of test cases at the "edges" of the
class. Rather than focusing solely on input conditions, BVA derives test cases from
the output domain as well.
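
Continuing the hypothetical age partitions sketched above (the 5-70 valid range is an assumption used only for illustration), BVA would select values at and immediately around each class edge rather than from the class interiors:

% Boundary values around an assumed valid age range of 5-70 (illustrative)
lowEdge  = [4 5 6];      % just below, at, and just above the lower bound
highEdge = [69 70 71];   % just below, at, and just above the upper bound
boundaryCases = [lowEdge highEdge]   % displayed list of boundary test inputs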

Comparison Testing:
When multiple implementations of the same specification have been
produced, test cases designed using other black-box techniques (e.g., equivalence
partitioning) are provided as input to each version of the software. If the output
from each version is the same, it is assumed that all implementations are correct. If
the output is different, each of the applications is investigated to determine if a
defect in one or more versions is responsible for the difference. In most cases, the
comparison of outputs can be performed by an automated tool. Comparison testing
is not foolproof. If the specification from which all versions have been developed
is in error, all versions will likely reflect the error. In addition, if each of the
independent versions produces identical but incorrect results, comparison testing
will fail to detect the error.

Unit Testing
Unit testing focuses verification effort on the smallest unit of software
design: the software component or module. Using the component-level design
description as a guide, important control paths are tested to uncover errors within
the boundary of the module. The relative complexity of the tests, and of the errors
they uncover, is limited by the constrained scope established for unit testing. The
unit test is white-box oriented, and the step can be conducted in parallel for multiple
components.
The module interface is tested to ensure that information properly flows into
and out of the program unit under test. The local data structure is examined to
ensure that data stored temporarily maintains its integrity during all steps in an
algorithm's execution. Boundary conditions are tested to ensure that the module
operates properly at boundaries established to limit or restrict processing. All
independent paths (basis paths) through the control structure are exercised to
ensure that all statements in a module have been executed at least once. And
finally, all error handling paths are tested.
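
A minimal sketch of such a unit test in MATLAB, assuming a hypothetical helper that averages one colour channel of a frame (the helper is defined inline here for illustration and is not the project's actual code):

% Hypothetical unit under test: mean of one colour channel of a frame
meanChannel = @(frame, c) mean2(frame(:, :, c));

% Unit test: a frame with a known green intensity must yield that mean
frame = uint8(zeros(4, 4, 3));
frame(:, :, 2) = 100;                                   % constant green channel
assert(abs(meanChannel(frame, 2) - 100) < 1e-9, 'green channel mean incorrect');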

Acceptance Testing
Acceptance of the system is a key factor for the success of any system. It is a
critical phase of any project and requires significant participation by the end user.
It also ensures that the system meets the functional requirements.

The system under consideration is tested for user acceptance by keeping in constant
touch with the prospective system users during development and by making changes
whenever required. This is done with regard to the following points:
Input screen design.
Output screen design.

Integration Testing
Integration testing is a systematic technique for constructing the program
structure while at the same time conducting tests to uncover errors associated with
interfacing. The objective is to take unit tested components and build a program
structure that has been dictated by design.. All components are combined in
advance. The entire program is tested as a whole. Usually a set of errors is
encountered. Correction is difficult because isolation of causes is complicated by
the vast expanse of the entire program. Once these errors are corrected, new ones
appear and the process continues in a seemingly endless loop.

Testing Process
Waterfall development model
A common practice of software testing is that testing is performed by an
independent group of testers after the functionality is developed, before it is
shipped to the customer. This practice often results in the testing phase being used
as a project buffer to compensate for project delays, thereby compromising the
time devoted to testing.


Agile development model


In contrast, some emerging software disciplines, such as extreme
programming and the agile software development movement, adhere to a test-driven
software development model. In this process, unit tests are written first, by
the software engineers (often with pair programming in the extreme programming
methodology). Of course, these tests fail initially, as they are expected to. Then, as
code is written it passes incrementally larger portions of the test suites. The test
suites are continuously updated as new failure conditions and corner cases are
discovered, and they are integrated with any regression tests that are developed.
The ultimate goal of this test process is to achieve continuous integration where
software updates can be published to the public frequently.
This methodology increases the testing effort done by development, before
reaching any formal testing team. In some other development models, most of the
test execution occurs after the requirements have been defined and the coding
process has been completed.

Top-down and bottom-up


Bottom-up testing is an approach to integration testing where the lowest level
components (modules, procedures, and functions) are tested first, then integrated
and used to facilitate the testing of higher level components. After the integration
testing of lower level integrated modules, the next level of modules will be formed
and can be used for integration testing. This method also helps to determine the
levels of software developed and makes it easier to report testing progress in the
form of a percentage.


Validation Testing
Software validation is achieved through a series of black-box tests that
demonstrate conformity with requirements. A test plan outlines the classes of tests
to be conducted and a test procedure defines specific test cases that will be used to
demonstrate conformity with requirements. Both the plan and procedure are
designed to ensure that all functional requirements are satisfied, all behavioral
characteristics are achieved, and all performance requirements are attained.

Functional Testing
Functional testing provides systematic demonstrations that the functions tested
are available as specified by the business and technical requirements, system
documentation, and user manuals.
Functional testing is centered on the following items:
Valid Input: identified classes of valid input must be accepted.
Invalid Input: identified classes of invalid input must be rejected.
Functions: identified functions must be exercised.
Output: identified classes of application outputs must be exercised.
Systems/Procedures: interfacing systems or procedures must be invoked.


Organization and preparation of functional tests is focused on requirements,
key functions, or special test cases. In addition, systematic coverage pertaining to
identifying business process flows, data fields, predefined processes, and successive
processes must be considered for testing. Before functional testing is complete,
additional tests are identified and the effective value of current tests is determined.
Three types of tests are used in functional testing:
Performance Test
Stress Test
Structure Test

Performance Test: It determines the amount of execution time spent in various
parts of the unit, program throughput, response time, and device utilization by the
program unit.

Stress Test: It is designed to intentionally break the unit. A great deal can be
learned about the strengths and limitations of a program by examining the manner
in which a program unit breaks.

Structure Test: Structure tests are concerned with exercising the internal logic
of a program and traversing particular execution paths. A white-box test strategy
was employed to ensure that the test cases could guarantee that:
All independent paths within a module are exercised at least once.
All logical decisions are exercised on their true and false sides.
All loops are executed at their boundaries and within their operational bounds.
Internal data structures are exercised to assure their validity.
Attributes are checked for their correctness.


5.4 Testing In Particular


System testing of software or hardware is testing conducted on a complete,
integrated system to evaluate the system's compliance with its specified
requirements. System testing falls within the scope of black box testing, and as
such, should require no knowledge of the inner design of the code or logic.
As a rule, system testing takes, as its input, all of the "integrated" software
components that have successfully passed integration testing and also the software
system itself integrated with any applicable hardware system(s). The purpose of
integration testing is to detect any inconsistencies between the software units that
are integrated together (called assemblages) or between any of the assemblages and
the hardware. System testing is a more limited type of testing; it seeks to detect
defects both within the "inter-assemblages" and also within the system as a whole.

Testing the whole system


System testing is performed on the entire system in the context of a
Functional Requirement Specification(s) (FRS) and/or a System Requirement
Specification (SRS). System testing tests not only the design, but also the behavior
and even the believed expectations of the customer. It is also intended to test up to
and beyond the bounds defined in the software/hardware requirements
specification(s).

System Testing
System testing ensures that the entire integrated software system meets
requirements. It tests a configuration to ensure known and predictable results. An


example of system testing is the configuration oriented system integration test.


System testing is based on process descriptions and flows, emphasizing pre-driven
process links and integration points.

Test Case Design- Integration testing


The input is a human face video recorded for each age group.
A valid test finds the HR, HRV and RR waveforms from the recorded human face
video and checks them against ECG reports.
An invalid test is one in which the expected output cannot be found.
Test case TC1: the subject is below 15 years of age; the expected output is a value
within the equalized range.
Test case TC2: the subject is below 25 years of age; the expected output is a value
within the equalized range.
Test case TC3: the subject is above 40 years of age; the expected output is a value
within the equalized range.
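
A hedged sketch of how these test cases might be automated is given below; estimateVitals is an assumed wrapper around the pipeline of Chapter 9, and the reference heart rates and tolerance are placeholder values, not the project's recorded data.

% Integration test sketch: compare video-derived heart rate with ECG references
videoFiles = {'subject_below15.avi', 'subject_below25.avi', 'subject_above40.avi'};
ecgHR      = [82, 75, 68];            % reference heart rates from ECG (assumed)
tolerance  = 5;                       % acceptable deviation in bpm (assumed)
for k = 1:numel(videoFiles)
    hr = estimateVitals(videoFiles{k});      % assumed wrapper around Main.m
    if abs(hr - ecgHR(k)) <= tolerance
        fprintf('TC%d passed (HR = %.1f bpm)\n', k, hr);
    else
        fprintf('TC%d failed (HR = %.1f bpm, ECG = %d bpm)\n', k, hr, ecgHR(k));
    end
end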


5.5 Test Report


Product: Black box testing

Table 5.1 Test case design

Test ID   Age limit                                    Expected output    Pass/Fail
TC1       60                                           Accurate output    Pass
TC2       21                                           Accurate output    Pass
TC3       -                                            Accurate output    Pass
TC4       20, but changing the seating arrangements    Not exact output   Fail


6. EXPERIMENTAL RESULT
MATLAB is a high-performance language for technical computing. It integrates
computation, visualization, and programming in an easy-to-use environment where
problems and solutions are expressed in familiar mathematical notation.
Typical uses include:
Math and computation
Algorithm development
Modeling, simulation, and prototyping
Data analysis, exploration, and visualization
Scientific and engineering graphics
Application development, including Graphical User Interface building
MATLAB is an interactive system whose basic data element is an array that does
not require dimensioning. This allows you to solve many technical computing
problems, especially those with matrix and vector formulations, in a fraction of the
time it would take to write a program in a scalar non-interactive language such as
C or FORTRAN.

MATLAB has several advantages over other methods or languages:


Its basic data element is the matrix. A simple integer is considered a
matrix of one row and one column. Several mathematical operations that work on
arrays or matrices are built into the MATLAB environment, for example cross
products, dot products, determinants, and inverse matrices.


Vectorized operations. Adding two arrays together needs only one command,
instead of a for or while loop.
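
For example, a minimal sketch of the same element-wise addition written both ways:

a = 1:5;  b = 10:10:50;

% Loop version
c_loop = zeros(size(a));
for k = 1:numel(a)
    c_loop(k) = a(k) + b(k);
end

% Vectorized version: one command, no loop
c_vec = a + b;

isequal(c_loop, c_vec)   % returns logical 1 (true)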

The graphical output is optimized for interaction. You can plot your data
very easily, and then change colors, sizes, scales, etc., by using the interactive
graphical tools.
MATLAB's functionality can be greatly expanded by the addition of toolboxes.
These are sets of specific functions that provide more specialized functionality.
For example, Excel Link allows data to be written in a format recognized by Excel,
and the Statistics Toolbox allows more specialized statistical manipulation of data
(ANOVA, basic fits, etc.).

MATLAB System:
The MATLAB system consists of five main parts:
Development Environment. This is the set of tools and facilities that help
you use MATLAB functions and files. Many of these tools are graphical
user interfaces. It includes the MATLAB desktop and Command Window, a
command history, and browsers for viewing help, the workspace, files, and
the search path.
The MATLAB Mathematical Function Library. This is a vast collection
of computational algorithms ranging from elementary functions like sum,
sine, cosine, and complex arithmetic, to more sophisticated functions like
matrix inverse, matrix eigenvalues, Bessel functions, and fast Fourier
transforms.


The MATLAB Language. This is a high-level matrix/array language with
control flow statements, functions, data structures, input/output, and object-oriented
programming features. It allows both "programming in the small", to rapidly
create quick throw-away programs, and "programming in the large", to create
complete, large and complex application programs.

6.1 EVALUATION OF FACE REFLECTANCE


The frame rate was set to 30 fps (frames per second) and a total of 900
frames were selected for each heart rate evaluation. The testing data set included
1 video clip recorded from each participant. The face reflectance has already been
measured using the Hilbert-Huang transform, but in that approach only the heart
rate is measured.

Fig 6.1 Bland-Altman plots compared with the Hilbert-Huang transform framework
For a fair comparison with those results, the heart rate detection range is set
between 50 and 90 bpm. Our proposed framework provides a more robust
evaluation with a smaller degree of deviation. For the performance evaluation, the
precision for different k settings is measured; the highest precision (about 84%) is
achieved when k is set to 100.
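
A hedged sketch of how such a Bland-Altman comparison can be produced in MATLAB is shown below; the paired heart-rate vectors are placeholder values, not the study data.

% Bland-Altman plot sketch for paired heart-rate estimates (placeholder data)
hrVideo = [62 71 80 68 75 59 84];            % video-based estimates (assumed)
hrECG   = [60 73 79 70 74 61 86];            % ECG reference values (assumed)

avgHR  = (hrVideo + hrECG) / 2;              % x-axis: mean of the two methods
diffHR = hrVideo - hrECG;                    % y-axis: difference between methods
mu     = mean(diffHR);
loa    = mu + 1.96 * std(diffHR) * [-1 1];   % 95% limits of agreement

figure; plot(avgHR, diffHR, 'ko'); hold on;
plot(xlim, [mu mu], 'b-');                   % mean difference (bias)
plot(xlim, [loa(1) loa(1)], 'r--');
plot(xlim, [loa(2) loa(2)], 'r--'); hold off;
xlabel('Mean of video and ECG HR (bpm)'); ylabel('Video - ECG (bpm)');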


Table 6.1 Comparison of persons of different ages and their measured heart rate values

Age group       Input capturing image   ROI separation   Output range (HR, HRV, RR)
>50 but <70     (image)                 (image)          HR: 65, HRV: 13, RR: 13
>18 but <25     (image)                 (image)          HR: 78, HRV: 15, RR: 17
>10 but <16     (image)                 (image)          HR: 74, HRV: 14, RR: 18
>5 but <10      (image)                 (image)          HR: 89, HRV: 20, RR: 19

Table 6.1 compares people of different ages and gives the values of HR, HRV and
RR determined for each. For adults, the normal resting heart rate ranges from
60-100 bpm. During sleep a slow heartbeat with rates around 40-50 bpm is common
and is considered normal. The typical respiratory rate for a healthy adult at rest is
12-20 breaths per minute.
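
A small sketch that checks a measured value against the resting range quoted above (thresholds taken from the text, the sample value from Table 6.1):

hr = 74;                              % measured heart rate from Table 6.1
if hr >= 60 && hr <= 100
    fprintf('HR of %d bpm is within the normal resting range (60-100 bpm)\n', hr);
else
    fprintf('HR of %d bpm is outside the normal resting range\n', hr);
end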

7. SCREEN SHOTS
OUTPUT:

Fig 7.1 output screen


Fig 7.2 Separating the three signals (red, blue, green)

Fig 7.3 Rectifying the noise in the three signals


Fig 7.4 Determining the zero peaks in the three signals and selecting the green signal

Fig 7.5 Displaying the heart rate


8. CONCLUSION AND FUTURE ENHANCEMENT


8.1 CONCLUSION
In this project, we use a webcam to record a video of the human face
for one minute and convert it into .avi format. This procedure has already been
carried out with the Hilbert-Huang transform method, which detects only the heart
rate from the separated green signal, because in the green signal the peak value is
very close to zero. Although that method eliminates the noise, its results are not as
accurate.
We therefore apply the JadeR algorithm to the same face reflectance
procedure. With this technique we find the ranges for heart rate, heart rate
variability and respiratory rate, which are more or less the same as the ECG results.
To achieve a robust evaluation, ensemble empirical mode decomposition together
with the JADE algorithm is used to acquire the primary heart rate signal while
reducing the effect of ambient light changes. Our proposed approach is found to
outperform the current state of the art, providing greater measurement accuracy
with smaller variance, and is shown to be feasible in real-world environments.

8.2 FUTURE ENHANCEMENT


The program can work in a hospital by recording the patient's face, although it
currently takes some time to produce the result.


9. CODING
Main.m
avi=mmreader('facevideo.avi');
totalframe=get(avi,'NumberOfFrames');
totaltime=get(avi,'Duration');
framerate=get(avi,'FrameRate');
timestamp=0:0.0704:totaltime;
r_sig=zeros(1,totalframe);
g_sig=zeros(1,totalframe);
b_sig=zeros(1,totalframe);
for i=1:totalframe %% 828 frames are here
frame=read(avi,i);
crop=imcrop(frame,[205 130 105 110]); % ROI rectangle: [xmin ymin width height]
red=crop(:,:,1);
green=crop(:,:,2);
blue=crop(:,:,3);
mr=mean2(red);


mg=mean2(green);
mb=mean2(blue);
r_sig(i)=mr;
g_sig(i)=mg;
b_sig(i)=mb;
end
figure
subplot(3,1,1)
plot(r_sig,'r'),grid on
subplot(3,1,2)
plot(g_sig,'g'),grid on
subplot(3,1,3)
plot(b_sig,'b'),grid on
sr=std2(r_sig);
sg=std2(g_sig);
sb=std2(b_sig);
meanr=mean2(r_sig);
meang=mean2(g_sig);
meanb=mean2(b_sig);
detr_r=detrend(r_sig)./sr;
detr_g=detrend(g_sig)./sg;
detr_b=detrend(b_sig)./sb;
figure;
subplot(3,1,1)
plot(detr_r,'r'),grid on
subplot(3,1,2)


plot(detr_g,'g'),grid on
subplot(3,1,3)
plot(detr_b,'b'),grid on
comb_sig=[detr_r;detr_g;detr_b];
B=JadeR(comb_sig);
source_sig=B*comb_sig;
gsource=source_sig(2,:);
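% smooth the selected source component (row 2 of the separated signals) with a 5-point moving average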
avg_filt=ones(1,5)/5;
smoothed_sig = convn(gsource,avg_filt,'same');
figure;
subplot(3,1,1);
plot(timestamp,smoothed_sig,'g');
grid on;
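% design a 128th-order Hamming-window FIR band-pass filter (0.4-4 Hz) to isolate the cardiac band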
Fs = framerate;
N = 128;
Fc1 = 0.4;
Fc2 = 4;
flag = 'scale';
win = hamming(N+1);
b = fir1(N, [Fc1 Fc2]/(Fs/2), 'bandpass', win, flag);
bandpass=convn(smoothed_sig,b,'same');
subplot(3,1,2)
plot(timestamp,bandpass),grid on;
xx=0:0.0704:58.2636;
sampledata=0:1/256:58.2636;


interpolate=spline(xx,bandpass,sampledata);
subplot(3,1,3)
plot(interpolate),grid on;
[pks,loc]=findpeaks(interpolate,'minpeakdistance',100);
hold on
plot(loc,pks,'*r');
hold off
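% peak-to-peak spacing of the 256 Hz interpolated signal gives the inter-beat intervals (IBI) in seconds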
temp1=[0 loc];
temp2=[loc 0];
temp=temp2-temp1;
ibi=temp(1,1:size(loc,2))/256;
timeibi=loc/256;
ibisignal=detrend(ibi);
figure,subplot(3,1,1)
plot(timeibi,ibisignal,'--*b'),grid on
[f,Pxx,prob] = lomb(timeibi,ibisignal,4,1);
[psdpeak,psdloc]=findpeaks(Pxx);
[peakvalue,ind]=max(psdpeak);
fpeak=f(psdloc(ind));
subplot(3,1,2)
plot(f,Pxx,'b'),grid on;
hold on
plot(fpeak,peakvalue,'*r')
hold off
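% respiratory rate from the dominant Lomb-Scargle peak; heart rate from the mean inter-beat interval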
resp_rate=60*fpeak
heart_rate=60/mean(ibi)


lomb.m
function [f,P,prob] = lomb(t,h,ofac,hifac)
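% Lomb-Scargle periodogram: t = sample times, h = signal values, ofac = oversampling factor,
% hifac = high-frequency factor; returns frequencies f, normalized power P and false-alarm probability prob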
h=h';t=t';
N = length(h);
T = max(t) - min(t);
mu = mean(h);
s2 = var(h);
f = (1/(T*ofac):1/(T*ofac):hifac*N/(2*T)).';
w = 2*pi*f;
tau = atan2(sum(sin(2*w*t.'),2),sum(cos(2*w*t.'),2))./(2*w);
cterm = cos(w*t.' - repmat(w.*tau,1,length(t)));
sterm = sin(w*t.' - repmat(w.*tau,1,length(t)));
P = (sum(cterm*diag(h-mu),2).^2./sum(cterm.^2,2) + ...
sum(sterm*diag(h-mu),2).^2./sum(sterm.^2,2))/(2*s2);
M=2*length(f)/ofac;
prob = M*exp(-P);
inds = prob > 0.01;
prob(inds) = 1-(1-exp(-P(inds))).^M;
Jader.m
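% JadeR: blind source separation by Joint Approximate Diagonalization of Eigen-matrices
% (JADE, after Cardoso); X is the n-by-T observation matrix, m the number of sources,
% and B the estimated separating matrix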
function B = JadeR(X,m)
verbose = 0;
[n,T] = size(X);
if nargin==1, m=n ; end;
if m>n ,
    fprintf('jade -> Do not ask more sources than sensors here!!!\n'),
    return, end
if verbose, fprintf('jade -> Looking for %d sources\n',m); end ;
if verbose, fprintf('jade -> Removing the mean value\n'); end
X = X - mean(X')' * ones(1,T);
if verbose, fprintf('jade -> Whitening the data\n'); end
[U,D] = eig((X*X')/T);
[puiss,k] = sort(diag(D));
rangeW = n-m+1:n;
scales = sqrt(puiss(rangeW));
W = diag(1./scales) * U(1:n,k(rangeW))';
iW = U(1:n,k(rangeW)) * diag(scales);
X = W*X;
if verbose, fprintf('jade -> Estimating cumulant matrices\n'); end
dimsymm = (m*(m+1))/2;
nbcm = dimsymm ;
CM = zeros(m,m*nbcm);
R = eye(m);
Qij = zeros(m);
Xim = zeros(1,m);
Xjm = zeros(1,m);
scale = ones(m,1)/T ;
Range = 1:m ;
for im = 1:m
    Xim = X(im,:) ;
    Qij = ((scale* (Xim.*Xim)) .* X ) * X' - R - 2 * R(:,im)*R(:,im)' ;
    CM(:,Range) = Qij ;
    Range = Range + m ;
    for jm = 1:im-1
        Xjm = X(jm,:) ;
        Qij = ((scale * (Xim.*Xjm) ) .*X ) * X' - R(:,im)*R(:,jm)' - R(:,jm)*R(:,im)' ;
        CM(:,Range) = sqrt(2)*Qij ;
        Range = Range + m ;
    end ;
end;
%%
if 1,
    if verbose, fprintf('jade -> Initialization of the diagonalization\n'); end
    [V,D] = eig(CM(:,1:m));
    for u=1:m:m*nbcm,
        CM(:,u:u+m-1) = CM(:,u:u+m-1)*V ;
    end;
    CM = V'*CM;
else,
    V = eye(m) ;
end;
seuil = 1/sqrt(T)/100;
encore = 1;
sweep = 0;
updates = 0;
g = zeros(2,nbcm);
gg = zeros(2,2);
G = zeros(2,2);
c = 0;
s = 0;
ton = 0;
toff = 0;
theta = 0 ;
%% Joint diagonalization
if verbose, fprintf('jade -> Contrast optimization by joint diagonalization\n'); end
while encore, encore=0;
    if verbose, fprintf('jade -> Sweep #%d\n',sweep); end
    sweep=sweep+1;
    for p=1:m-1,
        for q=p+1:m,
            Ip = p:m:m*nbcm ;
            Iq = q:m:m*nbcm ;
            g = [ CM(p,Ip)-CM(q,Iq) ; CM(p,Iq)+CM(q,Ip) ];
            gg = g*g';
            ton = gg(1,1)-gg(2,2);
            toff = gg(1,2)+gg(2,1);
            theta = 0.5*atan2( toff , ton+sqrt(ton*ton+toff*toff) );
            if abs(theta) > seuil,
                encore = 1 ;
                updates = updates + 1;
                c = cos(theta);
                s = sin(theta);
                G = [ c -s ; s c ] ;
                pair = [p;q] ;
                V(:,pair) = V(:,pair)*G ;
                CM(pair,:) = G' * CM(pair,:) ;
                CM(:,[Ip Iq]) = [ c*CM(:,Ip)+s*CM(:,Iq) -s*CM(:,Ip)+c*CM(:,Iq) ] ;
            end
        end
    end
end
if verbose, fprintf('jade -> Total of %d Givens rotations\n',updates); end
B = V'*W ;
if verbose, fprintf('jade -> Sorting the components\n',updates); end
A = iW*V ;
[vars,keys] = sort(sum(A.*A)) ;
B = B(keys,:);
B = B(m:-1:1,:) ;
if verbose, fprintf('jade -> Fixing the signs\n',updates); end
b = B(:,1) ;
signs = sign(sign(b)+0.1) ;
B = diag(signs)*B ;
return ;
bphamming.m
function Hd = bphamming
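% Returns a 128th-order Hamming-window FIR band-pass filter object (0.7-4 Hz) designed at sampling rate Fs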
Fs = 14.2113;
N = 128;
Fc1 = 0.7;
Fc2 = 4;
flag = 'scale';
win = hamming(N+1);
b = fir1(N, [Fc1 Fc2]/(Fs/2), 'bandpass', win, flag);
Hd = dfilt.dffir(b);


10. REFERENCES
[1] Decheng Yang and Weiting Chen, "An illumination insensitive framework using robust illumination normalization and Spectral Regression Kernel Discriminant Analysis for face recognition," IEEE Transactions, 2015.
[2] Fida Al-Obaisi, Ja'far Alqatawna, Hossam Faris, Ali Rodan and Omar Al-Kadi, "Pattern Recognition of Thermal Images for Monitoring of Breathing Function," International Journal of Control and Automation, vol. 8, no. 6, 2015.
[3] Daniel McDuff, Sarah Gontarek, and Rosalind W. Picard, "Remote Detection of Photoplethysmographic Systolic and Diastolic Peaks Using a Digital Camera," IEEE Transactions on Biomedical Engineering, vol. 61, no. 12, 2014.
[4] Ali Jalali, et al., "Prediction of Periventricular Leukomalacia Occurrence in Neonates After Heart Surgery," IEEE Journal of Biomedical and Health Informatics, vol. 18, no. 4, 2014.
[5] Sverre Brovoll, et al., "Time-lapse imaging of human heartbeats using UWB radar," IEEE Transactions, 2013.
[6] Paolo Melillo, et al., "Heart Rate Variability and renal organ damage in hypertensive patients," International Conference of the IEEE EMBS, 2012.
[7] Magdalena Lewandowska, et al., "Measuring Pulse Rate with a Webcam - a Non-contact Method for Evaluating Cardiac Activity," IEEE Transactions, 2011.
[8] H. Stefanescu, et al., "Telescreening System for the Early Detection of Hepatocellular Carcinoma," IEEE Transactions, 2006.
[9] Nanfei Sun, et al., "Imaging the Cardiovascular Pulse," IEEE Transactions, 2005.
[10] Marc Garbey, et al., "Contact-Free Measurement of Cardiac Pulse Based on the Analysis of Thermal Imagery," IEEE Transactions on Biomedical Engineering, vol. 54, no. 8, 2007.
[11] J. K. Chiang, M. Koo, T. B. Kuo, and C. H. Fu, "Association between cardiovascular autonomic functions and time to death in patients with terminal hepatocellular carcinoma," J. Pain Symptom Manage., vol. 39, no. 4, 2010.
[12] M. Z. Poh, D. J. McDuff, and R. W. Picard, "Non-contact, automated cardiac pulse measurements using video imaging and blind source separation," Opt. Exp., vol. 18, no. 10, 2010.
[13] C. Takano and Y. Ohta, "Heart rate measurement based on a time-lapse image," Med. Eng. Phys., vol. 29, Oct. 2007.
[14] W. Verkruysse, L. O. Svaasand, and J. S. Nelson, "Remote plethysmographic imaging using ambient light," Opt. Exp., vol. 16, no. 26, 2008.
[15] W. G. Zijlstra, A. Buursma, and W. P. Meeuwsen-van der Roest, "Absorption spectra of human fetal and adult oxyhemoglobin, de-oxyhemoglobin, carboxyhemoglobin, and methemoglobin," vol. 37, no. 9, 1991.
[16] P. N. Belhumeur, J. P. Hespanha, and D. Kriegman, "Eigenfaces vs. Fisherfaces: Recognition using class specific linear projection," IEEE Trans. Pattern Anal. Mach. Intell., vol. 19, no. 7, 1997.
[17] P. Belhumeur and D. Kriegman, "What is the set of images of an object under all possible illumination conditions?" Int. J. Comput. Vis., vol. 28, no. 3, 1998.
[18] A. S. Georghiades, D. Kriegman, and P. N. Belhumeur, "From few to many: Generative models for recognition under variable pose and illumination," in Proc. IEEE PAMI, 2000.
[19] T. Riklin-Raviv and A. Shashua, "The quotient image: Class-based re-rendering and recognition with varying illumination conditions," IEEE Trans. Pattern Anal. Mach. Intell., 2001.
[20] A. S. Georghiades, D. Kriegman, and P. N. Belhumeur, "Illumination cones for recognition under variable lighting: Faces," in Proc. IEEE Conf. CVPR, 1998.
[21] V. Blanz, S. Romdhani, and T. Vetter, "Face identification across different poses and illuminations with a 3D morphable model," in Proc. IEEE Conf. Autom. Face Gesture Recognit., 2002.
[22] R. Gross and V. Brajovic, "An image preprocessing algorithm for illumination invariant face recognition," in Proc. 4th Int. Conf. Audio- and Video-Based Biometric Person Authentication (AVBPA), 2003.
[23] Z. Wu and N. E. Huang, "Ensemble empirical mode decomposition: A noise-assisted data analysis method," Centre Ocean-Land-Atmos. Stud., Tech. Rep. Ser., vol. 193, no. 173, 2004.
