INTRODUCTION
Heart Rate
Heart rate is the speed of the heartbeat measured by the number of
contractions of the heart per minute (bpm). The heart rate can vary according to the
body's physical needs, including the need to absorb oxygen and excrete carbon
dioxide. It is usually equal or close to the pulse measured at any peripheral point.
Activities that can provoke change include physical exercise, sleep, anxiety, stress, illness, ingesting food, drugs, and changes in body temperature. Even emotions can have an impact on heart rate; for example, getting excited or scared can increase the heart rate. Factors affecting heart rate include:
Factor                     Effect
Cardioaccelerator nerves   Release of norepinephrine
Proprioceptors             Increased rates of firing during exercise
Chemoreceptors             Decreased levels of O2; increased levels of H+, CO2, and lactic acid
Baroreceptors              Decreased rates of firing, indicating falling blood volume/pressure
Limbic system              Anticipation of physical exercise or strong emotions
Catecholamines             Increased epinephrine and norepinephrine
Thyroid hormones           Variation in T3 and T4
Calcium                    Variation in Ca2+
Potassium                  Variation in K+
Sodium                     Variation in Na+
Body temperature           Variation in core temperature
Measurement
1. Manual measurement
Heart rate is measured by finding the pulse of the heart. This pulse rate can be found at any point on the body where the artery's pulsation is transmitted to the surface by pressing it with the index and middle fingers; often it is compressed against an underlying structure like bone. A good area is the neck, under the corner of the jaw.
The radial artery is the easiest to use to check the heart rate. However, in
emergency situations the most reliable arteries to measure heart rate are carotid
arteries.
Possible points for measuring the heart rate are:
1. The ventral aspect of the wrist on the side of the thumb (radial artery).
2. The ulnar artery.
3. The neck (carotid artery).
4. The inside of the elbow, or under the biceps muscle (brachial artery).
5. The groin (femoral artery).
6. Behind the medial malleolus on the feet (posterior tibial artery).
7. The chest (apex of the heart), which can be felt with one's hand or fingers. It is also possible to auscultate the heart using a stethoscope.
8. The temple (superficial temporal artery).
9. The lateral edge of the mandible (facial artery).
10. The side of the head near the ear (posterior auricular artery).
2. Electronic measurement
In obstetrics, heart rate can be measured by ultrasonography; however, a more precise method of determining heart rate involves the use of an electrocardiograph, or ECG. An ECG generates a pattern based on the electrical activity of the heart, which closely follows heart function. Continuous ECG monitoring is routinely done in many clinical settings, especially in critical care medicine. On the ECG, instantaneous heart rate is calculated from the R wave-to-R wave (RR) interval, scaled to derive heart rate in beats/min.
Multiple methods exist:
HR = 1,500/(RR interval in millimeters)
HR = 60/(RR interval in seconds)
HR = 300/number of "large" squares between successive R waves.
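At the standard ECG paper speed of 25 mm/s, one large square is 0.2 s and 1 mm is 0.04 s, so the three formulas agree with each other. A minimal sketch in Python (the function names and the paper-speed assumption are mine):

```python
def hr_from_rr_seconds(rr_s):
    """HR = 60 / (RR interval in seconds)."""
    return 60.0 / rr_s

def hr_from_rr_mm(rr_mm):
    """HR = 1500 / (RR interval in mm), assuming 25 mm/s paper speed."""
    return 1500.0 / rr_mm

def hr_from_large_squares(n_squares):
    """HR = 300 / (number of large squares between successive R waves)."""
    return 300.0 / n_squares

# An RR interval of 0.8 s is 20 mm (4 large squares) at 25 mm/s,
# so all three formulas give the same heart rate:
print(hr_from_rr_seconds(0.8),
      hr_from_rr_mm(20),
      hr_from_large_squares(4))   # each gives 75.0 bpm
```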
Heart rate monitors used during sport consist of a chest strap with electrodes. The signal is transmitted to a wrist receiver for display. Alternative methods of measurement include pulse oximetry and seismocardiography.
Methods used to detect the beats include: ECG, blood pressure, ballistocardiograms, and the pulse wave signal derived from a photoplethysmograph (PPG).
HRV analysis
The most widely used methods can be grouped under time-domain and
frequency-domain. Other methods have been proposed, such as non-linear
methods.
1. Time-domain methods
These are based on the beat-to-beat or NN intervals, which are analysed to give variables such as SDNN (the standard deviation of NN intervals), RMSSD (root mean square of successive differences), SDSD (standard deviation of successive differences), and EBC (estimated breath cycle).
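These statistics can be computed directly from an NN-interval series. A pure-Python sketch (the function name and sample data are mine; the population standard deviation is used here, while some tools use the sample formula):

```python
import math
import statistics

def hrv_time_domain(nn_ms):
    """SDNN, RMSSD, and SDSD from a list of NN intervals in milliseconds."""
    diffs = [b - a for a, b in zip(nn_ms, nn_ms[1:])]
    sdnn = statistics.pstdev(nn_ms)                            # std dev of NN intervals
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))  # RMS of successive differences
    sdsd = statistics.pstdev(diffs)                            # std dev of successive differences
    return sdnn, rmssd, sdsd

sdnn, rmssd, sdsd = hrv_time_domain([800, 810, 790, 820])
```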
2. Frequency-domain methods
Frequency domain methods assign bands of frequency and then count the
number of NN intervals that match each band. The bands are typically high
frequency (HF) from 0.15 to 0.4 Hz, low frequency (LF) from 0.04 to 0.15 Hz, and
the very low frequency (VLF) from 0.0033 to 0.04 Hz.
Myocardial infarction
Diabetic neuropathy
In neuropathy associated with diabetes mellitus characterized by alteration in
small nerve fibers, a reduction in time domain parameters of HRV seems not only
to carry negative prognostic value but also to precede the clinical expression of
autonomic neuropathy.
Myocardial dysfunction
A reduced HRV has been observed consistently in patients with cardiac
failure. In this condition characterized by signs of sympathetic activation such as
faster heart rates and high levels of circulating catecholamines, a relation between
changes in HRV and the extent of left ventricular dysfunction was reported. In
particular, in most patients with a very advanced phase of the disease and with a
drastic reduction in HRV, an LF component could not be detected despite the
clinical signs of sympathetic activation. This reflects that, as stated above, the LF
may not accurately reflect cardiac sympathetic tone.
Liver cirrhosis
Liver cirrhosis is associated with decreased HRV. Decreased HRV in patients
with cirrhosis has a prognostic value and predicts mortality. Loss of HRV is also
and ADC conversion. Resistors are available only in certain standard values. Thus, if the required resistance does not match an available value, it is approximated to a nearby available value, introducing very small errors which are currently resolved manually. Further, the analog-to-digital conversion (using an ADC with 8 input pins and 3 selection pins) involves only 3 inputs from sensors, which leaves nearly 4 cycles unused, leading to wastage of bandwidth.
Capturing Video
while their video was recorded for one minute. All videos were recorded in color (24-bit RGB, three channels at 8 bits/channel) at 15 frames per second (fps) with a pixel resolution of 640 x 480 and saved in AVI format on the laptop.
2. LITERATURE SURVEY
3. SYSTEM SPECIFICATION
The System Requirements Specification (SRS) document describes all data, functional and behavioral requirements of the software under production or development. It is produced at the culmination of the analysis task. The function and performance allocated to software as part of system engineering are refined by establishing a complete information description, a functional representation of system behavior, an indication of performance requirements and design constraints, and appropriate validation criteria.
HARDWARE REQUIREMENT SPECIFICATION
Processor
Main Memory (RAM)
Cache Memory
Monitor
Keyboard
Mouse
Hard Disk

SOFTWARE REQUIREMENT SPECIFICATION
Tool             : MATLAB
Operating System : Windows Vista / Windows 7 / Windows 8
interpreting the labels of the boxes and lines. One must document the extent to which a component's behavior influences how another component must be written to interact with it. Structures are important because they boil away details about the software that are independent of the concern reflected by the abstraction. Each structure provides a useful perspective of the system. Sometimes the term view is used instead of structure.
Software architectures are represented as graphs where nodes represent components:
Procedures
Modules
Processes
Tools
Databases
and the edges represent connectors:
Procedure calls
Event broadcasts
Database queries
Pipes
The design process starts by decomposing the software into components.
for describing business processes without focusing on the details of computer systems. The graphical depiction identifies each source of data and how it interacts with other data sources to reach a common output. DFDs are an attractive technique because they show what users do rather than what computers do.
Components of DFD
DFDs are constructed using four major components
1. External entities represent the sources of data input to the system. They are also the destinations of system data. External entities can be regarded as data stored outside the system. They are represented by squares.
2. Data stores represent stores of data within the system, for example, computer files or databases. An open-ended box represents a data store, which implies data at rest or a temporary repository of data.
3. Processes represent activities in which data is manipulated by being
stored or retrieved or transferred in some way. In other words, we can say that
process transforms the input data into output data. Circles stand for a process that
converts data into information.
4. Data flow represents the movement of data from one component to the other. An arrow represents the path through which information flows. Data flows are generally shown as one-way only. Data flows between external entities are shown as dotted lines (---->).
Table 4.1 shows various symbols used for drawing DFD diagrams. A Data
Flow Diagram(DFD) is a graphical representation of the flow of data through an
information system, modelling its process aspects. A DFD is often used as a
preliminary step to create an overview of the system, which can later be elaborated.
Name             Description
External Entity  Source or destination of data lying outside the system boundaries
Data Store       Repository for data at rest within the system
Process          Transforms an incoming data flow into an outgoing flow
Data Flow        Shows the flow of data in the system
Physical DFD- This type of DFD shows how the data flow is actually
implemented in the system. It is more specific and close to the implementation.
Logical DFDs offer the following advantages:
Better communication with users
More stable systems
Better understanding of the business by analysts
Flexibility and maintenance
Elimination of redundancies and better creation of the physical model
Physical DFDs are useful for:
Clarifying which processes are performed manually and which are automated
Describing processes in more detail than logical DFDs
Sequencing processes that have to be done in a particular order
Identifying temporary data stores
Specifying actual names of files and printouts
Adding controls to ensure the processes are done properly
Levels of DFD
Level 0-Highest abstraction level DFD is known as Level 0 DFD, which depicts
the entire information system as one diagram concealing all the underlying details.
Level 0 DFDs are known as context level DFDs.
Level 1-The Level 0 DFD is broken down into a more specific Level 1 DFD. The Level 1 DFD depicts basic modules in the system and the flow of data among the various modules. It also mentions basic processes and sources of information. Higher-level DFDs can be transformed into more specific lower-level DFDs with a deeper level of understanding until the desired level of specification is achieved.
Level 0 DFD
Figure 4.2.1 depicts the image given as input to the system. A level 0 DFD, also called a fundamental system model or a context model, represents the entire software element as a single bubble with input and output data indicated by incoming and outgoing arrows, respectively. It shows how the system is divided into sub-systems (processes), each of which deals with one or more of the data flows to or from an external agent, and which together provide all of the functionality of the system as a whole.
Level 1 DFD
NAME                DESCRIPTION
Action              A step or activity performed in the flow
Decision            Conditional flow of control
Split or Merge Bar  Splits a single flow of control into concurrent flows, or merges concurrent flows into one
Initial State       The state at which the flow of control begins
Final State         The state at which the flow of control ends
Block diagram: face reflectance -> Red/Green/Blue channels -> raw Red/Green/Blue signals -> transform of the signals -> separated sources 1/2/3.
4.4 Implementation
Implementation is the stage of the project when the theoretical design is turned into a working system. Thus it can be considered the most critical stage in achieving a successful new system and in giving the user confidence that the new system will work and be effective. The implementation stage involves careful planning, investigation of the existing system and its constraints on implementation, design of methods to achieve changeover, and evaluation of changeover methods.
Each program is tested individually at the time of development using sample data, and it is verified that the programs link together in the way specified in the program specification; the computer system and its environment are tested to the satisfaction of the user. The system is then ready to be implemented. A simple operating procedure is included so that the user can understand the different functions clearly and quickly.
Initially the desired tool is selected; then the system is designed to produce the required output. The final stage is to document the entire system, covering its components and the operating procedures of the system.
In this project, the human face video is first recorded and the frames are separated using an ROI. The ROI was then separated into the three RGB channels and spatially averaged over all pixels in the ROI to yield a red, blue, and green measurement point for each frame and form the raw signals. Each trace was 1 min long. The three signals are then compared to determine which is the best signal for calculating the heart rate variation; the green signal is usually the best one for detecting the pulse-induced signal variation. To remove the environmental noise, ensemble empirical mode decomposition is used, and then the JADE algorithm is applied to find the HR, HRV, and RR rates.
Modules used
Capturing module
BVP recovery module
Quantification of physiological parameter module (HR, HRV, RR)
video recording, we selected the center 60% width and full height of the box as the region of interest (ROI) for our subsequent calculations.
The ROI was then separated into the three RGB channels and
spatially averaged over all pixels in the ROI to yield a red, blue, and green
measurement point for each frame and form the raw signals y1 (t), y2 (t), and y3
(t), respectively. Each trace was 1 min long. The raw traces were detrended using a procedure based on a smoothness priors approach with the smoothing parameter λ = 10 (cutoff frequency of 0.89 Hz) and normalized. Motion-artifact removal was then performed by separating the fluctuations caused predominantly by the BVP from the observed raw signals.
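The normalization step is commonly a zero-mean, unit-variance transform of each raw trace; a sketch under that assumption (the function name and sample data are mine):

```python
import math

def normalize(trace):
    """Zero-mean, unit-variance normalization of one raw RGB trace."""
    mu = sum(trace) / len(trace)
    sigma = math.sqrt(sum((v - mu) ** 2 for v in trace) / len(trace))
    return [(v - mu) / sigma for v in trace]

x = normalize([101.2, 99.8, 100.5, 100.1, 99.4])
# the result has mean ~0 and standard deviation ~1
```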
(i) HR DETECTION
The HR detection can be performed by selecting the green signal among the three signals. To avoid inclusion of artifacts, such as ectopic beats or motion, the IBIs were filtered using a non-causal variable-threshold algorithm with a tolerance of 30%. HR was calculated from the mean of the IBI time series as 60/IBI.
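The filtering rule can be sketched as rejecting any IBI that differs from the last accepted one by more than the tolerance; this is my own simplification of the variable-threshold idea, not the exact published algorithm:

```python
def filter_ibis(ibis_s, tolerance=0.30):
    """Drop IBIs differing from the last accepted IBI by more than the tolerance."""
    accepted = [ibis_s[0]]
    for ibi in ibis_s[1:]:
        if abs(ibi - accepted[-1]) / accepted[-1] <= tolerance:
            accepted.append(ibi)
    return accepted

ibis = [0.80, 0.82, 2.00, 0.79]        # 2.00 s stands in for a motion artifact
clean = filter_ibis(ibis)              # the artifact is rejected
hr = 60.0 / (sum(clean) / len(clean))  # HR from the mean IBI, ~74.7 bpm
```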
(ii) HRV DETECTION
Analysis of HRV was performed by power spectral density (PSD) estimation using the Lomb periodogram. The low-frequency (LF) and high-frequency (HF) powers were measured as the area under the PSD curve corresponding to 0.04-0.15 and 0.15-0.4 Hz, respectively, and quantified in normalized units (n.u.) to minimize the effect of changes in total power on the values. The LF component is modulated by baroreflex activity and includes both sympathetic and parasympathetic influences. The HF component reflects parasympathetic influence on the heart through efferent vagal activity and is connected to respiratory sinus arrhythmia (RSA), a cardiorespiratory phenomenon characterized by IBI fluctuations that are in phase with inhalation and exhalation. We also calculated the LF/HF ratio, considered to mirror sympatho-vagal balance or to reflect sympathetic modulations.
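The band-power computation can be sketched with a naive pure-Python Lomb periodogram (a library implementation such as SciPy's would normally be used; all names and the simulated IBI data are mine):

```python
import math

def lomb_power(t, x, freq):
    """Lomb periodogram power of an unevenly sampled series x(t) at one frequency."""
    mean = sum(x) / len(x)
    xc = [v - mean for v in x]
    w = 2.0 * math.pi * freq
    tau = math.atan2(sum(math.sin(2 * w * ti) for ti in t),
                     sum(math.cos(2 * w * ti) for ti in t)) / (2 * w)
    c = [math.cos(w * (ti - tau)) for ti in t]
    s = [math.sin(w * (ti - tau)) for ti in t]
    return 0.5 * (sum(xi * ci for xi, ci in zip(xc, c)) ** 2 / sum(ci * ci for ci in c)
                  + sum(xi * si for xi, si in zip(xc, s)) ** 2 / sum(si * si for si in s))

def band_power(t, x, lo, hi, n=40):
    """Approximate area under the periodogram between lo and hi Hz (midpoint rule)."""
    df = (hi - lo) / n
    return sum(lomb_power(t, x, lo + (k + 0.5) * df) for k in range(n)) * df

# Simulated IBI series with respiratory sinus arrhythmia at 0.25 Hz (inside the HF band)
t, ibi, now = [], [], 0.0
for _ in range(120):
    v = 0.80 + 0.05 * math.sin(2 * math.pi * 0.25 * now)
    now += v
    t.append(now)
    ibi.append(v)

lf = band_power(t, ibi, 0.04, 0.15)
hf = band_power(t, ibi, 0.15, 0.40)
lf_nu, hf_nu = lf / (lf + hf), hf / (lf + hf)   # normalized units
```

With a 0.25 Hz modulation, the HF band power dominates, as RSA would predict.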
(iii) RR DETECTION
Since the HF component is connected with breathing, the RR can be
estimated from the HRV power spectrum. When the frequency of respiration
ALGORITHM :
Step 1 : Start.
Step 2 : Convert the given video into .avi format.
Step 3 : Calculate totalframe, totaltime, and framerate for the given format.
Step 4 : Separate the three different frames with 3 signals (red, blue, green).
Step 5 : Crop the image to a pixel resolution which covers only the face. And also calculate the
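Steps 3 and 4 above can be sketched as follows; a real implementation would read frames from the .avi with a video library, so a tiny synthetic frame stands in here (all names are illustrative):

```python
# Step 3: frame-rate bookkeeping for the recording setup described earlier
framerate = 15                        # fps
totaltime = 60                        # seconds (each trace is 1 min long)
totalframe = framerate * totaltime    # 900 frames

# Step 4: spatially average one RGB frame into one (r, g, b) measurement point
def channel_means(frame):
    """Mean of each color channel over all ROI pixels in one frame."""
    n = len(frame)
    r = sum(p[0] for p in frame) / n
    g = sum(p[1] for p in frame) / n
    b = sum(p[2] for p in frame) / n
    return r, g, b

frame = [(120, 80, 60), (122, 82, 58), (118, 78, 62)]  # three ROI pixels
print(channel_means(frame))    # (120.0, 80.0, 60.0)
```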
IMPLEMENTATION PROCEDURE:
BLOCK DIAGRAM OF THE ENTIRE SYSTEM:
Easy Pulse sensor -> ADC -> PIC16F877A -> MAX232 -> MATLAB, with a 16 x 2 LCD display and a power supply.
The PIC16F877A is one of the most popular microcontrollers of the PIC Micro family, used by everyone from beginners to professionals. It is very easy to use, and its FLASH memory technology allows it to be written and erased up to a thousand times. The main advantages of this RISC microcontroller over other 8-bit microcontrollers are its speed and its code compression. The PIC16F877A has 40 pins with 33 I/O paths.
PIC16F877A perfectly fits many uses, from automotive industries and
controlling home appliances to industrial instruments, remote sensors, electrical
doorlocks and safety devices. It is also ideal for smart cards as well as for battery
supplied devices because of its low consumption.
EEPROM memory makes it easier to apply microcontrollers to devices
where permanent storage of various parameters is needed (codes for transmitters,
motor speed, receiver frequencies, etc.). Low cost, low consumption, easy handling
and flexibility make PIC16F877A applicable even in areas where microcontrollers
had not previously been considered (examples: timer functions, interface replacement in larger systems, coprocessor applications, etc.). In-System Programmability of this chip (along with its use of only two pins for data transfer) makes it possible to modify a product even after assembling and testing have been completed.
SENSORS:
A sensor is a device that detects and responds to some type of input from the
physical environment. The specific input could be light, heat, motion, moisture,
pressure, or any one of a great number of other environmental phenomena. The
the body, the reflectance PPG can be applied to any part of the human body. In either
case, the detected light reflected from or transmitted through the body part will
fluctuate according to the pulsatile blood flow caused by the beating of the heart.
The HRM-2511E sensor is manufactured by Kyoto Electronic Co., China, and operates in transmission mode. The sensor body is built with a flexible silicone rubber material that helps keep the sensor tightly held to the finger. Inside the sensor case, an IR LED and a photodetector are placed on two opposite sides, facing each other. When a fingertip is plugged into the sensor, it is illuminated by the IR light coming from the LED. The photodetector diode receives the light transmitted through the tissue on the other side. More or less light is transmitted depending on the tissue blood volume. Consequently, the transmitted light intensity varies with the pulsing of the blood with the heartbeat. A plot of this variation against time is referred to as a photoplethysmogram or PPG signal. The
following picture shows a basic transmittance PPG probe setup to extract the pulse
signal from the fingertip.
The PPG signal consists of a large DC component, which is attributed to the total
blood volume of the examined tissue, and a pulsatile (AC) component, which is
synchronous to the pumping action of the heart. The AC component, which carries
vital information including the heart rate, is much smaller in magnitude than the
DC component. A typical PPG waveform is shown in the figure below (not to
scale).
The two maxima observed in the PPG are called the Systolic and Diastolic peaks, and they can provide valuable information about the cardiovascular system (a topic outside the scope of this article). The time duration between two consecutive Systolic peaks gives the instantaneous heart rate.
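Instantaneous heart rate follows directly from the spacing of the systolic peaks; a sketch with a naive peak detector on a synthetic pulse signal (all names and the 1.2 Hz test signal are mine):

```python
import math

def systolic_peaks(signal, threshold=0.5):
    """Indices of local maxima above a threshold (naive peak detector)."""
    return [i for i in range(1, len(signal) - 1)
            if signal[i] > threshold
            and signal[i] >= signal[i - 1] and signal[i] > signal[i + 1]]

fs = 100.0                                    # sampling rate, Hz
ppg = [math.sin(2 * math.pi * 1.2 * i / fs)   # synthetic 1.2 Hz pulsatile (AC) component
       for i in range(1000)]
peaks = systolic_peaks(ppg)
intervals = [(b - a) / fs for a, b in zip(peaks, peaks[1:])]
hr = 60.0 / (sum(intervals) / len(intervals))   # ~72 bpm for a 1.2 Hz pulse
```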
Here are the features of Easy Pulse V1.1 sensor module.
Uses HRM-2511E transmission PPG sensor for stable readings
MCP6004 Opamp with rail-to-rail output capability for maximum signal
swing
Separate analog and digital outputs
ADC:
Basic analog-to-digital converter terminology will be covered first, followed
by configuration of the analog-to-digital converter peripheral. Next, information on
the usage of the peripheral will be presented, initially focusing on the 8-bit analog-to-digital converter. Then the differences between the 8-bit and the 10- or 12-bit converters will be discussed. Finally, some additional reference resources will be
highlighted.
Microcontrollers are very efficient at processing digital numbers, but they
cannot handle analog signals directly. An analog-to-digital converter, converts an
analog voltage level to a digital number. The microcontroller can then efficiently
inaccurate. The input range is set by high and low voltage references. These define the upper and lower limits of the valid input range. In many cases, the high and low voltage references are selected as the microcontroller supply voltage and ground; at other times an external reference or references are used. In addition, some devices have internal voltage references that can be used. The source or sources for these voltage references are a configuration option when setting up the analog-to-digital converter in the PICmicro microcontroller (MCU). Note that there are restrictions on the voltage reference levels; for example, the reference voltages generally shouldn't be less than Vss or greater than VDD. There is also a minimum difference that is required between the high and low reference voltages. Please consult your data sheet for the voltage reference requirements.
The output of an analog-to-digital converter is a quantized representation of
the original analog signal. The term quantization refers to subdividing a range into
small but measurable increments. The total allowable input range is divided into a
tenth of a volt. The maximum quantization error in this case would be five hundredths of a volt, or one-half of the increment size. It should be noted that the minimum quantization error for the analog-to-digital converter peripheral in the PICmicro devices is 500 microvolts. Therefore, the smallest step size for each state cannot be less than one millivolt.
Resolution defines the number of possible analog-to-digital converter output
states. As previously discussed, the result is a digital or whole number, so for an 8-bit converter the possible states will be: zero, one, two, three and so on, with 255
as the maximum state. A 10-bit converter will have 1023 as the maximum state,
and a 12- bit converter will have 4095 as the maximum state. If the input range
remains constant, a higher resolution converter will have less quantization error
because the range is divided into smaller steps. This is similar in concept to the
process of rounding a number to the nearest hundredths, having potentially less
error than rounding to the nearest tenths.
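The step size, quantization error, and maximum states described above can be checked numerically; a sketch assuming the ideal code = floor(Vin / step) convention (the function name and the 0-5 V reference are mine):

```python
def adc_code(vin, vref=5.0, bits=8):
    """Ideal ADC: step = Vref / 2^n; code = floor(Vin / step), clamped to full scale."""
    step = vref / (1 << bits)
    code = int(vin / step)
    return min(code, (1 << bits) - 1)

step_8bit = 5.0 / 256            # ~19.5 mV per step for a 0-5 V, 8-bit converter
max_error = step_8bit / 2        # quantization error is at most half a step

print(adc_code(2.5))             # 128: mid-scale for a 0-5 V, 8-bit converter
print(adc_code(5.0, bits=10))    # 1023: maximum state of a 10-bit converter
print(adc_code(5.0, bits=12))    # 4095: maximum state of a 12-bit converter
```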
Acquisition time is the amount time required to charge the holding capacitor
on the front end of an analog-to-digital converter. The holding capacitor must be
given sufficient time to settle to the analog input voltage level before the actual
conversion is initiated. If sufficient time is not allowed for acquisition, the
conversion will be inaccurate. The required acquisition time is based on a number
of factors, two of them being the impedance of the internal analog multiplexer and
the output impedance of the analog source.
LCD:
An LCD (Liquid Crystal Display) screen is an electronic display module that finds a wide range of applications. A 16x2 LCD display is a very basic module and is very commonly used in various devices and circuits. These modules are preferred over seven-segment and other multi-segment LEDs. The reasons being: LCDs are
if the LCD has accepted and finished processing the last instruction or not. The 10k
Potentiometer controls the contrast of the LCD panel.
Table 4.3 Pin Details of LCD

Pin    Symbol     Function
1      GND        Ground
2      Vcc        Supply voltage
3      VEE        Contrast adjustment
4      RS         Register select (0 -> control input, 1 -> data input)
5      R/W        Read/Write
6      EN         Enable
7-14   D0 to D7   Data bus
POWER SUPPLY:
Introduction:
The input to the circuit is applied from the regulated power supply. The a.c.
input, i.e., 230V from the mains supply, is stepped down by the transformer to 12V and is fed to a rectifier. The output obtained from the rectifier is a pulsating d.c voltage.
So in order to get a pure d.c voltage, the output voltage from the rectifier is fed to a
filter to remove any a.c components present even after rectification. Now, this
voltage is given to a voltage regulator to obtain a pure constant dc voltage.
Block Diagram:
Transformer:
Usually, DC voltages are required to operate various electronic equipment
and these voltages are 5V, 9V or 12V. But these voltages cannot be obtained
directly. Thus the a.c input available at the mains supply i.e., 230V is to be brought
down to the required voltage level. This is done by a transformer. Thus, a step
down transformer is employed to decrease the voltage to a required level.
Rectifier:
The output from the transformer is fed to the rectifier. It converts A.C. into pulsating D.C. The rectifier may be a half-wave or a full-wave rectifier. In this project, a bridge rectifier is used because of its merits, like good stability and full-wave rectification.
Filter:
A capacitive filter is used in this project. It removes the ripples from the output of the rectifier and smoothens the D.C. The output received from this filter is constant as long as the mains voltage and load are maintained constant. However, if either of the two varies, the D.C. voltage received at this point changes. Therefore a regulator is applied at the output stage.
Voltage Regulator:
As the name itself implies, it regulates the input applied to it. A voltage regulator
is an electrical regulator designed to automatically maintain a constant voltage
level. In this project, power supply of 5V and 12V are required. In order to obtain
these voltage levels, 7805 and 7812 voltage regulators are to be used. The first
number 78 represents positive supply and the numbers 05, 12 represent the
required output voltage levels.
5. TESTING
Testing is a set of activities that can be planned in advance and conducted
systematically. For this reason a template for software testing, a set of steps into which we can place specific test-case design techniques and testing methods, should be defined for the software process. Testing often accounts for more effort than any other software engineering activity. If it is conducted haphazardly, time is wasted, unnecessary effort is expended, and even worse, errors sneak through undetected.
It would therefore seem reasonable to establish a systematic strategy for testing
software.
is the process of executing the program with the intent of finding errors. A good test case design is one that has a high probability of finding an as-yet-undiscovered error.
Testing is generally described as a group of procedures carried out to evaluate some aspect of a piece of software. It can be described as a process used for revealing defects in the software, and for establishing that the software has attained a specified degree of quality with respect to selected attributes. It is an investigation conducted to provide stakeholders with information about the quality of the product or service under test. Testing can also provide an
objective, independent view of the software to allow the business to appreciate and
understand the risks of the software implementation.
Testing is more than just debugging. The purpose of testing can be quality
assurance, verification, and validation, or reliability estimation. Testing can be used
as a generic metric as well. Correctness testing and reliability are the two major
areas of testing. Software testing is a trade-off between budget, time and quality.
Poor quality software that can cause loss of life or property is no longer acceptable
to society. Failures can result in catastrophic losses. Conditions demand software development staff with interest and training in areas of software product and process quality. Highly qualified staff ensure that software products are built on time,
within budget, and are of the highest quality with respect to attributes such as
reliability, correctness, usability and the ability to meet all user requirements.
Testing helps in verifying and validating the software to see if it is working as it is
intended to be working. Test techniques include, but are not limited to, the process
of executing a program or application with the intent of finding software bugs
(errors or other defects).
While testing, care must be taken not to fall into the trap of rewriting large parts of the system unnecessarily or even adding new code. This comes about when it is obvious that not all of the required functionality has been implemented. It can also happen when the user introduces new functionality which they had omitted from the original specifications. Testing should, therefore, simply ensure that the system meets its original specifications and accurately performs to that specification. Testing is not an easy phase of system development and should not be treated lightly. Some organizations employ staff specifically to carry
out the testing of the products prior to release to the user. During this phase it is required to:
1. Implement a test plan using a defined strategy: Maintain test
documentation recording both the expected results of the test data and the actual
results. The bank of test data should be sufficient to thoroughly test the
implemented solution in scope and range.
2. Evaluate the results of test runs: Amend coding as necessary: where there
are discrepancies between the expected results and the actual results, the
application and documentation must be amended and corrected accordingly.
3. Testing is usually performed for the following purposes:
To improve quality
As computers and software are used in critical applications, the outcome of a
bug can be severe. Bugs can cause huge losses. Bugs in the critical systems have
caused airplane crashes, allowed space shuttle missions to go awry, halted trading
on the stock market, and worse. Bugs can kill. Bugs can cause disasters. Quality is
the conformance of the specified design requirement. Being correct, the minimum
requirement of quality, means performing as required under specified conditions.
Debugging, a narrow view of software testing, is performed heavily by the programmer to find design defects. The imperfection of human nature makes it almost impossible to get a moderately complex program correct the first time. Finding problems and getting them fixed is the purpose of debugging in the programming phase.
undiscovered error.
3. A successful test is one that uncovers an as-yet-undiscovered error.
methods are especially useful for revealing design and code based control, logic
and sequence defects, initialization defects and data flow defects.
A major white-box testing technique is code coverage analysis. Code coverage analysis eliminates gaps in a test-case suite. It identifies areas of a program that are not exercised by a set of test cases. Once gaps are identified, you create test cases to verify untested parts of the code, thereby increasing the quality of the software product. There are automated tools available to perform code coverage analysis. Below are a few coverage analysis techniques:
Statement Coverage: This technique requires every possible statement in the
code to be tested at least once during the testing process
Branch Coverage: This technique checks every possible path (if-else and other
conditional loops) of a software application.
Apart from the above, there are numerous coverage types such as condition coverage, multiple condition coverage, path coverage, function coverage, etc. Each technique has its own merits and attempts to test (cover) all parts of the software code. Using statement and branch coverage you generally attain 80-90% code coverage, which is sufficient.
a black box you cannot see into it. The test provides inputs and responds to
outputs without considering how the software works. It exploits specifications to
generate test cases in a methodical way to avoid redundancy and to provide better
coverage.
By applying black-box techniques, we derive a set of test cases that satisfy
the following criteria: (1) test cases that reduce, by a count that is greater than one,
the number of additional test cases that must be designed to achieve reasonable
testing and (2) test cases that tell us something about the presence or absence of
classes of errors, rather than an error associated only with the specific test at hand.
Graph-Based Testing:
The first step in black-box testing is to understand the objects that are
modeled in software and the relationships that connect these objects. Once this has
been accomplished, the next step is to define a series of tests that verify all objects
have the expected relationship to one another [BEI95]. Stated in another way,
software testing begins by creating a graph of important objects and their
relationships and then devising a series of tests that will cover the graph so that
each object and relationship is exercised and errors are uncovered.
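Stated as code, graph-based testing first records the objects and relationships and then derives one test per link, so every relationship is exercised at least once. The object and relationship names below are purely illustrative, not taken from any particular system:

```python
# Model objects and their relationships as a graph of labeled edges.
relationships = [
    ("NewFile", "generates", "Document"),
    ("Document", "contains", "Text"),
    ("Document", "is_represented_as", "Window"),
]

def derive_edge_tests(edges):
    """One test case per relationship guarantees edge coverage of the graph."""
    return [f"verify that {a} {rel} {b}" for a, rel, b in edges]

tests = derive_edge_tests(relationships)
assert len(tests) == len(relationships)
assert tests[0] == "verify that NewFile generates Document"
```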
Equivalence Partitioning:
It is a black-box testing method that divides the input domain of a program
into classes of data from which test cases can be derived. An ideal test case
single-handedly uncovers a class of errors (e.g., incorrect processing of all
character data) that might otherwise require many cases to be executed before the
general error is observed. Equivalence partitioning strives to define a test case
that uncovers classes of errors, thereby reducing the total number of test cases
that must be developed.
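As a sketch of the idea, consider a hypothetical age validator (the function and its limits are invented for illustration). Each equivalence class is represented by a single value, and that one representative stands in for every other input in its class:

```python
def classify_age(age):
    """Hypothetical validator: valid ages are integers in the range [1, 120]."""
    if not isinstance(age, int):
        return "invalid-type"
    if age < 1:
        return "invalid-low"
    if age > 120:
        return "invalid-high"
    return "valid"

# One representative input per equivalence class:
representatives = {
    "invalid-type": "abc",   # non-numeric class
    "invalid-low": 0,        # below-range class
    "invalid-high": 200,     # above-range class
    "valid": 35,             # in-range class
}
for expected, value in representatives.items():
    assert classify_age(value) == expected
```

Four test cases cover the whole input domain; any error in the handling of a class would be exposed by its representative.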
Comparison Testing:
When multiple implementations of the same specification have been
produced, test cases designed using other black-box techniques (e.g., equivalence
partitioning) are provided as input to each version of the software. If the output
from each version is the same, it is assumed that all implementations are correct. If
the output is different, each of the applications is investigated to determine if a
defect in one or more versions is responsible for the difference. In most cases, the
comparison of outputs can be performed by an automated tool. Comparison testing
is not foolproof: if the specification from which all versions have been developed
is in error, all versions will likely reflect the error. In addition, if the
independent versions produce identical but incorrect results, comparison testing will
fail to detect the error.
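A minimal sketch of the technique in Python (the two string-reversal routines are invented stand-ins for independent implementations of one specification):

```python
def reverse_a(s):
    """Implementation 1 of the specification 'reverse the string'."""
    return s[::-1]

def reverse_b(s):
    """Implementation 2, developed independently with a different approach."""
    out = ""
    for ch in s:
        out = ch + out
    return out

# Black-box-derived inputs are fed to both versions and outputs compared.
test_inputs = ["", "a", "abc", "racecar"]
mismatches = [s for s in test_inputs if reverse_a(s) != reverse_b(s)]
assert mismatches == []
```

An empty mismatch list leads the tester to presume both versions correct; as the text notes, identical but wrong outputs would slip through unnoticed.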
Unit Testing
Unit testing focuses verification effort on the smallest unit of software
design: the software component or module. Using the component-level design
description as a guide, important control paths are tested to uncover errors within
the boundary of the module. The relative complexity of tests and uncovered errors
is limited by the constrained scope established for unit testing. The unit test is
white-box oriented, and the step can be conducted in parallel for multiple
components.
The module interface is tested to ensure that information properly flows into
and out of the program unit under test. The local data structure is examined to
ensure that data stored temporarily maintains its integrity during all steps in an
algorithm's execution. Boundary conditions are tested to ensure that the module
operates properly at boundaries established to limit or restrict processing. All
independent paths (basis paths) through the control structure are exercised to
ensure that all statements in a module have been executed at least once. And
finally, all error handling paths are tested.
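A unit test therefore exercises the module interface, its boundary conditions, and its error-handling paths. The sketch below uses Python's unittest style; the clamp function and its limits are illustrative, not part of this project:

```python
import unittest

def clamp(value, low, high):
    """Illustrative module under test: restrict value to the range [low, high]."""
    if low > high:
        raise ValueError("low must not exceed high")
    return max(low, min(value, high))

class ClampUnitTest(unittest.TestCase):
    def test_interface_and_boundaries(self):
        # the module must operate properly at the boundaries that restrict processing
        self.assertEqual(clamp(5, 0, 10), 5)    # interior value passes through
        self.assertEqual(clamp(0, 0, 10), 0)    # exactly at the lower bound
        self.assertEqual(clamp(10, 0, 10), 10)  # exactly at the upper bound
        self.assertEqual(clamp(-1, 0, 10), 0)   # just below the lower bound

    def test_error_handling_path(self):
        # every error-handling path is tested as well
        with self.assertRaises(ValueError):
            clamp(1, 10, 0)
```

Run with `python -m unittest <file>`; each independent path through clamp is executed at least once.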
Acceptance Testing
Acceptance of the system is a key factor in the success of any system. It is a
critical phase of any project and requires significant participation by the end user.
It also ensures that the system meets the functional requirements.
The system under consideration is tested for user acceptance by constantly keeping
in touch with the prospective users while developing the system and making
changes whenever required. This is done with regard to the following points:
Input screen design.
Output screen design.
Integration Testing
Integration testing is a systematic technique for constructing the program
structure while at the same time conducting tests to uncover errors associated with
interfacing. The objective is to take unit tested components and build a program
structure that has been dictated by design. When all components are combined in
advance and the entire program is tested as a whole, a set of errors is usually
encountered. Correction is difficult because isolation of causes is complicated by
the vast expanse of the entire program. Once these errors are corrected, new ones
appear and the process continues in a seemingly endless loop.
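Incremental integration avoids that endless loop by combining one module at a time, substituting stubs for components that are not yet integrated and drivers to exercise the interface under test. A minimal sketch (all names are illustrative, not from this project):

```python
def sensor_stub():
    """Stub: stands in for the not-yet-integrated sensor module,
    returning a fixed, known reading."""
    return 72

def report(read_sensor):
    """Higher-level module being integrated; depends only on the
    sensor interface, not on a concrete implementation."""
    bpm = read_sensor()
    return f"heart rate: {bpm} bpm"

# Driver: exercises the interface between report() and its dependency,
# so any interfacing error is isolated to this one seam.
assert report(sensor_stub) == "heart rate: 72 bpm"
```

Because only one seam is live at a time, a failing assertion here points directly at the report/sensor interface rather than at the whole program.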
Testing Process
Waterfall development model
A common practice of software testing is that testing is performed by an
independent group of testers after the functionality is developed, before it is
shipped to the customer. This practice often results in the testing phase being used
as a project buffer to compensate for project delays, thereby compromising the
time devoted to testing.
Validation Testing
Software validation is achieved through a series of black-box tests that
demonstrate conformity with requirements. A test plan outlines the classes of tests
to be conducted and a test procedure defines specific test cases that will be used to
demonstrate conformity with requirements. Both the plan and procedure are
designed to ensure that all functional requirements are satisfied, all behavioral
characteristics are achieved, all performance requirements are attained.
Functional Testing
Functional testing provides systematic demonstrations that the functions tested
are available as specified by the business and technical requirements, system
documentation, and user manuals.
Functional testing is centered on the following items:
Valid Input
Invalid Input
Functions
Output
Stress Test: A stress test is designed to intentionally break the unit. A great deal
can be learned about the strengths and limitations of a program by examining the
manner in which a program unit breaks.
Structured Test: Structured tests are concerned with exercising the internal logic
of a program and traversing particular execution paths. A white-box test strategy
was employed to ensure that the test cases guarantee that all independent paths
within a module are exercised at least once, and to:
Exercise all logical decisions on their true and false sides.
Execute all loops at their boundaries and within their operational bounds.
Exercise internal data structures to assure their validity.
Check attributes for their correctness.
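The loop-boundary rule above can be sketched with a small example (the moving-sum function is invented for illustration): the loop is executed zero times, exactly once, a typical number of times, and at its maximum bound:

```python
def moving_sum(values, window):
    """Illustrative unit: sum of each `window`-length slice of `values`."""
    return [sum(values[i:i + window]) for i in range(len(values) - window + 1)]

# Execute the loop at its boundaries and within its operational bounds:
assert moving_sum([], 1) == []             # zero iterations (empty input)
assert moving_sum([5], 1) == [5]           # exactly one iteration
assert moving_sum([1, 2, 3], 2) == [3, 5]  # typical interior case
assert moving_sum([1, 2, 3], 3) == [6]     # window equal to the input length
```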
System Testing
System testing ensures that the entire integrated software system meets
requirements. It tests a configuration to ensure known and predictable results. An
example of system testing is the configuration-oriented system integration test.
Test ID | Age limit                                | Expected output  | Pass/Fail
TC1     | 60                                       | Accurate output  | Pass
TC2     | 21                                       | Accurate output  | Pass
TC3     |                                          | Accurate output  | Pass
TC4     | 20, with the seating arrangement changed | Output not exact | Fail
6. EXPERIMENTAL RESULT
MATLAB is a high-performance language for technical computing. It integrates
computation, visualization, and programming in an easy-to-use environment where
problems and solutions are expressed in familiar mathematical notation.
Typical uses include:
Math and computation
Algorithm development
Modeling, simulation, and prototyping
Data analysis, exploration, and visualization
Scientific and engineering graphics
Application development, including Graphical User Interface building
MATLAB is an interactive system whose basic data element is an array that does
not require dimensioning. This allows you to solve many technical computing
problems, especially those with matrix and vector formulations, in a fraction of the
time it would take to write a program in a scalar non-interactive language such as
C or FORTRAN.
Vectorized operations. Adding two arrays together needs only one command,
instead of a for or while loop.
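The same vectorized behavior can be seen, for readers without MATLAB, in Python with NumPy (a sketch for contrast only; the arrays are made up):

```python
import numpy as np

# Vectorized form: adding two arrays is one command, no explicit loop.
a = np.array([1.0, 2.0, 3.0])
b = np.array([10.0, 20.0, 30.0])
c = a + b

# Equivalent scalar-loop form, shown for comparison.
c_loop = np.empty_like(a)
for i in range(a.size):
    c_loop[i] = a[i] + b[i]

assert np.array_equal(c, np.array([11.0, 22.0, 33.0]))
assert np.array_equal(c, c_loop)
```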
The graphical output is optimized for interaction. You can plot your data
very easily, and then change colors, sizes, scales, etc., by using the interactive
graphical tools.
MATLAB's functionality can be greatly expanded by the addition of toolboxes.
These are sets of specialized functions that provide more domain-specific
capabilities.
MATLAB System:
The MATLAB system consists of five main parts:
Development Environment. This is the set of tools and facilities that help
you use MATLAB functions and files. Many of these tools are graphical
user interfaces. It includes the MATLAB desktop and Command Window, a
command history, and browsers for viewing help, the workspace, files, and
the search path.
The MATLAB Mathematical Function Library. This is a vast collection
of computational algorithms ranging from elementary functions like sum,
sine, cosine, and complex arithmetic, to more sophisticated functions like
matrix inverse, matrix eigenvalues, Bessel functions, and fast Fourier
transforms.
Table 6.1 Comparing persons of different ages and determining their heart-rate
values
(The INPUT IMAGE, CAPTURING ROI, and ROI SEPARATION columns of the
original table contained images; only the OUTPUT RANGE values (HR, HRV, RR)
are reproduced here.)

Age group       | HR | HRV | RR
Age >50 but <70 | 65 | 13  | 13
Age >10 but <25 | 78 | 15  | 17
Age <16         | 74 | 14  | 18
Age <10         | 89 | 20  | 19
normal. The typical respiratory rate for a healthy adult at rest is 12-20
breaths per minute.
7. SCREEN SHOTS
OUTPUT:
Fig 7.4 Determining the peaks in the three source signals and finding the
green signal
9. CODING
Main.m
avi=mmreader('facevideo.avi'); % load the recorded face video (mmreader is the pre-R2010b reader)
totalframe=get(avi,'NumberOfFrames');
totaltime=get(avi,'Duration'); % seconds
framerate=get(avi,'FrameRate'); % roughly 14.2 frames per second
timestamp=0:0.0704:totaltime; % one timestamp per frame (0.0704 s frame period)
r_sig=zeros(1,totalframe);
g_sig=zeros(1,totalframe);
b_sig=zeros(1,totalframe);
for i=1:totalframe %% 828 frames are here
frame=read(avi,i);
crop=imcrop(frame,[205 130 105 110]); % rect is [xmin ymin width height]: the facial ROI
red=crop(:,:,1);
green=crop(:,:,2);
blue=crop(:,:,3);
mr=mean2(red);
mg=mean2(green);
mb=mean2(blue);
r_sig(i)=mr;
g_sig(i)=mg;
b_sig(i)=mb;
end
figure
subplot(3,1,1)
plot(r_sig,'r'),grid on
subplot(3,1,2)
plot(g_sig,'g'),grid on
subplot(3,1,3)
plot(b_sig,'b'),grid on
sr=std2(r_sig);
sg=std2(g_sig);
sb=std2(b_sig);
meanr=mean2(r_sig);
meang=mean2(g_sig);
meanb=mean2(b_sig);
detr_r=detrend(r_sig)./sr;
detr_g=detrend(g_sig)./sg;
detr_b=detrend(b_sig)./sb;
figure;
subplot(3,1,1)
plot(detr_r,'r'),grid on
subplot(3,1,2)
plot(detr_g,'g'),grid on
subplot(3,1,3)
plot(detr_b,'b'),grid on
comb_sig=[detr_r;detr_g;detr_b]; % stack the three detrended channel traces
B=JadeR(comb_sig); % JADE ICA: estimate the separating matrix
source_sig=B*comb_sig; % recover the independent source signals
gsource=source_sig(2,:); % the second source is taken as the plethysmographic signal
avg_filt=ones(1,5)/5; % 5-point moving-average kernel
smoothed_sig = convn(gsource,avg_filt,'same');
figure;
subplot(3,1,1);
plot(timestamp,smoothed_sig,'g');
grid on;
Fs = framerate; % sampling rate of the trace (the video frame rate)
N = 128; % FIR filter order
Fc1 = 0.4; % pass band 0.4-4 Hz (24-240 bpm)
Fc2 = 4;
flag = 'scale';
win = hamming(N+1);
b = fir1(N, [Fc1 Fc2]/(Fs/2), 'bandpass', win, flag);
bandpass=convn(smoothed_sig,b,'same');
subplot(3,1,2)
plot(timestamp,bandpass),grid on;
xx=0:0.0704:58.2636; % original frame timestamps
sampledata=0:1/256:58.2636; % uniform 256 Hz grid for spline interpolation
interpolate=spline(xx,bandpass,sampledata);
subplot(3,1,3)
plot(interpolate),grid on;
[pks,loc]=findpeaks(interpolate,'minpeakdistance',100);
hold on
plot(loc,pks,'*r');
hold off
temp1=[0 loc];
temp2=[loc 0];
temp=temp2-temp1; % successive differences of peak locations
ibi=temp(1,1:size(loc,2))/256; % inter-beat intervals in seconds (256 Hz grid)
timeibi=loc/256; % peak times in seconds
ibisignal=detrend(ibi);
figure,subplot(3,1,1)
plot(timeibi,ibisignal,'--*b'),grid on
[f,Pxx,prob] = lomb(timeibi,ibisignal,4,1);
[psdpeak,psdloc]=findpeaks(Pxx);
[peakvalue,ind]=max(psdpeak);
fpeak=f(psdloc(ind));
subplot(3,1,2)
plot(f,Pxx,'b'),grid on;
hold on
plot(fpeak,peakvalue,'*r')
hold off
resp_rate=60*fpeak
heart_rate=60/mean(ibi)
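The tail of Main.m converts peak locations into inter-beat intervals and then into a heart rate. The same arithmetic can be sketched in Python/NumPy for readers without MATLAB (the peak indices below are made up for illustration; Main.m additionally counts the offset of the first peak as an interval):

```python
import numpy as np

fs = 256.0                                  # interpolation rate used in Main.m
peak_locs = np.array([200, 456, 712, 968])  # hypothetical peak sample indices

ibi = np.diff(peak_locs) / fs               # inter-beat intervals in seconds
heart_rate = 60.0 / ibi.mean()              # bpm, mirroring heart_rate = 60/mean(ibi)

assert np.allclose(ibi, 1.0)                # these peaks are one second apart
assert round(heart_rate) == 60
```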
lomb.m
function [f,P,prob] = lomb(t,h,ofac,hifac)
% Normalized Lomb-Scargle periodogram of the unevenly sampled series h(t);
% ofac is the oversampling factor, hifac the highest-frequency factor.
h=h';t=t';
N = length(h);
T = max(t) - min(t);
mu = mean(h);
s2 = var(h);
f = (1/(T*ofac):1/(T*ofac):hifac*N/(2*T)).';
w = 2*pi*f;
tau = atan2(sum(sin(2*w*t.'),2),sum(cos(2*w*t.'),2))./(2*w);
cterm = cos(w*t.' - repmat(w.*tau,1,length(t)));
sterm = sin(w*t.' - repmat(w.*tau,1,length(t)));
P = (sum(cterm*diag(h-mu),2).^2./sum(cterm.^2,2) + ...
sum(sterm*diag(h-mu),2).^2./sum(sterm.^2,2))/(2*s2);
M=2*length(f)/ofac;
prob = M*exp(-P);
inds = prob > 0.01;
prob(inds) = 1-(1-exp(-P(inds))).^M;
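For readers without MATLAB, the periodogram computed by lomb.m can be transcribed term by term into Python/NumPy as below (the significance estimate prob is omitted; the 0.3 Hz test signal is invented to illustrate that the peak of P lands at the dominant frequency):

```python
import numpy as np

def lomb(t, h, ofac, hifac):
    """NumPy transcription of the normalized Lomb periodogram in lomb.m."""
    N = len(h)
    T = t.max() - t.min()
    mu, s2 = h.mean(), h.var(ddof=1)       # MATLAB var normalizes by N-1
    step = 1.0 / (T * ofac)
    f = np.arange(step, hifac * N / (2 * T) + step / 2, step)
    w = 2 * np.pi * f
    wt = np.outer(w, t)
    tau = np.arctan2(np.sin(2 * wt).sum(axis=1),
                     np.cos(2 * wt).sum(axis=1)) / (2 * w)
    arg = wt - (w * tau)[:, None]
    c, s = np.cos(arg), np.sin(arg)
    P = ((c @ (h - mu)) ** 2 / (c ** 2).sum(axis=1)
         + (s @ (h - mu)) ** 2 / (s ** 2).sum(axis=1)) / (2 * s2)
    return f, P

# A clean 0.3 Hz tone, sampled at 240 uneven instants over 60 s:
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 60, 240))
h = np.sin(2 * np.pi * 0.3 * t)
f, P = lomb(t, h, ofac=4, hifac=1)
assert abs(f[np.argmax(P)] - 0.3) < 0.05   # peak at the true frequency
```

The same peak-picking step (findpeaks and max on Pxx) then yields the respiratory rate in Main.m.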
Jader.m
function B = JadeR(X,m)
% Blind separation of the m sources mixed in the rows of X, using JADE
% (Joint Approximate Diagonalization of Eigenmatrices, J.-F. Cardoso).
% Returns the separating matrix B such that B*X estimates the sources.
verbose = 0;
[n,T] = size(X);
if nargin==1, m=n; end;
if m>n, return; end
if verbose, fprintf('jade -> Looking for %d sources\n',m); end;
% Whitening and projection onto the signal subspace
X = X - mean(X')' * ones(1,T);
[U,D] = eig((X*X')/T);
[puiss,k] = sort(diag(D));
rangeW = n-m+1:n;
scales = sqrt(puiss(rangeW));
W = diag(1./scales) * U(1:n,k(rangeW))';
iW = U(1:n,k(rangeW)) * diag(scales);
X = W*X;
% Estimation of the cumulant matrices
dimsymm = (m*(m+1))/2;
nbcm = dimsymm;
CM = zeros(m,m*nbcm);
R = eye(m);
Qij = zeros(m);
Xim = zeros(1,m);
Xjm = zeros(1,m);
scale = ones(m,1)/T;
Range = 1:m;
for im = 1:m
Xim = X(im,:);
Qij = ((scale*(Xim.*Xim)).*X)*X' - R - 2*R(:,im)*R(:,im)';
CM(:,Range) = Qij;
Range = Range + m;
for jm = 1:im-1
Xjm = X(jm,:);
Qij = ((scale*(Xim.*Xjm)).*X)*X' - R(:,im)*R(:,jm)' - R(:,jm)*R(:,im)';
CM(:,Range) = sqrt(2)*Qij;
Range = Range + m;
end;
end;
% Initialization of the joint diagonalizer
if 1,
if verbose, fprintf('jade -> Initialization of the diagonalization\n'); end
[V,D] = eig(CM(:,1:m));
for u=1:m:m*nbcm,
CM(:,u:u+m-1) = CM(:,u:u+m-1)*V;
end;
CM = V'*CM;
else,
V = eye(m);
end;
% Joint diagonalization by Givens rotations
seuil = 1/sqrt(T)/100; % rotation angles below seuil are considered negligible
encore = 1;
sweep = 0;
updates = 0;
g = zeros(2,nbcm);
gg = zeros(2,2);
G = zeros(2,2);
c = 0;
s = 0;
ton = 0;
toff = 0;
theta = 0;
if verbose, fprintf('jade -> Contrast optimization by joint diagonalization\n'); end
while encore, encore=0;
if verbose, fprintf('jade -> Sweep #%d\n',sweep); end
sweep = sweep+1;
for p=1:m-1,
for q=p+1:m,
Ip = p:m:m*nbcm;
Iq = q:m:m*nbcm;
% compute the Givens rotation angle from the current cumulant matrices
g = [ CM(p,Ip)-CM(q,Iq) ; CM(p,Iq)+CM(q,Ip) ];
gg = g*g';
ton = gg(1,1)-gg(2,2);
toff = gg(1,2)+gg(2,1);
theta = 0.5*atan2(toff, ton+sqrt(ton*ton+toff*toff));
% update the cumulant matrices and the rotation V when the angle is significant
if abs(theta) > seuil,
encore = 1;
updates = updates + 1;
c = cos(theta);
s = sin(theta);
G = [ c -s ; s c ];
pair = [p;q];
V(:,pair) = V(:,pair)*G;
CM(pair,:) = G'*CM(pair,:);
CM(:,[Ip Iq]) = [ c*CM(:,Ip)+s*CM(:,Iq) -s*CM(:,Ip)+c*CM(:,Iq) ];
end
end
end
end
if verbose, fprintf('jade -> Total of %d Givens rotations\n',updates); end
% Sort the rows of the separator by decreasing source energy and fix signs
B = V'*W;
A = iW*V;
[vars,keys] = sort(sum(A.*A));
B = B(keys,:);
B = B(m:-1:1,:); % most energetic source first
b = B(:,1);
signs = sign(sign(b)+0.1); % sign convention: first column entries positive
B = diag(signs)*B;
return;
bphamming.m
function Hd = bphamming
% Band-pass FIR filter (0.7-4 Hz, Hamming window) returned as a dfilt object.
Fs = 14.2113; % video frame rate in Hz
N = 128;
Fc1 = 0.7;
Fc2 = 4;
flag = 'scale';
win = hamming(N+1);
b = fir1(N, [Fc1 Fc2]/(Fs/2), 'bandpass', win, flag);
Hd = dfilt.dffir(b);