Ali S. AlMejrad
Biomedical Technology Department, College of Applied Medical Sciences
King Saud University, P.O.Box 10219, Riyadh 11433, Kingdom Saudi Arabia
E-mail: amejrad@ksu.edu.sa
Abstract
This paper discusses the issues and challenges of a research project designed to assess different human emotions through the Electroencephalogram (EEG). The work led to the development of a real-time system for human emotion detection through EEG and has become a benchmark for continuing international study. EEG measurement is non-invasive and inexpensive, has a very high sensitivity to information about internal (endogenous) changes of brain state, and offers a very high time resolution, in the millisecond range. Because of the latter property, EEG data are particularly suited to studies of the brain mechanisms of cognitive-emotional information processing, which occur in the millisecond range. It is well known that specific cortical and sub-cortical brain systems are engaged, and can be differentiated by regional electrical activity, according to the associated emotional states. Important challenges have to be faced in developing efficient emotion recognition from EEG signals: (i) designing a protocol that stimulates a single emotion rather than multiple emotions; (ii) developing an efficient algorithm for removing noise and artifacts from the EEG signal; and (iii) applying a suitable and efficient artificial intelligence technique to classify the emotions. In addition, since emotional activity in the brain produces different characteristic EEG waves, this work attempts to investigate brain activity related to emotion by analyzing the EEG.
1. Introduction
The EEG signal represents the superimposition of diverse processes in the brain. Very little research has been done to study the effects of these individual processes separately. Evidence from imaging research suggests that processing of facial affect relies on the interplay of several distinct brain areas. The inferior occipito-temporal cortex, especially the fusiform gyrus, plays a key role in the detection of facial configurations [1]. Further analysis of facial affect has been shown to be related to activation of the superior temporal sulcus, the amygdala, the orbito-frontal cortex, and the insular cortex [2]. In medical and basic research, the correlation of particular brain waves with sleep phases, emotional states, physiological profiles, and types of mental activity is an ongoing topic.
Nonverbal information in human facial expressions, gestures, and voice plays an important role in human communication. In particular, by using information about emotion and/or affect, people can communicate with each other more smoothly. This means that nonverbal
Human Emotions Detection using Brain Wave Signals:
A Challenging 641
2. Neuro-Imaging Methods
Cognitive neuroscience methods are broadly classified into two types: (i) single-cell recording and (ii) brain imaging. The main limitations of single-cell recording are that (i) it is invasive, requiring brain surgery; (ii) it is stressful and often involves medication; (iii) the experimental procedure is time-constrained; and (iv) retesting is not possible. Brain imaging methods, in turn, include Positron Emission Tomography (PET), functional Magnetic Resonance Imaging (fMRI), Electroencephalography (EEG), Magnetoencephalography (MEG), Magnetic Resonance Imaging (MRI), and Transcranial Magnetic Stimulation (TMS). The basic advantages of these methods are non-invasiveness (no brain surgery is required), high speed, and good accuracy. PET and fMRI provide an indirect measure of blood flow: fMRI is BOLD (Blood Oxygen Level Dependent) and measures hemodynamic adjustments. The limitations of this approach are sensitivity to artifacts (e.g., movement, cavities, and tissue impedance differences), the fact that blood flow levels in the CNS (Central Nervous System) can change the imaging characteristics, and limited temporal resolution.
The basic idea behind MEG is the measurement of magnetic fields occurring outside the head as a result of naturally occurring electrical activity in the brain. It gives better spatial resolution than EEG, but is rarely used in clinical applications. The concept of TMS is completely different from the previous methods: it uses electromagnetic induction to temporarily disrupt brain function. Though this procedure can be very focal and gives high temporal resolution, there is a risk of inducing seizures [4]. Unfortunately, the above methods require sophisticated devices that can be operated only in special facilities. Moreover, techniques for measuring blood flow have long latencies and are thus less appropriate for interaction [5]. Their main drawbacks are bulky scanners and the slow vascular response to local activity. Most importantly, the mobility of the user is limited, and the imaging is governed by oxygen circulation in the brain. In this research, we investigate the basic and fundamental issues and challenges of assessing emotions through EEG signals.
3. Emotions
Emotions and their expression are key elements in social interaction, serving as mechanisms for signaling, directing attention, motivating and controlling interactions, situation assessment, construction of self- and other-image, expectation formation, intersubjectivity, and so on. Emotions are not only tightly interwoven neurologically with the mechanisms responsible for cognition; they also play a central role in decision making, problem solving, communicating, negotiating, and adapting to unpredictable environments. Emotion consists of more than its outward physical expression: it also consists of internal feelings and thoughts, as well as other internal processes of which the person experiencing the emotion may not be aware. An individual's emotional state may be influenced by many kinds of situations, and different people have different subjective emotional experiences even in response to the same stimulus.
Recently, a constellation of findings from neuroscience, psychology, and cognitive science suggests that emotion plays surprisingly critical roles in rational and intelligent behavior. When we are happy, our perception is biased towards selecting happy events, and likewise for negative emotions. Similarly, when making decisions, people are often influenced by their affective states. Reading a text while experiencing a negatively valenced emotional state often leads to a very different interpretation than reading the same text while in a positive state [63]. These authors classified the different types of emotions elicited from subjects through physiological signals, and also described the kinds of emotions, the feature extraction techniques, the methods of eliciting emotions, and the physiological signals used to classify them.
After a century of research there is little agreement on a definition of emotion, and many theories have been proposed. A number of these could not be verified until recently, when improved measurement of specific physiological signals became available. In general, emotions are short-term (existing from a few microseconds to milliseconds) [6]. One of the hallmark questions in emotion theory is whether distinct physiological patterns accompany each emotion [7]. Ekman et al. [8] and Winton et al. [9] provided some of the first findings showing significant differences in autonomic nervous system signals according to a small number of emotional categories or dimensions, but did not explore automated classification.
It is also apparent that we humans, while extremely good at feeling and expressing emotions, still cannot agree on how they should best be defined [10]. On top of this, emotion recognition is itself a technically challenging field. Being inherently multi-modal, there are a number of ways in which emotions can be recognized. How to evaluate an individual's emotional state objectively, and how to identify the functional areas of emotion processing, thus becomes the central issue. The ability to recognize emotion is one of the hallmarks of emotional intelligence, an aspect of human intelligence that has been argued to be even more important than mathematical and verbal abilities. Recognition may proceed via speech, facial expression, gesture, or a variety of other physical and physiological cues. This spread of modalities across which emotion is expressed leaves the field open to many different potential recognition methods.
classified physiological patterns for a set of six emotions (happy, sad, disgust, fear, joy, and anger) by showing video clips to the subjects. The features used for classification were skin conductance, heart rate, and temperature. In [12], the authors proposed independent emotion recognition by gathering data from multiple subjects (multi-modal) and classified simple emotions such as pleasure and displeasure using neural networks and Support Vector Machines.
One researcher, Lang P.J., claimed that emotions can be characterized in terms of judged valence (pleasant or unpleasant) and arousal (calm or aroused). Fig. 1(a) and Fig. 1(b) show the relation between arousal and valence. The relation between physiological signals and arousal/valence is established through the activation of the autonomic nervous system when emotions are elicited.
After perceiving a stimulating event, an individual instantly and automatically experiences physiological changes; the response to these changes is called emotion (William James).
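The valence-arousal characterization above can be made concrete with a small sketch. The quadrant labels and example emotions below are illustrative assumptions, not taken from this paper:

```python
def quadrant(valence: float, arousal: float) -> str:
    """Map a (valence, arousal) pair to one of the four quadrants of
    the circumplex model.  The example emotions attached to each
    quadrant are illustrative, not a validated taxonomy."""
    if valence >= 0 and arousal >= 0:
        return "high-arousal pleasant (e.g. joy)"
    if valence < 0 and arousal >= 0:
        return "high-arousal unpleasant (e.g. anger, fear)"
    if valence < 0:
        return "low-arousal unpleasant (e.g. sadness)"
    return "low-arousal pleasant (e.g. relaxation)"
```

A classifier built on physiological signals would estimate the valence and arousal coordinates first and then assign a discrete label in this way.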
Figure 2: Brain Model for Emotion Recognition
Reading emotions from brainwaves, which index the central nervous system, therefore seems general and effective, since emotions are excited in the limbic system and are deeply related to cognitive processing. Among the several regions of the brain, the amygdala plays a major role in recognizing the emotion of fear. Fig. 2 shows the region of the brain in which the amygdala is located. Earlier experimental studies on animals showed that the amygdala plays a major role in recognizing fear in the animal brain, and that its bilateral removal reduces levels of aggression and fear in rats and monkeys.
Bilateral amygdala damage likewise reduces recognition of fear-inducing stimuli and of fear in others. Bilateral amygdala damage impairs recognition of negative emotions from facial expressions, but does not in general impair recognition of emotions from complex static visual stimuli, provided those stimuli contain cues in addition to facial expressions. This finding is especially notable with regard to fear, whose recognition is often impaired following bilateral amygdala damage. Whereas the inclusion of facial expressions improved recognition of negative emotions in all other subject groups, subjects with bilateral amygdala damage derived much less benefit from their inclusion. A variety of brain regions are involved in the processing of facial expressions of emotion; they are active at different times, and some structures are active at more than one time. The amygdala is particularly implicated in the processing of fear stimuli, receiving early (<120 ms) subcortical as well as late (~170 ms) cortical input from the temporal lobes.
capture neural activity on a millisecond scale from the entire cortical surface, while fNIRS records hemodynamic activity on a scale of seconds [13].
Figure 3: Cerebral hemispheres showing the motor areas (towards the front) and the sensory areas (towards
the back)
Regarding EEG correlates of emotion, researchers often focus on reductions in alpha-band (8 Hz - 13 Hz) activity. Much research suggests an inverse relationship between alpha activity and brain activation in adults. The EEG has been used to record the electrical differences between resting-state and stimulus conditions in the two hemispheres, together with other physiological activity in response to stimuli. The use of EEG has been pivotal in studies of brain asymmetry and emotion.
According to one study (Lee M, 2000), positive and negative emotions may or may not be estimated from the EEG signal using Skinner's Point-Wise Correlation Dimension (PD2) analysis, but this PD2 does reflect some of the mental activity in the brain areas. Fig. 4 shows the regions of the brain activated in emotion recognition: green indicates neutral, red indicates anger, purple indicates happiness, and blue indicates sadness.
The study concluded that the arithmetic task plays a major role compared with all other methods of eliciting emotion from subjects, and that greater concentration increases the dimensional complexity of the EEG dynamics. It is the result of the interplay between an individual's cognitive appraisal of an event and his or her physical response to it.
Ambulatory electroencephalograph-based systems for monitoring brain waves are a recent technology that frees patients from sitting in front of an EEG monitoring device for the entire duration of a recording. With this method the patient can be in any place: an electrode cap fitted on the patient collects the brain waves and transmits them to a receiver. The received EEG signals then indicate the patient's condition.
For instance, it was found that alpha-band power (8 Hz - 13 Hz) was lower in the left hemisphere than in the right for verbal tasks, and lower in the right hemisphere than in the left for spatial tasks. This phenomenon is referred to in the literature as alpha-band asymmetry. In the same work, both motor and non-motor tasks were investigated, and it was found that tasks requiring motor output engage the hemispheres more asymmetrically [16].
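The alpha-asymmetry measure described above can be sketched numerically. This is a generic illustration, not the cited authors' procedure: alpha power per hemisphere is taken from the periodogram, and a log-ratio asymmetry index (a common convention, assumed here) compares the two channels:

```python
import numpy as np

def band_power(x, fs, lo=8.0, hi=13.0):
    """Alpha-band (8-13 Hz) power of one EEG channel, estimated from
    the periodogram (squared FFT magnitude)."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].sum()

def alpha_asymmetry(left, right, fs):
    """ln(right) - ln(left) alpha power: a common asymmetry index.
    Recall that higher alpha is usually read as *lower* activation."""
    return np.log(band_power(right, fs)) - np.log(band_power(left, fs))
```

Feeding in one epoch per hemisphere yields a single signed number whose sign indicates which hemisphere carries more alpha power.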
Other papers [17, 18] investigated alpha asymmetry using only non-motor tasks, and found that alpha asymmetry also exists for such tasks. One paper [19] describes a subject who could voluntarily suppress alpha waves in the left or right hemisphere. The authors concluded that there are measurable differences in the EEG that correlate with different types of mental processes. With this in mind, one can see the possibility of training a subject to produce and control mental processes that can be distinguished from one another by an external device using the measured EEG data as input.
The same researchers studied pattern recognition techniques to search for differences in the EEG during the performance of tasks previously used to elicit hemispheric responses. Their findings showed that, unless the task required motor output, there was little difference among the different tasks. They also found very little asymmetry associated with the non-motor tasks. The experiment was done on a group of subjects as a whole, and no attempt was made to distinguish tasks at the individual level. It was not the goal of this research to prove or disprove the theory of hemispheric specialization; however, we chose emotional tasks based on research in this area in the hope of producing measurably different responses in the EEG that could be used to distinguish between the various tasks.
The first of these can be done by using the universal database for facial emotion recognition prepared by Ekman and Friesen [20]. With such methods, however, we cannot be sure that the intended emotion is actually elicited in the subject, and this affects the gathering of good data. Yuankui Y. et al. [21] proposed collecting data from children while they are playing games, learning, and so on, in order to analyze the EEG signal for emotion recognition. So far, no other researchers have collected their data from children for recognizing emotions.
After long discussions with neurologists, we came to the conclusion that an ordinary person is able to control the emotions shown in their outward facial expression under the various emotion elicitation methods. However, people suffering from conditions such as paralysis, brain stroke, or short temper are unable to control their emotions; from them we can therefore obtain data that is as genuine as possible.
With the development of computer science and electronic techniques, event-related potentials (ERPs) have been used to research cognition and emotion. Using this technique, some special emotion-related components of emotional processing have been found, which help interpret the relationship between psychological activities and changes in brain potentials.
[Figure: event-related potential amplitude (µV, roughly -4 to 12 µV) plotted from 0 to 600 ms for target versus non-target stimuli.]
According to one proposal [22], research on brainwaves should consider mixed-sex samples rather than single-sex samples, with both clinical and normal populations, and hence include a broader age range. Most research on human emotion recognition has concentrated on recognizing happiness and disgust, because many people do not consciously register disgust.
The limitations of the above methods are therefore obvious. Including the negative emotions of sadness, anger, and fear among those experimentally manipulated would substantially advance the study of cerebral activity.
Under long-term EEG recording conditions, the activities of the patient always cause disturbances during observation. The status of the brain under various stimulus conditions is shown in Fig. 5, Fig. 6, and Fig. 7, in which the P300 stimulus response plays a major role in present-day analysis of the functional state of the human brain.
4. Research Methodology
The research methodology of this work is shown in Fig. 8. The electrical activity of the human brain is recorded through electrodes placed on the scalp. These recorded brain waves then undergo preprocessing, which mainly consists of the removal of noise, artifacts, and other external interference. Noise can be removed using the Wavelet Transform, and artifacts using Independent Component Analysis or rejection filters. After preprocessing, the wavelet transform is applied to the preprocessed signal to extract features from the EEG. EEG signals are generally non-stationary in nature, so statistical properties of the signals such as the mean, median, variance, average power, average energy, power spectral density, skewness, and other parameters are used. The wavelet transform coefficients are then dimensionally reduced to simplify the classification process.
Figure 8: Basic Diagram for Human Emotion Recognition using EEG signals
Dimension reduction can be done by several methods, such as Principal Component Analysis (PCA) and Independent Component Analysis (ICA). The extracted features are then classified into seven different kinds of emotions (happy, anger, joy, disgust, fear, relaxed, and neutral) through artificial intelligence techniques such as neural networks, genetic algorithms, Support Vector Machines, fuzzy logic, and hybrid structures of the above methods.
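The pipeline of Fig. 8 (preprocessing, feature extraction, and dimension reduction before classification) might be sketched as follows. Every stage here is a simplified stand-in, assumed for illustration: amplitude clipping instead of wavelet denoising or ICA, three raw statistics instead of wavelet coefficients, and an SVD-based PCA:

```python
import numpy as np

def preprocess(eeg):
    # Placeholder for wavelet denoising / ICA artifact removal:
    # here we only remove the DC offset and clip gross outliers.
    eeg = eeg - eeg.mean(axis=-1, keepdims=True)
    return np.clip(eeg, -100.0, 100.0)   # bounds in µV, illustrative

def extract_features(eeg):
    # A few of the statistics listed in the text (mean, variance,
    # average power); a real system would use wavelet coefficients.
    return np.stack([eeg.mean(-1), eeg.var(-1), (eeg ** 2).mean(-1)], -1)

def reduce_dim(feats, k=2):
    # PCA via SVD of the centred feature matrix: project each trial
    # onto the first k principal directions.
    centred = feats - feats.mean(0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return centred @ vt[:k].T
```

The reduced feature vectors would then be handed to whichever classifier (neural network, SVM, etc.) is chosen.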
Though EEG signals span several frequency bands ranging from 0 Hz to 80 Hz, we can concentrate on 0 Hz - 40 Hz for analyzing human brain activity. In general, EEG frequency content is classified into five particularly important rhythms: delta, theta, alpha, beta, and gamma.
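The five rhythms can be encoded as a simple lookup table. The cut-off frequencies below are conventional values and vary slightly between authors:

```python
# Conventional EEG rhythm boundaries in Hz; exact cut-offs differ
# slightly from author to author.
RHYTHMS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
           "beta": (13, 30), "gamma": (30, 40)}

def rhythm_of(freq_hz):
    """Return the rhythm name containing the given frequency,
    or None if it falls outside the 0.5-40 Hz range of interest."""
    for name, (lo, hi) in RHYTHMS.items():
        if lo <= freq_hz < hi:
            return name
    return None
```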
4.3. Artifacts
Signals in the EEG that are of non-cerebral origin are called artifacts; they arise from eye movements or muscular activity that contaminates the EEG signal. Eye blinks in particular cause large artifacts, since the corresponding muscles are very close to the EEG electrodes, and artifacts spread across multiple channels. Removing one more such component would remove too much useful EEG information; this is one of the reasons why it takes considerable experience to interpret EEGs clinically. Artifacts are impulsive in nature, large in amplitude, and different in shape from the signal sequences. Because of their large amplitude, they dominate characterizations of the signal based on second-order statistics such as correlation and spectral analysis. They are classified as (i) patient-related and (ii) technically related. Patient-related artifacts heavily disturb the EEG signal, while the latter can be decreased by lowering the electrode impedance and by using shorter electrode wires. The most common types of artifacts are:
• Eye artifacts (including eyeball, ocular muscles and eyelid)
• EKG artifacts
• EMG artifacts
• Glossokinetic artifacts
Eye artifacts such as eye blinks, along with movements such as jaw clenching, are common and strong artifacts in EEG recordings. Eye artifacts can be reduced by using a cross-hair fixation point, and EMG artifacts by keeping the EEG measurement away from EMG sources. All these artifacts can be reduced using rejection filters and Independent Component Analysis (ICA). Rejection filters use a robust transform to remove artifacts effectively, and their performance is robust to the choice of user-specified parameters [27]. It has been reported that such rejection filters can be applied to biomedical signals corrupted by occasional, short-duration artifacts. ICA has been shown to be very efficient for artifact removal, and ICA-based artifact removal is effective at extracting the useful information from the EEG signal [28].
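As a crude, illustrative stand-in for the rejection filters discussed above (a full ICA is beyond a short sketch), epochs whose peak amplitude exceeds a threshold can simply be discarded, exploiting the fact noted in the text that artifacts are impulsive and large in amplitude:

```python
import numpy as np

def reject_artifact_epochs(epochs, threshold_uv=75.0):
    """Drop any epoch whose peak absolute amplitude exceeds the
    threshold.  `epochs` has shape (n_epochs, n_samples); the 75 µV
    threshold is a typical but purely illustrative value.  Returns
    the surviving epochs and the boolean keep-mask."""
    peak = np.abs(epochs).max(axis=1)
    keep = peak <= threshold_uv
    return epochs[keep], keep
```

In practice this thresholding is often used as a first pass before ICA, since blinks that dominate an epoch would otherwise distort the unmixing estimate.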
insufficient, and loses the most valuable information in the EEG signal. To preserve the valuable information in the EEG signal for emotion recognition and for analyzing the condition of the human brain, time-domain, frequency-domain, and time-frequency analyses may be used.
In this stage, the preprocessed signals are converted into vectors of extracted features that can be used by the intelligent emotion recognition module to determine the subject's emotions. The selected features provide a combination of simple statistics and more complex characteristics related to the nature of the physiological signals and the underlying classification problem. Generally there are two time-series approaches for extracting features from the EEG signal: frequency analysis and time analysis. Time-domain methods based on parametric models are very useful for feature extraction. Anderson C.W. et al. [29] modeled the EEG signal using autoregressive (AR) models with sixth-order coefficients. Such AR-based estimation methods reduce spectral loss problems and give better frequency resolution; compared with the FFT, an AR model requires only a short data record [30]. In the parametric approach, an autoregressive model with time-varying coefficients is the common method of parametric spectral estimation for non-stationary signals. The significant drawback of this method is the difficulty of establishing the model properties for different EEG signals.
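A sixth-order AR fit of the kind attributed to Anderson et al. [29] can be sketched via the Yule-Walker equations. This is a generic textbook estimator, not the cited authors' implementation:

```python
import numpy as np

def ar_coefficients(x, order=6):
    """Estimate AR(p) coefficients by solving the Yule-Walker
    equations with the biased autocorrelation estimate.  The default
    order of 6 mirrors the choice cited in the text."""
    x = np.asarray(x, float) - np.mean(x)
    n = len(x)
    # Biased autocorrelation r(0)..r(p).
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])
    # Toeplitz system R a = r(1..p).
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:order + 1])
```

The fitted coefficients themselves (or the spectrum they imply) then serve as features for classification.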
Frequency-domain approaches are, however, quite popular in recent applications. The aim of signal analysis by this method is to extract relevant information from a signal by transforming it to the frequency domain, where features are extracted from the transformed signal. The most popular form of frequency analysis is power spectral analysis via the Fourier transform, which is widely used for standard quantitative analysis of the spectral decomposition of EEG signals [31, 32, 33]. But this well-known method is valid only for signals of a stationary nature and linear random processes, and it cannot measure time and frequency simultaneously. Some methods make a priori assumptions about the signal to be analyzed; this may yield sharp results if the assumptions are valid, but is obviously not of general applicability. With the development of modern signal processing techniques, there have been attempts to automate the recognition of transients in EEG signals. Generally, there are three main methods for analyzing the time-dependent spectrum of non-stationary signals: (i) the STFT, (ii) the Wigner-Ville Distribution (WVD), and (iii) time-varying parametric models.
The STFT assumes stationarity of the signal within a temporal window matched to the time-frequency resolution chosen for the spectral estimate; its main problem lies in this fixed time-frequency resolution. The WVD gives good time-frequency concentration and edge characteristics, but for multi-component signals it introduces cross-term disturbances, which may lead to misinterpretation of the signal's time-frequency features.
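The fixed-resolution trade-off of the STFT is easy to see in a minimal implementation, where a single window length fixes both the time step and the frequency bin width for the entire analysis:

```python
import numpy as np

def stft(x, fs, win_len=128, hop=64):
    """Minimal STFT: a fixed Hann window slides over the signal, so
    the time resolution (hop) and frequency resolution (fs/win_len)
    are fixed in advance -- the trade-off the text criticises."""
    win = np.hanning(win_len)
    frames = [x[i:i + win_len] * win
              for i in range(0, len(x) - win_len + 1, hop)]
    spec = np.abs(np.fft.rfft(frames, axis=1))      # magnitude spectrum
    times = np.arange(len(frames)) * hop / fs
    freqs = np.fft.rfftfreq(win_len, 1.0 / fs)
    return times, freqs, spec
```

With `win_len=128` at 256 Hz sampling, every bin is 2 Hz wide, regardless of whether the local signal content would benefit from a finer or coarser grid; the wavelet transform discussed next removes exactly this rigidity.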
A powerful method for time-scale analysis of signals was proposed in the late 1980s: the Wavelet Transform. As a "mathematical microscope" for analyzing neural rhythms at different scales, it has been shown to be a powerful tool for investigating small-scale oscillations of brain signals. The transform is well suited to time-series analysis and nullifies some of the above drawbacks through its variable window size and time-frequency filtering properties [34, 35]. Through the wavelet decomposition of EEG records, transient features are accurately captured and localized in both time and frequency. This results in excellent feature extraction from non-stationary signals such as EEGs [36].
on a Fourier Transform (FT) have been most commonly applied. This approach assumes that the EEG spectrum contains characteristic waveforms that fall primarily within a few frequency bands: alpha, beta, gamma, delta, and theta. But the Fourier Transform and its discrete version suffer from large noise sensitivity [29].
The wavelet transform, in contrast, is a two-dimensional time-scale processing method for analyzing non-stationary signals with appropriate scale values and shifting in time [38, 39]. It can thus be used as a powerful tool for characterizing the frequency as well as the time components of EEG signals [40]. The importance of wavelet transforms lies in their Multi-Resolution Analysis (MRA), their ability to analyze signals with discontinuities through a variable window size, and their localization of information in the time-frequency plane. These features are not all present in any other transform [41].
Mathematically speaking, the wavelet transform is a convolution of the wavelet function ψ(t) with the signal x(t). Orthonormal dyadic discrete wavelets are associated with scaling functions φ(t); the scaling function can be convolved with the signal to produce the approximation coefficients S.
The scaling function φ(t) can be defined as

    φ(t) = Σ_k h(k) φ(2t − k)    (1)

and the wavelet function ψ(t) as

    ψ(t) = Σ_k g(k) φ(2t − k)    (2)

where h(k) are the low-pass and g(k) the high-pass filter coefficients. The admissibility condition must also be satisfied.
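Equations (1)-(2) describe a two-channel filter bank. As a minimal illustration (not this paper's implementation), one level of the Haar decomposition can be sketched, with h(k) as the low-pass pair and g(k) as the high-pass pair:

```python
import math

# One level of the Haar filter bank, matching Eqs. (1)-(2):
# H holds the (normalised) low-pass pair h(k), G the high-pass pair g(k).
H = (1 / math.sqrt(2), 1 / math.sqrt(2))
G = (1 / math.sqrt(2), -1 / math.sqrt(2))

def haar_step(x):
    """Split a signal of even length into approximation (low-pass)
    and detail (high-pass) coefficients, downsampling by 2."""
    approx = [H[0] * x[i] + H[1] * x[i + 1] for i in range(0, len(x), 2)]
    detail = [G[0] * x[i] + G[1] * x[i + 1] for i in range(0, len(x), 2)]
    return approx, detail
```

Applying `haar_step` repeatedly to the approximation coefficients yields the multi-level decomposition; the normalisation by 1/√2 keeps the total signal energy unchanged across levels.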
The Mallat algorithm, based on orthogonal wavelets, has been widely used in various areas of non-stationary EEG signal processing [38, 39, 42, 43]. However, the orthogonal decomposition of EEG signals is not able to isolate exactly the four basic rhythms of the spontaneous EEG; the algorithm captures the low-frequency content well but not the high-frequency content.
The discrete wavelet transform decomposes a signal onto a set of basis functions called wavelets. Non-stationary biomedical signals are expanded onto basis functions created by dilating, contracting, and shifting a single prototype function (the mother wavelet), specifically selected for the signal under consideration. A wavelet is a rapidly decaying oscillatory function given by [43, 44, 45]
    ψ_a,b(t) = (1/√|a|) ψ((t − b)/a),  a, b ∈ ℝ, a ≠ 0    (3)
where a is the scale parameter, ℝ is the set of real numbers, and the analyzing wavelet is centered at time b. The wavelet transform of a signal f(t) is defined as
    W_f(a, b) = ∫_{−∞}^{+∞} f(t) ψ*_a,b(t) dt
              = (1/√|a|) ∫_{−∞}^{+∞} f(t) ψ*((t − b)/a) dt    (4)
W_f(a, b) at a given scale a can be interpreted as a filtered version of the signal, band-passed by the filter ψ_a,b(t).
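Equation (4) can be evaluated pointwise by direct numerical integration. The Mexican-hat (Ricker) wavelet used below is an illustrative choice, and since it is real-valued the conjugate ψ* equals ψ:

```python
import numpy as np

def mexican_hat(t):
    # Ricker ("Mexican hat") wavelet: second derivative of a Gaussian,
    # a common real-valued analyzing wavelet.
    return (1 - t ** 2) * np.exp(-t ** 2 / 2)

def cwt_point(f, fs, a, b):
    """Discretised Eq. (4): correlate the sampled signal f with the
    wavelet scaled by a and centred at time b, with dt = 1/fs."""
    t = np.arange(len(f)) / fs
    psi = mexican_hat((t - b) / a) / np.sqrt(abs(a))
    return np.sum(f * psi) / fs
```

Sweeping b over time and a over scales fills in the full time-scale plane W_f(a, b); the response is largest where the scaled wavelet lines up with matching structure in the signal.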
The large number of known wavelet families and functions provides a rich space in which to search for a wavelet that efficiently represents a signal of interest across a wide variety of applications. Wavelet families include the biorthogonal, Coiflet, Haar, Symlet, and Daubechies wavelets, among others. There is no absolute way to choose a particular wavelet; the choice depends on the application. The only requirement is that the wavelet satisfy an admissibility condition: in particular, it must have zero mean. The Haar wavelet has the advantage of being simple to compute and easy to understand. The Daubechies wavelets are conceptually more complex and carry a slightly higher computational overhead, but they pick up detail that the Haar wavelet misses. Even if a signal is not well represented by one member of the Daubechies family, it may still be efficiently represented by another. Selecting a wavelet function that closely matches the signal to be processed is of utmost importance in wavelet applications [37]. The Daubechies wavelet family is similar in shape to the QRS complex, and its energy spectrum is concentrated around low frequencies.
The joint time-frequency resolution obtained by the wavelet transform makes it a good candidate for extracting both details and approximations of the signal, which cannot be obtained by methods such as the Fast Fourier Transform (FFT) and the Short-Time Fourier Transform (STFT). This is because the window size varies over the length of the signal, allowing the wavelet to be stretched or compressed depending on the frequency of the signal [46, 47]. The time-frequency resolution of the STFT and WT is shown in Fig. 14. Typically the variance for 'joy' and 'anger' has a larger amplitude compared with the smaller variance for 'sorrow' and 'relaxation'.
In [48], the researchers used the concept of the Band Relative Intensity Ratio (BRIR) with Multi-Resolution Time-Frequency Analysis (MRTFA) based on the wavelet packet transform to extract the various frequency bands in the low-frequency region. Besides time-domain and frequency-domain methods, there is another approach: fractal analysis. Fractal analysis is a scientific paradigm that has been used successfully in many domains, including the biological and physical sciences. Fractals are objects that possess a form of self-scaling: parts of the whole can be made to fit the whole by shifting and stretching. A fractal description of an EEG signal can be a useful tool for feature extraction [49]. Fractal features represent the morphology of the signals, and these morphological differences can be picked up and exploited by several applications. There are several features based on fractal theory and morphological analysis that can be extracted from a signal [50]. The fractal dimension has proven useful for quantifying the complexity of dynamical signals in biology and medicine; fractal dimensions are measures of the self-similarity of signals.
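One simple fractal-dimension estimate is Katz's, which needs only the curve length and the diameter of the waveform. It is offered here as a standard textbook formula to illustrate the idea, not as the specific measure used in the cited studies:

```python
import math

def katz_fd(x):
    """Katz fractal dimension of a 1-D signal:
    FD = log10(n) / (log10(d/L) + log10(n)),
    where L is the total curve length (sum of successive absolute
    differences), d the maximum distance from the first sample, and
    n the number of steps.  A straight line yields FD = 1; more
    convoluted waveforms yield larger values."""
    n = len(x) - 1
    L = sum(abs(x[i + 1] - x[i]) for i in range(n))
    d = max(abs(x[i] - x[0]) for i in range(1, len(x)))
    return math.log10(n) / (math.log10(d / L) + math.log10(n))
```

Such a scalar complexity measure can be computed per channel or per band and appended to the statistical feature vector.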
The researchers in [51] used a time-series approach for analyzing EEG signals on both 1D and 2D Cartesian grids of electrodes. In that work, spline-based wavelet transforms were used to tune the window size in order to separate spikes into their component frequencies with a high speed of computation. One problem in applying the WT to neural signals, identified by W. Przybyszewski, is the lack of a consistent methodology for handling the pervasive noise recorded from the brain, whether from the summated field potential or from neuron spike-train data. Extracting the various frequency bands from the central nervous system was first tested on animals by Dixon T.L. et al. [52]. In that work they used the wavelet transform in addition to the Fast Fourier Transform to analyze the data; the features extracted from the EEG signals were the average power and its relative differences. They concluded that the wavelet transform is a powerful tool for studying the activity of the human brain before and after critical exposure to a drug.
Normally, the basic features derived from EEG signals rely on separating the various frequency bands [53]. After separating the frequency bands, basic parameters such as energy, power, average power, standard deviation, mean, variance and skewness are determined through the power spectrum of the signals. These parameters are used to distinguish the functional state of the brain.
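A minimal sketch of such parameters with plain NumPy; the time-domain moments are computed directly and a simple one-sided periodogram supplies a dominant-frequency descriptor (the windowing and normalization choices here are illustrative assumptions):

```python
import numpy as np

def spectral_features(x, fs):
    # basic statistical parameters plus a periodogram-based descriptor
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)      # one-sided periodogram
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return {
        "energy": float(np.sum(x ** 2)),
        "average_power": float(np.mean(x ** 2)),
        "mean": float(np.mean(x)),
        "std": float(np.std(x)),
        "variance": float(np.var(x)),
        "skewness": float(np.mean((x - x.mean()) ** 3) / (x.std() ** 3 + 1e-12)),
        "peak_freq": float(freqs[np.argmax(psd)]),  # dominant spectral component
    }

# one second of a pure 10 Hz tone sampled at 256 Hz
fs = 256
t = np.arange(fs) / fs
feats = spectral_features(np.sin(2 * np.pi * 10 * t), fs)
```

A unit-amplitude sine has variance 0.5, and its periodogram peaks exactly at the tone frequency, which makes the feature values easy to sanity-check.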
They observed that 'Joy' and 'Anger' have a large variance of amplitude while 'Sorrow' and 'Relaxation' have a small variance, and they also treated the variance and the average amplitude as separate features for classifying the emotions. The wavelet packet transform has likewise been used to extract features such as energy [54], distance [55] or clusters [56]. In order to derive features from the various bio-signals, a common set of feature values is processed and used as an additional input to, or as a substitute for, the raw signal during classification.
Feature-reduction methods lose some amount of useful data features during the data-reduction process. One way of reducing the number of wavelet coefficients used for feature extraction is to prescribe a 'stopping criterion', called the thresholding operation [57]. This method of data reduction also removes the noise present in the original EEG signal by means of wavelet-transform denoising. The denoising method uses the threshold value σ√(2 log L), where σ is the standard deviation of the noise and L is the length of the data vector [30]. Principal Component Analysis is also used to reduce the number of wavelet coefficients [58]. Some researchers have used Linear Discriminant Bases (LDB) and Joint Best Bases (JBB) to reduce the feature space; LDB uses distance measures among the energy distributions of the signal classes as the criterion for finding optimal subspaces.
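The universal-threshold denoising step can be sketched with a single-level Haar transform. A multi-level decomposition would normally be used; the single level, the wavelet choice, and the test signal below are simplifying assumptions:

```python
import numpy as np

def haar_step(x):
    # one level of the Haar analysis filter bank
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation (low-pass)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail (high-pass)
    return a, d

def inv_haar_step(a, d):
    # perfect-reconstruction synthesis for the Haar step above
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def universal_denoise(x):
    a, d = haar_step(x)
    sigma = np.median(np.abs(d)) / 0.6745        # robust (MAD) noise estimate
    thr = sigma * np.sqrt(2.0 * np.log(len(x)))  # universal threshold sigma*sqrt(2 log L)
    d = np.sign(d) * np.maximum(np.abs(d) - thr, 0.0)  # soft thresholding
    return inv_haar_step(a, d)

# toy demonstration: a 5 Hz sine buried in white noise
rng = np.random.default_rng(0)
t = np.arange(512) / 256.0
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + 0.5 * rng.standard_normal(512)
denoised = universal_denoise(noisy)
```

Because the slow sine contributes almost nothing to the detail band, thresholding discards mostly noise, and the mean squared error against the clean signal drops.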
5. Applications
Emotional understanding by computers is a field of increasing importance; in many ways, emotions are one of the last and least explored frontiers of intuitive human-computer interaction. Human emotions are considered a powerful tool for enhancing communication between humans and computer applications. In recent years, a growing interest has developed in recording, detecting and analyzing brain signals to investigate, explore, and understand the human motor control system, with the aim of building interfaces that use real-time brain signals to generate commands to control and/or communicate with the environment [59].
Analysis of brain waves is mainly used to let people who are physically disabled, for example by paralysis, stroke or other brain disorders, communicate with real-world applications such as controlling household equipment, playing games, making or receiving phone calls, and using the computer. The standard keyboard/mouse model of computer use is not only unsuitable for many people with disabilities, but also somewhat clumsy for many tasks regardless of the capabilities of the user. EEG signals provide one possible means of human-computer interaction that requires very little in terms of physical ability.
Brainwaves are also important for lie detection in police enquiries and for detecting the mental fatigue of pilots and drivers. In the medical field, they are useful for analyzing human diseases on psychological, physiological and psychophysiological grounds. By taking the user's emotional state as an input, a real-time system may be able to adapt its behavior, allowing users to experience the interaction in a more sensible way. The need for this can perhaps be explained by the fact that computers are traditionally viewed as logical and rational tools, something which is incompatible with the often irrational and seemingly illogical nature of emotions [60].
The detection of emotion is becoming an increasingly important field for human-computer interaction as the advantages that emotion recognition offers become more apparent and realizable. The basic architecture of a Brain Computer Interface is shown in Fig. 11. Emotion recognition can be achieved by a number of methods.
Conventional EEG machines rely on wired transmission lines between the instrument and the device that measures human brain activity. When conventional EEG acquisition equipment is to be carried over to a portable device, such as a personal digital assistant (PDA), wired transmission always causes inconvenience in mobile use. If the technical advantages of wireless communication, such as Bluetooth technology, are exploited, the application field of the EEG machine can be extended much more widely. Besides, computers usually lack an effective program to read, analyze and then display the EEG signals stored in conventional EEG machines; if the recorded EEG data could be treated more completely, the serviceability of the EEG acquisition system would be enhanced significantly.
References
[1] Kanwisher N, McDermott J, and Chun M.M, 1997. "The fusiform face area: a module in human extrastriate cortex specialized for face perception", Journal of Neuroscience, 17, pp. 4302-4311.
[2] Haxby J.V, Hoffman E.A, and Gobbini M.I, 2000. "The distributed human neural system for face perception", Trends in Cognitive Sciences, 4, pp. 223-233.
[3] Picard R.W, Vyzas E, and Healey J, 2001. "Toward Machine Emotional Intelligence: Analysis of Affective Physiological State", IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(10), pp. 1175-1191.
[4] Newport R, "Human Social Interaction: Perspectives from Neuroscience", www.psychology.nottingham.ac.uk/staff/rwn.
[5] Marcel S, and Millán J. del R, 2006. "Person Authentication Using Brainwaves (EEG) and Maximum A Posteriori Model Adaptation", IEEE Transactions on Pattern Analysis and Machine Intelligence, special issue on Biometrics, pp. 1-7.
[6] Jenkins J.M, Oatley K, and Stein N.L, 1998. "Human Emotions: A Reader", Blackwell Publishers.
[7] Cacioppo J.T, Tassinary L.G, 1990. "Inferring Psychological Significance from Physiological Signals", American Psychologist.
[8] Ekman P, Levenson R.W, and Friesen W.V, 1983. "Autonomic Nervous System Activity Distinguishes Among Emotions", Science, 221, pp. 1208-1210.
[9] Winton W.M, Putnam L, and Krauss R, 1984. "Facial and Autonomic Manifestations of the Dimensional Structure of Emotion", Journal of Experimental Social Psychology, 20, pp. 195-216.
[10] Richins M.L, 1997. "Measuring Emotions in the Consumption Experience", Journal of Consumer Research, 24, pp. 127-146.
[11] Picard R.W, 2000. "Affective Computing", MIT Press.
[12] Takahashi K, Tsukaguchi A, 2003. "Remarks on Emotion Recognition from Multi-Modal Bio-Potential Signals", IEEE Trans. on Industrial Technology, 3, pp. 1654-1659.
[13] Savran A, Ciftci K, Chanel G, Cruz Mota J, Luong H.V, Sankur B, Akarun L, Caplier A, and Rombaut M, 2006. "Emotion Detection in the Loop from Brain Signals and Facial Images", eNTERFACE'06.
[14] Orrison W.W. Jr, Lewine J.D, Sanders J.A, and Hartshorne M.F, 1995. "Functional Brain Imaging", St. Louis: Mosby-Year Book, Inc.
[15] Lopes da Silva F.H, Van Rotterdam A, 1982. "Biophysical aspects of EEG and MEG generation". In E. Niedermeyer and F.H. Lopes da Silva, editors, Electroencephalography, pp. 15-26. Urban & Schwarzenberg, München-Wien-Baltimore.
[16] Doyle J.C, Ornstein R, and Galin D, 1974. "Lateral specialization of cognitive mode: II. EEG frequency analysis", Psychophysiology, 11, pp. 567-578.
[17] Ehrlichman H, Wiener M.S, 1980. "EEG asymmetry during covert mental activity", Psychophysiology, 17, pp. 228-235.
[41] Zeng D, Ha M.H, 2004. "Applications of Wavelet Transform in Medical Image Processing", Proc. IEEE International Conference on Machine Learning and Cybernetics, 3, pp. 1816-1821.
[42] Unser M, Aldroubi A, 1996. "A review of wavelets in biomedical applications", Proceedings of the IEEE, 84(4), pp. 626-638.
[43] Vetterli M, Kovacevic J, 1995. "Wavelets and Subband Coding", Englewood Cliffs, NJ: Prentice-Hall.
[44] Blanco S, 1996. "Time-Frequency Analysis of Electroencephalogram Series", Physical Review E, 54, pp. 6661-6672.
[45] Graps A, 1995. "An Introduction to Wavelets", IEEE Computational Science and Engineering, 2(2).
[46] Mallat S.G, 1989. "A theory for multiresolution signal decomposition: the wavelet representation", IEEE Trans. on Pattern Analysis and Machine Intelligence, 11(7), pp. 674-693.
[47] Qin S, Ji Z, 2004. "Multi-Resolution Time-Frequency Analysis for Detection of Rhythms of EEG Signals", Proc. IEEE 11th Digital Signal Processing Workshop, 4, pp. 338-341.
[48] Adlakha A, 2002. "Single trial EEG classification", Tech. Rep., Swiss Federal Institute of Technology.
[49] Smreka P, 2002. "Fractal and Multifractal Analysis of Heart Rate Variability in Extremal States of the Human Organism", Ph.D. Thesis, Czech Technical University in Prague.
[50] Schiff S.J, 1994. "Wavelet Transforms for Epileptic Spike and Seizure Detection", IEEE Proc., pp. 1214-1215.
[51] Dixon T.L, Livezey G.T, 1996. "Wavelet-Based Feature Extraction for EEG Classification", IEEE Proc. on EMBS, 3, pp. 1003-1004.
[52] Ishino K, Hagiwara M, 2003. "A Feeling Estimation System Using a Simple Electroencephalograph", IEEE Proc., pp. 4204-4209.
[53] Learned R.E, Willsky A.S, 1995. "A Wavelet Packet Approach to Transient Signal Classification", Applied and Computational Harmonic Analysis, 2, pp. 265-278.
[54] Cocchi M, Seeber R, Ulrici A, 2001. "WPTER: Wavelet Packet Transform for Efficient Pattern Recognition of Signals", Chemometrics and Intelligent Laboratory Systems, 57, pp. 97-119.
[55] Pittner S, Kamarthi S.V, 1999. "Feature Extraction from Wavelet Coefficients for Pattern Recognition Tasks", IEEE Transactions on Pattern Analysis and Machine Intelligence, 21, pp. 83-88.
[56] Juang B.H, Soong F.K, 2001. "Hands-free Telecommunications", HSC 2001, pp. 5-10.
[57] Herrera R.E, Sclabassi R.J, Sun M, Dahl R.E, Ryan N, 1999. "Single Trial Visual Event-Related Potential EEG Analysis Using the Wavelet Transform", Proc. IEEE BMES/EMBS, 2, p. 947.
[58] Wolpaw J.R, Birbaumer N, McFarland D.J, Pfurtscheller G, Vaughan T.M, 2002. "Brain-Computer Interfaces for Communication and Control", Clinical Neurophysiology, 113, pp. 767-791.
[59] Zheng P, Li X.P, Soh W.J, Shen K.Q, Ong C.J, Wilder-Smith E.P.V, 2006. "Lie Detection Using EEG and Support Vector Machine", IEEE Trans. on Biomedical Engineering.
[60] Takahashi K, Tsukaguchi A, 2003. "Remarks on Emotion Recognition from Multi-Modal Bio-Potential Signals", IEEE Trans. on Industrial Technology, 3, pp. 1654-1659.