
Reduced Complexity Space-Time-Frequency Model for Multi-Channel EEG and Its Applications


Yodchanan Wongsawat, Soontorn Oraintara, and K. R. Rao
Department of Electrical Engineering, University of Texas at Arlington,
416 Yates St., Arlington, TX, 76019-0016 USA
Email: yodchana@gauss.uta.edu, oraintar@uta.edu, and rao@uta.edu

Abstract— Searching for an efficient summarization of multi-channel electroencephalogram (EEG) behavior is a challenging signal analysis problem. Recently, parallel factor analysis (PARAFAC) has been reported as an efficient tool for extracting features of multi-channel EEG by simultaneously employing space-time-frequency knowledge, i.e., decomposing the multi-channel EEG signal into a linear combination of its space-time-frequency features. However, this decomposition scheme suffers from an expensive computational load when applied to long-term recordings or to a high number of channels. In this paper, a reduced computational complexity space-time-frequency model for multi-channel EEG is proposed by dividing a selected domain into segments, yielding additional segment signatures. By carefully selecting the number of segments, the features extracted from the proposed model are comparable with those from the conventional space-time-frequency model, while the computation time is reduced by more than 50%. Simulation results show that the proposed model can efficiently extract the eye blink artifact from background EEG. Furthermore, the classification accuracy obtained when applying the proposed model to a brain computer interface (BCI) application is also comparable with that of the conventional model.

I. INTRODUCTION
Multi-channel electroencephalogram (EEG) is widely known for its potential in real-time brain understanding. In order to understand this multi-channel signal, all signal features should be incorporated to form the model. Therefore, finding the right model to extract the features of this signal in a less time-consuming manner has become one of the challenging problems in neuroscience.

EEG was first modeled by its frequency statistics in [1]. The model was further improved by using the time-frequency representation of a single-channel EEG [2], [3], [4], which is known to be a nonstationary signal. Usually, EEG signals are recorded at multiple locations, yielding information about which part of the brain is functioning. This spatial knowledge is efficiently exploited using principal component analysis (PCA) in [5], [6]. However, with PCA, nonuniqueness occurs due to the arbitrary choice of rotation axes [11], which leads to a robustness problem of the model. Independent component analysis (ICA) has been applied to eliminate this nonuniqueness problem by imposing a statistical independence constraint, which is even stronger than the orthogonality of PCA [7], [8]. In conventional PCA and ICA, no frequency knowledge is exploited, even though it can be separately employed later. All space, time, and frequency domains are employed in [9] by analyzing regions of the time-frequency plane. Another interesting work on topographic time-frequency decomposition is proposed in [10] by imposing minimum norm and maximal smoothness on the time-frequency signature, respectively, for model uniqueness. Recently, Miwakeichi et al. [11] found that, by using parallel factor analysis (PARAFAC) [12], [13], these uniqueness constraints are unnecessary. They therefore proposed a novel model that applies the space-time-frequency representation of a multi-channel EEG to the 3-way PARAFAC to obtain the space, time and frequency signatures (features), called the space-time-frequency model (STF model). Although all domains are exploited in these models, they suffer from high computational complexity when the recording covers a long period of time or a high number of electrodes.

In this paper, we propose two methods to reduce the computational complexity of the STF model of a multi-channel EEG, and then suggest their possible applications. The first method estimates the STF model using the space-time-frequency-time/segment model (STF-TS model), obtained by sub-dividing the time domain into a number of segments, which results in a fourth dimension. This approach is appropriate when the signals are recorded over a long period of time. The 4-way PARAFAC is applied for the analysis of the 4-D data. The second method estimates the STF model using the space-time-frequency-space/segment model (STF-SS model), which is suitable when the number of channels (the dimension of the space domain) is high. By partitioning the channels into sub-groups, a 4-D data array is constructed and the 4-way PARAFAC is applied for the analysis. The STF-TS model is found to be useful for artifact removal in long-term EEG signals, while the STF-SS model is found to be useful for classification, where the number of channels can be arbitrarily high.

II. SPACE-TIME-FREQUENCY MODEL

This section reviews some background on the STF model. Each channel of the 1-D time-domain signal is first transformed to reveal its 2-D time-frequency representation. By stacking the 2-D time-frequency arrays of all the channels, we form a 3-D array in the space-time-frequency domain. A 3-way atomic decomposition is then applied to this 3-D array in order to decompose the data into its fundamental components.

A. Time-frequency transform

Time-frequency modeling is known to be practical for the analysis of 1-D nonstationary signals such as EEG [4], [14]. There are two main classes of methods to simultaneously localize signals in both the time and frequency domains: the Cohen's class (translating the signal in time and frequency) and the affine class (translating the signal for time resolution and scaling it for frequency resolution). Since the affine class captures the nonuniform nature of time-frequency signal components, it is more suitable for EEG [11]. An EEG signal, s(t), can be efficiently decomposed into affine-class time-frequency atoms by convolving it with the complex Morlet wavelet basis, w(f, t), as

ý(f, t) = |w(f, t) ∗ s(t)|² .   (1)

By stacking ý(f, t) of all channels, a 3-D array can be formulated as ý(n, f, t), where n is the channel index [11].
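To make (1) concrete, the following minimal Python/NumPy sketch computes the wavelet-power map of one channel with a hand-rolled complex Morlet kernel. It is an illustration only: the function name, the number of cycles and the normalization are our assumptions, not the settings of the toolbox [14] used by the authors.

```python
import numpy as np

def morlet_tf_power(s, srate, freqs, n_cycles=7.0):
    """Sketch of Eq. (1): y(f, t) = |w(f, t) * s(t)|^2 for a single EEG channel.

    s        : 1-D time-domain signal (one channel)
    srate    : sampling rate in Hz
    freqs    : 1-D array of analysis frequencies in Hz
    n_cycles : number of wavelet cycles (time/frequency resolution trade-off)
    """
    power = np.empty((len(freqs), len(s)))
    for i, f in enumerate(freqs):
        sigma_t = n_cycles / (2.0 * np.pi * f)               # temporal width of the wavelet
        t = np.arange(-3 * sigma_t, 3 * sigma_t, 1.0 / srate)
        w = np.exp(2j * np.pi * f * t) * np.exp(-t ** 2 / (2 * sigma_t ** 2))
        w /= np.sqrt(np.sum(np.abs(w) ** 2))                 # unit-energy normalization
        power[i] = np.abs(np.convolve(s, w, mode="same")) ** 2
    return power                                             # shape: (len(freqs), len(s))

# Stacking the per-channel maps gives the 3-D array y(n, f, t) of the next subsection:
# Y3 = np.stack([morlet_tf_power(x[n], srate, freqs) for n in range(x.shape[0])])
```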
B. Parallel Factor Analysis (PARAFAC)

In order to decompose the 3-D data array into space, time and frequency components, the 3-way PARAFAC is applied to ý(n, f, t) (denoted in array form as Ý), which can be formulated as

Ý_{N×F×T} = h(Á, Ć, D́) + É_{N×F×T} ,   (2)

where the 3-way PARAFAC model, i.e. the STF model, is

h(Á, Ć, D́) = Σ_{m=1}^{M} á(n, m) ć(f, m) d́(t, m),

and É is the 3-D residual array of the model. Á_{N×M} is the space signature with matrix elements á(n, m), Ć_{F×M} is the frequency signature with matrix elements ć(f, m), and D́_{T×M} is the time signature with matrix elements d́(t, m); n is the channel index ranging from 1 to N, f is the frequency index ranging from 1 to F, t is the time index ranging from 1 to T, and m is the component index ranging from 1 to M. A suggested number of components M is the one that maximizes the core consistency diagnostic (CORCONDIA) value, which is known as an efficient model validation criterion [12]. The parameters Á, Ć, and D́ can be estimated using the alternating least squares (ALS) algorithm [12], where the loss function is

argmin_{á,ć,d́} ‖ Ý − Σ_{m=1}^{M} á(n, m) ć(f, m) d́(t, m) ‖ .

Intuitively, the space signature Á obtained from the STF model represents the weighting parameters of the inter-channel correlation among the time-frequency representations of the channels. Since the STF model needs to process a 3-D data array simultaneously, the factorization becomes very complex if any of the three dimensions, i.e. space, time or frequency, is large, which makes this elegant model impractical for real-world applications.
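The 3-way decomposition in (2) can be reproduced with any off-the-shelf CP/PARAFAC routine. The sketch below uses the ALS-based `parafac` of the TensorLy library purely as an illustration; the paper itself relies on the MATLAB toolbox accompanying [12], and the array sizes here are placeholders.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

# Y3 stands for the space-time-frequency array y(n, f, t) of shape (N, F, T);
# random data is used here only so that the snippet runs stand-alone.
N, F, T, M = 16, 64, 512, 2
Y3 = tl.tensor(np.random.rand(N, F, T))

# ALS-based 3-way PARAFAC: factors[0] ~ A (N x M, space),
# factors[1] ~ C (F x M, frequency), factors[2] ~ D (T x M, time).
weights, factors = parafac(Y3, rank=M, n_iter_max=200, tol=1e-8)
A_hat, C_hat, D_hat = factors
print(A_hat.shape, C_hat.shape, D_hat.shape)   # (16, 2) (64, 2) (512, 2)
```

In practice, the rank M would be chosen by sweeping over candidate values and keeping the one with the highest CORCONDIA score, as described above.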
III. PROPOSED ESTIMATION METHODS FOR CALCULATING THE STF MODEL

In this section, modeling schemes based on the 'divide and conquer' philosophy are introduced. We found that, instead of calculating the model signatures from the original data as in Section II, we can estimate these signatures by cascading weighted versions of their local signatures. In this paper, the STF-TS and STF-SS models are presented in order to estimate the STF model.

A. Space-time-frequency-time/segment model (STF-TS model)

For a long-term EEG signal, the calculations of both the time-frequency transform and the PARAFAC are very complex. Therefore, we aim to reduce this computational complexity by dividing the time domain into segments. The time-frequency transform is then applied individually to each segment, forming a 4-D array signal, y(n, s, fs, ts) (denoted in array form as Y), where n is the channel index ranging from 1 to N, s is the time/segment index ranging from 1 to S, fs is the frequency index ranging from 1 to Fs, ts is the time index ranging from 1 to Ts, and m is the component index ranging from 1 to M. The 4-way PARAFAC is then applied to this 4-D array signal. The 4-way PARAFAC model of the 4-D array Y can be formulated in the same way as the 3-way PARAFAC of Section II-B, except that a new factor D is added:

Y_{N×S×Fs×Ts} = f(A, B, C, D) + E_{N×S×Fs×Ts} ,   (3)

where the 4-way PARAFAC model (STF-TS model) is

f(A, B, C, D) = Σ_{m=1}^{M} a(n, m) b(s, m) c(fs, m) d(ts, m),

and E is now the 4-D residual array of the model. A_{N×M} is the space signature with matrix elements a(n, m), B_{S×M} is the time/segment signature with matrix elements b(s, m), C_{Fs×M} is the frequency signature with matrix elements c(fs, m), and D_{Ts×M} is the time signature with matrix elements d(ts, m). It should be noted that T in the STF model is equal to Ts × S in the STF-TS model. The loss function is

argmin_{a,b,c,d} ‖ Y − Σ_{m=1}^{M} a(n, m) b(s, m) c(fs, m) d(ts, m) ‖ .

1) Parameter analysis of the STF-TS model: By decomposing the multi-channel EEG signal using the STF-TS model, the number of free parameters [15], i.e. the number of elements that PARAFAC has to find, is P4 = M(N + S + Fs + Ts), while the number of free parameters of the STF model is as high as P3 = M(N + F + T). It is clear that when T is large, P4 << P3. This means that fewer parameters need to be estimated, which reduces the computational complexity of the PARAFAC algorithm.

2) Estimation methods for calculating the STF model from the STF-TS model: In this section, we aim to estimate the STF model from the STF-TS model, i.e. to estimate the signatures of the STF model using the signatures of the STF-TS model. According to (3), the time signatures of the long-term multi-channel EEG signal can be estimated by cascading all S segments of the time signatures D, weighted by their corresponding time/segment signatures B. In order to efficiently estimate the STF model from the STF-TS model, the suggested number of segments S and number of components M are the ones that maximize the CORCONDIA value [12]:

[S, M] = argmax_{S,M} {CORCONDIA(Y, A, B, C, D)} .   (4)

In addition, to efficiently estimate the frequency signatures, the length of each segment should be greater than 1/min(f_EEG), where min(f_EEG) denotes the minimum (fundamental) frequency of the multi-channel EEG signal.

When the residual is neglected, the STF model (2) can be written in matrix form as

Ý_{n×F×T} = D́ Σ_{Án} Ć^T ,   (5)

where Σ_{Án} is the diagonal matrix with the n-th row of Á along the diagonal, n = 1, ..., N. Similarly, the STF-TS model (3) can be written in matrix form as

Y_{n×s×Fs×Ts} = D Σ_{Bs} Σ_{An} C^T ,   (6)

where Σ_{An} is the diagonal matrix with the n-th row of A on the diagonal and Σ_{Bs} is the diagonal matrix with the s-th row of B along the diagonal, s = 1, ..., S.

According to (5) and (6), the time signature D́ of the STF model can be estimated from the time signature D of the STF-TS model by vertically stacking the weighted segments:

D́ ≈ [ D Σ_{B1} ; D Σ_{B2} ; … ; D Σ_{BS} ] .   (7)

The scaled version of the frequency signature Ć can be efficiently approximated by C, while the space signature Á is approximately equal to A.
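A minimal end-to-end sketch of the STF-TS route is given below, reusing the hypothetical `morlet_tf_power` helper from Section II-A and TensorLy's `parafac`. It is an assumption-laden illustration rather than the authors' implementation; the final stacking step follows (7).

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

def stf_ts_estimate(x, srate, freqs, S, M):
    """Estimate the STF signatures via the STF-TS model (Eqs. (3) and (7)), sketch only.

    x : multi-channel EEG of shape (N, T); morlet_tf_power() as sketched in Section II-A.
    """
    N, T = x.shape
    Ts = T // S
    # 4-D array y(n, s, fs, ts): time-frequency transform of each time segment
    Y4 = np.stack([
        np.stack([morlet_tf_power(x[n, s * Ts:(s + 1) * Ts], srate, freqs)
                  for s in range(S)])
        for n in range(N)
    ])                                                         # shape (N, S, Fs, Ts)

    # 4-way PARAFAC: A (space), B (time/segment), C (frequency), D (time within segment)
    _, (A, B, C, D) = parafac(tl.tensor(Y4), rank=M, n_iter_max=200)

    # Eq. (7): cascade the segment-wise time signatures, weighted by the rows of B
    D_full = np.vstack([D @ np.diag(B[s]) for s in range(S)])  # shape (S*Ts, M)
    return A, C, D_full   # A approximates the space, C the frequency, D_full the time signature
```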
B. Space-time-frequency-space/segment model (STF-SS model)

Instead of segmenting the time domain of the signal, a similar approach can be applied to the space domain. All EEG channels are first divided equally into groups in the space domain to form a 3-D array signal over the space, time, and space/segment domains. A time-frequency transform is then applied to each channel to form a 4-D array signal over the space, time, frequency and space/segment domains. The 4-way PARAFAC is then applied to extract the features of this 4-D array, resulting in the STF-SS model. The space signatures of the STF model can be estimated by simply cascading all the space signatures of the STF-SS model, weighted by their corresponding space/segment signatures. With the STF-SS model, some global space correlations might obviously be lost; therefore, how to cluster the EEG channels into segments is important and needs further study. In this paper, the STF-SS model is tested on the application of left-right imagined signal classification [16]. The classification procedure and simulation results are presented in Section IV-B.

1) Parameter analysis of the STF-SS model: By decomposing a signal using the STF-SS model, the number of free parameters [15] is P4 = M(Ns + F + T + S), while the number of free parameters of the STF model is as high as P3 = M(N + F + T). Ns is the number of channels in each group, which is equal to N/S. It is clear that when N is high, P4 << P3.
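For illustration, the space-signature cascading described above mirrors (7) with the roles of time and space exchanged, and the free-parameter counts of Section III are one-line formulas. The helper names below are our own; only the sizes quoted in Section IV-B (15 channels, 3 groups of 5) come from the paper.

```python
import numpy as np

def cascade_space_signatures(A, B):
    """Estimate the full space signature (N x M) from the STF-SS factors.

    A : Ns x M space signature within each channel group
    B : S x M space/segment signature
    Analogue of Eq. (7) applied to the space domain instead of the time domain.
    """
    return np.vstack([A @ np.diag(B[s]) for s in range(B.shape[0])])

def free_parameters(M, dims):
    """Number of free PARAFAC parameters [15], e.g. M*(N+F+T) or M*(Ns+F+T+S)."""
    return M * sum(dims)

# Example with the classification setting of Section IV-B (F and T left symbolic):
# STF:    free_parameters(M=2, dims=(15, F, T))    -> 2*(15 + F + T)
# STF-SS: free_parameters(M=2, dims=(5, F, T, 3))  -> 2*(5 + F + T + 3)
```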
IV. SIMULATION RESULTS

The goal of this section is to investigate whether the proposed STF-TS and STF-SS models yield good approximations of the STF model for the purpose of real-world applications. The STF-TS model is applied to the artifact removal problem, which usually arises in long-term EEG, while the STF-SS model is applied to left-right imagined signal classification, where the number of channels can be arbitrarily high.

A. Eye blink artifact removal

In this experiment, we use a dataset of 64-channel left eye blink EEG [17]. The proposed STF-TS model is applied to a selected subset of 16 channels of the eye blink EEG data. The space (or channel) domain is arranged from channels 1-16 in the following order: Cz, FC2, CP1, F3, F4, P4, P3, FC5, AF4, C6, PO4, PO7, FT7, FP2, TP8 and Oz. Both the STF model and the STF-TS model are applied to the same data. By choosing the number of segments S as defined in (4), the STF model can be efficiently estimated from the STF-TS model with lower time complexity.

In this experiment, the number of components, M = 2, is selected according to the nature of the tested EEG signal and the CORCONDIA value. The CORCONDIA of the signal using the STF model is 99.9, while the CORCONDIA of the signal using the STF-TS model with S = 2 is 98.1. Figs. 1 (d), (e), (f) illustrate the estimated space, time and frequency signatures of the 16-channel left eye blink EEG signal obtained from the STF-TS model, compared with those of the STF model in Figs. 1 (a), (b), (c). It should be noticed that the first component of both the STF model and the STF-TS model efficiently extracts the eye blink artifact, which occurs mainly in channels 9 (AF4) and 14 (FP2) in the time interval around 0 to 0.2 second.
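Once the artifact component has been identified from its space and time signatures, its rank-1 space-time-frequency contribution can be reconstructed as an outer product of the corresponding signature columns. The sketch below is illustrative; the subtraction at the end is our assumption about how the extracted component could be used for suppression, not a step reported in the paper.

```python
import numpy as np

def component_contribution(A, C, D, m=0):
    """Rank-1 space-time-frequency contribution of PARAFAC component m (sketch).

    A, C, D : space (N x M), frequency (F x M) and time (T x M) signatures.
    Returns an N x F x T array built from the m-th columns; for the eye-blink data,
    component m = 0 is the artifact term (peaks at AF4/FP2 around 0-0.2 s).
    """
    return np.einsum("n,f,t->nft", A[:, m], C[:, m], D[:, m])

# Hypothetical suppression step (an assumption, not part of the paper's procedure):
# Y3_clean = Y3 - component_contribution(A_hat, C_hat, D_hat, m=0)
```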
With the STF model, we have to compute the PARAFAC of a 3-way array of size N × F × T. This takes longer because more free parameters must be estimated than with the STF-TS model. The first row of Table I shows that the number of free parameters is greatly reduced by using the STF-TS model: the size of the 3-D array Ý_{N×F×T} for the STF model is 16 × 512 × 512, and the size of the 4-D array Y_{N×S×Fs×Ts} for the STF-TS model is 16 × 2 × 256 × 256, so that P3 = 2(16 + 512 + 512) = 2,080 while P4 = 2(16 + 2 + 256 + 256) = 1,060. Consequently, the second row of Table I reports the relative computation time of both models. Assuming that the computation time of the STF model is 1, the STF-TS model uses only 0.49.

TABLE I
FREE PARAMETERS AND NORMALIZED TIME COMPLEXITY CONSUMED BY THE STF AND STF-TS MODELS OF A LEFT EYE BLINK EEG SIGNAL (ASSUMING THAT THE TIME CONSUMED BY THE STF MODEL = 1)

Models            STF     STF-TS
Free parameters   2,080   1,060
Time complexity   1       0.49

B. Left-right imagined signal classification

In this section, we extract the features of a multi-channel EEG signal in order to distinguish between left and right imagined signals using the STF and STF-SS models. The dataset [18] contains 9 subjects of 59-channel EEG recorded at a sampling rate of 100 Hz. Each subject is asked to perform specific tasks, e.g. imagining pushing a left or right button. For simplicity, 15 channels are selected from the original 59 channels. The first three subjects are used in this experiment.

Originally, the STF model was used for this left-right imagined signal classification in [16]. There, a multi-channel EEG signal is decomposed using the 3-way PARAFAC, and the space features of the selected component, i.e. the component selected from all M PARAFAC components, are employed as the feature vectors for classification with a support vector machine (SVM). To compare the performance of the STF and STF-SS models, we follow the same process as [16], except that, instead of the SVM, only simple linear discriminant analysis (LDA) is used for our classification experiment.

In this experiment, the space domain is divided into 3 groups (segments). Each group contains 5 EEG channels, as follows:
• Segment 1 (channels 1-5): FC3, C3, CP3, CP1, C1,
• Segment 2 (channels 6-10): FC1, FCz, Cz, CPz, CP2,
• Segment 3 (channels 11-15): C2, FC2, FC4, C4, CP4.

The second and third columns of Table II demonstrate that the STF-SS model is comparable to the STF model in EEG classification. In addition, the estimated space signatures of the STF-SS model (third column of Table II) are indeed reconstructed by cascading the weighted versions of the space signatures with their corresponding space/segment signatures. Therefore, in this classification experiment, it makes sense to consider only the space/segment signatures of the STF-SS model, which have a much shorter length, as the feature vectors. The fourth column of Table II illustrates that using the length-3 space/segment signatures as feature vectors also yields results comparable with the STF model, while the length of the feature vectors is reduced by a factor of 5. It should be noted that, even though the overall classification results are not impressive, they can be improved by using a more sophisticated classifier.
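The LDA stage of the classification experiment can be run with scikit-learn as sketched below. The feature files, the 10-fold cross-validation and the trial layout are our assumptions; the paper only states that a simple LDA replaces the SVM of [16].

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# One row per trial: either the length-15 space signature of the selected PARAFAC
# component (STF / STF-SS) or the length-3 space/segment signature (STF-SS).
features = np.load("trial_space_signatures.npy")  # hypothetical file, shape (n_trials, 15) or (n_trials, 3)
labels = np.load("trial_labels.npy")              # hypothetical file, 0 = imagined left, 1 = imagined right

lda = LinearDiscriminantAnalysis()
accuracy = cross_val_score(lda, features, labels, cv=10).mean()
print(f"LDA classification accuracy: {100 * accuracy:.2f}%")
```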
[Fig. 1 appears here: six panels showing the space signature vs. channel (1-16), the temporal signature vs. time (0-8 s) and the frequency signature vs. frequency (0-20 Hz), each for the 1st and 2nd components.]

Fig. 1. (a) Space signatures, (b) time signatures and (c) frequency signatures obtained from the STF model of a 16-channel left eye blink EEG signal. (d) Estimated space signatures, (e) estimated time signatures and (f) estimated frequency signatures obtained from the STF-TS model of the same 16-channel left eye blink EEG signal.

TABLE II
CLASSIFICATION ACCURACY (%) FOR LEFT-RIGHT IMAGINED SIGNAL CLASSIFICATION OF 3 SUBJECTS USING SPACE SIGNATURES (LENGTH = 15) OF THE STF MODEL, ESTIMATED SPACE SIGNATURES (LENGTH = 15) CALCULATED BY THE STF-SS MODEL, AND SPACE/SEGMENT SIGNATURES (LENGTH = 3) OF THE STF-SS MODEL AS FEATURE VECTORS

Models                      STF      STF-SS            STF-SS
Signatures                  Space    Estimated space   Space/segment
Length of feature vectors   15       15                3
Subject 1                   63.33    60                58.89
Subject 2                   62.22    66.67             65.56
Subject 3                   57.78    57.78             55.56

V. CONCLUSIONS

A reduced-complexity model of a multi-channel EEG signal has been proposed by introducing segment signatures over selected domains, i.e. time and space. The proposed estimated space-time-frequency models can efficiently summarize the nature of a multi-channel EEG signal in less time than the conventional STF model. The efficiency of the model depends closely on the choice of the number of segments. Possible applications of the proposed model include artifact removal and signal classification. A more sophisticated model is also possible by combining the STF-TS model with the STF-SS model, or by introducing a frequency/segment domain.

VI. ACKNOWLEDGEMENT

This work was supported in part by NSF Grant ECS-0528964.

REFERENCES

[1] L. D. Silva, "Analysis of EEG nonstationarities," Electroencephalogr. Clin. Neurophysiol. Suppl., vol. 34, pp. 163-179, 1978.
[2] S. Makeig, "Auditory event-related dynamics of the EEG spectrum and effects of exposure to tones," Electroencephalogr. Clin. Neurophysiol. Suppl., vol. 86, pp. 283-293, 1993.
[3] O. Bertrand, J. Bohorquez and J. Pernier, "Time-frequency digital filtering based on an invertible wavelet transform: an application to evoked potential," IEEE Trans. Biomed. Eng., vol. 41, pp. 77-88, 1994.
[4] L. Cohen, Time-frequency analysis, Upper Saddle River: Prentice Hall, 1995.
[5] T. D. Lagerlund, F. W. Sharbrough and N. E. Busacker, "Spatial filtering of multichannel electroencephalographic recording through principal component analysis by singular value decomposition," J. Clin. Neurophysiol., vol. 14, pp. 73-83, 1997.
[6] A. C. Soong and P. Z. Koles, "Principal-component localization of sources of the background EEG," IEEE Trans. Biomed. Eng., vol. 42, pp. 59-67, 1995.
[7] A. Cichocki and S. Amari, Adaptive blind signal and image processing, John Wiley and Sons, 2002.
[8] A. Hyvarinen, J. Karhunen and E. Oja, Independent component analysis, John Wiley and Sons, 2001.
[9] A. Gonzalez, R. G. Menendez, C. M. Lantz, O. Blank, C. M. Michel and T. Landis, "Non-stationary distributed source approximation: an alternative to improve localization procedures," Hum. Brain Mapp., vol. 14, pp. 81-95, 2001.
[10] T. Koenig, F. Marti-Lopez and P. A. Valdes-Sosa, "Topographic time-frequency decomposition of EEG," J. NeuroImage, vol. 14, pp. 383-390, 2001.
[11] F. Miwakeichi, E. Martinez-Montes, P. A. Valdes-Sosa, N. Nishiyama, H. Mizuhara and Y. Yamaguchi, "Decomposing EEG data into space-time-frequency components using parallel factor analysis," J. NeuroImage, vol. 22, pp. 1035-1045, 2004.
[12] R. Bro, "Multi-way analysis in the food industry: models, algorithms and applications," Ph.D. Thesis, University of Amsterdam and Royal Veterinary and Agricultural University; MATLAB toolbox available [online] at http://www.models.kvl.dk/users/rasmus/.
[13] R. A. Harshman, "Foundations of the PARAFAC procedure: models and conditions for an explanatory multi-modal factor analysis," UCLA Work. Pap. Phon., vol. 16, pp. 1-84, 1970.
[14] Time-frequency toolbox, available [online] at http://tftb.nongnu.org/.
[15] M. Morup, L. K. Hansen, C. S. Herrmann, J. Parnas and S. M. Arnfred, "Parallel factor analysis as an exploratory tool for wavelet transformed event-related EEG," J. NeuroImage, vol. 29, pp. 938-947, 2006.
[16] K. Nazarpour, S. Sanei, L. Shoker and J. Chambers, "Parallel space-time-frequency decomposition of EEG signals for brain computer interfacing," EUSIPCO, Sep. 2006.
[17] Y. Wongsawat, S. Oraintara, T. Tanaka and K. R. Rao, "Lossless multi-channel EEG compression," IEEE ISCAS, pp. 1611-1614, May 2006.
[18] A. Osman and A. Robert, "Time-course of cortical activation during overt and imagined movements," Proc. Cognitive Neuroscience Annu. Meet., New York, 2001.
