
IEEE TRANSACTIONS ON BIOMEDICAL ENGINEERING, VOL. 55, NO. 4, APRIL 2008

An ECG Signals Compression Method and Its Validation Using NNs

Catalina Monica Fira* and Liviu Goras, Senior Member, IEEE

Abstract—This paper presents a new algorithm for electrocardiogram (ECG) signal compression based on local extreme extraction, adaptive hysteretic filtering, and Lempel–Ziv–Welch (LZW) coding. The algorithm has been verified using eight of the most frequent normal and pathological types of cardiac beats and a multilayer perceptron (MLP) neural network trained with original cardiac patterns and tested with reconstructed ones. Aspects regarding the possibility of using principal component analysis (PCA) for cardiac pattern classification have been investigated as well. A new compression measure called "quality score," which takes into account both the reconstruction errors and the compression ratio, is proposed.

Index Terms—Biomedical signal processing, data compression, neural networks (NNs), signal processing.

Manuscript received February 27, 2007. Asterisk indicates corresponding author.
*C. M. Fira is with the Institute for Computer Science, Bd. Carol I 22A, Iasi 700505, Romania (e-mail: mfira@scs.etc.tuiasi.ro).
L. Goras is with the Faculty of Electronics and Telecommunications, "Gh. Asachi" Technical University, Iasi 700505, Romania, and also with the Institute for Computer Science, Iasi 700505, Romania (e-mail: lgoras@etc.tuiasi.ro).
Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/TBME.2008.918465

I. INTRODUCTION

COMPRESSION methods have gained in importance in recent years in many medical areas like telemedicine and health monitoring, all of which imply the storage, processing, and transmission of large quantities of data. Compression methods can be classified into two main categories: lossless and lossy. Compression algorithms can be constructed through direct methods, linear transformations, and parametric methods.

Even though many compression algorithms have been reported so far in the literature, not many are currently used in monitoring systems and telemedicine. The most important reason seems to be the fear that the recovery distortions produced by compression methods with loss of information might lead to erroneous interpretations. The aim of this paper is to propose a new low-complexity compression method leading to compression ratios better than 15:1 and to suggest a qualitative validation of the compression results through classifications based on neural networks (NNs).

The following provides a summary of previous work investigating the problem of ECG compression.

Direct methods such as turning point (TP) [1], amplitude zone time epoch coding (AZTEC) [2], the coordinate reduction time encoding system (CORTES) [3], scan-along polygonal approximation (SAPA) [4], and entropy coding [5] are based on the extraction of a subset of significant samples.

The methods based on linear transformations use various linear transformations (Fourier, Walsh, cosine, Karhunen–Loeve, wavelet, etc. [6]–[9]) to code a signal through the most significant coefficients of its representation with respect to a particular basis, chosen by means of an error criterion.

Parametric methods, more recently reported in the literature, are combinations of direct and transformation techniques, typical examples being beat codebooks [10], artificial NNs [11], peak picking, and vector quantization [11].

The three important features of a compression algorithm are the compression measure, the reconstruction error, and the computational complexity, the first two being interdependent. The computational complexity is directly related to practical implementation considerations and needs to be as low as possible, especially for portable equipment [12].

The compression ratio (CR) is defined as the ratio between the number of bits needed to represent the original and the compressed signal. The compression efficiency of an algorithm can also be evaluated using the bit rate, in bits per second (BPS) (number of bits in the compressed data divided by the original signal duration), and/or in bits per sample (b/sample) (number of bits in the compressed data divided by the number of samples of the original signal).

For lossy compression techniques, the definition of the error criterion used to appreciate the distortion of the reconstructed signal with respect to the original one is of paramount importance, particularly for biomedical signals like the electrocardiogram (ECG), where a slight loss or modification of information can lead to wrong diagnostics. The measurement of these distortions is a difficult problem and it is only partially solved for biomedical signals. In most ECG compression algorithms, the percentage root-mean-square difference (PRD) measure, defined as

$$\mathrm{PRD} = \sqrt{\frac{\sum_{n=1}^{N}\left[x(n)-\tilde{x}(n)\right]^{2}}{\sum_{n=1}^{N}x^{2}(n)}}\times 100 \qquad (1)$$

is employed, where $x(n)$ is the original signal, $\tilde{x}(n)$ is the reconstructed signal, and $N$ is the length of the window over which the PRD is calculated. The normalized version of the PRD, the PRDN, which does not depend on the signal mean value, is defined as

$$\mathrm{PRDN} = \sqrt{\frac{\sum_{n=1}^{N}\left[x(n)-\tilde{x}(n)\right]^{2}}{\sum_{n=1}^{N}\left[x(n)-\bar{x}\right]^{2}}}\times 100 \qquad (2)$$

where $\bar{x}$ is the mean value of the original signal.
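For illustration, both measures follow directly from (1) and (2); below is a minimal Python sketch (ours, not part of the original implementation), assuming `x` and `x_rec` are equal-length NumPy arrays holding the original and the reconstructed window:

```python
import numpy as np

def prd(x, x_rec):
    """Percentage root-mean-square difference, as in (1)."""
    return 100.0 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum(x ** 2))

def prdn(x, x_rec):
    """Normalized PRD, as in (2); insensitive to the signal mean."""
    return 100.0 * np.sqrt(np.sum((x - x_rec) ** 2)
                           / np.sum((x - np.mean(x)) ** 2))
```

Because the PRD divides by the raw signal energy, a large baseline offset can make it look deceptively small, which is exactly why the mean-independent PRDN is also reported.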



Other measures, such as the root-mean-square error (RMS) and the signal-to-noise ratio (SNR), are used as well [1]. In order to evaluate the relative preservation of the diagnostic information in the reconstructed signal compared to the original one, Zigel [13], [14] introduced a new measure, called weighted diagnostic distortion (WDD), which consists in comparing the P-wave, T-wave, and QRS complex features of the two ECG signals; it is, however, not always easy to use.

In all cases, the final verdict regarding the fidelity and clinical acceptability of the reconstructed signal should be validated through visual inspection by the cardiologist physician.

II. COMPRESSION METHOD

The proposed compression method [15] is a combination of signal processing techniques (resembling the peak-picking compression techniques [12], [16]) and information transmission theory based techniques. It can be viewed as a cascade of two stages, as shown in Fig. 1. In the first stage, the essential information is extracted from the ECG signal and, in the second one, the resulting information is delta and Lempel–Ziv–Welch (LZW) coded.

Fig. 1. Scheme of the proposed compression method.

A. Coding Method

The preprocessing stage consists of filtering with a 6-degree Savitzky–Golay filter (SGF) using a 17-point constant window. SGFs, also called digital smoothing polynomial filters or least-squares smoothing filters, are typically used to "smooth out" a noisy signal whose frequency span (without noise) is large. Compared to finite-impulse response (FIR) filters, which are good at rejecting high-frequency noise, SGFs are more efficient at preserving the high-frequency components of the signal [17]. The parameters of the Savitzky–Golay filter have been empirically adopted after testing various degrees of the polynomial and dimensions of the window. It has been found that smaller degrees of the polynomial and larger windows lead to amplitude distortion of the R-waves, while higher degrees of the polynomial and smaller windows lead to insignificant filtering.

The next step consists in extracting and rounding the local minima and maxima values of the filtered ECG signal, which is equivalent to a nonuniform sampling followed by a quantization of both amplitude and position. We will call the resulting discrete signal with nonuniformly spaced samples the signal skeleton. Knowing the location and the amplitude of the local extremes, in a first approximation it is possible to reconstruct most parts of the ECG signal without loss of relevant information.

In order to improve the compression rate without significantly increasing the distortion error, some of the skeleton samples are discarded while a few others are added, as discussed in the following.

Thus, samples for which the difference from previous ones is less than a threshold TH are discarded. This is done in two steps, with an adaptive hysteretic filtering based on the statistics of the ECG signal.

The calculation of TH starts with the computation of a first threshold, denoted $TH_1$,

$$TH_1 = \mathrm{ST}\left[x_s(i)-x_s(i-1)\right] \qquad (3)$$

where $\mathrm{ST}[\cdot]$ is the standard deviation and $x_s(i)$ represents the amplitude of the $i$th sample of the skeleton. The aim of this threshold is to select samples with a relatively small variation that do not convey relevant information (they can be considered noise) and that will determine the level of TH.

The standard deviation $ST_1$ of the skeleton samples having amplitude variations less than $TH_1$ is then calculated. The threshold TH is determined using the formula

$$TH = k \cdot ST_1 \qquad (4)$$

where the most convenient values for $k$ have been found to be between 1.5 and 2.5. Values outside the above interval have been tested as well. Those below 1.5 lead to a poor filtering of the extreme values representing noise, while values higher than 2.5 lead to distortion errors of the reconstructed signal, as will be shown later in Fig. 6.

The reconstruction errors based on the skeleton obtained in the previous manner proved to be acceptable, except for the zones of the QRS complexes, where adjacent skeleton samples are rather far from each other. The error can be further decreased by adding extra samples to the skeleton resulting after the application of the threshold TH. The location of the intermediary points added to the skeleton is determined through a third threshold, $TH_2$, as follows: wherever the absolute value of the difference between two successive amplitudes is higher than $TH_2$, a sample of the original signal taken in the middle of the distance between the skeleton samples is added to the skeleton.

A convenient formula for the value of the $TH_2$ threshold, relation (5), has been found empirically; it depends on the length $N$ of the signal and on a constant, whose convenient value has been established by trying various values for $k$ in (4) and for the constant in (5). It has been observed that these constants significantly affect the compression results based only on the described preprocessing method. For example, for one choice of the constants, a mean compression rate of 4.55 and a PRD of 0.77 have been obtained, while for another choice the compression rate and the PRD were 6.51 and 0.91, respectively.
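The first stage can be summarized in code. The following is a minimal Python sketch of our own (not the authors' implementation): it uses SciPy's `savgol_filter` for the SGF step and follows the thresholding rules described above; the function name `extract_skeleton`, the exact form assumed for (3), and the default `k` are assumptions based on the text, and the $TH_2$ refinement step is omitted.

```python
import numpy as np
from scipy.signal import savgol_filter

def extract_skeleton(ecg, k=2.0):
    """Stage 1 sketch: SG filtering, local extrema, adaptive hysteretic filtering."""
    # 6th-degree Savitzky-Golay filter with a 17-point window.
    filt = savgol_filter(ecg, window_length=17, polyorder=6)
    # Local minima and maxima: sign changes of the first difference.
    d = np.diff(filt)
    idx = np.where(np.sign(d[:-1]) != np.sign(d[1:]))[0] + 1
    amp = np.round(filt[idx])          # amplitude quantization by rounding
    # TH1: spread of consecutive skeleton differences, cf. (3).
    th1 = np.std(np.diff(amp))
    # ST1: std of the samples whose variation is below TH1; TH = k * ST1, cf. (4).
    small = np.abs(np.diff(amp)) < th1
    th = k * np.std(amp[1:][small])
    # Hysteretic filtering: keep a sample only if it moved at least TH
    # away from the previously kept one.
    kept = [0]
    for i in range(1, len(amp)):
        if abs(amp[i] - amp[kept[-1]]) >= th:
            kept.append(i)
    kept = np.array(kept)
    return idx[kept], amp[kept]
```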

Fig. 2. Original signal (continuous line) and skeleton after hysteretic filtering (including extra samples).

Fig. 2 represents part of record number 100 from the MIT-BIH Arrhythmia database¹ as well as the points of the "enriched" skeleton obtained using the procedure described before.

¹[Online]. Available: http://www.physionet.org/physiobank/database/mitdb/

In the last two steps, the obtained skeleton is delta coded, for both amplitudes and distances, and then LZW coded. LZW coding [18], [19] is a lossless "dictionary-based" compression algorithm which looks for repetitive sequences of data, which are used to build a dictionary.
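For concreteness, the last two steps might look as follows. This is a minimal sketch, not the authors' coder: it is a textbook LZW over an integer alphabet, and a real coder would also have to transmit the alphabet (or fix it in advance) so that a decoder can rebuild the dictionary.

```python
def delta_code(values):
    """Keep the first value, then only the differences between neighbors."""
    values = list(values)
    return values[:1] + [b - a for a, b in zip(values, values[1:])]

def lzw_code(symbols):
    """Textbook LZW over an integer alphabet, returning dictionary indices."""
    table = {(s,): i for i, s in enumerate(sorted(set(symbols)))}
    w, out = (), []
    for s in symbols:
        wc = w + (s,)
        if wc in table:
            w = wc                      # extend the current phrase
        else:
            out.append(table[w])        # emit code for the known phrase
            table[wc] = len(table)      # grow the dictionary
            w = (s,)
    if w:
        out.append(table[w])
    return out
```

Mirroring the text, the position and amplitude vectors of the skeleton would each be passed through `delta_code` and then `lzw_code` separately.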
B. Decoding

The reconstruction of the ECG signal from its compressed version begins with the LZW decoding of the "enriched" skeleton, leading to two vectors representing the amplitudes and the positions of the skeleton lines. The ECG reconstruction is then made through linear or cubic interpolation. Obviously, linear interpolation implies a smaller computational complexity at the price of a higher reconstruction error. Even though the reconstruction errors expressed in PRD and PRDN are smaller in the case of cubic interpolation, it has been found that the reconstruction distortions evaluated by means of the method of pattern classification based on NNs (to be further described in this paper) give comparable results, without significant differences in cardiac pattern classification, as expected from pure visual inspection.
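The decoding path is short enough to sketch directly; in the sketch below (ours), `positions` and `amplitudes` are assumed to be the two vectors recovered by the LZW and delta decoding, and the linear/cubic choice is exposed as a flag.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def reconstruct(positions, amplitudes, n_samples, cubic=True):
    """Interpolate the nonuniform skeleton back onto the uniform sample grid."""
    grid = np.arange(n_samples)
    if cubic:
        return CubicSpline(positions, amplitudes)(grid)
    # Linear interpolation: cheaper, but with a higher reconstruction error.
    return np.interp(grid, positions, amplitudes)
```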
C. Evaluation of the Compression Method

As shown before, compression methods can be evaluated by means of classical distortion measures like the PRD, PRDN, RMS, and SNR.

In the following, we propose an alternative estimation of the reconstruction errors based on a multilayer perceptron (MLP) NN trained and tested initially with original heartbeat patterns and then tested with reconstructed signals. The validation of the method has been performed for eight classes of segmented heartbeats, for which the confusion matrix has been studied. The eight envisaged classification classes are the most frequent types of heartbeats, i.e., atrial premature beat, normal beat, left bundle branch block beat, right bundle branch block beat, premature ventricular contraction, fusion of ventricular and normal beat, paced beat, and fusion of paced and normal beat.

Fig. 3. Segmented ECG signal.

The segmentation was done with respect to the R-wave localization, using the wave detection method presented in [20]. The detection algorithm consists of the Pan–Tompkins preprocessing technique and a modified R-wave detection with low computational complexity. Following the R-wave detection, the segmentation has been done as follows: a pattern begins at the middle of the RR interval between the previous and the current heartbeat and finishes at the middle of the next RR interval (see the sketch below). A sequence of the original ECG signal and its segmentation into cardiac beats based on the R-wave localization is presented in Fig. 3. Each segment contains the P-wave, the QRS complex, and the T-wave. Failed or false detections of the R-wave will determine wrong segmentations. This will not affect the compression rates but will lead to low recognition rates. Thus, the proposed qualitative method of compression evaluation is significantly influenced by the segmentation accuracy.
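In code, the mid-RR segmentation rule reads as follows (an illustrative sketch; `r_peaks` is assumed to be the array of detected R-wave sample indices):

```python
import numpy as np

def segment_beats(ecg, r_peaks):
    """One pattern per beat: from mid-RR before the R-wave to mid-RR after it."""
    beats = []
    # The first and last detected beats lack a neighbor and are skipped.
    for prev_r, r, next_r in zip(r_peaks[:-2], r_peaks[1:-1], r_peaks[2:]):
        start = (prev_r + r) // 2
        stop = (r + next_r) // 2
        beats.append(np.asarray(ecg[start:stop]))
    return beats
```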
To evaluate the precision of the reconstructed signal, a trained MLP neural network has been used for classification. For training and testing, the original heartbeat patterns of the ECG signals (normal and seven cardiac pathologies) from the database and, respectively, the corresponding heartbeat patterns of the compressed signal have been used. As the segmented patterns had different dimensions, each pattern was resampled to 100 samples for training the MLP neural network. This value has been chosen to decrease the dimension of the patterns while preserving the waveform. Even though the initial number of samples was about 300, no practical loss of information occurred through resampling (visual inspection made by the specialist physician). The 300-sample cardiac patterns have been decimated to 100 samples as follows. First, a resampling of the ECG has been achieved by introducing extra samples through linear interpolation; these were then passed through a low-pass FIR filter with a 90-Hz cutoff frequency, followed by decimation. The resampling of the ECG segments did not affect the waveform.

For training the MLP network, a back-propagation algorithm with gradient descent and cross-validation was used. 7500 patterns have been used, of which 70% for training, 15% for the validation set, and 15% for testing.
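A rough Python equivalent of this length normalization is sketched below; it is our approximation of the described pipeline, and the 31-tap filter length and the forward-backward (`filtfilt`) application are assumptions not stated in the text.

```python
import numpy as np
from scipy.signal import firwin, filtfilt

def normalize_beat(beat, n_out=100, fs=360.0, cutoff=90.0):
    """Resample a ~300-sample beat to n_out samples: low-pass, then interpolate."""
    # 90-Hz low-pass FIR, applied forward and backward to avoid phase shift.
    taps = firwin(numtaps=31, cutoff=cutoff, fs=fs)
    smooth = filtfilt(taps, [1.0], beat)
    # Linear interpolation onto a uniform grid of n_out points.
    grid = np.linspace(0.0, len(beat) - 1.0, n_out)
    return np.interp(grid, np.arange(len(beat)), smooth)
```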

TABLE I
AVERAGE RESULTS AFTER THE FEATURE EXTRACTION STAGE OF THE COMPRESSION ALGORITHM FOR 24 RECORDS

TABLE II
AVERAGE FINAL RESULTS OF THE COMPRESSION ALGORITHM FOR 24 RECORDS

The patterns in the three sets were distinct. The following results have been obtained using records no. 100, 101, 103, 104, 106, 118, 119, 200, 201, 205, 207, 210, 213, 214, 217, and 219 for training, and records no. 102, 105, 107, 202, 203, 208, 209, 212, and 215 for testing and validation. From these last records, the cardiac patterns have been divided into two data sets, the validation set and the testing set. These sets have been obtained through a random selection mechanism, each pattern being selected either for the testing set or for the validation set, but never for both.

The error function used was the mean square error, and the stop criterion was the least error on the validation set.

As an alternative to the previous method, the possibility of heartbeat classification using PCA applied to the patterns obtained from both the original and the compressed ECG has also been investigated, using an MLP trained and tested with the principal components corresponding to the most significant eigenvalues.

III. EXPERIMENTAL RESULTS

In order to validate the proposed classification algorithm and to compare it with other classification methods, 24 records consisting of the first 10 000 samples from the MIT-BIH Arrhythmia database¹ have been used. The ECG signals were digitized through sampling at 360 samples/s, quantized, and encoded with 11 bits.

Even though, as shown before, linear interpolation has been used as well, the results reported in this paper are based only on cubic interpolation.

Since the variability of the signal around its baseline is what should be preserved, and not the baseline itself, the performance measure used to reveal the accuracy of the algorithm was the variance of the error with respect to the variance of the signal.

The average compression ratios and the average PRD and PRDN for the 24 data records show very low values, as shown in Table I. After the first preprocessing stage, the highest compression ratio was achieved for record no. 217 and the lowest for record no. 232. The average values for the first stage were 7.41:1 for the compression ratio and 1.17% for the PRD, with extreme values of 0.29% and 4.13%. Only for five records have values much higher than the mean been obtained, due to the presence of noise.

In the second stage, the two vectors (amplitude and location) representing the skeleton from the previous stage were delta coded and then compressed with LZW. For the location and amplitude vectors, the average compression rates were 2:1 and 3:1, respectively.

Fig. 4. Original and reconstructed ECG signals for record no. 217 (best case: CR = 34:1, BPS = 127, and PRD = 4.13%).

Fig. 5. Original and reconstructed ECG signals for record no. 232 (worst case: CR = 6.37:1, BPS = 678, and PRD = 0.29%).

The global average compression rate obtained with the algorithm was 18.27:1 (see Table II). Since the LZW compression is lossless, the PRD is conserved.

High-quality reconstructed signals were obtained, as shown in Figs. 4 and 5, where the best and worst cases are presented. Good reconstruction has been obtained in all cases, including the extreme ones (records no. 217 and 232). After using the LZW algorithm, the highest and lowest compression ratios were achieved for records no. 217 and 232 (CRs of 34:1 and 6.37:1, respectively). The global results for all 24 records are shown in Table III.

For all cases presented so far, the value of the parameter k used for the calculation of TH was equal to 2. The compression results for the cases when k took values between 1.5 and 2.5 are shown in Table IV. However, a conclusion about the compression quality cannot be drawn using only the values in Table IV.

TABLE III
RESULTS OF THE COMPRESSION ALGORITHM FOR 24 RECORDS

TABLE IV
RESULTS OF THE COMPRESSION ALGORITHM FOR SEVERAL VALUES OF k IN RELATION (4)

TABLE V
COMPARISON BETWEEN THE PROPOSED METHOD AND OTHER COMPRESSION ALGORITHMS FOR RECORD NO. 117

Fig. 6. Original and reconstructed ECG signals for record no. 100 and k = 2.5.

Visual inspection showed that the acceptable values for k should not surpass 2.3, in agreement with the small PRD and the cardiologist's opinion. From the example in Fig. 6, it is obvious that the results for k = 2.5 involve unacceptable distortions for the cardiologist.

The optimal value of k has been established in close agreement with the opinion of a cardiologist, who visually inspected all 24 original and reconstructed records for various values of k between 1 and 3.

The compression evaluations found in the literature envisage either the reconstruction distortions or a quantitative description of the compression itself. To our knowledge, a measure taking both aspects into consideration has not been used so far. This is why we define a compression measure called the "quality score" (QS) as the ratio between the CR and the PRD. A high score represents a good compression. The QS may be very useful when it is difficult to identify the best compression method while taking into consideration the compromise between compression and reconstruction errors. As an example, it has been possible to compare, by means of their quality scores, three compressed records with close values of CR and PRD, i.e., records 117, 119, and 104.

TABLE VI
COMPARISON BETWEEN THE PROPOSED METHOD AND OTHER COMPRESSION ALGORITHMS: AVERAGE VALUES FOR 24 RECORDS

TABLE VII
RESULTS OF PATTERN CLASSIFICATION WITH VARIOUS MLP CONFIGURATIONS (ORIGINAL PATTERNS)

TABLE VIII
CONFUSION MATRIX FOR ORIGINAL PATTERN CLASSIFICATION WITH A 100-50-8 MLP (THE SYMBOLS ON THE ROWS AND COLUMNS REPRESENT THE NETWORK OUTPUT AND THE CORRECT OUTPUT CLASS, RESPECTIVELY)

TABLE IX
CONFUSION MATRIX FOR A 100-50-8 MLP TRAINED WITH ORIGINAL PATTERNS AND TESTED WITH RECONSTRUCTED SIGNALS (CLASSIFICATION ACCURACY = 83.5%) (THE SYMBOLS ON THE ROWS AND COLUMNS REPRESENT THE NETWORK OUTPUT AND THE CORRECT OUTPUT CLASS, RESPECTIVELY)

It is implicitly assumed that the decoded ECG signals have been validated through visual inspection by the physician. For the 24 records analyzed, an average quality score (QS) of 20.73 was obtained, with a maximum value of 45.56 for record no. 100 and a minimum value of 7.69 for record no. 214 (see Table III).

The results obtained from the classification with ECG patterns depend on the MLP configuration. For the classification using segmented original ECGs, the results are presented in Table VII. For the MLP configuration 100-50-50-8, a classification rate of 92.25% has been obtained, compared to 91.54% obtained using the configuration 100-50-8. The complexity of the two-hidden-layer NN and its disadvantages relative to the one-hidden-layer network, as well as the good classification results obtained with the one-hidden-layer network (100-50-8), were the reasons for not using the two-hidden-layer networks. The good classification results obtained with an NN trained and tested with original patterns certified the relevance of the cardiac beat classification method for the eight proposed classes. The confusion matrix (see Table VIII) obtained in the case of the 100-50-8 configuration presents a uniform repartition of the distribution between the eight classes used.

Starting from the previous results, by training an MLP (100-50-8 configuration) with original patterns and testing it with patterns derived from the compressed signal, a classification ratio of 83.5% has been obtained. The confusion matrix presented in Table IX shows the same uniform distribution of the classification rate across the eight classes and validates the compression method from a qualitative point of view.

In another set of experiments, made with the aim of decreasing the complexity of the MLP and the training time, PCA has been used. Even though PCA is a lossy compression method, its use for cardiac pattern analysis and compression, keeping the first principal components, does not involve significant distortion, which has been confirmed by the good results obtained for cardiac pattern classification. Preserving only the first 19 principal components resulting from applying PCA to the original pattern matrix, and using these components for training and testing the MLP, results similar to those obtained with the actual patterns have been obtained. The results are presented in Table X. Table XI presents the uniform distribution of the classification over the eight classes used.

When PCA has been applied to patterns obtained from the compressed signal, it has been found that, in order to reconstruct the patterns with good quality, the first 30 principal components are necessary. In this case, the MLP configuration becomes 30-50-8. A classification sketch along these lines is given below.
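As an illustration of the validation setup, a PCA-plus-MLP classifier equivalent in spirit to the configurations above can be put together with scikit-learn. This is our sketch, not the authors' code; the early-stopping options only approximate the validation-set stop criterion described earlier, and the function name `train_validator` is an assumption.

```python
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

def train_validator(patterns, labels, n_components=19):
    """PCA front end plus a one-hidden-layer MLP (19-50-8-style configuration)."""
    pca = PCA(n_components=n_components).fit(patterns)
    # Gradient-descent training with a held-out validation set for stopping,
    # echoing the back-propagation/cross-validation scheme described above.
    mlp = MLPClassifier(hidden_layer_sizes=(50,), solver="sgd",
                        early_stopping=True, validation_fraction=0.15,
                        max_iter=2000)
    mlp.fit(pca.transform(patterns), labels)
    return pca, mlp
```

The compression itself is then judged by scoring the same trained model on patterns segmented from the reconstructed signal.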

TABLE X
RESULTS REGARDING PATTERN CLASSIFICATION WITH VARIOUS MLP CONFIGURATIONS USING PCA

TABLE XI
CONFUSION MATRIX FOR ORIGINAL PATTERN CLASSIFICATION WITH A 19-50-8 MLP USING PCA (THE SYMBOLS ON THE ROWS AND COLUMNS REPRESENT THE NETWORK OUTPUT AND THE CORRECT OUTPUT CLASS, RESPECTIVELY)

IV. DISCUSSION

In the following, a short comparison of our findings to previously reported results is given. To the authors' knowledge, the number of papers dealing with ECG classification into a higher number of classes is rather small, most of them treating the problem of simultaneous detection or of the classification of a few pathologies. Among them, De Chazal [21] reports a classification accuracy of 97.4% for five classes (with a total of 15 pathologies), while Prasad [22] and Osowski [23], using wavelets and SVMs, respectively, report 96%.

We may also remark that the validation of the proposed compression method has been done through reconstruction error measurements using the PRD, PRDN, RMS, SNR, and QS, as well as through the visual inspection of a cardiologist, with whom the value of k in the TH formula has been established based on the results in Table IV.

Since the objective of the method was not that of finding an optimal method for classifying heartbeats, but of verifying the compression quality using classifiers, we consider that classification rates of about 90% validate a good compression quality from the distortion point of view. We are aware that better classification results can be obtained using other NN architectures or classification algorithms.

A comparison with the classification results reported in the literature is rather difficult to make due, among other things, to the databases used. The accuracy of the classification used for the validation of the compression method compares favorably with other compression techniques (JPEG2000 [6], wavelet [7], SPIHT [8], Djohn [9], Hilton [9], AZTEC [24], TP [24], CORTES [24], SAPA [24]), as seen from Table V, where the PRD, CR, and QS for record no. 117 are presented, and from Table VI, where the mean values of the same indices are given.

V. CONCLUSION

A new algorithm for ECG signal compression based on local extreme extraction, adaptive hysteretic filtering, and LZW coding has been presented.

The algorithm was tested for the compression of eight of the most frequent normal and pathological types of cardiac beats from ECG signals of the MIT-BIH database and has been validated using neural networks trained with original heartbeat patterns (including PCA) and tested with the reconstructed signals. The mean value of the CR for the 24 records analyzed was 18.27:1, with a mean PRD of 1.17%, all clinical information being preserved, as validated through visual inspection of all cases by the cardiologist physician. The method is fast and easy to implement. A new compression measure called the "quality score," which takes into account both the reconstruction errors and the compression ratio, has been proposed.

ACKNOWLEDGMENT

The authors would like to thank the anonymous reviewers for their help in improving the quality of this paper.

REFERENCES

[1] W. C. Mueller, "Arrhythmia detection program for an ambulatory ECG monitor," Biomed. Sci. Instrum., no. 14, pp. 81–85, 1978.
[2] J. R. Cox, F. M. Nolle, H. A. Fozzard, and C. G. Oliver, "AZTEC, a preprocessing program for real time ECG rhythm analysis," IEEE Trans. Biomed. Eng., vol. BME-15, no. 4, pp. 128–129, Apr. 1968.
[3] J. P. Abenstein and W. J. Tompkins, "A new data reduction algorithm for real time ECG analysis," IEEE Trans. Biomed. Eng., vol. BME-29, no. 1, pp. 43–48, Jan. 1982.
[4] M. Ishijima, S. B. Shin, G. H. Hostetter, and J. Sklansky, "Scan along polygon approximation for data compression of electrocardiograms," IEEE Trans. Biomed. Eng., vol. BME-30, no. 11, pp. 723–729, Nov. 1983.
[5] D. A. Huffman, "A method for the construction of minimum redundancy codes," Proc. IRE, vol. 40, no. 9, pp. 1098–1101, 1952.
[6] A. Bilgin, M. W. Marcellin, and M. I. Altbach, "Compression of electrocardiogram signals using JPEG2000," IEEE Trans. Consumer Electron., vol. 49, no. 4, pp. 833–840, Nov. 2003.
[7] A. Al-Shrouf, M. Abo-Zahhad, and S. M. Ahmed, "A novel compression algorithm for electrocardiogram signals based on the linear prediction of the wavelet coefficients," Digit. Signal Process., vol. 13, pp. 604–622, 2003.
[8] Z. Lu, D. Y. Kim, and W. A. Pearlman, "Wavelet compression of ECG signals by the set partitioning in hierarchical trees (SPIHT) algorithm," IEEE Trans. Biomed. Eng., vol. 47, no. 7, pp. 849–856, Jul. 2000.
[9] M. L. Hilton, "Wavelet and wavelet packet compression of electrocardiograms," IEEE Trans. Biomed. Eng., vol. 44, no. 5, pp. 394–402, May 1997.
[10] P. S. Hamilton, "Adaptive compression of the ambulatory electrocardiogram," Biomed. Instrum. Technol., vol. 27, no. 1, pp. 56–63, Jan. 1993.
[11] A. Cohen, P. M. Poluta, and R. Scott-Millar, "Compression of ECG signals using vector quantization," in Proc. IEEE-90 S. A. Symp. Commun. Signal Process., 1990, pp. 45–54.
[12] R. W. McCaughern, A. M. Rosie, and F. C. Monds, "Asynchronous data compression techniques," in Proc. Purdue Centennial Year Symp. Inf. Process., Apr. 1969, vol. 2, pp. 525–531.
[13] Y. Zigel, A. Cohen, and A. Katz, "ECG signal compression using analysis by synthesis coding," IEEE Trans. Biomed. Eng., vol. 47, no. 10, pp. 1308–1316, Oct. 2000.
[14] Y. Zigel, A. Cohen, and A. Katz, "The weighted diagnostic distortion (WDD) measure for ECG signal compression," IEEE Trans. Biomed. Eng., vol. 47, no. 11, pp. 1422–1430, Nov. 2000.
[15] M. N. Fira and L. Goras, "On a compression algorithm for ECG signals," presented at the 13th Eur. Signal Process. Conf. (EUSIPCO), Antalya, Turkey, Sep. 2005.
[16] E. A. Giakoumakis and G. Papakonstantinou, "An ECG data reduction algorithm," Comput. Cardiol., pp. 675–677, 1986.
[17] W. H. Press, B. P. Flannery, S. A. Teukolsky, and W. T. Vetterling, Numerical Recipes in C: The Art of Scientific Computing. Cambridge, U.K.: Cambridge Univ. Press, 1992, ch. 14, pp. 650–655.
[18] K. R. Rao and P. C. Yip, The Transform and Data Compression Handbook. Boca Raton, FL: CRC Press, 2001.
[19] M. Crochemore and T. Lecroq, "Text data compression algorithms," in Algorithms and Theory of Computation Handbook, M. J. Atallah, Ed. Boca Raton, FL: CRC Press, 1998.
[20] M. N. Fira and L. Goras, "The R-wave detection with low computation complexity based on the Pan-Tompkins algorithm," in Buletinul Institutului Politehnic din Iasi, Tomul L (LIV), Fasc. 3-4, Electrotehnica, Energetica, Electronica, 2004.
[21] P. De Chazal, M. O'Dwyer, and R. B. Reilly, "Automatic classification of heartbeats using ECG morphology and heartbeat interval features," IEEE Trans. Biomed. Eng., vol. 51, no. 7, pp. 1196–1206, Jul. 2004.
[22] G. K. Prasad and J. S. Sahambi, "Classification of ECG arrhythmias using multi-resolution analysis and neural networks," in Proc. Convergent Technol. Asia-Pacific Region (TENCON), 2003, vol. 21, pp. 15–17.
[23] S. Osowski, L. T. Hoai, and T. Markiewicz, "Support vector machine based expert system for reliable heartbeat recognition," IEEE Trans. Biomed. Eng., vol. 51, no. 4, pp. 582–589, Apr. 2004.
[24] S. M. S. Jalaleddine, "ECG data compression techniques—A unified approach," IEEE Trans. Biomed. Eng., vol. 37, no. 4, pp. 329–343, Apr. 1990.

Catalina Monica Fira received the B.S. and M.S. degrees in biomedical engineering from the "Gr. T. Popa" University of Medicine and Pharmacy Iasi, Iasi, Romania, in 2001 and 2002, respectively, and the Ph.D. degree in electronics engineering from the "Gh. Asachi" Technical University Iasi, Iasi, Romania, in 2006.
She is now with the Institute for Theoretical Informatics of the Romanian Academy, Iasi, Romania. Her research interests include electrical heart activity analysis, biomedical signal processing, and neural networks.

Liviu Goras (M'92–SM'05) was born in Iasi, Romania, in 1948. He received the Diploma Engineer and Ph.D. degrees in electrical engineering from the "Gh. Asachi" Technical University (TU) Iasi, Iasi, Romania, in 1971 and 1978, respectively.
Since 1973, he has successively been an Assistant, a Lecturer, and an Associate Professor and, since 1994, a Professor with the Faculty of Electronics and Telecommunications, TU Iasi. From September 1994 to May 1995, he was on leave as a Senior Fulbright Scholar with the Department of Electrical Engineering and Computer Sciences, University of California at Berkeley, Berkeley. His main research interests include nonlinear circuit and system theory, cellular neural networks, and signal processing. He is the main organizer of the International Symposium on Signals, Circuits and Systems (ISSCS), held in Iasi every two years since 1993.
Dr. Goras was the recipient of the IEEE Third Millennium Medal.
