
IEEE SIGNAL PROCESSING LETTERS, VOL. 12, NO. 4, APRIL 2005

Blind Separation of Impulsive Alpha-Stable Sources Using Minimum Dispersion Criterion

Mohamed Sahmoudi, Karim Abed-Meraim, and Messaoud Benidir

Manuscript received August 25, 2004; revised October 25, 2004. The associate editor coordinating the review of this manuscript and approving it for publication was Prof. Jonathon A. Chambers. M. Sahmoudi and M. Benidir are with the Laboratoire des Signaux et Systèmes, Centre National de Recherche Scientifique-SUPELEC-Université Paris-Sud, 91192 Gif-sur-Yvette, France (e-mail: sahmoudi@lss.supelec.fr; benidir@lss.supelec.fr). K. Abed-Meraim is with the Département Traitement du Signal et Image, Telecom Paris, 75634 Paris, France (e-mail: abed@tsi.enst.fr). Digital Object Identifier 10.1109/LSP.2005.843771

Abstract—This letter introduces a novel blind source separation (BSS) approach for extracting impulsive signals from their observed mixtures. The impulsive signals are modeled as real-valued symmetric alpha-stable (SαS) processes characterized by infinite second- and higher-order moments. The proposed approach uses the minimum dispersion (MD) criterion as a measure of sparseness and independence of the data. A new whitening procedure based on a normalized covariance matrix is introduced. We show that the proposed method is robust, in the sense of being insensitive to possible variations in the underlying form of the sampling distribution. The algorithm derivation and simulation results are provided to illustrate the good performance of the proposed approach. The new method is compared with three of the most popular BSS algorithms: JADE, EASI, and restricted quasi-maximum likelihood (RQML).

Index Terms—α-stable distribution, blind source separation (BSS), minimum dispersion criterion, normalized covariance, robustness.

I. INTRODUCTION

HEAVY-TAILED distributions, which are largely used to model impulsive signals, assign relatively high probabilities to the occurrence of large deviations from the median. A common characteristic property of many heavy-tailed distributions, such as the α-stable family, is the nonexistence of finite second- or higher-order moments. Several well-known methods exist for blind source separation (BSS) [1], [2]; they are based in general on second- or higher-order statistics of the observations and are therefore inadequate for heavy-tailed sources. In that case, fractional lower-order theory is used instead [7]. Only a limited literature has been dedicated to the BSS of impulsive signals. In [12], the restricted quasi-maximum likelihood (RQML) approach is introduced as an extension of the popular Pham's quasi-maximum likelihood approach to the α-stable sources case. Other solutions exist in the literature based on the spectral measure [6], normalized statistics [10], and order statistics [12]. In this letter, we introduce a new method for α-stable source separation from observed linear mixtures using the minimum dispersion criterion [9]. In the finite variance case, a similar approach for principal components analysis (PCA) that uses the output variances has been proposed in [3].

A. Why Heavy-Tailed α-Stable Distributions?

The stable distribution is a very flexible modeling tool in that it has a parameter α ∈ (0, 2], called the characteristic exponent, that controls the heaviness of its tails. A small positive value of α indicates severe impulsiveness, while a value of α close to 2 indicates a more Gaussian type of behavior. Stable distributions obey the Generalized Central Limit Theorem (GCLT), which states that if the sum of independent and identically distributed (i.i.d.) random variables, with or without finite variance, converges to a distribution as the number of variables increases, the limit distribution must be stable [7]. Thus, non-Gaussian stable distributions arise as sums of random variables in the same way as the Gaussian distribution does. Another defining feature of the stable distribution is the so-called stability property, which says that the sum of two independent stable random variables with the same characteristic exponent is again stable with the same characteristic exponent. For these reasons, statisticians [7], economists [8], and other scientists engaged in a variety of disciplines have embraced α-stable processes as the model of choice for heavy-tailed data.

B. Symmetric α-Stable Distributions

No closed form exists for the α-stable probability density function (pdf), except for the cases α = 1/2 (Lévy distribution), α = 1 (Cauchy distribution), and α = 2 (Gaussian distribution); the distribution is therefore best defined by its characteristic function [7].

Definition 1: A random variable (RV) X is said to have a symmetric α-stable distribution, denoted X ~ SαS(γ, δ), if its characteristic function is of the form φ(t) = exp(jδt − γ|t|^α), where α ∈ (0, 2] is the characteristic exponent, which measures the thickness of the tails of the distribution, δ is the location parameter, and γ > 0 is the dispersion of the distribution. The dispersion parameter γ determines the spread of the distribution around its location parameter δ.
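To make Definition 1 concrete, the following sketch compares the theoretical SαS characteristic function exp(jδt − γ|t|^α) with an empirical characteristic function estimated from Cauchy samples, i.e., the closed-form case α = 1. This illustration and its helper names are ours, not part of the letter.

```python
import numpy as np

def sas_char_fn(t, alpha, gamma=1.0, delta=0.0):
    """Theoretical SaS characteristic function exp(j*delta*t - gamma*|t|^alpha)."""
    return np.exp(1j * delta * t - gamma * np.abs(t) ** alpha)

def empirical_char_fn(x, t):
    """Empirical characteristic function (1/T) * sum_k exp(j*t*x_k)."""
    return np.exp(1j * np.outer(t, x)).mean(axis=1)

rng = np.random.default_rng(0)
x = rng.standard_cauchy(100_000)     # Cauchy samples: SaS with alpha = 1, gamma = 1, delta = 0
t = np.linspace(-3.0, 3.0, 13)

gap = np.max(np.abs(empirical_char_fn(x, t) - sas_char_fn(t, alpha=1.0)))
print(f"max deviation on the grid: {gap:.3f}")   # small for a large sample size
```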
The following properties of SαS laws will be used below for the BSS of α-stable signals [7].

Property 1: Let X_1 and X_2 be independent RVs with X_i ~ SαS(γ_i, δ_i), i = 1, 2. Then, X_1 + X_2 ~ SαS(γ, δ) with γ = γ_1 + γ_2 and δ = δ_1 + δ_2.

Property 2: Let X ~ SαS(γ, δ) and a be a real constant. Then, aX ~ SαS(|a|^α γ, aδ).

Property 3: If X ~ SαS(γ, 0) and 0 < p < α, then E[|X|^p] is finite and proportional to γ^{p/α}, where the proportionality factor is a constant that depends on p and α only. A direct consequence of this property is that, for a SαS RV with α < 2, the pth-order moments are finite if and only if p < α.

Property 4: The fractional lower-order moments (FLOMs) of an α-stable random variable X with zero location parameter and dispersion γ are given by E[|X|^p] = C(p, α) γ^{p/α} for 0 < p < α, where E[·] denotes the expectation operator and C(p, α) is a constant depending only on α and p.
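As an illustration of Properties 2–4, the sketch below draws SαS samples with the Chambers–Mallows–Stuck generator (a standard method, not described in the letter) and estimates a dispersion proxy from a fractional lower-order moment; by Property 4 the proxy equals the dispersion up to the constant C(p, α), and by Property 2 scaling the data by a multiplies it by roughly |a|^α. The function names and the choice p = α/3 are our assumptions.

```python
import numpy as np

def sas_rvs(alpha, gamma=1.0, size=1, rng=None):
    """Symmetric alpha-stable samples via the Chambers-Mallows-Stuck method (beta = 0)."""
    rng = np.random.default_rng() if rng is None else rng
    v = rng.uniform(-np.pi / 2, np.pi / 2, size)    # uniform angle
    w = rng.exponential(1.0, size)                  # unit-mean exponential
    if np.isclose(alpha, 1.0):
        x = np.tan(v)                               # Cauchy special case
    else:
        x = (np.sin(alpha * v) / np.cos(v) ** (1.0 / alpha)
             * (np.cos((1.0 - alpha) * v) / w) ** ((1.0 - alpha) / alpha))
    return gamma ** (1.0 / alpha) * x               # rescale so the dispersion equals gamma

def dispersion_proxy(x, alpha, p=None):
    """(E|X|^p)^(alpha/p): equals the dispersion up to the constant C(p, alpha) of Property 4."""
    p = alpha / 3 if p is None else p               # any 0 < p < alpha works
    return np.mean(np.abs(x) ** p) ** (alpha / p)

rng = np.random.default_rng(1)
alpha = 1.5
x = sas_rvs(alpha, gamma=1.0, size=200_000, rng=rng)
print(dispersion_proxy(x, alpha))           # constant factor times gamma = 1
print(dispersion_proxy(3.0 * x, alpha))     # previous value times about 3**alpha (Property 2)
```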

C. Problem Formulation

In many situations of practical interest, one has to consider m mutually independent signals whose n linear combinations are observed. They are formulated as x(t) = A s(t), where s(t) = [s_1(t), …, s_m(t)]^T is the real-valued impulsive source vector and A is a full-rank n × m mixing matrix. The source signals s_1(t), …, s_m(t) are assumed to be mutually independent SαS processes with the same characteristic exponent α. The purpose of blind source separation is to find a separating matrix, i.e., an m × n matrix B, such that y(t) = B x(t) is an estimate of the source signals up to permutation and scaling factors [1].
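As a minimal illustration of the mixing model x(t) = A s(t), the sketch below builds observed mixtures of independent Cauchy sources (a SαS case with α = 1 that NumPy provides directly); the dimensions and variable names are our choices.

```python
import numpy as np

rng = np.random.default_rng(2)
m, T = 3, 5_000                      # number of sources and sample size (our choice)

S = rng.standard_cauchy((m, T))      # mutually independent SaS sources (alpha = 1)
A = rng.normal(size=(m, m))          # unknown full-rank mixing matrix
X = A @ S                            # observed mixtures; one column per time instant

# Goal of BSS: estimate B such that B @ X recovers S up to permutation and scaling.
print(X.shape)                       # (3, 5000)
```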
II. SOURCE SEPARATION

A. Whitening by Normalized Covariance Matrix

The first step consists in whitening the observations (orthogonalizing the mixing matrix A). For finite-variance signals, the whitening matrix W is computed as the inverse square root of the signal covariance matrix. At first glance, this procedure should not be applicable to α-stable sources. However, we proved in [11] that a properly normalized covariance matrix converges to a finite matrix with the appropriate structure when the sample size tends to infinity.

Theorem 1: Let X and Y be two SαS variables with dispersions γ_x and γ_y and pdfs f_X and f_Y, respectively. Then, the normalized quantity ⟨X²⟩ / (⟨X²⟩ + ⟨Y²⟩) converges to a finite limit determined by γ_x and γ_y, where ⟨·⟩ denotes the time-averaging operator ⟨X²⟩ = (1/T) ∑_{t=1}^{T} X²(t). (Due to limited space, the proofs are not given in this letter; they can be found in [11].)

Theorem 2: Let x(t) be a data vector of an α-stable process mixture and R̂_x = (1/T) ∑_{t=1}^{T} x(t) x(t)^T its sample covariance matrix. Then, the normalized covariance matrix of x(t), defined by

    R̃_x = R̂_x / Trace(R̂_x)    (4)

converges asymptotically, in Frobenius norm ‖·‖_F, to a finite matrix of the form A D A^T (up to a positive normalization), where D = diag(d_1, …, d_m) is a diagonal matrix whose positive entries are determined by the source dispersions.

Proposition 1: Let R̃_x be the normalized covariance matrix in (4) of the considered α-stable mixture. Then the inverse square root matrix W = R̃_x^{-1/2} is a data-whitening matrix.
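The following sketch implements the whitening step suggested by (4) and Proposition 1: normalize the sample covariance by its trace and apply its inverse square root to the data. The eigendecomposition route and the function name are our assumptions, not the authors' code.

```python
import numpy as np

def normalized_cov_whitener(X):
    """Whitening matrix W = R^(-1/2), with R the trace-normalized sample covariance of (4)."""
    T = X.shape[1]
    R_hat = (X @ X.T) / T                  # raw sample covariance (ill-behaved alone when alpha < 2)
    R_tilde = R_hat / np.trace(R_hat)      # normalized covariance matrix, as in (4)
    eigval, eigvec = np.linalg.eigh(R_tilde)
    return eigvec @ np.diag(eigval ** -0.5) @ eigvec.T

rng = np.random.default_rng(3)
S = rng.standard_cauchy((3, 20_000))       # SaS sources with alpha = 1
A = rng.normal(size=(3, 3))
X = A @ S

W = normalized_cov_whitener(X)
Z = W @ X                                  # whitened data; W @ A is approximately orthogonal up to scaling
print(np.round(Z @ Z.T / np.trace(Z @ Z.T), 3))   # proportional to the identity by construction
```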
B. Minimum Dispersion (MD) Criterion

The MD criterion is a common tool in the linear theory of stable processes, as the dispersion of a stable RV plays a role analogous to that of the variance. In addition, we should note that the MD criterion is a direct generalization of the minimum mean-square error (MMSE) criterion in the Gaussian case [7]. Let y(t) = U z(t), where z(t) = W x(t) denotes the whitened data and U is a unitary separating matrix to be estimated. Let us consider the global MD criterion given by the sum of the dispersions of all entries of y(t), i.e.,

    J(U) = ∑_{i=1}^{m} γ_{y_i}    (7)

where γ_{y_i} denotes the dispersion of y_i(t), the ith entry of y(t).

In this letter, we prove that the MD criterion defines a contrast function, in the sense that the global minimization of the objective function given in (7) leads to a separating solution. The pth-order moment of an α-stable RV and its dispersion are related through only a constant (see Property 4). Therefore, the MD criterion is equivalent to least ℓ_p-norm estimation with 0 < p < α. Although the most widely used contrast functions for BSS are based on second- and fourth-order cumulants [1], we believe that there are good reasons to extend the class of contrast functions from cumulants to fractional moments, as we argue next. Mutual information (MI) is usually chosen to measure the degree of independence. Because the direct estimation of MI is very difficult, one can derive approximate contrast functions, which are often based on cumulant expansions of the densities. However, one can approximate the Shannon entropy (which is closely related to the MI) using the ℓ_p-norm concept (see [5]) and, hence, use it to approximate the MI. For example, in [4], the author uses the ℓ_p-norm concept to approximate the MI and then to find the optimal contrast function for the exponential power family of densities. Thus, we propose the MD criterion for measuring the independence of α-stable distributed data, as shown by the following result.

Theorem 3: The minimum dispersion criterion in (7) is a contrast function under an orthogonality constraint for separating an instantaneous mixture of SαS sources.

The proposed method requires little or no a priori knowledge of the input signals. The dispersion as well as the characteristic exponent are estimated according to [7], where the proposed estimator is proved to be consistent and asymptotically normal.
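A sketch of the global MD objective (7) follows, with each dispersion replaced by the FLOM-based proxy (E|y_i|^p)^{α/p}, which by Property 4 is proportional to the true dispersion and hence equivalent for minimization; the function names and the default p = α/3 are ours.

```python
import numpy as np

def dispersion_proxy(y, alpha, p=None):
    """(E|y|^p)^(alpha/p), proportional to the true dispersion by Property 4."""
    p = alpha / 3 if p is None else p          # any 0 < p < alpha is admissible
    return np.mean(np.abs(y) ** p, axis=-1) ** (alpha / p)

def md_objective(U, Z, alpha):
    """Global MD criterion (7): sum of (proxy) dispersions of the entries of y = U z."""
    return float(np.sum(dispersion_proxy(U @ Z, alpha)))

# Toy check with independent Cauchy rows (alpha = 1): rotating the rows mixes them
# and typically increases the criterion, so the identity should be near-optimal.
rng = np.random.default_rng(4)
alpha = 1.0
Z = rng.standard_cauchy((2, 50_000))
theta = np.pi / 6
U_rot = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
print(md_objective(np.eye(2), Z, alpha), md_objective(U_rot, Z, alpha))
```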

C. Separation Algorithm

Theorem 3 proves that, under an orthogonal transform, the signal has minimum dispersion if its entries are mutually independent. The problem is now to minimize a cost function under orthogonal constraints. Different approaches exist to perform this constrained optimization. We chose here to estimate U as a product of Givens rotations, U = ∏_{1≤i<j≤m} G(i, j, θ_{ij}), where G(i, j, θ) is the elementary Givens rotation, defined as the orthogonal matrix whose diagonal elements are all equal to 1 except for the two elements equal to cos θ in rows (and columns) i and j; likewise, all off-diagonal elements of G(i, j, θ) are 0 except for the two elements sin θ and −sin θ at positions (i, j) and (j, i), respectively. The minimization of J(U) is done numerically by searching for θ over a fine grid in [−π/4, π/4]. The so-called MD algorithm can be summarized as follows.

Minimum Dispersion Algorithm
Step 1. Whitening transform: z(t) = W x(t).
Step 2. Sweep. For all pairs 1 ≤ i < j ≤ m, do:
  - compute the Givens angle θ_{ij} that maximizes the pairwise independence of y_i and y_j by minimizing the global dispersion J;
  - if θ_{ij} is not negligibly small, rotate the pair accordingly.
If no pair has been rotated in the previous sweep, end. Otherwise, perform another sweep.
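The sketch below is one possible implementation of the MD algorithm above, under our own conventions: dispersions are replaced by the FLOM proxy of Property 4, the Givens angle of each pair is found by a grid search over [−π/4, π/4], and a fixed small threshold decides whether a pair is actually rotated. It is an illustration, not the authors' reference code.

```python
import numpy as np

def dispersion_proxy(y, alpha, p=None):
    """(E|y|^p)^(alpha/p): proportional to the dispersion of a SaS signal (Property 4)."""
    p = alpha / 3 if p is None else p
    return np.mean(np.abs(y) ** p, axis=-1) ** (alpha / p)

def md_separation(Z, alpha, n_grid=91, theta_min=1e-3, max_sweeps=30):
    """Estimate the orthogonal matrix U that minimizes the global dispersion of U @ Z."""
    m = Z.shape[0]
    Y, U = Z.copy(), np.eye(m)
    thetas = np.linspace(-np.pi / 4, np.pi / 4, n_grid)        # fine grid for the Givens angle
    for _ in range(max_sweeps):
        rotated = False
        for i in range(m - 1):
            for j in range(i + 1, m):
                c = np.cos(thetas)[:, None]
                s = np.sin(thetas)[:, None]
                y_i = c * Y[i] + s * Y[j]                      # rotated pair for all candidate angles
                y_j = -s * Y[i] + c * Y[j]
                # Only the pair (i, j) changes, so minimizing its dispersion sum minimizes J.
                cost = dispersion_proxy(y_i, alpha) + dispersion_proxy(y_j, alpha)
                theta = thetas[np.argmin(cost)]
                if abs(theta) > theta_min:                     # rotate only significant angles
                    G = np.eye(m)
                    G[i, i] = G[j, j] = np.cos(theta)
                    G[i, j], G[j, i] = np.sin(theta), -np.sin(theta)
                    Y, U = G @ Y, G @ U
                    rotated = True
        if not rotated:                                        # a full sweep without rotation: stop
            break
    return U, Y
```

Applied to whitened data Z = W x(t) from the whitening step, the overall separating matrix is B = U W, and the rows of Y then estimate the sources up to permutation and scaling.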

III. PERFORMANCE EVALUATION

Consider mixtures of three i.i.d. impulsive standard SαS (γ = 1 and δ = 0) source signals. The statistics are evaluated over 100 Monte Carlo runs, and the mixing matrix as well as the sources are generated randomly at each run. The performance of our MD method is compared to three widely used BSS algorithms: JADE [1], EASI [2], and RQML [12]. To measure the quality of source separation, we use the generalized rejection level criterion defined below.

A. Generalized Mean Rejection Level (GMRL) Criterion

To evaluate the performance of the separation method, we propose to define the rejection level as the mean value of the interference-signal dispersion over the desired-signal dispersion. This criterion generalizes the existing one [9] based on signal powers (for SαS processes, the variance, i.e., the power, is replaced by the dispersion), which represents the mean value of the interference-to-signal ratio. If source p is the desired signal, the related generalized rejection level is

    I_p = ∑_{q≠p} γ(g_{pq} s_q) / γ(g_{pp} s_p)    (9)

where g_{pq} denotes the (p, q) entry of the global system G = BA and γ(X) denotes the dispersion of a SαS RV X. Therefore, the averaged rejection level is given by (1/m) ∑_{p=1}^{m} I_p.
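The sketch below evaluates the GMRL of (9) for unit-dispersion sources: by Property 2 the dispersion of g_{pq} s_q is |g_{pq}|^α γ(s_q), so with γ(s_q) = 1 each I_p reduces to a ratio of |·|^α terms of the global matrix G = BA. The function name is ours, and the example assumes no permutation between outputs and sources.

```python
import numpy as np

def gmrl(B, A, alpha):
    """Generalized mean rejection level of (9) for unit-dispersion SaS sources."""
    G = B @ A                                   # global mixing-plus-separating system
    P = np.abs(G) ** alpha                      # dispersion of G[p, q] * s_q by Property 2 (gamma_q = 1)
    per_source = (P.sum(axis=1) - np.diag(P)) / np.diag(P)
    return per_source.mean()                    # average over the m desired sources

A = np.array([[1.0, 0.4],
              [-0.3, 1.0]])
print(gmrl(np.linalg.inv(A), A, alpha=1.5))     # 0.0: exact separation, no residual interference
print(gmrl(np.eye(2), A, alpha=1.5))            # > 0: the untouched mixture leaks interference
```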
B. Experimental Results

Fig. 1. Generalized mean rejection level versus α for N = 1000.
Fig. 2. Generalized mean rejection level versus the estimation error on α.
Fig. 3. Generalized mean rejection level versus the sample size N.

In Fig. 1, the GMRL of the MD, EASI, JADE, and RQML algorithms is plotted versus the characteristic exponent α. The sample size is set to N = 1000. It appears that the parameter α is of crucial importance, since it has a major influence on the separation performance. Two important features are observed: the mean rejection level increases when the sources are very impulsive (α close to zero) or when they are close to the Gaussian case (α close to two). In the latter case (i.e., α = 2), source separation is not possible. Moreover, we observe that the MD algorithm outperforms the other existing algorithms for most values of α. In Fig. 2, the simulation study shows that estimation errors on the characteristic exponent of the source distribution have little influence on the performance of the algorithm. In Fig. 3, for our proposed MD algorithm, two different scenarios lead to similar performance. In the first scenario, we consider a mixture of three α-stable sources with the same characteristic exponent α = 1.5. In the second one, we wrongly assume three SαS sources with α = 1.5, while in reality the sources are SαS with different characteristic exponents, including α = 1 (Cauchy pdf) and α = 2 (Gaussian pdf).
It can be observed that the algorithm can separate the sources from their mixtures, even though we deviate from the assumptions under which it is derived. Consequently, the MD algorithm is robust to possible source modelization errors.

Fig. 4. Generalized mean rejection level versus the sample size N for α = 1.5.

Fig. 4 shows the performance achieved by each of the four BSS algorithms as a function of the sample size N for α = 1.5. One can observe that good performance is reached by the MD algorithm for relatively small to medium sample sizes. This figure also demonstrates that EASI fails to separate α-stable signals and that JADE is suboptimal in this context. This is due to the fact that EASI and JADE are not specifically designed for heavy-tailed signals. Moreover, we observe a certain performance gain in favor of the MD algorithm compared to RQML. This is due to the fact that truncating the observations created by large source-signal values, as done in the RQML procedure, is not optimal, because these observations are highly informative.

Fig. 5. Generalized mean rejection level versus the additive noise power for α = 1.5.

In the sixth experiment, we consider the case where the observation is corrupted by additive white Gaussian noise. The GMRL versus the noise power is depicted in Fig. 5 for α = 1.5. In this experiment, the noise level is varied starting from 0 dB. As can be seen, the performance degrades significantly when the noise power is high. This might be explained by the fact that the theory does not take additive noise into consideration. Improving robustness against noise is still an open problem under investigation. It can be seen from Fig. 5, however, that the proposed MD method has reliable performance and outperforms the RQML algorithm in low and moderate noise-power situations.

IV. CONCLUSION

We have introduced a two-step procedure for α-stable source separation. A first whitening step allows the orthogonalization of the mixing matrix using a normalized covariance matrix of the observations. In the second step, the remaining orthogonal matrix is estimated by minimizing a global dispersion criterion. The proposed method is robust to modelization errors on the sources' pdf. Numerical examples are presented to illustrate the effectiveness of the proposed method, which is shown to perform better than the RQML method. Moreover, they confirm that existing BSS methods that are not designed specifically for impulsive signals fail to provide good separation quality.

REFERENCES
[1] J.-F. Cardoso and A. Souloumiac, "Blind beamforming for non-Gaussian signals," Proc. Inst. Elect. Eng. F, Radar Signal Process., vol. 140, no. 6, pp. 362–370, Dec. 1993.
[2] J.-F. Cardoso and B. Laheld, "Equivariant adaptive source separation," IEEE Trans. Signal Process., vol. 44, no. 12, pp. 3017–3030, Dec. 1996.
[3] D. Erdogmus, Y. N. Rao, J. C. Principe, J. Zaohao, and K. E. Hild II, "Simultaneous extraction of principal components using Givens rotations and output variances," in Proc. ICASSP, 2002, pp. 1069–1072.
[4] A. Hyvarinen, "Fast and robust fixed-point algorithms for independent component analysis," IEEE Trans. Neural Netw., vol. 10, no. 3, pp. 626–634, May 1999.
[5] J. Karvanen and A. Cichocki, "Measuring sparseness of noisy signals," in Proc. ICA, Nara, Japan, 2003.
[6] P. Kidmose, "Blind separation of heavy tail signals," Ph.D. dissertation, Tech. Univ. Denmark, Lyngby, Denmark, 2001.
[7] C. L. Nikias and M. Shao, Signal Processing with Alpha-Stable Distributions and Applications. New York: Wiley, 1995.
[8] S. T. Rachev, Ed., Handbook of Heavy Tailed Distributions in Finance. Amsterdam, The Netherlands: Elsevier, 2003.
[9] M. Sahmoudi, K. Abed-Meraim, and M. Benidir, "Blind separation of instantaneous mixtures of impulsive alpha-stable sources," in Proc. 3rd Int. Symp. Image Signal Process. Anal., vol. 1, Rome, Italy, 2003, pp. 353–358.
[10] ——, "Blind separation of heavy-tailed signals using normalized statistics," in Proc. ICA, Granada, Spain, Sep. 2004, pp. 22–24.
[11] M. Sahmoudi, "Robust separation and estimation of non-stationary and/or non-Gaussian signals," Ph.D. dissertation, Université Paris-Sud, Gif-sur-Yvette, France, 2004.
[12] Y. Shereshevski, A. Yeredor, and H. Messer, "Super-efficiency in blind signal separation of symmetric heavy-tailed sources," in Proc. 11th IEEE Workshop Stat. Signal Process., Aug. 2001, pp. 78–81.
