Vincent Wens

ψ(r, t) = λ(r)^T [φ(t) − (Lψ)(t)] / |λ(r)^T [φ(t) − (Lψ)(t)]|^{α_p} , (2)

where α_p = (p − 2)/(p − 1). This shows that the spatial profile of the inverse operator is strongly controlled by the kernel λ(r). In particular, when λ(r) is continuous at r_0, Eq. 2 shows that W_r shares this property. This leads to spurious local connectivity around r_0 since we have ψ(r, t) ≈ ψ(r_0, t) for r in some neighborhood of r_0, independently of φ(t). Only the precise shape and size of this neighborhood depends on data φ(t) and parameter p. Likewise, long-distance correlations in λ(r) induce spurious couplings. For example, λ(r) = μ λ(r_0) implies W_r = μ^{1/(p−1)} W_{r_0} by Eq. 2, so that ψ(r, t) and ψ(r_0, t) are temporally correlated whatever φ(t).
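For p = 2 this proportionality can be checked numerically on a discretized minimum L² norm inverse operator (a minimal sketch; the random kernel matrix Lam, stacking samples λ(r_k) as columns, and all parameter values are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, eps = 8, 20, 0.1                  # sensors, source points, regularization
Lam = rng.standard_normal((m, n))       # column k: kernel lam(r_k)
mu = 2.5
Lam[:, 1] = mu * Lam[:, 0]              # lam(r_1) = mu * lam(r_0)

# Minimum L2 norm inverse: row k of W is the functional W_{r_k} acting on data
W = Lam.T @ np.linalg.inv(eps * np.eye(m) + Lam @ Lam.T)

# For p = 2, mu^{1/(p-1)} = mu: the two rows are exactly proportional
print(np.allclose(W[1], mu * W[0]))  # True
```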
More generally, various assumptions and parameters can affect the spatial properties of inverse operators. Yet, the issue of spatial leakage is of quite general concern and applies to other inverse models [16-18] as well [15]. For example, the whole class of models based on unitary invariant constraints, as in the case p = 2 above, present a spatial profile W_r = λ(r)^T A, with A independent of r, leading to spatial leakage effects [15]. We now show how to correct for the spurious couplings emanating from a given location r_0 by modeling spatial leakage.
Structural model. Let us first consider an ideal state localized at r_0, i.e., ψ_0(r, t) = δ(r − r_0) f(t). Its reconstruction ψ(r, t) = (W_r φ)(t) from noiseless observations φ(t) = (Lψ_0)(t) is a functional of f(t), ψ(r, t) = (R_{r,r_0} f)(t), which defines the resolving operator R_{r,r_0} [6] and describes spatial leakage from r_0. For example, R_{r,r_0} = W_r λ(r_0) in the case of Eq. 1. Assuming R_{r_0,r_0} invertible, the unknown function f(t) can be reformulated in terms of calculable quantities as (R_{r_0,r_0}^{−1} ψ(r_0, ·))(t). This leads to a closed expression ψ(r, t) = ψ_SL(r, t) for this pure leakage field, where

ψ_SL(r, t) = [ R_{r,r_0} R_{r_0,r_0}^{−1} ψ(r_0, ·) ](t) . (3)
When no assumptions are made about ψ_0(r, t), we shall use Eq. 3 as a model for the contribution of spatial leakage from r_0 to the reconstruction ψ(r, t) = (W_r φ)(t), and define a corrected field

ψ_c(r, t) = ψ(r, t) − ψ_SL(r, t) . (4)

It can also be written as ψ_c(r, t) = (W_r^{new} φ)(t) in terms of a new inverse operator

W_r^{new} = W_r − R_{r,r_0} R_{r_0,r_0}^{−1} W_{r_0} (5)

depending on the direct and inverse models only. Equations 4, 5 form the analytical basis of our new method.
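In discretized form, this correction amounts to a rank-one update of the inverse operator. A minimal numerical sketch (the linear minimum-norm inverse and the random kernel below are illustrative assumptions, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 8, 20
Lam = rng.standard_normal((m, n))       # column k: kernel lam(r_k)
W = Lam.T @ np.linalg.inv(0.1 * np.eye(m) + Lam @ Lam.T)  # some linear inverse

R = W @ Lam                             # resolving matrix R[j, k] = W_{r_j} lam(r_k)
j0 = 0                                  # seed location r_0

# Eq. 5: W_new = W - R[:, j0] R[j0, j0]^{-1} W[j0, :]
W_new = W - np.outer(R[:, j0] / R[j0, j0], W[j0])

print(np.allclose(W_new[j0], 0.0))      # True: reconstruction at r_0 is zeroed
```

The vanishing row illustrates the overcorrection identity discussed below, while rows with negligible R[:, j0] are left essentially unchanged.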
Although it can be applied to any state and inverse model, its strict validity presents two restrictions. First, this subtractive scheme requires spatial leakage to linearly contribute to the reconstruction. This assumption holds locally when the inverse operator is continuous at r_0, since then ψ(r, t) ≈ ψ(r_0, t) = ψ_SL(r_0, t) ≈ ψ_SL(r, t) for r close enough to r_0. It is otherwise invalid, except when the resolving operator is linear. Second, the node r_0 must be isolated from the other activated regions, i.e., ψ_0(r, t) assumes the form δ(r − r_0) f(t) + g(r, t) with g(r, t) ≠ 0 only where spatial leakage to r_0 is negligible, R_{r_0,r} ≈ 0. Otherwise, g(r, t) would also contribute to the expression (R_{r_0,r_0}^{−1} ψ(r_0, ·))(t) used for f(t) to derive Eq. 3. Explicitly, in the case of Eq. 1,

(R_{r_0,r_0}^{−1} ψ(r_0, ·))(t) = f(t) + ∫ d³r [ R_{r_0,r_0}^{−1} R_{r_0,r} g(r, ·) ](t) . (6)
The neighborhood of r_0 where R_{r_0,r} g(r, ·) ≠ 0 being typically of nonzero but limited size, local leakage is generally overcorrected, as illustrated by the identity W_{r_0}^{new} = 0, whereas the reconstruction at sufficiently long distances remains unchanged, since W_r^{new} ≈ W_r wherever leakage from r_0 is negligible, R_{r,r_0} ≈ 0. Note also that spatial leakage between r and r′ away from r_0 is left uncorrected [Fig. 1(e)]. At the level of connectivity maps between ψ(r_0, t) and ψ_c(r, t), this means that local coupling is suppressed whereas true long-distance interactions are preserved, although they are still deformed by spatial leakage [Fig. 1(f)]. Despite this limitation, the correction scheme given by Eqs. 4, 5 is useful to emphasize long-range connectivity. This makes it suitable, in particular, for studies of large-scale brain networks.
In this application, Eq. 1 is used and inverse models are taken of the form (W_r φ)(t) = w(r, t)^T φ(t) with w(r, t) ∈ R^m. It then follows from the definitions that (R_{r,r_0} f)(t) = w(r, t)^T λ(r_0) f(t) and

ψ_c(r, t) = ψ(r, t) − κ(r, r_0, t) ψ(r_0, t) , (7)

κ(r, r_0, t) = w(r, t)^T λ(r_0) / [ w(r_0, t)^T λ(r_0) ] . (8)
In the rest of this Letter, we shall use this particular case
and compare this new method to alternative techniques
used in electrophysiology.
Comparison with orthogonalization. The philosophy behind these techniques is to consider connectivity estimates insensitive to any instantaneous linear coupling, of which spatial leakage (Eqs. 7, 8) is a particular case. To implement this idea, we still consider Eq. 7 for complex scalar fields but now determine the real coefficient κ(r, r_0, t) via linear regression. This yields

ψ⊥(r, t) = ψ(r, t) − κ(r, r_0, t) ψ(r_0, t) , (9)

κ(r, r_0, t) = Re[ ψ(r, t) ψ̄(r_0, t) ] / |ψ(r_0, t)|² . (10)
The orthogonalization method [14] uses these equations to remove spatial leakage from r_0. Geometrically, ψ⊥(r, t) coincides with the orthogonal projection of ψ(r, t) onto the direction perpendicular to ψ(r_0, t). This implies that

ψ⊥(r, t) = (1/2) [ ψ(r, t) − ( ψ(r_0, t)² / |ψ(r_0, t)|² ) ψ̄(r, t) ] , (11)

which directly relates the two correction schemes.
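The regression and projection forms of the orthogonalized field can be checked numerically on synthetic complex signals (the signal model below is purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
T = 1000
psi0 = rng.standard_normal(T) + 1j * rng.standard_normal(T)              # psi(r_0, t)
psi = 0.6 * psi0 + rng.standard_normal(T) + 1j * rng.standard_normal(T)  # psi(r, t)

# Real instantaneous regression coefficient and orthogonalized field
kappa = np.real(psi * np.conj(psi0)) / np.abs(psi0) ** 2
psi_perp = psi - kappa * psi0

# Projection form, Eq. 11
psi_perp_11 = 0.5 * (psi - psi0 ** 2 / np.abs(psi0) ** 2 * np.conj(psi))

print(np.allclose(psi_perp, psi_perp_11))                   # True
print(np.allclose(np.real(psi_perp * np.conj(psi0)), 0.0))  # True: purely imaginary cross-density
```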
Orthogonalization uses no information about the structure of spatial leakage beyond Eq. 7, but requires complex fields since otherwise ψ⊥(r, t) vanishes identically. It is driven by data and is applicable to any pair of signals. Equation 11 overcorrects spatial leakage from r_0 at the expense of true interactions, since all instantaneous linear relations between ψ(r, t) and ψ(r_0, t) are suppressed. In particular, the reconstruction is affected even when no leakage is present, since then ψ_c(r, t) = ψ(r, t) but ⟨|ψ⊥(r, ·)|²⟩ ≤ ⟨|ψ(r, ·)|²⟩, where brackets denote time averaging. In any case, regression always requires linearity of the resolving operator, as for Eqs. 4, 5, and modifies ψ(r, t) even when spatial leakage is negligible, contrary to the structural model correction. To further explore the limitations of orthogonalization, we shall consider some explicit connectivity indices and show that it can nontrivially bias their estimation.
Brain network analyses are often based upon cerebral rhythms and concentrate on phase-locking and amplitude comodulation [19], which appear more generally as long-range communication mechanisms within distributed oscillatory networks in the mean field approximation. We now study phase and amplitude couplings between ψ(r_0, t) at a given r_0 and ψ(r, t) at r ≠ r_0, and compare them to those obtained between ψ(r_0, t) and ψ⊥(r, t). To that aim, it is useful to rewrite Eq. 11 geometrically in terms of phases and amplitudes:

Δφ⊥(r, r_0, t) = (π/2) sign[ sin Δφ(r, r_0, t) ] , (12)

where Δφ(r, r_0, t) (resp. Δφ⊥(r, r_0, t)) denotes the phase lag between ψ(r, t) (resp. ψ⊥(r, t)) and ψ(r_0, t), and the amplitudes satisfy

|ψ⊥(r, t)| = |sin Δφ(r, r_0, t)| |ψ(r, t)| . (13)

Equation 11 also implies for the cross-density

⟨ψ⊥(r, ·) ψ̄(r_0, ·)⟩ = i Im⟨ψ(r, ·) ψ̄(r_0, ·)⟩ . (14)
Orthogonalization removes the real part of cross-density, and thus generally underestimates its modulus compared to our new method. Likewise, it forces phase coherence to be purely imaginary. More precisely, Eq. 12 yields

⟨exp iΔφ⊥(r, r_0, ·)⟩ = i ( f_+ − f_− ) , (15)
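Equation 15 follows directly from Eq. 12, as a quick Monte Carlo check illustrates (the phase lag distribution below is an arbitrary illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(3)
dphi = rng.uniform(-np.pi, np.pi, 100000)   # phase lags Delta-phi(r, r_0, t)

# Eq. 12: the orthogonalized phase lag is +-pi/2 with the sign of sin(dphi)
dphi_perp = (np.pi / 2) * np.sign(np.sin(dphi))

f_plus = np.mean(np.sin(dphi) > 0)          # fractions of time with positive
f_minus = np.mean(np.sin(dphi) < 0)         # and negative phase lags

# Eq. 15: <exp(i dphi_perp)> = i (f_+ - f_-)
print(np.allclose(np.mean(np.exp(1j * dphi_perp)), 1j * (f_plus - f_minus)))  # True
```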
where f_+ and f_− denote the fractions of time spent with sin Δφ(r, r_0, ·) positive and negative, respectively, neglecting instants where sin Δφ(r, r_0, ·) = 0. Interestingly, the right-hand sides of Eqs. 14 and 15 respectively determine imaginary coherence [10] and phase lag index [11, 12]. Notably, the phase lag index f_+ − f_− depends on the phase statistics only. To describe amplitude couplings explicitly, we consider the coherent sources model [14, 15] in which ψ(r_0, t) = a_0(t) e^{iφ_0(t)} and

ψ(r, t) = [ c e^{iΔφ} a_0(t) + √(1 − c²) a_1(t) e^{i(φ_1(t) − φ_0(t))} ] e^{iφ_0(t)} , (17)

where a_0(t), a_1(t) are independent Rayleigh-distributed amplitudes, φ_0(t), φ_1(t) independent uniform phases, 0 ≤ c ≤ 1 a coupling parameter and Δφ a fixed phase lag. The amplitude correlations ρ(r, r_0) and ρ⊥(r, r_0) between |ψ(r_0, ·)| and |ψ(r, ·)| or |ψ⊥(r, ·)| then satisfy [15]

ρ(r, r_0) ≈ c² , (18)

ρ⊥(r, r_0) ≈ c̃² / √( c̃⁴ + 2 c̃² + 1/2 ) , c̃ = c sin Δφ / √(1 − c²) . (19)

Eliminating c from Eq. 19 exhibits the nonlinear dependence of orthogonalized amplitude correlation ρ⊥(r, r_0) on ρ(r, r_0) and phase lag Δφ.
Let us notice that, depending on the values of c and Δφ, orthogonalization may underestimate (e.g., for Δφ = 0) or overestimate (e.g., for Δφ = π/2) amplitude correlation. In the latter case, the ratio ρ⊥(r, r_0)/ρ(r, r_0) always lies between 1 and √2.
vwens@ulb.ac.be
[1] R. Albert and A.-L. Barabási, Rev. Mod. Phys., 74, 47 (2002); M. E. J. Newman, SIAM Rev., 45, 167 (2003).
[2] R. Diestel, Graph Theory, 3rd ed., Graduate Texts in Mathematics (Springer-Verlag, 2005).
[3] C. T. Butts, Science, 325, 414 (2009).
[4] S. Bialonski, M.-T. Horstmann, and K. Lehnertz, Chaos, 20, 013134 (2010).
[5] A. Tarantola, Nat. Phys., 2, 492 (2006); G. Jobert, Europhysics News, 44, 21 (2013).
[6] A. Tarantola, Inverse Problem Theory and Methods for
Model Parameter Estimation (SIAM, 2004).
[7] The finiteness of data also biases connectivity estimates;
see S. Bialonski, M. Wendler, and K. Lehnertz, PLoS
ONE, 6, e22826 (2011); A. Zalesky, A. Fornito, and
E. Bullmore, NeuroImage, 60, 2096 (2012).
[8] This notion differs from that of spectral leakage, which originates from finite-dimensional truncations of the space of fields ψ(r, t); see J. Trampert and R. Snieder, Science, 271, 1257 (1996).
[9] See J.-M. Schoffelen and J. Gross, Hum. Brain Mapp., 30, 1857 (2009) and references therein.
[10] G. Nolte, O. Bai, L. Wheaton, Z. Mari, S. Vorbach, and
M. Hallett, Clin. Neurophysiol., 115, 2292 (2004).
[11] C. J. Stam, G. Nolte, and A. Daertshofer, Hum. Brain
Mapp., 28, 1178 (2007).
[12] A. Hillebrand, G. R. Barnes, J. L. Bosboom, H. W.
Berendse, and C. J. Stam, NeuroImage, 59, 3909 (2012).
[13] M. J. Brookes, M. Woolrich, and G. R. Barnes, NeuroImage, 63, 910 (2012).
[14] J. F. Hipp, D. J. Hawellek, M. Corbetta, M. Siegel, and
A. K. Engel, Nat. Neurosci., 15, 884 (2012).
[15] See Supplementary Material for Examples of inverse models, Unitary symmetry and spatial structure, Derivations for amplitude correlation between coherent sources.
[16] B. D. Van Veen and K. M. Buckley, IEEE ASSP Mag., 5, 4 (1988); B. Van Veen, W. Van Drongelen, M. Yuchtman, and A. Suzuki, IEEE Trans. Biomed. Eng., 44, 867 (1997).
[17] M. W. Woolrich, A. Baker, H. Luckhoo, H. Mohseni,
G. Barnes, M. Brookes, and I. Rezek, NeuroImage, 77,
77 (2013).
[18] J. Gross, J. Kujala, M. Hämäläinen, L. Timmermann, A. Schnitzler, and R. Salmelin, Proc. Natl. Acad. Sci. USA, 98, 694 (2001).
[19] M. Siegel, T. H. Donner, and A. K. Engel, Nat. Rev.
Neurosci., 13, 121 (2012).
Supplementary material
Appendix A: Examples of inverse models.
Appendix B: Unitary symmetry and spatial structure.
Appendix C: Derivations for amplitude correlation between coherent sources.
Appendix A: Examples of inverse models
We showed in the main text that the kernel λ(r) of the direct operator is a prime ingredient controlling the spatial structure of the minimum L^p norm inverse model. In this Appendix, we extend our discussion and indicate how reconstruction priors of the inverse model may affect the structure of spatial leakage. We also consider a fundamentally different type of inverse model based on spatial filtering. In the following, the direct operator L is always assumed to be given by Eq. 1.
Minimum L^p norm estimates. The prototypal example of inverse models consists in optimizing the fit between observed data φ(t) and prediction (Lψ)(t), with constraints on ψ(r, t) imposing prior model assumptions. We minimize the strictly convex functional V = V_misfit + V_prior over real scalar fields ψ(r, t), where

V_misfit = (1/2) ∫ dt ‖φ(t) − (Lψ)(t)‖² , (S1)

V_prior = (ε/p) ∫ dt d³r |ψ(r, t)|^p , (S2)

with ‖x‖² = x^T x and ε > 0, p > 1 [6]. The solution is unique, and, when expressed as a functional of data, ψ(r, t) = (W_r φ)(t), defines the inverse operator W_r. The extremization equation δV/δψ(r, t) = 0 reads

λ(r)^T [ φ(t) − (Lψ)(t) ] = ε |ψ(r, t)|^{p−2} ψ(r, t) . (S3)
Taking the absolute value and eliminating |ψ(r, t)|,

ψ(r, t) = ε^{−1/(p−1)} λ(r)^T [ φ(t) − (Lψ)(t) ] / |λ(r)^T [ φ(t) − (Lψ)(t) ]|^{α_p} , (S4)

where α_p = (p − 2)/(p − 1) < 1. Equation 2 is recovered by setting ε = 1. Note that this derivation does not directly apply to the important case p = 1 where V_prior is not differentiable [6].
The qualitative spatial properties of W_r described in the main text are unaffected by certain model parameters but are affected by others. For example, changing Eq. S1, which embodies the physical assumption that measurement noise η(t) = φ(t) − (Lψ)(t) is a gaussian white process, into V_misfit = (1/2) ∫ dt dt′ η(t)^T M(t, t′) η(t′), which allows one to incorporate a more realistic description of noise, replaces λ(r)^T η(t) by λ(r)^T ∫ dt′ M(t, t′) η(t′) in Eq. S4. This leaves the structure of spatial leakage, i.e., data-independent spatial properties of the reconstructed field, qualitatively unmodified. On the other hand, changing the L^p norm affects spatial leakage. Indeed, using V_prior = (ε/p) ∫ dt d³r ω(r) |ψ(r, t)|^p, where ω(r) > 0 is a weight function, instead of Eq. S2, incorporates an extra factor ω(r)^{−1/(p−1)} into the right-hand side of Eq. S4. This can strongly modify the structure of spatial leakage. For example, a discontinuity in the weight function forces the reconstruction to be discontinuous too, even when λ(r) is smooth.
The linear case. The situation simplifies in the widely used case p = 2, for which Eq. S4 is linear since α_2 = 0. The inverse operator now takes the form

W_r = λ(r)^T A , (S5)

where A is an r-independent operator. In this case, W_r is seen to be at least as smooth as λ(r), while mere continuity holds in general. Interestingly, Eq. S5 actually follows from the unitary symmetry of V_prior with p = 2, as we show in Appendix B. The qualitative structure of spatial leakage for the minimum L² norm estimate thus applies to all inverse models possessing this symmetry, may they be linear or nonlinear. Only A depends on the details of the model. An explicit expression for this operator can not be obtained in general, unlike for the minimum L² norm model where [6]

(Aφ)(t) = [ ε + ∫ d³r λ(r) λ(r)^T ]^{−1} φ(t) . (S6)
This result can be derived by rearranging Eq. S4 with α_2 = 0 as

∫ d³r′ a(r, r′) ψ(r′, t) = λ(r)^T φ(t) , (S7)

where a(r, r′) = ε δ(r − r′) + λ(r)^T λ(r′). Inverting this kernel as a geometric series,

ψ(r, t) = ∫ d³r′ [ ε^{−1} δ(r − r′) − ε^{−2} λ(r)^T λ(r′) + ε^{−3} ∫ d³r″ λ(r)^T λ(r″) λ(r″)^T λ(r′) − … ] λ(r′)^T φ(t)
        = ε^{−1} λ(r)^T [ 1 − ε^{−1} ∫ d³r′ λ(r′) λ(r′)^T + ε^{−2} ( ∫ d³r′ λ(r′) λ(r′)^T )² − … ] φ(t) , (S8)
which leads to Eqs. S5, S6 after resummation of this geometric series and analytic continuation in ε.
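A quick numerical check of this resummation, in a discretized setting where ‖∫ d³r λ(r) λ(r)^T‖ < ε so that the geometric series converges without continuation (all values illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
m, eps = 5, 4.0
lam = 0.3 * rng.standard_normal((10, m))
K = lam.T @ lam                        # discretized integral of lam lam^T, ||K|| < eps here

# Partial sums of eps^{-1} [1 - K/eps + (K/eps)^2 - ...]
S = np.zeros_like(K)
term = np.eye(m) / eps
for _ in range(200):
    S += term
    term = -term @ K / eps

print(np.allclose(S, np.linalg.inv(eps * np.eye(m) + K)))  # True
```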
Spatial filtering. A widely used alternative to the above constrained least-squares criterion is the beamformer [16]. It relies on fundamentally different ideas, yet the discussion on spatial leakage remains similar. The inverse operator is defined by a linear spatial filter

(W_r φ)(t) = w(r)^T φ(t) (S9)

that focuses on the signal originating from location r. In the linearly constrained minimum variance beamformer [16], this filtering is achieved by imposing the unit-gain constraint w(r)^T λ(r) = 1, which ensures that field activity at r passes through the filter undeformed, and by minimizing the mean square ⟨(W_r φ)²⟩ to dampen the contribution from elsewhere. Introducing the data covariance matrix C = ⟨φ φ^T⟩ and a Lagrange multiplier μ(r) enforcing the constraint, this problem amounts to minimize the function

w(r)^T C w(r) + μ(r) [ w(r)^T λ(r) − 1 ] (S10)

over w(r) ∈ R^m. A standard calculation yields [16]

w(r) = C^{−1} λ(r) / [ λ(r)^T C^{−1} λ(r) ] . (S11)
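Equation S11 is easy to verify numerically: the weights satisfy the unit-gain constraint and achieve minimal output variance among unit-gain filters (the random data below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)
m = 8
lam_r = rng.standard_normal(m)          # kernel lam(r) at the probed location
phi = rng.standard_normal((m, 5000))    # data samples
C = phi @ phi.T / phi.shape[1]          # data covariance <phi phi^T>

# Eq. S11
Cinv_lam = np.linalg.solve(C, lam_r)
w = Cinv_lam / (lam_r @ Cinv_lam)
print(np.isclose(w @ lam_r, 1.0))       # True: unit-gain constraint

# A perturbed unit-gain filter has larger output variance
v = w + 0.1 * rng.standard_normal(m)
v = v + lam_r * (1.0 - v @ lam_r) / (lam_r @ lam_r)   # re-impose v^T lam = 1
print(w @ C @ w <= v @ C @ v)           # True
```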
The analytical dependence of the inverse operator in λ(r) is qualitatively similar to that of Eq. S4 in the formal case p = 0, and the discussion of spatial leakage properties made in the main text holds unchanged, except that the spatial filter w(r) is at least as smooth as the kernel λ(r). These comments also apply to some generalizations of the beamformer because the qualitative structure of spatial leakage in Eq. S11 does not depend on the data covariance matrix C. For example, the nonstationary beamformer of [17] uses a time-dependent data covariance C(t), making the spatial filter weights given by Eq. S11 vary in time but with the same qualitative dependence in λ(r). They also apply to frequency-domain colored beamformers where C is replaced by the cross-density matrix C(ν) = ⟨φ(ν) φ(ν)†⟩.
Appendix B: Unitary symmetry and spatial structure

We consider transformations of fields defined by a linear kernel,

(Uψ)(r, t) = ∫ d³r′ U(r, r′) ψ(r′, t) , (S15)

and leaving the inner product invariant,

(Uψ_1 | Uψ_2) = (ψ_1 | ψ_2) . (S16)

Since we consider here real-valued fields, this imposes the orthogonality constraint

∫ d³r U(r, r′) U(r, r″) = δ(r′ − r″) . (S17)

The presence of the parameter λ(r) explicitly forbids unitary transformations to be symmetries of the direct and inverse problems. A standard trick is to promote λ(r) to a background field transforming in the same representation, hence restoring symmetry in the direct model since (Uλ | Uψ) = (λ | ψ).
We now consider inverse models that are invariant under unitary transformations ψ → Uψ, λ → Uλ. We also assume that, among its parameters, λ(r) is the only function of r.

Extremizing invariant functionals. Let us suppose that the reconstruction ψ(r, t) = (W_r φ)(t) is defined as the extremum of some unitary invariant functional

V = V[m, s] (S18)

in which ψ(r, t) and λ(r) appear through the invariants m(t) = (λ | ψ(·, t)) and s(t, t′) = (ψ(·, t) | ψ(·, t′)) only. The extremization equation δV/δψ(r, t) = 0 reads

λ(r)^T ∂V/∂m(t) + 2 ∫ dt′ [ ∂V/∂s(t, t′) ] ψ(r, t′) = 0 . (S19)
Applying the inverse of the linear operator in the second term, if it exists, we obtain the implicit equation

ψ(r, t) = λ(r)^T B[t, m, s] , (S20)

for some coefficients B[t, m, s] depending on the derivatives of V, as well as on all other r-independent parameters such as data φ(t). Since this holds whatever φ(t), we obtain Eq. S5 with (Aφ)(t) = B[t, m, s], which is independent of r, as claimed. This argument represents a direct generalization of the derivation of Eq. S4 for p = 2. It shows in particular that physical assumptions about measurement noise η(t) = φ(t) − (Lψ)(t) are irrelevant to this argument, since V_misfit in Eq. S1 can be replaced by any functional of φ(t) and (Lψ)(t). Likewise, the gaussian white noise prior assumption on ψ(r, t) embodied in the L² norm term, Eq. S2 with p = 2, can be waived by using any functional of s(t, t′) for V_prior.
Equivariance constraint. A more explicit result can be obtained by directly solving the general constraints implied by unitary symmetry. The following argument is also slightly more general, in that it does not assume the inverse model to be defined as an extremization problem. The reconstruction is a functional

ψ = F[λ_1, …, λ_m] (S21)

of the m components λ_k(r) of the kernel λ(r). Unitary symmetry imposes that, if ψ is the reconstruction for given functions λ_k, then the reconstruction for the functions Uλ_k must be Uψ. Mathematically, this translates into the property that F is unitary equivariant, i.e., for any linear operator U satisfying Eqs. S15, S17, we have

F[Uλ_1, …, Uλ_m] = U F[λ_1, …, λ_m] . (S22)

This assumption imposes strong constraints on the functional F, which in consequence must be of the form

F[λ_1, …, λ_m] = Σ_{k=1}^m λ_k C_k[(λ^T λ)] , (S23)

as can be shown using classical theorems from invariant theory. Functions C_k of the matrix (λ^T λ) of inner products (λ_j | λ_k) are the most general unitary invariant functionals of the λ_k's, whereas the factors λ_k are necessary for this sum to transform in the fundamental representation. All r-independent model parameters, including time t and data φ(t), enter Eq. S23 through the C_k's only. We thus recover Eq. S5 again, with the k-th component of (Aφ)(t) given by C_k. The operator A is indeed independent of r, since λ(r) only appears via the inner products (λ^T λ), as was explicit in Eq. S6.
Let us notice that the above arguments can be generalized to the case where other parameters transform nontrivially, by promoting them to background fields. Direct models involving other inner products and isometry groups or subgroups can also be considered along these lines.
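The equivariance property Eq. S22 can be illustrated on a discretized minimum L² norm model, for which F is explicitly computable (the grid size and random kernel below are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(7)
n, m, eps = 12, 5, 0.3
lam = rng.standard_normal((n, m))   # columns: kernel components lam_k on the grid
phi = rng.standard_normal(m)

def reconstruct(lam):
    # Minimum L2 norm model: a unitary invariant inverse model (p = 2)
    return np.linalg.solve(eps * np.eye(n) + lam @ lam.T, lam @ phi)

# Random orthogonal U acting on the (discretized) source space, cf. Eq. S17
U, _ = np.linalg.qr(rng.standard_normal((n, n)))

# Equivariance, Eq. S22: F[U lam_1, ..., U lam_m] = U F[lam_1, ..., lam_m]
print(np.allclose(reconstruct(U @ lam), U @ reconstruct(lam)))  # True
```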
Appendix C: Derivations for amplitude correlation between coherent sources

The effect of phase coupling on orthogonalized amplitude correlation was investigated in [14] using simulations of the coherent sources model presented in the main text. We claimed that Eqs. 18, 19 yield a good approximation of their results. In this Appendix, we make the approximation scheme used explicit and then derive these equations using elementary calculations. We also check the validity of this approximation by reproducing simulations of [14]. All probabilities considered here are based on ergodic densities, i.e., temporal averaging.

Let us recall that a_0(t) and a_1(t) are two positive signals with Rayleigh distribution, i.e., their fraction of time spent between values a and a + da is da (a/σ²) exp(−a²/2σ²), σ > 0 (Fig. S1), that φ_0(t) and φ_1(t) are uniformly distributed over [0, 2π), and that all four variables are independent of each other. These assumptions imply in particular that a_k(t) exp iφ_k(t) is distributed according to a complex gaussian with zero mean and variance σ².
Our goal is to compute the correlations

ρ(r, r_0) = corr[ |ψ(r, ·)|, |ψ(r_0, ·)| ] , (S24)

ρ⊥(r, r_0) = corr[ |ψ⊥(r, ·)|, |ψ(r_0, ·)| ] , (S25)

between the amplitudes of ψ(r_0, t) = a_0(t) exp iφ_0(t) and ψ(r, t) given by Eq. 17 or ψ⊥(r, t) given by Eq. 11.
The approximation. Computing these correlations as functions of c and Δφ is a priori difficult. However, the following approximation scheme can be used. All amplitudes having finite variance, e.g., σ_a² = (2 − π/2) σ² for the Rayleigh distribution, they are effectively supported on a region where the amplitude squared can be reasonably approximated by a linear function, as exemplified in Fig. S1. Since correlation is invariant under linear transformations, this yields

ρ(r, r_0) ≈ corr[ |ψ(r, ·)|², |ψ(r_0, ·)|² ] , (S26)

ρ⊥(r, r_0) ≈ corr[ |ψ⊥(r, ·)|², |ψ(r_0, ·)|² ] . (S27)
Derivation of Eq. 18. It will prove useful to observe that ψ(r, t) is gaussian itself with zero mean, since it is given by a linear combination of a_k(t) exp iφ_k(t), see Eq. 17. Using

ψ(r, t) = [ c e^{iΔφ} a_0(t) + √(1 − c²) a_1(t) e^{i(φ_1(t) − φ_0(t))} ] e^{iφ_0(t)} , (S28)

⟨a_0²⟩ = ⟨a_1²⟩ and ⟨exp i(φ_1 − φ_0)⟩ = 0, we also see that

⟨|ψ(r, ·)|²⟩ = ⟨a_k²⟩ , ⟨ψ(r, ·)²⟩ = 0 . (S29)

We conclude that ψ(r, t) actually has same variance, and thus same distribution, as a_k(t) exp iφ_k(t). In particular,

⟨|ψ(r, ·)|^n⟩ = ⟨a_0^n⟩ = ⟨a_1^n⟩ , (S30)
FIG. S1. Linear approximation of squared amplitude. The Rayleigh distribution of a(t) with maximum reached at value σ = 1 is shown together with the function a². The Taylor expansion a² ≈ 2σa − σ² around a = σ is seen to be a good approximation in the shaded interval (green region), which contains about 80% of samples. Over a larger interval, linear regression still yields a reasonable approximation.
for n ≥ 0. Now, using Eq. S28, the mutual independence between a_0(t), a_1(t) and φ_1(t) − φ_0(t), and the identities ⟨a_0²⟩ = ⟨a_1²⟩ and ⟨cos(φ_1 − φ_0 − Δφ)⟩ = 0, we find

⟨|ψ(r, ·)|² a_0²⟩ = ⟨a_0²⟩² + c² σ²_{a_0²} , (S31)

where σ_{a_0²} denotes the standard deviation of a_0(t)². Since the amplitudes of ψ(r_0, t) = a_0(t) e^{iφ_0(t)} and of ψ(r, t) have the same moments according to Eq. S30, this shows that the right-hand side of Eq. S26 equals c². We thus recover Eq. 18.
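A direct Monte Carlo check of Eq. 18 under the coherent sources model (the parameter values below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(8)
N, c, dphi = 400000, 0.7, 0.9          # illustrative parameter values

a0, a1 = rng.rayleigh(1.0, N), rng.rayleigh(1.0, N)   # Rayleigh amplitudes
p0, p1 = rng.uniform(0, 2 * np.pi, N), rng.uniform(0, 2 * np.pi, N)

# Coherent sources model, Eq. S28; psi(r_0, t) = a0 exp(i p0)
psi = (c * np.exp(1j * dphi) * a0
       + np.sqrt(1 - c ** 2) * a1 * np.exp(1j * (p1 - p0))) * np.exp(1j * p0)

rho = np.corrcoef(np.abs(psi), a0)[0, 1]
print(abs(rho - c ** 2) < 0.05)        # True: rho(r, r_0) ~ c^2 (Eq. 18)
```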
Let us notice that the independence in Δφ actually holds exactly. This is seen from Eq. S28, which shows that φ_0(t), φ_1(t) and Δφ only appear through the combination φ_1(t) − φ_0(t) − Δφ, and from the trivial property

∫ dφ_0 dφ_1 f(φ_1 − φ_0 − Δφ) = ∫ dφ_0 dφ_1 f(φ_1 − φ_0) , (S32)

for any function f on the circle. To check the validity of our approximation for ρ(r, r_0) as a function of c, results obtained from Eq. 18 and from direct simulations are plotted in Fig. S2(a). Comparison shows fair agreement and thus indicates that the approximation in Eq. S26 is quite good. Actually, fitting a power law ρ(r, r_0) = c^k on the simulation curve using a log-log linear regression yields k ≈ 2.1.
Derivation of Eq. 19. Let us now focus on Eq. S27. The orthogonalized field ψ⊥(r, t) does not follow the same distribution as ψ(r_0, t) and ψ(r, t). This is seen from its explicit expression

ψ⊥(r, t) = i e^{iφ_0(t)} [ c sin Δφ a_0(t) + √(1 − c²) a_1(t) sin(φ_1(t) − φ_0(t)) ] , (S33)

which is obtained by direct application of Eqs. 11 and 17. Since the rescaling ψ⊥(r, t) → ψ⊥(r, t)/√(1 − c²) leaves the correlation in Eq. S25 unchanged, and since ψ⊥(r, t)/√(1 − c²) depends on c and Δφ through the combination c̃ = c sin Δφ/√(1 − c²), we conclude that ρ⊥(r, r_0) must be a function of c̃ only. Its approximate dependence in c̃ can be obtained by direct computation of the right-hand side of Eq. S27. Calculations are similar to those presented above, although they are slightly more cumbersome. We obtain

⟨|ψ⊥(r, ·)|² a_0²⟩ − ⟨|ψ⊥(r, ·)|²⟩ ⟨a_0²⟩ = c² sin²Δφ σ²_{a_0²} (S34)

for the covariance between squared amplitudes a_0(t)² and |ψ⊥(r, t)|². Correlation is then obtained upon division by their standard deviations σ_{a_0²} and σ_{|ψ⊥(r,·)|²}. For the latter, we find

σ²_{|ψ⊥(r,·)|²} = (1 − c²)² [ (c̃⁴ + 3/8) σ²_{a_0²} + (2 c̃² + 1/8) ⟨a_0²⟩² ] . (S35)
Plugged into Eq. S27 and using the definition of c̃, these results yield

ρ⊥(r, r_0) ≈ c̃² / √( c̃⁴ + 3/8 + (2 c̃² + 1/8) M[a_0²]² ) , (S36)

where M[x] was defined in the main text after Eq. 16. Here, we have M[a_0²] = ⟨a_0²⟩/σ_{a_0²} = 1 for the Rayleigh distribution, from which Eq. 19 finally follows.
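Equation 19 can likewise be compared to a direct simulation of the orthogonalized amplitudes from Eq. S33 (parameter values illustrative; only amplitudes are needed for the correlation):

```python
import numpy as np

rng = np.random.default_rng(9)
N, c, dphi = 400000, 0.6, np.pi / 2
ct = c * np.sin(dphi) / np.sqrt(1 - c ** 2)       # the combination c-tilde

# Eq. 19 (Rayleigh case, M[a0^2] = 1)
rho_formula = ct ** 2 / np.sqrt(ct ** 4 + 3 / 8 + 2 * ct ** 2 + 1 / 8)

# Simulated orthogonalized amplitude |psi_perp| from Eq. S33
a0, a1 = rng.rayleigh(1.0, N), rng.rayleigh(1.0, N)
theta = rng.uniform(0, 2 * np.pi, N)              # theta = phi_1 - phi_0
amp_perp = np.sqrt(1 - c ** 2) * np.abs(ct * a0 + a1 * np.sin(theta))

rho_sim = np.corrcoef(amp_perp, a0)[0, 1]
print(abs(rho_sim - rho_formula) < 0.1)           # True
```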
This analytical result is compared to the simulation in Fig. S2(b), and good agreement is again observed. In particular, the fact that ρ⊥(r, r_0) ≥ ρ(r, r_0) when Δφ = π/2, whose interpretation is given in the main text, is seen to hold both for Eq. 19 and for the simulation.
FIG. S2. Analytical results versus simulations. Comparison of Eqs. 18 (a) and 19 (b) to simulations, in which the parameters 0 ≤ c ≤ 1 and 0 ≤ Δφ ≤ π/2 were varied, shows very good consistency.