IEEE TRANSACTIONS ON , VOL. 1, NO. , 2016

Lagrange Programming Neural Network

Hao Wang, Chi-Sing Leung, Senior Member, IEEE, Hing Cheung So, Fellow, IEEE, Junli Liang, Senior Member, IEEE, Ruibin Feng, and Zifa Han

arXiv:1805.12300v1 [eess.SP] 31 May 2018

Abstract—This paper focuses on target localization in a widely distributed multiple-input multiple-output (MIMO) radar system. In this system, range measurements, which comprise the sum of the distance from a transmitter to the target and the distance from the target to a receiver, are used. We can obtain an accurate estimated position of the target by minimizing the measurement errors. In order to make our model come closer to reality, we introduce two kinds of noise, namely, Gaussian noise and outliers. When we evaluate a target localization algorithm, its localization accuracy and computational complexity are the two main criteria. To improve the positioning accuracy, the original problem is formulated as a non-smooth constrained optimization problem in which the objective function contains either an l1-norm or an l0-norm term. To achieve a real-time solution, the Lagrange programming neural network (LPNN) is utilized to solve this problem. However, it is well known that the LPNN requires a twice-differentiable objective function and constraints. Obviously, the l1-norm or l0-norm term in the objective function does not satisfy this requirement. To address this non-smooth optimization problem, this paper proposes two modifications based on the LPNN framework. In the first method, a differentiable proximate l1-norm function is introduced, while in the second method, the locally competitive algorithm (LCA) is utilized. Simulation and experimental results demonstrate that the proposed algorithms outperform several existing schemes.

Index Terms—Multiple-input multiple-output (MIMO) radar, target localization, Lagrange programming neural network (LPNN), locally competitive algorithm (LCA), outlier.

I. INTRODUCTION

Generally speaking, multiple-input multiple-output (MIMO) radar systems use multiple antennas to transmit multiple signals and employ multiple receivers to receive the echoes from the target [1]. MIMO radar systems can be grouped into two categories, namely, colocated and distributed antennas. The former positions its antennas closely and utilizes waveform diversity for performance improvement, while in the distributed MIMO radar the transmitters and receivers are widely separated from each other and spatial diversity is employed to improve localization accuracy [2]. Compared with traditional radar systems, the MIMO radar, especially the distributed variant, offers many improvements in localization accuracy and robustness against noise. Hence, in this paper, we focus on the target localization problem in a distributed MIMO radar system.

For a distributed MIMO radar system, the target location can be obtained directly or indirectly. In direct approaches, including the maximum likelihood (ML) estimators [1], [3], the position of the target is calculated directly from the measurements collected by the antennas. These methods are based on a two-dimensional (2-D) search, which requires enormous computation. On the other hand, indirect approaches first detect the time-of-arrival (TOA) measurements and then estimate the target position from the TOAs. In this case, ML methods can also be employed to estimate the target position. Generally speaking, ML methods solve a non-convex optimization problem [4]. Hence, the ML formulation is usually transformed into a least squares (LS) problem to reduce complexity [5], [6]. The LS solutions are usually obtained in an iterative manner, and their accuracy depends largely on the initial position estimate. Besides, due to the property of the l2-norm, this method is highly sensitive to outliers. Our proposed algorithms are also based on the LS method, but we provide modifications to avoid the above-mentioned disadvantages.

In this paper, we develop robust target localization algorithms for the distributed MIMO radar system based on the Lagrange programming neural network (LPNN) [2], [7]–[11]. An l1-norm or l0-norm term is applied in the objective function to achieve robustness against outliers. In particular, we focus on the LPNN solver for optimization problems with l1-norm or l0-norm terms. However, the LPNN framework requires that its objective function and constraints be twice differentiable. In the first proposed method, we introduce a differentiable proximate function to replace the l1-norm term in the objective function. In the second method, the internal state concept of the LCA is utilized to convert the non-differentiable components due to the l1-norm or l0-norm into differentiable expressions.

The rest of this paper is organized as follows. The background of MIMO radar target localization, the LPNN and the LCA is described in Section II. In Section III, two target localization algorithms are devised. The local stability of the two approaches is proved in Section IV. Numerical results for algorithm evaluation and comparison are provided in Section V. Finally, conclusions are drawn in Section VI.

Hao Wang, Chi-Sing Leung, Hing Cheung So, Ruibin Feng, and Zifa Han are with the Department of Electronic Engineering, City University of Hong Kong, Hong Kong. Junli Liang is with Northwestern Polytechnical University, Xi'an 710072, China.

II. BACKGROUND

A. Notation

We use a lower-case or upper-case letter to represent a scalar, while vectors and matrices are denoted by bold lower-case and upper-case letters, respectively. The transpose operator is


denoted as (·)^T. Other mathematical symbols are defined at their first appearance.

B. MIMO Radar Localization

A MIMO radar localization system [2], [12] normally includes M transmitters and N receivers in a 2-D space. The positions of the transmitters, the receivers and the target to be detected are expressed as t_i = [x_i^t, y_i^t]^T, i = 1, ..., M, r_j = [x_j^r, y_j^r]^T, j = 1, ..., N, and p = [x, y]^T, respectively. Assume that each transmitter sends out a distinct electromagnetic wave. All these electromagnetic waves are reflected by the target and then collected by the receivers. The propagation time from the transmitter t_i to the target is τ_i^t, while the propagation time between the target and the receiver r_j is τ_j^r. Thus, the distance from the transmitter t_i to the target, and that from the target to the receiver r_j, can be respectively defined as

  d_i^t = ||p − t_i||_2 = sqrt((x_i^t − x)^2 + (y_i^t − y)^2),   (1)
  d_j^r = ||p − r_j||_2 = sqrt((x_j^r − x)^2 + (y_j^r − y)^2).   (2)

The total propagation distances are

  d_{i,j} = d_i^t + d_j^r,  i = 1, ..., M, j = 1, ..., N.   (3)

We see that this system needs to measure M × N distances. However, in practice, noise is almost inevitable. Therefore, the observed propagation distances are

  d̂_{i,j} = c(τ_i^t + τ_j^r) = d_{i,j} + ε_{i,j},   (4)

where i = 1, ..., M, j = 1, ..., N, ε_{i,j} denotes the noise, and c is the speed of light. The aim of this system is to estimate the position of the target p from {t_i}, {r_j} and {d̂_{i,j}}. For simplicity, most off-the-shelf algorithms directly assume that ε_{i,j} obeys a zero-mean Gaussian distribution. In fact, however, impulsive noise and even outliers cannot be avoided in this system. For this reason, in this paper we consider that ε_{i,j} includes zero-mean Gaussian white noise and some outliers.

C. Lagrange Programming Neural Network

The LPNN, introduced in [7], is an analog neural network which is able to solve the general nonlinear constrained optimization problem given by

  min_z f(z)   (5a)
  s.t. h(z) = 0,   (5b)

where z = [z_1, ..., z_n]^T is the variable vector being optimized, f: R^n → R is the objective function, and h: R^n → R^m (m < n) represents m equality constraints. Both f and h should be twice differentiable in the LPNN framework. First, we set up the Lagrangian of (5):

  L(z, ζ) = f(z) + ζ^T h(z),   (6)

where ζ = [ζ_1, ..., ζ_m]^T is the Lagrange multiplier vector. In the LPNN framework, there are n variable neurons and m Lagrangian neurons, which hold the state variable vector z and the Lagrange multiplier vector ζ, respectively. The dynamics of the neurons can be expressed as

  dz/dt = −∂L(z, ζ)/∂z,   (7a)
  dζ/dt = ∂L(z, ζ)/∂ζ.   (7b)

After the neurons settle down at an equilibrium point, the output of the neurons is the solution we want. The purpose of the dynamic in (7a) is to seek a state with the minimum objective value, while (7b) aims to constrain the outputs to the feasible region. The network will settle down at a stable state if several conditions are satisfied [2], [7], [11]. Obviously, f and h should be differentiable; otherwise, the dynamics in (7) cannot be defined.

D. Locally Competitive Algorithm

The LCA [13] is also an analog neural network, designed for solving the following unconstrained optimization problem:

  L_lca = (1/2)||b − Φz||_2^2 + λ||z||_1,   (8)

where z ∈ R^n, b ∈ R^m, Φ ∈ R^{m×n} (m < n). To construct the dynamics of the LCA, we need to calculate the gradient of L_lca with respect to z. However, λ||z||_1 is non-differentiable at the zero point. Thus the gradient of (8) is

  ∂_z L_lca = −Φ^T (b − Φz) + λ∂||z||_1,   (9)

where ∂||z||_1 denotes the sub-differential of ||z||_1. According to the definition of the sub-differential, at the non-differentiable point the sub-differential is equal to a set. (For the absolute function |z|, the sub-differential ∂|z| at z = 0 is equal to [−1, 1].) To handle this issue, the LCA introduces an internal state vector u = [u_1, ..., u_n]^T and defines a relationship between u and z:

  z_i = T_λ(u_i) = { 0,                  |u_i| ≤ λ,
                     u_i − λ sign(u_i),  |u_i| > λ.   (10)

In the LCA, z and u are known as the output state variable and internal state variable vectors, respectively, and λ denotes the threshold of the function. Furthermore, from (10), we can deduce that

  u − z ∈ λ∂||z||_1.   (11)

Hence, the LCA defines its dynamics on u rather than z as

  du/dt = −∂_z L_lca = −u + z + Φ^T (b − Φz).   (12)

It is worth noting that if the dynamics of z were used directly, we would need to calculate ∂||z||_1, which is equal to a set at the zero point. Therefore, the LCA uses du/dt rather than dz/dt.

In [13], a general element-wise threshold function is also proposed, which is described as

  z_i = T_(η,δ,λ)(u_i) = sign(u_i) (|u_i| − δλ) / (1 + e^(−η(|u_i| − λ))),   (13)

where λ still denotes the threshold, η is a parameter used to control the threshold transition rate, and δ ∈ [0, 1] indicates the


adjustment fraction after the internal neuron crosses the threshold [13]. Some examples of this general threshold function are given in Fig. 1.

[Fig. 1: Examples of the general threshold function (13): T_(η,δ,λ)(u) with (η = ∞, δ = 1, λ = 1), (η = 10,000, δ = 0, λ = 1) and (η = 10, δ = 0, λ = 1).]

The general threshold function in (13) is used for solving the unconstrained optimization problem given by

  L̃_lca = (1/2)||b − Φz||_2^2 + λS_(η,δ,λ)(z),   (14)

where S_(η,δ,λ)(z) is a proximate function of the l_p-norm (0 ≤ p ≤ 1) with an important property:

  λ ∂S_(η,δ,λ)(z)/∂z ≡ u − z.   (15)

The exact form of S_(η,δ,λ)(z) cannot be obtained. However, this does not hinder its use, because the neural dynamics are defined in terms of its gradient rather than the penalty term itself. According to the discussion in [13], when η → ∞ and δ = 0, we obtain an ideal hard threshold function given by

  z_i = T_(∞,0,λ)(u_i) = { 0,    |u_i| ≤ λ,
                           u_i,  |u_i| > λ.   (16)

The corresponding penalty term is

  λS_(∞,0,λ)(z) = (1/2) Σ_{i=1}^{n} I(|z_i| > λ),   (17)

where I(·) denotes an indicator function. Obviously, S_(∞,0,λ)(z) is a proximate function of the l0-norm. It is worth noting that the variables z_i produced by the ideal threshold function (16) cannot take values in the ranges [−λ, 0) and (0, λ].

If instead we let η → ∞ and δ = 1, the general threshold function reduces to the soft threshold function,

  z_i = T_(∞,1,λ)(u_i) = T_λ(u_i),   (18)

and the penalty term becomes

  λS_(∞,1,λ)(z) = λ||z||_1.   (19)

More details about the parameter setting of the threshold function can be found in [13]. Besides, the behavior of the dynamics under different settings has been studied in [13]–[15].

III. DEVELOPMENT OF PROPOSED METHOD

A. Problem Formulation

In a traditional TOA system, it is generally assumed that the noise ε_{i,j} (i = 1, ..., M, j = 1, ..., N) follows a Gaussian distribution. According to the LS method [2], [4], the problem can be formulated as

  min_p Σ_{i=1}^{M} Σ_{j=1}^{N} (d̂_{i,j} − ||p − t_i||_2 − ||p − r_j||_2)^2.   (20)

Combining this with the definitions given in (1) and (2), the problem can be rewritten as

  min_{p, d_i^t, d_j^r} Σ_{i=1}^{M} Σ_{j=1}^{N} (d̂_{i,j} − d_i^t − d_j^r)^2,   (21a)
  s.t. d_i^t = ||p − t_i||_2, i = 1, ..., M,   (21b)
       d_j^r = ||p − r_j||_2, j = 1, ..., N.   (21c)

Denote

  d^t = [d_1^t, ..., d_M^t, d_1^t, ..., d_M^t, ..., d_1^t, ..., d_M^t]^T,
  d^r = [d_1^r, ..., d_1^r, d_2^r, ..., d_2^r, ..., d_N^r, ..., d_N^r]^T,
  d̂ = [d̂_{1,1}, ..., d̂_{M,1}, d̂_{1,2}, ..., d̂_{M,2}, ..., d̂_{1,N}, ..., d̂_{M,N}]^T,

which are all MN × 1 vectors. Then (21) can be modified as

  min_{p, d_i^t, d_j^r} ||d̂ − d^t − d^r||_2^2,   (22a)
  s.t. (d_i^t)^2 = ||p − t_i||_2^2, i = 1, ..., M,   (22b)
       (d_j^r)^2 = ||p − r_j||_2^2, j = 1, ..., N,   (22c)
       d_i^t ≥ 0, i = 1, ..., M,   (22d)
       d_j^r ≥ 0, j = 1, ..., N.   (22e)

It is well known that, compared with the l2-norm, the l1-norm is less sensitive to outliers. Hence, in our proposed model, the problem is modified into the following form:

  min_{p, d_i^t, d_j^r} ||d̂ − d^t − d^r||_1,   (23a)
  s.t. (d_i^t)^2 = ||p − t_i||_2^2, i = 1, ..., M,   (23b)
       (d_j^r)^2 = ||p − r_j||_2^2, j = 1, ..., N,   (23c)
       d_i^t ≥ 0, i = 1, ..., M,   (23d)
       d_j^r ≥ 0, j = 1, ..., N.   (23e)

Problem (23) includes M + N equality constraints and M + N inequality constraints. Since the LPNN can only handle problems with equality constraints, the inequality constraints in (23) should be removed. To achieve this, we use the following proposition.

Proposition 1: The optimization problem in (23) is equivalent to

  min_{p, d_i^t, d_j^r} ||d̂ − d^t − d^r||_1,   (24a)
  s.t. (d_i^t)^2 = ||p − t_i||_2^2, i = 1, ..., M,   (24b)
       (d_j^r)^2 = ||p − r_j||_2^2, j = 1, ..., N.   (24c)
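The range model (1)-(4) and the stacked vectors d^t, d^r and d̂ used from (22) onward can be sketched in a few lines of Python. The snippet below reuses two of the Section V transmitter/receiver positions purely as example values and checks that the l1 residual of (23a) vanishes at the true position when there is no noise:

```python
import numpy as np

# Two transmitters / two receivers taken from the Section V geometry (metres).
t = np.array([[-5000.0, 6000.0], [0.0, 7500.0]])        # t_i
r = np.array([[-10000.0, -6000.0], [-9000.0, 5000.0]])  # r_j
p = np.array([-2000.0, 1000.0])                         # true target position

dt = np.linalg.norm(p - t, axis=1)   # d_i^t, eq. (1)
dr = np.linalg.norm(p - r, axis=1)   # d_j^r, eq. (2)

M, N = len(t), len(r)
# Stacking convention for the MN-vectors: entry k = M*(j-1) + i,
# i.e. the transmitter index i varies fastest.
dt_vec = np.tile(dt, N)     # d^t = [d_1^t..d_M^t, d_1^t..d_M^t, ...]
dr_vec = np.repeat(dr, M)   # d^r = [d_1^r..d_1^r, d_2^r..d_2^r, ...]
d_hat = dt_vec + dr_vec     # noise-free bistatic ranges, eqs. (3)-(4)

# The l1 residual of (23a) is (numerically) zero at the true position.
residual = np.abs(d_hat - dt_vec - dr_vec).sum()
```

With noise, d_hat would additionally carry the ε_{i,j} term of (4), and the residual would be minimized over p, d^t and d^r rather than evaluated at the truth.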


To prove that (23) is equivalent to (24), we need to show that the inequality constraints in (23) are unnecessary. Suppose that (p*, d_1^t*, ..., d_M^t*, d_1^r*, ..., d_N^r*) is the optimal solution of (24). According to the reverse triangle inequality, we see that

  Σ_{i=1}^{M} Σ_{j=1}^{N} |d̂_{i,j} − d_i^t* − d_j^r*| ≥ Σ_{i=1}^{M} Σ_{j=1}^{N} (|d̂_{i,j}| − |d_i^t*| − |d_j^r*|).   (25)

As the d̂_{i,j} are distances, all of them must be nonnegative. Thus, (25) can be rewritten as

  Σ_{i=1}^{M} Σ_{j=1}^{N} |d̂_{i,j} − d_i^t* − d_j^r*| ≥ Σ_{i=1}^{M} Σ_{j=1}^{N} (d̂_{i,j} − |d_i^t*| − |d_j^r*|).   (26)

The inequality in (26) implies that the objective value of (23) at the optimal solution (p*, d_1^t*, ..., d_M^t*, d_1^r*, ..., d_N^r*) is greater than or equal to that achieved by the feasible point (p*, |d_1^t*|, ..., |d_M^t*|, |d_1^r*|, ..., |d_N^r*|). Since (p*, d_1^t*, ..., d_M^t*, d_1^r*, ..., d_N^r*) is the optimal solution, equality in (26) must hold. Thus d_i^t* = |d_i^t*| for all i ∈ [1, ..., M], and d_j^r* = |d_j^r*| for all j ∈ [1, ..., N]. In other words, d_i^t* ≥ 0 for all i ∈ [1, ..., M], and d_j^r* ≥ 0 for all j ∈ [1, ..., N]. Hence, we can remove the inequality constraints in (23).

In order to facilitate the analysis of this paper, we introduce a dummy variable z and rewrite (24) as

  min_{p, z, d_i^t, d_j^r} ||z||_1,   (27a)
  s.t. z = d̂ − d^t − d^r,   (27b)
       (d_i^t)^2 = ||p − t_i||_2^2, i = 1, ..., M,   (27c)
       (d_j^r)^2 = ||p − r_j||_2^2, j = 1, ..., N,   (27d)

where z = [z_{1,1}, ..., z_{M,1}, z_{1,2}, ..., z_{M,2}, ..., z_{1,N}, ..., z_{M,N}]^T is a vector with M × N elements.

B. LPNN for MIMO Radar Localization

To obtain a real-time solution of the problem given in (27), we consider using the LPNN framework. However, the LPNN requires that its objective function and constraints all be twice differentiable. Obviously, due to the l1-norm term, the objective function in (27) does not satisfy this requirement. Hence, for handling the gradient of the l1-norm at the non-differentiable point, we propose two approaches.

Method 1: Intuitively, we consider using a differentiable l1-norm proximate function [16]:

  g(x) = ln(cosh(ax)) / a,   (28)

where a > 1 is a scalar. Fig. 2 shows the shapes of g(x) for different a. It is observed that the shape of the proximate function is quite similar to the l1-norm when a is large.

[Fig. 2: The proximate function (1/a) ln(cosh(ax)) for a = 10, 20 and 50, together with |x|.]

With this l1-norm proximate function, the problem in (27) can be rewritten as

  min_{p, z, d_i^t, d_j^r} Σ_{k=1}^{MN} (1/a) ln(cosh(az_k)),   (29a)
  s.t. z = d̂ − d^t − d^r,   (29b)
       (d_i^t)^2 = ||p − t_i||_2^2, i = 1, ..., M,   (29c)
       (d_j^r)^2 = ||p − r_j||_2^2, j = 1, ..., N.   (29d)

After this modification, the objective function is differentiable. It is worth noting that the gradient of (1/a) Σ_{k=1}^{MN} ln(cosh(az_k)) with respect to z_k is equal to tanh(az_k), the hyperbolic tangent function, which is frequently used as an activation function in artificial neural networks.

Theoretically, we can directly use (29) to deduce the LPNN dynamics. However, our preliminary simulation results show that the resulting neural dynamics may not be stable. Hence, we introduce several augmented terms into the objective (29a), after which we have

  min_{p, z, d_i^t, d_j^r} Σ_{k=1}^{MN} (1/a) ln(cosh(az_k)) + (C/2)||z − d̂ + d^t + d^r||_2^2
      + (C/2) Σ_{i=1}^{M} ((d_i^t)^2 − ||p − t_i||_2^2)^2 + (C/2) Σ_{j=1}^{N} ((d_j^r)^2 − ||p − r_j||_2^2)^2,   (30a)
  s.t. z = d̂ − d^t − d^r,   (30b)
       (d_i^t)^2 = ||p − t_i||_2^2, i = 1, ..., M,   (30c)
       (d_j^r)^2 = ||p − r_j||_2^2, j = 1, ..., N,   (30d)

where C is a scalar used for regulating the magnitude of the augmented terms. For any point in the feasible region, the constraints must be satisfied; hence the augmented terms equal 0 at an equilibrium point. They can improve the stability and convexity of the dynamics and do not influence the optimal solution.
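For concreteness, the proximate function (28) and its tanh gradient can be coded as follows. The overflow-safe identity ln cosh x = |x| + ln(1 + e^(−2|x|)) − ln 2 is a numerical implementation choice of ours, not something the paper specifies:

```python
import numpy as np

def l1_proximate(z, a=50.0):
    """Differentiable l1 surrogate g(z) = ln(cosh(a z)) / a from (28).
    ln(cosh(x)) is evaluated as |x| + ln(1 + exp(-2|x|)) - ln(2) so that
    large arguments do not overflow."""
    x = a * np.asarray(z, dtype=float)
    return (np.abs(x) + np.log1p(np.exp(-2.0 * np.abs(x))) - np.log(2.0)) / a

def l1_proximate_grad(z, a=50.0):
    """Gradient of (28): tanh(a z), the hyperbolic-tangent activation."""
    return np.tanh(a * np.asarray(z, dtype=float))
```

For a = 50 the surrogate already tracks |z| to within roughly ln(2)/a ≈ 0.014, which is why a large a makes Fig. 2 nearly indistinguishable from the absolute value.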


The Lagrangian of (30) is

  L(p, z, d_i^t, d_j^r, α, β, λ) = Σ_{k=1}^{MN} (1/a) ln(cosh(az_k)) + α^T (z − d̂ + d^t + d^r)
      + Σ_{i=1}^{M} β_i ((d_i^t)^2 − ||p − t_i||_2^2) + Σ_{j=1}^{N} λ_j ((d_j^r)^2 − ||p − r_j||_2^2)
      + (C/2)||z − d̂ + d^t + d^r||_2^2 + (C/2) Σ_{i=1}^{M} ((d_i^t)^2 − ||p − t_i||_2^2)^2
      + (C/2) Σ_{j=1}^{N} ((d_j^r)^2 − ||p − r_j||_2^2)^2,   (31)

where p, z, d_i^t (i = 1, ..., M) and d_j^r (j = 1, ..., N) are the state variables, and α = [α_1, ..., α_MN]^T, β = [β_1, ..., β_M]^T and λ = [λ_1, ..., λ_N]^T are the Lagrangian variable vectors. According to this Lagrangian and the LPNN concept given in (7), the dynamics are defined as

  dz/dt = −∂L/∂z = −tanh(az) − α − C(z − d̂ + d^t + d^r),   (32)
  dd_i^t/dt = −∂L/∂d_i^t = −Σ_{j=1}^{N} α_{i+(j−1)M} − 2β_i d_i^t − C Σ_{j=1}^{N} (z_{i,j} − d̂_{i,j} + d_i^t + d_j^r) − 2C d_i^t ((d_i^t)^2 − ||p − t_i||_2^2),   (33)
  dd_j^r/dt = −∂L/∂d_j^r = −Σ_{i=1}^{M} α_{i+(j−1)M} − 2λ_j d_j^r − C Σ_{i=1}^{M} (z_{i,j} − d̂_{i,j} + d_i^t + d_j^r) − 2C d_j^r ((d_j^r)^2 − ||p − r_j||_2^2),   (34)
  dp/dt = −∂L/∂p = Σ_{i=1}^{M} 2β_i (p − t_i) + Σ_{j=1}^{N} 2λ_j (p − r_j) + 2C Σ_{i=1}^{M} (p − t_i)((d_i^t)^2 − ||p − t_i||_2^2) + 2C Σ_{j=1}^{N} (p − r_j)((d_j^r)^2 − ||p − r_j||_2^2),   (35)
  dα/dt = ∂L/∂α = z − d̂ + d^t + d^r,   (36)
  dβ_i/dt = ∂L/∂β_i = (d_i^t)^2 − ||p − t_i||_2^2,   (37)
  dλ_j/dt = ∂L/∂λ_j = (d_j^r)^2 − ||p − r_j||_2^2.   (38)

In the above dynamics, i = [1, ..., M] and j = [1, ..., N]. Equations (32)-(35) are used for seeking the minimum objective value, while (36)-(38) restrict the equilibrium point to the feasible region.

Method 2: In this method, we introduce the LCA into the LPNN framework to solve the sub-differentiable problem in (27). First, we again introduce several augmented terms into the objective function to make the system more stable. Thus, (27) can be modified as

  min_{p, z, d_i^t, d_j^r} ||z||_1 + (C/2)||z − d̂ + d^t + d^r||_2^2
      + (C/2) Σ_{i=1}^{M} ((d_i^t)^2 − ||p − t_i||_2^2)^2 + (C/2) Σ_{j=1}^{N} ((d_j^r)^2 − ||p − r_j||_2^2)^2,   (39a)
  s.t. z = d̂ − d^t − d^r,   (39b)
       (d_i^t)^2 = ||p − t_i||_2^2, i = 1, ..., M,   (39c)
       (d_j^r)^2 = ||p − r_j||_2^2, j = 1, ..., N.   (39d)

Its Lagrangian is given by

  L(p, z, d_i^t, d_j^r, α, β, λ) = ||z||_1 + α^T (z − d̂ + d^t + d^r)
      + Σ_{i=1}^{M} β_i ((d_i^t)^2 − ||p − t_i||_2^2) + Σ_{j=1}^{N} λ_j ((d_j^r)^2 − ||p − r_j||_2^2)
      + (C/2)||z − d̂ + d^t + d^r||_2^2 + (C/2) Σ_{i=1}^{M} ((d_i^t)^2 − ||p − t_i||_2^2)^2
      + (C/2) Σ_{j=1}^{N} ((d_j^r)^2 − ||p − r_j||_2^2)^2.   (40)

According to the concept of the LCA, we introduce an internal state vector u = [u_1, ..., u_MN]^T. Combining the dynamics of the LPNN in (7) with the concept of the LCA in (12), we can deduce


  du/dt = −∂L/∂z = −u + z − α − C(z − d̂ + d^t + d^r).   (41)

The dynamics of d_i^t, d_j^r and p, numbered (42), (43) and (44), take exactly the same forms as (33), (34) and (35), and the multiplier dynamics (45), (46) and (47) are identical to (36), (37) and (38). In the above dynamics, i = [1, ..., M] and j = [1, ..., N]. It is worth noting that the relationship between u and z is given by (13). When we set η = ∞, δ = 1 and λ = 1, the mapping given by (13) becomes the soft threshold and u − z ∈ ∂||z||_1.

However, in order to further reduce the influence of outliers, we can replace the l1-norm term in the objective function of (39), which gives

  min_{p, z, d_i^t, d_j^r} S_(10000,0,1)(z) + (C/2)||z − d̂ + d^t + d^r||_2^2
      + (C/2) Σ_{i=1}^{M} ((d_i^t)^2 − ||p − t_i||_2^2)^2 + (C/2) Σ_{j=1}^{N} ((d_j^r)^2 − ||p − r_j||_2^2)^2,   (48a)
  s.t. z = d̂ − d^t − d^r,   (48b)
       (d_i^t)^2 = ||p − t_i||_2^2, i = 1, ..., M,   (48c)
       (d_j^r)^2 = ||p − r_j||_2^2, j = 1, ..., N.   (48d)

This problem can also be solved with the LPNN and LCA. The dynamics are the same as (41)-(47), and the relationship between u and z is still given by (13), while in this case we set η = 10000, δ = 0 and λ = 1. The threshold function in (13) then becomes a proximate hard threshold and ∂S_(10000,0,1)(z) = u − z.

For both methods mentioned above, the dynamics are updated with the following rule:

  u^(l+1) = u^(l) + μ_1 du^(l)/dt,   (49)
  d_i^t(l+1) = d_i^t(l) + μ_2 dd_i^t(l)/dt, i = 1, ..., M,   (50)
  d_j^r(l+1) = d_j^r(l) + μ_3 dd_j^r(l)/dt, j = 1, ..., N,   (51)
  p^(l+1) = p^(l) + μ_4 dp^(l)/dt,   (52)
  α^(l+1) = α^(l) + μ_5 dα^(l)/dt,   (53)
  β_i^(l+1) = β_i^(l) + μ_6 dβ_i^(l)/dt, i = 1, ..., M,   (54)
  λ_j^(l+1) = λ_j^(l) + μ_7 dλ_j^(l)/dt, j = 1, ..., N,   (55)

where (l) denotes the estimate at the l-th iteration and μ_1, ..., μ_7 are the step sizes, which should be positive and not too large, to avoid divergence.

A typical example of the dynamics of the second method is given in Fig. 3. The settings are described in the second experiment of Section V. From Fig. 3, we see that the network can settle down within around 50 characteristic times.

IV. LOCAL STABILITY OF PROPOSED ALGORITHMS

In this section, we prove the local stability of the proposed methods. Local stability means that a minimum point should be stable; otherwise, the network never converges to the minimum.

For method 1, we let x = [z^T, p^T, d_1^t, ..., d_M^t, d_1^r, ..., d_N^r]^T be the decision variable vector and {x*, α*, β*, λ*} be a minimum point of the dynamics given by (32)-(38). According to Theorem 1 in [7], there are two sufficient conditions for local stability in the LPNN approach. The first one is convexity, i.e., the Hessian matrix of the Lagrangian at {x*, α*, β*, λ*} should be positive definite. This has been achieved by introducing the augmented terms: when C is large enough, the Hessian matrix at the minimum point is positive definite under mild conditions [2], [7], [9]–[11].


[Fig. 3: Dynamics of the estimated parameters when the variance of the Gaussian noise is 100 (m²) and the standard deviation of the outliers is 1000 (m). (a) u; (b) z; (c) d_1^t, ..., d_4^t; (d) d_1^r, ..., d_4^r; (e) p; (f) α; (g) β_1, ..., β_4; (h) λ_1, ..., λ_4.]

The second condition is that, at the minimum point, the gradient vectors of the constraints with respect to the decision variable vector should be linearly independent. In our case, we have MN + M + N constraints, namely,

  h_{M(j−1)+i}(p, z, d_i^t, d_j^r) = z_{M(j−1)+i} − d̂_{i,j} + d_i^t + d_j^r,   (56)
  h_{MN+i}(p, z, d_i^t, d_j^r) = (d_i^t)^2 − ||p − t_i||_2^2,   (57)
  h_{MN+M+j}(p, z, d_i^t, d_j^r) = (d_j^r)^2 − ||p − r_j||_2^2,   (58)

where i = 1, ..., M and j = 1, ..., N. Ordering the rows as p (two rows), z (MN rows), d^t (M rows) and d^r (N rows), the gradient vectors of these constraints at the minimum point can be written in block form as

  [∂h_1(x*)/∂x, ..., ∂h_{MN+M+N}(x*)/∂x] =
  [ 0_{2×MN}      2(t_1 − p*) ... 2(t_M − p*)    2(r_1 − p*) ... 2(r_N − p*) ]
  [ I_{MN}        0                              0                           ]
  [ 1_N^T ⊗ I_M   diag(2d_1^t*, ..., 2d_M^t*)    0                           ]
  [ I_N ⊗ 1_M^T   0                              diag(2d_1^r*, ..., 2d_N^r*) ]   (59)

where 1_M denotes the all-one M-vector and 0 denotes a zero block of the appropriate size. There are MN + M + N gradient vectors, each with MN + M + N + 2 elements. The first MN columns are the gradient vectors of the equality constraints (56); obviously, these columns are independent of each other. Similarly, the middle M columns are the gradient vectors of (57), and the last N columns are the gradient vectors of (58); each of these two sets is independent within itself. For the constraints given in (56), the gradients with respect to p* are all equal to 0, while for the constraints given in (57) and (58), as long as the position of the target does not coincide with any transmitter or receiver, the gradients with respect to p* cannot be zero. Hence the first MN columns are independent of the last M + N columns. Besides, it is easy to notice that the middle M columns are independent of the last N columns. Thus, the gradients of the constraints are linearly independent, and the dynamics around a minimum point are stable.

For method 2, x = [u^T, p^T, d_1^t, ..., d_M^t, d_1^r, ..., d_N^r]^T. We need to show that, at a local minimum point {x*, α*, β*, λ*}, the gradient vectors of the constraints are linearly independent. The proof is basically similar to that of the first method, except that the decision variable z is replaced by u.
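The linear-independence argument for (59) can also be checked numerically on a small instance. The sketch below assembles the constraint gradients (56)-(58) for a hypothetical M = N = 2 geometry (positions are made-up values), using the variable ordering x = [p, z, d^t, d^r], and verifies that the matrix has full column rank MN + M + N:

```python
import numpy as np

M = N = 2
t = np.array([[1.0, 0.0], [0.0, 2.0]])      # hypothetical transmitters
r = np.array([[-1.0, 0.0], [0.0, -2.0]])    # hypothetical receivers
p = np.array([0.3, 0.4])                    # target, distinct from every t_i, r_j
dt = np.linalg.norm(p - t, axis=1)
dr = np.linalg.norm(p - r, axis=1)

n = 2 + M * N + M + N                       # rows: p (2), z (MN), d^t (M), d^r (N)
cols = []
for j in range(N):                          # constraints (56), k = M*j + i (0-based)
    for i in range(M):
        g = np.zeros(n)
        g[2 + M * j + i] = 1.0              # d/dz_k
        g[2 + M * N + i] = 1.0              # d/d d_i^t
        g[2 + M * N + M + j] = 1.0          # d/d d_j^r
        cols.append(g)
for i in range(M):                          # constraints (57)
    g = np.zeros(n)
    g[:2] = 2.0 * (t[i] - p)                # d/dp
    g[2 + M * N + i] = 2.0 * dt[i]
    cols.append(g)
for j in range(N):                          # constraints (58)
    g = np.zeros(n)
    g[:2] = 2.0 * (r[j] - p)
    g[2 + M * N + M + j] = 2.0 * dr[j]
    cols.append(g)

G = np.stack(cols, axis=1)                  # (MN+M+N+2) x (MN+M+N)
rank = np.linalg.matrix_rank(G)
```

Here rank equals MN + M + N = 8. Placing p exactly on a transmitter or receiver would zero the corresponding diagonal entry and p-gradient, breaking the independence, which matches the "does not coincide" condition above.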


The gradient vectors of the constraints at the minimum point are then given by

  [∂h_1(x*)/∂x, ..., ∂h_{MN+M+N}(x*)/∂x] =
  [ 0_{2×MN}                 2(t_1 − p*) ... 2(t_M − p*)    2(r_1 − p*) ... 2(r_N − p*) ]
  [ diag(g_1, ..., g_MN)     0                              0                           ]
  [ 1_N^T ⊗ I_M              diag(2d_1^t*, ..., 2d_M^t*)    0                           ]
  [ I_N ⊗ 1_M^T              0                              diag(2d_1^r*, ..., 2d_N^r*) ]   (60)

where k = M(j−1)+i and

  g_k = ∂h_k(x*)/∂z_k · ∂z_k/∂u_k = 1/(1 + exp(−η(|u_k| − 1))) + η(|u_k| − δ) exp(−η(|u_k| − 1)) / (1 + exp(−η(|u_k| − 1)))^2.

For the case with the l1-norm objective function, η → ∞ and δ = 1. If we assume z_i ≠ 0, in other words, that all data points are influenced by noise, then g_k = 1 for all k = 1, ..., MN. When the proximate l0-norm objective function is used, we let η be a large positive number and δ = 0; without any assumption, we can deduce that g_k is a positive constant for all k = 1, ..., MN. Next, similarly to the proof process of method 1, we can show that a minimum point of the second method has local stability.

V. NUMERICAL EXAMPLES

In this section, we conduct several simulations and experiments to test the performance of our proposed algorithms. First, we discuss the parameter settings and the initialization. In our MIMO radar localization system, 4 transmitters and 4 receivers are used, i.e., M = N = 4. Their positions are t_1 = [−5000, 6000]^T m, t_2 = [0, 7500]^T m, t_3 = [10500, 0]^T m, t_4 = [6000, 4000]^T m and r_1 = [−10000, −6000]^T m, r_2 = [−9000, 5000]^T m, r_3 = [0, 4200]^T m, r_4 = [6400, −8000]^T m, respectively. The true position of the target is p = [−2000, 1000]^T m. The geometry of the transmitters, receivers and target is shown in Fig. 4.

[Fig. 4: The configuration of transmitters, receivers and target.]

For the proposed approaches, we set C = 20 and step sizes μ_1 = μ_4 = μ_5 = μ_6 = μ_7 = 10^−3, μ_2 = μ_3 = 10^−5. The initial values of the variables p, z, d_i^t (i = 1, ..., M), d_j^r (j = 1, ..., N), α = [α_1, ..., α_MN], β = [β_1, ..., β_M] and λ = [λ_1, ..., λ_N] are small random values, and we set a = 50 in the first method. Two state-of-the-art algorithms are implemented for performance comparison: the target localization algorithm described in [2] and the robust target localization algorithm given in [12]. The former is also based on the LPNN framework, but it assumes that the noise follows a Gaussian distribution and uses the l2-norm in its objective function. The latter is a robust target localization algorithm for MIMO radar systems; it introduces the maximum correntropy criterion (MCC) into the conventional ML method to deal with outliers, and applies the half-quadratic optimization technique to handle the corresponding nonconvex nonlinear function.

A. Experiment 1: Target Localization in Gaussian Noise

In the first experiment, we test the root mean squared error (RMSE) performance of the proposed algorithms in a Gaussian noise environment without introducing any outliers. The standard deviation of the Gaussian noise varies from 1 to 10². For each noise level, we repeat the experiment 100 times. The results are shown in Fig. 5.

In Fig. 5, CRLB is short for the Cramér-Rao lower bound, which is a lower bound on the variance of any unbiased estimator [2]; l2-norm LPNN denotes the algorithm given in [2]; l1-norm-like LPNN is our proposed method 1; l1-norm LPNN LCA and l0-norm LPNN LCA represent our proposed method 2 with the l1-norm and l0-norm objective functions, respectively; MCC is the robust algorithm given in [12]. In Fig. 5, we see that the performance of our proposed

IEEE TRANSACTIONS ON , VOL. 1, NO. , 2016 9

2 5

10 10

l2−norm LPNN

l −norm−like LPNN

1

4

10 l −norm LPNN LCA

1

l0−norm LPNN LCA

1

10

MCC

RMSE (m)

RMSE(m)

10

CRLB 2

l2−norm LPNN 10

0

10

l1−norm−like LPNN

l1−norm LPNN LCA 1

10

l0−norm LPNN LCA

MCC

−1 0

10 10

0 1 2 2 3 4 5

10 10 10 10 10 10 10

Standard deviation of the Gaussian noise (m) Standard deviation of the Laplace noise (m)

Fig. 5: The RMSE results of different algorithms. The standard Fig. 6: The RMSE results of different algorithms. The standard

deviation of the Gaussian noise ranges from 1 to 102 . deviation of the exponential distribution (outlier level) is varied

from 102 m to 105 m.

two robust algorithms is closed to the CRLB in Gaussian noise

environment. They are better than the robust algorithm given 10

5

which is based on the Gaussian noise model. 4

10
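As an illustration of the Experiment-1 protocol (measurement generation and RMSE evaluation), the following minimal sketch simulates the bistatic range sums d_ij = ||t_i − p|| + ||p − r_j|| under Gaussian noise at one fixed noise level (σ = 10 m) and measures the RMSE of a plain Gauss-Newton least-squares fit. The Gauss-Newton step is only a stand-in for the LPNN dynamics, and all antenna positions other than t1, the target position, and the helper names `ranges`/`gauss_newton` are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Geometry: M = N = 4. Only t1 = [-5000, 6000] m is stated in this extract;
# the remaining antenna positions and the target are hypothetical placeholders.
tx = np.array([[-5000, 6000], [4000, 7000], [6000, -6000], [-7000, -5000]], float)
rx = np.array([[5000, 5000], [-6000, 4000], [-4000, -7000], [7000, 6000]], float)
p_true = np.array([2000.0, 3000.0])

def ranges(p):
    """Bistatic range sums d_ij = ||t_i - p|| + ||p - r_j||, flattened to M*N."""
    dt = np.linalg.norm(tx - p, axis=1)   # transmitter-to-target distances
    dr = np.linalg.norm(rx - p, axis=1)   # target-to-receiver distances
    return (dt[:, None] + dr[None, :]).ravel()

def gauss_newton(d_meas, p0, iters=50):
    """Least-squares position fit (a simple stand-in for the LPNN estimator)."""
    p = p0.copy()
    for _ in range(iters):
        dt = np.linalg.norm(tx - p, axis=1)
        dr = np.linalg.norm(rx - p, axis=1)
        # Jacobian of each range sum w.r.t. p: unit vectors from tx and rx legs.
        J = ((p - tx) / dt[:, None])[:, None, :] + ((p - rx) / dr[:, None])[None, :, :]
        J = J.reshape(-1, 2)
        step = np.linalg.lstsq(J, d_meas - ranges(p), rcond=None)[0]
        p += step
        if np.linalg.norm(step) < 1e-6:
            break
    return p

sigma, trials = 10.0, 100     # one Gaussian noise level, 100 repetitions
err2 = 0.0
for _ in range(trials):
    d_meas = ranges(p_true) + rng.normal(0.0, sigma, tx.shape[0] * rx.shape[0])
    p_hat = gauss_newton(d_meas, p0=np.array([0.0, 0.0]))
    err2 += np.sum((p_hat - p_true) ** 2)
print("RMSE (m):", np.sqrt(err2 / trials))
```

Sweeping `sigma` from 1 to 10^2 and repeating this loop per level reproduces the shape of the Experiment-1 evaluation, though not the LPNN estimates themselves.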

B. Experiment 2: Target Localization in Gaussian Noise with Some NLOS Outliers

In the second experiment, the settings are similar to the first one, but we fix the variance of the Gaussian noise to 100 and introduce outliers into the measurement matrix. We assume that there exists non-line-of-sight (NLOS) propagation between one transmitter and the target, or between the target and one receiver; thus all measurements associated with this transmitter or receiver include NLOS outliers [17], [18]. In this experiment, we randomly choose one of the transmitters or receivers and add NLOS outliers to its relevant measured values. In other words, if the measurements are seen as a 4 × 4 matrix, one of its columns or rows is influenced by NLOS outliers. The outliers are generated by an exponential distribution. We then conduct the experiments under different outlier levels. For each outlier level, we again repeat the experiment 100 times. The results are shown in Fig. 6.

From Fig. 6, we see that our first method and our second method with the l1-norm objective have decent performance when the outlier level (the standard deviation of the exponential distribution) is less than 10^4.5; both of them break down when the outlier level is 10^5. The second method with the l0-norm objective function may not handle low-level outliers very well, but it is very effective in reducing the influence of high-level outliers. Conversely, the algorithm given by [2] is very sensitive to outliers. The robust target localization algorithm [12] can also effectively reduce the influence of outliers, but its performance is worse than that of our proposed methods.

In the following experiment, we randomly choose one transmitter and one receiver, and then add NLOS outliers into their relevant measurements. Thus 7 elements in the 4 × 4 measurement matrix include outliers. The results are shown in Fig. 7.

Fig. 7: The RMSE results of different algorithms. The standard deviation of the exponential distribution (outlier level) is varied from 10^2 m to 10^5 m.

From Fig. 7, we see that the robust target localization algorithm given by [12] and our proposed algorithms can reduce the influence of outliers in this case. However, due to the high proportion of outliers, none of the mentioned algorithms gives satisfactory results; among them, the performance of l0-norm LPNN LCA is still the best. We then add one more transmitter, t5 = [−6000, −5000]^T m, and one more receiver, r5 = [8000, 600]^T m. After that, the geometry of transmitters and receivers is as depicted in Fig. 8. We again randomly choose one transmitter and one receiver and add NLOS outliers into their relevant measurements; thus 9 elements in the 5 × 5 measurement matrix contain outliers. The results are shown in Fig. 9.

From Fig. 9, we see that the performance of our proposed algorithms is better than that of the others and that, generally speaking, l0-norm LPNN LCA is the best.
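The NLOS contamination pattern described above can be sketched as follows. The noise-free range sums are omitted for brevity (entries start as pure Gaussian noise), and the outlier scale chosen here is one illustrative level, not a value from the paper; the count of 7 contaminated entries for the one-transmitter-plus-one-receiver case is just M + N − 1.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 4, 4          # 4 transmitters, 4 receivers
sigma = 10.0         # Gaussian noise std (variance 100, as in the text)
scale = 1e3          # outlier level: std of the exponential distribution, in m

# Baseline measurement matrix: Gaussian noise only (noise-free range sums
# are left out of this sketch).
d = rng.normal(0.0, sigma, size=(M, N))

# Case 1: NLOS propagation at one transmitter -> its whole row receives
# positive, exponentially distributed outliers (one row of the 4 x 4 matrix).
i = int(rng.integers(M))
d[i, :] += rng.exponential(scale, size=N)

# Case 2: NLOS at one transmitter AND one receiver -> the union of one row
# and one column is contaminated.
j = int(rng.integers(N))
mask = np.zeros((M, N), dtype=bool)
mask[i, :] = True
mask[:, j] = True
print("entries with outliers:", int(mask.sum()))  # M + N - 1 = 7
```

With the enlarged M = N = 5 geometry the same union covers 5 + 5 − 1 = 9 of the 25 entries, matching the second NLOS scenario.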

Fig. 8: The configuration of transmitters, receivers and target.

Fig. 9: The RMSE results of different algorithms. The standard deviation of the exponential distribution (outlier level) is varied from 10^2 m to 10^5 m.

C. Experiment 3: Target Localization in Gaussian Noise with Some SINR Outliers

In the third experiment, we evaluate the performance of our proposed algorithms in a low signal-to-interference-plus-noise ratio (SINR) environment. This experiment is based on the MIMO radar system given in Fig. 4. First, we set the variance of the Gaussian noise to 100 and introduce outliers, which are generated to model the low SINR environment. Assume that 5 measurement values, d̂1,2, d̂2,3, d̂3,4, d̂4,1, and d̂1,4, are influenced by SINR outliers. The SINR outliers can be both negative and positive; they are generated by a Laplace distribution in this experiment. The standard deviation of the Laplace distribution ranges from 2 × 10^2 m to 2 × 10^5 m. At each outlier level, we again repeat the experiment 100 times. The results are shown in Fig. 10.

Fig. 10: The RMSE results of different algorithms under SINR outliers. The variance of the Gaussian noise is equal to 100. The standard deviation of the Laplace noise ranges from 2 × 10^2 m to 2 × 10^5 m.

From Fig. 10, we see that in the low SINR environment the performance of our proposed algorithms is also superior to that of the others. The performance of l0-norm LPNN LCA is still the best.

VI. CONCLUSION

In this paper, two algorithms for solving the target localization problem in a distributed MIMO radar system were developed. To alleviate the influence of outliers, we utilize the properties of the lp-norm (p = 1 or p = 0) and redesign the objective function of the original optimization problem. To achieve a real-time solution, both proposed algorithms are based on the concept of the LPNN. Since the objective function of the proposed mathematical model is non-differentiable, two approaches are devised to handle it. In the first method, we use a differentiable function to approximate the l1-norm in the objective function. In the second method, we combine the LCA with the LPNN framework and propose an algorithm that handles the non-differentiability of not only the l1-norm but also the l0-norm. The experiments demonstrate that the proposed algorithms can effectively reduce the influence of outliers and that they are superior to several state-of-the-art MIMO radar target localization methods.

REFERENCES

[1] H. Godrich, A. M. Haimovich, and R. S. Blum, "Target localization accuracy gain in MIMO radar-based systems," IEEE Transactions on Information Theory, vol. 56, no. 6, pp. 2783–2803, 2010.
[2] J. Liang, C. S. Leung, and H. C. So, "Lagrange programming neural network approach for target localization in distributed MIMO radar," IEEE Transactions on Signal Processing, vol. 64, no. 6, pp. 1574–1585, Mar. 2016.
[3] O. Bar-Shalom and A. J. Weiss, "Direct positioning of stationary targets using MIMO radar," Signal Processing, vol. 91, no. 10, pp. 2345–2358, 2011.
[4] B. K. Chalise, Y. D. Zhang, M. G. Amin, and B. Himed, "Target localization in a multi-static passive radar system through convex optimization," Signal Processing, vol. 102, pp. 207–215, 2014.
[5] Y. Huang, J. Benesty, G. W. Elko, and R. M. Mersereau, "Real-time passive source localization: A practical linear-correction least-squares approach," IEEE Transactions on Speech and Audio Processing, vol. 9, no. 8, pp. 943–956, 2001.


[6] …, "… source localization problems," in Proc. 2004 International Conference on Parallel Processing Workshops (ICPP 2004). IEEE, 2004, pp. 443–446.
[7] S. Zhang and A. Constantinides, "Lagrange programming neural networks," IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing, vol. 39, no. 7, pp. 441–452, 1992.
[8] M. Nagamatu and T. Yanaru, "On the stability of Lagrange programming neural networks for satisfiability problems of propositional calculus," Neurocomputing, vol. 13, no. 2, pp. 119–133, 1996.
[9] X. Zhu, S.-W. Zhang, and A. G. Constantinides, "Lagrange neural networks for linear programming," J. Parallel Distrib. Comput., vol. 14, no. 3, pp. 354–360, Mar. 1992.
[10] V. Sharma, R. Jha, and R. Naresh, "An augmented Lagrange programming optimization neural network for short term hydroelectric generation scheduling," Engineering Optimization, vol. 37, pp. 479–497, Jul. 2005.
[11] J. Liang, H. C. So, C. S. Leung, J. Li, and A. Farina, "Waveform design with unit modulus and spectral shape constraints via Lagrange programming neural network," IEEE Journal of Selected Topics in Signal Processing, vol. 9, no. 8, pp. 1377–1386, 2015.
[12] J. Liang, D. Wang, L. Su, B. Chen, H. Chen, and H. C. So, "Robust MIMO radar target localization via nonconvex optimization," Signal Processing, vol. 122, pp. 33–38, 2016.
[13] C. J. Rozell, D. H. Johnson, R. G. Baraniuk, and B. A. Olshausen, "Sparse coding via thresholding and local competition in neural circuits," Neural Computation, vol. 20, no. 10, pp. 2526–2563, 2008.
[14] A. Balavoine, C. J. Rozell, and J. Romberg, "Global convergence of the locally competitive algorithm," in Proc. IEEE Digital Signal Processing Workshop and IEEE Signal Processing Education Workshop (DSP/SPE), 2011, pp. 431–436.
[15] A. Balavoine, J. Romberg, and C. J. Rozell, "Convergence and rate analysis of neural networks for sparse approximation," IEEE Transactions on Neural Networks and Learning Systems, vol. 23, no. 9, pp. 1377–1389, 2012.
[16] C.-S. Leung, J. Sum, and A. G. Constantinides, "Recurrent networks for compressive sampling," Neurocomputing, vol. 129, pp. 298–305, 2014.
[17] J.-F. Nouvel and M. Lesturgie, "Study of NLOS detection over urban area at Ka band through multipath exploitation," in Proc. 2014 International Radar Conference. IEEE, 2014, pp. 1–5.
[18] P. Setlur, T. Negishi, N. Devroye, and D. Erricolo, "Multipath exploitation in non-LOS urban synthetic aperture radar," IEEE Journal of Selected Topics in Signal Processing, vol. 8, no. 1, pp. 137–152, 2014.

