
11.9 Least Squares With Equality Constraints


The idea underlying weighted least squares is that observations with large weights shall influence the solution more than observations with small weights. Extending this idea to the extreme, giving an observation infinite weight implies that its observation equation will be fulfilled (approximately) exactly. In other words, we have a way of introducing linear constraints into the least-squares problem: observation equations with large weights act as linear constraints. This observation has been made by many geodesists.

Often constraints follow from a mathematical or physical model. In the simplest form, certain prescribed coordinate values shall be left unchanged by the least-squares procedure. Much literature has been written about all possible relations between the number of observations, the number of constraints, the number of unknowns, and the rank of the matrices involved. However, we restrict ourselves to the ordinary case with more observations than unknowns, and with as many constraints as the rank of the matrix B describing the constraints.

Exact Solution
Our starting point is again the observation equations $Ax = b - e$, augmented with $p$ constraints described through $Bx = d$. We assume that the system $Bx = d$ is consistent. The problem is

$$\min_{Bx=d} \|Ax - b\|^2. \qquad (11.1)$$

Let $B^T$ have the QR factorization

$$Q^T B^T = \begin{bmatrix} R \\ 0 \end{bmatrix} \begin{matrix} p \\ n-p \end{matrix}$$

and set

$$AQ = \begin{bmatrix} A_1 & A_2 \end{bmatrix} \qquad \text{and} \qquad Q^T x = \begin{bmatrix} y \\ z \end{bmatrix} \begin{matrix} p \\ n-p \end{matrix}$$

where $A_1$ has $p$ columns and $A_2$ has $n-p$ columns. We get $B = R^T Q^T$ and $Bx = R^T Q^T x = R^T y$; and $(AQ)(Q^T x) = Ax = A_1 y + A_2 z$.


With these transformations (11.1) becomes

$$\min_{R^T y = d} \|A_1 y + A_2 z - b\|. \qquad (11.2)$$

Hence $y$ is determined from $R^T y = d$, and $z$ is obtained by solving the unconstrained least-squares problem

$$\min_z \|A_2 z - (b - A_1 y)\|. \qquad (11.3)$$

With known $y$ and $z$ the solution $x$ becomes

$$x = Q \begin{bmatrix} y \\ z \end{bmatrix}.$$
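The QR route above is only a few lines of code. A minimal NumPy sketch follows; the book's own implementation is the MATLAB M-file clsq, so the function name `clsq_qr` and this Python rendering are ours, not the book's:

```python
import numpy as np

def clsq_qr(A, b, B, d):
    """Solve min ||Ax - b|| subject to Bx = d by the QR route above.

    Assumes B (p by n) has full row rank p and Bx = d is consistent,
    as in the text. A sketch, not the book's clsq M-file.
    """
    p = B.shape[0]
    # Full QR factorization of B^T:  Q^T B^T = [R; 0], R is p by p.
    Q, R_full = np.linalg.qr(B.T, mode="complete")
    R = R_full[:p, :p]
    # Split AQ = [A1 A2] and Q^T x = [y; z].
    AQ = A @ Q
    A1, A2 = AQ[:, :p], AQ[:, p:]
    # y is fixed by the constraints: triangular solve of R^T y = d.
    y = np.linalg.solve(R.T, d)
    # z solves the unconstrained problem min ||A2 z - (b - A1 y)||.
    z, *_ = np.linalg.lstsq(A2, b - A1 @ y, rcond=None)
    # Assemble x = Q [y; z].
    return Q @ np.concatenate([y, z])
```

The returned $x$ satisfies $Bx = d$ up to roundoff, while the remaining $n-p$ degrees of freedom minimize the residual norm.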


Alternatively we want to derive a solution using the SVD. We face a new issue, as we need a simultaneous SVD of both $A$ and $B$. The result is called a generalized singular value decomposition (GSVD). We introduce the compound matrix

$$M = \begin{bmatrix} A \\ B \end{bmatrix} \begin{matrix} m \\ p \end{matrix}$$

with $n$ columns.

Van Loan proved that there exist an $m$ by $m$ orthogonal matrix $U$, a $p$ by $p$ orthogonal matrix $V$, a (usually) square matrix $X$, and nonnegative diagonal matrices $C$ and $S$ so that

$$A = U C X^T \qquad (11.4)$$

$$B = V S X^T \qquad (11.5)$$

$$C^2 + S^2 = I. \qquad (11.6)$$

The matrix $X$ is $n$ by $s$ where $s = \min\{m + p, n\}$.


The diagonal entries are ordered as follows: $0 = C_{11} = \cdots = C_{qq} < C_{q+1,q+1} \le \cdots \le C_{pp} < C_{p+1,p+1} = \cdots = C_{nn} = 1$, and $S_{11} \ge \cdots \ge S_{pp} > 0$. For $i = p+1, \dots, n$ we have $S_{ii} = 0$, and from (11.6) it follows that $C_{ii} = 1$ for the same indices. The generalized singular values are $C_{ii}/S_{ii}$. In MATLAB the general call is [U, V, X, C, S] = gsvd(A,B). Here the nonzero entries of $S$ are always on its main diagonal. If $m \ge n$ the nonzero entries of $C$ are also on its main diagonal. But if $m < n$, the nonzero diagonal of $C$ is diag($C$, $n-m$). This allows the diagonal entries to be ordered so that the generalized singular values are nondecreasing. The number of singular values of $X$ equals the number of nonzero singular values of $M$. We apply this result to problem (11.1):

$$\min_x \|Ax - b\| \quad \text{subject to} \quad Bx = d.$$

Multiplying by the orthogonal matrix $U^T$ inside the norm does not change the length; the constraints are multiplied by $V^T$:

$$\min_x \|U^T A x - U^T b\| \quad \text{subject to} \quad V^T B x = V^T d.$$

From (11.4) and (11.5) follows $U^T A = C X^T$ and $V^T B = S X^T$, and we get

$$\min_x \|C X^T x - U^T b\| \quad \text{subject to} \quad S X^T x = V^T d.$$

Introducing a column-oriented notation for the matrices $U = [\,u_1, \dots, u_m\,]$, $V = [\,v_1, \dots, v_p\,]$, and $X = [\,x_1, \dots, x_n\,]$, we obtain

$$(X^T x)_i = \begin{cases} \dfrac{v_i^T d}{S_{ii}} & \text{for } i = 1, \dots, p \\[2ex] \dfrac{u_i^T b}{C_{ii}} & \text{for } i = p+1, \dots, n. \end{cases}$$

Now let $X^{-T} = W = [\,w_1, \dots, w_n\,]$; then the solution $x_c$ to (11.1) is

$$x_c = \sum_{i=1}^{p} \frac{v_i^T d}{S_{ii}}\, w_i + \sum_{i=p+1}^{n} \frac{u_i^T b}{C_{ii}}\, w_i. \qquad (11.7)$$

For reasons of comparison we quote the ordinary solution $x$ without constraints:

$$x = \sum_{i=q+1}^{p} \frac{u_i^T b}{C_{ii}}\, w_i + \sum_{i=p+1}^{n} u_i^T b\, w_i. \qquad (11.8)$$

It is possible to compute how much the constraints increase the residuals. The constrained minus unconstrained residuals are

$$e_c - e = A(x - x_c).$$

Remember that $A w_i = C_{ii} u_i$ for $i = 1, \dots, n$; then by using (11.7) and (11.8) we get

$$\|e_c\|^2 = \|e\|^2 + \sum_{i=q+1}^{p} \left( u_i^T b - \frac{C_{ii}}{S_{ii}}\, v_i^T d \right)^{\!2}.$$

Both the QR-decomposition solution and the generalized-singular-value-decomposition solution are coded in the M-file clsq.
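The fact that constraints can only increase the residual norm is easy to check numerically. The sketch below obtains the constrained solution from its KKT (bordered normal-equation) system, which is an equivalent formulation of (11.1) but is not the book's clsq M-file; all variable names here are our own:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((10, 5))   # more observations than unknowns
b = rng.standard_normal(10)
B = rng.standard_normal((2, 5))    # p = 2 full-rank constraints
d = rng.standard_normal(2)
n, p = 5, 2

# Unconstrained least-squares solution and its residual e = b - Ax.
x_u, *_ = np.linalg.lstsq(A, b, rcond=None)
e = b - A @ x_u

# Constrained solution x_c from the KKT system
#   [A^T A  B^T] [x_c ]   [A^T b]
#   [ B      0 ] [lam ] = [  d  ]
K = np.block([[A.T @ A, B.T],
              [B, np.zeros((p, p))]])
x_c = np.linalg.solve(K, np.concatenate([A.T @ b, d]))[:n]
e_c = b - A @ x_c

# The constrained residual is never shorter than the unconstrained one.
print(np.linalg.norm(e_c) >= np.linalg.norm(e))   # True
```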

Approximate Solution
Next we solve the unconstrained least-squares problem

$$\min_x \left\| \begin{bmatrix} A \\ \lambda B \end{bmatrix} x - \begin{bmatrix} b \\ \lambda d \end{bmatrix} \right\|$$

for large $\lambda$. We find the solution in terms of the GSVD and reuse some of the above notation. We start by forming the normal equations

$$(A^T A + \lambda^2 B^T B)\, x = A^T b + \lambda^2 B^T d$$

and substitute (11.4) and (11.5):

$$(X C^T U^T U C X^T + \lambda^2 X S^T V^T V S X^T)\, x = X C^T U^T b + \lambda^2 X S^T V^T d$$

or

$$(X C^T C X^T + \lambda^2 X S^T S X^T)\, x = X C^T U^T b + \lambda^2 X S^T V^T d$$

$$(C^T C + \lambda^2 S^T S)\, X^T x = C^T U^T b + \lambda^2 S^T V^T d.$$
The solution is

$$x(\lambda) = \sum_{i=1}^{p} \frac{C_{ii}\, u_i^T b + \lambda^2 S_{ii}\, v_i^T d}{C_{ii}^2 + \lambda^2 S_{ii}^2}\, w_i + \sum_{i=p+1}^{n} \frac{u_i^T b}{C_{ii}}\, w_i. \qquad (11.9)$$


Subtracting (11.7) from (11.9) yields

$$x(\lambda) - x_c = \sum_{i=1}^{p} \frac{C_{ii}\,(S_{ii}\, u_i^T b - C_{ii}\, v_i^T d)}{S_{ii}\,(C_{ii}^2 + \lambda^2 S_{ii}^2)}\, w_i. \qquad (11.10)$$

For $\lambda \to \infty$ we have $x(\lambda) \to x_c$. The appeal of the method of weighting is that it does not call for special subroutines: an ordinary least-squares solver will do. We have implemented the procedure as the M-file wlsq.
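In the same spirit, the weighting idea can be sketched with an ordinary least-squares solver. The book's wlsq is a MATLAB M-file; this Python `wlsq` function and the test weights below are our own illustration:

```python
import numpy as np

def wlsq(A, b, B, d, lam):
    """Approximate min ||Ax - b|| s.t. Bx = d by ordinary least squares
    on the stacked system [A; lam*B] x ~ [b; lam*d], lam large."""
    M = np.vstack([A, lam * B])
    rhs = np.concatenate([b, lam * d])
    x, *_ = np.linalg.lstsq(M, rhs, rcond=None)
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((8, 4)); b = rng.standard_normal(8)
B = rng.standard_normal((2, 4)); d = rng.standard_normal(2)

# By (11.10) the error x(lam) - x_c decays like 1/lam^2, so the
# constraint violation ||B x - d|| shrinks rapidly as lam grows.
for lam in (1e2, 1e4, 1e6):
    x = wlsq(A, b, B, d, lam)
    print(lam, np.linalg.norm(B @ x - d))
```

Note that very large weights make the stacked matrix ill-conditioned, which is why the exact QR or GSVD solutions are preferred when the constraints must hold to machine precision.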
The present presentation relies heavily on Golub & Van Loan and Åke Björck.
