
Biometry 970

Linear Models
S. D. Kachman
Department of Biometry
University of Nebraska-Lincoln

http://www.ianr.unl.edu/ianr/biometry/faculty/steve/970/1999

Distributions

Outline

Linear Transformations, Quadratic Forms, Bilinear Forms


Positive (semi-)definite matrices
Distributions
Multivariate distributions
Linear Transformation
Normal Distribution
Central χ², F, and t
Non-Central
Distribution of Quadratic Forms
Moments
Independence


Linear Transformations, Quadratic Forms, Bilinear Forms

y ∼ (μ_y, V_yy)
x ∼ (μ_x, V_xx)
cov(x, y) = V_xy
Linear Transformation, Ay
Quadratic Form, y'Qy
Without loss of generality we can assume Q is symmetric. If it is not, we can replace Q with ½(Q + Q').
Bilinear Form, x'Qy
A bilinear form can be written as a quadratic form:
$$
x'Qy = \begin{pmatrix} x' & y' \end{pmatrix}
\begin{pmatrix} 0 & \tfrac{1}{2}Q \\ \tfrac{1}{2}Q' & 0 \end{pmatrix}
\begin{pmatrix} x \\ y \end{pmatrix}
$$

where
$$
\begin{pmatrix} x \\ y \end{pmatrix} \sim
\left( \begin{pmatrix} \mu_x \\ \mu_y \end{pmatrix},
\begin{pmatrix} V_{xx} & V_{xy} \\ V_{yx} & V_{yy} \end{pmatrix} \right)
$$
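As a quick numerical check of the bilinear-form identity above (a sketch, not part of the original notes; the dimensions, x, y, and Q are arbitrary illustrations):

```python
import numpy as np

rng = np.random.default_rng(0)
p, q = 3, 4                      # dimensions chosen only for illustration
x = rng.normal(size=p)
y = rng.normal(size=q)
Q = rng.normal(size=(p, q))

# Bilinear form x'Qy
bilinear = x @ Q @ y

# Same value written as a quadratic form in the stacked vector (x', y')'
z = np.concatenate([x, y])
M = np.block([[np.zeros((p, p)), 0.5 * Q],
              [0.5 * Q.T,        np.zeros((q, q))]])
quadratic = z @ M @ z

print(np.isclose(bilinear, quadratic))   # True
```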

Direct Products

$$
A \otimes B = \begin{pmatrix}
a_{11}B & a_{12}B & \cdots & a_{1n}B \\
a_{21}B & a_{22}B & \cdots & a_{2n}B \\
\vdots  &         & \ddots & \vdots  \\
a_{m1}B & a_{m2}B & \cdots & a_{mn}B
\end{pmatrix}
$$

A: m × n, B: r × c, A ⊗ B: mr × nc
tr(A ⊗ B) = tr(A) tr(B)
rank(A ⊗ B) = rank(A) rank(B)
|A ⊗ B| = |A|^r |B|^m (for square A: m × m and B: r × r)
(A ⊗ B)' = A' ⊗ B'
(A ⊗ B)⁻¹ = A⁻¹ ⊗ B⁻¹
[A ⊗ B][C ⊗ D] = [AC] ⊗ [BD]
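A short NumPy sketch checking these direct-product properties with np.kron (the matrices are small, square, and randomly generated purely for illustration, so the determinant and inverse rules apply):

```python
import numpy as np

rng = np.random.default_rng(1)
m, r = 3, 2                       # A is m x m, B is r x r (square, for det/inverse)
A = rng.normal(size=(m, m))
B = rng.normal(size=(r, r))
C = rng.normal(size=(m, m))
D = rng.normal(size=(r, r))

K = np.kron(A, B)
assert K.shape == (m * r, m * r)                                    # (mr) x (mr)
assert np.isclose(np.trace(K), np.trace(A) * np.trace(B))           # tr(A⊗B) = tr(A) tr(B)
assert np.linalg.matrix_rank(K) == (np.linalg.matrix_rank(A)
                                    * np.linalg.matrix_rank(B))     # rank multiplies
assert np.isclose(np.linalg.det(K),
                  np.linalg.det(A) ** r * np.linalg.det(B) ** m)    # |A⊗B| = |A|^r |B|^m
assert np.allclose(K.T, np.kron(A.T, B.T))                          # transpose factors
assert np.allclose(np.linalg.inv(K),
                   np.kron(np.linalg.inv(A), np.linalg.inv(B)))     # inverse factors
assert np.allclose(K @ np.kron(C, D), np.kron(A @ C, B @ D))        # mixed product rule
print("all Kronecker identities check out")
```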



Sum of squares and quadratic forms

2-factor experiment (A, B), with a = 2 levels of factor A and b = 2 levels of factor B, and n = 3 replications of each of the ab = 4 treatment combinations.
If we let y_ijk be the observed response, and y be the vector of the y_ijk's ordered y_111, y_112, y_113, y_121, . . . , then the total sum of squares is
$$
SST = \sum_{i=1}^{a}\sum_{j=1}^{b}\sum_{k=1}^{n} y_{ijk}^2 \;-\; y_{\cdot\cdot\cdot}^2/(abn);
$$
in matrix form this is
$$
y'y - y'\bar{J}y = y'Cy,
$$
where \bar{J} = \bar{J}_a \otimes \bar{J}_b \otimes \bar{J}_n (with \bar{J}_m = J_m/m the m × m averaging matrix) and C = I - \bar{J}.
The sum of squares for A is
$$
SSA = \sum_{i=1}^{a} y_{i\cdot\cdot}^2/(nb) \;-\; y_{\cdot\cdot\cdot}^2/(abn);
$$
in matrix form this is
$$
y'(I_a \otimes \bar{J}_b \otimes \bar{J}_n)y - y'(\bar{J}_a \otimes \bar{J}_b \otimes \bar{J}_n)y
  = y'(C_a \otimes \bar{J}_b \otimes \bar{J}_n)y.
$$
The sum of squares for B is
$$
SSB = \sum_{j=1}^{b} y_{\cdot j\cdot}^2/(an) \;-\; y_{\cdot\cdot\cdot}^2/(abn);
$$
in matrix form this is
$$
y'(\bar{J}_a \otimes I_b \otimes \bar{J}_n)y - y'(\bar{J}_a \otimes \bar{J}_b \otimes \bar{J}_n)y
  = y'(\bar{J}_a \otimes C_b \otimes \bar{J}_n)y.
$$

The sum of squares for the AB interaction is
$$
SSAB = \sum_{i=1}^{a}\sum_{j=1}^{b} y_{ij\cdot}^2/n
  - \sum_{i=1}^{a} y_{i\cdot\cdot}^2/(nb)
  - \sum_{j=1}^{b} y_{\cdot j\cdot}^2/(an)
  + y_{\cdot\cdot\cdot}^2/(abn);
$$
in matrix form this is
$$
y'(I_a \otimes I_b \otimes \bar{J}_n)y - y'(I_a \otimes \bar{J}_b \otimes \bar{J}_n)y
  - y'(\bar{J}_a \otimes I_b \otimes \bar{J}_n)y + y'(\bar{J}_a \otimes \bar{J}_b \otimes \bar{J}_n)y
  = y'(C_a \otimes C_b \otimes \bar{J}_n)y.
$$
The sum of squares for error is
$$
SSE = \sum_{i=1}^{a}\sum_{j=1}^{b}\sum_{k=1}^{n} y_{ijk}^2
  - \sum_{i=1}^{a}\sum_{j=1}^{b} y_{ij\cdot}^2/n;
$$
in matrix form this is
$$
y'(I_a \otimes I_b \otimes C_n)y.
$$
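A sketch that builds these quadratic forms for the a = b = 2, n = 3 layout using the J̄_m = J_m/m and C_m = I_m - J̄_m notation above, and checks two of them against the scalar formulas; the data are simulated only for illustration:

```python
import numpy as np

a, b, n = 2, 2, 3
rng = np.random.default_rng(2)
# y ordered y_111, y_112, y_113, y_121, ... (k varies fastest, then j, then i)
y = rng.normal(loc=10.0, size=a * b * n)

def Jbar(m):
    """Averaging matrix J_m / m."""
    return np.full((m, m), 1.0 / m)

def C(m):
    """Centering matrix I_m - Jbar_m."""
    return np.eye(m) - Jbar(m)

def kron3(P, Q, R):
    return np.kron(np.kron(P, Q), R)

SSA_matrix = y @ kron3(C(a), Jbar(b), Jbar(n)) @ y
SSE_matrix = y @ kron3(np.eye(a), np.eye(b), C(n)) @ y

# Scalar versions of the same sums of squares
ycube = y.reshape(a, b, n)               # index order (i, j, k)
yi_dd = ycube.sum(axis=(1, 2))           # y_i..
yij_d = ycube.sum(axis=2)                # y_ij.
SSA_scalar = (yi_dd ** 2).sum() / (n * b) - y.sum() ** 2 / (a * b * n)
SSE_scalar = (y ** 2).sum() - (yij_d ** 2).sum() / n

print(np.isclose(SSA_matrix, SSA_scalar), np.isclose(SSE_matrix, SSE_scalar))
```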


Positive (semi-)definite matrices

Definition: A matrix P is said to be positive definite (p.d.) if for all real x ≠ 0, x'Px > 0.
Definition: A matrix P is said to be positive semi-definite (p.s.d.) if for all real x ≠ 0, x'Px ≥ 0, and for some x ≠ 0, x'Px = 0.
Definition: A matrix P is said to be non-negative definite (n.n.d.) if for all real x, x'Px ≥ 0.

If Q is n.n.d., then SS = y'Qy will be non-negative.
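For a symmetric matrix these definitions can be checked through the signs of its eigenvalues; a minimal sketch with made-up 2 × 2 examples:

```python
import numpy as np

def classify(P, tol=1e-10):
    """Classify a symmetric matrix by the signs of its eigenvalues."""
    ev = np.linalg.eigvalsh(P)
    if np.all(ev > tol):
        return "p.d."
    if np.all(ev > -tol):            # non-negative, with at least one zero
        return "p.s.d. (n.n.d.)"
    return "not n.n.d."

print(classify(np.array([[2.0, 1.0], [1.0, 2.0]])))   # p.d.
print(classify(np.array([[1.0, 1.0], [1.0, 1.0]])))   # p.s.d. (singular)
print(classify(np.array([[1.0, 2.0], [2.0, 1.0]])))   # indefinite, not n.n.d.
```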


Some useful results

1. A matrix is p.d. (n.n.d.) iff all of its leading principal submatrices are p.d. (n.n.d.). In fact, all of the principal submatrices are then p.d. (n.n.d.).
2. For P non-singular, P'AP is or is not p.(s.)d. according as A is or is not p.(s.)d.
3. The eigenvalues of a p.(s.)d. matrix are all positive (non-negative).
4. A sum of n.n.d. matrices is n.n.d., and it is p.d. if at least one of the matrices is p.d.
5. A symmetric matrix A is p.(s.)d. iff it can be written as P'P for a non-singular (singular) P.
6. P'P is p.d. if P has full column rank and is p.s.d. otherwise.
7. If V is n.n.d., then P'VP is also n.n.d.
8. A symmetric matrix A, of order n and rank r, can be written, using L an n × r matrix of rank r and D a non-singular diagonal matrix of order r, as
   (a) LL',
   (b) LL' with L real iff A is n.n.d.,
   (c) LDL' with L and D being real matrices.
   Because L has full column rank, the matrix L'L is non-singular.

9. A symmetric matrix having eigenvalues equal to zero and one is idempotent. Idempotent matrices are common when we calculate SSs (see the sketch below).
10. If A and V are symmetric and V is positive definite, then AV having eigenvalues 0 and 1 implies AV is idempotent.
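A small sketch of results 9 and 10 using the centering matrix C_n = I_n - J_n/n from the sums-of-squares slides (the choice n = 6 is arbitrary):

```python
import numpy as np

n = 6
C = np.eye(n) - np.full((n, n), 1.0 / n)   # centering matrix I_n - J_n/n

ev = np.linalg.eigvalsh(C)
print(np.allclose(np.sort(ev), [0] + [1] * (n - 1)))   # eigenvalues are 0 and 1
print(np.allclose(C @ C, C))                            # hence C is idempotent
print(np.isclose(np.trace(C), n - 1))                   # rank = trace = n - 1
```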


A look ahead

Why do we care?
Consider the problem of testing H_0: K'β = 0. What are some of the properties we would want a test statistic to have?

We would want to separate signal from noise.
Noise: (I - X(X'X)⁻X')y = (I - M_X)y
Signal: X(X'X)⁻X'y = M_X y
We would want to separate from the signal the part associated with the hypothesis.
Hyp: K'(X'X)⁻X'y = T'M_X y
We would want a scalar test statistic, a quadratic form:
SSH = y'M_X T [var(T'M_X y)]⁻¹ T'M_X y, where var(T'M_X y) = T'M_X T σ².
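A sketch of the signal/noise split; the one-way design matrix X below is a made-up example (not from the notes), and a Moore-Penrose inverse is used as the generalized inverse (X'X)⁻ since X is not of full column rank:

```python
import numpy as np

rng = np.random.default_rng(3)
# One-way layout with an intercept column, so X is not of full column rank
X = np.array([[1, 1, 0],
              [1, 1, 0],
              [1, 0, 1],
              [1, 0, 1]], dtype=float)
y = rng.normal(size=4)

XtX_ginv = np.linalg.pinv(X.T @ X)        # a generalized inverse of X'X
M_X = X @ XtX_ginv @ X.T                  # projection onto the column space of X

signal = M_X @ y                          # M_X y
noise = (np.eye(4) - M_X) @ y             # (I - M_X) y

print(np.allclose(M_X @ M_X, M_X))        # M_X is idempotent
print(np.allclose(signal + noise, y))     # the two pieces add back to y
print(np.isclose(signal @ noise, 0.0))    # and they are orthogonal
```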


Multivariate distributions

CDF = F(x) = Pr(X_i ≤ x_i, i = 1, . . . , n)
Pr(X ∈ R) = ∫_R f(x) dx
μ_k = E(X_k) = ∫ · · · ∫ x_k f(x) dx
E(g(X)) = ∫ · · · ∫ g(x) f(x) dx
μ = E(X) and V = E[(X - μ)(X - μ)']
mgf(t) = E[e^{t'X}]

See B&E ch. 1, 2, 5


Linear Transformation

E(TX) = (E(t_i'X))_i, where t_i' is the i-th row of T.
$$
E(t_i'X) = \int \cdots \int \Big(\sum_j t_{ij} x_j\Big) f(x)\,dx
         = \sum_j t_{ij} \int \cdots \int x_j f(x)\,dx
         = t_i'\mu_x,
$$
so E(TX) = Tμ_x.

Similarly, for a random matrix X with E(X) = μ_X,
E(AXB) = Aμ_X B.

$$
\begin{aligned}
\mathrm{var}(TX) &= E\left[(TX - T\mu_x)(TX - T\mu_x)'\right] \\
                 &= E\left[T(X - \mu_x)(X - \mu_x)'T'\right] \\
                 &= T\,E\left[(X - \mu_x)(X - \mu_x)'\right]T' \\
                 &= TV_{xx}T'
\end{aligned}
$$
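A Monte Carlo sketch of E(TX) = Tμ_x and var(TX) = TV_xxT'; the particular μ_x, V_xx, and T are arbitrary illustrations:

```python
import numpy as np

rng = np.random.default_rng(4)
mu_x = np.array([1.0, -2.0, 0.5])
V_xx = np.array([[2.0, 0.5, 0.0],
                 [0.5, 1.0, 0.3],
                 [0.0, 0.3, 1.5]])
T = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, -1.0]])

X = rng.multivariate_normal(mu_x, V_xx, size=200_000)   # rows are draws of X
TX = X @ T.T                                             # draws of TX

# Both differences should be near zero (Monte Carlo error only)
print(np.abs(TX.mean(axis=0) - T @ mu_x).max())                   # E(TX) = T mu_x
print(np.abs(np.cov(TX, rowvar=False) - T @ V_xx @ T.T).max())    # var(TX) = T V_xx T'
```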


Normal Distribution

Standard Normal
$$
f(z) = \phi(z) = \frac{1}{\sqrt{2\pi}}\, e^{-z^2/2}
$$
Normal (B&E 118ff)
$$
f(x; \mu, \sigma^2) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}},
\qquad \mathrm{mgf}(t) = e^{\mu t + \frac{1}{2}t^2\sigma^2}
$$
Multivariate Normal (B&E 185, 520)
Density
$$
f(x; \mu, V) = (2\pi)^{-n/2}\,|V|^{-1/2}\, e^{-\frac{1}{2}(x-\mu)'V^{-1}(x-\mu)}
$$
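A quick check of the multivariate normal density formula against scipy.stats.multivariate_normal (a sketch; μ, V, and the evaluation point are arbitrary):

```python
import numpy as np
from scipy.stats import multivariate_normal

mu = np.array([1.0, -1.0])
V = np.array([[2.0, 0.6],
              [0.6, 1.0]])
x = np.array([0.5, 0.2])

n = len(mu)
d = x - mu
density = ((2 * np.pi) ** (-n / 2) * np.linalg.det(V) ** (-0.5)
           * np.exp(-0.5 * d @ np.linalg.solve(V, d)))

print(np.isclose(density, multivariate_normal(mean=mu, cov=V).pdf(x)))   # True
```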


Moment generating function
$$
\mathrm{mgf}(t) = E(e^{t'x})
= \int \cdots \int (2\pi)^{-n/2}|V|^{-1/2}
  \exp\left[-\tfrac{1}{2}(x-\mu)'V^{-1}(x-\mu) + t'x\right] dx
$$
The term inside the exponent can be written as
$$
-\tfrac{1}{2}(x - \mu - Vt)'V^{-1}(x - \mu - Vt) + \mu't + \tfrac{1}{2}t'Vt,
$$
so
$$
\mathrm{mgf}(t) = e^{\mu't + \frac{1}{2}t'Vt}.
$$
Marginal Distribution (B&E 185)
x_1 ∼ N(μ_1, V_11)
Use the mgf of x = (x_1', x_2')', set t_2 = 0, and notice that the result is the mgf for N(μ_1, V_11).
Conditional Distribution
x_1|x_2 ∼ N(μ_1 + V_12 V_22⁻¹ (x_2 - μ_2), V_11.2), where V_11.2 = V_11 - V_12 V_22⁻¹ V_21.

Show this by using f(x|y) = f(x, y)/f(y), |V| = |V_22| |V_11.2|, and the inverse of a partitioned matrix (1.48):
$$
V^{-1} = \begin{pmatrix} 0 & 0 \\ 0 & V_{22}^{-1} \end{pmatrix}
 + \begin{pmatrix} I \\ -V_{22}^{-1}V_{21} \end{pmatrix}
   V_{11.2}^{-1}
   \begin{pmatrix} I & -V_{12}V_{22}^{-1} \end{pmatrix}
$$
(B&E 4.5, 185)

Independence: V_ij = 0 for i ≠ j.
Again use the mgf.
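A numerical sketch of the determinant factorization and the partitioned-inverse identity (the partition sizes and the covariance matrix are arbitrary illustrations):

```python
import numpy as np

rng = np.random.default_rng(5)
p1, p2 = 2, 3
L = rng.normal(size=(p1 + p2, p1 + p2))
V = L @ L.T + np.eye(p1 + p2)            # a positive definite covariance matrix

V11, V12 = V[:p1, :p1], V[:p1, p1:]
V21, V22 = V[p1:, :p1], V[p1:, p1:]
V22inv = np.linalg.inv(V22)
V112 = V11 - V12 @ V22inv @ V21          # V_11.2

# |V| = |V_22| |V_11.2|
print(np.isclose(np.linalg.det(V), np.linalg.det(V22) * np.linalg.det(V112)))

# Partitioned inverse: [[0,0],[0,V22^-1]] + [I; -V22^-1 V21] V_11.2^-1 [I, -V12 V22^-1]
left = np.vstack([np.eye(p1), -V22inv @ V21])
right = np.hstack([np.eye(p1), -V12 @ V22inv])
Vinv_block = (np.block([[np.zeros((p1, p1)), np.zeros((p1, p2))],
                        [np.zeros((p2, p1)), V22inv]])
              + left @ np.linalg.inv(V112) @ right)
print(np.allclose(Vinv_block, np.linalg.inv(V)))
```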


Central χ², F, and t

If x ∼ N(0, I_n), then u = x'x ∼ χ²_n.
E(u) = n, var(u) = 2n
Scaled SSs (SS/EMS) often have χ² distributions.
If u_1 ∼ χ²_{n_1}, u_2 ∼ χ²_{n_2}, and u_1 and u_2 are independent, then v = (u_1/n_1)/(u_2/n_2) ∼ F_{n_1, n_2}.
F-test: 1) Show the SSs are scaled χ² random variables, 2) show that they are independent, 3) the ratio of the MSs will then be an F random variable.
If x ∼ N(0, 1), u ∼ χ²_n, and x and u are independent, then t = x/√(u/n) ∼ t_n.
t-test: 1) Standardize x, 2) scale by the standard error, 3) show independence.
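A simulation sketch of the χ² and F constructions (the degrees of freedom and number of replicates are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(6)
n1, n2, reps = 5, 10, 200_000

x = rng.normal(size=(reps, n1))
u1 = (x ** 2).sum(axis=1)                # u1 = x'x ~ chi^2_{n1}
print(u1.mean(), u1.var())               # close to n1 and 2*n1

u2 = (rng.normal(size=(reps, n2)) ** 2).sum(axis=1)   # independent chi^2_{n2}
v = (u1 / n1) / (u2 / n2)                              # ~ F_{n1, n2}
print(v.mean(), n2 / (n2 - 2))           # F mean is n2/(n2-2) for n2 > 2, here 1.25
```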


B&E 271, 275ff

Non-Central

If x ∼ N(μ, I_n), then u = x'x ∼ χ²′(n, λ = ½ μ'μ).
If u_1 ∼ χ²′(n_1, λ), u_2 ∼ χ²_{n_2}, and u_1 and u_2 are independent, then v = (u_1/n_1)/(u_2/n_2) ∼ F′(n_1, n_2, λ).
Power!
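A simulation sketch for the non-central case. With the λ = ½μ'μ parameterization used here, E(u) = n + 2λ and var(u) = 2n + 8λ:

```python
import numpy as np

rng = np.random.default_rng(7)
n, reps = 4, 200_000
mu = np.array([1.0, 0.5, -0.5, 2.0])
lam = 0.5 * mu @ mu                      # lambda = (1/2) mu'mu

x = mu + rng.normal(size=(reps, n))      # x ~ N(mu, I_n)
u = (x ** 2).sum(axis=1)                 # u = x'x ~ chi^2'(n, lambda)

print(u.mean(), n + 2 * lam)             # E(u) = n + 2*lambda
print(u.var(), 2 * n + 8 * lam)          # var(u) = 2n + 8*lambda
```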


Distribution of Quadratic Forms

The mgf of x'Ax when x ∼ N(μ, V) is
$$
\mathrm{mgf}(t) = |I - 2tAV|^{-1/2}
  \exp\left\{-\tfrac{1}{2}\mu'\left[I - (I - 2tAV)^{-1}\right]V^{-1}\mu\right\}
$$
1. mgf_{x'Ax}(t) = E(e^{t x'Ax})
2. Use E[g(x)] = ∫ · · · ∫ g(x) f(x; μ, V) dx
3. Complete the square (Lemma 10; B&E 2.5):
$$
(x-\mu)'V^{-1}(x-\mu) - 2t\,x'Ax
 = (x-\mu^*)'V^{*-1}(x-\mu^*)
 + \mu'\left[I - (I - 2tAV)^{-1}\right]V^{-1}\mu,
$$
where
$$
V^* = V(I - 2tAV)^{-1} = (I - 2tVA)^{-1}V,
\qquad \mu^* = (I - 2tVA)^{-1}\mu.
$$
If A = V = I the mgf of x'Ax reduces to
$$
(1 - 2t)^{-n/2}\, e^{-\frac{1}{2}\mu'\left[1 - (1-2t)^{-1}\right]\mu},
$$


which is the mgf of χ²′(n, λ = ½ μ'μ). Furthermore, if μ = 0 this reduces to
$$
(1 - 2t)^{-n/2}.
$$
Theorem 2: When x ∼ N(μ, V), then x'Ax ∼ χ²′[r = rank(A), ½ μ'Aμ] if and only if AV is idempotent.
Proof. In the book they do it by equating the mgfs.
Sufficiency, that is, AV being idempotent implies χ²′:
1. A = LL', where L is an n × rank(A) matrix of full column rank (so L'L is non-singular). Note that A is n.n.d. because AV is idempotent and V is p.d.
2. x'Ax = (L'x)'(L'x) with L'x ∼ N(L'μ, L'VL)
3. L'VL = I_r
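A simulation sketch of Theorem 2 in the central case (μ = 0): taking A = C_n/σ² and V = σ²I_n makes AV = C_n idempotent with rank n - 1, so x'Ax should behave like χ²_{n-1}; scipy.stats.kstest is used only as an informal check (all constants are arbitrary):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
n, sigma2, reps = 8, 4.0, 50_000
C = np.eye(n) - np.full((n, n), 1.0 / n)    # centering matrix, idempotent, rank n-1

A = C / sigma2
V = sigma2 * np.eye(n)
print(np.allclose((A @ V) @ (A @ V), A @ V))            # AV is idempotent

x = rng.normal(scale=np.sqrt(sigma2), size=(reps, n))   # x ~ N(0, sigma^2 I_n)
q = np.einsum('ri,ij,rj->r', x, A, x)                   # x'Ax for each draw
print(stats.kstest(q, 'chi2', args=(n - 1,)))           # consistent with chi^2_{n-1}
```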


Moments

Mean: E(x'Ax) = tr(AV) + μ'Aμ
var(x'Ax) = 2 tr(AVAV) + 4μ'AVAμ
$$
\ln[\mathrm{mgf}(t)] = -\tfrac{1}{2}\ln|I - 2tAV|
 - \tfrac{1}{2}\mu'\left[I - (I - 2tAV)^{-1}\right]V^{-1}\mu
$$
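A simulation sketch of these moment formulas with a non-zero mean; A (symmetrized), V, and μ are arbitrary illustrations:

```python
import numpy as np

rng = np.random.default_rng(9)
n, reps = 3, 200_000
mu = np.array([1.0, -1.0, 2.0])
B = rng.normal(size=(n, n))
A = (B + B.T) / 2                         # a symmetric A
L = rng.normal(size=(n, n))
V = L @ L.T + np.eye(n)                   # a positive definite V

x = rng.multivariate_normal(mu, V, size=reps)
q = np.einsum('ri,ij,rj->r', x, A, x)     # x'Ax for each draw

mean_theory = np.trace(A @ V) + mu @ A @ mu
var_theory = 2 * np.trace(A @ V @ A @ V) + 4 * mu @ A @ V @ A @ mu
print(q.mean(), mean_theory)              # should agree up to Monte Carlo error
print(q.var(), var_theory)
```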


Independence

If x ∼ N(μ, V), then:
1. Ax and Bx are independent iff AVB' = 0
2. x'Ax and Bx are independent iff AVB' = 0
3. x'Ax and x'Bx are independent iff AVB' = AVB = 0
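A sketch illustrating condition 3 with V = I_n and two complementary coordinate projections, so AVB = 0; independence cannot be seen directly from a simulation, but the sample covariance of the two quadratic forms should be near zero, which is a necessary consequence:

```python
import numpy as np

rng = np.random.default_rng(10)
n, reps = 6, 200_000
V = np.eye(n)
A = np.diag([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])   # projection onto the first 3 coordinates
B = np.eye(n) - A                              # projection onto the last 3

print(np.allclose(A @ V @ B, 0.0))             # AVB = 0, so x'Ax and x'Bx are independent

x = rng.normal(size=(reps, n))                 # x ~ N(0, I_n)
qA = np.einsum('ri,ij,rj->r', x, A, x)
qB = np.einsum('ri,ij,rj->r', x, B, x)
print(np.cov(qA, qB)[0, 1])                    # sample covariance near zero
```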


Summary

Linear transformations
E(Ay) = Aμ
var(Ay) = AVA'
Quadratic Form
E(y'Qy) = tr(QV) + μ'Qμ
var(y'Qy) = 2 tr(QVQV) + 4μ'QVQμ
Normal Distribution
Ay ∼ Normal
Conditional Distribution
Density
y'Qy ∼ χ² if QV is idempotent
F and t distributions
Completing a square
Independence (Normal)
AVB' = 0

