Matrices
2.1 Operations with Matrices
2.2 Properties of Matrix Operations
2.3 The Inverse of a Matrix
2.4 Elementary Matrices
2.5 Markov Chains
2.6 Applications of Matrix Operations
2.7 Homework 1 and 2
※ Since any real number can be expressed as a 1×1 matrix, all rules, operations, or theorems for matrices can be applied to real numbers, but not vice versa
※ Just memorize those rules that differ between matrices and real numbers
2.1 Operations with Matrices
Matrix: an m×n matrix has m rows and n columns:
$$A = [a_{ij}]_{m\times n} = \begin{bmatrix} a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\ a_{21} & a_{22} & a_{23} & \cdots & a_{2n} \\ \vdots & \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & a_{m3} & \cdots & a_{mn} \end{bmatrix}$$
Square matrix: m = n
Equal matrices: two matrices are equal if they have the same size (m × n) and the entries in corresponding positions are equal
Ex: If $A = \begin{bmatrix} a & b \\ c & d \end{bmatrix} = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} = B$, then a = 1, b = 2, c = 3, and d = 4
Matrix addition:
If $A = [a_{ij}]_{m\times n}$ and $B = [b_{ij}]_{m\times n}$,
then $A + B = [a_{ij}]_{m\times n} + [b_{ij}]_{m\times n} = [a_{ij} + b_{ij}]_{m\times n} = [c_{ij}]_{m\times n} = C$
Ex 2: Matrix addition
$$\begin{bmatrix} -1 & 2 \\ 0 & 1 \end{bmatrix} + \begin{bmatrix} 1 & 3 \\ -1 & 2 \end{bmatrix} = \begin{bmatrix} -1+1 & 2+3 \\ 0-1 & 1+2 \end{bmatrix} = \begin{bmatrix} 0 & 5 \\ -1 & 3 \end{bmatrix}, \qquad \begin{bmatrix} 1 \\ -3 \\ -2 \end{bmatrix} + \begin{bmatrix} -1 \\ 3 \\ 2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}$$
Scalar (純量) multiplication:
If $A = [a_{ij}]_{m\times n}$ and c is a constant scalar,
then $cA = [ca_{ij}]_{m\times n}$
Matrix subtraction:
$A - B = A + (-1)B$
Matrix multiplication:
If $A = [a_{ij}]_{m\times n}$ and $B = [b_{ij}]_{n\times p}$,
then $AB = [a_{ij}]_{m\times n}[b_{ij}]_{n\times p} = [c_{ij}]_{m\times p} = C$,
where $c_{ij} = \sum_{k=1}^{n} a_{ik}b_{kj} = a_{i1}b_{1j} + a_{i2}b_{2j} + \cdots + a_{in}b_{nj}$
The number of columns of A must equal the number of rows of B; if they are equal, A and B are multipliable (可乘的), and the size of C = AB is m × p
A system of linear equations can be written as a single matrix equation $Ax = b$, where A is m×n, x is n×1, and b is m×1:
$$\begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{bmatrix}$$
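The entry formula $c_{ij} = \sum_k a_{ik}b_{kj}$ can be sketched as a short routine (a minimal Python illustration; the function name `mat_mult` and the lists-of-lists representation are our own choices — the homework in Sec. 2.7 asks for a VBA version):

```python
def mat_mult(A, B):
    """Multiply an m x n matrix A by an n x p matrix B (lists of lists)."""
    m, n, p = len(A), len(B), len(B[0])
    # A and B are multipliable only if A has as many columns as B has rows
    assert all(len(row) == n for row in A), "size mismatch: A must be m x n"
    # c_ij = a_i1*b_1j + a_i2*b_2j + ... + a_in*b_nj
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(m)]

A = [[-1, 2], [0, 1]]   # 2 x 2
B = [[1, 3], [-1, 2]]   # 2 x 2
print(mat_mult(A, B))   # [[-3, 1], [-1, 2]]
```

The result of an m×n times n×p product is m×p, exactly as stated above.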
Partitioned matrices (分割矩陣):
A matrix can be partitioned into row vectors (向量):
$$A = \begin{bmatrix} a_{11} & a_{12} & a_{13} & a_{14} \\ a_{21} & a_{22} & a_{23} & a_{24} \\ a_{31} & a_{32} & a_{33} & a_{34} \end{bmatrix} = \begin{bmatrix} r_1 \\ r_2 \\ r_3 \end{bmatrix}$$
or into submatrices:
$$A = \begin{bmatrix} a_{11} & a_{12} & a_{13} & a_{14} \\ a_{21} & a_{22} & a_{23} & a_{24} \\ a_{31} & a_{32} & a_{33} & a_{34} \end{bmatrix} = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}$$
※ Partitioned matrices can be used to simplify equations or to obtain new interpretations of equations (see the next slide)
Ax is a linear combination (線性組合) of the column vectors of matrix A:
If $A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix} = \begin{bmatrix} c_1 & c_2 & \cdots & c_n \end{bmatrix}$ and $x = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}$, then
$$Ax = \begin{bmatrix} a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n \\ a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n \\ \vdots \\ a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n \end{bmatrix} = x_1\begin{bmatrix} a_{11} \\ a_{21} \\ \vdots \\ a_{m1} \end{bmatrix} + x_2\begin{bmatrix} a_{12} \\ a_{22} \\ \vdots \\ a_{m2} \end{bmatrix} + \cdots + x_n\begin{bmatrix} a_{1n} \\ a_{2n} \\ \vdots \\ a_{mn} \end{bmatrix} = x_1c_1 + x_2c_2 + \cdots + x_nc_n$$
※ Ax can be viewed as the linear combination of the column vectors of A with coefficients x₁, x₂, …, xₙ
※ You can derive the same result if you perform the matrix multiplication for A expressed in column vectors and x directly: $Ax = \begin{bmatrix} c_1 & c_2 & \cdots & c_n \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} = x_1c_1 + x_2c_2 + \cdots + x_nc_n$
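The column-combination view can also be coded directly: instead of computing row-by-row inner products, accumulate $x_j$ times the j-th column of A (a minimal Python sketch; `mat_vec` is our own name):

```python
def mat_vec(A, x):
    """Compute Ax as the linear combination x1*c1 + x2*c2 + ... + xn*cn
    of the columns c_j of A."""
    m, n = len(A), len(A[0])
    result = [0.0] * m
    for j in range(n):        # for each column c_j of A ...
        for i in range(m):    # ... add x_j * c_j to the running total
            result[i] += x[j] * A[i][j]
    return result

A = [[1, 2], [3, 4], [5, 6]]
x = [2, -1]
print(mat_vec(A, x))  # [0.0, 2.0, 4.0]
```

Here the result is 2·(1, 3, 5) + (−1)·(2, 4, 6), i.e., a weighted sum of A's columns.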
To practice the exercises in Sec. 2.1, we need to know the trace operation and the notion of diagonal matrices
The trace of a square matrix is the sum of the entries on its principal diagonal: $\mathrm{Tr}(A) = a_{11} + a_{22} + \cdots + a_{nn}$
A diagonal matrix:
$$A = \mathrm{diag}(d_1, d_2, \ldots, d_n) = \begin{bmatrix} d_1 & 0 & \cdots & 0 \\ 0 & d_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & d_n \end{bmatrix} \in M_{n\times n}$$
※ diag(d₁, d₂, …, dₙ) is the usual notation for a diagonal matrix
Keywords in Section 2.1:
equality of matrices: 相等矩陣
matrix addition: 矩陣相加
scalar multiplication: 純量積
matrix multiplication: 矩陣相乘
partitioned matrix: 分割矩陣
row vector: 列向量
column vector: 行向量
trace: 跡數
diagonal matrix: 對角矩陣
2.2 Properties of Matrix Operations
Three basic matrix operators, introduced in Sec. 2.1:
(1) matrix addition
(2) scalar multiplication
(3) matrix multiplication
Zero matrix (零矩陣):
$$0_{m\times n} = \begin{bmatrix} 0 & 0 & \cdots & 0 \\ 0 & 0 & \cdots & 0 \\ \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & 0 \end{bmatrix}_{m\times n}$$
Identity matrix of order n (單位矩陣):
$$I_n = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{bmatrix}_{n\times n}$$
Properties of matrix addition and scalar multiplication:
If A, B, C ∈ M_{m×n} and c, d are scalars, then
(1) A + B = B + A (Commutative property (交換律) of matrix addition)
(2) A + (B + C) = (A + B) + C (Associative property of matrix addition)
(3) (cd)A = c(dA)
(4) 1A = A
(5) c(A + B) = cA + cB
(6) (c + d)A = cA + dA
Notes:
All of the above properties are very similar to the counterpart properties for real numbers
Properties of zero matrices:
If A ∈ M_{m×n} and c is a scalar, then
(1) A + 0_{m×n} = A
※ So, 0_{m×n} is also called the additive identity (加法單位元素) for the set of all m×n matrices
(2) A + (−A) = 0_{m×n}
※ Thus, −A is called the additive inverse (加法反元素) of A
Notes:
All of the above properties are very similar to the counterpart properties for the real number 0
Properties of matrix multiplication:
(1) A(BC) = (AB)C (Associative property of matrix multiplication)
(2) A(B+C) = AB + AC (Distributive property of LHS matrix multiplication over matrix addition)
(3) (A+B)C = AC + BC (Distributive property of RHS matrix multiplication over matrix addition)
(4) c(AB) = (cA)B = A(cB) (Associative property of scalar and matrix multiplication)
※ For real numbers, properties (2) and (3) are the same since the order of real-number multiplication is irrelevant
※ Real-number multiplication satisfies all of the above properties, and there is in addition a commutative property for real-number multiplication, i.e., cd = dc
Properties of the identity matrix:
If A ∈ M_{m×n}, then (1) AI_n = A
(2) I_m A = A
※ The role of the real number 1 is similar to that of the identity matrix. However, 1 is unique among real numbers, while there are many identity matrices of different sizes
Ex 3: Matrix multiplication is associative
Calculate (AB)C and A(BC) for
$$A = \begin{bmatrix} 1 & -2 \\ 2 & -1 \end{bmatrix}, \quad B = \begin{bmatrix} 1 & 0 & 2 \\ 3 & -2 & 1 \end{bmatrix}, \quad C = \begin{bmatrix} -1 & 0 \\ 3 & 1 \\ 2 & 4 \end{bmatrix}$$
Sol:
$$(AB)C = \left(\begin{bmatrix} 1 & -2 \\ 2 & -1 \end{bmatrix}\begin{bmatrix} 1 & 0 & 2 \\ 3 & -2 & 1 \end{bmatrix}\right)\begin{bmatrix} -1 & 0 \\ 3 & 1 \\ 2 & 4 \end{bmatrix} = \begin{bmatrix} -5 & 4 & 0 \\ -1 & 2 & 3 \end{bmatrix}\begin{bmatrix} -1 & 0 \\ 3 & 1 \\ 2 & 4 \end{bmatrix} = \begin{bmatrix} 17 & 4 \\ 13 & 14 \end{bmatrix}$$
$$A(BC) = \begin{bmatrix} 1 & -2 \\ 2 & -1 \end{bmatrix}\left(\begin{bmatrix} 1 & 0 & 2 \\ 3 & -2 & 1 \end{bmatrix}\begin{bmatrix} -1 & 0 \\ 3 & 1 \\ 2 & 4 \end{bmatrix}\right) = \begin{bmatrix} 1 & -2 \\ 2 & -1 \end{bmatrix}\begin{bmatrix} 3 & 8 \\ -7 & 2 \end{bmatrix} = \begin{bmatrix} 17 & 4 \\ 13 & 14 \end{bmatrix}$$
Equipped with the four properties of matrix multiplication, we can prove a statement on Slide 1.35: if a homogeneous system has any nontrivial solution, this system must have infinitely many nontrivial solutions
Sketch: if x₁ is a nontrivial solution of Ax = 0, then for any real number t, A(tx₁) = t(Ax₁) = t0 = 0, so tx₁ is also a solution
Finally, since t can be any real number, it can be concluded that there are infinitely many solutions for this homogeneous system
Definition of $A^k$: repeated multiplication of a square matrix:
$$A^1 = A,\quad A^2 = AA,\quad \ldots,\quad A^k = \underbrace{AA\cdots A}_{k \text{ factors of } A}$$
Ex 9: Verify that (AB)^T and B^TA^T are equal
$$A = \begin{bmatrix} 2 & 1 & -2 \\ -1 & 0 & 3 \\ 0 & -2 & 1 \end{bmatrix}, \quad B = \begin{bmatrix} 3 & 1 \\ 2 & -1 \\ 3 & 0 \end{bmatrix}$$
Sol:
$$(AB)^T = \left(\begin{bmatrix} 2 & 1 & -2 \\ -1 & 0 & 3 \\ 0 & -2 & 1 \end{bmatrix}\begin{bmatrix} 3 & 1 \\ 2 & -1 \\ 3 & 0 \end{bmatrix}\right)^T = \begin{bmatrix} 2 & 1 \\ 6 & -1 \\ -1 & 2 \end{bmatrix}^T = \begin{bmatrix} 2 & 6 & -1 \\ 1 & -1 & 2 \end{bmatrix}$$
$$B^TA^T = \begin{bmatrix} 3 & 2 & 3 \\ 1 & -1 & 0 \end{bmatrix}\begin{bmatrix} 2 & -1 & 0 \\ 1 & 0 & -2 \\ -2 & 3 & 1 \end{bmatrix} = \begin{bmatrix} 2 & 6 & -1 \\ 1 & -1 & 2 \end{bmatrix}$$
Symmetric matrix (對稱矩陣):
A square matrix A is symmetric if A = A^T
Skew-symmetric matrix (反對稱矩陣):
A square matrix A is skew-symmetric if A^T = −A
Ex:
If $A = \begin{bmatrix} 1 & 2 & 3 \\ a & 4 & 5 \\ b & c & 6 \end{bmatrix}$ is symmetric, find a, b, c
Sol:
$A^T = \begin{bmatrix} 1 & a & b \\ 2 & 4 & c \\ 3 & 5 & 6 \end{bmatrix}$; setting A = A^T gives a = 2, b = 3, c = 5
Ex:
If $A = \begin{bmatrix} 0 & 1 & 2 \\ a & 0 & 3 \\ b & c & 0 \end{bmatrix}$ is skew-symmetric, find a, b, c
Sol:
$A^T = \begin{bmatrix} 0 & a & b \\ 1 & 0 & c \\ 2 & 3 & 0 \end{bmatrix}$; setting A^T = −A gives a = −1, b = −2, c = −3
Note: AA^T must be symmetric
Pf: $(AA^T)^T = (A^T)^T A^T = AA^T$, so AA^T is symmetric
※ The matrix A can be of any size, i.e., it is not necessary for A to be a square matrix
※ In fact, AA^T must be a square matrix
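The claim that $AA^T$ is symmetric even for a non-square A can be spot-checked numerically (a small Python sketch; `transpose` and `mat_mult` are our own helper names):

```python
def transpose(A):
    """Return A^T: rows become columns."""
    return [list(row) for row in zip(*A)]

def mat_mult(A, B):
    """Matrix product via row-by-column inner products."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[2, 1, -2], [-1, 0, 3]]        # a 2 x 3 (non-square) matrix
S = mat_mult(A, transpose(A))       # AA^T is 2 x 2, hence square
print(S)                  # [[9, -8], [-8, 10]]
print(S == transpose(S))  # True: AA^T equals its own transpose
```

A single numerical check is not a proof, of course; the one-line proof above covers every A.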
Before finishing this section, two properties will be discussed which hold for real numbers but not for matrices: the first is the commutative property of multiplication and the second is the cancellation law
Real numbers:
ab = ba (Commutative property of real-number multiplication)
Matrices:
AB ≠ BA in general. If A is m×n and B is n×p, then AB is defined, but BA is not even defined unless p = m; and even for square matrices the two products usually differ, e.g.,
$$AB = \begin{bmatrix} 1 & 3 \\ 2 & -1 \end{bmatrix}\begin{bmatrix} 2 & -1 \\ 0 & 2 \end{bmatrix} = \begin{bmatrix} 2 & 5 \\ 4 & -4 \end{bmatrix} \ne \begin{bmatrix} 0 & 7 \\ 4 & -2 \end{bmatrix} = \begin{bmatrix} 2 & -1 \\ 0 & 2 \end{bmatrix}\begin{bmatrix} 1 & 3 \\ 2 & -1 \end{bmatrix} = BA$$
Real numbers:
If ac = bc and c ≠ 0, then a = b (Cancellation law for real numbers)
Matrices:
Suppose AC = BC and C ≠ 0 (C is not a zero matrix):
(1) If C is invertible, then A = B
(2) If C is not invertible, then A = B need not hold (the cancellation law is not necessarily valid)
Ex 5: An example in which cancellation is not valid
Show that AC = BC for
$$A = \begin{bmatrix} 1 & 3 \\ 0 & 1 \end{bmatrix}, \quad B = \begin{bmatrix} 2 & 4 \\ 2 & 3 \end{bmatrix}, \quad C = \begin{bmatrix} 1 & -2 \\ -1 & 2 \end{bmatrix}$$
Sol:
$$AC = \begin{bmatrix} 1 & 3 \\ 0 & 1 \end{bmatrix}\begin{bmatrix} 1 & -2 \\ -1 & 2 \end{bmatrix} = \begin{bmatrix} -2 & 4 \\ -1 & 2 \end{bmatrix}, \qquad BC = \begin{bmatrix} 2 & 4 \\ 2 & 3 \end{bmatrix}\begin{bmatrix} 1 & -2 \\ -1 & 2 \end{bmatrix} = \begin{bmatrix} -2 & 4 \\ -1 & 2 \end{bmatrix}$$
So, although AC = BC, A ≠ B
Keywords in Section 2.2:
zero matrix: 零矩陣
identity matrix: 單位矩陣
commutative property: 交換律
associative property: 結合律
distributive property: 分配律
cancellation law: 消去法則
transpose matrix: 轉置矩陣
symmetric matrix: 對稱矩陣
skew-symmetric matrix: 反對稱矩陣
2.3 The Inverse of a Matrix
Inverse matrix (反矩陣):
Consider A M nn ,
if there exists a matrix B M nn such that AB BA I n ,
then (1) A is invertible (可逆) (or nonsingular (非奇異))
(2) B is the inverse of A
Note:
A square matrix that does not have an inverse is called
noninvertible (or singular (奇異))
※ The definition of the inverse of a matrix is similar to that of the inverse of a scalar, i.e., c · (1/c) = 1
※ Since there is no inverse (or multiplicative inverse (倒數)) for the real number 0, you can "imagine" that noninvertible matrices play a role similar to the real number 0 in some sense
Theorem 2.7: The inverse of a matrix is unique
If B and C are both inverses of the matrix A, then B = C.
Pf:
AB = I
⟹ C(AB) = CI
⟹ (CA)B = C (associative property of matrix multiplication and the property of the identity matrix)
⟹ IB = C
⟹ B = C
Consequently, the inverse of a matrix is unique.
Notes:
(1) The inverse of A is denoted by $A^{-1}$
(2) $AA^{-1} = A^{-1}A = I$
Find the inverse of a matrix by Gauss-Jordan elimination:
$$[A \mid I] \xrightarrow{\text{Gauss-Jordan elimination}} [I \mid A^{-1}]$$
Ex:
$$[A \mid I] = \left[\begin{array}{rr|rr} 1 & 4 & 1 & 0 \\ -1 & -3 & 0 & 1 \end{array}\right] \xrightarrow{A_{1,2}^{(1)},\ A_{2,1}^{(-4)}} \left[\begin{array}{rr|rr} 1 & 0 & -3 & -4 \\ 0 & 1 & 1 & 1 \end{array}\right] = [I \mid A^{-1}]$$
The two columns on the right of the bar are the solutions for $\begin{bmatrix} x_{11} \\ x_{21} \end{bmatrix}$ and $\begin{bmatrix} x_{12} \\ x_{22} \end{bmatrix}$ in $AX = I$, respectively
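The [A | I] → [I | A⁻¹] procedure can be sketched in code (a minimal Python version; the partial-pivoting step, the function name `invert`, and the 1e-12 singularity tolerance are our own choices added for numerical safety):

```python
def invert(A):
    """Invert an n x n matrix by Gauss-Jordan elimination on [A | I]."""
    n = len(A)
    # Build the augmented matrix [A | I]
    M = [list(map(float, A[i])) + [1.0 if j == i else 0.0 for j in range(n)]
         for i in range(n)]
    for col in range(n):
        # Partial pivoting: pick the row with the largest entry in this column
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        if abs(M[pivot][col]) < 1e-12:
            raise ValueError("matrix is singular (noninvertible)")
        M[col], M[pivot] = M[pivot], M[col]
        # Scale the pivot row so the pivot becomes 1
        p = M[col][col]
        M[col] = [v / p for v in M[col]]
        # Eliminate this column from every other row
        for r in range(n):
            if r != col:
                f = M[r][col]
                M[r] = [v - f * w for v, w in zip(M[r], M[col])]
    # The right half of M is now A^{-1}
    return [row[n:] for row in M]

print(invert([[1, 4], [-1, -3]]))  # [[-3.0, -4.0], [1.0, 1.0]]
```

The example reproduces the 2×2 inverse derived above.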
Ex 3: Find the inverse of the following matrix
$$A = \begin{bmatrix} 1 & -1 & 0 \\ 1 & 0 & -1 \\ -6 & 2 & 3 \end{bmatrix}$$
Sol:
$$[A \mid I] = \left[\begin{array}{rrr|rrr} 1 & -1 & 0 & 1 & 0 & 0 \\ 1 & 0 & -1 & 0 & 1 & 0 \\ -6 & 2 & 3 & 0 & 0 & 1 \end{array}\right] \xrightarrow{A_{1,2}^{(-1)},\ A_{1,3}^{(6)}} \left[\begin{array}{rrr|rrr} 1 & -1 & 0 & 1 & 0 & 0 \\ 0 & 1 & -1 & -1 & 1 & 0 \\ 0 & -4 & 3 & 6 & 0 & 1 \end{array}\right]$$
$$\xrightarrow{A_{2,1}^{(1)},\ A_{2,3}^{(4)},\ M_3^{(-1)}} \left[\begin{array}{rrr|rrr} 1 & 0 & -1 & 0 & 1 & 0 \\ 0 & 1 & -1 & -1 & 1 & 0 \\ 0 & 0 & 1 & -2 & -4 & -1 \end{array}\right] \xrightarrow{A_{3,1}^{(1)},\ A_{3,2}^{(1)}} \left[\begin{array}{rrr|rrr} 1 & 0 & 0 & -2 & -3 & -1 \\ 0 & 1 & 0 & -3 & -3 & -1 \\ 0 & 0 & 1 & -2 & -4 & -1 \end{array}\right] = [I \mid A^{-1}]$$
Check it by yourselves:
$AA^{-1} = A^{-1}A = I$
Matrix Operations in Excel
TRANSPOSE: calculate the transpose of a matrix
MMULT: matrix multiplication
MINVERSE: calculate the inverse of a matrix
MDETERM: calculate the determinant of a matrix
SUMPRODUCT: calculate the inner product of two vectors
※ For TRANSPOSE, MMULT, and MINVERSE, since the output is a matrix or a vector, we need to input the formula in a cell first, then select the output range (with the cell containing the formula located at position a11), and finally, with focus on the formula cell, press "Ctrl+Shift+Enter" to obtain the desired result
※ See "Matrix operations in Excel for Ch2.xlsx" downloaded from my website
Theorem 2.8: Properties of inverse matrices
If A is an invertible matrix, k is a positive integer, and c ≠ 0 is a scalar, then
(1) $A^{-1}$ is invertible and $(A^{-1})^{-1} = A$
(2) $A^k$ is invertible and $(A^k)^{-1} = (A^{-1})^k$
(3) $cA$ is invertible and $(cA)^{-1} = \frac{1}{c}A^{-1}$
(4) $A^T$ is invertible and $(A^T)^{-1} = (A^{-1})^T$
Pf:
(1) $A^{-1}A = I$; (2) $A^k(A^{-1})^k = I$; (3) $(cA)(\tfrac{1}{c}A^{-1}) = I$; (4) $A^T(A^{-1})^T = (A^{-1}A)^T = I$
Theorem 2.9: The inverse of a product
If A and B are invertible matrices of order n, then AB is invertible and
$$(AB)^{-1} = B^{-1}A^{-1}$$
Pf:
$(AB)(B^{-1}A^{-1}) = A(BB^{-1})A^{-1} = A(I)A^{-1} = (AI)A^{-1} = AA^{-1} = I$ (associative property of matrix multiplication)
An invertible coefficient matrix also solves a linear system directly: multiplying Ax = b on the left by $A^{-1}$ gives $Ix = A^{-1}b$, so $x = A^{-1}b$
2.4 Elementary Matrices
Elementary matrix (列基本矩陣):
An n×n matrix is called an elementary matrix if it can be obtained from the identity matrix I_n by a single elementary row operation
Three types of elementary matrices:
(1) $E_{i,j} = I_{i,j}(I)$ (interchange two rows)
(2) $E_i^{(k)} = M_i^{(k)}(I)$ $(k \ne 0)$ (multiply a row by a nonzero constant)
(3) $E_{i,j}^{(k)} = A_{i,j}^{(k)}(I)$ (add a multiple of a row to another row)
Note:
1. Perform only a single elementary row operation on the identity matrix
2. Since the identity matrix is a square matrix, elementary matrices must be square matrices
Ex 1: Elementary matrices and nonelementary matrices
(a) $\begin{bmatrix} 1 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 1 \end{bmatrix}$ — Yes $(M_2^{(3)}(I_3))$
(b) $\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}$ — No (not square)
(c) $\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix}$ — No (a row multiplication must be by a nonzero constant)
(d) $\begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{bmatrix}$ — Yes $(I_{2,3}(I_3))$
(e) $\begin{bmatrix} 1 & 0 \\ 2 & 1 \end{bmatrix}$ — Yes $(A_{1,2}^{(2)}(I_2))$
(f) $\begin{bmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & -1 \end{bmatrix}$ — No (more than a single elementary row operation is required)
$E_3 = M_3^{(1/2)}(I_3) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1/2 \end{bmatrix}$ (used in the row reduction example below)
$$A_1 = I_{1,2}(A) = E_1A = \begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} 0 & 1 & 3 & 5 \\ 1 & 3 & 0 & 2 \\ 2 & 6 & 2 & 0 \end{bmatrix} = \begin{bmatrix} 1 & 3 & 0 & 2 \\ 0 & 1 & 3 & 5 \\ 2 & 6 & 2 & 0 \end{bmatrix}$$
$$A_2 = A_{1,3}^{(-2)}(A_1) = E_2A_1 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ -2 & 0 & 1 \end{bmatrix}\begin{bmatrix} 1 & 3 & 0 & 2 \\ 0 & 1 & 3 & 5 \\ 2 & 6 & 2 & 0 \end{bmatrix} = \begin{bmatrix} 1 & 3 & 0 & 2 \\ 0 & 1 & 3 & 5 \\ 0 & 0 & 2 & -4 \end{bmatrix}$$
$$A_3 = M_3^{(1/2)}(A_2) = E_3A_2 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1/2 \end{bmatrix}\begin{bmatrix} 1 & 3 & 0 & 2 \\ 0 & 1 & 3 & 5 \\ 0 & 0 & 2 & -4 \end{bmatrix} = \begin{bmatrix} 1 & 3 & 0 & 2 \\ 0 & 1 & 3 & 5 \\ 0 & 0 & 1 & -2 \end{bmatrix} = B$$
B is a row-echelon form of A (the same row-echelon form as obtained by direct elimination), and $B = E_3E_2E_1A$, or $B = M_3^{(1/2)}(A_{1,3}^{(-2)}(I_{1,2}(A)))$
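The chain $B = E_3E_2E_1A$ can be verified numerically by multiplying out the three elementary matrices (a small Python sketch; `mat_mult` is our own helper name):

```python
def mat_mult(A, B):
    """Matrix product via row-by-column inner products."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A  = [[0, 1, 3, 5], [1, 3, 0, 2], [2, 6, 2, 0]]
E1 = [[0, 1, 0], [1, 0, 0], [0, 0, 1]]      # I_{1,2}(I): swap rows 1 and 2
E2 = [[1, 0, 0], [0, 1, 0], [-2, 0, 1]]     # A_{1,3}^{(-2)}(I)
E3 = [[1, 0, 0], [0, 1, 0], [0, 0, 0.5]]    # M_3^{(1/2)}(I)

B = mat_mult(E3, mat_mult(E2, mat_mult(E1, A)))
# B is the row-echelon form [[1, 3, 0, 2], [0, 1, 3, 5], [0, 0, 1, -2]]
```

Each left-multiplication by an elementary matrix performs exactly one row operation on the matrix to its right.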
Ex: Elementary matrices and their inverse matrices
The inverse of an elementary matrix is the elementary matrix of the reverse row operation:
(1) $E = I_{i,j}(I) \Rightarrow E^{-1} = I_{i,j}(I)$, e.g.,
$$E_1 = \begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix} = I_{1,2}(I) \Rightarrow E_1^{-1} = \begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix} = I_{1,2}(I)$$
($E_1^{-1}$ still corresponds to $I_{1,2}(I)$: interchanging the same two rows again undoes the swap)
(2) $E = M_i^{(k)}(I) \Rightarrow E^{-1} = M_i^{(1/k)}(I)$
(3) $E = A_{i,j}^{(k)}(I) \Rightarrow E^{-1} = A_{i,j}^{(-k)}(I)$ (the corresponding elementary row operation for $E^{-1}$ is still to add a multiple of a row to another row, but with the opposite multiple)
Suppose a sequence of elementary row operations reduces A to I:
$$E_{2,1}^{(-2)}E_2^{(1/2)}E_{1,2}^{(-3)}E_1^{(-1)}A = I$$
Thus
$$A = (E_1^{(-1)})^{-1}(E_{1,2}^{(-3)})^{-1}(E_2^{(1/2)})^{-1}(E_{2,1}^{(-2)})^{-1} = \begin{bmatrix} -1 & 0 \\ 0 & 1 \end{bmatrix}\begin{bmatrix} 1 & 0 \\ 3 & 1 \end{bmatrix}\begin{bmatrix} 1 & 0 \\ 0 & 2 \end{bmatrix}\begin{bmatrix} 1 & 2 \\ 0 & 1 \end{bmatrix}$$
※ Statements (2) and (3) are from Theorem 2.11, and Statements (4) and (5) are from Theorem 2.14
LU-factorization (or LU-decomposition) (LU分解):
If the n×n matrix A can be written as the product of a lower triangular matrix L and an upper triangular matrix U, then A = LU is an LU-factorization of A
Ex: a 3×3 lower triangular matrix (下三角矩陣), in which all entries above the principal diagonal are zero:
$$\begin{bmatrix} a_{11} & 0 & 0 \\ a_{21} & a_{22} & 0 \\ a_{31} & a_{32} & a_{33} \end{bmatrix}$$
Note:
If a square matrix A can be row reduced to an upper triangular matrix U using only elementary row operations that add a multiple of one row to another row below it (i.e., $A_{i,j}^{(k)}$ for $j > i$), then A has an LU-factorization:
$$E_k \cdots E_2E_1A = U$$
(U is similar to a row-echelon form matrix, except that the leading coefficients may not be 1)
$$\Rightarrow A = E_1^{-1}E_2^{-1}\cdots E_k^{-1}U \Rightarrow A = LU \quad (L = E_1^{-1}E_2^{-1}\cdots E_k^{-1})$$
※ If only $A_{i,j}^{(k)}$ for $j > i$ is applied, then $E_1^{-1}, E_2^{-1}, \ldots, E_k^{-1}$ will be lower triangular matrices
Ex 5 and 6: LU-factorization
(a) $A = \begin{bmatrix} 1 & 2 \\ 1 & 0 \end{bmatrix}$  (b) $A = \begin{bmatrix} 1 & -3 & 0 \\ 0 & 1 & 3 \\ 2 & -10 & 2 \end{bmatrix}$
Sol: (a)
$$A = \begin{bmatrix} 1 & 2 \\ 1 & 0 \end{bmatrix} \xrightarrow{A_{1,2}^{(-1)}} \begin{bmatrix} 1 & 2 \\ 0 & -2 \end{bmatrix} = U$$
$E_{1,2}^{(-1)}A = U \Rightarrow A = (E_{1,2}^{(-1)})^{-1}U = LU$, where
$$L = (E_{1,2}^{(-1)})^{-1} = \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix}$$
(b)
$$A = \begin{bmatrix} 1 & -3 & 0 \\ 0 & 1 & 3 \\ 2 & -10 & 2 \end{bmatrix} \xrightarrow{A_{1,3}^{(-2)}} \begin{bmatrix} 1 & -3 & 0 \\ 0 & 1 & 3 \\ 0 & -4 & 2 \end{bmatrix} \xrightarrow{A_{2,3}^{(4)}} \begin{bmatrix} 1 & -3 & 0 \\ 0 & 1 & 3 \\ 0 & 0 & 14 \end{bmatrix} = U$$
$E_{2,3}^{(4)}E_{1,3}^{(-2)}A = U \Rightarrow A = (E_{1,3}^{(-2)})^{-1}(E_{2,3}^{(4)})^{-1}U = LU$
※ Note that we do not perform the full G.-J. E. here; instead, only $A_{i,j}^{(k)}$ for $j > i$ is used to derive the upper triangular matrix U
※ In addition, the inverses of these elementary matrices are lower triangular matrices. Together with the fact that the product of lower triangular matrices is still a lower triangular matrix, we can derive the lower triangular matrix L in this way:
$$L = (E_{1,3}^{(-2)})^{-1}(E_{2,3}^{(4)})^{-1} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 2 & 0 & 1 \end{bmatrix}\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & -4 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 2 & -4 & 1 \end{bmatrix}$$
Solving Ax = b with an LU-factorization of A (an important application of the LU-factorization):
Ax = b. If A = LU, then LUx = b
Let y = Ux; then Ly = b
Ex 7: Solving a linear system using LU-factorization
x₁ − 3x₂ = −5
x₂ + 3x₃ = −1
2x₁ − 10x₂ + 2x₃ = −20
Sol:
$$A = \begin{bmatrix} 1 & -3 & 0 \\ 0 & 1 & 3 \\ 2 & -10 & 2 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 2 & -4 & 1 \end{bmatrix}\begin{bmatrix} 1 & -3 & 0 \\ 0 & 1 & 3 \\ 0 & 0 & 14 \end{bmatrix} = LU$$
(1) Let y = Ux, and solve Ly = b by forward substitution:
$$\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 2 & -4 & 1 \end{bmatrix}\begin{bmatrix} y_1 \\ y_2 \\ y_3 \end{bmatrix} = \begin{bmatrix} -5 \\ -1 \\ -20 \end{bmatrix} \Rightarrow y_1 = -5,\ y_2 = -1,\ y_3 = -20 - 2y_1 + 4y_2 = -20 - 2(-5) + 4(-1) = -14$$
(2) Solve Ux = y by back substitution:
$$\begin{bmatrix} 1 & -3 & 0 \\ 0 & 1 & 3 \\ 0 & 0 & 14 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} -5 \\ -1 \\ -14 \end{bmatrix}$$
So x₃ = −1, x₂ = −1 − 3x₃ = −1 − 3(−1) = 2, and x₁ = −5 + 3x₂ = −5 + 3(2) = 1
Thus, the solution is $x = \begin{bmatrix} 1 \\ 2 \\ -1 \end{bmatrix}$
※ Similar to the method of using A⁻¹ to solve systems, the LU-factorization is useful when you need to solve many systems with the same coefficient matrix: the LU-factorization is performed once and can be reused many times
※ The computational effort for the LU-factorization is almost the same as that for Gaussian elimination, so if you only need to solve one system of linear equations, just use Gaussian elimination plus back substitution, or the Gauss-Jordan elimination, directly
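The whole Ex 7 pipeline — factor once, then forward- and back-substitute — can be sketched as follows (a minimal Python version without pivoting, so it assumes the pivots encountered are nonzero; `lu_factor` and `lu_solve` are our own names):

```python
def lu_factor(A):
    """LU-factorize a square matrix using only A_{i,j}^{(k)} for j > i
    (no pivoting; assumes nonzero pivots)."""
    n = len(A)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    U = [list(map(float, row)) for row in A]
    for j in range(n):
        for i in range(j + 1, n):
            f = U[i][j] / U[j][j]   # multiplier eliminating entry (i, j)
            L[i][j] = f             # inverses of those row ops build L
            U[i] = [u - f * v for u, v in zip(U[i], U[j])]
    return L, U

def lu_solve(L, U, b):
    n = len(b)
    # Forward substitution: solve Ly = b
    y = []
    for i in range(n):
        y.append(b[i] - sum(L[i][j] * y[j] for j in range(i)))
    # Back substitution: solve Ux = y
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x

A = [[1, -3, 0], [0, 1, 3], [2, -10, 2]]
L, U = lu_factor(A)
print(lu_solve(L, U, [-5, -1, -20]))  # [1.0, 2.0, -1.0]
```

Factoring A once and calling `lu_solve` with many right-hand sides b is exactly the reuse scenario described above.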
Keywords in Section 2.4:
row elementary matrix: 列基本矩陣
row equivalent: 列等價
lower triangular matrix: 下三角矩陣
upper triangular matrix: 上三角矩陣
LU-factorization: LU分解
2.5 Markov Chains
Stochastic matrices (隨機矩陣):
{S₁, S₂, …, Sₙ} is a finite set of states (狀態集合) of a given population, based on which a stochastic matrix P is defined as follows:
1. The entry 0 ≤ p_ij ≤ 1 represents the probability that a member of the population will change from the j-th state to the i-th state
2. The sum of the entries in each column is 1
$$P = \begin{bmatrix} p_{11} & p_{12} & \cdots & p_{1n} \\ p_{21} & p_{22} & \cdots & p_{2n} \\ \vdots & \vdots & & \vdots \\ p_{n1} & p_{n2} & \cdots & p_{nn} \end{bmatrix}$$
(column j collects the transition probabilities "from S_j"; row i collects those "to S_i"; each column sums to 1)
※ A stochastic matrix must be a square matrix
※ A stochastic matrix is also known as a matrix of transition probabilities (移轉機率矩陣)
Ex 1: Stochastic and nonstochastic matrices
$$\begin{bmatrix} 1/2 & 1/3 & 1/4 \\ 1/4 & 0 & 3/4 \\ 1/4 & 2/3 & 0 \end{bmatrix} \text{ is stochastic}, \qquad \begin{bmatrix} 1/2 & 1/4 & 1/4 \\ 1/3 & 2/3 & 0 \\ 0 & 3/4 & 1/4 \end{bmatrix} \text{ is not stochastic}$$
(every column of the first matrix sums to 1; in the second matrix the rows, not the columns, sum to 1)
Ex 2: A consumer preference model
$$P = \begin{bmatrix} 0.70 & 0.15 & 0.15 \\ 0.20 & 0.80 & 0.15 \\ 0.10 & 0.05 & 0.70 \end{bmatrix} \quad \text{(columns: from A, B, None; rows: to A, B, None)}$$
$$X_{10} = P^{10}X_0 = \begin{bmatrix} 0.3329 \\ 0.4715 \\ 0.1957 \end{bmatrix} \begin{matrix} \text{A} \\ \text{B} \\ \text{None} \end{matrix} \quad \text{(after 10 years)}$$
$$X_\infty = P^\infty X_0 = \begin{bmatrix} 0.3333 \\ 0.4762 \\ 0.1905 \end{bmatrix} \begin{matrix} \text{A} \\ \text{B} \\ \text{None} \end{matrix} \quad \text{(after } \infty \text{ years)}$$
Property of a steady-state matrix (穩定狀態矩陣) $\bar{X} \equiv X_\infty$:
$$P\bar{X} = \bar{X}$$
The matrix $X_n$ eventually reaches a steady state; the limit is the steady-state matrix. Left-multiplying the steady-state matrix by the stochastic matrix reproduces the steady-state matrix itself
(Recall that a stochastic matrix P is regular if some power $P^k$ contains only positive entries)
(c) $P = \begin{bmatrix} 1/3 & 0 & 1 \\ 1/3 & 1 & 0 \\ 1/3 & 0 & 0 \end{bmatrix}$
P is not regular because every power of P has two zeros in its second column
Ex 5: Finding a steady state matrix
Find the steady state matrix $\bar{X}$ of the Markov chain whose matrix of transition probabilities is the regular matrix P, by solving $P\bar{X} = \bar{X}$ together with the condition that the entries of $\bar{X}$ sum to 1
Check: $P\bar{X} = \bar{X}$
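The steady state can also be found numerically by iterating $X_{k+1} = PX_k$ until it stops changing (a minimal Python sketch using the Ex 2 transition matrix; the fixed iteration count is an arbitrary choice of ours):

```python
def steady_state(P, x0, iters=200):
    """Iterate x <- P x. For a regular stochastic matrix P (column
    convention: columns sum to 1), the iterates converge to X-bar."""
    x = list(x0)
    n = len(P)
    for _ in range(iters):
        x = [sum(P[i][j] * x[j] for j in range(n)) for i in range(n)]
    return x

# Transition matrix of the consumer preference model (Ex 2)
P = [[0.70, 0.15, 0.15],
     [0.20, 0.80, 0.15],
     [0.10, 0.05, 0.70]]
x = steady_state(P, [1/3, 1/3, 1/3])
print([round(v, 4) for v in x])  # [0.3333, 0.4762, 0.1905]
```

Multiplying the converged x by P once more leaves it (approximately) unchanged, which is the defining property $P\bar{X} = \bar{X}$.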
Absorbing state (吸收狀態):
Consider a Markov chain with n different states {S₁, S₂, …, Sₙ}. The i-th state S_i is an absorbing state when, in the matrix of transition probabilities P, p_ii = 1, i.e., this entry on the main diagonal of P is 1 and all other entries in the i-th column of P are 0
$$P = \begin{bmatrix} 0.5 & 0 & 0 & 0 \\ 0.5 & 1 & 0 & 0 \\ 0 & 0 & 0.4 & 0.5 \\ 0 & 0 & 0.6 & 0.5 \end{bmatrix} \quad \text{(columns: from } S_1, \ldots, S_4\text{; rows: to } S_1, \ldots, S_4\text{; here } S_2 \text{ is absorbing)}$$
Ex 7: Finding steady state matrices of absorbing Markov chains
(a) $P = \begin{bmatrix} 0.4 & 0 & 0 \\ 0 & 1 & 0.5 \\ 0.6 & 0 & 0.5 \end{bmatrix}$ ※ Note that P is not regular
Use the matrix equation PX = X,
$$\begin{bmatrix} 0.4 & 0 & 0 \\ 0 & 1 & 0.5 \\ 0.6 & 0 & 0.5 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}$$
along with the equation x₁ + x₂ + x₃ = 1, to derive
$$\begin{cases} -0.6x_1 = 0 \\ 0.5x_3 = 0 \\ 0.6x_1 - 0.5x_3 = 0 \\ x_1 + x_2 + x_3 = 1 \end{cases} \Rightarrow \begin{cases} x_1 = 0 \\ x_2 = 1 \\ x_3 = 0 \end{cases}$$
※ The solution coincides with the second column of the transition probability matrix P
(b) $P = \begin{bmatrix} 0.5 & 0 & 0.2 & 0 \\ 0.2 & 1 & 0.3 & 0 \\ 0.1 & 0 & 0.4 & 0 \\ 0.2 & 0 & 0.1 & 1 \end{bmatrix}$ ※ Note that P is not regular
Use the matrix equation PX = X,
$$\begin{bmatrix} 0.5 & 0 & 0.2 & 0 \\ 0.2 & 1 & 0.3 & 0 \\ 0.1 & 0 & 0.4 & 0 \\ 0.2 & 0 & 0.1 & 1 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix}$$
along with the equation x₁ + x₂ + x₃ + x₄ = 1, to derive
$$\begin{cases} -0.5x_1 + 0.2x_3 = 0 \\ 0.2x_1 + 0.3x_3 = 0 \\ 0.1x_1 - 0.6x_3 = 0 \\ 0.2x_1 + 0.1x_3 = 0 \\ x_1 + x_2 + x_3 + x_4 = 1 \end{cases} \Rightarrow \begin{cases} x_1 = 0 \\ x_2 = 1 - t \\ x_3 = 0 \\ x_4 = t \end{cases}, \text{ where } t \in \mathbb{R}$$
※ The Markov chain has infinitely many steady state matrices (depending on different initial state vectors)
Keywords in Section 2.5:
stochastic matrix: 隨機矩陣
transition probability matrix: 移轉機率矩陣
Markov chain: 馬可夫鏈
steady state matrix: 穩定狀態矩陣
regular stochastic matrix: 正規化隨機矩陣
absorbing state and Markov chain: 吸收狀態與吸收馬可夫鏈
2.6 Applications of Matrix Operations
Least squares regression analysis (Example 10 in the textbook vs. the linear regression in EXCEL)
Read the textbook or "Applications in Ch2.pdf" downloaded from my website
See "Example 10 for regression in Ch 2.xlsx" downloaded from my website for an example
2.7 Homework 1 and 2
Homework 1: Implement the matrix multiplication and the
inverse of a matrix with VBA
For matrix multiplication, input two matrices and output the
resulting matrix (with one CommandButton)
For matrix inversion, input a matrix and output the inverse
of the matrix (with another CommandButton)
※ Learning goals:
1. How to handle input (輸入) and output (輸出) with VBA
2. Practice using If-Then-Else and For-Next statements
3. How to write and call functions (or subroutines)
※ The total for this homework is 10 points; a program that generates correct results earns all 10 points
※ You CANNOT use any worksheet functions provided by EXCEL, e.g., SUM(), TRANSPOSE(), SUMPRODUCT(), MMULT(), MINVERSE(), or MDETERM()
Homework 2: Find betas for three component stocks in S&P 500
(10 points) Solve the least squares regression problem as follows
ri,t – rf,t = αi + βi(rM,t – rf,t) + et
– ri,t and rM,t are total returns of the asset i and the market index on each
trading day t, which include both capital gains and dividend income
Adjusted closing prices of S&P 500 components can be downloaded from
the finance page of U.S. Yahoo (Use adjusted closing prices (rather than
closing prices) to compute the adjusted (or said total) return)
S&P 500 total return index (with ticker symbol XX:SPXT) can be downloaded from the market data page of The Wall Street Journal
Convert daily returns, ri,t and rM,t, into annualized returns, i.e., ri,t ×252 and
rM,t ×252, where 252 approximates the number of trading days per year
– The risk free rate rf,t is approximated by 1-month Treasury yields
U.S. Department of the Treasury: http://www.treasury.gov/resource-
center/data-chart-center/interest-rates/Pages/TextView.aspx?data=yield
– Today is assumed to be July 1st, 2018 and it is required to employ the
prior two-year historical daily returns to solve αi and βi for three firms
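For a single regressor, the least squares fit reduces to two sample moments: β is the sample covariance of the two excess-return series divided by the sample variance of the market excess return, and α is the intercept. A minimal Python sketch (the homework itself must be done on real Yahoo/WSJ data; the four-point series below is made up purely to exercise the function, and `ols_beta` is our own name):

```python
def ols_beta(excess_asset, excess_market):
    """OLS fit of r_i - r_f = alpha + beta * (r_M - r_f) + e:
    beta = Cov(x, y) / Var(x), alpha = mean(y) - beta * mean(x)."""
    x, y = excess_market, excess_asset
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    var = sum((a - mx) ** 2 for a in x) / n
    beta = cov / var
    alpha = my - beta * mx
    return alpha, beta

# Hypothetical annualized excess returns (illustration only)
market = [0.10, -0.05, 0.08, 0.02]
asset  = [0.12, -0.07, 0.10, 0.01]
alpha, beta = ols_beta(asset, market)
```

The same formulas follow from solving the 2×2 normal equations of the regression, which ties this application back to the matrix methods of this chapter.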
Bonus: construct a portfolio r_p = w₁r₁ + w₂r₂ + w₃r₃ to replicate the expected return and beta of the market index in the prior two years, i.e., solve for w₁, w₂, and w₃ in the following system (1 point):
$$\begin{cases} E(r_p) = w_1E(r_1) + w_2E(r_2) + w_3E(r_3) = E(r_M) \\ \beta_p = w_1\beta_1 + w_2\beta_2 + w_3\beta_3 = \beta_M\ (= 1) \\ w_1 + w_2 + w_3 = 1 \end{cases}$$
(Note that the expectations are approximated by the sample averages)
– In-sample test (樣本內測試) (1 point): analyze the time series of r_p