
CHAPTER 6

 Maximum likelihood estimation


o Once we differentiate the log-likelihood with respect to the parameter, the resulting derivative is denoted U(θ); this is the score equation.
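 For example, for an exponential model with uncensored data the log-likelihood is l(λ) = n log λ − λ ∑Tᵢ, so the score is U(λ) = n/λ − ∑Tᵢ; setting U(λ̂) = 0 gives the MLE λ̂ = n/∑Tᵢ, which for the AML data below is 11/423 ≈ 0.026.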
 To test whether a stated parameter value fits the data, we can use the following tests.
Using the AML data (assuming no censored observations), test:
AML Tᵢ: 9, 13, 13, 18, 23, 28, 31, 34, 45, 48, 161 (∑Tᵢ = 423, n = 11)
𝐻0 : 𝜆 = 0.033
𝐻1 : 𝜆 ≠ 0.033
o Wald test (2-tail test)
$$T^* = 2\lambda_0 \sum_{i=1}^{n} T_i \sim \chi^2_{2n}$$
$$T^* = 2(0.033)(423) = 28.2 \sim \chi^2_{2(11)} = \chi^2_{22}$$
$$\chi^2_{22;\,0.975}\ (= 10.98) < T^* < \chi^2_{22;\,0.025}\ (= 36.78)$$
∴ Since the test statistic T* lies between the two critical values (inside the 95% acceptance region), we fail to reject H₀. The data are consistent with an exponential distribution with parameter λ = 0.033.
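A quick numerical check of this calculation, as a minimal sketch in Python (variable names are illustrative; it uses scipy's chi-squared quantile function):

```python
# Minimal sketch: test of H0: lambda = 0.033 for exponential (uncensored) data,
# using T* = 2 * lambda0 * sum(T_i) ~ chi-squared with 2n degrees of freedom.
from scipy.stats import chi2

times = [9, 13, 13, 18, 23, 28, 31, 34, 45, 48, 161]  # AML survival times
n = len(times)                                         # n = 11
lambda0 = 0.033                                        # hypothesised rate

t_star = 2 * lambda0 * sum(times)                      # test statistic
lower = chi2.ppf(0.025, df=2 * n)                      # ~10.98
upper = chi2.ppf(0.975, df=2 * n)                      # ~36.78

print(f"T* = {t_star:.2f}, acceptance region = ({lower:.2f}, {upper:.2f})")
# T* falls inside the acceptance region, so we fail to reject H0.
```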

o Likelihood ratio test


$$LRT(\theta) = -2\log\left(\frac{L(\lambda_0)}{L(\hat\lambda)}\right) \sim \chi^2_{1}$$
$$= -2\left(\log L(\lambda_0) - \log L(\hat\lambda)\right) = -2\log L(\lambda_0) + 2\log L(\hat\lambda)$$
$$= -2\left(n\log\lambda_0 - \lambda_0\sum_{i=1}^{n} T_i\right) + 2\left(n\log\hat\lambda - \hat\lambda\sum_{i=1}^{n} T_i\right)$$
$$= -2\left(11\log 0.033 - 0.033(423)\right) + 2\left(11\log 0.026 - 0.026(423)\right)$$
$$= 0.7379$$
$$\chi^2_{1;\,0.05} = 3.84146$$

∴ Since LRT(θ) is less than the critical value of 3.84146, we fail to reject H₀. The data are consistent with an exponential distribution with parameter λ = 0.033.
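The same calculation as a minimal Python sketch (assuming the exponential log-likelihood n log λ − λ ∑Tᵢ and the MLE λ̂ = n/∑Tᵢ; names are illustrative):

```python
# Minimal sketch: likelihood ratio test of H0: lambda = 0.033 for exponential data.
import math
from scipy.stats import chi2

times = [9, 13, 13, 18, 23, 28, 31, 34, 45, 48, 161]
n, total = len(times), sum(times)
lambda0 = 0.033
lambda_hat = n / total                              # MLE, 11/423 ~ 0.026

def loglik(lam):
    """Exponential log-likelihood for uncensored data: n*log(lam) - lam*sum(T_i)."""
    return n * math.log(lam) - lam * total

lrt = -2 * (loglik(lambda0) - loglik(lambda_hat))   # likelihood ratio statistic
critical = chi2.ppf(0.95, df=1)                     # 3.84146
print(f"LRT = {lrt:.4f}, critical value = {critical:.5f}")
# LRT < critical value, so we fail to reject H0.
```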

o Score test (1-tail test)


Suppose we want to test whether the AML data follow a Weibull distribution with α = 1 (which reduces to the exponential distribution):
𝐻0 : 𝛼 = 1
𝐻1 : 𝛼 ≠ 1
The general likelihood function is
$$L(\alpha,\lambda; t) = \prod_{i=1}^{n} f(\alpha,\lambda; t_i)^{\delta_i}\, S(\alpha,\lambda; t_i)^{1-\delta_i}$$
Since the AML (maintained) data are assumed to be uncensored, the likelihood reduces to
$$L(\alpha,\lambda; t) = \prod_{i=1}^{n} f(\alpha,\lambda; t_i)$$
From the Weibull PDF, the likelihood function is
$$L(\alpha,\lambda; t) = \prod_{i=1}^{n} \lambda\alpha(\lambda t_i)^{\alpha-1} e^{-(\lambda t_i)^{\alpha}}$$
Since $(\lambda t_i)^{\alpha-1} = e^{\ln(\lambda t_i)^{\alpha-1}} = e^{(\alpha-1)\ln(\lambda t_i)}$,
$$L(\alpha,\lambda; t) = \prod_{i=1}^{n} \lambda\alpha\, e^{(\alpha-1)\ln(\lambda t_i)}\, e^{-(\lambda t_i)^{\alpha}} = \prod_{i=1}^{n} \lambda\alpha\, e^{-(\lambda t_i)^{\alpha} + (\alpha-1)\ln(\lambda t_i)}$$
$$= \lambda^{n}\alpha^{n}\, e^{\sum_{i=1}^{n}\left(-(\lambda t_i)^{\alpha} + (\alpha-1)\ln(\lambda t_i)\right)} = \lambda^{n}\alpha^{n}\, e^{-\sum_{i=1}^{n}(\lambda t_i)^{\alpha} + (\alpha-1)\sum_{i=1}^{n}\ln(\lambda t_i)}$$
Then the log-likelihood is
$$l(\alpha,\lambda) = n\ln\lambda + n\ln\alpha - \sum_{i=1}^{n}(\lambda t_i)^{\alpha} + (\alpha-1)\sum_{i=1}^{n}\ln(\lambda t_i)$$
To get the parameter estimates, we differentiate once with respect to each parameter of interest, α and λ:
$$\frac{\partial l}{\partial\alpha} = \frac{n}{\alpha} - \sum_{i=1}^{n}(\lambda t_i)^{\alpha}\ln(\lambda t_i) + \sum_{i=1}^{n}\ln(\lambda t_i)$$
$$\frac{\partial l}{\partial\lambda} = \frac{n}{\lambda} - \alpha\sum_{i=1}^{n} t_i(\lambda t_i)^{\alpha-1} + (\alpha-1)\frac{n}{\lambda}$$
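These score functions can be checked numerically; a minimal Python sketch under the assumptions above (uncensored data, function names illustrative):

```python
# Minimal sketch: Weibull score functions dl/dalpha and dl/dlambda for uncensored data,
# matching the derivatives above.
import math

times = [9, 13, 13, 18, 23, 28, 31, 34, 45, 48, 161]
n = len(times)

def score_alpha(alpha, lam):
    """dl/dalpha = n/alpha - sum((lam*t)^alpha * ln(lam*t)) + sum(ln(lam*t))."""
    return (n / alpha
            - sum((lam * t) ** alpha * math.log(lam * t) for t in times)
            + sum(math.log(lam * t) for t in times))

def score_lambda(alpha, lam):
    """dl/dlambda = n/lam - alpha * sum(t * (lam*t)^(alpha-1)) + (alpha-1)*n/lam."""
    return (n / lam
            - alpha * sum(t * (lam * t) ** (alpha - 1) for t in times)
            + (alpha - 1) * n / lam)

# Under H0: alpha = 1, the restricted MLE of lambda is the exponential MLE n / sum(T_i),
# at which score_lambda is zero (up to rounding); score_alpha drives the score test.
lam_hat0 = n / sum(times)
print(score_alpha(1.0, lam_hat0), score_lambda(1.0, lam_hat0))
```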
To get the components of Fisher's information matrix, we differentiate the log-likelihood twice, giving
$$I_{\alpha\lambda} = -E\!\left[\frac{\partial^2 l}{\partial\alpha\,\partial\lambda}\right]$$
for the off-diagonal entry, and
$$I_{\alpha\alpha} = -E\!\left[\frac{\partial^2 l}{\partial\alpha^2}\right], \qquad I_{\lambda\lambda} = -E\!\left[\frac{\partial^2 l}{\partial\lambda^2}\right]$$
for the diagonal entries; inverting this matrix gives the variance-covariance matrix of the estimators (variances on the diagonal, covariances off the diagonal).
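The expectations above must be worked out case by case; in practice the observed information (the negative Hessian of the log-likelihood at the chosen parameter values) is often used as an approximation. A minimal Python sketch using central finite differences (the step size and evaluation point are illustrative assumptions):

```python
# Minimal sketch: observed information matrix for the Weibull log-likelihood,
# approximated by central finite differences of l(alpha, lambda).
import math

times = [9, 13, 13, 18, 23, 28, 31, 34, 45, 48, 161]
n = len(times)

def loglik(alpha, lam):
    """l = n*ln(lam) + n*ln(alpha) - sum((lam*t)^alpha) + (alpha-1)*sum(ln(lam*t))."""
    return (n * math.log(lam) + n * math.log(alpha)
            - sum((lam * t) ** alpha for t in times)
            + (alpha - 1) * sum(math.log(lam * t) for t in times))

def observed_information(alpha, lam, h=1e-4):
    """Negative Hessian of l(alpha, lambda), approximated by central differences."""
    l_aa = (loglik(alpha + h, lam) - 2 * loglik(alpha, lam) + loglik(alpha - h, lam)) / h**2
    l_ll = (loglik(alpha, lam + h) - 2 * loglik(alpha, lam) + loglik(alpha, lam - h)) / h**2
    l_al = (loglik(alpha + h, lam + h) - loglik(alpha + h, lam - h)
            - loglik(alpha - h, lam + h) + loglik(alpha - h, lam - h)) / (4 * h**2)
    return [[-l_aa, -l_al], [-l_al, -l_ll]]

# Evaluate at alpha = 1 and the exponential MLE of lambda (the restricted estimates under H0).
print(observed_information(1.0, n / sum(times)))
```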
