
EC3303 Econometrics I

Department of Economics, NUS
Spring, 2010
JongHoon Kim

EC3303 Econometrics I

Lecturer: JongHoon KIM, AS2, 04-40   (contact via email please…)
Class: Thu, 14.00 – 16.00, LT11; starts in Week 3 (Jan 25-29)

Tutorials: Mon – Fri,  10.00 – 12.00, AS4, 01‐17


Office Hours: Mon,  13.00 – 15.00
Assessment:   Final Exam 60% + Continuous Assessment 40%
              Problem sets 20% + Midterm test 20%  (2 - 3 after the Midterm break)
Textbook:
Stock, J.H. and M.W. Watson (2006): Introduction to Econometrics, Second edition.
Boston: Pearson Addison Wesley. (HB 139 Sto 2006, CL, HSSML)
Supplementary reading:
Wooldridge, J.M. (2005): Introductory Econometrics: A Modern Approach, Third edition.
Gujarati, Damodar N. (2003): Basic Econometrics, Fourth edition. NY: McGraw-Hill.
Any statistics textbook…
Chapter I. Introduction

1. Overview - What is Econometrics? (S-W Ch1)


Reasoning and conjecture:
    Global warming / the Chinese economy in 10 yrs' time?
    Effectiveness of "caning" in the SG penal system?
    High time to buy a car or a HDB flat?

Observed and stylized facts:
    A medical case in SG, 2008: an upsurge (150 or so over 5 months) of low blood
    pressure shock cases, 7 in coma, 4 deaths…?

        data + statistical tools/methods
        Theory (Model)

Economics + Metric (Measure) = Econometrics

Definitive/quantitative questions with definitive/quantitative answers
Examples:
(a) Effect of reducing class size on elementary school education
        test score_i  vs.  class size_i        (student i)
        meaningful effect? How large? pure (distinguishable) effect?

(b) Effect of cigarette taxes on reducing smoking
        cigarette consumption_i  vs.  cigarette sales price_i
        price elasticity? other factors? reverse "causality"?
    How much can Apple price-gouge SG customers on its 4G iPhones?
        iPhone sales_i  vs.  iPhone retail price_i

(c) Forecasting future inflation rates: SG's inflation rate_2010?
    Benefits of the Casinos/Universal Theme Park at Sentosa/Marina Bay?
    How many will survive EC3303 through to the Final Exam?
(d) Explaining abrupt crime drop in 1990s in US (Levitt, Freakonomics…)
1. Innovative policing strategy
2. Increased reliance on prisons
3. Changes in crack and other drug markets
4. Aging of the population
5. Tough gun-control laws
6. Strong economy
7. Increased number of police
8. All other explanations
(increased use of capital punishment, gun buybacks, etc.)

Legalization of abortion -1973, US Supreme Court Ruling on Roe v. Wade

(d') Seeking determinants of crime rates (Levitt (1996))
        crime rate_t  vs.  incarceration rate_t        (year t)
        other factors? reverse "causality"?
(e) Understanding global warming (the effect of CO2 emission)
        Vol. North Pole ice_t  vs.  CO2 emission_t
        reversed "causality"? the true scale of the effect? …"global warming hoax"?

(f) And many, many more interesting issues awaiting…
    "H1N1 Flu pandemic hoax (scam)",
    "Renewal of the contract hosting the F1 race in SG",
Econometrics = Economics (theory) + Metric (data)

Sources of Data:
    (controlled) experiment
    observation

Typical Economic Dataset: individuals (person, firm,…), localities (city, states,…),…
    cross-sectional data: multiple entities at a given point in time
    time series data: a single entity over multiple periods in time
    panel data (longitudinal data): multiple entities over multiple periods in time
    (a small made-up illustration of the three layouts follows below)
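For concreteness, a small illustration of the three layouts (my own, not from the slides;
the entities, years, and growth figures are hypothetical), with entities i = SG, MY and
years t = 2008, 2009, and one variable, GDP growth:

    cross-sectional (t fixed at 2008):   (SG, 1.8), (MY, 4.8)
    time series (entity fixed at SG):    (2008, 1.8), (2009, -0.6)
    panel (longitudinal):                (SG, 2008, 1.8), (SG, 2009, -0.6),
                                         (MY, 2008, 4.8), (MY, 2009, -1.5)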

“Devils are in the detail(s).”
“Data is the least deceiving window toward truth.”
(provided you know how to tease them without bungling)
“Why? …Why?... Why?”
2. Review of Probabilities (S-W Ch2)        why do we care?

2.1 Probability Space

Probability Space (Sample Space), Ω
    the (imaginary) collection of the whole "outcomes" in life
Outcome, ω
    a specific happening (realization)
Event, E
    a collection of certain outcomes
    a subset of Ω (the basic unit to assign probability!)
Examples: i)   the event E of tossing a coin to "head"
          ii)  the event E of finishing today's lecture at 3.35pm sharp
          iii) the event E of the STI index "gaining" tomorrow
Events are the subsets resulting from introducing a division ("partition") of Ω.
Here, for example, E and E^c.
More than two events occurring when…

(i) a partition w/ multiple cuts (into mutually exclusive, "disjoint" events)

    E1, E2, E3 partitioning Ω             F1, F2, F3, …, Fn partitioning Ω
    An example: rolling a die into        An example: … ?
    {1,2}, {3,4}, {5,6}

Probability: A relative measure of the likelihoods of events, satisfying
    (a) 0 ≤ P(E) ≤ 1 for any E ⊆ Ω,
    (b) P(Ω) = 1, and
    (c) P(E1 ∪ E2 ∪ …) = P(E1) + P(E2) + … for disjoint E1, E2, …,
        i.e., P(∪_{i=1}^∞ E_i) = ∑_{i=1}^∞ P(E_i)

and consequently:
    E ⊆ F  ⇒  P(E) ≤ P(F),        P(E^c) = 1 – P(E),
    P(E ∪ F) = P(E) + P(F) – P(E∩F), …
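A quick numerical check of these properties (my own, not on the slide), with one roll of
a fair die: take E = {1,2,3} and F = {2,4,6}, so P(E) = P(F) = 1/2 and P(E∩F) = P({2}) = 1/6.
Then P(E ∪ F) = 1/2 + 1/2 – 1/6 = 5/6, which matches counting E ∪ F = {1,2,3,4,6} directly:
5 outcomes out of 6. Likewise P(E^c) = P({4,5,6}) = 1/2 = 1 – P(E).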
(ii) multiple (overlapping) partitions (each w/ multiple cuts)

    Two partitions of Ω: {E1, E2} and {F1, F2}

    Joint probability
        P(E1∩F1), P(E1∩F2), P(E2∩F1), P(E2∩F2)
    Marginal probability
        P(E1) and P(E2) (or likewise, P(F1), P(F2))
    Conditional probability
        P(E1|F1) = P(E1∩F1) / P(F1)
        (likewise, P(E2|F1), P(E1|F2), P(E2|F2))

From these…
    Statistical independence between E1 and F1
        "how likely E1 is to happen is oblivious of F1"
        P(E1) = P(E1|F1)   (⇔ P(E1∩F1) = P(E1)P(F1))
    Statistical independence between the two partitions
        P(E2) = P(E2|F1), P(E1) = P(E1|F2), P(E2) = P(E2|F2)

In general, with much finer partitions…?
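A small worked example (my own, not on the slide), again with one roll of a fair die:
let E1 = "even" = {2,4,6}, E2 = "odd", F1 = "at most 2" = {1,2}, F2 = {3,4,5,6}.
    Joint:        P(E1∩F1) = P({2}) = 1/6
    Marginal:     P(E1) = 1/2,  P(F1) = 1/3
    Conditional:  P(E1|F1) = (1/6)/(1/3) = 1/2
Since P(E1|F1) = P(E1), "even" and "at most 2" are statistically independent here, and
indeed P(E1∩F1) = 1/6 = P(E1)P(F1).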
How useful is it?
An example (from a German biostatistics text):
    A recently found contagious (and deadly) disease!
    You were tested positive (and diagnosed as such). Are you really infected? The prob.?
    The test is known to detect:  99 out of 100 true infected cases    = P(E1|F1)
                                  98 out of 100 true uninfected cases  = P(E2|F2)
    There is a 1/1000 chance of getting infected                       = P(F1)
    We want P(F1|E1) = ?

    If in a population of 100,000…

                          tested positive (E1)   tested negative (E2)
    infected (F1)                  99                       1            99 + 1 = 100
    not infected (F2)           1,998                  97,902            99,900 = 1,998 + 97,902

                  P(E1∩F1)       99
    P(F1|E1)  =  ----------  =  -----  ≈ 0.047
                    P(E1)       2,097
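For readers who want the 0.047 without building the 100,000-person table, here is a minimal
Python sketch (my own, not from the slides) that applies Bayes' rule directly with the
numbers assumed on the slide (sensitivity 0.99, specificity 0.98, prevalence 1/1000):

    # Bayes' rule check of the disease-testing example
    p_infected = 1 / 1000            # P(F1), prevalence
    p_pos_given_inf = 0.99           # P(E1|F1), detects 99/100 infected
    p_neg_given_uninf = 0.98         # P(E2|F2), detects 98/100 uninfected

    # P(E1) by the law of total probability
    p_pos = p_pos_given_inf * p_infected + (1 - p_neg_given_uninf) * (1 - p_infected)
    # P(F1|E1) by Bayes' rule
    p_inf_given_pos = p_pos_given_inf * p_infected / p_pos
    print(round(p_inf_given_pos, 3))   # 0.047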
2.2 Random Variables and Probability Distributions

A "random variable", Y:  "a numerical summary of a random outcome" (S-W)
    a collection of (possibly infinitely many) numbers, which
    takes on ("realizes to") one of them when a certain event happens

A "partition" → a random variable
    Partition {E, E^c} of Ω:  Y = 1 if E happens, Y = 0 if E^c happens,
    i.e., Y ~ Bernoulli(p), where p = P(E)

A "partition" → a random variable
    Partition {F1, F2, …, Fn} of Ω:  Y = y1 with F1, …, Y = yn with Fn
    Rolling a die:  n = 6.    Daily SGD vs. USD:  n = ∞
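As a concrete illustration of "a partition → a random variable", the short Python sketch
below (my own; the event and numbers are not from the slides) defines Y as the indicator of
the event E = "a fair die shows 5 or 6", so Y ~ Bernoulli(1/3):

    import random

    def draw_Y():
        omega = random.randint(1, 6)      # one outcome ω from Ω = {1, ..., 6}
        return 1 if omega >= 5 else 0     # Y realizes to 1 on E = {5, 6}, to 0 on E^c

    draws = [draw_Y() for _ in range(100_000)]
    print(sum(draws) / len(draws))        # close to p = P(E) = 1/3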
The "probability distribution of Y", PY
    the list(ing) of all probabilities attached to all possible outcomes of Y
    (= the wholesome of all probabilities of the events induced by Y)
    (= the full knowledge of Pr{a ≤ Y ≤ b} for any a, b)

An example: Y ~ Bernoulli(p) with p = P(E)
    value of Y:        0            1
    probability:     1 – p          p
                   = P(E^c)      = P(E)

Discrete random variable:    finitely (or countably) many values ("events")
    (discrete dist'n of a r.v.)
Continuous random variable:  uncountably infinitely many values ("events")
    (continuous dist'n of a r.v.)

Expressing/Describing a prob. distribution (of a r.v. Y):
    (i)   tabulation                                 feasible only in finite cases!
    (ii)  p.m.f. (probability mass function)         "pointwise probability (expression)"
          p.d.f. (probability density function)
    (iii) c.d.f. (cumulative distribution function)  "range-wise probability (expression)"
Expressing/Describing a prob. distribution (of a r.v. Y):

p.m.f. (probability mass function)      "pointwise probability (expression)"
    For each possible value x of Y:
        fY(x) = Pr{Y = x}
    only for discrete Y!
    [Figure: pmf of Bernoulli(p), with mass 1 – p at 0 and mass p at 1]

p.d.f. (probability density function)   "continuous version of pointwise probability"
    For each possible value x of Y:
        fY(x)   (≠ Pr{Y = x})
    The height of the pdf ≠ prob.  why?
    Rather,  Pr{a ≤ Y ≤ b} = ∫_a^b fY(x) dx
    [Figure: pdf of N(μ,1); the shaded area between a and b is the probability]
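To see numerically that the pdf's height is not a probability while its area is, here is a
minimal Python sketch (my own, not from the slides) for the N(μ,1) example; it assumes
scipy is available:

    import numpy as np
    from scipy.stats import norm

    mu, a, b = 0.0, -1.0, 1.0
    # crude midpoint Riemann sum for the area under the pdf between a and b
    n = 100_000
    width = (b - a) / n
    mids = a + (np.arange(n) + 0.5) * width
    area = np.sum(norm.pdf(mids, loc=mu, scale=1.0)) * width
    print(round(area, 3))                                         # ≈ 0.683
    print(round(norm.cdf(b, mu, 1.0) - norm.cdf(a, mu, 1.0), 3))  # same prob. via the cdf
    print(norm.pdf(mu, mu, 1.0))                                  # pdf height at μ (≈ 0.399), yet Pr{Y = μ} = 0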


c.d.f. (cumulative distribution function)   "range-wise probability (expression)"
    For each possible value x of Y:
        FY(x) = Pr{Y ≤ x}
              = ∑_{y ≤ x} fY(y)                     discrete Y case
                (= fY(x) + fY(x – 1) + … for integer-valued Y)
              = ∫_{–∞}^{x} fY(y) dy                 continuous Y case
    [Figure: cdf of Bernoulli(p), jumping from 0 to 1 – p at 0 and to 1 at 1]

    less intuitive than the pmf/pdf, but
    more convenient b/c always well-defined
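A short worked check (my own, not on the slide), for a fair die (fY(y) = 1/6 on 1, …, 6):
FY(3) = Pr{Y ≤ 3} = 1/6 + 1/6 + 1/6 = 1/2, with FY(x) = 0 for x < 1 and FY(x) = 1 for x ≥ 6.
Range-wise probabilities then follow from differences of the cdf, e.g.
Pr{3 < Y ≤ 5} = FY(5) – FY(3) = 5/6 – 1/2 = 1/3.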
Expectations/Moments (of a r.v. Y)

Often, we focus only on certain characteristics of the dist'n PY, e.g.,
    "the middle value of all Y outcomes",
    "the most likely value of Y",
    "how scattered the range of all probable Y values is",…

μY, the mean of Y  (= the expected value of Y)
    A measure of the center(ing) (counting in "prob") of the dist'n PY
    μY = EY = y1 fY(y1) + y2 fY(y2) + … + yk fY(yk)          discrete Y with k outcomes
            = ∑_y y fY(y)                                    (prob.'s as proper weights)
      ( = ∫_{–∞}^{∞} y fY(y) dy                              continuous version )

σ²Y, the variance of Y  (= the expected value of the "squared deviations of Y from μY")
    A measure of the dispersion (counting in "prob") of the dist'n PY
    σ²Y = Var(Y) = (y1 – μY)² fY(y1) + … + (yk – μY)² fY(yk)   discrete Y with k outcomes
                 = ∑_y (y – μY)² fY(y)
           ( = ∫_{–∞}^{∞} (y – μY)² fY(y) dy                   continuous version )
                 = E(Y – μY)²


σY = √Var(Y),  the standard deviation of Y
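A short worked example (my own, not on the slide), for Y ~ Bernoulli(p):
    μY  = EY = 0·(1 – p) + 1·p = p
    σ²Y = Var(Y) = (0 – p)²·(1 – p) + (1 – p)²·p = p(1 – p)
    σY  = √(p(1 – p))
So for a fair coin (p = 1/2): μY = 0.5, σ²Y = 0.25, σY = 0.5.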
Recall! (Important properties of μY and σ²Y): Given a r.v. Y with μY and σ²Y,
if X = aY + b for any non-random numbers a, b,
    (a) EX = E(aY + b) = aEY + b = aμY + b
    (b) Var(X) = Var(aY + b) = a² Var(Y) = a² σ²Y

Other useful (higher) moments (of a r.v. Y)
    the skewness of Y:  a measure of the asymmetry (inclination) of the dist'n PY
        E(Y – μY)³ / σ³Y
    the kurtosis of Y:  a measure of the tail thickness of the dist'n PY
        E(Y – μY)⁴ / σ⁴Y
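Properties (a) and (b) are easy to confirm by simulation. A minimal Python sketch (my own,
not from the slides), using a fair die Y (so EY = 3.5 and Var(Y) = 35/12) and X = 2Y + 3:

    import numpy as np

    rng = np.random.default_rng(0)
    y = rng.integers(1, 7, size=1_000_000)   # draws of Y, uniform on {1, ..., 6}
    x = 2 * y + 3                            # X = aY + b with a = 2, b = 3

    print(x.mean(), 2 * y.mean() + 3)        # equal by (a), both ≈ 10
    print(x.var(), 4 * y.var())              # equal by (b), both ≈ 4·(35/12) ≈ 11.67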
2.3 Multiple Random Variables ("More than one r.v.?")

Remember! A "random variable", Y:

A "partition" → a random variable
    Partition {E, E^c} of Ω:  Y = 1 if E happens, Y = 0 if E^c happens,
    i.e., Y ~ Bernoulli(p), where p = P(E)

A "partition" → a random variable
    Partition {F1, F2, …, Fn} of Ω:  Y = y1 with F1, …, Y = yn with Fn
    Rolling a die:  n = 6.    Daily SGD vs. USD:  n = ∞
