Introduction to Econometrics Study Set 2
Quiz 17: The Theory of Linear Regression With One Regressor
Question 21
Essay
For this question you may assume that linear combinations of normal variates are themselves normally distributed. Let $a$, $b$, and $c$ be non-zero constants.
(a) $X$ and $Y$ are independently distributed as $N(a, \sigma^2)$. What is the distribution of $(bX + cY)$?
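The result for part (a) can be checked with a quick Monte Carlo simulation. The sketch below uses illustrative values for $a$, $b$, $c$, and $\sigma$ (none are specified in the question) and compares the sample moments of $bX + cY$ with the normal distribution implied by the hint, $N\left((b+c)a,\ (b^2+c^2)\sigma^2\right)$.

```python
import numpy as np

# Illustrative parameter choices, not given in the question.
a, sigma = 2.0, 3.0
b, c = 1.5, -0.5

rng = np.random.default_rng(0)
n = 1_000_000
X = rng.normal(a, sigma, n)   # X ~ N(a, sigma^2)
Y = rng.normal(a, sigma, n)   # Y ~ N(a, sigma^2), independent of X
Z = b * X + c * Y

# Theory: Z ~ N((b + c) * a, (b^2 + c^2) * sigma^2)
print(Z.mean(), (b + c) * a)              # sample mean vs. (b + c) a
print(Z.var(), (b**2 + c**2) * sigma**2)  # sample variance vs. (b^2 + c^2) sigma^2
```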
Question 22
Essay
(Requires Appendix material) Your textbook considers various distributions such as the standard normal, $t$, $\chi^2$, and $F$ distributions, and relationships between them.
(a) Using statistical tables, give examples showing that the following relationship holds: $F_{n_1, \infty} = \dfrac{\chi^2_{n_1}}{n_1}$.
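Beyond statistical tables, the relationship can be illustrated numerically. The sketch below makes the arbitrary choice $n_1 = 5$ and uses a very large second degrees-of-freedom parameter to stand in for $\infty$, then compares the 95% quantiles of $F_{n_1, n_2}$ and $\chi^2_{n_1}/n_1$.

```python
import numpy as np

rng = np.random.default_rng(1)
n1, n2, draws = 5, 1_000_000, 200_000   # n1 = 5 is arbitrary; n2 approximates "infinity"

scaled = rng.chisquare(n1, draws) / n1  # chi2_{n1} / n1

# An F(n1, n2) variable is (chi2_{n1}/n1) / (chi2_{n2}/n2); the denominator -> 1 as n2 -> inf.
f_like = (rng.chisquare(n1, draws) / n1) / (rng.chisquare(n2, draws) / n2)

# The 95% quantiles of the two simulated distributions should nearly coincide,
# both close to the tabulated F(5, inf) 5% critical value of about 2.21.
q_scaled = np.quantile(scaled, 0.95)
q_f = np.quantile(f_like, 0.95)
print(q_scaled, q_f)
```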
Question 23
Essay
Consider the model $Y_i = \beta_1 X_i + u_i$, where the $X_i$ and the $u_i$ are mutually independent i.i.d. random variables with finite fourth moment and $E(u_i) = 0$. Let $\widehat{\beta}_1$ denote the OLS estimator of $\beta_1$. Show that

$$\sqrt{n}\left(\widehat{\beta}_1 - \beta_1\right) = \frac{\frac{\sum_{i=1}^{n} X_i u_i}{\sqrt{n}}}{\frac{\sum_{i=1}^{n} X_i^2}{n}}.$$
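The identity is purely algebraic: substituting $Y_i = \beta_1 X_i + u_i$ into $\widehat{\beta}_1 = \sum X_i Y_i / \sum X_i^2$ gives $\widehat{\beta}_1 - \beta_1 = \sum X_i u_i / \sum X_i^2$, so it holds for any sample. The sketch below verifies this numerically with illustrative distributions and $\beta_1 = 2$.

```python
import numpy as np

# Illustrative choices: beta1 = 2, X and u drawn independently.
rng = np.random.default_rng(2)
n, beta1 = 500, 2.0
X = rng.normal(1.0, 1.0, n)
u = rng.normal(0.0, 1.0, n)    # E(u_i) = 0, independent of X
Y = beta1 * X + u

beta1_hat = (X @ Y) / (X @ X)  # OLS estimator with no intercept

lhs = np.sqrt(n) * (beta1_hat - beta1)
rhs = (X @ u / np.sqrt(n)) / (X @ X / n)
print(lhs, rhs)  # identical up to floating-point error
```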
Question 24
Multiple Choice
Feasible WLS does not rely on the following condition:
Question 25
Essay
Consider the model $Y_i = \beta_1 X_i + u_i$, where $u_i = c X_i^2 e_i$ and all of the $X$'s and $e$'s are i.i.d. and distributed $N(0, 1)$.
(a) Which of the Extended Least Squares Assumptions are satisfied here? Prove your assertions.
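A simulation can help build intuition for which assumptions fail. The sketch below (with the illustrative choice $c = 1$) shows that while $E(u_i \mid X_i) = 0$, the conditional variance of $u_i$, which is $c^2 X_i^4$, grows with $|X_i|$: the errors are heteroskedastic.

```python
import numpy as np

# c = 1 is an arbitrary illustrative value.
rng = np.random.default_rng(3)
c, n = 1.0, 1_000_000
X = rng.normal(0, 1, n)
e = rng.normal(0, 1, n)
u = c * X**2 * e              # var(u | X) = c^2 * X^4

small = np.abs(X) < 0.5       # condition (approximately) on |X| small ...
large = np.abs(X) > 1.5       # ... versus |X| large

print(u[small].var(), u[large].var())  # conditional variance grows sharply with |X|
```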
Question 26
Essay
Your textbook states that an implication of the Gauss-Markov theorem is that the sample average, $\bar{Y}$, is the most efficient linear estimator of $E(Y_i)$ when $Y_1, \ldots, Y_n$ are i.i.d. with $E(Y_i) = \mu_Y$ and $\operatorname{var}(Y_i) = \sigma_Y^2$. This follows from the regression model with no slope and the fact that the OLS estimator is BLUE. Provide a proof by assuming a linear estimator in the $Y$'s, $\tilde{\mu} = \sum_{i=1}^{n} a_i Y_i$.
(a) State the condition under which this estimator is unbiased.
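The variance comparison behind this question can be illustrated directly: for i.i.d. observations, $\operatorname{var}(\tilde{\mu}) = \sigma_Y^2 \sum a_i^2$, which, subject to the weights summing to one, is minimized by $a_i = 1/n$. The sketch below compares equal weights with an arbitrary alternative set of weights that also sums to one.

```python
import numpy as np

# Illustrative values: n observations with variance sigma_Y^2.
n, sigma_Y2 = 10, 4.0

a_equal = np.full(n, 1.0 / n)          # the sample average: a_i = 1/n
a_other = np.linspace(0.05, 0.15, n)   # arbitrary alternative weights, also summing to 1

assert np.isclose(a_other.sum(), 1.0)  # unbiasedness condition: weights sum to 1
var_equal = sigma_Y2 * np.sum(a_equal**2)
var_other = sigma_Y2 * np.sum(a_other**2)
print(var_equal, var_other)            # 0.4 vs. a strictly larger value
```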
Question 27
Essay
What does the Gauss-Markov theorem prove? Without giving mathematical details, explain how the proof proceeds. What is its importance?
Question 28
Multiple Choice
The advantage of using heteroskedasticity-robust standard errors is that
Question 29
Multiple Choice
The WLS estimator is called the infeasible WLS estimator when
Question 30
Essay
One of the earlier textbooks in econometrics, first published in 1971, compared "estimation of a parameter to shooting at a target with a rifle. The bull's-eye can be taken to represent the true value of the parameter, the rifle the estimator, and each shot a particular estimate." Use this analogy to discuss small and large sample properties of estimators. How do you think the author approached the n → ∞ condition? (Depending on your view of the world, feel free to substitute guns with bow and arrow, or missile.)
Question 31
Essay
Consider the simple regression model $Y_i = \beta_0 + \beta_1 X_i + u_i$ where $X_i > 0$ for all $i$, and the conditional variance is $\operatorname{var}(u_i \mid X_i) = \theta X_i^2$ where $\theta$ is a known constant with $\theta > 0$.
(a) Write the weighted regression as $\tilde{Y}_i = \beta_0 \tilde{X}_{0i} + \beta_1 \tilde{X}_{1i} + \tilde{u}_i$. How would you construct $\tilde{Y}_i$, $\tilde{X}_{0i}$, and $\tilde{X}_{1i}$?
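One standard construction divides each observation by $X_i$ (which suffices here because $\theta$ is a known constant, so the transformed error variance is the constant $\theta$). The sketch below, with illustrative parameter values, checks by simulation that the transformed error is homoskedastic.

```python
import numpy as np

# Illustrative values for beta0, beta1, theta; X drawn positive as the question requires.
rng = np.random.default_rng(4)
n, beta0, beta1, theta = 200_000, 1.0, 2.0, 0.25

X = rng.uniform(0.5, 3.0, n)                    # X_i > 0 for all i
u = np.sqrt(theta) * X * rng.normal(0, 1, n)    # var(u_i | X_i) = theta * X_i^2
Y = beta0 + beta1 * X + u

Y_t = Y / X          # tilde Y_i
X0_t = 1.0 / X       # tilde X_0i (transformed "intercept" regressor)
X1_t = np.ones(n)    # tilde X_1i
u_t = u / X          # transformed error: variance theta, no longer depending on X

print(u_t.var())     # close to theta = 0.25
```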
Question 32
Multiple Choice
In practice, the most difficult aspect of feasible WLS estimation is
Question 33
Essay
(Requires Appendix material) State and prove the Cauchy-Schwarz Inequality.
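For reference, the statement usually intended here, for random variables $X$ and $Y$ with finite second moments (a proof can proceed from $E\left[(X - \lambda Y)^2\right] \geq 0$, minimized over $\lambda$), is:

$$\left[E(XY)\right]^2 \leq E(X^2)\, E(Y^2).$$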
Question 34
Essay
(Requires Appendix material) This question requires you to work with Chebychev's Inequality. (a) State Chebychev's Inequality.
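For reference, one standard statement of Chebychev's Inequality, for a random variable $Y$ with mean $\mu_Y$ and variance $\sigma_Y^2$:

$$\Pr\left(|Y - \mu_Y| \geq \delta\right) \leq \frac{\sigma_Y^2}{\delta^2} \quad \text{for any } \delta > 0.$$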
Question 35
Essay
"One should never bother with WLS.Using OLS with robust standard errors gives correct inference, at least asymptotically." True, false, or a bit of both? Explain carefully what the quote means and evaluate it critically.
Question 36
Essay
"I am an applied econometrician and therefore should not have to deal with econometric theory.There will be others who I leave that to.I am more interested in interpreting the estimation results." Evaluate.