Deck 18: The Theory of Multiple Regression

Question
The multiple regression model in matrix form $\boldsymbol{Y} = \boldsymbol{X}\boldsymbol{\beta} + \boldsymbol{U}$ can also be written as

A) $Y_i = \beta_0 + \boldsymbol{X}_i'\boldsymbol{\beta} + u_i, \quad i = 1, \ldots, n$
B) $Y_i = \boldsymbol{X}_i'\boldsymbol{\beta}_i, \quad i = 1, \ldots, n$
C) $Y_i = \boldsymbol{\beta}\boldsymbol{X}_i' + u_i, \quad i = 1, \ldots, n$
D) $Y_i = \boldsymbol{X}_i'\boldsymbol{\beta} + u_i, \quad i = 1, \ldots, n$
Answer: D
Question
Minimization of $\sum_{i=1}^{n} \left( Y_i - b_0 - b_1 X_{1i} - \cdots - b_k X_{ki} \right)^2$ results in

A) $\boldsymbol{X}'\boldsymbol{Y} = \boldsymbol{X}\hat{\boldsymbol{\beta}}$
B) $\boldsymbol{X}\hat{\boldsymbol{\beta}} = \boldsymbol{0}_{k+1}$
C) $\boldsymbol{X}'(\boldsymbol{Y} - \boldsymbol{X}\hat{\boldsymbol{\beta}}) = \boldsymbol{0}_{k+1}$
D) $\boldsymbol{R}\boldsymbol{\beta} = \boldsymbol{r}$
Answer: C
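A quick numerical sketch of the first-order condition (all numbers invented for illustration): the least squares coefficients computed from the normal equations make the sample moment $\boldsymbol{X}'(\boldsymbol{Y} - \boldsymbol{X}\hat{\boldsymbol{\beta}})$ vanish.

```python
import numpy as np

# Minimizing the sum of squared residuals yields beta_hat solving the
# normal equations, so X'(Y - X beta_hat) should be (numerically) zero.
rng = np.random.default_rng(0)
n, k = 50, 2
X = np.column_stack([np.ones(n), rng.normal(size=(n, k))])  # constant + k regressors
Y = X @ np.array([1.0, 2.0, -0.5]) + rng.normal(size=n)

beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)   # (X'X)^{-1} X'Y
residual_moment = X.T @ (Y - X @ beta_hat)     # ~ 0_{k+1}
```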
Question
The multiple regression model can be written in matrix form as follows: [answer choices shown in an image; not recovered]
Answer: D
Question
The GLS estimator is defined as [answer choices shown in an image; not recovered]
Question
The difference between the central limit theorems for scalar- and vector-valued random variables is

A) that n approaches infinity in the central limit theorem for scalars only.
B) the conditions on the variances.
C) that single random variables can have an expected value but vectors cannot.
D) the homoskedasticity assumption in the former but not the latter.
Question
The OLS estimator

A) has the multivariate normal asymptotic distribution in large samples.
B) is t-distributed.
C) has the multivariate normal distribution regardless of the sample size.
D) is F-distributed.
Question
The extended least squares assumptions in the multiple regression model include four assumptions from Chapter 6 ($u_i$ has conditional mean zero; $(\boldsymbol{X}_i, Y_i), i = 1, \ldots, n$ are i.i.d. draws from their joint distribution; $\boldsymbol{X}_i$ and $u_i$ have nonzero finite fourth moments; there is no perfect multicollinearity). In addition, there are two further assumptions, one of which is

A) heteroskedasticity of the error term.
B) serial correlation of the error term.
C) homoskedasticity of the error term.
D) invertibility of the matrix of regressors.
Question
The assumption that X has full column rank implies that

A) the number of observations equals the number of regressors.
B) binary variables are absent from the list of regressors.
C) there is no perfect multicollinearity.
D) none of the regressors appear in natural logarithm form.
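A small sketch of the link between full column rank and perfect multicollinearity (the matrices are made up): adding a column that is an exact linear combination of another drops the rank of X below the number of columns.

```python
import numpy as np

# With perfect multicollinearity (third column = 2 * second column),
# X no longer has full column rank.
n = 6
x1 = np.arange(n, dtype=float)
X_ok = np.column_stack([np.ones(n), x1])             # constant + one regressor
X_bad = np.column_stack([np.ones(n), x1, 2.0 * x1])  # perfectly collinear column

rank_ok = np.linalg.matrix_rank(X_ok)    # 2 = number of columns: full column rank
rank_bad = np.linalg.matrix_rank(X_bad)  # 2 < 3 columns: rank deficient
```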
Question
The Gauss-Markov theorem for multiple regressions states that the OLS estimator

A) has the smallest variance possible for any linear estimator.
B) is BLUE if the Gauss-Markov conditions for multiple regression hold.
C) is identical to the maximum likelihood estimator.
D) is the most commonly used estimator.
Question
The heteroskedasticity-robust estimator of $\Sigma_{\sqrt{n}(\hat{\boldsymbol{\beta}} - \boldsymbol{\beta})}$ is obtained

A) from $(\boldsymbol{X}'\boldsymbol{X})^{-1}\boldsymbol{X}'\boldsymbol{U}$
B) by replacing the population moments in its definition by the identity matrix.
C) from feasible GLS estimation.
D) by replacing the population moments in its definition by sample moments.
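A hedged sketch of what "replacing population moments by sample moments" looks like for the heteroskedasticity-robust ("sandwich") covariance estimator; the data-generating process below is invented for illustration.

```python
import numpy as np

# Robust covariance of sqrt(n)(beta_hat - beta): the population moments
# E[XX'] and E[u^2 XX'] are replaced by their sample analogues.
rng = np.random.default_rng(1)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
u = rng.normal(size=n) * (1.0 + np.abs(X[:, 1]))   # heteroskedastic errors
Y = X @ np.array([1.0, 0.5]) + u

beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)
u_hat = Y - X @ beta_hat

Qxx_inv = np.linalg.inv(X.T @ X / n)               # inverse sample E[XX']
meat = (X * u_hat[:, None] ** 2).T @ X / n         # sample E[u^2 XX']
Sigma_hat = Qxx_inv @ meat @ Qxx_inv               # sandwich estimate
```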
Question
One implication of the extended least squares assumptions in the multiple regression model is that [answer choices shown in an image; not recovered]
Question
The formulation $\boldsymbol{R}\boldsymbol{\beta} = \boldsymbol{r}$ to test hypotheses

A) allows for restrictions involving both multiple regression coefficients and single regression coefficients.
B) is F-distributed in large samples.
C) allows only for restrictions involving multiple regression coefficients.
D) allows for testing linear as well as nonlinear hypotheses.
Question
In the case when the errors are homoskedastic and normally distributed, conditional on X, then [answer choices shown in an image; not recovered]
Question
A joint hypothesis that is linear in the coefficients and imposes a number of restrictions can be written as [answer choices shown in an image; not recovered]
Question
Let there be q joint hypotheses to be tested. Then the dimension of r in the expression $\boldsymbol{R}\boldsymbol{\beta} = \boldsymbol{r}$ is

A) $q \times 1$
B) $q \times (k+1)$
C) $(k+1) \times 1$
D) $q$.
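A concrete construction of $\boldsymbol{R}$ and $\boldsymbol{r}$ (the hypotheses are made up): with k = 3 regressors, jointly testing $\beta_1 = 0$ and $\beta_2 = \beta_3$ gives q = 2 restrictions, so R is $q \times (k+1)$ and r is $q \times 1$.

```python
import numpy as np

# Coefficient vector is (beta_0, beta_1, beta_2, beta_3), so k + 1 = 4 columns.
k = 3
R = np.array([[0.0, 1.0, 0.0, 0.0],    # row 1: beta_1 = 0
              [0.0, 0.0, 1.0, -1.0]])  # row 2: beta_2 - beta_3 = 0
r = np.zeros((2, 1))                   # right-hand side, q x 1

q = R.shape[0]                         # number of joint restrictions
```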
Question
The GLS assumptions include all of the following, with the exception of [answer choices shown in an image; not recovered]
Question
The linear multiple regression model can be represented in matrix notation as $\boldsymbol{Y} = \boldsymbol{X}\boldsymbol{\beta} + \boldsymbol{U}$, where $\boldsymbol{X}$ is of order $n \times (k+1)$. k represents the number of

A) regressors.
B) observations.
C) regressors excluding the "constant" regressor for the intercept.
D) unknown regression coefficients.
Question
The Gauss-Markov theorem for multiple regression proves that [answer choices shown in an image; not recovered]
Question
One of the properties of the OLS estimator is [answer choices shown in an image; not recovered]
Question
$\hat{\boldsymbol{\beta}} - \boldsymbol{\beta}$

A) cannot be calculated since the population parameter is unknown.
B) $= (\boldsymbol{X}'\boldsymbol{X})^{-1}\boldsymbol{X}'\boldsymbol{U}$
C) $= \boldsymbol{Y} - \hat{\boldsymbol{Y}}$
D) $= \boldsymbol{\beta} + (\boldsymbol{X}'\boldsymbol{X})^{-1}\boldsymbol{X}'\boldsymbol{U}$
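The identity $\hat{\boldsymbol{\beta}} - \boldsymbol{\beta} = (\boldsymbol{X}'\boldsymbol{X})^{-1}\boldsymbol{X}'\boldsymbol{U}$ can be verified numerically in a simulation where we control $\boldsymbol{\beta}$ and $\boldsymbol{U}$ (all numbers invented for illustration):

```python
import numpy as np

# Simulate Y = X beta + U with known beta and U, then check that the OLS
# sampling error beta_hat - beta equals (X'X)^{-1} X'U.
rng = np.random.default_rng(2)
n = 100
beta = np.array([2.0, -1.0])
X = np.column_stack([np.ones(n), rng.normal(size=n)])
U = rng.normal(size=n)
Y = X @ beta + U

beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)
sampling_error = np.linalg.solve(X.T @ X, X.T @ U)   # (X'X)^{-1} X'U
```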
Question
Define the GLS estimator and discuss its properties when the error covariance matrix $\Omega$ is known. Why is this estimator sometimes called infeasible GLS? What happens when $\Omega$ is unknown? What would the $\Omega$ matrix look like for the case of independent sampling with heteroskedastic errors, where $\mathrm{var}(u_i \mid \boldsymbol{X}_i) = \sigma_i^2$? Since the inverse of the error variance-covariance matrix is needed to compute the GLS estimator, find $\Omega^{-1}$. The textbook shows that the original model $\boldsymbol{Y} = \boldsymbol{X}\boldsymbol{\beta} + \boldsymbol{U}$ will be transformed into $\tilde{\boldsymbol{Y}} = \tilde{\boldsymbol{X}}\boldsymbol{\beta} + \tilde{\boldsymbol{U}}$, where $\tilde{\boldsymbol{Y}} = \boldsymbol{F}\boldsymbol{Y}$, $\tilde{\boldsymbol{X}} = \boldsymbol{F}\boldsymbol{X}$, and $\tilde{\boldsymbol{U}} = \boldsymbol{F}\boldsymbol{U}$. Find $\boldsymbol{F}$ in the above case, and describe what effect the transformation has on the original data.
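A minimal sketch of the heteroskedastic case, assuming independent sampling with $\mathrm{var}(u_i \mid \boldsymbol{X}_i) = \sigma_i^2$ (the $\sigma_i$ values below are invented): $\Omega$ is diagonal, and a transformation matrix $\boldsymbol{F} = \mathrm{diag}(1/\sigma_i)$ satisfies $\boldsymbol{F}'\boldsymbol{F} = \Omega^{-1}$, i.e. dividing each observation by its error standard deviation makes the transformed errors homoskedastic.

```python
import numpy as np

# Diagonal Omega under independent, heteroskedastic sampling, and the
# weighting matrix F that whitens the errors: F'F Omega = I.
rng = np.random.default_rng(3)
n = 8
sigma = rng.uniform(0.5, 2.0, size=n)
Omega = np.diag(sigma ** 2)      # error variance-covariance matrix
F = np.diag(1.0 / sigma)         # row i of the data gets divided by sigma_i

check = F.T @ F @ Omega          # should be ~ the identity matrix
```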
Question
Write the following four restrictions in the form $\boldsymbol{R}\boldsymbol{\beta} = \boldsymbol{r}$, where the hypotheses are to be tested simultaneously. [the four restrictions were shown in an image; not recovered]
Can you write the following restriction [shown in an image; not recovered] in the same format? Why not?
Question
In Chapter 10 of your textbook, panel data estimation was introduced. Panel data consist of observations on the same n entities at two or more time periods T. For two variables, you have $(X_{it}, Y_{it}), i = 1, \ldots, n$ and $t = 1, \ldots, T$, where n could be the U.S. states. The example in Chapter 10 used annual data from 1982 to 1988 for the fatality rate and beer taxes. Estimation by OLS, in essence, involved "stacking" the data.
(a) What would the variance-covariance matrix of the errors look like in this case if you allowed for homoskedasticity-only standard errors? What is its order? Use an example of a linear regression with one regressor of 4 U.S. states and 3 time periods.
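A sketch of what part (a) asks for, under the homoskedasticity-only assumption (the variance value is invented): stacking 4 states over 3 periods gives nT = 12 observations, so the error variance-covariance matrix is $\sigma_u^2 \boldsymbol{I}$ of order 12 × 12.

```python
import numpy as np

# 4 states x 3 time periods = 12 stacked observations; with homoskedastic,
# uncorrelated errors the variance-covariance matrix is sigma_u^2 * I_12.
n_states, T = 4, 3
sigma_u2 = 2.5                         # hypothetical error variance
V = sigma_u2 * np.eye(n_states * T)    # 12 x 12, zeros off the diagonal
```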
Question
[question stem shown in an image; not recovered]
q = 5 + 3p - 2y
q = 10 - p + 10y
p = 6y
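One way to work with this system in matrix form (assuming the unknowns are q, p, and y): move the unknowns to the left-hand side to get $\boldsymbol{A}\boldsymbol{x} = \boldsymbol{b}$ and solve.

```python
import numpy as np

# Rearranged system with x = (q, p, y):
#   q - 3p +  2y = 5
#   q +  p - 10y = 10
#        p -  6y = 0
A = np.array([[1.0, -3.0, 2.0],
              [1.0, 1.0, -10.0],
              [0.0, 1.0, -6.0]])
b = np.array([5.0, 10.0, 0.0])

q, p, y = np.linalg.solve(A, b)   # unique solution since det(A) != 0
```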
Question
The leading example of sampling schemes in econometrics that do not result in independent observations is

A) cross-sectional data.
B) experimental data.
C) the Current Population Survey.
D) when the data are sampled over time for the same entity.
Question
Using the model $\boldsymbol{Y} = \boldsymbol{X}\boldsymbol{\beta} + \boldsymbol{U}$ and the extended least squares assumptions, derive the OLS estimator $\hat{\boldsymbol{\beta}}$. Discuss the conditions under which $\boldsymbol{X}'\boldsymbol{X}$ is invertible.
Question
Consider the multiple regression model from Chapter 5, where k = 2 and the assumptions of the multiple regression model hold.
(a) Show what the X matrix and the [symbol shown in an image; not recovered] vector would look like in this case.
Question
Prove that under the extended least squares assumptions the OLS estimator $\hat{\boldsymbol{\beta}}$ is unbiased and that its variance-covariance matrix is $\sigma_u^2 (\boldsymbol{X}'\boldsymbol{X})^{-1}$.
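A Monte Carlo sketch of the two claims (not a proof; the design and all numbers are invented): with X held fixed and homoskedastic errors, the average of $\hat{\boldsymbol{\beta}}$ across replications is close to $\boldsymbol{\beta}$, and the sampling covariance is close to $\sigma_u^2 (\boldsymbol{X}'\boldsymbol{X})^{-1}$.

```python
import numpy as np

# Repeatedly redraw the errors with X fixed, estimate beta each time,
# then compare the empirical mean and covariance to the theory.
rng = np.random.default_rng(4)
n, reps, sigma_u = 50, 5000, 1.0
beta = np.array([1.0, 2.0])
X = np.column_stack([np.ones(n), rng.normal(size=n)])
XtX_inv = np.linalg.inv(X.T @ X)

draws = np.empty((reps, 2))
for rep in range(reps):
    Y = X @ beta + sigma_u * rng.normal(size=n)
    draws[rep] = XtX_inv @ X.T @ Y

bias = draws.mean(axis=0) - beta                       # ~ 0 (unbiasedness)
cov_gap = np.cov(draws.T) - sigma_u**2 * XtX_inv       # ~ 0 (variance formula)
```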
Question
Assume that the data look as follows: [data shown in an image; not recovered]
Question
The presence of correlated error terms creates problems for inference based on OLS. These can be overcome by

A) using HAC standard errors.
B) using heteroskedasticity-robust standard errors.
C) reordering the observations until the correlation disappears.
D) using homoskedasticity-only standard errors.
Question
The GLS estimator

A) is always the more efficient estimator when compared to OLS.
B) is the OLS estimator of the coefficients in a transformed model, where the errors of the transformed model satisfy the Gauss-Markov conditions.
C) cannot handle binary variables, since some of the transformations require division by one of the regressors.
D) produces identical estimates for the coefficients, but different standard errors.
Question
In order for a matrix A to have an inverse, its determinant cannot be zero. Derive the determinant of the following matrices: [matrices shown in an image; not recovered]
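The specific matrices from the question were not recovered, so here is a generic sketch of the principle with made-up 2 × 2 examples: a matrix is invertible exactly when its determinant is nonzero.

```python
import numpy as np

# det(A) = 2*3 - 1*4 = 2, so A is invertible;
# B has a second row that is twice the first, so det(B) = 0 and B is singular.
A = np.array([[2.0, 1.0],
              [4.0, 3.0]])
B = np.array([[1.0, 2.0],
              [2.0, 4.0]])

det_A = np.linalg.det(A)
det_B = np.linalg.det(B)
```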
Question
Your textbook derives the OLS estimator as $\hat{\boldsymbol{\beta}} = (\boldsymbol{X}'\boldsymbol{X})^{-1}\boldsymbol{X}'\boldsymbol{Y}$. Show that the estimator does not exist if there are fewer observations than the number of explanatory variables, including the constant. What is the rank of $\boldsymbol{X}'\boldsymbol{X}$ in this case?
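A numerical sketch of why the estimator fails in that case (dimensions invented for illustration): with n = 3 observations and k + 1 = 5 columns, $\boldsymbol{X}'\boldsymbol{X}$ is 5 × 5 but its rank is at most n = 3, so it is singular and cannot be inverted.

```python
import numpy as np

# Fewer observations than columns: rank(X'X) <= rank(X) <= n < k + 1.
rng = np.random.default_rng(5)
n, k_plus_1 = 3, 5
X = np.column_stack([np.ones(n), rng.normal(size=(n, k_plus_1 - 1))])
XtX = X.T @ X                            # 5 x 5 but rank deficient

rank_XtX = np.linalg.matrix_rank(XtX)    # 3 < 5: not invertible
```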
Question
Given the following matrices [matrices shown in an image; not recovered]
Question
For the OLS estimator $\hat{\boldsymbol{\beta}} = (\boldsymbol{X}'\boldsymbol{X})^{-1}\boldsymbol{X}'\boldsymbol{Y}$ to exist, $\boldsymbol{X}'\boldsymbol{X}$ must be invertible. This is the case when $\boldsymbol{X}$ has full rank. What is the rank of a matrix? What is the rank of the product of two matrices? Is it possible that $\boldsymbol{X}'\boldsymbol{X}$ could have rank n? What would be the rank of $\boldsymbol{X}'\boldsymbol{X}$ in the case $n < k+1$? Explain intuitively why the OLS estimator does not exist in that situation.
Question
An estimator of $\boldsymbol{\beta}$ is said to be linear if

A) it can be estimated by least squares.
B) it is a linear function of $Y_1, \ldots, Y_n$.
C) there are homoskedasticity-only errors.
D) it is a linear function of $X_1, \ldots, X_n$.
Question
Give several economic examples of how to test various joint linear hypotheses using matrix notation. Include specifications of $\boldsymbol{R}\boldsymbol{\beta} = \boldsymbol{r}$ where you test for (i) all coefficients other than the constant being zero, (ii) a subset of coefficients being zero, and (iii) equality of coefficients. Discuss the possible distributions involved in finding critical values for your hypotheses.
Question
Write an essay on the difference between the OLS estimator and the GLS estimator.
Unlock Deck
Sign up to unlock the cards in this deck!
Unlock Deck
Unlock Deck
1/38
auto play flashcards
Play
simple tutorial
Full screen (f)
exit full mode
Deck 18: The Theory of Multiple Regression
1
The multiple regression model in matrix form Y=Xβ+U\boldsymbol { Y } = \boldsymbol { X } \boldsymbol { \beta } + \boldsymbol { U } can also be written as

A) Yi=β0+Xiβ+ui,i=1,,nY _ { i } = \boldsymbol { \beta } _ { 0 } + \boldsymbol { X } _ { i } ^ { \prime } \boldsymbol { \beta } + u _ { i } , i = 1 , \ldots , n
B) Yi=Xiβi,i=1,,nY _ { i } = \boldsymbol { X } _ { i } ^ { \prime } \boldsymbol { \beta } _ { i } , i = 1 , \ldots , n
C) Yi=βXi+ui,i=1,,nY _ { i } = \boldsymbol { \beta } \boldsymbol { X } _ { i } ^ { \prime } + u _ { i } , i = 1 , \ldots , n
D) Yi=Xiβ+ui,i=1,,nY _ { i } = \boldsymbol { X } _ { i } ^ { \prime } \boldsymbol { \beta } + u _ { i } , i = 1 , \ldots , n
Yi=Xiβ+ui,i=1,,nY _ { i } = \boldsymbol { X } _ { i } ^ { \prime } \boldsymbol { \beta } + u _ { i } , i = 1 , \ldots , n
2
Minimization of i=1n(Yib0b1X1ibkXki)2\sum _ { i = 1 } ^ { n } \left( Y _ { i } - b _ { 0 } - b _ { 1 } X _ { 1 i } - \cdots - b _ { k } X _ { k i } \right) ^ { 2 } results in

A) XY=Xβ^X ^ { \prime } \boldsymbol { Y } = \boldsymbol { X } \hat { \boldsymbol { \beta } }
B) Xβ^=0k+1\boldsymbol { X } \hat { \boldsymbol { \beta } } = \mathbf { 0 } _ { k + 1 }
C) X(YXβ^)=0k+1\boldsymbol { X } ^ { \prime } ( \boldsymbol { Y } - \boldsymbol { X } \hat { \boldsymbol { \beta } } ) = \mathbf { 0 } _ { k + 1 }
D) Rβ=r\boldsymbol { R } \boldsymbol { \beta } = \boldsymbol { r } \text {. }
X(YXβ^)=0k+1\boldsymbol { X } ^ { \prime } ( \boldsymbol { Y } - \boldsymbol { X } \hat { \boldsymbol { \beta } } ) = \mathbf { 0 } _ { k + 1 }
3
The multiple regression model can be written in matrix form as follows: The multiple regression model can be written in matrix form as follows:
D
4
The GLS estimator is defined as The GLS estimator is defined as
Unlock Deck
Unlock for access to all 38 flashcards in this deck.
Unlock Deck
k this deck
5
The difference between the central limit theorems for a scalar and vector-valued random variables is

A)that n approaches infinity in the central limit theorem for scalars only.
B)the conditions on the variances.
C)that single random variables can have an expected value but vectors cannot.
D)the homoskedasticity assumption in the former but not the latter.
Unlock Deck
Unlock for access to all 38 flashcards in this deck.
Unlock Deck
k this deck
6
The OLS estimator

A)has the multivariate normal asymptotic distribution in large samples.
B)is t-distributed.
C)has the multivariate normal distribution regardless of the sample size.
D)is F-distributed.
Unlock Deck
Unlock for access to all 38 flashcards in this deck.
Unlock Deck
k this deck
7
The extended least squares assumptions in the multiple regression model include four assumptions from Chapter 6 (ui has conditional mean zero; (Xi,Yj),i=1,,n\left( u _ { i } \text { has conditional mean zero; } \left( \boldsymbol { X } _ { i } , Y _ { j } \right) , i = 1 , \ldots , n \right. are i.i.d. draws from their joint distribution; Xiand ui\boldsymbol { X } _ { i } \text {and } u _ { i } have nonzero finite fourth moments; there is no perfect multicollinearity). In addition, there are two further assumptions, one of which is

A) heteroskedasticity of the error term.
B) serial correlation of the error term.
C) homoskedasticity of the error term.
D) invertibility of the matrix of regressors.
Unlock Deck
Unlock for access to all 38 flashcards in this deck.
Unlock Deck
k this deck
8
The assumption that X has full column rank implies that

A)the number of observations equals the number of regressors.
B)binary variables are absent from the list of regressors.
C)there is no perfect multicollinearity.
D)none of the regressors appear in natural logarithm form.
Unlock Deck
Unlock for access to all 38 flashcards in this deck.
Unlock Deck
k this deck
9
The Gauss-Markov theorem for multiple regressions states that the OLS estimator

A)has the smallest variance possible for any linear estimator.
B)is BLUE if the Gauss-Markov conditions for multiple regression hold.
C)is identical to the maximum likelihood estimator.
D)is the most commonly used estimator.
Unlock Deck
Unlock for access to all 38 flashcards in this deck.
Unlock Deck
k this deck
10
The heteroskedasticity-robust estimator of n(β^β)\sum _ { \sqrt { n } ( \hat { \beta } - \beta ) } is obtained

A)  from (XX)1XU\text { from } \left( \boldsymbol { X } ^ { \prime } \boldsymbol { X } \right) ^ { - 1 } \boldsymbol { X } ^ { \prime } \boldsymbol { U }
B) by replacing the population moments in its definition by the identity matrix.
C) from feasible GLS estimation.
D) by replacing the population moments in its definition by sample moments.
Unlock Deck
Unlock for access to all 38 flashcards in this deck.
Unlock Deck
k this deck
11
One implication of the extended least squares assumptions in the multiple regression model is that One implication of the extended least squares assumptions in the multiple regression model is that
Unlock Deck
Unlock for access to all 38 flashcards in this deck.
Unlock Deck
k this deck
12
The formulation Rβ=r\boldsymbol { R } \boldsymbol { \beta } = \boldsymbol { r } to test a hypotheses

A) allows for restrictions involving both multiple regression coefficients and single regression coefficients.
B) is F -distributed in large samples.
C) allows only for restrictions involving multiple regression coefficients.
D) allows for testing linear as well as nonlinear hypotheses.
Unlock Deck
Unlock for access to all 38 flashcards in this deck.
Unlock Deck
k this deck
13
In the case when the errors are homoskedastic and normally distributed, conditional on X, then In the case when the errors are homoskedastic and normally distributed, conditional on X, then
Unlock Deck
Unlock for access to all 38 flashcards in this deck.
Unlock Deck
k this deck
14
A joint hypothesis that is linear in the coefficients and imposes a number of restrictions can be written as A joint hypothesis that is linear in the coefficients and imposes a number of restrictions can be written as
Unlock Deck
Unlock for access to all 38 flashcards in this deck.
Unlock Deck
k this deck
15
Let there be q joint hypothesis to be tested. Then the dimension of r in the expression Rβ=rR \beta = r is

A) q×1q \times 1
B) q×(k+1)q \times ( k + 1 )
C) (k+1)×1( k + 1 ) \times 1
D) q.q .
Unlock Deck
Unlock for access to all 38 flashcards in this deck.
Unlock Deck
k this deck
16
The GLS assumptions include all of the following, with the exception of The GLS assumptions include all of the following, with the exception of
Unlock Deck
Unlock for access to all 38 flashcards in this deck.
Unlock Deck
k this deck
17
The linear multiple regression model can be represented in matrix notation as Y=Xβ+U\boldsymbol { Y } = \boldsymbol { X } \boldsymbol { \beta } + \boldsymbol { U } where X is of order n x(k+1) . k represents the number of

A) regressors.
B) observations.
C) regressors excluding the "constant" regressor for the intercept.
D) unknown regression coefficients
Unlock Deck
Unlock for access to all 38 flashcards in this deck.
Unlock Deck
k this deck
18
The Gauss-Markov theorem for multiple regression proves that The Gauss-Markov theorem for multiple regression proves that
Unlock Deck
Unlock for access to all 38 flashcards in this deck.
Unlock Deck
k this deck
19
One of the properties of the OLS estimator is One of the properties of the OLS estimator is
Unlock Deck
Unlock for access to all 38 flashcards in this deck.
Unlock Deck
k this deck
20
β^β\hat { \beta } - \beta

A) cannot be calculated since the population parameter is unknown.
B) =(XX)1XU= \left( \boldsymbol { X } ^ { \prime } \boldsymbol { X } \right) ^ { - 1 } \boldsymbol { X } ^ { \prime } \boldsymbol { U }
C) =YY^= \boldsymbol { Y } - \hat { Y }
D) =β+(XX)1XU= \boldsymbol { \beta } + \left( \boldsymbol { X } ^ { \prime } \boldsymbol { X } \right) ^ { - 1 } \boldsymbol { X } ^ { \prime } \boldsymbol { U }
Unlock Deck
Unlock for access to all 38 flashcards in this deck.
Unlock Deck
k this deck
21
Define the GLS estimator and discuss its properties when $\Omega$ is known. Why is this estimator sometimes called infeasible GLS? What happens when $\Omega$ is unknown? What would the $\Omega$ matrix look like for the case of independent sampling with heteroskedastic errors, where $\mathrm{var}(u_i \mid X_i) = \sigma_i^2$? Since the inverse of the error variance-covariance matrix is needed to compute the GLS estimator, find $\Omega^{-1}$. The textbook shows that the original model $Y = X\beta + U$ will be transformed into $\tilde{Y} = \tilde{X}\beta + \tilde{U}$, where $\tilde{Y} = FY$, $\tilde{X} = FX$, and $\tilde{U} = FU$. Find $F$ in the above case, and describe what effect the transformation has on the original data.
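As a concrete illustration, here is a minimal numpy sketch of the heteroskedastic case (the data-generating process, sample size, and the choice of $\sigma_i$ proportional to the regressor are all hypothetical): it computes GLS directly from $\hat\beta^{GLS} = (X'\Omega^{-1}X)^{-1}X'\Omega^{-1}Y$ and checks that it coincides with OLS on data transformed by $F = \mathrm{diag}(1/\sigma_i)$, since $F'F = \Omega^{-1}$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.uniform(1, 5, n)])
beta = np.array([2.0, 0.5])
sigma = X[:, 1]                      # hypothetical: error sd proportional to the regressor
Y = X @ beta + sigma * rng.standard_normal(n)

# Omega is diagonal under independent sampling with heteroskedastic errors
Omega_inv = np.diag(1.0 / sigma**2)

# GLS: beta_hat = (X' Omega^-1 X)^-1 X' Omega^-1 Y
beta_gls = np.linalg.solve(X.T @ Omega_inv @ X, X.T @ Omega_inv @ Y)

# Equivalent: OLS on data transformed by F = diag(1/sigma_i), since F'F = Omega^-1.
# The transformation divides each observation by its error standard deviation.
F = np.diag(1.0 / sigma)
Xt, Yt = F @ X, F @ Y
beta_ols_transformed = np.linalg.lstsq(Xt, Yt, rcond=None)[0]
```

The transformed errors $\tilde{u}_i = u_i/\sigma_i$ have unit variance, which is why OLS on the transformed data is efficient.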
Question
Write the following four restrictions in the form $R\beta = r$, where the hypotheses are to be tested simultaneously. Can you write the following restriction in the same format? Why not?
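The original restrictions in this card were images and are not recoverable, so the numpy sketch below builds $R$ and $r$ for four hypothetical linear restrictions on $\beta = (\beta_0, \dots, \beta_4)'$: $\beta_1 = 0$, $\beta_2 = \beta_3$, $\beta_1 + \beta_4 = 1$, and $\beta_0 = 0$. Each restriction contributes one row of $R$; a nonlinear restriction such as $\beta_1\beta_2 = 1$ has no such row, which is why it cannot be written in this format.

```python
import numpy as np

# beta = (b0, b1, b2, b3, b4)'; four hypothetical linear restrictions:
#   b1 = 0,  b2 = b3,  b1 + b4 = 1,  b0 = 0
R = np.array([
    [0, 1, 0,  0, 0],
    [0, 0, 1, -1, 0],
    [0, 1, 0,  0, 1],
    [1, 0, 0,  0, 0],
], dtype=float)
r = np.array([0.0, 0.0, 1.0, 0.0])

# A coefficient vector that satisfies all four restrictions at once:
beta = np.array([0.0, 0.0, 2.0, 2.0, 1.0])
check = R @ beta                     # equals r when the restrictions hold
```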
Question
In Chapter 10 of your textbook, panel data estimation was introduced. Panel data consist of observations on the same n entities at two or more time periods T. For two variables, you have $(X_{it}, Y_{it})$, $i = 1, \dots, n$ and $t = 1, \dots, T$, where n could be the U.S. states. The example in Chapter 10 used annual data from 1982 to 1988 for the fatality rate and beer taxes. Estimation by OLS, in essence, involved "stacking" the data.
(a) What would the variance-covariance matrix of the errors look like in this case if you allowed for homoskedasticity-only standard errors? What is its order? Use an example of a linear regression with one regressor of 4 U.S. states and 3 time periods.
Question
q = 5 + 3p - 2y
q = 10 - p + 10y
p = 6y
Question
The leading example of sampling schemes in econometrics that do not result in independent observations is

A)cross-sectional data.
B)experimental data.
C)the Current Population Survey.
D)when the data are sampled over time for the same entity.
Question
Using the model $Y = X\beta + U$ and the extended least squares assumptions, derive the OLS estimator $\hat{\beta}$. Discuss the conditions under which $X'X$ is invertible.
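A minimal numpy sketch of the result (the design and coefficients are hypothetical): it computes $\hat\beta = (X'X)^{-1}X'Y$ and verifies the first-order conditions $X'(Y - X\hat\beta) = \mathbf{0}_{k+1}$ that the minimization delivers.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 50, 2
X = np.column_stack([np.ones(n), rng.standard_normal((n, k))])  # constant + k regressors
Y = X @ np.array([1.0, -2.0, 0.5]) + rng.standard_normal(n)

# OLS: beta_hat = (X'X)^-1 X'Y; exists because X has full column rank here
beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)

# Normal equations: X'(Y - X beta_hat) = 0  (a (k+1)-vector of zeros)
residual_moment = X.T @ (Y - X @ beta_hat)
```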
Question
Consider the multiple regression model from Chapter 5, where k = 2 and the assumptions of the multiple regression model hold.
(a) Show what the X matrix and the $\beta$ vector would look like in this case.
Question
Prove that under the extended least squares assumptions the OLS estimator $\hat{\beta}$ is unbiased and that its variance-covariance matrix is $\sigma_u^2 (X'X)^{-1}$.
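The proof can be checked by simulation. This numpy sketch (a hypothetical fixed design with homoskedastic errors) draws $\hat\beta$ repeatedly and compares the Monte Carlo mean and covariance with $\beta$ and $\sigma_u^2(X'X)^{-1}$.

```python
import numpy as np

rng = np.random.default_rng(2)
n, reps = 100, 5000
X = np.column_stack([np.ones(n), rng.standard_normal(n)])  # fixed design
beta = np.array([1.0, 2.0])
sigma_u = 1.0
XtX_inv = np.linalg.inv(X.T @ X)

draws = np.empty((reps, 2))
for j in range(reps):
    Y = X @ beta + sigma_u * rng.standard_normal(n)
    draws[j] = XtX_inv @ X.T @ Y

mean_hat = draws.mean(axis=0)             # close to beta: unbiasedness
cov_hat = np.cov(draws, rowvar=False)     # close to sigma_u^2 (X'X)^-1
cov_theory = sigma_u**2 * XtX_inv
```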
Question
Assume that the data look as follows:
Question
The presence of correlated error terms creates problems for inference based on OLS. These can be overcome by

A)using HAC standard errors.
B)using heteroskedasticity-robust standard errors.
C)reordering the observations until the correlation disappears.
D)using homoskedasticity-only standard errors.
Question
The GLS estimator

A)is always the more efficient estimator when compared to OLS.
B)is the OLS estimator of the coefficients in a transformed model, where the errors of the transformed model satisfy the Gauss-Markov conditions.
C)cannot handle binary variables, since some of the transformations require division by one of the regressors.
D)produces identical estimates for the coefficients, but different standard errors.
Question
In order for a matrix A to have an inverse, its determinant cannot be zero. Derive the determinant of the following matrices:
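The matrices in this card were images and are not recoverable, so here is a numpy sketch with hypothetical $2 \times 2$ matrices: one with a nonzero determinant ($ad - bc$), and one with linearly dependent rows, whose determinant is zero and which therefore has no inverse.

```python
import numpy as np

# 2x2 rule: det([[a, b], [c, d]]) = a*d - b*c
A = np.array([[2.0, 1.0],
              [4.0, 3.0]])
det_A = np.linalg.det(A)          # 2*3 - 1*4 = 2, so A is invertible

# Rows of B are linearly dependent (second row = 2 * first row)
B = np.array([[1.0, 2.0],
              [2.0, 4.0]])
det_B = np.linalg.det(B)          # 0: B is singular, no inverse exists
```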
Question
Your textbook derives the OLS estimator as $\hat{\beta} = (X'X)^{-1}X'Y$. Show that the estimator does not exist if there are fewer observations than the number of explanatory variables, including the constant. What is the rank of X′X in this case?
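A numpy sketch of the failure (dimensions are hypothetical): with $n = 3$ observations but $k + 1 = 5$ columns, $X'X$ is $5 \times 5$ yet has rank at most $n = 3$, so it is singular and the inverse in the OLS formula does not exist.

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 3, 4                        # fewer observations than k + 1 = 5 columns
X = np.column_stack([np.ones(n), rng.standard_normal((n, k))])

XtX = X.T @ X                      # (k+1) x (k+1) = 5 x 5
rank = np.linalg.matrix_rank(XtX)  # at most n = 3 < 5, so XtX is singular
```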
Question
Given the following matrices
Question
For the OLS estimator $\hat{\beta} = (X'X)^{-1}X'Y$ to exist, $X'X$ must be invertible. This is the case when $X$ has full rank. What is the rank of a matrix? What is the rank of the product of two matrices? Is it possible that $X'X$ could have rank n? What would be the rank of $X'X$ in the case $n < k + 1$? Explain intuitively why the OLS estimator does not exist in that situation. 
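The rank facts in this card can be illustrated numerically (dimensions are hypothetical): $\mathrm{rank}(X'X) = \mathrm{rank}(X) \le \min(n, k+1)$, and duplicating a column of $X$ reduces the rank of $X'X$ below full.

```python
import numpy as np

rng = np.random.default_rng(4)
n, cols = 6, 3
X = rng.standard_normal((n, cols))

# rank(X'X) = rank(X) <= min(n, cols); here X has full column rank
rank_X = np.linalg.matrix_rank(X)
rank_XtX = np.linalg.matrix_rank(X.T @ X)

# Perfect multicollinearity: a duplicated column leaves X'X rank-deficient
X2 = np.column_stack([X, X[:, 0]])          # 4th column repeats the 1st
rank_X2tX2 = np.linalg.matrix_rank(X2.T @ X2)   # 3, not 4: singular
```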
Question
An estimator of β\beta is said to be linear if

A) it can be estimated by least squares.
B) it is a linear function of Y1,,YnY _ { 1 } , \ldots , Y _ { n }
C) there are homoskedasticity-only errors.
D) it is a linear function of X1,,XnX _ { 1 } , \ldots , X _ { n }
Question
Give several economic examples of how to test various joint linear hypotheses using matrix notation. Include specifications of $R\beta = r$ where you test for (i) all coefficients other than the constant being zero, (ii) a subset of coefficients being zero, and (iii) equality of coefficients. Talk about the possible distributions involved in finding critical values for your hypotheses.
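A minimal numpy sketch of case (i) under a hypothetical data-generating process: it forms the Wald statistic $W = (R\hat\beta - r)'[R\hat{V}R']^{-1}(R\hat\beta - r)$ with a homoskedasticity-only covariance estimate. Under the null, $W$ is asymptotically $\chi^2_q$, and $W/q$ is the F-statistic compared against $F_{q,\,n-k-1}$ critical values.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 500
X = np.column_stack([np.ones(n), rng.standard_normal((n, 2))])
beta = np.array([1.0, 0.0, 0.0])            # H0 true: both slopes are zero
Y = X @ beta + rng.standard_normal(n)

beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)
resid = Y - X @ beta_hat
s2 = resid @ resid / (n - X.shape[1])
V_hat = s2 * np.linalg.inv(X.T @ X)         # homoskedasticity-only covariance

# (i) all coefficients other than the constant are zero: R beta = 0
R = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
r = np.zeros(2)
q = R.shape[0]

diff = R @ beta_hat - r
W = diff @ np.linalg.solve(R @ V_hat @ R.T, diff)   # Wald statistic, ~ chi2_q under H0
F = W / q                                           # F-statistic version
```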
Question
Write an essay on the difference between the OLS estimator and the GLS estimator.