Introduction to Econometrics, 3rd Edition, by James Stock and Mark Watson
ISBN: 978-9352863501
Exercise 7
Consider the regression model $Y = X\beta + U$. Partition $X$ as $[X_1 \;\; X_2]$ and $\beta$ as $\begin{bmatrix} \beta_1 \\ \beta_2 \end{bmatrix}$, where $X_1$ has $k_1$ columns and $X_2$ has $k_2$ columns. Suppose that $X_2'Y = 0_{k_2 \times 1}$. Let $R = [I_{k_1} \;\; 0_{k_1 \times k_2}]$.

a. Show that $\hat{\beta}'(X'X)\hat{\beta} = (R\hat{\beta})'\left[R(X'X)^{-1}R'\right]^{-1}(R\hat{\beta})$.

b. Consider the regression described in Equation (12.17). Let $W = [1 \;\; W_1 \;\; W_2 \;\cdots\; W_r]$, where $1$ is an $n \times 1$ vector of ones, $W_1$ is the $n \times 1$ vector with $i$th element $W_{1i}$, and so forth. Let $\hat{U}^{TSLS}$ denote the vector of two-stage least squares residuals.

i. Show that $W'\hat{U}^{TSLS} = 0$.

ii. Show that the method for computing the J-statistic described in Key Concept 12.6 (using a homoskedasticity-only F-statistic) and the formula in Equation (18.63) produce the same value for the J-statistic. [Hint: Use the results in (a), (b)(i), and Exercise 18.13.]
Explanation

a) The regression is $Y = X\beta + U$, where $Y$ and $U$ are $n \times 1$ vectors, $X = [X_1 \;\; X_2]$ is an $n \times (k_1 + k_2)$ matrix, and $\beta$ is a $(k_1 + k_2) \times 1$ vector, so the OLS estimator is $\hat{\beta} = (X'X)^{-1}X'Y$.
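The worked steps were not legible in the source, so what follows is a sketch of the standard algebra for part (a), assuming only the OLS formula above and the stated condition $X_2'Y = 0_{k_2 \times 1}$. Because $X_2'Y = 0$,

$$
X'Y = \begin{bmatrix} X_1'Y \\ X_2'Y \end{bmatrix} = \begin{bmatrix} X_1'Y \\ 0 \end{bmatrix} = R'(X_1'Y),
$$

so that

$$
\hat{\beta}'(X'X)\hat{\beta} = Y'X(X'X)^{-1}(X'X)(X'X)^{-1}X'Y = (X_1'Y)'\,R(X'X)^{-1}R'\,(X_1'Y).
$$

Write $Q = R(X'X)^{-1}R'$, the upper-left $k_1 \times k_1$ block of $(X'X)^{-1}$; $Q$ is symmetric and invertible because $(X'X)^{-1}$ is positive definite. Then $R\hat{\beta} = R(X'X)^{-1}X'Y = R(X'X)^{-1}R'(X_1'Y) = Q(X_1'Y)$, so $X_1'Y = Q^{-1}R\hat{\beta}$ and

$$
\hat{\beta}'(X'X)\hat{\beta} = (Q^{-1}R\hat{\beta})'\,Q\,(Q^{-1}R\hat{\beta}) = (R\hat{\beta})'Q^{-1}(R\hat{\beta}) = (R\hat{\beta})'\left[R(X'X)^{-1}R'\right]^{-1}(R\hat{\beta}),
$$

which is the identity in part (a). Roughly, part (b) applies this identity to the auxiliary regression (12.17) with $\hat{U}^{TSLS}$ playing the role of $Y$, the instruments playing the role of $X_1$, and $W$ playing the role of $X_2$ (legitimate by the orthogonality result in (b)(i)), which is what links the F-statistic computation of Key Concept 12.6 to the J-statistic formula in Equation (18.63).

As a quick numerical sanity check (not part of the textbook's solution; the data-generating choices below are illustrative only), the part (a) identity can be verified with NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k1, k2 = 50, 3, 2

# Random regressors, partitioned as X = [X1 X2].
X1 = rng.standard_normal((n, k1))
X2 = rng.standard_normal((n, k2))
X = np.hstack([X1, X2])

# Construct Y satisfying the exercise's condition X2'Y = 0
# by projecting a random vector off the column space of X2.
v = rng.standard_normal(n)
Y = v - X2 @ np.linalg.solve(X2.T @ X2, X2.T @ v)
assert np.allclose(X2.T @ Y, 0)

# OLS estimator and the selection matrix R = [I_{k1}  0].
XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ (X.T @ Y)
R = np.hstack([np.eye(k1), np.zeros((k1, k2))])

# Left- and right-hand sides of the part (a) identity.
lhs = beta_hat @ (X.T @ X) @ beta_hat
Rb = R @ beta_hat
rhs = Rb @ np.linalg.solve(R @ XtX_inv @ R.T, Rb)
print(lhs, rhs)  # agree up to floating-point error
```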
