
Introduction to Econometrics, 3rd Edition, by James Stock and Mark Watson
ISBN: 978-9352863501
Exercise 7
Consider the regression model Y = Xβ + U. Partition X as [X1 X2] and β as

![Partition of β](https://d2lvgg3v3hfg70.cloudfront.net/SM2685/11eb817c_781a_ec42_84e6_c944d1fd1c62_SM2685_11.jpg)

where X1 has k1 columns and X2 has k2 columns. Suppose that

![Assumption](https://d2lvgg3v3hfg70.cloudfront.net/SM2685/11eb817c_781a_ec43_84e6_81fb05f143b0_SM2685_11.jpg)

Let

![Definition](https://d2lvgg3v3hfg70.cloudfront.net/SM2685/11eb817c_781a_ec44_84e6_99bdafe13e64_SM2685_11.jpg)

a. Show that

![Result to show in part (a)](https://d2lvgg3v3hfg70.cloudfront.net/SM2685/11eb817c_781a_ec45_84e6_6d7d2a6ec3fd_SM2685_00.jpg)

b. Consider the regression described in Equation (12.17). Let W = [1 W1 W2 … Wr], where 1 is the n × 1 vector of ones, W_l is the n × 1 vector whose ith element is W_li, and so forth. Let

![Definition of the TSLS residual vector](https://d2lvgg3v3hfg70.cloudfront.net/SM2685/11eb817c_781b_1356_84e6_3f4dd12d3463_SM2685_11.jpg)

denote the vector of two-stage least squares (TSLS) residuals.

i. Show that

![Result to show in part (b)(i)](https://d2lvgg3v3hfg70.cloudfront.net/SM2685/11eb817c_781b_1357_84e6_0d40d73367c8_SM2685_11.jpg)

ii. Show that the method for computing the J-statistic described in Key Concept 12.6 (using a homoskedasticity-only F-statistic) and the formula in Equation (18.63) produce the same value for the J-statistic. [Hint: Use the results in (a), (b)(i), and Exercise 18.13.]
Explanation
a) The regression is Y = Xβ + U, where X1 and X2 are matrices with k1 and k2 columns, respectively.
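The equivalence in part (b)(ii) can be illustrated numerically. The sketch below follows the Key Concept 12.6 recipe as described in the exercise text: estimate by TSLS, regress the TSLS residuals on W, compute the homoskedasticity-only F-statistic testing that the m instrument coefficients are zero, and set J = mF. The data-generating process (coefficient values, instrument strengths, variable names) is an illustrative assumption, not part of the exercise.

```python
# Numerical sketch of the J-statistic recipe in Key Concept 12.6.
# The simulated DGP below is made up for illustration; only the
# computational steps follow the text of the exercise.
import numpy as np

rng = np.random.default_rng(0)
n, m = 500, 3                              # sample size, number of instruments

# Simulate one endogenous regressor X1 (instrumented by Z) and one
# included exogenous regressor W1; u and the first stage are correlated.
Z = rng.normal(size=(n, m))
W1 = rng.normal(size=n)
e = rng.normal(size=n)
v = 0.5 * e + rng.normal(size=n)           # endogeneity in X1
X1 = Z @ np.array([1.0, 0.8, 0.6]) + v
Y = 1.0 + 2.0 * X1 + 0.5 * W1 + e

ones = np.ones(n)
W = np.column_stack([ones, W1, Z])         # W = [1, exogenous, instruments]
X = np.column_stack([ones, X1, W1])        # structural-equation regressors

# Two-stage least squares: project X onto col(W), regress Y on the projection.
PW = W @ np.linalg.solve(W.T @ W, W.T)     # projection matrix onto col(W)
Xhat = PW @ X
beta_tsls = np.linalg.solve(Xhat.T @ X, Xhat.T @ Y)
u = Y - X @ beta_tsls                      # TSLS residuals (use X, not Xhat)

# Key Concept 12.6: regress u on W; homoskedasticity-only F-statistic for
# the joint hypothesis that the m instrument coefficients are zero; J = m*F.
g_full = np.linalg.lstsq(W, u, rcond=None)[0]
ssr_u = np.sum((u - W @ g_full) ** 2)
W0 = np.column_stack([ones, W1])           # restricted: instruments dropped
g_r = np.linalg.lstsq(W0, u, rcond=None)[0]
ssr_r = np.sum((u - W0 @ g_r) ** 2)
F = ((ssr_r - ssr_u) / m) / (ssr_u / (n - m - 2))  # 2 = constant + W1
J = m * F
print(J)
```

One orthogonality in the spirit of part (b)(i) can be checked directly: the TSLS normal equations force the projected regressors to be exactly orthogonal to the TSLS residuals, i.e. Xhat' u = 0 up to floating-point error.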