
180 Multiple-Choice Questions in Econometrics – Part 2

Chapter 6: Linear Regression with Multiple Regressors

KTL_001_C6_1: In the multiple regression model, the adjusted \({R^2}\) or \({\bar R^2}\)
○ cannot be negative.
● will never be greater than the regression \({R^2}\).
○ equals the square of the correlation coefficient r.
○ cannot decrease when an additional explanatory variable is added.

KTL_001_C6_2: In a regression model with two regressors, omitting one variable which is relevant
○ will have no effect on the coefficient of the included variable if the correlation between the excluded and the included variable is negative.
○ will always bias the coefficient of the included variable upwards.
● can result in a negative value for the coefficient of the included variable, even though the coefficient would have a significant positive effect on Y if the omitted variable were included.
○ makes the sum of the product between the included variable and the residuals different from 0.
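For reference (a standard result, not part of the original question): if \(X_2\) is omitted from \(Y = {\beta _0} + {\beta _1}{X_1} + {\beta _2}{X_2} + u\), the short-regression estimator satisfies

\[{\hat \beta _1}\;\xrightarrow{p}\;{\beta _1} + {\beta _2}\frac{{Cov\left( {{X_1},{X_2}} \right)}}{{Var\left( {{X_1}} \right)}},\]

so the bias can be positive, negative, or zero depending on the signs of \({\beta _2}\) and the covariance. This is why the bias is not "always upwards," and why a large enough negative bias can flip the sign of \({\hat \beta _1}\).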

KTL_001_C6_3: Under the least squares assumptions for the multiple regression problem (zero conditional mean for the error term, all \({X_i}\) and \({Y_i}\) being i.i.d., all \({X_i}\) and \({u_i}\) having finite fourth moments, no perfect multicollinearity), the OLS estimators for the slopes and intercept
○ have an exact normal distribution for n > 25.
○ are BLUE.
○ have a normal distribution in small samples as long as the errors are homoskedastic.
● are unbiased and consistent.

KTL_001_C6_4: The following OLS assumption is most likely violated by omitted variables bias:
● \(E\left( {{u_i}|{X_i}} \right) = 0\)
○ \({X_i},{Y_i}\), i = 1, …, n, are i.i.d. draws from their joint distribution
○ there are no outliers for \({X_i},{u_i}\)
○ there is heteroskedasticity

KTL_001_C6_5: The adjusted \({R^2}\), or \({\bar R^2}\), is given by
○ \(1 – \frac{{n – 2}}{{n – k – 1}}\frac{{SSR}}{{TSS}}\)
○ \(1 – \frac{{n – 1}}{{n – k – 1}}\frac{{ESS}}{{TSS}}\)
● \(1 – \frac{{n – 1}}{{n – k – 1}}\frac{{SSR}}{{TSS}}\)
○ \(\frac{{ESS}}{{TSS}}\)
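The keyed formula can be checked numerically. A minimal sketch (function name and the sample values are illustrative, not from the quiz):

```python
# Adjusted R-squared: R-bar^2 = 1 - (n-1)/(n-k-1) * SSR/TSS,
# where n = observations, k = number of regressors.
def adjusted_r_squared(ssr, tss, n, k):
    return 1 - (n - 1) / (n - k - 1) * ssr / tss

# Example: n = 102, k = 2, SSR/TSS = 0.25 so R^2 = 0.75.
# The penalty factor (n-1)/(n-k-1) > 1 pushes R-bar^2 slightly below R^2.
print(adjusted_r_squared(25.0, 100.0, 102, 2))
```

Because the penalty factor exceeds 1 whenever k ≥ 1, \({\bar R^2}\) never exceeds \({R^2}\), which is the keyed answer to KTL_001_C6_1 above.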

KTL_001_C6_6: Consider the multiple regression model with two regressors \({X_1},{X_2}\), where both variables are determinants of the dependent variable. When omitting \({X_2}\) from the regression, there will be omitted variable bias for \({{\hat \beta }_1}\)
● if \({X_1},{X_2}\) are correlated
○ always
○ if \({X_2}\) is measured in percentages
○ only if \({X_2}\) is a dummy variable
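The keyed answer can be seen in a small simulation (all numbers below are illustrative assumptions, not from the question): both regressors matter, \({X_1}\) and \({X_2}\) are correlated, and \({X_2}\) is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x2 = rng.normal(size=n)
x1 = 0.8 * x2 + rng.normal(size=n)            # Cov(X1, X2) = 0.8, Var(X1) = 1.64
y = 1.0 * x1 + 2.0 * x2 + rng.normal(size=n)  # true beta1 = 1, beta2 = 2

# Short regression of Y on X1 alone: the slope absorbs part of X2's effect,
# converging to beta1 + beta2 * Cov(X1, X2) / Var(X1) = 1 + 2 * 0.8 / 1.64 ≈ 1.98.
b1_short = np.polyfit(x1, y, 1)[0]
print(b1_short)  # well above the true value of 1.0
```

Setting the correlation between \({X_1}\) and \({X_2}\) to zero in the same simulation removes the bias, matching the keyed answer.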

KTL_001_C6_7: The dummy variable trap is an example of
○ imperfect multicollinearity
○ something that is of theoretical interest only
● perfect multicollinearity
○ something that does not happen to university or college students
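The trap is easy to exhibit numerically. A hypothetical 3-category example (the data below are made up): including an intercept and a dummy for every category makes the columns linearly dependent, because the dummies sum to the intercept column.

```python
import numpy as np

groups = np.array([0, 0, 1, 1, 2, 2])
dummies = np.eye(3)[groups]                       # one dummy per category
X = np.column_stack([np.ones(len(groups)), dummies])  # intercept + ALL dummies

print(X.shape[1])                # 4 columns
print(np.linalg.matrix_rank(X))  # rank 3: perfect multicollinearity
```

Dropping one dummy (the usual fix) restores full column rank, so \(X'X\) is invertible and OLS is defined.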

KTL_001_C6_8: Imperfect multicollinearity
○ is not relevant to the field of economics and business administration
○ only occurs in the study of finance
○ means that the least squares estimator of the slope is biased
● means that two or more of the regressors are highly correlated

KTL_001_C6_9: Consider the multiple regression model with two regressors \({X_1},{X_2}\), where both variables are determinants of the dependent variable. You first regress Y on \({X_1}\) only and find no relationship. However, when regressing Y on \({X_1},{X_2}\), the slope coefficient \({{\hat \beta }_1}\) changes by a large amount. This suggests that your first regression suffers from
○ heteroskedasticity
○ perfect multicollinearity
● omitted variable bias
○ dummy variable trap

KTL_001_C6_10: Imperfect multicollinearity
● implies that it will be difficult to estimate precisely one or more of the partial effects using the data at hand
○ violates one of the four Least Squares assumptions in the multiple regression model
○ means that you cannot estimate the effect of at least one of the Xs on Y
○ suggests that a standard spreadsheet program does not have enough power to estimate the multiple regression model
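The loss of precision (but not unbiasedness) can be illustrated with a simulation. A minimal sketch under assumed values: the function name, sample sizes, and correlations are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def beta1_sd(rho, n=200, reps=500):
    """Sampling standard deviation of the OLS estimate of beta1 when
    Corr(X1, X2) = rho. True model: y = x1 + x2 + u."""
    estimates = []
    for _ in range(reps):
        x2 = rng.normal(size=n)
        x1 = rho * x2 + np.sqrt(1 - rho**2) * rng.normal(size=n)
        y = x1 + x2 + rng.normal(size=n)
        X = np.column_stack([np.ones(n), x1, x2])
        estimates.append(np.linalg.lstsq(X, y, rcond=None)[0][1])
    return np.std(estimates)

# Higher correlation between the regressors -> much larger sampling spread
# of beta1-hat, i.e. the partial effect is estimated less precisely.
print(beta1_sd(0.1) < beta1_sd(0.95))  # True
```

In both cases the estimates center on the true coefficient; only the spread differs, which is exactly the keyed answer.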
