$$
\newcommand{\mybold}[1]{\boldsymbol{#1}}
\newcommand{\trans}{\intercal}
\newcommand{\norm}[1]{\left\Vert#1\right\Vert}
\newcommand{\abs}[1]{\left|#1\right|}
\newcommand{\bbr}{\mathbb{R}}
\newcommand{\bbz}{\mathbb{Z}}
\newcommand{\bbc}{\mathbb{C}}
\newcommand{\gauss}[1]{\mathcal{N}\left(#1\right)}
\newcommand{\chisq}[1]{\chi^2_{#1}}
\newcommand{\studentt}[1]{\mathrm{StudentT}_{#1}}
\newcommand{\fdist}[2]{\mathrm{FDist}_{#1,#2}}
\newcommand{\argmin}[1]{\underset{#1}{\mathrm{argmin}}\,}
\newcommand{\projop}[1]{\underset{#1}{\mathrm{Proj}}\,}
\newcommand{\proj}[1]{\underset{#1}{\mybold{P}}}
\newcommand{\expect}[1]{\mathbb{E}\left[#1\right]}
\newcommand{\prob}[1]{\mathbb{P}\left(#1\right)}
\newcommand{\dens}[1]{\mathit{p}\left(#1\right)}
\newcommand{\var}[1]{\mathrm{Var}\left(#1\right)}
\newcommand{\cov}[1]{\mathrm{Cov}\left(#1\right)}
\newcommand{\sumn}{\sum_{n=1}^N}
\newcommand{\meann}{\frac{1}{N} \sumn}
\newcommand{\cltn}{\frac{1}{\sqrt{N}} \sumn}
\newcommand{\trace}[1]{\mathrm{trace}\left(#1\right)}
\newcommand{\diag}[1]{\mathrm{Diag}\left(#1\right)}
\newcommand{\grad}[2]{\nabla_{#1} \left. #2 \right.}
\newcommand{\gradat}[3]{\nabla_{#1} \left. #2 \right|_{#3}}
\newcommand{\fracat}[3]{\left. \frac{#1}{#2} \right|_{#3}}
\newcommand{\W}{\mybold{W}}
\newcommand{\w}{w}
\newcommand{\wbar}{\bar{w}}
\newcommand{\wv}{\mybold{w}}
\newcommand{\X}{\mybold{X}}
\newcommand{\x}{x}
\newcommand{\xbar}{\bar{x}}
\newcommand{\xv}{\mybold{x}}
\newcommand{\Xcov}{\Sigmam_{\X}}
\newcommand{\Xcovhat}{\hat{\Sigmam}_{\X}}
\newcommand{\Covsand}{\Sigmam_{\mathrm{sand}}}
\newcommand{\Covsandhat}{\hat{\Sigmam}_{\mathrm{sand}}}
\newcommand{\Z}{\mybold{Z}}
\newcommand{\z}{z}
\newcommand{\zv}{\mybold{z}}
\newcommand{\zbar}{\bar{z}}
\newcommand{\Y}{\mybold{Y}}
\newcommand{\Yhat}{\hat{\Y}}
\newcommand{\y}{y}
\newcommand{\yv}{\mybold{y}}
\newcommand{\yhat}{\hat{\y}}
\newcommand{\ybar}{\bar{y}}
\newcommand{\res}{\varepsilon}
\newcommand{\resv}{\mybold{\res}}
\newcommand{\resvhat}{\hat{\mybold{\res}}}
\newcommand{\reshat}{\hat{\res}}
\newcommand{\betav}{\mybold{\beta}}
\newcommand{\betavhat}{\hat{\betav}}
\newcommand{\betahat}{\hat{\beta}}
\newcommand{\betastar}{{\beta^{*}}}
\newcommand{\bv}{\mybold{b}}
\newcommand{\bvhat}{\hat{\bv}}
\newcommand{\alphav}{\mybold{\alpha}}
\newcommand{\alphavhat}{\hat{\alphav}}
\newcommand{\alphahat}{\hat{\alpha}}
\newcommand{\omegav}{\mybold{\omega}}
\newcommand{\gv}{\mybold{\gamma}}
\newcommand{\gvhat}{\hat{\gv}}
\newcommand{\ghat}{\hat{\gamma}}
\newcommand{\hv}{\mybold{h}}
\newcommand{\hvhat}{\hat{\hv}}
\newcommand{\hhat}{\hat{h}}
\newcommand{\gammav}{\mybold{\gamma}}
\newcommand{\gammavhat}{\hat{\gammav}}
\newcommand{\gammahat}{\hat{\gamma}}
\newcommand{\new}{\mathrm{new}}
\newcommand{\zerov}{\mybold{0}}
\newcommand{\onev}{\mybold{1}}
\newcommand{\id}{\mybold{I}}
\newcommand{\sigmahat}{\hat{\sigma}}
\newcommand{\etav}{\mybold{\eta}}
\newcommand{\muv}{\mybold{\mu}}
\newcommand{\Sigmam}{\mybold{\Sigma}}
\newcommand{\rdom}[1]{\mathbb{R}^{#1}}
\newcommand{\RV}[1]{\tilde{#1}}
\def\A{\mybold{A}}
\def\av{\mybold{a}}
\def\a{a}
\def\B{\mybold{B}}
\def\S{\mybold{S}}
\def\sv{\mybold{s}}
\def\s{s}
\def\R{\mybold{R}}
\def\rv{\mybold{r}}
\def\r{r}
\def\V{\mybold{V}}
\def\vv{\mybold{v}}
\def\v{v}
\def\U{\mybold{U}}
\def\uv{\mybold{u}}
\def\u{u}
\def\tv{\mybold{t}}
\def\t{t}
\def\Sc{\mathcal{S}}
\def\ev{\mybold{e}}
\def\Lammat{\mybold{\Lambda}}
$$
Please write your full name and email address:
\[\\[1in]\]
For this quiz, we’ll consider the linear models
\[
\begin{aligned}
y_n ={} \betav^\trans \xv_n + \res_n
&\quad\textrm{and}\quad
y_n ={} \gammav^\trans \zv_n + \eta_n
\end{aligned}
\]
with
\[
\begin{aligned}
\xv_n ={} (1, \x_n)^\trans
&\quad\textrm{and}\quad
\zv_n ={} (1, \z_n)^\trans \textrm{ where}
\\
\overline{\x} :={} \meann \x_n
&\quad\textrm{and}\quad
\z_n :={} \x_n - \overline{\x}.
\end{aligned}
\]
Assume that \(\x_n\) is not constant (i.e., \(\x_n \ne \x_m\) for at least one pair \(n\) and \(m\)), so that \(\X^\trans \X\) and \(\Z^\trans \Z\) are invertible.
Let \(\X\) denote the \(N \times 2\) matrix whose \(n\)th row is \(\xv_n^\trans\), and let \(\Z\) denote the \(N \times 2\) matrix whose \(n\)th row is \(\zv_n^\trans\).
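Written out, these matrices are
\[
\X = \begin{pmatrix}
1 & \x_1 \\
\vdots & \vdots \\
1 & \x_N
\end{pmatrix}
\quad\textrm{and}\quad
\Z = \begin{pmatrix}
1 & \x_1 - \overline{\x} \\
\vdots & \vdots \\
1 & \x_N - \overline{\x}
\end{pmatrix}.
\]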
Recall that the inverse of a \(2 \times 2\) matrix is given by
\[
\begin{pmatrix}
a & b \\
c & d
\end{pmatrix}^{-1} =
\frac{1}{ad - bc}
\begin{pmatrix}
d & -b \\
-c & a
\end{pmatrix}.
\]
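As a quick check, multiplying the two matrices recovers the identity:
\[
\frac{1}{ad - bc}
\begin{pmatrix}
d & -b \\
-c & a
\end{pmatrix}
\begin{pmatrix}
a & b \\
c & d
\end{pmatrix}
=
\frac{1}{ad - bc}
\begin{pmatrix}
ad - bc & 0 \\
0 & ad - bc
\end{pmatrix}
= \id,
\]
provided that \(ad - bc \ne 0\).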
You have 20 minutes for this quiz.
There are three parts, (a), (b), and (c), each weighted equally.
(a)
Find a \(2 \times 2\) matrix \(\A\) such that \(\Z = \X \A\).
(b)
Suppose I tell you that the OLS estimate of \(\betav\) is given by \(\betavhat = (2, 3)^\trans\), and that \(\overline{\x} = 4\). What is the value of \(\gammavhat\), the OLS estimate of \(\gammav\)?
(c)
In general, can you say whether one regression will provide a better fit than the other? That is, can you say which of \(\meann (\y_n - \zv_n^\trans\gammavhat)^2\) and \(\meann (\y_n - \xv_n^\trans\betavhat)^2\) is smaller? Argue why or why not.