Implications of Gaussianity (and deviations from it)
Goals
- Leave the normal assumption behind
- Derive limiting distributions of \(\betahat\) using the CLT
- Show implications for the predictive distribution
- Derive an assumption-free version of what OLS is estimating
Leaving the normal assumption
Up to now, we’ve been assuming that
- \(\y_n = \betav^\trans \xv_n + \res_n\) for some \(\betav\)
- The regressors \(\xv_n\) are fixed, \(\X\) is full-rank, and \(\meann \xv_n \xv_n^\trans \rightarrow \Xcov\) for positive definite \(\Xcov\)
- The residuals are distributed \(\res_n \sim \gauss{0, \sigma^2}\) IID
Under these assumptions, we were able to derive closed-form, finite-sample distributions for \(\betahat\) and \(\sigmahat^2\). We also showed that the behavior of these closed-form distributions matched what you’d expect from the LLN as \(N\) gets large.
Unfortunately, the normal assumption is unreasonable in practice. So today we will modify the final assumption to
- The residuals \(\res_n\) are IID, independent of \(\xv_n\), with \(\expect{\res_n} = 0\) and \(\var{\res_n} = \sigma^2\).
That is, we no longer assume that we know the full distribution of the \(\res_n\), but rather that the mean is zero and the variance finite and constant.
What changes under this more realistic assumption?
Distribution of the OLS coefficients
We still have that
\[ \betahat = (\X^\trans \X)^{-1} \X^\trans \Y = (\X^\trans \X)^{-1} \X^\trans (\X \beta + \resv) = \beta + (\X^\trans \X)^{-1} \X^\trans \resv. \]
That means that \(\expect{\betahat} = \beta\), so our estimator is still unbiased. But the term \((\X^\trans \X)^{-1} \X^\trans \resv\) is no longer normal. It remains the case that
\[ \cov{(\X^\trans \X)^{-1} \X^\trans \resv} = (\X^\trans \X)^{-1} \X^\trans \expect{\resv\resv^\trans} \X (\X^\trans \X)^{-1} = \sigma^2 (\X^\trans \X)^{-1} \rightarrow \zerov. \]
This means that \(\betahat \rightarrow \beta\) in probability. This is expected, as we can see by recalling our LLN proof of the consistency of \(\betahat\):
\[ \begin{aligned} \betahat - \beta ={}& (\X^\trans \X)^{-1} \X^\trans \resv \\ ={}& (\frac{1}{N} \X^\trans \X)^{-1} \frac{1}{N} \X^\trans \resv \\ ={}& (\meann \xv_n \xv_n^\trans)^{-1} \meann \xv_n \res_n. \end{aligned} \]
Now, \((\meann \xv_n \xv_n^\trans)^{-1} \rightarrow \Xcov^{-1}\) by the LLN and the continuous mapping theorem, and
\[ \meann \xv_n \res_n \rightarrow \expect{\xv_n \res_n} = \xv_n \expect{\res_n} = \zerov, \]
simply using the fact that \(\xv_n\) and \(\res_n\) are independent (\(\xv_n\) is still fixed) and \(\expect{\res_n} = 0\).
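As a quick sanity check, here is a minimal simulation sketch (illustrative, not part of the lecture) of this consistency: with simulated data and deliberately non-normal (uniform) residuals, \(\betahat\) still approaches \(\beta\) as \(N\) grows. The particular \(\beta\), residual distribution, and sample sizes are assumptions made only for the example.

```python
# A simulation sketch: betahat -> beta even when the residuals are non-normal,
# here uniform on [-1, 1] (mean zero, finite variance).
import numpy as np

rng = np.random.default_rng(0)
beta = np.array([1.0, -2.0, 0.5])

for N in [100, 1000, 10000, 100000]:
    X = rng.normal(size=(N, len(beta)))          # regressors
    eps = rng.uniform(-1.0, 1.0, size=N)         # mean-zero, non-normal residuals
    y = X @ beta + eps
    betahat = np.linalg.solve(X.T @ X, X.T @ y)  # OLS estimate
    print(f"N = {N:6d}   max |betahat - beta| = {np.max(np.abs(betahat - beta)):.4f}")
```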
Although we don’t know the finite-sample distribution of \(\betahat - \beta\), the LLN points to a way to approximate the asymptotic distribution of \(\betahat - \beta\) via the CLT. Specifically, note that the \(\xv_n \res_n\) are independent but not identically distributed (the fixed \(\xv_n\) differ across \(n\)), with \(\expect{\xv_n \res_n} = 0\) and \(\cov{\xv_n \res_n} = \sigma^2 \xv_n \xv_n^\trans\). Noting that
\[ \meann \cov{\xv_n \res_n} = \sigma^2 \meann \xv_n \xv_n^\trans \rightarrow \sigma^2 \Xcov, \]
by the multivariate CLT, \[ \frac{1}{\sqrt{N}} \sumn \xv_n \res_n \rightarrow \gauss{0, \sigma^2 \Xcov}. \]
Thus, by the continuous mapping theorem,
\[ \sqrt{N}(\betahat - \beta) = (\meann \xv_n \xv_n^\trans)^{-1} \frac{1}{\sqrt{N}} \sumn \xv_n \res_n \rightarrow \Xcov^{-1} \RV{z} \quad\textrm{where}\quad \RV{z} \sim \gauss{0, \sigma^2 \Xcov}. \]
Now, by properties of the multivariate normal,
\[ \Xcov^{-1} \RV{z} \sim \gauss{0, \sigma^2 \Xcov^{-1} \Xcov \Xcov^{-1}} = \gauss{0, \sigma^2 \Xcov^{-1}}, \]
so
\[ \sqrt{N}(\betahat - \beta) \rightarrow \gauss{0, \sigma^2 \Xcov^{-1}}. \]
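Here is a small Monte Carlo sketch of this limiting result, using simulated data with uniform (non-normal) residuals: the sample covariance of \(\sqrt{N}(\betahat - \beta)\) across simulations should be close to \(\sigma^2 \Xcov^{-1}\). The setup and numbers are illustrative assumptions, not part of the lecture.

```python
# Check that sqrt(N) * (betahat - beta) has covariance close to
# sigma^2 * Sigma_X^{-1}, even with non-normal residuals.
import numpy as np

rng = np.random.default_rng(0)
N, P, n_sims = 500, 3, 2000
beta = np.array([1.0, -2.0, 0.5])
sigma2 = 1.0 / 3.0                       # variance of Uniform(-1, 1)

X = rng.normal(size=(N, P))              # regressors, held fixed across simulations
Sigma_X = X.T @ X / N                    # (1/N) X^T X

draws = np.empty((n_sims, P))
for s in range(n_sims):
    eps = rng.uniform(-1.0, 1.0, size=N)          # non-normal residuals
    y = X @ beta + eps
    betahat = np.linalg.solve(X.T @ X, X.T @ y)   # OLS
    draws[s] = np.sqrt(N) * (betahat - beta)

print("Monte Carlo covariance of sqrt(N)(betahat - beta):")
print(np.cov(draws, rowvar=False))
print("CLT prediction sigma^2 * Sigma_X^{-1}:")
print(sigma2 * np.linalg.inv(Sigma_X))
```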
Plug-in estimators for the variance
Of course, in practice, we do not observe the terms in the variance \(\sigma^2 \Xcov^{-1}\). A natural solution is to plug in their consistent estimators,
\[ \begin{aligned} \sigmahat^2 \rightarrow \sigma^2 \quad\textrm{and}\quad \frac{1}{N} \X^\trans \X \rightarrow \Xcov. \end{aligned} \]
We thus say that
\[ \betahat - \beta \sim \gauss{0, \frac{1}{N} \sigmahat^2 \left( \frac{1}{N} \X^\trans \X \right)^{-1}} \quad\textrm{approximately for large }N. \]
Recall that, under normality, we had
\[ \betahat - \beta \sim \gauss{0, \frac{1}{N} \sigma^2 \left(\frac{1}{N} \X^\trans \X\right)^{-1}} \quad\textrm{exactly, under the normal assumption, for all }N. \]
We see that the CLT gives the same distribution for large \(N\); the difference is that the normal distribution is now justified asymptotically by the CLT rather than by exact normality of the residuals. In this sense, the normal assumption is not essential for approximating the sampling distribution of \(\betahat - \beta\).
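As a sketch of how the plug-in approximation is used in practice, the following code computes \(\sigmahat^2 (\X^\trans \X)^{-1}\) from a single simulated dataset and forms approximate 95% confidence intervals for the coefficients. The data-generating choices (uniform residuals, the particular \(\beta\)) are illustrative assumptions.

```python
# Plug-in variance estimate and approximate 95% confidence intervals,
# justified by the CLT rather than by a normality assumption.
import numpy as np

rng = np.random.default_rng(1)
N, P = 500, 3
beta = np.array([1.0, -2.0, 0.5])

X = rng.normal(size=(N, P))
eps = rng.uniform(-1.0, 1.0, size=N)              # non-normal residuals
y = X @ beta + eps

betahat = np.linalg.solve(X.T @ X, X.T @ y)
resid = y - X @ betahat
sigmahat2 = np.mean(resid ** 2)                   # sigmahat^2 = (1/N) sum reshat_n^2

cov_betahat = sigmahat2 * np.linalg.inv(X.T @ X)  # plug-in covariance of betahat
se = np.sqrt(np.diag(cov_betahat))
for p in range(P):
    lo, hi = betahat[p] - 1.96 * se[p], betahat[p] + 1.96 * se[p]
    print(f"beta[{p}]: estimate {betahat[p]:+.3f}, approx 95% CI ({lo:+.3f}, {hi:+.3f})")
```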
Using the limiting distribution in the predictive distribution
Unfortunately, the normal assumption plays a much more important role in the predictive distribution. To see this, we can write as usual
\[ \y_\new - \yhat_\new = (\beta - \betahat)^\trans \xv_\new + \res_\new. \]
We can say that \((\beta - \betahat)^\trans \xv_\new\) is approximately normal for large \(N\) using the CLT. However, since the distribution of \(\res_\new\) is unknown, the distribution of \(\y_\new - \yhat_\new\) is unknown, even for large \(N\).
As a simple example, we could take
\[ \res_n = \begin{cases} 1 & \textrm{with probability }1/2\\ -1 & \textrm{with probability }1/2\\ \end{cases}. \]
These residuals satisfy the assumptions but are very non-normal, and normal predictive intervals will in general be poorly calibrated (here, far wider than necessary).
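A small simulation sketch of this miscalibration: with \(\pm 1\) residuals, the naive normal predictive interval \(\yhat_\new \pm 1.96 \sigmahat\) has empirical coverage near 100% rather than the nominal 95%, because it is far wider than the true \(\pm 1\) spread of the residuals. All quantities here are simulated and illustrative.

```python
# Coverage of a naive "normal" predictive interval when the residuals are +/-1.
import numpy as np

rng = np.random.default_rng(2)
N, P, n_new = 500, 3, 5000
beta = np.array([1.0, -2.0, 0.5])

X = rng.normal(size=(N, P))
eps = rng.choice([-1.0, 1.0], size=N)                 # residuals are +/-1 w.p. 1/2
y = X @ beta + eps

betahat = np.linalg.solve(X.T @ X, X.T @ y)
sigmahat = np.sqrt(np.mean((y - X @ betahat) ** 2))   # close to 1 here

# New points drawn with the same residual distribution.
X_new = rng.normal(size=(n_new, P))
y_new = X_new @ beta + rng.choice([-1.0, 1.0], size=n_new)
yhat_new = X_new @ betahat

covered = np.abs(y_new - yhat_new) <= 1.96 * sigmahat
print("Nominal coverage: 0.95   Actual coverage:", covered.mean())
```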
There are good methods for producing well-calibrated predictive intervals even in the case of severe non-normality, using only the assumption that \((\xv_n, \y_n)\) are IID. For interested students, I recommend starting with A Gentle Introduction to Conformal Prediction and Distribution-Free Uncertainty Quantification by Anastasios N. Angelopoulos and Stephen Bates. If we have time, we will cover conformal inference towards the end of the course.
Limiting distribution of the variance estimator (bonus content)
We can apply the same limiting distribution trick to \(\sigmahat^2\) as well, though it is not particularly useful. Recall that
\[ \begin{aligned} \sigmahat^2 ={}& \meann \reshat_n^2 \\ ={}& \meann \left((\beta - \betahat)^\trans \xv_n + \res_n \right)^2 \\ ={}& \meann \res_n^2 + 2 (\beta - \betahat)^\trans \meann \res_n \xv_n + (\beta - \betahat)^\trans \meann \xv_n\xv_n^\trans (\beta - \betahat). \end{aligned} \]
We thus can write
\[ \begin{aligned} \sqrt{N} \sigmahat^2 ={}& \frac{1}{\sqrt{N}} \sumn \res_n^2 + 2 (\beta - \betahat)^\trans \frac{1}{\sqrt{N}} \sumn \res_n \xv_n + \sqrt{N}(\beta - \betahat)^\trans \meann \xv_n\xv_n^\trans (\beta - \betahat). \end{aligned} \]
Applying the CLT to the final two terms, and using the fact that \(\beta - \betahat \rightarrow \zerov\), we can see that the only term that does not vanish as \(N\rightarrow\infty\) is the first. Centering that term at \(\sigma^2\) and applying a CLT gives
\[ \frac{1}{\sqrt{N}} \sumn (\res_n^2 - \sigma^2) \rightarrow \gauss{0, \var{\res_n^2}}, \]
assuming that \(\var{\res_n^2} < \infty\). It follows that
\[ \sqrt{N} \left( \sigmahat^2 - \sigma^2\right) \rightarrow \gauss{0, \v_\sigma} \quad\textrm{ where }\quad \v_\sigma := \var{\res_n^2}. \]
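A Monte Carlo sketch of this result, again with uniform residuals: the sample variance of \(\sqrt{N}(\sigmahat^2 - \sigma^2)\) across simulations should be close to \(\var{\res_n^2}\), which for residuals uniform on \([-1, 1]\) is \(1/5 - 1/9\). The particular setup is an illustrative assumption.

```python
# Check the limiting distribution of sigmahat^2: the Monte Carlo variance of
# sqrt(N) * (sigmahat^2 - sigma^2) should be close to var(eps^2).
import numpy as np

rng = np.random.default_rng(3)
N, P, n_sims = 2000, 3, 2000
beta = np.array([1.0, -2.0, 0.5])
X = rng.normal(size=(N, P))

sigma2 = 1.0 / 3.0                        # var of Uniform(-1, 1)
v_sigma = 1.0 / 5.0 - sigma2 ** 2         # var(eps^2) = E[eps^4] - sigma^4 = 1/5 - 1/9

stats = np.empty(n_sims)
for s in range(n_sims):
    eps = rng.uniform(-1.0, 1.0, size=N)
    y = X @ beta + eps
    betahat = np.linalg.solve(X.T @ X, X.T @ y)
    sigmahat2 = np.mean((y - X @ betahat) ** 2)
    stats[s] = np.sqrt(N) * (sigmahat2 - sigma2)

print("Monte Carlo variance:", stats.var(), "  Theory var(eps^2):", v_sigma)
```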
However, this is not very useful because of how \(\sigmahat\) is used. Consider, for example, the problem of constructing intervals for the regression coefficients. Let
\[ \Xcovhat := \meann \xv_n\xv_n^\trans, \]
and consider the standardized quantity \[ \frac{1}{\sigmahat} \Xcovhat^{1/2} \sqrt{N}(\betahat - \beta) \approx \frac{1}{\sigma} \Xcov^{1/2} \sqrt{N} (\betahat - \beta) \rightarrow \gauss{\zerov, \id}. \]
Now,
\[ \frac{1}{\sigmahat} = \frac{1}{\sqrt{\sigmahat^2 - \sigma^2 + \sigma^2}} = \frac{1}{\sigma} \frac{1}{\sqrt{\left(\frac{\sigmahat^2}{\sigma^2} - 1 \right) + 1}}. \]
By the CLT, we know that \(\sqrt{N}\left(\frac{\sigmahat^2}{\sigma^2} - 1 \right)\) converges to a normal random variable. It follows that \(\frac{\sigmahat^2}{\sigma^2} - 1\) is small for large \(N\), roughly of order \(1 / \sqrt{N}\). How does its variability affect the variability of the preceding term? By series expanding the function \(\frac{1}{\sqrt{1 + z}} = (1 + z)^{-1/2}\) around \(z = 0\), we see that
\[ \frac{1}{\sqrt{1 + z}} \approx 1 - \frac{1}{2} (1 + 0)^{-3/2} (z - 0) = 1 - \frac{1}{2} z \quad\textrm{for small }z, \]
so that \[ \frac{1}{\sigmahat} \approx \frac{1}{\sigma} \left( 1 - \frac{1}{2}\left(\frac{\sigmahat^2}{\sigma^2} - 1 \right) \right) \approx \frac{1}{\sigma} \left( 1 + C / \sqrt{N} \right) \] for some random \(C\) of constant order.
The variability induced by the randomness in \(\sigmahat\) is thus an order smaller than that induced by \(\betahat - \beta\), simply because \(\sigmahat\) has a nonzero mean.
A similar argument shows why, for large \(N\), the difference between \(\Xcovhat\) and \(\Xcov\) is negligible. It is perhaps for this reason that, other than the use of Student-t intervals motivated by the normal assumption, the variability of \(\sigmahat\) is not typically incorporated into standard error calculations.
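To see the orders of magnitude numerically, the following sketch compares the fluctuations of \(\sigmahat / \sigma\) to \(1/\sqrt{N}\) in a simulated setup: the ratio concentrates around one at roughly the \(1/\sqrt{N}\) rate, so standardizing with \(\sigmahat\) instead of \(\sigma\) perturbs the statistic only at lower order. All values are illustrative assumptions.

```python
# sigmahat / sigma = 1 + O_p(1/sqrt(N)), so replacing sigma by sigmahat in the
# standardization changes the statistic only at lower order.
import numpy as np

rng = np.random.default_rng(4)
N, P, n_sims = 2000, 3, 2000
beta = np.array([1.0, -2.0, 0.5])
sigma = np.sqrt(1.0 / 3.0)               # sd of Uniform(-1, 1)
X = rng.normal(size=(N, P))

ratio = np.empty(n_sims)
for s in range(n_sims):
    eps = rng.uniform(-1.0, 1.0, size=N)
    y = X @ beta + eps
    betahat = np.linalg.solve(X.T @ X, X.T @ y)
    sigmahat = np.sqrt(np.mean((y - X @ betahat) ** 2))
    ratio[s] = sigmahat / sigma

print("mean of sigmahat/sigma:", ratio.mean())
print("sd of sigmahat/sigma:  ", ratio.std(), " (compare to 1/sqrt(N) =", 1 / np.sqrt(N), ")")
```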