$$ \newcommand{\mybold}[1]{\boldsymbol{#1}} \newcommand{\trans}{\intercal} \newcommand{\norm}[1]{\left\Vert#1\right\Vert} \newcommand{\abs}[1]{\left|#1\right|} \newcommand{\bbr}{\mathbb{R}} \newcommand{\bbz}{\mathbb{Z}} \newcommand{\bbc}{\mathbb{C}} \newcommand{\gauss}[1]{\mathcal{N}\left(#1\right)} \newcommand{\chisq}[1]{\mathcal{\chi}^2_{#1}} \newcommand{\studentt}[1]{\mathrm{StudentT}_{#1}} \newcommand{\fdist}[2]{\mathrm{FDist}_{#1,#2}} \newcommand{\argmin}[1]{\underset{#1}{\mathrm{argmin}}\,} \newcommand{\projop}[1]{\underset{#1}{\mathrm{Proj}}\,} \newcommand{\proj}[1]{\underset{#1}{\mybold{P}}} \newcommand{\expect}[1]{\mathbb{E}\left[#1\right]} \newcommand{\prob}[1]{\mathbb{P}\left(#1\right)} \newcommand{\dens}[1]{\mathit{p}\left(#1\right)} \newcommand{\var}[1]{\mathrm{Var}\left(#1\right)} \newcommand{\cov}[1]{\mathrm{Cov}\left(#1\right)} \newcommand{\sumn}{\sum_{n=1}^N} \newcommand{\meann}{\frac{1}{N} \sumn} \newcommand{\cltn}{\frac{1}{\sqrt{N}} \sumn} \newcommand{\trace}[1]{\mathrm{trace}\left(#1\right)} \newcommand{\diag}[1]{\mathrm{Diag}\left(#1\right)} \newcommand{\grad}[2]{\nabla_{#1} \left. #2 \right.} \newcommand{\gradat}[3]{\nabla_{#1} \left. #2 \right|_{#3}} \newcommand{\fracat}[3]{\left. \frac{#1}{#2} \right|_{#3}} \newcommand{\W}{\mybold{W}} \newcommand{\w}{w} \newcommand{\wbar}{\bar{w}} \newcommand{\wv}{\mybold{w}} \newcommand{\X}{\mybold{X}} \newcommand{\x}{x} \newcommand{\xbar}{\bar{x}} \newcommand{\xv}{\mybold{x}} \newcommand{\Xcov}{\Sigmam_{\X}} \newcommand{\Xcovhat}{\hat{\Sigmam}_{\X}} \newcommand{\Covsand}{\Sigmam_{\mathrm{sand}}} \newcommand{\Covsandhat}{\hat{\Sigmam}_{\mathrm{sand}}} \newcommand{\Z}{\mybold{Z}} \newcommand{\z}{z} \newcommand{\zv}{\mybold{z}} \newcommand{\zbar}{\bar{z}} \newcommand{\Y}{\mybold{Y}} \newcommand{\Yhat}{\hat{\Y}} \newcommand{\y}{y} \newcommand{\yv}{\mybold{y}} \newcommand{\yhat}{\hat{\y}} \newcommand{\ybar}{\bar{y}} \newcommand{\res}{\varepsilon} \newcommand{\resv}{\mybold{\res}} \newcommand{\resvhat}{\hat{\mybold{\res}}} \newcommand{\reshat}{\hat{\res}} \newcommand{\betav}{\mybold{\beta}} \newcommand{\betavhat}{\hat{\betav}} \newcommand{\betahat}{\hat{\beta}} \newcommand{\betastar}{{\beta^{*}}} \newcommand{\bv}{\mybold{\b}} \newcommand{\bvhat}{\hat{\bv}} \newcommand{\alphav}{\mybold{\alpha}} \newcommand{\alphavhat}{\hat{\av}} \newcommand{\alphahat}{\hat{\alpha}} \newcommand{\omegav}{\mybold{\omega}} \newcommand{\gv}{\mybold{\gamma}} \newcommand{\gvhat}{\hat{\gv}} \newcommand{\ghat}{\hat{\gamma}} \newcommand{\hv}{\mybold{\h}} \newcommand{\hvhat}{\hat{\hv}} \newcommand{\hhat}{\hat{\h}} \newcommand{\gammav}{\mybold{\gamma}} \newcommand{\gammavhat}{\hat{\gammav}} \newcommand{\gammahat}{\hat{\gamma}} \newcommand{\new}{\mathrm{new}} \newcommand{\zerov}{\mybold{0}} \newcommand{\onev}{\mybold{1}} \newcommand{\id}{\mybold{I}} \newcommand{\sigmahat}{\hat{\sigma}} \newcommand{\etav}{\mybold{\eta}} \newcommand{\muv}{\mybold{\mu}} \newcommand{\Sigmam}{\mybold{\Sigma}} \newcommand{\rdom}[1]{\mathbb{R}^{#1}} \newcommand{\RV}[1]{\tilde{#1}} \def\A{\mybold{A}} \def\A{\mybold{A}} \def\av{\mybold{a}} \def\a{a} \def\B{\mybold{B}} \def\S{\mybold{S}} \def\sv{\mybold{s}} \def\s{s} \def\R{\mybold{R}} \def\rv{\mybold{r}} \def\r{r} \def\V{\mybold{V}} \def\vv{\mybold{v}} \def\v{v} \def\U{\mybold{U}} \def\uv{\mybold{u}} \def\u{u} \def\W{\mybold{W}} \def\wv{\mybold{w}} \def\w{w} \def\tv{\mybold{t}} \def\t{t} \def\Sc{\mathcal{S}} \def\ev{\mybold{e}} \def\Lammat{\mybold{\Lambda}} $$

STAT151A Code homework 3: Due February 23rd

Author

Your name here

In this homework problem, we’ll simulate some data and check our predictions for a regression problem of the form \(\y_n = \xv_n^\trans \betav + \res_n\), where \(\res_n \sim \gauss{0, \sigma^2}\), independently of \(\xv_n\).

For this problem, do not use lm() or other built-in regression functions (except possibly to sanity check that you have done things correctly). You may (and are in fact encouraged to) use your solutions from past homeworks.

1 Implementation

Implement the following functions:

# Generate a random covariance matrix.
#
# Args:
#   dim: The dimension of the covariance matrix
#
# Returns:
#   A valid dim x dim covariance matrix
DrawCovMat <- function(dim) {
    # ... your code here ...
}
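
For reference, here is a minimal sketch of one possible construction (not the only valid one): for any square matrix \(\A\), the product \(\A \A^\trans\) is symmetric and positive semi-definite, and adding a small multiple of the identity makes it strictly positive definite, hence a valid covariance matrix.

# Sketch: a_mat %*% t(a_mat) is symmetric and positive semi-definite for
# any a_mat; the diagonal term keeps the eigenvalues strictly positive.
DrawCovMat <- function(dim) {
    a_mat <- matrix(rnorm(dim * dim), nrow=dim, ncol=dim)
    return(a_mat %*% t(a_mat) + 0.1 * diag(dim))
}
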
# Generate a random matrix of regressors.
#
# Args:
#   n_obs: The number of regression observations
#   cov_mat: A dim x dim valid covariance matrix
#
# Returns:
#   A n_obs x dim matrix of normally distributed random regressors
#   where the rows have covariance cov_mat
SimulateRegressors <- function(n_obs, cov_mat) {
    # ... your code here ...
}
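
One way to draw rows with a given covariance (a sketch assuming cov_mat is strictly positive definite, as the construction above guarantees): take the Cholesky factorization \(\Sigmam = \R^\trans \R\) and multiply standard normal draws by \(\R\).

SimulateRegressors <- function(n_obs, cov_mat) {
    dim <- ncol(cov_mat)
    z_mat <- matrix(rnorm(n_obs * dim), nrow=n_obs, ncol=dim)
    # chol() returns the upper triangular R with cov_mat = t(R) %*% R, so
    # each row x = t(R) %*% z of z_mat %*% R has covariance t(R) %*% R.
    return(z_mat %*% chol(cov_mat))
}
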
# Generate the response for a linear model.
#
# Args:
#   x_mat: A n_obs x dim matrix of regressors
#   beta: A dim-length vector of true regression coefficients
#   sigma: The standard deviation of the residuals
#
# Returns:
#   A n_obs-vector of responses drawn from the regression
#   model y_n = x_n^T \beta + \epsilon_n, where \epsilon_n
#   is distributed N(0, sigma^2).
SimulateResponses <- function(x_mat, beta, sigma) {
    # ... your code here ...
}
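
A sketch of a direct implementation:

SimulateResponses <- function(x_mat, beta, sigma) {
    # x_mat %*% beta computes x_n^T beta for every row at once.
    return(as.vector(x_mat %*% beta) + rnorm(nrow(x_mat), mean=0, sd=sigma))
}
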
# Estimate the regression coefficient.
#
# Args:
#   x: A n_obs x dim matrix of regressors
#   y: A n_obs-length vector of responses
#
# Returns:
#   A dim-length vector estimating the coefficient 
#   for the least squares regression y ~ x.
GetBetahat <- function(x, y) {
    # ... your code here ...
}
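
Since lm() is off the table, one sketch solves the normal equations \(\X^\trans \X \betavhat = \X^\trans \Y\) directly:

GetBetahat <- function(x, y) {
    # solve(A, b) solves A %*% betahat = b without forming the inverse.
    return(as.vector(solve(t(x) %*% x, t(x) %*% y)))
}
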
# Estimate the residual standard deviation.
#
# Args:
#   x: A n_obs x dim matrix of regressors
#   y: A n_obs-length vector of responses
#
# Returns:
#   An estimate of the residual standard deviation 
#   for the least squares regression y ~ x.
GetSigmahat <- function(x, y) {
    # ... your code here ...
}
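
A sketch using the degrees-of-freedom-corrected estimator \(\sigmahat^2 = \norm{\Y - \X \betavhat}^2 / (N - P)\):

GetSigmahat <- function(x, y) {
    betahat <- GetBetahat(x, y)
    resid <- y - as.vector(x %*% betahat)
    # Divide by N - P (not N) so that sigmahat^2 is unbiased for sigma^2.
    return(sqrt(sum(resid^2) / (nrow(x) - ncol(x))))
}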

2 Draw and check

Choose a dimension. Using a large \(N\), check that your functions are working correctly: for example, that the sample covariance of the regressors is close to the covariance you generated, and that \(\betavhat\) and \(\sigmahat\) are close to the true \(\betav\) and \(\sigma\).
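
For example, here is a sketch of one such check, with an arbitrarily chosen dimension, \(\betav\), and \(\sigma\); all three printed quantities should be near zero:

set.seed(42)  # Any seed will do; this just makes the check reproducible.
dim <- 3
n_obs <- 100000
cov_mat <- DrawCovMat(dim)
beta <- c(1, -2, 0.5)  # An arbitrary "true" coefficient vector.
sigma <- 2.0           # An arbitrary "true" residual standard deviation.

x_mat <- SimulateRegressors(n_obs, cov_mat)
y <- SimulateResponses(x_mat, beta, sigma)

max(abs(cov(x_mat) - cov_mat))         # Sample covariance vs. cov_mat.
max(abs(GetBetahat(x_mat, y) - beta))  # Estimated vs. true coefficients.
abs(GetSigmahat(x_mat, y) - sigma)     # Estimated vs. true residual sd.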

3 Draw a training set and test set

Simulate a training set \(\X\) and \(\Y\) with \(N = 1000\) and \(P = 3\), and values of \(\betav\) and \(\sigma\) that you choose. Use the training set to form estimates \(\betavhat\) and \(\sigmahat\). Then, draw a new set of \(500\) test data points, \(\X_\new\) and \(\Y_\new\).

Use \(\betavhat\), \(\sigmahat\), and \(\X_\new\) to form an approximate 80% predictive interval for each entry of \(\Y_\new\). Compute the proportion of the entries of \(\Y_\new\) that actually lie in their intervals. Is your prediction interval performing as expected? Why or why not?
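
For instance, here is a sketch of one such interval. It treats \(\betavhat\) and \(\sigmahat\) as if they were the true parameter values (only approximately valid, especially for small \(N\)), and it assumes x_new, y_new, betahat, and sigmahat hold your test regressors, test responses, and training-set estimates:

# Ignoring the estimation error in betahat and sigmahat, the model says
# y_new | x_new is approximately N(x_new^T betahat, sigmahat^2).
yhat_new <- as.vector(x_new %*% betahat)
z <- qnorm(0.9)  # An 80% central interval leaves 10% in each tail.
covered <- (y_new >= yhat_new - z * sigmahat) &
           (y_new <= yhat_new + z * sigmahat)
mean(covered)  # The proportion of test responses inside their intervals.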

4 Vary the setting

Choose three of the following questions to answer.

Explore how the coverage of the test set varies when:

  • \(N\) increases or decreases, all else fixed (one way to set up this kind of experiment is sketched after this list)
  • The value of \(\sigma\) increases or decreases, all else fixed
  • The values in \(\betav\) increase or decrease, all else fixed
  • \(P\) increases or decreases (and \(\betav\) changes with it)
  • The distribution of the residuals changes (e.g., try something other than a normal)
  • You draw new values for the training set (but keep the test set fixed)
  • You draw new values for the test set (but keep the training set fixed)
  • Pick something else that you find interesting to vary!
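
As an illustration, here is a sketch of the first experiment (varying \(N\) with all else fixed). The helper ComputeCoverage is a hypothetical name chosen for this sketch, not something you are required to use:

# Hypothetical helper: simulate one train/test split and return the test-set
# coverage of the approximate 80% predictive interval.
ComputeCoverage <- function(n_train, n_test, cov_mat, beta, sigma) {
    x_train <- SimulateRegressors(n_train, cov_mat)
    y_train <- SimulateResponses(x_train, beta, sigma)
    betahat <- GetBetahat(x_train, y_train)
    sigmahat <- GetSigmahat(x_train, y_train)

    x_test <- SimulateRegressors(n_test, cov_mat)
    y_test <- SimulateResponses(x_test, beta, sigma)
    yhat <- as.vector(x_test %*% betahat)
    mean(abs(y_test - yhat) <= qnorm(0.9) * sigmahat)
}

cov_mat <- DrawCovMat(3)
sapply(c(10, 100, 1000, 10000), function(n) {
    ComputeCoverage(n, 500, cov_mat, beta=c(1, -2, 0.5), sigma=2.0)
})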