Least squares linear regression is one of the oldest and most widely used data
analysis tools. Although the theoretical analysis of the ordinary least squares
(OLS) estimator is just as old, several fundamental questions are yet to be
answered. Suppose regression observations
$(X_1,Y_1),\ldots,(X_n,Y_n)\in\mathbb{R}^d\times\mathbb{R}$ (not necessarily
independent) are available. Some of the questions we deal with are as follows:
under what conditions does the OLS estimator converge, and what is the limit?
What happens if the dimension is allowed to grow with $n$? What happens if the
observations are dependent, with dependence possibly strengthening with $n$?
How does one do statistical inference under these kinds of misspecification?
What happens to the OLS estimator under variable selection? How does one do
inference under misspecification and variable selection?
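For reference (the notation $\hat\beta_n$ and $\beta^\star$ below is ours, not the abstract's): the OLS estimator and, under i.i.d. sampling with $\mathbb{E}[XX^\top]$ invertible, its standard population target are
$$\hat\beta_n = \Bigl(\sum_{i=1}^{n} X_i X_i^\top\Bigr)^{-1}\sum_{i=1}^{n} X_i Y_i, \qquad \beta^\star = \bigl(\mathbb{E}[XX^\top]\bigr)^{-1}\mathbb{E}[XY].$$
Under misspecification, $\beta^\star$ is the coefficient vector of the best linear approximation to $\mathbb{E}[Y \mid X]$, which is the natural candidate limit in the first question above.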
We answer all the questions raised above with one simple deterministic
inequality that holds for any set of observations and any sample size. This
implies that all our results are finite-sample (non-asymptotic) in nature. In
the end, one only needs to bound certain random quantities under specific
settings of interest to get concrete rates, and we derive these bounds for the
case of independent observations. In particular, the problem of inference after
variable selection is studied, for the first time, when $d$, the number of
covariates, is allowed to grow (almost exponentially) with the sample size $n$.
We provide comments on the ``right'' statistic to consider for inference under
variable selection and on the efficient computation of quantiles.
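As a concrete, minimal illustration of misspecification-robust OLS inference of the kind discussed above, here is a sketch in numpy. The sandwich (Huber--White) covariance it computes is a standard construction for inference under misspecification, not necessarily the exact statistic the paper recommends.

import numpy as np

def ols_sandwich(X, y):
    """OLS fit with misspecification-robust (sandwich) standard errors.
    X: (n, d) design matrix, y: (n,) response vector."""
    XtX = X.T @ X
    beta = np.linalg.solve(XtX, X.T @ y)    # OLS estimator
    resid = y - X @ beta
    # "Meat" of the sandwich: sum_i e_i^2 x_i x_i^T (HC0, no small-sample correction)
    meat = (X * resid[:, None] ** 2).T @ X
    bread_inv = np.linalg.inv(XtX)
    cov = bread_inv @ meat @ bread_inv      # sandwich covariance of beta-hat
    return beta, np.sqrt(np.diag(cov))

# Toy usage: a linear fit to a nonlinear truth, so the linear model is misspecified.
rng = np.random.default_rng(0)
n, d = 500, 3
X = rng.normal(size=(n, d))
y = X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.normal(size=n)
beta, se = ols_sandwich(X, y)
print(beta, se)

Here beta estimates the best linear approximation coefficients, and the sandwich standard errors remain valid even though the conditional mean is not linear in X.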
@article{kuchibhotla2019linear,
author = {Kuchibhotla, Arun K. and Brown, Lawrence D. and Buja, Andreas and Cai, Junhui},
note = {arXiv:1910.06386},
title = {All of Linear Regression},
url = {http://arxiv.org/abs/1910.06386},
year = 2019
}