
Linear Model Selection by Cross-Validation

Jun Shao. Journal of the American Statistical Association, 88(422): 486--494 (1993)
DOI: 10.1080/01621459.1993.10476299

Abstract

We consider the problem of selecting a model having the best predictive ability among a class of linear models. The popular leave-one-out cross-validation method, which is asymptotically equivalent to many other model selection methods such as the Akaike information criterion (AIC), the Cp, and the bootstrap, is asymptotically inconsistent in the sense that the probability of selecting the model with the best predictive ability does not converge to 1 as the total number of observations n → ∞. We show that the inconsistency of the leave-one-out cross-validation can be rectified by using a leave-nv-out cross-validation, with nv, the number of observations reserved for validation, satisfying nv/n → 1 as n → ∞. This is a somewhat shocking discovery, because nv/n → 1 is totally opposite to the popular leave-one-out recipe in cross-validation. Motivations, justifications, and discussions of some practical aspects of the use of the leave-nv-out cross-validation method are provided, and results from a simulation study are presented.
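The prescription in the abstract is concrete: repeatedly split the n observations into a construction set of size nc = n − nv and a validation set of size nv, fit each candidate linear model on the construction set, and select the model with the smallest average validation error. Below is a minimal Python sketch of this idea using random splits (a Monte Carlo approximation, since enumerating all size-nv subsets is infeasible). The function name, the encoding of candidate models as column-index tuples, and the specific choice nc ≈ n^(3/4) (one split consistent with nv/n → 1) are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def leave_nv_out_cv(X, y, candidate_models, nv, n_splits=200, seed=0):
    # Monte Carlo leave-nv-out cross-validation for linear model selection.
    # candidate_models: list of column-index tuples, each defining one linear model.
    # Returns the index of the candidate with the smallest average validation MSE.
    rng = np.random.default_rng(seed)
    n = len(y)
    avg_mse = np.zeros(len(candidate_models))
    for _ in range(n_splits):
        perm = rng.permutation(n)
        val, fit = perm[:nv], perm[nv:]  # nv points held out for validation
        for j, cols in enumerate(candidate_models):
            X_fit, X_val = X[np.ix_(fit, cols)], X[np.ix_(val, cols)]
            # Ordinary least squares on the construction set only
            beta, *_ = np.linalg.lstsq(X_fit, y[fit], rcond=None)
            avg_mse[j] += np.mean((y[val] - X_val @ beta) ** 2) / n_splits
    return int(np.argmin(avg_mse))

# Hypothetical usage: the true model uses only columns 0-2 of a 5-column design.
n = 200
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(n), rng.normal(size=(n, 4))])
y = X[:, :3] @ np.array([2.0, 1.0, 0.5]) + rng.normal(size=n)
models = [(0, 1), (0, 1, 2), (0, 1, 2, 3), (0, 1, 2, 3, 4)]
nv = n - round(n ** 0.75)  # construction size ~ n^(3/4), so nv/n -> 1 as n grows
print(models[leave_nv_out_cv(X, y, models, nv)])
```

Note the contrast with the leave-one-out recipe: here most of the data is reserved for validation and only a small construction set is used for fitting, which is exactly the regime the paper shows is needed for consistent selection.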
