Abstract
Large-scale data analysis poses both statistical and computational problems
which need to be addressed simultaneously. A solution is often straightforward
if the data are homogeneous: one can use classical ideas of subsampling and
mean aggregation to get a computationally efficient solution with acceptable
statistical accuracy, where the aggregation step simply averages the results
obtained on distinct subsets of the data. However, if the data exhibit
inhomogeneities (and typically they do), the same approach will be inadequate,
as it will be unduly influenced by effects that are not persistent across all
the data due to, for example, outliers or time-varying effects. We show that a
tweak to the aggregation step can produce an estimator of the effects that are
common to all of the data; such common effects are interesting for interpretation
and often lead to better predictions than pooled effects.
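
For concreteness, the following is a minimal Python sketch of the two aggregation schemes the abstract contrasts: plain mean aggregation of estimates computed on distinct subsets of the data, and one possible "tweak" in which the subset estimates are combined with convex weights chosen to minimise the norm of the aggregated fit, so that effects not shared across subsets are damped. The function names, the ordinary-least-squares subset fits, the consecutive-block splitting, and the particular weighting criterion are illustrative assumptions, not necessarily the paper's exact estimator.

```python
import numpy as np
from scipy.optimize import minimize

def subsample_estimates(X, y, n_groups):
    """Fit ordinary least squares on disjoint, consecutive blocks of the data;
    return a (p, n_groups) matrix whose columns are the per-block estimates.
    Consecutive blocks are used so that, e.g., time-varying effects end up
    concentrated in particular subsets."""
    betas = []
    for chunk in np.array_split(np.arange(X.shape[0]), n_groups):
        beta, *_ = np.linalg.lstsq(X[chunk], y[chunk], rcond=None)
        betas.append(beta)
    return np.column_stack(betas)

def mean_aggregate(B):
    """Classical aggregation: simply average the subset estimates.
    Adequate when the data are homogeneous."""
    return B.mean(axis=1)

def common_effects_aggregate(B, X):
    """Illustrative 'tweaked' aggregation (an assumption, not the authors' exact
    method): choose convex weights over the subset estimates that minimise the
    empirical squared norm of the aggregated fit, which down-weights effects
    that are not persistent across all subsets."""
    n = X.shape[0]
    G = B.shape[1]
    Q = B.T @ (X.T @ X / n) @ B                     # G x G quadratic form
    res = minimize(lambda w: w @ Q @ w,
                   np.full(G, 1.0 / G),
                   method="SLSQP",
                   bounds=[(0.0, 1.0)] * G,
                   constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])
    return B @ res.x

# Usage on synthetic, mildly inhomogeneous data (hypothetical example):
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 5))
beta_common = np.array([1.0, -1.0, 0.5, 0.0, 0.0])
y = X @ beta_common + rng.standard_normal(1000)
y[:200] += 5.0 * X[:200, 3]                         # non-persistent, subset-specific effect
B = subsample_estimates(X, y, n_groups=5)
print("mean aggregation:", np.round(mean_aggregate(B), 2))
print("common effects:  ", np.round(common_effects_aggregate(B, X), 2))
```

In this toy example the mean aggregate retains part of the spurious fourth coefficient, while the norm-minimising convex combination keeps only the coefficients that persist across all subsets.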