Magging: maximin aggregation for inhomogeneous large-scale data

Peter Bühlmann and Nicolai Meinshausen (2014). arXiv:1409.2638. Comment: 15 pages, 3 figures.

Abstract

Large-scale data analysis poses both statistical and computational problems which need to be addressed simultaneously. A solution is often straightforward if the data are homogeneous: one can use classical ideas of subsampling and mean aggregation to get a computationally efficient solution with acceptable statistical accuracy, where the aggregation step simply averages the results obtained on distinct subsets of the data. However, if the data exhibit inhomogeneities (and typically they do), the same approach will be inadequate, as it will be unduly influenced by effects that are not persistent across all the data due to, for example, outliers or time-varying effects. We show that a tweak to the aggregation step can produce an estimator of effects which are common to all data, and hence interesting for interpretation and often leading to better prediction than pooled effects.
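The homogeneous baseline described in the abstract (subsample the data, estimate on each subset, average the results) is easy to illustrate. A minimal sketch in Python, assuming plain OLS on synthetic data; all variable names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Homogeneous case: a single linear model generates all the data.
n, p, n_groups = 3000, 5, 10
beta_true = rng.normal(size=p)
X = rng.normal(size=(n, p))
y = X @ beta_true + rng.normal(scale=0.5, size=n)

# Subsample: split into disjoint subsets, fit OLS on each subset,
# then simply average the coefficient estimates (mean aggregation).
betas = [np.linalg.lstsq(Xg, yg, rcond=None)[0]
         for Xg, yg in zip(np.array_split(X, n_groups),
                           np.array_split(y, n_groups))]
beta_mean = np.mean(betas, axis=0)
print(np.round(beta_mean - beta_true, 3))  # near zero: averaging works here
```

With inhomogeneous data the same uniform average mixes in effects that hold only in some subsets; the paper's proposal changes only the aggregation weights.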

Description

Main ideas for large-scale data:

- efficient algorithms are needed;
- the data are not i.i.d., so methods for inhomogeneous data are necessary.

Intuition: keep only those effects that are similar in all groups (observed and unobserved), while group-specific effects are "averaged away" by the aggregation step (see the sketch below).
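As I read the paper, the "tweak to the aggregation step" replaces the uniform average with convex weights chosen so that the aggregated fitted values have minimal norm: effects that vary in sign across groups cancel, while effects shared by all groups survive. A rough sketch, using a generic SciPy solver in place of a dedicated quadratic-programming routine; the toy data and all names are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

def magging_weights(F):
    """Convex weights w minimizing ||F @ w||_2 over the simplex.

    Each column of F holds the fitted values X @ beta_g of one
    group-wise estimator, evaluated on a common design matrix X.
    """
    G = F.shape[1]
    H = F.T @ F
    res = minimize(lambda w: w @ H @ w,
                   x0=np.full(G, 1.0 / G),
                   method="SLSQP",
                   bounds=[(0.0, 1.0)] * G,
                   constraints=({"type": "eq",
                                 "fun": lambda w: np.sum(w) - 1.0},))
    return res.x

# Toy inhomogeneous data: one effect is common to every group,
# the remaining effects are group-specific (sign and size vary).
rng = np.random.default_rng(1)
n, p, G = 200, 3, 5
X = rng.normal(size=(n, p))
common = np.array([1.0, 0.0, 0.0])
betas = []
for _ in range(G):
    b_g = common + np.concatenate(([0.0], rng.normal(scale=2.0, size=p - 1)))
    y_g = X @ b_g + rng.normal(scale=0.5, size=n)
    betas.append(np.linalg.lstsq(X, y_g, rcond=None)[0])

B = np.column_stack(betas)   # p x G matrix of group-wise estimates
w = magging_weights(X @ B)   # magging: reweight instead of averaging
print(np.round(B @ w, 2))    # shared effect survives; group-specific ones are damped
```

The only difference from mean aggregation is the weights: the uniform 1/G is replaced by the solution of a small quadratic program over the simplex, so the computational cost of the subsample-and-aggregate scheme is essentially unchanged.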
