Abstract
Neural Processes (NPs) (Garnelo et al., 2018a;b) approach regression by
learning to map a context set of observed input-output pairs to a distribution
over regression functions. Each function models the distribution of the output
given an input, conditioned on the context. NPs have the benefit of fitting
observed data efficiently with linear complexity in the number of context
input-output pairs, and can learn a wide family of conditional distributions;
they learn predictive distributions conditioned on context sets of arbitrary
size. Nonetheless, we show that NPs suffer a fundamental drawback of
underfitting, giving inaccurate predictions at the inputs of the observed data
they condition on. We address this issue by incorporating attention into NPs,
allowing each input location to attend to the relevant context points for the
prediction. We show that this greatly improves the accuracy of predictions,
results in noticeably faster training, and expands the range of functions that
can be modelled.
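As a rough illustration of the mechanism described above, the sketch below shows how each target input can attend to the context points relevant to its prediction, rather than relying on a single pooled context summary as in the original NP. This is a minimal sketch, not the authors' implementation; the use of plain scaled dot-product attention, the shapes, and the choice of raw inputs as keys and queries are all assumptions for illustration.

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """q: (n_target, d), k: (n_context, d), v: (n_context, d_v)."""
    scores = q @ k.T / np.sqrt(q.shape[-1])            # (n_target, n_context)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over context points
    return weights @ v                                 # (n_target, d_v)

rng = np.random.default_rng(0)
x_context = rng.normal(size=(10, 1))   # observed inputs
y_context = np.sin(x_context)          # observed outputs
x_target = rng.normal(size=(5, 1))     # inputs to predict at

# Hypothetical per-point context representations r_i built from (x_i, y_i);
# here simply a concatenation for illustration.
r_context = np.concatenate([x_context, y_context], axis=-1)   # (10, 2)

# Each target input attends over the context points, producing its own
# context summary instead of one shared mean-pooled vector.
r_target = scaled_dot_product_attention(x_target, x_context, r_context)
print(r_target.shape)  # (5, 2): one context representation per target input
```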