Abstract
We extend Neural Processes (NPs) to sequential data through Recurrent NPs
(RNPs), a family of conditional state space models. RNPs can learn dynamical
patterns from sequential data and cope with non-stationarity. Given time series
observed at fast real-world time scales but containing slow long-term
variability, RNPs can derive appropriate slow latent time scales. They do so
efficiently by establishing conditional independence among subsequences of the
time series. Our theoretically grounded framework for
stochastic processes expands the applicability of NPs while retaining their
benefits of flexibility, uncertainty estimation and favourable runtime with
respect to Gaussian Processes. We demonstrate that the state spaces learned by
RNPs improve predictive performance on real-world time-series data and in
nonlinear system identification, even when data are limited.
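To make the architecture concrete, the following is a minimal sketch of an RNP-style conditional state space model, written as an illustration of the description above rather than as the authors' implementation: a permutation-invariant NP encoder summarises the context points of each subsequence, a GRU carries a latent state across subsequences (so the state evolves on a slower time scale than the raw observations), and a decoder outputs a predictive Gaussian at target inputs. All module names, hidden sizes, and the choice of a GRU are hypothetical.

# A minimal, illustrative RNP-style model in PyTorch (an assumption, not the
# paper's implementation). Hidden sizes and module structure are hypothetical.
import torch
import torch.nn as nn

class RecurrentNeuralProcess(nn.Module):
    def __init__(self, x_dim=1, y_dim=1, r_dim=64, h_dim=64):
        super().__init__()
        # Per-point encoder: maps (x, y) context pairs to representations.
        self.encoder = nn.Sequential(
            nn.Linear(x_dim + y_dim, r_dim), nn.ReLU(),
            nn.Linear(r_dim, r_dim),
        )
        # Recurrent state: aggregated subsequence representations are fed
        # through a GRU, giving a latent state that evolves across
        # subsequences, i.e. on a slower time scale than the observations.
        self.rnn = nn.GRU(r_dim, h_dim, batch_first=True)
        # Decoder: predicts a Gaussian over y at target inputs, conditioned
        # on the current latent state.
        self.decoder = nn.Sequential(
            nn.Linear(x_dim + h_dim, h_dim), nn.ReLU(),
            nn.Linear(h_dim, 2 * y_dim),  # mean and log-variance
        )

    def forward(self, ctx_x, ctx_y, tgt_x):
        # ctx_x, ctx_y: (batch, n_subseq, n_ctx, dim) context points per subsequence
        # tgt_x:        (batch, n_subseq, n_tgt, x_dim) target inputs per subsequence
        r = self.encoder(torch.cat([ctx_x, ctx_y], dim=-1))
        r = r.mean(dim=2)                # permutation-invariant aggregation per subsequence
        h, _ = self.rnn(r)               # (batch, n_subseq, h_dim): slow latent state
        h = h.unsqueeze(2).expand(-1, -1, tgt_x.shape[2], -1)
        out = self.decoder(torch.cat([tgt_x, h], dim=-1))
        mean, log_var = out.chunk(2, dim=-1)
        return mean, torch.exp(0.5 * log_var)  # predictive mean and std

# Usage: a batch of 4 series split into 8 subsequences, with 10 context and
# 5 target points each.
model = RecurrentNeuralProcess()
ctx_x, ctx_y = torch.randn(4, 8, 10, 1), torch.randn(4, 8, 10, 1)
tgt_x = torch.randn(4, 8, 5, 1)
mean, std = model(ctx_x, ctx_y, tgt_x)   # each of shape (4, 8, 5, 1)

Conditioning each subsequence's prediction only on the recurrent state, rather than on the full history, is what realises the conditional independence among subsequences mentioned in the abstract.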