Robust guarantees for learning an autoregressive filter
H. Lee, and C. Zhang. (2019). cite arxiv:1905.09897. Comment: 27 pages.
Abstract
The optimal predictor for a linear dynamical system (with hidden state and
Gaussian noise) takes the form of an autoregressive linear filter, namely the
Kalman filter. However, a fundamental problem in reinforcement learning and
control theory is to make optimal predictions in an unknown dynamical system.
To this end, we take the approach of directly learning an autoregressive filter
for time-series prediction under unknown dynamics. Our analysis differs from
previous statistical analyses in that we regress not only on the inputs to the
dynamical system, but also the outputs, which is essential to dealing with
process noise. The main challenge is to estimate the filter under worst case
input (in $\mathcal H_\infty$ norm), for which we use an $L^\infty$-based
objective rather than ordinary least-squares. For learning an autoregressive
model, our algorithm has optimal sample complexity in terms of the rollout
length, which does not seem to be attained by naive least-squares.
Description
[1905.09897] Robust guarantees for learning an autoregressive filter
%0 Conference Paper
%1 lee2019robust
%A Lee, Holden
%A Zhang, Cyril
%D 2019
%K alt2020 autoregressive readings regression robustness
%T Robust guarantees for learning an autoregressive filter
%U http://arxiv.org/abs/1905.09897
%X The optimal predictor for a linear dynamical system (with hidden state and
Gaussian noise) takes the form of an autoregressive linear filter, namely the
Kalman filter. However, a fundamental problem in reinforcement learning and
control theory is to make optimal predictions in an unknown dynamical system.
To this end, we take the approach of directly learning an autoregressive filter
for time-series prediction under unknown dynamics. Our analysis differs from
previous statistical analyses in that we regress not only on the inputs to the
dynamical system, but also the outputs, which is essential to dealing with
process noise. The main challenge is to estimate the filter under worst case
input (in $\mathcal H_\infty$ norm), for which we use an $L^\infty$-based
objective rather than ordinary least-squares. For learning an autoregressive
model, our algorithm has optimal sample complexity in terms of the rollout
length, which does not seem to be attained by naive least-squares.
@inproceedings{lee2019robust,
abstract = {The optimal predictor for a linear dynamical system (with hidden state and
Gaussian noise) takes the form of an autoregressive linear filter, namely the
Kalman filter. However, a fundamental problem in reinforcement learning and
control theory is to make optimal predictions in an unknown dynamical system.
To this end, we take the approach of directly learning an autoregressive filter
for time-series prediction under unknown dynamics. Our analysis differs from
previous statistical analyses in that we regress not only on the inputs to the
dynamical system, but also the outputs, which is essential to dealing with
process noise. The main challenge is to estimate the filter under worst case
input (in $\mathcal H_\infty$ norm), for which we use an $L^\infty$-based
objective rather than ordinary least-squares. For learning an autoregressive
model, our algorithm has optimal sample complexity in terms of the rollout
length, which does not seem to be attained by naive least-squares.},
added-at = {2019-11-27T15:13:59.000+0100},
author = {Lee, Holden and Zhang, Cyril},
biburl = {https://www.bibsonomy.org/bibtex/2b6c9cefe28edb232e45dc506fd88c9d5/kirk86},
description = {[1905.09897] Robust guarantees for learning an autoregressive filter},
interhash = {79d153cf40d9874ed549aa9b9bf14002},
intrahash = {b6c9cefe28edb232e45dc506fd88c9d5},
keywords = {alt2020 autoregressive readings regression robustness},
note = {cite arxiv:1905.09897. Comment: 27 pages},
timestamp = {2019-11-27T15:13:59.000+0100},
title = {Robust guarantees for learning an autoregressive filter},
url = {http://arxiv.org/abs/1905.09897},
year = 2019
}