Bayesian Optimization with Exponential Convergence
K. Kawaguchi, L. Kaelbling, and T. Lozano-Pérez. In NIPS 2015 (Advances in Neural Information Processing Systems 2015). arXiv:1604.01348v1.
Abstract
This paper presents a Bayesian optimization method with exponential
convergence without the need of auxiliary optimization and without the
delta-cover sampling. Most Bayesian optimization methods require auxiliary
optimization: an additional non-convex global optimization problem, which can
be time-consuming and hard to implement in practice. Also, the existing
Bayesian optimization method with exponential convergence requires access to
the delta-cover sampling, which was considered to be impractical. Our approach
eliminates both requirements and achieves an exponential convergence rate.
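To make the "auxiliary optimization" the abstract refers to concrete, here is a minimal sketch of a *standard* Bayesian optimization loop (not the paper's method): fit a Gaussian-process surrogate, then maximize an upper-confidence-bound acquisition function. The inner `argmax` over candidates is the auxiliary optimization step; in general it is a non-convex global problem, and the grid here merely stands in for a global solver. The objective, kernel hyperparameters, and all names are illustrative assumptions, not taken from the paper.

```python
# Sketch of standard Bayesian optimization with a GP surrogate and a UCB
# acquisition. The argmax over the grid plays the role of the "auxiliary
# optimization" discussed in the abstract. All settings are illustrative.
import numpy as np

def gp_posterior(X, y, Xs, length=0.2, noise=1e-4):
    """GP posterior mean/std with an RBF kernel (hyperparameters assumed)."""
    def k(A, B):
        d = A[:, None] - B[None, :]
        return np.exp(-0.5 * (d / length) ** 2)
    K = k(X, X) + noise * np.eye(len(X))   # jitter keeps K invertible
    Ks = k(X, Xs)
    Kinv = np.linalg.inv(K)
    mu = Ks.T @ Kinv @ y
    var = np.diag(k(Xs, Xs) - Ks.T @ Kinv @ Ks)
    return mu, np.sqrt(np.maximum(var, 0.0))

def f(x):
    # Toy black-box objective to maximize on [0, 2] (an assumption).
    return np.sin(3 * x) + x

rng = np.random.default_rng(0)
X = rng.uniform(0, 2, 3)           # initial evaluations
y = f(X)
grid = np.linspace(0, 2, 1001)     # candidate set standing in for a solver

for _ in range(10):
    mu, sd = gp_posterior(X, y, grid)
    ucb = mu + 2.0 * sd            # acquisition function
    x_next = grid[np.argmax(ucb)]  # the auxiliary (inner) optimization
    X = np.append(X, x_next)
    y = np.append(y, f(x_next))

best = X[np.argmax(y)]             # incumbent after the BO loop
```

The paper's contribution, per the abstract, is to achieve an exponential convergence rate while removing both this inner global optimization and the delta-cover sampling required by prior exponentially convergent methods.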
@misc{kawaguchi2016bayesian,
abstract = {This paper presents a Bayesian optimization method with exponential
convergence without the need of auxiliary optimization and without the
delta-cover sampling. Most Bayesian optimization methods require auxiliary
optimization: an additional non-convex global optimization problem, which can
be time-consuming and hard to implement in practice. Also, the existing
Bayesian optimization method with exponential convergence requires access to
the delta-cover sampling, which was considered to be impractical. Our approach
eliminates both requirements and achieves an exponential convergence rate.},
added-at = {2016-04-06T07:16:28.000+0200},
author = {Kawaguchi, Kenji and Kaelbling, Leslie Pack and Lozano-Pérez, Tomás},
biburl = {https://www.bibsonomy.org/bibtex/2aee65d5bce1fa12f1d15a24eb7338569/pixor},
description = {1604.01348v1.pdf},
interhash = {546ca98727adf6564debf833c85e21a3},
intrahash = {aee65d5bce1fa12f1d15a24eb7338569},
keywords = {a_creuser bayesian optimization},
note = {In NIPS 2015 (Advances in Neural Information Processing Systems 2015). arXiv:1604.01348v1},
timestamp = {2016-04-06T07:16:28.000+0200},
title = {Bayesian Optimization with Exponential Convergence},
url = {http://arxiv.org/abs/1604.01348},
year = 2016
}