Author of the publication

Using Trajectory Data to Improve Bayesian Optimization for Reinforcement Learning.

Aaron Wilson, Alan Fern, and Prasad Tadepalli. J. Mach. Learn. Res., 15 (1): 253-282 (2014)


Other publications of authors with the same name

Hierarchical Explanation-Based Reinforcement Learning., and . ICML, page 358-366. Morgan Kaufmann, (1997)

Dependent Gated Reading for Cloze-Style Question Answering., , , and . COLING, page 3330-3345. Association for Computational Linguistics, (2018)

A Bayesian Approach for Policy Learning from Trajectory Preference Queries., , and . NIPS, page 1142-1150. (2012)

Interpreting Recurrent and Attention-Based Neural Models: a Case Study on Natural Language Inference., , and . EMNLP, page 4952-4957. Association for Computational Linguistics, (2018)

Conservative Agency., , and . AISafety@IJCAI, volume 2419 of CEUR Workshop Proceedings, CEUR-WS.org, (2019)

A Formal Framework for Speedup Learning from Problems and Solutions., and . J. Artif. Intell. Res., (1996)

Exploiting Causal Independence in Markov Logic Networks: Combining Undirected and Directed Models., , , , , and . StarAI@AAAI, volume WS-10-06 of AAAI Technical Report, AAAI, (2010)

Hindsight Optimization for Hybrid State and Action MDPs., , , , and . AAAI, page 3790-3796. AAAI Press, (2017)

HC-Search for Multi-Label Prediction: An Empirical Study., , , , and . AAAI, page 1795-1801. AAAI Press, (2014)

Learning Scripts as Hidden Markov Models., , , , and . AAAI, page 1565-1571. AAAI Press, (2014)