Author of the publication

Offline Reinforcement Learning with Pseudometric Learning.

ICML, volume 139 of Proceedings of Machine Learning Research, pages 2307–2318. PMLR, (2021)


Other publications of authors with the same name

Leverage the Average: an Analysis of Regularization in RL. CoRR, (2020)
Leverage the Average: an Analysis of KL Regularization in Reinforcement Learning. NeurIPS, (2020)
Momentum in Reinforcement Learning. AISTATS, volume 108 of Proceedings of Machine Learning Research, pages 2529–2538. PMLR, (2020)
Factually Consistent Summarization via Reinforcement Learning with Textual Entailment Feedback. ACL (1), pages 6252–6272. Association for Computational Linguistics, (2023)
On-Policy Distillation of Language Models: Learning from Self-Generated Mistakes. ICLR, OpenReview.net, (2024)
Deep Conservative Policy Iteration. AAAI, pages 6070–6077. AAAI Press, (2020)
Regularization and Variance-Weighted Regression Achieves Minimax Optimality in Linear MDPs: Theory and Practice. ICML, volume 202 of Proceedings of Machine Learning Research, pages 17135–17175. PMLR, (2023)
Implicitly Regularized RL with Implicit Q-values. AISTATS, volume 151 of Proceedings of Machine Learning Research, pages 1380–1402. PMLR, (2022)
BOND: Aligning LLMs with Best-of-N Distillation. CoRR, (2024)
Offline Reinforcement Learning as Anti-Exploration. CoRR, (2021)