Y. Zhao, I. Borovikov, J. Rupert, C. Somers, and A. Beirami. (2019). On Multi-Agent Learning in Team Sports Games. arXiv:1906.10124. Presented at the ICML 2019 Workshop on Imitation, Intent, and Interaction (I3). arXiv admin note: substantial text overlap with arXiv:1903.10545.
Abstract
In recent years, reinforcement learning has been successful in solving video
games from Atari to StarCraft II. However, the end-to-end model-free
reinforcement learning (RL) is not sample efficient and requires a significant
amount of computational resources to achieve superhuman level performance.
Model-free RL is also unlikely to produce human-like agents for playtesting and
gameplaying AI in the development cycle of complex video games. In this paper,
we present a hierarchical approach to training agents with the goal of
achieving human-like style and high skill level in team sports games. While
this is still work in progress, our preliminary results show that the presented
approach holds promise for solving the posed multi-agent learning problem.
Description
[1906.10124] On Multi-Agent Learning in Team Sports Games
%0 Journal Article
%1 zhao2019multiagent
%A Zhao, Yunqi
%A Borovikov, Igor
%A Rupert, Jason
%A Somers, Caedmon
%A Beirami, Ahmad
%D 2019
%K PPO dqn reinforcement_learning
%T On Multi-Agent Learning in Team Sports Games
%U http://arxiv.org/abs/1906.10124
%X In recent years, reinforcement learning has been successful in solving video
games from Atari to StarCraft II. However, the end-to-end model-free
reinforcement learning (RL) is not sample efficient and requires a significant
amount of computational resources to achieve superhuman level performance.
Model-free RL is also unlikely to produce human-like agents for playtesting and
gameplaying AI in the development cycle of complex video games. In this paper,
we present a hierarchical approach to training agents with the goal of
achieving human-like style and high skill level in team sports games. While
this is still work in progress, our preliminary results show that the presented
approach holds promise for solving the posed multi-agent learning problem.
@article{zhao2019multiagent,
abstract = {In recent years, reinforcement learning has been successful in solving video
games from Atari to StarCraft II. However, the end-to-end model-free
reinforcement learning (RL) is not sample efficient and requires a significant
amount of computational resources to achieve superhuman level performance.
Model-free RL is also unlikely to produce human-like agents for playtesting and
gameplaying AI in the development cycle of complex video games. In this paper,
we present a hierarchical approach to training agents with the goal of
achieving human-like style and high skill level in team sports games. While
this is still work in progress, our preliminary results show that the presented
approach holds promise for solving the posed multi-agent learning problem.},
added-at = {2020-01-24T08:43:34.000+0100},
author = {Zhao, Yunqi and Borovikov, Igor and Rupert, Jason and Somers, Caedmon and Beirami, Ahmad},
biburl = {https://www.bibsonomy.org/bibtex/2b0a4ae444558c0ac608cb7fee85fde90/lanteunis},
description = {[1906.10124] On Multi-Agent Learning in Team Sports Games},
interhash = {144b6e82883d873a954d381bf9df8a84},
intrahash = {b0a4ae444558c0ac608cb7fee85fde90},
keywords = {PPO dqn reinforcement_learning},
note = {Presented at the ICML 2019 Workshop on Imitation, Intent, and Interaction (I3). arXiv admin note: substantial text overlap with arXiv:1903.10545},
timestamp = {2020-01-25T13:15:59.000+0100},
title = {On Multi-Agent Learning in Team Sports Games},
url = {http://arxiv.org/abs/1906.10124},
year = 2019
}