Evolving Keepaway Soccer Players through Task Decomposition
S. Whiteson, N. Kohl, R. Miikkulainen, and P. Stone. In GECCO'03: Proc. 5th Genetic and Evolutionary Computation Conf., volume 2723 of LNCS, page 201. Chicago, IL: Springer, 2003.
Abstract
In some complex control tasks, learning a direct mapping from an agent's
sensors to its actuators is very difficult. For such tasks, decomposing
the problem into more manageable components can make learning feasible.
In this paper, we provide a task decomposition, in the form of a
decision tree, for one such task. We investigate two different methods
of learning the resulting subtasks. The first approach, layered learning,
trains each component sequentially in its own training environment,
aggressively constraining the search. The second approach, coevolution,
learns all the subtasks simultaneously from the same experiences
and puts few restrictions on the learning algorithm. We empirically
compare these two training methodologies using neuro-evolution, a
machine learning algorithm that evolves neural networks. Our experiments,
conducted in the domain of simulated robotic soccer keepaway, indicate
that neuro-evolution can learn effective behaviors and that the less
constrained coevolutionary approach outperforms the sequential approach.
These results provide new evidence of coevolution's utility and suggest
that solution spaces should not be over-constrained when supplementing
the learning of complex tasks with human knowledge.
@inproceedings{Whiteson:2003:gecco,
abstract = {In some complex control tasks, learning a direct mapping from an agent's
sensors to its actuators is very difficult. For such tasks, decomposing
the problem into more manageable components can make learning feasible.
In this paper, we provide a task decomposition, in the form of a
decision tree, for one such task. We investigate two different methods
of learning the resulting subtasks. The first approach, layered learning,
trains each component sequentially in its own training environment,
aggressively constraining the search. The second approach, coevolution,
learns all the subtasks simultaneously from the same experiences
and puts few restrictions on the learning algorithm. We empirically
compare these two training methodologies using neuro-evolution, a
machine learning algorithm that evolves neural networks. Our experiments,
conducted in the domain of simulated robotic soccer keepaway, indicate
that neuro-evolution can learn effective behaviors and that the less
constrained coevolutionary approach outperforms the sequential approach.
These results provide new evidence of coevolution's utility and suggest
that solution spaces should not be over-constrained when supplementing
the learning of complex tasks with human knowledge.},
address = {Chicago, IL},
author = {Whiteson, Shimon and Kohl, Nate and Miikkulainen, Risto Pekka and Stone, Peter Herald},
biburl = {https://www.bibsonomy.org/bibtex/294b4ef031b5b5d082a45268e03b8e378/krevelen},
booktitle = {GECCO'03: Proc. 5th Genetic and Evolutionary Computation Conf.},
citeseerurl = {citeseer.ist.psu.edu/whiteson03evolving.html},
crossref = {gecco:2003},
editor = {Cant{\'u}-Paz, Erick and Foster, James A. and Deb, Kalyanmoy and Davis, Lawrence and Roy, Rajkumar and O'Reilly, Una-May and Beyer, Hans-Georg and Standish, Russell K. and Kendall, Graham and Wilson, Stewart W. and Harman, Mark and Wegener, Joachim and Dasgupta, Dipankar and Potter, Mitchell A. and Schultz, Alan C. and Dowsland, Kathryn A. and Jonoska, Natasa and Miller, Julian F.},
isbn = {3-540-40602-6},
pages = 201,
publisher = {Springer},
series = {LNCS},
title = {Evolving Keepaway Soccer Players through Task Decomposition},
volume = 2723,
year = 2003
}