Many complex control problems require sophisticated solutions that
are not amenable to traditional controller design. Not only is it
difficult to model real-world systems, but often it is unclear what
kind of behavior is required to solve the task. Reinforcement learning
(RL) approaches have made progress by using direct interaction with
the task environment, but have so far not scaled well to large state
spaces and environments that are not fully observable. In recent
years, neuroevolution, the artificial evolution of neural networks,
has had remarkable success in tasks that exhibit these two properties.
In this paper, we compare a neuroevolution method called Cooperative
Synapse Neuroevolution (CoSyNE), which uses cooperative coevolution
at the level of individual synaptic weights, to a broad range of
reinforcement learning algorithms on very difficult versions of the
pole balancing problem that involve large (continuous) state spaces
and hidden state. CoSyNE is shown to be significantly more efficient
and powerful than the other methods on these tasks.
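To make the core idea concrete, the following is a minimal sketch of synapse-level cooperative coevolution in the spirit of CoSyNE, not the authors' reference implementation. It assumes a user-supplied `evaluate(weights)` fitness function; all names and parameters (`cosyne_sketch`, `pop_size`, `mutation_std`, `elite_frac`) are illustrative.

```python
import numpy as np

def cosyne_sketch(evaluate, n_weights, pop_size=40, generations=100,
                  mutation_std=0.1, elite_frac=0.25):
    """Hedged sketch of CoSyNE-style synapse-level cooperative coevolution.

    One subpopulation per synaptic weight: pop[i, j] is the j-th candidate
    value for weight i, so column j forms a complete network weight vector.
    """
    pop = np.random.randn(n_weights, pop_size)

    for _ in range(generations):
        # Evaluate each column as a complete network.
        fitness = np.array([evaluate(pop[:, j]) for j in range(pop_size)])

        # Recombine the best columns (higher fitness assumed better)
        # to replace the worst ones.
        order = np.argsort(fitness)[::-1]
        n_elite = max(2, int(elite_frac * pop_size))
        elite = pop[:, order[:n_elite]]  # fancy indexing copies
        for j in order[n_elite:]:
            parents = elite[:, np.random.choice(n_elite, 2, replace=False)]
            child = np.where(np.random.rand(n_weights) < 0.5,
                             parents[:, 0], parents[:, 1])
            pop[:, j] = child + mutation_std * np.random.randn(n_weights)

        # Permute each subpopulation independently so that individual
        # weights are tried in new combinations next generation: the
        # key cooperative-coevolution step at the synapse level.
        for i in range(n_weights):
            pop[i] = pop[i, np.random.permutation(pop_size)]

    final_fitness = np.array([evaluate(pop[:, j]) for j in range(pop_size)])
    return pop[:, np.argmax(final_fitness)]
```

Because each weight evolves in its own subpopulation, credit for a successful network is shared among the weights that composed it, and the per-row permutation keeps subpopulations from converging on a single collaboration.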