Abstract
Through multi-agent competition, the simple objective of hide-and-seek, and
standard reinforcement learning algorithms at scale, we find that agents create
a self-supervised autocurriculum inducing multiple distinct rounds of emergent
strategy, many of which require sophisticated tool use and coordination. We
find clear evidence of six emergent phases in agent strategy in our
environment, each of which creates a new pressure for the opposing team to
adapt; for instance, agents learn to build multi-object shelters using movable
boxes, which in turn leads agents to discover that they can overcome
obstacles using ramps. We further provide evidence that multi-agent competition
may scale better with increasing environment complexity and leads to behavior
that centers around far more human-relevant skills than other self-supervised
reinforcement learning methods such as intrinsic motivation. Finally, we
propose transfer and fine-tuning as a way to quantitatively evaluate targeted
capabilities, and we compare hide-and-seek agents to both intrinsic motivation
and random initialization baselines in a suite of domain-specific intelligence
tests.