Constructing Temporal Abstractions Autonomously in Reinforcement Learning

AI Magazine 39 (1): 39--50 (March 2018)


The idea of temporal abstraction, i.e., learning, planning, and representing the world at multiple time scales, has been a constant thread in AI research, spanning sub-fields from classical planning and search to control and reinforcement learning. For example, programming a robot typically involves making decisions over a set of controllers, rather than working at the level of motor torques. While temporal abstraction is a very natural concept, learning such abstractions with no human input has proved quite daunting. In this paper, we present a general architecture, called option-critic, which allows learning temporal abstractions automatically, end-to-end, simply from the agent's experience. This approach supports continual learning and yields interesting qualitative and quantitative results in several tasks.
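To make the notion of a temporal abstraction concrete: in the options framework that option-critic builds on, an option pairs an intra-option policy with a termination condition, and the agent commits to one option across many primitive steps. The sketch below is a minimal, hypothetical illustration of that execution model (the `Option` class, `run_option` helper, and toy environment are assumptions for illustration, not the paper's implementation):

```python
import random


class Option:
    """A temporal abstraction: an intra-option policy plus a termination condition."""

    def __init__(self, policy, termination):
        self.policy = policy            # maps state -> primitive action
        self.termination = termination  # maps state -> probability of terminating


def run_option(env_step, state, option, max_steps=100):
    """Execute a single option until its termination condition fires.

    env_step is a toy transition function (state, action) -> next state.
    Returns the trajectory of visited states.
    """
    trajectory = [state]
    for _ in range(max_steps):
        action = option.policy(state)
        state = env_step(state, action)
        trajectory.append(state)
        if random.random() < option.termination(state):
            break
    return trajectory


# Toy example: integer states, an option that steps right until state >= 3.
go_right = Option(policy=lambda s: 1,
                  termination=lambda s: 1.0 if s >= 3 else 0.0)
traj = run_option(lambda s, a: s + a, 0, go_right)
# traj == [0, 1, 2, 3]
```

In option-critic, both the intra-option policies and the termination probabilities shown here as fixed functions are parameterized and learned end-to-end from the agent's experience.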
