Misc

Functional Regularisation for Continual Learning

Michail K. Titsias, Jonathan Schwarz, Alexander G. de G. Matthews, Razvan Pascanu, and Yee Whye Teh.
(2019). arXiv:1901.11356. Comment: 12 pages, 2 figures.

Abstract

We introduce a framework for continual learning based on Bayesian inference over the function space rather than the parameters of a deep neural network. This method, referred to as functional regularisation for continual learning, avoids forgetting a previous task by constructing and memorising an approximate posterior belief over the underlying task-specific function. To achieve this we rely on a Gaussian process obtained by treating the weights of the last layer of a neural network as random and Gaussian distributed. Then, the training algorithm sequentially encounters tasks and constructs posterior beliefs over the task-specific functions by using inducing point sparse Gaussian process methods. At each step a new task is first learnt and then a summary is constructed consisting of (i) inducing inputs and (ii) a posterior distribution over the function values at these inputs. This summary then regularises learning of future tasks, through Kullback-Leibler regularisation terms, so that catastrophic forgetting of earlier tasks is avoided. We demonstrate our algorithm on classification datasets such as Split-MNIST, Permuted-MNIST and Omniglot.
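As a rough sketch of the objective the abstract describes (the notation below is illustrative and not taken verbatim from the paper): when task $T$ is being learnt, the loss combines a variational fit to the current task's data with Kullback-Leibler terms that anchor the network to the stored summaries of earlier tasks,

\[
\mathcal{L}(\theta) \;=\; -\,\mathbb{E}_{q}\Big[\sum_{i} \log p\big(y_i \mid f_T(x_i)\big)\Big] \;+\; \mathrm{KL}\big(q(w_T)\,\|\,p(w_T)\big) \;+\; \sum_{k<T} \mathrm{KL}\big(q_k(u_k)\,\|\,p_\theta(u_k)\big),
\]

where $w_T$ are the Gaussian-distributed last-layer weights for the current task and, for each earlier task $k$, the memorised summary consists of inducing inputs $Z_k$ together with the posterior $q_k(u_k)$ over the function values $u_k = f_k(Z_k)$; $p_\theta(u_k)$ denotes the Gaussian distribution over those same values implied by the current shared feature extractor with parameters $\theta$, so each KL term penalises drift of the network away from the remembered task-specific function.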

Users

  • @stuart10
