
A brief introduction to weakly supervised learning

Zhi-Hua Zhou. National Science Review, 5(1): 44-53 (August 2017).
DOI: 10.1093/nsr/nwx106

Abstract

Supervised learning techniques construct predictive models by learning from a large number of training examples, where each training example has a label indicating its ground-truth output. Though current techniques have achieved great success, it is noteworthy that in many tasks it is difficult to get strong supervision information like fully ground-truth labels due to the high cost of the data-labeling process. Thus, it is desirable for machine-learning techniques to work with weak supervision. This article reviews some research progress of weakly supervised learning, focusing on three typical types of weak supervision: incomplete supervision, where only a subset of training data is given with labels; inexact supervision, where the training data are given with only coarse-grained labels; and inaccurate supervision, where the given labels are not always ground-truth.
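As a small illustration of the first setting (incomplete supervision), the sketch below hides most ground-truth labels from a toy dataset and lets a semi-supervised learner propagate the few known labels to the unlabeled points. The dataset, the 10% labeling ratio, and the choice of scikit-learn's LabelPropagation are illustrative assumptions, not taken from the paper.

```python
# Toy illustration of incomplete supervision: only a small subset of
# training examples keeps its ground-truth label; the rest are marked -1
# ("label unknown") and a semi-supervised model infers them.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.semi_supervised import LabelPropagation
from sklearn.metrics import accuracy_score

rng = np.random.RandomState(0)
X, y_true = make_classification(n_samples=300, n_features=5,
                                n_informative=3, random_state=0)

# Incomplete supervision: keep labels for roughly 10% of the examples.
y_partial = y_true.copy()
unlabeled = rng.rand(len(y_true)) > 0.10
y_partial[unlabeled] = -1  # scikit-learn's convention for "unlabeled"

# Propagate the few known labels over a similarity graph of all points.
model = LabelPropagation().fit(X, y_partial)
print("accuracy on originally unlabeled points:",
      accuracy_score(y_true[unlabeled], model.predict(X[unlabeled])))
```

Inexact supervision (coarse-grained labels, e.g. one label per bag of instances) and inaccurate supervision (noisy labels) would call for different techniques, such as multi-instance learning or label-noise-robust training, and are not shown in this sketch.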
