When Does Self-supervision Improve Few-shot Learning?
J. Su, S. Maji, and B. Hariharan (2019). arXiv:1910.03560. Comment: ECCV 2020 camera ready. This is an updated version of "Boosting Supervision with Self-Supervision for Few-shot Learning", arXiv:1906.07079.
Abstract
We investigate the role of self-supervised learning (SSL) in the context of
few-shot learning. Although recent research has shown the benefits of SSL on
large unlabeled datasets, its utility on small datasets is relatively
unexplored. We find that SSL reduces the relative error rate of few-shot
meta-learners by 4%-27%, even when the datasets are small and only utilizing
images within the datasets. The improvements are greater when the training set
is smaller or the task is more challenging. Although the benefits of SSL may
increase with larger training sets, we observe that SSL can hurt the
performance when the distributions of images used for meta-learning and SSL are
different. We conduct a systematic study by varying the degree of domain shift
and analyzing the performance of several meta-learners on a multitude of
domains. Based on this analysis we present a technique that automatically
selects images for SSL from a large, generic pool of unlabeled images for a
given dataset that provides further improvements.
@misc{su2019selfsupervision,
abstract = {We investigate the role of self-supervised learning (SSL) in the context of
few-shot learning. Although recent research has shown the benefits of SSL on
large unlabeled datasets, its utility on small datasets is relatively
unexplored. We find that SSL reduces the relative error rate of few-shot
meta-learners by 4%-27%, even when the datasets are small and only utilizing
images within the datasets. The improvements are greater when the training set
is smaller or the task is more challenging. Although the benefits of SSL may
increase with larger training sets, we observe that SSL can hurt the
performance when the distributions of images used for meta-learning and SSL are
different. We conduct a systematic study by varying the degree of domain shift
and analyzing the performance of several meta-learners on a multitude of
domains. Based on this analysis we present a technique that automatically
selects images for SSL from a large, generic pool of unlabeled images for a
given dataset that provides further improvements.},
added-at = {2020-08-01T18:28:56.000+0200},
author = {Su, Jong-Chyi and Maji, Subhransu and Hariharan, Bharath},
biburl = {https://www.bibsonomy.org/bibtex/25d08d5354ec4cb52e17bc526c1d23fd9/analyst},
description = {[1910.03560] When Does Self-supervision Improve Few-shot Learning?},
interhash = {455e0a3415f46028b8e919727de6b91b},
intrahash = {5d08d5354ec4cb52e17bc526c1d23fd9},
keywords = {2019 deep-learning few-shot self-supervised},
note = {arXiv:1910.03560. Comment: ECCV 2020 camera ready. This is an updated version of "Boosting Supervision with Self-Supervision for Few-shot Learning", arXiv:1906.07079},
timestamp = {2020-08-01T18:28:56.000+0200},
title = {When Does Self-supervision Improve Few-shot Learning?},
url = {http://arxiv.org/abs/1910.03560},
year = 2019
}