When Does Self-supervision Improve Few-shot Learning?
Jong-Chyi Su1, Subhransu Maji1, Bharath Hariharan2
UMass Amherst1, Cornell University2

We investigate the role of self-supervised learning (SSL) in the context of few-shot learning. Although recent research has shown the benefits of SSL on large unlabeled datasets, its utility on small datasets is relatively unexplored. We find that SSL reduces the relative error rate of few-shot meta-learners by 4%-27%, even when the datasets are small and SSL uses only the images within those datasets. The improvements are greater when the training set is smaller or the task is more challenging. Although the benefits of SSL may increase with larger training sets, we observe that SSL can hurt performance when the distributions of images used for meta-learning and SSL are different. We conduct a systematic study by varying the degree of domain shift and analyzing the performance of several meta-learners on a multitude of domains. Based on this analysis, we present a technique that automatically selects images for SSL from a large, generic pool of unlabeled images for a given dataset, yielding further improvements.

   

Publication

When Does Self-supervision Improve Few-shot Learning?
Jong-Chyi Su, Subhransu Maji, Bharath Hariharan, ECCV, 2020.
arXiv link

   

Code

GitHub link

   

Presentation

Click here for slides (longer version)

   

Self-supervision Improves Few-shot Learning

Combining supervised and self-supervised losses for few-shot learning. Self-supervised tasks such as jigsaw puzzle or rotation prediction act as a data-dependent regularizer for the shared feature backbone. Our work investigates how the performance on the target task domain (Ds) is impacted by the choice of the domain used for self-supervision (Dss).
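
To make the setup concrete, here is a minimal PyTorch-style sketch of the combined objective, assuming a shared feature backbone with a supervised classification head and a rotation-prediction head. The module and function names below are illustrative, not the paper's released code:

```python
import torch.nn as nn
import torch.nn.functional as F

class SharedBackboneModel(nn.Module):
    """Shared feature backbone with a supervised head and an SSL head."""
    def __init__(self, backbone, feat_dim, num_classes, num_ssl_classes=4):
        super().__init__()
        self.backbone = backbone                              # shared feature extractor
        self.cls_head = nn.Linear(feat_dim, num_classes)      # supervised task
        self.ssl_head = nn.Linear(feat_dim, num_ssl_classes)  # e.g., 4 rotations

def combined_loss(model, x_sup, y_sup, x_ssl, y_ssl):
    """Total loss = supervised loss + self-supervised loss over the shared backbone."""
    loss_sup = F.cross_entropy(model.cls_head(model.backbone(x_sup)), y_sup)
    loss_ssl = F.cross_entropy(model.ssl_head(model.backbone(x_ssl)), y_ssl)
    return loss_sup + loss_ssl
```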

Benefits of SSL for few-shot learning tasks. We show the accuracy of the ProtoNet baseline when trained with different SSL tasks. The jigsaw task improves 5-way 5-shot classification accuracy across datasets, and combining SSL tasks can be beneficial on some of them. Here SSL was performed on images within the base classes only. A sketch of how one such SSL task is constructed follows.
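
For rotation prediction, the self-supervised batch and labels can be built directly from unlabeled images. Below is a sketch; `make_rotation_batch` is a hypothetical helper, not part of the released code:

```python
import torch

def make_rotation_batch(images):
    """Build the 4-way rotation-prediction task from an unlabeled batch.

    images: (B, C, H, W) tensor. Each image is rotated by 0/90/180/270
    degrees; the rotation index serves as the self-supervised label.
    """
    rotated, labels = [], []
    for k in range(4):  # rotate by k * 90 degrees
        rotated.append(torch.rot90(images, k, dims=(2, 3)))
        labels.append(torch.full((images.size(0),), k, dtype=torch.long))
    return torch.cat(rotated), torch.cat(labels)
```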

Benefits of SSL for harder few-shot learning tasks. SSL remains effective even on smaller datasets, and its relative benefits are larger on these harder tasks.

   

Effect of Domain Shift

Effect of size and domain shift. Left: For SSL, more unlabeled data from the same domain improves the meta-learner's performance. Right: Replacing a fraction (x-axis) of the images with images from other domains makes SSL less effective.

   

Selecting Images for Self-supervision

Domain selection for self-supervision. We first train a domain classifier using Ds and (a subset of) Dp, then use the classifier's predictions to select images from Dp for self-supervision. A sketch of this selection step follows.
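
Below is a minimal sketch of the selection step, assuming pre-extracted image features; the classifier choice and function names here are illustrative. Ranking pool images by the classifier's probability p is equivalent to ranking by the importance weight p/(1-p), since the weight is monotone in p:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def select_images_for_ssl(feats_ds, feats_dp, num_select):
    """Select images from a generic pool Dp that resemble the target domain Ds.

    feats_ds: (n, d) features of target-domain images (label 1)
    feats_dp: (m, d) features of generic-pool images (label 0)
    Returns indices into Dp of the num_select highest-weight images.
    """
    X = np.concatenate([feats_ds, feats_dp])
    y = np.concatenate([np.ones(len(feats_ds)), np.zeros(len(feats_dp))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)  # binary domain classifier

    # p(domain = Ds | x) for each pool image; take the top-k by probability,
    # which matches ranking by the importance weight p / (1 - p).
    p = clf.predict_proba(feats_dp)[:, 1]
    return np.argsort(-p)[:num_select]
```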

Effectiveness of selected images for SSL. With random selection, the extra unlabeled data often hurts performance, whereas images sampled using the importance weights improve performance on all six datasets.

   

Acknowledgements

This project is supported in part by NSF #1749833 and a DARPA LwLL grant. Our experiments were performed on the University of Massachusetts Amherst GPU cluster obtained under the Collaborative Fund managed by the Massachusetts Technology Collaborative.
