
When Does Self-Supervision Help Graph Convolutional Networks?

Yuning You, Tianlong Chen, Zhangyang Wang, and Shiyu Chang. arXiv:2006.09136 [cs, stat], July 2020.

Abstract

Self-supervision as an emerging technique has been employed to train convolutional neural networks (CNNs) for more transferable, generalizable, and robust representation learning of images. Its introduction to graph convolutional networks (GCNs) operating on graph data is, however, rarely explored. In this study, we report the first systematic exploration and assessment of incorporating self-supervision into GCNs. We first elaborate three mechanisms to incorporate self-supervision into GCNs, analyze the limitations of pretraining & finetuning and of self-training, and proceed to focus on multi-task learning. Moreover, we propose to investigate three novel self-supervised learning tasks for GCNs, with theoretical rationales and numerical comparisons. Lastly, we further integrate multi-task self-supervision into graph adversarial training. Our results show that, with properly designed task forms and incorporation mechanisms, self-supervision benefits GCNs in gaining more generalizability and robustness. Our code is available at https://github.com/Shen-Lab/SS-GCNs.
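
The multi-task mechanism the abstract settles on reduces to a simple joint objective: a shared GCN encoder is trained on a weighted sum of the target node-classification loss and a self-supervised pretext loss. Below is a minimal sketch, not the authors' released code; the layer sizes, the feature-masking pretext task (in the spirit of the paper's graph-completion idea), and the weight lam are illustrative assumptions.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class GCNLayer(nn.Module):
        """One graph convolution: H' = A_hat @ H @ W, with A_hat a normalized adjacency."""
        def __init__(self, in_dim, out_dim):
            super().__init__()
            self.lin = nn.Linear(in_dim, out_dim, bias=False)

        def forward(self, a_hat, h):
            return a_hat @ self.lin(h)

    class MultiTaskGCN(nn.Module):
        """Shared encoder with two heads: target classification and a pretext task."""
        def __init__(self, in_dim, hid_dim, n_classes):
            super().__init__()
            self.enc = GCNLayer(in_dim, hid_dim)          # shared representation
            self.cls_head = GCNLayer(hid_dim, n_classes)  # target task head
            self.ss_head = GCNLayer(hid_dim, in_dim)      # pretext head: reconstruct features

        def forward(self, a_hat, x):
            h = F.relu(self.enc(a_hat, x))
            return self.cls_head(a_hat, h), self.ss_head(a_hat, h)

    def multitask_loss(model, a_hat, x, y, train_mask, mask_idx, lam=0.5):
        # Assumed pretext task: zero out the features of nodes in mask_idx,
        # then reconstruct them from the graph context.
        x_in = x.clone()
        x_in[mask_idx] = 0.0
        logits, x_rec = model(a_hat, x_in)
        loss_cls = F.cross_entropy(logits[train_mask], y[train_mask])
        loss_ss = F.mse_loss(x_rec[mask_idx], x[mask_idx])
        return loss_cls + lam * loss_ss

    # One training step (a_hat: dense normalized adjacency, x: node features):
    #   loss = multitask_loss(model, a_hat, x, y, train_mask, mask_idx)
    #   loss.backward(); optimizer.step()

Because both heads share the encoder, gradients from the pretext loss regularize the representation used by the target task without a separate pretraining stage, which is the property the abstract credits for the gains in generalizability and robustness.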
