@misc{yang2016revisiting,
abstract = {We present a semi-supervised learning framework based on graph embeddings.
Given a graph between instances, we train an embedding for each instance to
jointly predict the class label and the neighborhood context in the graph. We
develop both transductive and inductive variants of our method. In the
transductive variant of our method, the class labels are determined by both the
learned embeddings and input feature vectors, while in the inductive variant,
the embeddings are defined as a parametric function of the feature vectors, so
predictions can be made on instances not seen during training. On a large and
diverse set of benchmark tasks, including text classification, distantly
supervised entity extraction, and entity classification, we show improved
performance over many of the existing models.},
added-at = {2016-11-28T14:29:13.000+0100},
author = {Yang, Zhilin and Cohen, William W. and Salakhutdinov, Ruslan},
biburl = {https://www.bibsonomy.org/bibtex/22f275687838f4ac93bab1e6cf96f8ca5/machinelearning},
description = {[1603.08861] Revisiting Semi-Supervised Learning with Graph Embeddings},
interhash = {cf8907a5a3b4e679d9e550d999d5f798},
intrahash = {2f275687838f4ac93bab1e6cf96f8ca5},
keywords = {ml proposal tau},
note = {cite arxiv:1603.08861. Comment: ICML 2016},
timestamp = {2016-11-28T14:29:13.000+0100},
title = {Revisiting Semi-Supervised Learning with Graph Embeddings},
url = {http://arxiv.org/abs/1603.08861},
year = 2016
}