Abstract
The goal of network representation learning is to learn low-dimensional node embeddings that capture the graph structure and are useful for solving downstream tasks. However, despite the proliferation of such methods, there is currently no study of their robustness to adversarial attacks. We provide the first adversarial vulnerability analysis of the widely used family of methods based on random walks. We derive efficient adversarial perturbations that poison the network structure and degrade both the quality of the embeddings and performance on downstream tasks. We further show that our attacks are transferable: they generalize to many models and are successful even when the attacker is restricted.