Adversarial Examples from Cryptographic Pseudo-Random Generators

Sébastien Bubeck, Yin Tat Lee, Eric Price, and Ilya Razenshteyn. (2018). arXiv:1811.06418. Comment: 4 pages, no figures.

Abstract

In our recent work (Bubeck, Price, Razenshteyn, arXiv:1805.10204) we argued that adversarial examples in machine learning might be due to an inherent computational hardness of the problem. More precisely, we constructed a binary classification task for which (i) a robust classifier exists, yet (ii) no non-trivial accuracy can be obtained by any efficient algorithm in the statistical query model. In the present paper we significantly strengthen both (i) and (ii): we now construct a task which admits (i') a maximally robust classifier (that is, one tolerating perturbations of size comparable to the size of the examples themselves), and moreover we prove (ii') computational hardness of learning this task under a standard cryptographic assumption.
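
To convey the flavor of such constructions, here is a minimal sketch in Python. It is not the paper's actual task: the PRG is a hypothetical stand-in (SHA-256 in counter mode), and the sampling scheme below only illustrates the generic idea that positive examples lie in the exponentially sparse image of a PRG while negative examples are uniform.

```python
import hashlib
import os
import random

SEED_BYTES = 16   # 128-bit secret seed
OUT_BYTES = 128   # 1024-bit output: the PRG image is an exponentially sparse subset

def prg(seed: bytes) -> bytes:
    """Toy PRG: stretch a short seed via SHA-256 in counter mode.
    A stand-in for any cryptographic PRG; NOT the paper's construction."""
    blocks = []
    counter = 0
    while sum(len(b) for b in blocks) < OUT_BYTES:
        blocks.append(hashlib.sha256(seed + counter.to_bytes(4, "big")).digest())
        counter += 1
    return b"".join(blocks)[:OUT_BYTES]

def sample_example() -> tuple[bytes, int]:
    """Label 1: a string in the PRG's image (a fresh seed, stretched).
    Label 0: a uniformly random string of the same length."""
    if random.random() < 0.5:
        return prg(os.urandom(SEED_BYTES)), 1
    return os.urandom(OUT_BYTES), 0

dataset = [sample_example() for _ in range(10)]
```

The intuition this sketch captures: a computationally unbounded classifier can decide membership in the PRG image by enumerating all 2^128 seeds, whereas any efficient learner achieving non-trivial accuracy would distinguish PRG outputs from uniform randomness, contradicting the PRG's security. Obtaining maximal robustness on top of this (tolerating perturbations comparable to the example size) requires additional structure, which is what the paper supplies.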


Links and resources

  • https://arxiv.org/abs/1811.06418