Author of the publication

Scaling up the Randomized Gradient-Free Adversarial Attack Reveals Overestimation of Robustness Using Established Attacks.

Int. J. Comput. Vis., 128 (4): 1028-1046 (2020)


Other publications of authors with the same name

Fast Differentiable Clipping-Aware Normalization and Rescaling. CoRR (2020)

EagerPy: Writing Code That Works Natively with PyTorch, TensorFlow, JAX, and NumPy. CoRR (2020)

Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models. CoRR (2017)

Towards the first adversarially robust neural network model on MNIST. ICLR (Poster), OpenReview.net (2019)

Foolbox: A Python toolbox to benchmark the robustness of machine learning models. arXiv:1707.04131 (2017). Code and examples available at https://github.com/bethgelab/foolbox; documentation available at http://foolbox.readthedocs.io

On Evaluating Adversarial Robustness. CoRR (2019)

Scaling up the randomized gradient-free adversarial attack reveals overestimation of robustness using established attacks. CoRR (2019)

Robust Perception through Analysis by Synthesis. CoRR (2018)

Foolbox v0.8.0: A Python toolbox to benchmark the robustness of machine learning models. CoRR (2017)

Comparing deep neural networks against humans: object recognition when the signal gets weaker. CoRR (2017)