Author of the publication

Formal Guarantees on the Robustness of a Classifier against Adversarial Manipulation.

, and . NIPS, page 2266-2276. (2017)

Please choose a person to relate this publication to

To distinguish between persons with the same name, the academic degree and the title of an important publication will be displayed. You can also use the button next to the name to display some publications already assigned to that person.

 

Other publications of authors with the same name

SGD with Large Step Sizes Learns Sparse Features., , , and . ICML, volume 202 of Proceedings of Machine Learning Research, page 903-925. PMLR, (2023)
RobustBench: a standardized adversarial robustness benchmark., , , , , , , and . NeurIPS Datasets and Benchmarks, (2021)
Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks., , and . CoRR, (2024)
Competition Report: Finding Universal Jailbreak Backdoors in Aligned LLMs., , , , , , and . CoRR, (2024)
Provably robust boosted decision stumps and trees against adversarial attacks., and . NeurIPS, page 12997-13008. (2019)
ARIA: Adversarially Robust Image Attribution for Content Provenance., , , , , , and . CVPR Workshops, page 33-43. IEEE, (2022)
The Effects of Overparameterization on Sharpness-aware Minimization: An Empirical and Theoretical Analysis., , , and . CoRR, (2023)
Long Is More for Alignment: A Simple but Tough-to-Beat Baseline for Instruction Fine-Tuning., , , and . ICML, OpenReview.net, (2024)
Why Do We Need Weight Decay in Modern Deep Learning?, , , and . CoRR, (2023)
Square Attack: a query-efficient black-box adversarial attack via random search., , , and . CoRR, (2019)