Author of the publication

Adversarial Example Defense: Ensembles of Weak Defenses are not Strong.

, , , , and . WOOT, USENIX Association, (2017)

Other publications of authors with the same name

An Attack on InstaHide: Is Private Learning Possible with Instance Encoding?, , , , , , , , and . CoRR, (2020)

Poisoning Web-Scale Training Datasets is Practical., , , , , , , , and . CoRR, (2023)

Publishing Efficient On-device Models Increases Adversarial Vulnerability., , and . CoRR, (2022)

Initialization Matters for Adversarial Transfer Learning., , , , , and . CoRR, (2023)

Identifying and Mitigating the Security Risks of Generative AI., , , , , , , , , and 13 other author(s). CoRR, (2023)

Measuring Forgetting of Memorized Training Examples., , , , , , , , , and 1 other author(s). CoRR, (2022)

Debugging Differential Privacy: A Case Study for Privacy Auditing., , , , , and . CoRR, (2022)

Students Parrot Their Teachers: Membership Inference on Model Distillation., , , , and . CoRR, (2023)

The Secret Sharer: Evaluating and Testing Unintended Memorization in Neural Networks., , , , and . USENIX Security Symposium, pages 267-284. USENIX Association, (2019)

Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples., , and . ICML, volume 80 of Proceedings of Machine Learning Research, pages 274-283. PMLR, (2018)