Attacks on Machine Learning: Lurking Danger for Accountability
K. Auernhammer, R. Tavakoli Kolagari, and M. Zoppelt. Proceedings of the Association for the Advancement of Artificial Intelligence AAAI Workshop on Artificial Intelligence Safety 2019 (2019)
Abstract
It is well known that there is no safety without security. Consequently, a sound investigation of security breaches in Machine Learning (ML) is a prerequisite for addressing any safety concerns. Since attacks on ML systems and their impact on security goals threaten the safety of an ML system, we discuss the impact attacks have on ML models' security goals, which are rarely considered in published scientific papers.
The contribution of this paper is a non-exhaustive list of published attacks on ML models and a categorization of attacks according to their phase (training, after-training) and their impact on security goals. Based on our categorization, we show that not all security goals have yet been considered in the literature, either because they were ignored or because there are no publications on attacks targeting those goals specifically, and that some goals, such as accountability, are difficult to assess. This is probably due to some ML models being black boxes.
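The categorization described in the abstract (attack phase × impacted security goals) can be sketched as a small data model. This is a minimal illustrative sketch, not the paper's actual taxonomy: the goal list below assumes the classic CIA triad plus accountability, and the example attacks are common attack classes, not entries from the paper's table.

```python
from dataclasses import dataclass
from enum import Enum


class Phase(Enum):
    """Life-cycle phase in which an attack is mounted (the paper's split)."""
    TRAINING = "training"
    AFTER_TRAINING = "after-training"


class SecurityGoal(Enum):
    """Illustrative goal set (assumption): CIA triad plus accountability."""
    CONFIDENTIALITY = "confidentiality"
    INTEGRITY = "integrity"
    AVAILABILITY = "availability"
    ACCOUNTABILITY = "accountability"


@dataclass
class Attack:
    name: str
    phase: Phase
    impacted_goals: set


# Hypothetical example entries; names are well-known attack classes,
# not reproduced from the paper.
attacks = [
    Attack("data poisoning", Phase.TRAINING, {SecurityGoal.INTEGRITY}),
    Attack("adversarial examples", Phase.AFTER_TRAINING, {SecurityGoal.INTEGRITY}),
    Attack("model inversion", Phase.AFTER_TRAINING, {SecurityGoal.CONFIDENTIALITY}),
]

# Goals no listed attack impacts: the kind of coverage gap the paper
# observes for goals such as accountability.
uncovered = set(SecurityGoal) - {g for a in attacks for g in a.impacted_goals}
print(sorted(g.value for g in uncovered))
# -> ['accountability', 'availability']
```

Representing the two axes as enums makes the coverage check a simple set difference, which mirrors the paper's conclusion that some goals have no published attacks targeting them.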
%0 Conference Paper
%1 auernhammer2019attacks
%A Auernhammer, Katja
%A Tavakoli Kolagari, Ramin
%A Zoppelt, Markus
%B Proceedings of the Association for the Advancement of Artificial Intelligence AAAI Workshop on Artificial Intelligence Safety 2019
%D 2019
%K kauernh
%T Attacks on Machine Learning: Lurking Danger for Accountability
%X It is well known that there is no safety without security. Consequently, a sound investigation of security breaches in Machine Learning (ML) is a prerequisite for addressing any safety concerns. Since attacks on ML systems and their impact on security goals threaten the safety of an ML system, we discuss the impact attacks have on ML models' security goals, which are rarely considered in published scientific papers.
The contribution of this paper is a non-exhaustive list of published attacks on ML models and a categorization of attacks according to their phase (training, after-training) and their impact on security goals. Based on our categorization, we show that not all security goals have yet been considered in the literature, either because they were ignored or because there are no publications on attacks targeting those goals specifically, and that some goals, such as accountability, are difficult to assess. This is probably due to some ML models being black boxes.
@inproceedings{auernhammer2019attacks,
abstract = {It is well known that there is no safety without security. Consequently, a sound investigation of security breaches in Machine Learning (ML) is a prerequisite for addressing any safety concerns. Since attacks on ML systems and their impact on security goals threaten the safety of an ML system, we discuss the impact attacks have on ML models' security goals, which are rarely considered in published scientific papers.
The contribution of this paper is a non-exhaustive list of published attacks on ML models and a categorization of attacks according to their phase (training, after-training) and their impact on security goals. Based on our categorization, we show that not all security goals have yet been considered in the literature, either because they were ignored or because there are no publications on attacks targeting those goals specifically, and that some goals, such as accountability, are difficult to assess. This is probably due to some ML models being black boxes.},
author = {Auernhammer, Katja and Tavakoli Kolagari, Ramin and Zoppelt, Markus},
biburl = {https://www.bibsonomy.org/bibtex/2380be12dee34e661cbc16a82b10a05cf/baywiss1},
booktitle = {Proceedings of the Association for the Advancement of Artificial Intelligence AAAI Workshop on Artificial Intelligence Safety 2019},
keywords = {kauernh},
title = {Attacks on Machine Learning: Lurking Danger for Accountability},
year = 2019
}