Attacks on Machine Learning: Lurking Danger for Accountability

Proceedings of the Association for the Advancement of Artificial Intelligence (AAAI) Workshop on Artificial Intelligence Safety (2019)

Abstract

It is well known that there is no safety without security. Consequently, a sound investigation of security breaches in Machine Learning (ML) is a prerequisite for addressing any safety concerns. Since attacks on ML systems and their impact on security goals threaten the safety of an ML system, we discuss the impact attacks have on ML models' security goals, which are rarely considered in published scientific papers. The contribution of this paper is a non-exhaustive list of published attacks on ML models and a categorization of attacks according to their phase (training, after-training) and their impact on security goals. Based on our categorization, we show that not all security goals have yet been considered in the literature, either because they were ignored or because no publications target those goals specifically, and that some, such as accountability, are difficult to assess. This is probably because some ML models are black boxes.
