Designing Theory-Driven User-Centric Explainable AI

Danding Wang, Qian Yang, Ashraf Abdul, and Brian Y. Lim. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems - CHI '19, ACM Press, (2019)
DOI: 10.1145/3290605.3300831

Abstract

From healthcare to criminal justice, artificial intelligence (AI) is increasingly supporting high-consequence human decisions. This has spurred the field of explainable AI (XAI). This paper seeks to strengthen empirical application-specific investigations of XAI by exploring the theoretical underpinnings of human decision making, drawing from the fields of philosophy and psychology. In this paper, we propose a conceptual framework for building human-centered, decision-theory-driven XAI based on an extensive review across these fields. Drawing on this framework, we identify pathways along which human cognitive patterns drive needs for building XAI and how XAI can mitigate common cognitive biases. We then put this framework into practice by designing and implementing an explainable clinical diagnostic tool for intensive care phenotyping and conducting a co-design exercise with clinicians. Thereafter, we draw insights into how this framework bridges algorithm-generated explanations and human decision-making theories. Finally, we discuss implications for XAI design and development.
