Explaining the data or explaining a model? Shapley values that uncover non-linear dependencies.

CoRR, (2020)

Other publications of authors with the same name

AutoGCN - Towards Generic Human Activity Recognition with Neural Architecture Search. CoRR, (2024)

Reinforcement Learning in an Adaptable Chess Environment for Detecting Human-understandable Concepts. CoRR, (2022)

Causal versus Marginal Shapley Values for Robotic Lever Manipulation Controlled using Deep Reinforcement Learning. ACC, pages 2683-2690, IEEE, (2022)

Identifying Important Proteins in Meibomian Gland Dysfunction with Explainable Artificial Intelligence. CBMS, pages 204-209, IEEE, (2023)

Shapley values for feature selection: The good, the bad, and the axioms. CoRR, (2021)

Causal versus Marginal Shapley Values for Robotic Lever Manipulation Controlled using Deep Reinforcement Learning. CoRR, (2021)

Model independent feature attributions: Shapley values that uncover non-linear dependencies. PeerJ Comput. Sci., (2021)

Model tree methods for explaining deep reinforcement learning agents in real-time robotic applications. Neurocomputing, (2023)

Explainability methods for machine learning systems for multimodal medical datasets: research proposal. MMSys, pages 347-351, ACM, (2022)

The social dilemma in AI development and why we have to solve it. CoRR, (2021)