
Shortcomings of Top-Down Randomization-Based Sanity Checks for Evaluations of Deep Neural Network Explanations.

CVPR, pages 16143-16152. IEEE, 2023.


Other publications of authors with the same name

PatClArC: Using Pattern Concept Activation Vectors for Noise-Robust Model Debugging. CoRR, 2022.

Quantus: An Explainable AI Toolkit for Responsible Evaluation of Neural Network Explanations. CoRR, 2022.

Beyond Explaining: Opportunities and Challenges of XAI-Based Model Improvement. CoRR, 2022.

Quantus: An Explainable AI Toolkit for Responsible Evaluation of Neural Network Explanations and Beyond. J. Mach. Learn. Res., 2023.

Sanity Checks Revisited: An Exploration to Repair the Model Parameter Randomisation Test. CoRR, 2024.

Shortcomings of Top-Down Randomization-Based Sanity Checks for Evaluations of Deep Neural Network Explanations. CVPR, pages 16143-16152. IEEE, 2023.

Measurably Stronger Explanation Reliability Via Model Canonization. ICIP, pages 516-520. IEEE, 2022.

Beyond explaining: Opportunities and challenges of XAI-based model improvement. Inf. Fusion, April 2023.

Layer-wise Feedback Propagation. CoRR, 2023.

Shortcomings of Top-Down Randomization-Based Sanity Checks for Evaluations of Deep Neural Network Explanations. CoRR, 2022.