Author of the publication

From Hope to Safety: Unlearning Biases of Deep Models by Enforcing the Right Reasons in Latent Space.

CoRR (2023)


Other publications of authors with the same name

From attribution maps to human-understandable explanations through Concept Relevance Propagation. Nat. Mac. Intell., 5 (9): 1006-1019 (September 2023)
Understanding the (Extra-)Ordinary: Validating Deep Model Decisions with Prototypical Concept-based Explanations. CoRR (2023)
AttnLRP: Attention-Aware Layer-wise Relevance Propagation for Transformers. CoRR (2024)
From Hope to Safety: Unlearning Biases of Deep Models via Gradient Penalization in Latent Space. AAAI, page 21046-21054. AAAI Press (2024)
From Hope to Safety: Unlearning Biases of Deep Models by Enforcing the Right Reasons in Latent Space. CoRR (2023)
ECQx: Explainability-Driven Quantization for Low-Bit and Sparse DNNs. CoRR (2021)
From "Where" to "What": Towards Human-Understandable Explanations through Concept Relevance Propagation. CoRR (2022)
Revealing Hidden Context Bias in Segmentation and Object Detection through Concept-specific Explanations. CVPR Workshops, page 3829-3839. IEEE (2023)
Reveal to Revise: An Explainable AI Life Cycle for Iterative Bias Correction of Deep Models. MICCAI (2), volume 14221 of Lecture Notes in Computer Science, page 596-606. Springer (2023)
Revealing Hidden Context Bias in Segmentation and Object Detection through Concept-specific Explanations. CoRR (2022)