Explainable Machine Learning Group Homepage
This is the home of the Explainable Machine Learning research group in the Department for Computer Vision and Machine Learning at the Max Planck Institute for Informatics. Here, we provide an overview as well as the details of our ongoing and past work on explainability in machine learning, including the generation of explanations for black-box models, inherently interpretable neural network approaches, and their applications.
News
10/25 We are happy to announce that this semester we are offering a seminar on Explainable Machine Learning and Machine Learning Approaches for Building Virtual Cell Models, in collaboration with other groups at Saarland University.
10/25 Excited to share the acceptance of FaCT at NeurIPS, which equips neural networks with concept traces that explain the underlying decision-making faithfully to the network's true reasoning. The work was led by associated PhD student Amin.
7/25 Happy to share two acceptances: VITAL at ICCV, led by our PhD student Ada, which enables more realistic feature visualizations for understanding neural network decision making, and a Spotlight at ACL, led by the group of Michael Hedderich at LMU, which supports users in identifying the effects of prompt and model changes.
1/10/25 PHOENIX was highlighted as one of the “Advances in Cancer Biology Research” in 2024 by the US National Cancer Institute (NCI)!
11/28/24 Jonas gave an invited talk on Mechanistic Interpretability of Neural Networks at an XAI Symposium in Mainz (organized within the hessian.AI – Connectom network funding program).
10/6/24 DASH was accepted at NeurIPS 2024! Vancouver, here we come.
6/27/24 Ada joins the team as our very first PhD student. Welcome!
6/10/24 Our lab received an OpenAI grant for API credits as part of the OpenAI Researcher Access Program.
1/1/24 The Explainable Machine Learning Group was founded.