Toward Unbiased Evaluation of Local Explanations: How to Tackle the Blame Problem

Note: This blog post summarizes my research paper, “The Blame Problem in Evaluating Local Explanations and How to Tackle It”. The article is available on [arXiv](https://arxiv.org/abs/2310.03466) and was accepted at the ECAI XAI^3 Workshop. The workshop website can be accessed here. TL;DR: Evaluation measures of local explanations suffer from what we call the blame problem: the explanation is wrongfully blamed for being unfaithful when the fault can actually be traced back to the black-box model serving as the oracle....

December 14, 2023 · 14 min · Theme PaperMod

Why can't Local Additive Explanations Explain Linear Additive Models?

In this blog post, I provide a summary of my research paper, “Can Local Additive Explanations Explain Linear Additive Models?” The paper was accepted at the journal track of the European Conference on Machine Learning 2023. The list of all accepted papers can be seen here. You can access the paper in the Data Mining and Knowledge Discovery journal. TL;DR: We proposed an evaluation measure for local additive explanations called the Model-Intrinsic Additive Score (MIAS)....

December 13, 2023 · 11 min

Data and Label Shift in LIME explanations

In this post, I concisely summarize my research study, “A study of data and label shift in the LIME framework,” a collaboration with my supervisor, Professor Henrik Boström. The paper was accepted for oral presentation at the NeurIPS 2019 workshop on “Human-Centric Machine Learning.” You can read the paper on arXiv, and the workshop website can be accessed here: https://sites.google.com/view/hcml-2019. Introduction: In 2019, LIME explanations were prevalent [1], but the way LIME operated differed significantly from how earlier explanation methods functioned....

December 12, 2023 · 7 min