Show simple item record

dc.contributor.author: Främling, Kary
dc.contributor.author: Westberg, Marcus
dc.contributor.author: Jullum, Martin
dc.contributor.author: Madhikermi, Manik
dc.contributor.author: Malhi, Avleen Kaur
dc.date.accessioned: 2022-03-15T06:41:51Z
dc.date.available: 2022-03-15T06:41:51Z
dc.date.created: 2022-03-14T12:03:44Z
dc.date.issued: 2021
dc.identifier.citation: Lecture Notes in Computer Science (LNCS). 2021, 12688, 39-54. [en_US]
dc.identifier.issn: 0302-9743
dc.identifier.uri: https://hdl.handle.net/11250/2985131
dc.description.abstract: Different explainable AI (XAI) methods are based on different notions of ‘ground truth’. In order to trust explanations of an AI system, the ground truth has to provide fidelity towards the actual behaviour of that system. An explanation with poor fidelity towards the AI system’s actual behaviour cannot be trusted, no matter how convincing it appears to users. The Contextual Importance and Utility (CIU) method differs from currently popular outcome explanation methods such as Local Interpretable Model-agnostic Explanations (LIME) and Shapley values in several ways. Notably, CIU does not build an intermediate interpretable model as LIME does, and it makes no assumption about linearity or additivity of the feature importance. CIU also introduces the notion of value utility and a definition of feature importance that differs from those of LIME and Shapley values. We argue that LIME and Shapley values actually estimate ‘influence’ (rather than ‘importance’), which combines importance and utility. The paper compares the three methods in terms of the validity of their ground-truth assumptions and their fidelity towards the underlying model through a series of benchmark tasks. The results confirm that LIME results tend to be neither coherent nor stable. CIU and Shapley values give rather similar results when explanations are limited to ‘influence’. However, by separating the ‘importance’ and ‘utility’ elements, CIU can provide more expressive and flexible explanations than LIME and Shapley values.
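For readers unfamiliar with CIU, the following minimal sketch illustrates the idea behind contextual importance (CI) and contextual utility (CU) for a single feature: sweep that feature over its value range while holding the rest of the instance fixed, then compare the reachable output range with the model's overall output range. It assumes a scikit-learn-style regressor with a predict method; the toy data, helper name and default ranges are illustrative assumptions, not the paper's code.

# Minimal sketch of Contextual Importance (CI) and Contextual Utility (CU)
# for a single feature; toy data and function names are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(500, 3))
y = X[:, 0] ** 2 + 0.5 * X[:, 1]              # toy target; feature 2 is irrelevant
model = RandomForestRegressor(random_state=0).fit(X, y)

def ciu_single_feature(model, X_ref, x, j, feature_range=(0.0, 1.0), n_samples=100):
    """Estimate CI and CU of feature j for instance x by sweeping feature j
    over its range while keeping the other features fixed (the 'context')."""
    grid = np.linspace(feature_range[0], feature_range[1], n_samples)
    X_sweep = np.tile(x, (n_samples, 1))
    X_sweep[:, j] = grid
    preds = model.predict(X_sweep)
    cmin, cmax = preds.min(), preds.max()        # output range reachable by varying feature j
    out_all = model.predict(X_ref)
    absmin, absmax = out_all.min(), out_all.max()  # overall output range, approximated from data
    ci = (cmax - cmin) / (absmax - absmin)       # importance: how much feature j can move the output here
    if np.isclose(cmax, cmin):
        return ci, 0.5                           # feature cannot move the output; utility is undefined
    cu = (model.predict(x.reshape(1, -1))[0] - cmin) / (cmax - cmin)  # utility: how favourable the current value is
    return ci, cu

x = np.array([0.8, 0.2, 0.5])
for j in range(3):
    ci, cu = ciu_single_feature(model, X, x, j)
    print(f"feature {j}: CI = {ci:.2f}, CU = {cu:.2f}")

The separation illustrates the abstract's point about expressiveness: CI says how much the output can change when the feature varies in this context, CU says how favourable the instance's current value is within that range, whereas a single 'influence'-style score roughly corresponds to combining the two.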
dc.language.iso: eng [en_US]
dc.rights: Attribution-NonCommercial-ShareAlike 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by-nc-sa/4.0/deed.no
dc.title: Comparison of Contextual Importance and Utility with LIME and Shapley Values [en_US]
dc.type: Journal article [en_US]
dc.type: Peer reviewed [en_US]
dc.description.version: acceptedVersion
cristin.ispublished: true
cristin.fulltext: postprint
cristin.qualitycode: 1
dc.identifier.doi: 10.1007/978-3-030-82017-6_3
dc.identifier.cristin: 2009477
dc.source.journal: Lecture Notes in Computer Science (LNCS) [en_US]
dc.source.volume: 12688 [en_US]
dc.source.pagenumber: 39-54 [en_US]
dc.relation.project: Norges forskningsråd: 237718


Associated file(s)


This item appears in the following collection(s)


Attribution-NonCommercial-ShareAlike 4.0 International
Except where otherwise noted, this item's license is described as Attribution-NonCommercial-ShareAlike 4.0 International