Show simple item record

dc.contributor.author: Olsen, Lars Henry Berge
dc.contributor.author: Glad, Ingrid Kristine
dc.contributor.author: Jullum, Martin
dc.contributor.author: Aas, Kjersti
dc.date.accessioned: 2023-03-03T15:20:04Z
dc.date.available: 2023-03-03T15:20:04Z
dc.date.created: 2022-08-15T10:26:26Z
dc.date.issued: 2022
dc.identifier.citation: Journal of machine learning research. 2022, 23 (213), 1-51. [en_US]
dc.identifier.issn: 1532-4435
dc.identifier.uri: https://hdl.handle.net/11250/3055849
dc.description.abstract: Shapley values are today extensively used as a model-agnostic explanation framework to explain complex predictive machine learning models. Shapley values have desirable theoretical properties and a sound mathematical foundation in the field of cooperative game theory. Precise Shapley value estimates for dependent data rely on accurate modeling of the dependencies between all feature combinations. In this paper, we use a variational autoencoder with arbitrary conditioning (VAEAC) to model all feature dependencies simultaneously. We demonstrate through comprehensive simulation studies that our VAEAC approach to Shapley value estimation outperforms the state-of-the-art methods for a wide range of settings for both continuous and mixed dependent features. For high-dimensional settings, our VAEAC approach with a non-uniform masking scheme significantly outperforms competing methods. Finally, we apply our VAEAC approach to estimate Shapley value explanations for the Abalone data set from the UCI Machine Learning Repository.
dc.language.iso: eng [en_US]
dc.relation.uri: https://www.jmlr.org/papers/volume23/21-1413/21-1413.pdf
dc.rights: Navngivelse-Ikkekommersiell-DelPåSammeVilkår 4.0 Internasjonal
dc.rights.uri: http://creativecommons.org/licenses/by-nc-sa/4.0/deed.no
dc.subject: Kunstig intelligens
dc.subject: Artificial intelligence
dc.subject: Shapley Values
dc.subject: Shapley-verdier
dc.subject: Explainable Artificial Intelligence
dc.subject: Forklarlig kunstig intelligens
dc.title: Using Shapley Values and Variational Autoencoders to Explain Predictive Models with Dependent Mixed Features [en_US]
dc.title.alternative: Using Shapley Values and Variational Autoencoders to Explain Predictive Models with Dependent Mixed Features [en_US]
dc.type: Journal article [en_US]
dc.type: Peer reviewed [en_US]
dc.description.version: publishedVersion
cristin.ispublished: true
cristin.fulltext: original
cristin.qualitycode: 2
dc.identifier.cristin: 2042928
dc.source.journal: Journal of machine learning research [en_US]
dc.source.volume: 23 [en_US]
dc.source.issue: 213 [en_US]
dc.source.pagenumber: 1-51 [en_US]
dc.relation.project: Norges forskningsråd: 237718
dc.subject.nsi: VDP::Statistikk: 412
dc.subject.nsi: VDP::Statistics: 412
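The abstract describes estimating Shapley values by modeling the dependencies between feature combinations and evaluating the model on imputed feature coalitions. As an illustrative sketch only (not the paper's VAEAC implementation), the following Monte Carlo estimator computes Shapley values for a toy model by sampling out-of-coalition features from background data; where the paper samples these features conditionally on the observed ones via a VAEAC, this sketch simply samples them marginally:

```python
import itertools
import math
import random

def shapley_values(f, x, background, n_mc=200, seed=0):
    """Monte Carlo Shapley values for the prediction f(x).

    The contribution function v(S) = E[f(x_S, X_Sbar)] is estimated by
    sampling the features outside coalition S from rows of `background`
    (i.e. treating features as independent; the paper's VAEAC approach
    instead samples them conditionally on x_S to respect dependencies).
    """
    rng = random.Random(seed)
    d = len(x)

    def v(S):
        # Average f over samples where features in S are fixed to x
        # and the remaining features are drawn from the background data.
        total = 0.0
        for _ in range(n_mc):
            z = list(rng.choice(background))
            for j in S:
                z[j] = x[j]
            total += f(z)
        return total / n_mc

    phi = [0.0] * d
    for j in range(d):
        rest = [k for k in range(d) if k != j]
        for r in range(d):
            for S in itertools.combinations(rest, r):
                # Classic Shapley weight |S|!(d-|S|-1)!/d!
                w = math.factorial(r) * math.factorial(d - r - 1) / math.factorial(d)
                phi[j] += w * (v(S + (j,)) - v(S))
    return phi
```

For a linear model and a single all-zero background row, the estimator is deterministic and recovers each feature's own contribution, e.g. `shapley_values(lambda z: z[0] + z[1], [1.0, 2.0], [[0.0, 0.0]])` yields approximately `[1.0, 2.0]`. The exact-coalition loop is exponential in the number of features, which is why practical methods subsample coalitions.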



Except where otherwise noted, this item is licensed under Navngivelse-Ikkekommersiell-DelPåSammeVilkår 4.0 Internasjonal (CC Attribution-NonCommercial-ShareAlike 4.0 International).