Browsing NR vitenarkiv by Author "Jullum, Martin"
A comparative study of methods for estimating model-agnostic Shapley value explanations
Olsen, Lars Henry Berge; Glad, Ingrid Kristine; Jullum, Martin; Aas, Kjersti (Journal article; Peer reviewed, 2024)
Shapley values originated in cooperative game theory but are extensively used today as a model-agnostic explanation framework to explain predictions made by complex machine learning models in industry and academia. ...

Comparison of Contextual Importance and Utility with LIME and Shapley Values
Främling, Kary; Westberg, Marcus; Jullum, Martin; Madhikermi, Manik; Malhi, Avleen Kaur (Journal article; Peer reviewed, 2021)
Different explainable AI (XAI) methods are based on different notions of ‘ground truth’. In order to trust explanations of AI systems, the ground truth has to provide fidelity towards the actual behaviour of the AI system. ...

Efficient and simple prediction explanations with groupShapley: A practical perspective
Jullum, Martin; Redelmeier, Annabelle Alice; Aas, Kjersti (Journal article; Peer reviewed, 2021)

Et forslag til strømstøtte basert på timespriser [A proposal for an electricity subsidy based on hourly prices]
Jullum, Martin (Others, 2023)
I propose that the electricity subsidy be hour-based, with support covering 90 percent of the spot price above roughly 87 øre. This prevents free electricity, negative prices, and extreme prices alike, at the same cost to the state.

Explaining individual predictions when features are dependent: More accurate approximations to Shapley values
Aas, Kjersti; Jullum, Martin; Løland, Anders (Journal article; Peer reviewed, 2021)
Explaining complex or seemingly simple machine learning models is an important practical problem. We want to explain individual predictions from such models by learning simple, interpretable explanations. Shapley value is ...

Explaining Predictive Models with Mixed Features Using Shapley Values and Conditional Inference Trees
Redelmeier, Annabelle Alice; Jullum, Martin; Aas, Kjersti (Chapter, 2022)
It is becoming increasingly important to explain complex, black-box machine learning models. Although there is an expanding literature on this topic, Shapley values stand out as a sound method to explain predictions from ...

eXplego: An interactive Tool that Helps you Select Appropriate XAI-methods for your Explainability Needs
Jullum, Martin; Sjødin, Jacob; Prabhu, Robindra; Løland, Anders (Journal article; Peer reviewed, 2023)
The growing demand for transparency, interpretability, and explainability of machine learning models and AI systems has fueled the development of methods aimed at understanding the properties and behavior of such models ...

groupShapley: Efficient prediction explanation with Shapley values for feature groups
Jullum, Martin; Redelmeier, Annabelle Alice; Aas, Kjersti (NR-notat, Research report, 2021)

groupShapley: Efficient prediction explanation with Shapley values for feature groups
Jullum, Martin; Redelmeier, Annabelle Alice; Aas, Kjersti (Journal article, 2021)

MCCE: Monte Carlo sampling of valid and realistic counterfactual explanations for tabular data
Redelmeier, Annabelle Alice; Jullum, Martin; Aas, Kjersti; Løland, Anders (Journal article; Peer reviewed, 2024)
We introduce MCCE: Monte Carlo sampling of valid and realistic Counterfactual Explanations for tabular data, a novel counterfactual explanation method that generates on-manifold, actionable and valid counterfactuals by ...

Pairwise local Fisher and naive Bayes: Improving two standard discriminants
Otneim, Håkon; Jullum, Martin; Tjøstheim, Dag Bjarne (Journal article; Peer reviewed, 2020)
The Fisher discriminant is probably the best known likelihood discriminant for continuous data. Another benchmark discriminant is the naive Bayes, which is based on marginals only. In this paper we extend both discriminants ...

shapr: An R-package for explaining machine learning models with dependence-aware Shapley values
Sellereite, Nikolai; Jullum, Martin (Journal article; Peer reviewed, 2020)

Some recent trends in embeddings of time series and dynamic networks
Tjøstheim, Dag Bjarne; Jullum, Martin; Løland, Anders (Journal article; Peer reviewed, 2023)
We give a review of some recent developments in embeddings of time series and dynamic networks. We start out with traditional principal components and then look at extensions to dynamic factor models for time series. Unlike ...

Statistical Embedding: Beyond Principal Components
Tjøstheim, Dag Bjarne; Jullum, Martin; Løland, Anders (Journal article; Peer reviewed, 2023)
There has been an intense recent activity in embedding of very high-dimensional and nonlinear data structures, much of it in the data science and machine learning literature. We survey this activity in four parts. In the ...

Using Shapley Values and Variational Autoencoders to Explain Predictive Models with Dependent Mixed Features
Olsen, Lars Henry Berge; Glad, Ingrid Kristine; Jullum, Martin; Aas, Kjersti (Journal article; Peer reviewed, 2022)
Shapley values are today extensively used as a model-agnostic explanation framework to explain complex predictive machine learning models. Shapley values have desirable theoretical properties and a sound mathematical ...