Model-agnostic explanations apply to any machine-learning model. They are often produced by post-hoc interpretation methods at the local level (to explain individual data points), the regional level (to describe subgroups), or the global level (to describe the model with respect to the entire feature space). This special track focuses on interpretation methods that generate model-agnostic explanations for specific machine-learning tasks, including regression, classification, survival analysis, and clustering.
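
To illustrate what "model-agnostic" means in practice, the sketch below applies one and the same local sensitivity routine to two unrelated model classes, relying only on their prediction functions. This is a hedged illustration, not a prescribed method: the helper local_sensitivity and the finite-difference step size are assumptions made for the example.

```python
# A minimal sketch of a model-agnostic, local explanation:
# a finite-difference sensitivity that only needs a predict() function,
# so the identical code works for any fitted model.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge

def local_sensitivity(predict, x, delta=0.1):
    """Perturb each feature of a single instance x by +delta and report
    how much the prediction moves (illustrative helper, not a library API)."""
    x = np.asarray(x, dtype=float)
    base = predict(x.reshape(1, -1))[0]
    effects = []
    for j in range(x.size):
        x_pert = x.copy()
        x_pert[j] += delta
        effects.append(predict(x_pert.reshape(1, -1))[0] - base)
    return np.array(effects)

X, y = make_regression(n_samples=500, n_features=5, noise=0.1, random_state=0)
x_explain = X[0]

# The same explanation code, applied unchanged to two different model classes.
for model in (RandomForestRegressor(random_state=0), Ridge()):
    model.fit(X, y)
    print(type(model).__name__, local_sensitivity(model.predict, x_explain).round(2))
```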

Topics

Model-agnostic local interpretation methods for individual data points
Local surrogate models such as LIME, MOB trees, and Surrogate Locally-Interpretable Models (SLIM); see the LIME-style sketch after this list
Extensions and approximations for game-theoretic model-agnostic explanations (Shapley sampling values, Banzhaf values, SHAP values, SAGE values); see the Shapley-sampling sketch after this list
Global interpretation methods for visualizing/quantifying feature effects (partial dependence plots, accumulated local effect plots); see the partial dependence sketch after this list
Model-agnostic average marginal effects and gradient-based approaches
Global feature importance methods based on loss functions or the variance of model predictions (permutation/conditional feature importance, Sobol indices); see the permutation importance sketch after this list
Global/regional surrogate models for interpretable model distillation
Benchmarks, comparisons, and evaluation metrics for model-agnostic explanations
Statistical inference and confidence intervals for model-agnostic explanations
Pitfalls/limitations of model-agnostic explanations (e.g., extrapolation issues, aggregation bias, disagreement problem, fooling explanations via different sampling strategies)
Model diagnostics for machine learning interpretability (residual analysis, learning curves for explanations)
Use of model-agnostic interpretation methods for model auditing (distributional shift detection, mitigating bias in model predictions, sensitivity analysis)
Extensions of model-agnostic explanations to understand AutoML systems and hyperparameter optimization (hyperparameter effects, hyperparameter importance)
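
To make the local-surrogate topic concrete, here is a minimal LIME-style sketch: perturb an instance, weight the perturbations by proximity, and fit a weighted linear surrogate whose coefficients serve as the local explanation. This is an illustrative simplification, not the reference LIME implementation; the kernel choice and all function names are assumptions.

```python
# Minimal LIME-style local surrogate (sketch): perturb around an instance,
# weight samples by proximity, fit a weighted linear model as the explanation.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import Ridge

def local_surrogate(predict, x, scale, n_samples=2000, kernel_width=1.0, seed=0):
    rng = np.random.default_rng(seed)
    # Perturb the instance with Gaussian noise scaled per feature.
    Z = x + rng.normal(scale=scale, size=(n_samples, x.size))
    y_z = predict(Z)                                   # black-box predictions
    dist = np.linalg.norm((Z - x) / scale, axis=1)     # standardized distance
    weights = np.exp(-(dist ** 2) / (kernel_width ** 2))
    surrogate = Ridge(alpha=1.0).fit(Z, y_z, sample_weight=weights)
    return surrogate.coef_                             # local feature effects

X, y = make_regression(n_samples=1000, n_features=4, noise=0.1, random_state=1)
model = GradientBoostingRegressor(random_state=1).fit(X, y)
coefs = local_surrogate(model.predict, X[0], scale=X.std(axis=0))
print("local surrogate coefficients:", coefs.round(2))
```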
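
For the game-theoretic topic, the following is a minimal Monte-Carlo sketch of Shapley sampling values: it averages each feature's marginal contribution over random feature orderings, switching the feature from a background instance's value to the explained instance's value. It is an approximation for illustration only; the number of iterations and the background-data choice are assumptions.

```python
# Monte-Carlo Shapley sampling values (sketch): for random permutations,
# measure the marginal contribution of each feature when it switches
# from a background instance's value to the explained instance's value.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

def shapley_sampling(predict, x, X_background, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    p = x.size
    phi = np.zeros(p)
    for _ in range(n_iter):
        order = rng.permutation(p)
        z = X_background[rng.integers(len(X_background))]  # random background row
        x_with = z.copy()
        for j in order:
            x_without = x_with.copy()        # feature j still at background value
            x_with[j] = x[j]                 # feature j now at explained value
            phi[j] += (predict(x_with.reshape(1, -1))[0]
                       - predict(x_without.reshape(1, -1))[0])
    return phi / n_iter

X, y = make_regression(n_samples=500, n_features=5, noise=0.1, random_state=2)
model = RandomForestRegressor(random_state=2).fit(X, y)
print("Shapley sampling values:", shapley_sampling(model.predict, X[0], X).round(2))
```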
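
For the feature-effect topic, a minimal partial dependence sketch: for a grid of values of one feature, that feature is overwritten in every row and the model's predictions are averaged. This is a simplification of what dedicated libraries provide; the helper name and grid size are illustrative.

```python
# Partial dependence (sketch): for a grid of values of one feature, overwrite
# that feature in every row and average the model's predictions.
import numpy as np
from sklearn.datasets import make_friedman1
from sklearn.ensemble import RandomForestRegressor

def partial_dependence_1d(predict, X, feature, grid_size=20):
    grid = np.linspace(X[:, feature].min(), X[:, feature].max(), grid_size)
    pd_values = []
    for value in grid:
        X_mod = X.copy()
        X_mod[:, feature] = value          # intervene on one feature
        pd_values.append(predict(X_mod).mean())
    return grid, np.array(pd_values)

X, y = make_friedman1(n_samples=500, random_state=3)
model = RandomForestRegressor(random_state=3).fit(X, y)
grid, pd_curve = partial_dependence_1d(model.predict, X, feature=0)
print(np.column_stack([grid, pd_curve]).round(2))
```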
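
For the loss-based importance topic, a minimal permutation feature importance sketch: the importance of a feature is measured as the increase in loss after shuffling its column, which breaks its association with the target. It is similar in spirit to, but deliberately simpler than, library implementations such as scikit-learn's permutation_importance; the helper name and the choice of mean squared error are assumptions.

```python
# Permutation feature importance (sketch): the increase in loss after
# shuffling one feature column on held-out data.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

def permutation_importance_simple(predict, X, y, n_repeats=10, seed=0):
    rng = np.random.default_rng(seed)
    baseline = mean_squared_error(y, predict(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        losses = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            X_perm[:, j] = rng.permutation(X_perm[:, j])   # destroy feature j
            losses.append(mean_squared_error(y, predict(X_perm)))
        importances[j] = np.mean(losses) - baseline        # loss increase
    return importances

X, y = make_regression(n_samples=800, n_features=5, noise=0.1, random_state=4)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=4)
model = RandomForestRegressor(random_state=4).fit(X_train, y_train)
print(permutation_importance_simple(model.predict, X_test, y_test).round(2))
```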
