Following the success of Explainable AI (XAI) in generating faithful and understandable explanations of complex ML models, researchers have started to ask how the outcomes of XAI can be used systematically to enable meaningful actions. This includes (1) which types of explanations are most helpful in enabling human experts to make more efficient and more accurate decisions, (2) how human feedback on explanations can be used to systematically improve the robustness and generalization ability of ML models, or to make them comply with specific human norms, and (3) how to enable meaningful action on real-world systems via interpretable ML-based digital twins. This special track will address the technical aspects of building highly informative explanations that form the basis for actionability, the question of how to evaluate and improve the quality of actions derived from XAI, and real-world use cases where those actions lead to improved outcomes.

Topics

Rich, structured explanation techniques (e.g. higher-order, hierarchical, disentangled) designed for actionability
Explanation techniques based on reference points (e.g. counterfactual explanations) designed for actionability (see the counterfactual sketch after this list)
Hybrid methods combining multiple explanation paradigms to improve actionability further
Shapley-, LRP-, or attention-based XAI techniques to help users take meaningful actions in a data-rich environment (see the Shapley sketch after this list)
Explanation-guided dimensionality reduction to facilitate taking action under high data throughput or real-time constraints
Annotation techniques enabling a user to generate actionable explanatory feedback from XAI explanations
Techniques that leverage user explanatory feedback to produce an improved ML model
Explanation-driven pruning or retraining to reduce an ML model’s reliance on spurious correlations (see the retraining sketch after this list)
Explanation-driven disentanglement or retraining to improve an ML model’s robustness to dataset shifts
Attribution methods and counterfactuals combined with digital twins to identify effective actions in real-world systems
Counterfactual and attribution methods combined with reinforcement learning to produce effective control policies in real-world systems
Design of environments (e.g. reduced or simulated environments) for end-to-end evaluation of XAI actionability
Utility-based metrics (e.g. added-value in a deployed setting) for end-to-end evaluation of XAI actionability
Indirect metrics (e.g. informativeness of an explanation, action-response prediction accuracy) for component-wise evaluation of XAI actionability
Datasets (paired with ground-truth simulatable systems) for evaluating actions derived from XAI explanations
Application of actionable XAI in biomedicine, e.g. for acting on molecular pathways
Application of actionable XAI in industry, e.g. for calibration in manufacturing processes
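
To make the scope above concrete, the following sketches illustrate three of the listed topics; the models, variable names, and hyperparameters in them are illustrative assumptions, not prescribed implementations. The first sketch shows a reference-point explanation: a minimal, Wachter-style counterfactual search that perturbs an input by gradient descent until a differentiable classifier flips to a target class, while an L1 term keeps the counterfactual close to the original instance, so the difference x - x_cf can be read as a suggested action.

```python
import torch
import torch.nn.functional as F

def counterfactual(model, x, target_class, lam=0.1, steps=500, lr=0.05):
    """Search for an x_cf close to x (L1) that the model assigns to target_class."""
    x_cf = x.clone().requires_grad_(True)
    # Only x_cf is optimized; the classifier itself stays fixed.
    opt = torch.optim.Adam([x_cf], lr=lr)
    target = torch.tensor([target_class])
    for _ in range(steps):
        opt.zero_grad()
        # Prediction term pushes the class flip; the L1 term keeps the
        # counterfactual sparse and close to the original instance.
        loss = F.cross_entropy(model(x_cf), target) + lam * (x_cf - x).abs().sum()
        loss.backward()
        opt.step()
    return x_cf.detach()

# Toy usage with an (untrained) linear classifier on a single instance.
model = torch.nn.Linear(4, 2)
x = torch.randn(1, 4)
x_cf = counterfactual(model, x, target_class=1)
print(x - x_cf)  # the suggested change, i.e. "what to act on"
```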
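
The second sketch illustrates Shapley-based attribution: exact Shapley values for a single prediction, computed by enumerating all feature coalitions and marginalizing out-of-coalition features over a background dataset. This brute-force form is feasible only for a handful of features; practical libraries such as SHAP approximate it by sampling.

```python
from itertools import combinations
from math import factorial

import numpy as np

def shapley_values(predict, x, background):
    """Exact Shapley attributions for one instance x.

    predict    : callable mapping an (n, d) array to (n,) scores
    x          : shape-(d,) instance to explain
    background : shape-(m, d) reference data used to marginalize
                 features outside the coalition
    """
    d = x.shape[0]

    def value(coalition):
        # Expected prediction with coalition features fixed to x and
        # the remaining features drawn from the background data.
        data = background.copy()
        cols = list(coalition)
        data[:, cols] = x[cols]
        return predict(data).mean()

    phi = np.zeros(d)
    for i in range(d):
        others = [j for j in range(d) if j != i]
        for size in range(d):
            for s in combinations(others, size):
                # Standard Shapley weight |S|! (d - |S| - 1)! / d!
                w = factorial(size) * factorial(d - size - 1) / factorial(d)
                phi[i] += w * (value(s + (i,)) - value(s))
    return phi

# Toy usage: for a linear model, the attributions recover
# w_i * (x_i - background mean of feature i).
rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0, 0.5])
predict = lambda X: X @ w_true
print(shapley_values(predict, np.ones(3), rng.normal(size=(100, 3))))
```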
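
The third sketch illustrates explanation-driven retraining in the spirit of "right for the right reasons" (Ross et al., 2017): the training loss penalizes input-gradient attributions on features a user has flagged as spurious, pushing the model toward the features that actually carry the label. The mask `irrelevant` and the toy data are hypothetical.

```python
import torch
import torch.nn.functional as F

def rrr_loss(model, x, y, irrelevant, lam=1.0):
    """Task loss plus a penalty on input-gradient attributions over
    features flagged as irrelevant (0/1 mask with the same shape as x)."""
    x = x.clone().requires_grad_(True)
    logits = model(x)
    task_loss = F.cross_entropy(logits, y)
    # Input-gradient explanation of the summed log-probabilities,
    # kept differentiable (create_graph) so the penalty can be trained through.
    grads, = torch.autograd.grad(
        F.log_softmax(logits, dim=1).sum(), x, create_graph=True
    )
    expl_loss = ((grads * irrelevant) ** 2).sum()
    return task_loss + lam * expl_loss

# Toy usage: feature 1 is flagged as spurious, so training pushes the
# model to rely on feature 0, which carries the true label.
model = torch.nn.Linear(2, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x = torch.randn(32, 2)
y = (x[:, 0] > 0).long()
irrelevant = torch.tensor([0.0, 1.0]).expand_as(x)
for _ in range(100):
    opt.zero_grad()
    rrr_loss(model, x, y, irrelevant).backward()
    opt.step()
```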
