
Uncertainty-Aware Explainable AI (UAXAI) is an emerging, fast-moving frontier in AI that tackles a crucial gap in today’s explainability techniques. While traditional XAI focuses on why a model made a decision, recent insights highlight that this alone is insufficient for trustworthy AI – users also need to know how sure the model is about its explanation and prediction.
There is growing recognition that conveying a model's confidence alongside its reasoning is essential for robust, human-centered AI. Yet UAXAI is still a young field and shows the hallmarks of a discipline in need of synthesis and standards. Evaluation practices remain heterogeneous and largely model-centered, with few user studies and no agreed-upon metrics for uncertainty in explanations. Reliability measures such as calibration, coverage, explanation variance, and stability are inconsistently reported and often conflated with raw model performance. These gaps underscore the need for community convergence and visibility.
This special session will bring together researchers and practitioners to accelerate progress in UAXAI, emphasizing both theoretical advances and real-world applications. It will bridge interpretability and reliability by showcasing methods that tell users not only what the model says, but also how certain it is, fostering appropriate trust in AI. By highlighting new frameworks, evaluation protocols, uncertainty-aware deferral/abstention strategies, and domain-specific case studies, the session aims to chart a roadmap for uncertainty-aware explanations that improve transparency, safety, and user trust in high-stakes AI systems.
Keywords: Uncertainty-Aware XAI, Uncertainty Quantification, Aleatoric Uncertainty, Epistemic Uncertainty, Calibration, Coverage, Conformal Prediction, Bayesian Methods, Monte Carlo Dropout, Counterfactual Explanations, Abstention/Deferral, Explanation Stability, Explainer Variance, Evaluation Protocols, Human-in-the-Loop, Appropriate Trust, Trust Calibration, High-Stakes AI, Autonomous Systems.
Topics
- Aleatoric vs epistemic uncertainty in explanations. Clarify and communicate their roles. Applications: diagnosis, credit risk. Novel: decomposition at feature or modality level.
- Confidence-calibrated explanations. Pair each rationale with calibrated confidence. Applications: clinical decision support, fraud. Novel: explanation-level reliability reporting.
- Integrated frameworks for prediction quality, explanation quality, and trust. Jointly optimize performance, calibration, and interpretability. Applications: regulated finance, safety-critical ops. Novel: multi-objective training using UQ signals.
- Evaluation protocols for UAXAI. Standardize coverage, calibration, stability, and human impact. Applications: cross-domain benchmarks. Novel: unified triad of metrics plus user studies. See the first sketch after this list.
- Explanation method variance and stability. Quantify explainer uncertainty and aggregate robustly. Applications: SHAP/LIME in healthcare, GNN explainers. Novel: variance-aware explainer selection. See the second sketch after this list.
- Reject and defer with explanations. Explain abstention and handoff rules under high uncertainty. Applications: clinical triage, credit decisions. Novel: policy-aware deferral calibrated to user risk.
- Conformal prediction and set-valued explanations. Distribution-free intervals and sets paired with explanations. Applications: diagnosis with guarantees, time-series forecasting. Novel: Venn-Abers and predictive distributions linked to XAI. See the third sketch after this list.
- Bayesian and probabilistic approaches. Use BNNs, GPs, ensembles to express epistemic uncertainty in explanations. Applications: forecasting, perception. Novel: posteriors over attributions.
- Counterfactuals with uncertainty. Generate recourse with confidence bounds. Applications: loans, safety cases. Novel: robust counterfactuals under epistemic constraints.
- Human-in-the-loop and appropriate trust calibration. Design UIs that communicate uncertainty to prevent over/under-trust. Applications: HCI studies, operational consoles. Novel: adaptive uncertainty presentation by user role.
- Domain applications and deployment. Case studies in healthcare, finance, autonomous systems, energy. Focus on data shift, latency, and governance. Novel: field-validated UAXAI playbooks.
- Emerging trends and scalability. UAXAI for LLMs, streaming and real-time systems. Novel: online calibration and latency-aware explanations at scale.
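
To ground the evaluation-protocol topic above, the first sketch below is a minimal Python illustration of expected calibration error (ECE) over binned confidences, one of the reliability measures the session highlights. The confidences and labels are synthetic placeholders, and the equal-width binning is one common choice rather than a prescribed protocol.

```python
# Minimal sketch (illustrative): expected calibration error (ECE) over
# binned confidences. The probabilities and labels below are synthetic
# placeholders, not data from any particular system.
import numpy as np

rng = np.random.default_rng(2)
n = 2000
conf = rng.uniform(0.5, 1.0, n)            # model confidence in predicted class
correct = rng.random(n) < conf ** 1.5      # synthetic accuracy, mildly overconfident

def expected_calibration_error(conf, correct, n_bins=10):
    """Weighted mean gap between confidence and accuracy per confidence bin."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            gap = abs(conf[mask].mean() - correct[mask].mean())
            ece += mask.mean() * gap
    return ece

print(f"ECE: {expected_calibration_error(conf, correct):.3f}")
```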

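The second sketch illustrates the explainer-variance topic: a hypothetical toy classifier and a simple permutation-importance attribution are re-run across an ensemble of perturbed weight vectors (standing in for MC-dropout or bootstrap samples), and the per-feature spread of attributions is reported as an explanation-uncertainty signal. All names, data, and the perturbation scheme are illustrative assumptions, not a reference implementation.

```python
# Minimal sketch (illustrative): estimating explanation variance by repeating
# a simple permutation-importance attribution over an ensemble of perturbed
# models. toy_model, attribution, and the weight perturbations are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def toy_model(X, w):
    """A stand-in predictor: logistic model with weights w."""
    return 1.0 / (1.0 + np.exp(-X @ w))

def attribution(X, y, w):
    """Permutation importance: drop in accuracy when a feature is shuffled."""
    base = np.mean((toy_model(X, w) > 0.5) == y)
    scores = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])
        scores[j] = base - np.mean((toy_model(Xp, w) > 0.5) == y)
    return scores

# Toy data, plus an "ensemble" of weight vectors standing in for epistemic
# uncertainty (e.g., MC-dropout samples or bootstrap fits).
X = rng.normal(size=(500, 4))
true_w = np.array([2.0, -1.0, 0.5, 0.0])
y = (toy_model(X, true_w) > 0.5).astype(int)
members = [true_w + rng.normal(scale=0.3, size=4) for _ in range(20)]

# Attribution mean and spread across ensemble members: the spread is one
# possible explainer-uncertainty signal of the kind the topics mention.
A = np.stack([attribution(X, y, w) for w in members])
print("mean attribution:", A.mean(axis=0).round(3))
print("attribution std :", A.std(axis=0).round(3))
```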

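The third sketch shows, in the spirit of the conformal-prediction topic, how split conformal prediction can produce set-valued outputs with a target coverage level and how that coverage can be checked empirically. The toy model, labels, and nonconformity score are hypothetical choices for illustration only.

```python
# Minimal sketch (illustrative): split conformal prediction sets for a toy
# classifier, with an empirical coverage check. All data are synthetic.
import numpy as np

rng = np.random.default_rng(1)
n_cal, n_test, n_classes = 500, 500, 3
alpha = 0.1  # target miscoverage: aim for >= 90% coverage

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def toy_probs(labels):
    """Toy "model": noisy logits whose argmax usually matches the true label."""
    logits = rng.normal(size=(labels.size, n_classes))
    logits[np.arange(labels.size), labels] += 2.0
    return softmax(logits)

y_cal = rng.integers(0, n_classes, n_cal)
y_test = rng.integers(0, n_classes, n_test)
p_cal, p_test = toy_probs(y_cal), toy_probs(y_test)

# Nonconformity score: 1 - probability assigned to the true class.
scores = 1.0 - p_cal[np.arange(n_cal), y_cal]
q = np.quantile(scores, np.ceil((n_cal + 1) * (1 - alpha)) / n_cal, method="higher")

# Prediction sets: all classes whose probability clears the calibrated threshold.
sets = p_test >= 1.0 - q
coverage = sets[np.arange(n_test), y_test].mean()
print(f"empirical coverage: {coverage:.3f}  avg set size: {sets.sum(axis=1).mean():.2f}")
```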

