Understanding and managing uncertainty in AI models is increasingly important to ensure transparency, appropriate trust, and reliability. This track explores diverse approaches to integrating uncertainty quantification into explainable AI (XAI) frameworks, emphasizing how explanations can communicate both model confidence and prediction reliability. Topics of interest include methods for representing and interpreting aleatoric and epistemic uncertainties, as well as techniques that use these insights to guide decision-making processes in high-stakes environments. Submissions addressing uncertainty-aware explanations in areas such as reject/defer options, time series, and human-centric systems are particularly welcome. The track also seeks novel evaluation metrics and domain-specific applications of uncertainty-aware explanations, with the goal of advancing actionable and interpretable AI systems.
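
To make the distinction between aleatoric and epistemic uncertainty concrete, the sketch below assumes a deep-ensemble classifier and decomposes total predictive entropy into expected per-member entropy (aleatoric) plus the mutual information between prediction and ensemble member (epistemic); all function and variable names are illustrative assumptions, not part of the call.

    import numpy as np

    def entropy(p, axis=-1, eps=1e-12):
        # Shannon entropy (in nats) along the class axis.
        return -np.sum(p * np.log(p + eps), axis=axis)

    def decompose_uncertainty(member_probs):
        # member_probs: (n_members, n_samples, n_classes) class probabilities
        # predicted by each ensemble member.
        mean_probs = member_probs.mean(axis=0)          # ensemble-averaged prediction
        total = entropy(mean_probs)                     # total predictive uncertainty
        aleatoric = entropy(member_probs).mean(axis=0)  # expected per-member entropy
        epistemic = total - aleatoric                   # mutual information (disagreement)
        return total, aleatoric, epistemic

    # Toy usage: 5 members, 3 inputs, 4 classes of random softmax outputs.
    rng = np.random.default_rng(0)
    logits = rng.normal(size=(5, 3, 4))
    probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
    total, aleatoric, epistemic = decompose_uncertainty(probs)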

  • Aleatoric and epistemic uncertainty representation in model explanations
  • Methods for uncertainty quantification in inherently interpretable models
  • Explanations for reject and defer options in predictive systems (a minimal reject-option sketch follows this list)
  • Time-series uncertainty explanation methods
  • Frameworks for multi-level uncertainty representation in hierarchical systems
  • Human-centric approaches to calibrating uncertainty explanations
  • Uncertainty-aware explanations based on Bayesian theory
  • Conformal prediction-based explanations (a split-conformal sketch follows this list)
  • Metrics for evaluating uncertainty-aware explanations in XAI
  • Feature importance explanations incorporating model uncertainty
  • Methods for explaining uncertainty in real-time systems
  • Exploring uncertainty dynamics in sequential decision systems
  • Scalable frameworks for large-scale uncertainty-aware explanations
  • Domain-specific studies of uncertainty-aware XAI in domains where decision confidence is critical
  • Contextual uncertainty explanations tailored to fairness and bias mitigation
  • Calibration strategies for enhancing trust through uncertainty representation
  • Integration of uncertainty in federated and decentralized AI systems
  • Techniques for simultaneous explanation and uncertainty quantification in AI pipelines
  • Practical tools for uncertainty-aware model debugging
  • Uncertainty-aware explanations for multimodal data systems
  • Leveraging uncertainty explanations to improve model validation processes
  • Impact of uncertainty-aware XAI on appropriate trust and reliance
  • Novel visualization techniques for uncertainty in explanations
  • Combining uncertainty with counterfactual explanations
  • Frameworks for domain adaptation with uncertainty-aware interpretations
  • Explainable reinforcement learning under uncertainty
  • Enhancing safety in critical AI systems through uncertainty explanations
  • Tools for user-adaptive explanations based on uncertainty or confidence levels
  • Explanation methods focusing on uncertainty in low-resource settings
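
As a minimal illustration of the reject and defer topic above, a predictor can abstain whenever predictive entropy exceeds a threshold, leaving the explanation to surface the uncertainty that triggered the deferral. The function name and threshold below are assumptions for the sketch, not specifications from the call.

    import numpy as np

    def predict_with_reject(probs, threshold=0.5):
        # probs: (n_samples, n_classes) predicted class probabilities.
        # Returns (labels, entropies); a label of -1 marks a deferred
        # decision, and the entropy value can be reported in the explanation.
        ent = -np.sum(probs * np.log(probs + 1e-12), axis=-1)
        labels = probs.argmax(axis=-1)
        return np.where(ent > threshold, -1, labels), ent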
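
Likewise, split conformal prediction is one standard route to the conformal topic above: calibrate nonconformity scores on held-out data, then attach distribution-free intervals to test predictions. In this sketch, "model" stands for any fitted regressor with a scikit-learn-style predict method; all names are illustrative.

    import numpy as np

    def split_conformal_intervals(model, X_cal, y_cal, X_test, alpha=0.1):
        # Absolute-residual nonconformity scores on the calibration split.
        scores = np.abs(y_cal - model.predict(X_cal))
        n = len(scores)
        # Finite-sample-corrected quantile for ~(1 - alpha) marginal coverage.
        level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
        q = np.quantile(scores, level)
        preds = model.predict(X_test)
        return preds - q, preds + q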