This special track focuses on advancing human-AI interaction with explainable artificial intelligence (XAI) systems by integrating traditional XAI techniques with diverse external knowledge sources. Hybrid human-AI decision-making, central to this exploration, refers to a collaborative framework in which humans and AI systems jointly contribute to the analysis, evaluation, and resolution of decision-making tasks. This synergy combines complementary human strengths, such as domain expertise, intuition, ethical reasoning, and contextual understanding, with AI capabilities such as data processing, pattern recognition, and predictive modeling.

In particular, the track emphasizes hybrid decision-making systems that deliver accurate predictions and provide contextually meaningful explanations tailored to diverse user needs. These systems combine structured knowledge bases, unstructured data, and multidisciplinary approaches to create a framework for explanation. Structured knowledge bases ensure consistency and reliability by offering formal, organized explanations. Unstructured knowledge, derived from text, images, and domain-specific insights, adds nuanced, adaptable explanations that address real-world complexities. A multidisciplinary perspective bridges these methods, focusing on user-centric design to ensure that explanations are accessible, transparent, and actionable for varied audiences, including experts, novices, and interdisciplinary stakeholders.
By addressing challenges in data integration, context awareness, and explanation personalization, this track highlights how explainable hybrid systems can enhance decision quality, foster trust, and ensure ethical accountability. We encourage submissions presenting innovative methodologies and studies that advance systems capable of explaining both “how” decisions are made and “why” they matter, enabling transparent and collaborative human-AI interaction.

Topics of interest include, but are not limited to:

  • Frameworks and techniques for human-AI interaction and hybrid systems, including counterfactual reasoning and example-based explanations
  • Explaining complex decision chains in human-XAI hybrid systems (e.g., behavioural and psychometric analysis of users)
  • Knowledge-injected explanations (e.g., domain-specific and actionable explanations through ontologies)
  • Strategies for increasing user trust (e.g., confidence interval visualizations, uncertainty heatmaps)
  • Case studies, applications, and challenges for explainable hybrid systems in critical sectors
  • Post-hoc explanation techniques for Learning-to-Defer systems
  • Interpretable models for Learning-to-Defer approaches to build human-AI teams
  • Explainable Learning-to-Reject and selective classification
  • Formal methods for verifiable hybrid decision systems (e.g., symbolic reasoning, decision boundary visualizations)
  • Bias mitigation in hybrid decision systems (e.g., counterfactual fairness analysis, fairness-aware optimization of explanations)
  • Explainable Active Learning (e.g., feedback-driven iterative refinement of decision boundaries)
  • Explainability-based interactive learning algorithms (e.g., interactive counterfactual scenarios)
  • Novel methods for causal evaluation of explanation effects in human-AI interaction
  • Explanations for Continual Learning settings with a human in the loop
  • Uncertainty-aware explanation methods and uncertainty quantification (including novel visualization techniques)
  • Customizable, interactive, and adaptive user-driven explanations
  • Cognitive and behavioural foundations of explainable AI
  • Empirical user studies on explanations for human-AI collaboration (e.g., trust assessments, cognitive load analysis)
  • Evaluation metrics for explainability in hybrid decision-making (e.g., understandability scores, usability measures, assessment of automation bias)
  • Ethical, legal, and societal implications of explainable hybrid decision-making (e.g., novel Algorithmic Impact Assessment strategies)