
The rapid development of machine learning models and explainable AI (XAI) methods has created a diverse methodological landscape, yet a significant gap remains between available techniques and their practical adoption. To move from theory to impact, research must support practitioners in operationalizing explainability by establishing frameworks, workflows, and design practices that connect algorithmic methods with human-centered design and evaluation. This requires advancing both explanation systems (X-SYS) and explanation user interfaces (XUI). An explanation system is a socio-technical system that integrates explainability mechanisms into the model pipeline and exposes them to users through coherent interaction, visualization, and evaluation workflows. It moves beyond isolated XAI techniques by treating explainability as a system-level capability that must be deliberately designed and maintained across the model lifecycle.
Explanation User Interfaces (XUI) constitute the user-facing layer of an X-SYS. They include explanation outputs and interactive elements through which users engage with an AI system, whether explanations stem directly from the model or from explanation-generating algorithms. Unlike standard UIs, an XUI is purpose-built to externalize model reasoning, uncertainty, and decision boundaries. It supports various explanation types (e.g., local, global, counterfactual, contrastive, feature-level, and concept-based) and provides interaction mechanisms that allow users to inspect, explore, critique, or adjust model behaviour according to their goals and expertise.
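To make this concrete, the sketch below shows one purely illustrative way an X-SYS might hand structured explanation payloads to an XUI layer: hypothetical Python types for a local feature-attribution explanation and a counterfactual explanation, plus a toy textual rendering. All class and field names are assumptions for illustration, not an established schema.

```python
# Minimal sketch (not a prescribed schema): hypothetical explanation payloads
# that an X-SYS could pass to an XUI layer. All names are illustrative.
from dataclasses import dataclass


@dataclass
class FeatureAttribution:
    """Local, feature-level explanation for a single prediction."""
    prediction: str
    confidence: float                      # model score in [0, 1]
    attributions: dict[str, float]         # feature name -> contribution


@dataclass
class Counterfactual:
    """Contrastive explanation: minimal feature changes that flip the outcome."""
    target_outcome: str
    changes: dict[str, tuple[object, object]]  # feature -> (current value, required value)


def render_local_explanation(exp: FeatureAttribution) -> str:
    """Toy textual rendering; a real XUI would drive interactive visuals instead."""
    lines = [f"Prediction: {exp.prediction} ({exp.confidence:.0%} confidence)"]
    for name, weight in sorted(exp.attributions.items(), key=lambda kv: -abs(kv[1])):
        direction = "supports" if weight > 0 else "opposes"
        lines.append(f"  {name}: {direction} the prediction (weight {weight:+.2f})")
    return "\n".join(lines)


if __name__ == "__main__":
    example = FeatureAttribution(
        prediction="loan rejected",
        confidence=0.81,
        attributions={"income": -0.35, "debt_ratio": 0.52, "employment_years": -0.10},
    )
    print(render_local_explanation(example))
```

Separating the explanation payload from its presentation in this way keeps the explanation pipeline and the interface independently testable, which is one reason to treat explainability as a system-level capability rather than an interface afterthought.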
Developing human-centered X-SYS and XUI requires aligning three perspectives: the technical mechanisms that generate explanations, the human factors that shape their design, and the development processes that integrate them throughout the system lifecycle. Current approaches often treat explainability as an add-on, leaving a methodological gap between XAI Engineering (focused on system-level transparency) and Human-Centered XAI (focused on user needs and evaluation). As a result, explainability is insufficiently embedded within established design and software engineering frameworks.
This track aims to bridge this gap by focusing on the design and development processes that turn explainability into a practical, human-centered activity. It encourages contributions that address how explainability requirements can be derived, prototyped, evaluated, and refined within iterative, interdisciplinary workflows. Drawing from HCI, UX design, software engineering, and cognitive science, the track offers a forum for advancing the real-world development of explainable systems. To distinguish this session from psychometric XAI and uncertainty-aware XAI, the focus is not on construct development, latent-variable modelling, or uncertainty quantification. User understanding, trust, and uncertainty perception may be discussed, but only insofar as they inform interface design, system workflows, or engineering integration. Uncertainty is treated as one design parameter among many rather than a methodological centerpiece.
Intended audience: HCI and UX researchers, XAI engineers and designers, cognitive scientists, and applied AI practitioners. The session is especially relevant for those working on interface design, workflow integration, prototyping, and system-level implementation rather than psychometric measurement or algorithmic explainability.
Key questions include:
- How can explainability be integrated into design processes from early requirements to deployment?
- Which tools, frameworks, or methodologies support the creation of effective explanation user interfaces (XUI)?
- How can evaluation methods remain lightweight and diagnostic while enabling consistent metrics across studies?
- How can collaboration between AI engineers, designers, and domain experts be made more systematic?
Keywords: Explainable AI, Human-Centered XAI, Explanation User Interfaces, XUI, Explanation Systems, X-SYS, Human-AI Interaction, Model Transparency, Interpretable Machine Learning, Explainability Engineering, XAI Evaluation, Cognitive Models, Uncertainty Communication, Interactive Explanations, Visualization for XAI, User-Centered Design, Participatory XAI, Trust in AI, Explanation Prototyping, Design Workflows
Topics
Design and Development of Explanation Systems (X-SYS)
- System-level integration of explainability mechanisms
- Pipelines connecting model transparency tools with interaction and evaluation workflows
- Lifecycle-oriented explainability engineering (requirements → design → implementation → deployment)
- Methods for embedding explainability into model pipelines and AI system architectures
- Explainability requirements analysis and traceability across the system lifecycle
Explanation User Interfaces (XUI): Design, Prototyping & Interaction
- Design principles and interaction patterns for explanation interfaces
- Prototyping methods for XUI (low-fidelity and high-fidelity)
- Interfaces supporting local, global, counterfactual, contrastive, feature-based, or concept-based explanations
- Interaction mechanisms for inspecting, critiquing, or adjusting model behaviour
- Visualization approaches for communicating uncertainty, decision boundaries, and model reasoning
Human-Centered and Interdisciplinary Approaches
- Frameworks linking XAI techniques with HCI and UX design workflows
- Co-design, participatory, and co-creative practices for explainability
- Deriving user-oriented explainability requirements for diverse stakeholders
- Cross-disciplinary collaboration models (AI engineering × UX × domain experts)
- Inclusive, accessible, and diversity-aware explainability design
Evaluation and Methodological Integration
- Multi-method evaluation of X-SYS and XUI (algorithmic + human factors)
- Lightweight and diagnostic evaluation techniques for iterative design
- Standardization of metrics for trust, usability, understanding, uncertainty perception, and mental models
- Cognitive models and theories informing explanation design (e.g., cognitive load, dual-process reasoning)
- Experimental methods for studying human-AI interaction with explanations
Tools, Frameworks, and Development Workflows
- Toolkits connecting XAI methods to design and prototyping environments
- Integration of explainability into agile, DevOps, or human-centered development processes
- End-to-end workflows for building, deploying, and maintaining explainable systems
- Model monitoring, explanation drift detection, and adaptive explanation systems (see the sketch after this list)
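As a minimal illustration of the last point, the sketch below flags explanation drift by comparing normalized mean-absolute feature-attribution profiles between a reference window and a recent window. The function names and the threshold are illustrative assumptions rather than a specific tool's API.

```python
# Minimal sketch of explanation drift detection (illustrative assumptions only):
# compare average |feature attribution| profiles between a reference window and
# a recent window, and flag drift when the normalized profiles diverge too much.
import numpy as np


def attribution_profile(attributions: np.ndarray) -> np.ndarray:
    """Normalize mean absolute attributions (samples x features) into a profile."""
    mean_abs = np.abs(attributions).mean(axis=0)
    total = mean_abs.sum()
    return mean_abs / total if total > 0 else mean_abs


def explanation_drift(reference: np.ndarray, current: np.ndarray,
                      threshold: float = 0.15) -> tuple[float, bool]:
    """Return (distance, drifted) using total variation distance between profiles."""
    p, q = attribution_profile(reference), attribution_profile(current)
    distance = 0.5 * np.abs(p - q).sum()   # total variation distance in [0, 1]
    return float(distance), bool(distance > threshold)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.normal(loc=[0.6, 0.3, 0.1], scale=0.05, size=(500, 3))  # stable attributions
    cur = rng.normal(loc=[0.2, 0.3, 0.5], scale=0.05, size=(500, 3))  # shifted importance
    dist, drifted = explanation_drift(ref, cur)
    print(f"attribution drift = {dist:.2f}, alert = {drifted}")
```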
Applied Case Studies and Domain-Specific Investigations
- Explanation interface design in safety-critical or high-sensitivity domains (healthcare, law, finance, mobility)
- Studies on user needs, mental models, and trust dynamics in real deployments
- Field evaluations of X-SYS or XUI prototypes
- Organizational adoption, governance, and training for explainable systems
