Traditional Explainable AI (XAI) approaches, which focus primarily on algorithmic transparency, often overlook the varied needs of users across critical domains such as healthcare, law, and finance. A shift towards human-centered approaches is required to address these limitations and to ensure that AI systems can communicate effectively with diverse stakeholders, thereby enabling richer Human-Computer Interaction (HCI). In the context of HCI, explainability refers to the ability of AI systems to provide transparent and understandable accounts of their functionality. This session focuses on advancing Human-Centered Explainable AI (HCXAI) by examining how human-centered perspectives can be incorporated at the conceptual, methodological, and technical levels. It seeks to foster actionable frameworks, transferable evaluation methods, and concrete design guidelines for operationalizing HCXAI. We aim to explore how XAI can move beyond expert users, ensuring that explanations are tailored to different recipients and contexts and addressing the WHO, WHEN, WHY, and HOW of explanation delivery. Contributions that integrate holistic design principles and explore the intersection of human-centered design with XAI are particularly encouraged. The session will provide a platform for innovative approaches that move the conversation beyond the algorithm to include the human factors critical to the adoption and effectiveness of AI systems.

  • Human-Centered Design for Explainable AI, including design principles (e.g. Clarity, Transparency, Personalization, Interactivity, Responsiveness)
  • Explanation Refinement with Human Feedback Loops (e.g. with Active learning, Reinforcement learning, Cognitive feedback loops)
  • Contextualizing Explanations for Non-Expert Users based on the user’s situation, background, or specific needs (e.g. with User Knowledge, Task Context, Cultural Context)
  • Multi-Stakeholder Approaches to HCXAI addressing differing stakeholder needs (e.g. in healthcare, law, finance)
  • Improving User Engagement with AI Systems through HCI and XAI (e.g. via active/reinforcement learning, personalized explanations based on user interaction, user modeling, and satisfaction)
  • Human Cognitive Load and Adaptive Granularity Control Mechanisms for Explainability Design (e.g. with chunking, progressive disclosure, visual aids)
  • Bridging Human Cognitive Models with XAI using theories (e.g. cognitive load theory, dual process theory)
  • Influence of Embodiment (Physical or Virtual) on Human Perception and Comprehension of Explanations
  • Considering Contextual Cues, Communication Bandwidth, and Non-Verbal Feedback for Tailored AI-Explanation Delivery
  • Accessibility in AI Explanations for Diverse Users (e.g. Text-to-Speech for visually impaired users, Sign Language Interpretation for hearing-impaired users, Simplified Language)
  • Personalized Explanations in Healthcare, Finance, and Legal AI
  • Investigation of User Acceptance and Effectiveness of Anthropomorphic (Human-like) AI Explainers (e.g. with Social Presence Theory, Turing Test)
  • Impact of Agent Appearance/Behavior on Explainability Perception (e.g. with Usability Testing, Trust Metrics)
  • Trust-Building Through Human-Centered Explanations (e.g. with trust scales, behavioral metrics)
  • Ethical Considerations in Human-Centered AI Design (e.g. measuring fairness and accountability)
  • Designing Explanations for Legal AI Applications (e.g. with explanation protocols, decision rationales)
  • Collaborative Decision-Making with Explainable AI (via multi-agent systems, rule-based systems, case-based reasoning)
  • Eye-Tracking Techniques and Perception-Based Evaluation Metrics for XAI Explanations (e.g. fixation duration, heatmaps, perceived usability, cognitive load, user satisfaction); a minimal metric-aggregation sketch follows this list
  • Addressing Social Biases in Decision-Making and Employing Social Science Methodologies in XAI (e.g. bias audits, fairness metrics)
  • Natural Language Generation Models for Crafting Textual Explanations of AI Model Predictions (e.g. via GPT models, BERT); a minimal generation sketch also follows this list
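
As a concrete reference point for the eye-tracking topic above, the following minimal sketch shows how perception-based metrics such as mean fixation duration and total dwell time per area of interest (AOI) could be aggregated with pandas. The column names, AOI labels, and values are illustrative assumptions, not the output format of any particular eye tracker.

```python
# Minimal sketch: aggregating perception-based metrics from eye-tracking data.
# Assumes fixation events were already extracted by the eye tracker and exported
# with a duration (ms) and an area-of-interest (AOI) label; names are illustrative.
import pandas as pd

# Illustrative fixation log: one row per fixation event.
fixations = pd.DataFrame({
    "participant": ["p1", "p1", "p1", "p2", "p2", "p2"],
    "aoi":         ["explanation_text", "prediction", "explanation_text",
                    "explanation_text", "prediction", "heatmap_overlay"],
    "duration_ms": [420, 180, 650, 310, 220, 540],
})

# Mean fixation duration, total dwell time, and fixation count per AOI:
# common proxies for how much visual attention each part of the explanation received.
metrics = fixations.groupby("aoi")["duration_ms"].agg(
    mean_fixation_duration="mean",
    total_dwell_time="sum",
    fixation_count="count",
)
print(metrics)
```

Such aggregates can then be correlated with self-reported measures (perceived usability, cognitive load, satisfaction) to evaluate explanation designs.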
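For the natural language generation topic, the sketch below illustrates one possible way to verbalize feature attributions for a single prediction: first with a simple, auditable template, then with an optional neural rephrasing step. It assumes the Hugging Face transformers library and the generic gpt2 checkpoint; the feature names, attribution scores, and prediction label are hypothetical and would normally come from an XAI method such as SHAP or LIME.

```python
# Minimal sketch: turning feature attributions into a textual explanation and
# optionally letting a small pretrained language model rephrase it.
from transformers import pipeline

# Hypothetical attribution scores for one prediction (feature -> contribution).
attributions = {"income": 0.62, "credit_history_length": 0.31, "recent_defaults": -0.25}
prediction = "loan approved"

# Template-based explanation: reliable and easy to audit.
ranked = sorted(attributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
template = (
    f"The model's decision was '{prediction}'. "
    + " ".join(
        f"The feature '{name}' {'supported' if score > 0 else 'worked against'} "
        f"this outcome (weight {score:+.2f})."
        for name, score in ranked
    )
)
print(template)

# Optional neural rephrasing for a more conversational tone. Generated text
# should be checked for faithfulness before being shown to end users.
generator = pipeline("text-generation", model="gpt2")
prompt = (
    "Explain the following decision in plain language:\n"
    + template
    + "\nPlain-language explanation:"
)
rephrased = generator(prompt, max_new_tokens=60, do_sample=False)[0]["generated_text"]
print(rephrased)
```

The two-stage design reflects a common trade-off in HCXAI: templates keep the explanation faithful to the underlying attributions, while generative rephrasing can improve readability for non-expert users at the cost of requiring a faithfulness check.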