With the widespread use of Artificial Intelligence (AI) systems, understanding the behaviour of intelligent agents and robots (remote or colocated) is crucial to ensuring a smooth explainer-explainee relationship, because it is not straightforward for an explainee (mainly human) to understand an explainer’s state of mind. In the relationship with humans, the explainer could be a computer, machine, AI system, agent, or robot, and the relationship can take several forms: interaction, cooperation, collaboration, teaming, symbiosis, and/or integration. Explainability in Human-Computer Interaction (HCI) refers to the ability of AI systems to provide transparent and understandable explanations of their functionality. Explainable AI (XAI) can bring various advantages to HCI: increased trust and satisfaction, improved accountability and transparency, better user engagement, and improved decision-making. Yet most work on XAI focuses on delivering sophisticated explanations targeted at AI researchers and domain experts, neglecting lay users, and recent literature has pointed out that the design of XAI systems should consider the user’s background, cognitive skills, and prior knowledge. Various challenges therefore need to be addressed: balancing explainability and performance; understanding user needs and preferences; addressing diversity, discrimination, and bias; managing complexity, overhead, and ethical and social implications; generating textual explanations of model predictions; conducting evaluation and validation; and ensuring privacy, security, and trustworthiness.
Topics
explainer-explainee interaction analysis with communication/HCI theories/studies |
effective communication in XAI with conversational strategies/interactive interfaces |
improving explainer-explainee interaction in XAI with feedback loops for mutual understanding |
explanation refinement with human feedback loops within AI systems |
improving user engagement with AI systems in HCI via active/reinforcement learning |
embodiment influence (physical or virtual) on human perception/comprehension of explanations of AI systems |
investigation of embodied cognition theories/virtual embodiment techniques in XAI interfaces |
considering contextual cues, communication bandwidth, and non-verbal feedback for tailored AI-explanation delivery |
investigation of user acceptance and effectiveness of anthropomorphic (human-like) AI explainers |
crafting realistic explanations of AI systems with embodied AI agents/avatars |
comparison of anthropomorphic AI explainers versus virtual agents in delivering explanations |
design principles for anthropomorphic/non-anthropomorphic agents in XAI |
trust, relatability, and cultural perceptions in explanation delivery |
effective collaboration between humans and robots/agents through XAI |
co-creation of models and participatory design approaches for explainable collaborative interfaces |
enhancing human-robot trust via explanations |
explanations in cooperation/task performance in collaborative scenarios via user-centric evaluations |
delivering explanations of AI systems in remote or face-to-face colocated settings |
investigation of the impact of distance/medium on explanation effectiveness in XAI |
adaptive explanation for XAI interfaces for remote or face-to-face interactions |
natural language explanations via conversational AI systems |
dialog-based explanation generation models and context-aware conversational interfaces |
impact of agent appearance/behaviour on explainability perception |
identification of optimal prompt structures for different XAI tasks and user scenarios |
designing prompt/query techniques for eliciting more interpretable and informative responses from AI models |
conversation flow and user engagement metrics for evaluating explanations of AI systems |
evaluation of prompt design impact on the interpretability and fidelity of AI model outputs |
understanding user perception/mental models of AI explanations |
exploration of perceptual psychology theories/cognitive science models to enhance XAI interfaces |
cognitive load measurement techniques for evaluating explanations of/for AI systems |
investigating the impact of explanation formats (text, visual, interactive) on user perception and comprehension in XAI |
eye-tracking techniques and perception-based evaluation metrics for XAI-based explanations |
user-centric evaluation frameworks for assessing the human-centered aspects of XAI |
participatory design methodologies, personalized recommendation algorithms, and user-centred approaches for XAI development |
cultural diversity, societal norms, and group dynamics in designing socially-aware XAI interfaces |
transparency/interpretability roles in AI systems for promoting societal trust |
addressing social biases in decision-making, employing social sciences methodologies in XAI |
natural language generation models for crafting textual explanations for AI model predictions |
leveraging techniques such as attention mechanisms/transformer architectures for crafting explanations (see the attention-weight sketch after this list) |
comprehensibility/effectiveness evaluation of textual explanations |
enhancing user understanding of model predictions via readability assessments/linguistic analysis (see the readability sketch after this list) |
investigating interpretable models for sentiment analysis tasks |
sentiment-specific feature extraction techniques for explainability |
XAI interfaces for sentiment analysis applications |
elucidating the rationale behind sentiment predictions with AI systems |
sentiment visualization methods and sentiment-specific explanation algorithms |
explanations for text generation models |
exploration of fluency-explainability trade-offs in text generation tasks |
evaluating user preferences/comprehension in text generation explainability interfaces |
context-aware AI explanations in HCI |
contextual bandit algorithms, adaptive interface designs for timely/relevant explanations and user-context alignment (see the contextual-bandit sketch after this list) |
adaptation of AI explanations to match user mental models/context in HCI |
employing adaptive explanation generation methods and context-aware interfaces |
dynamic detail/complexity adjustment of AI explanations based on user expertise |
human cognitive load and adaptive granularity control mechanisms for explainability design |
assessing the usability/effectiveness of XAI interfaces in HCI using user-centered evaluation methodologies |
think-aloud protocols, usability testing, task performance measures for XAI
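As a companion to the topic on attention mechanisms for crafting explanations, the following minimal Python sketch shows how scaled dot-product attention weights over input tokens can be surfaced as rough token-importance scores. It assumes nothing about any particular model: the query/key vectors are random placeholders standing in for the learned representations a real transformer would provide, and attention weights are only one possible (and debated) proxy for importance.

import numpy as np

def attention_importance(tokens, query, keys):
    # Scaled dot-product attention over a single query vector; the softmaxed
    # weights sum to 1 and can be shown to the user as "which words the model
    # attended to" when producing its prediction.
    scores = keys @ query / np.sqrt(query.shape[0])
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return dict(zip(tokens, weights.round(3)))

# Illustrative use with random vectors in place of learned representations.
rng = np.random.default_rng(0)
tokens = ["refund", "was", "denied", "after", "deadline"]
d = 8
print(attention_importance(tokens, rng.normal(size=d), rng.normal(size=(len(tokens), d))))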
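For the topic on readability assessments of textual explanations, a simple starting point is the standard Flesch Reading Ease score. The sketch below computes it with a crude vowel-group syllable heuristic (a dictionary-based counter or an NLP toolkit would be more accurate); the example explanation string is invented for illustration.

import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count groups of consecutive vowels, ignoring a trailing
    # silent 'e'; always report at least one syllable.
    word = word.lower()
    if word.endswith("e") and len(word) > 2:
        word = word[:-1]
    return max(1, len(re.findall(r"[aeiouy]+", word)))

def flesch_reading_ease(text: str) -> float:
    # Flesch Reading Ease: 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words).
    # Higher scores indicate easier-to-read text.
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    n_syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (n_words / sentences) - 84.6 * (n_syllables / n_words)

explanation = ("The loan request was declined because the declared income "
               "is below the threshold learned from past approvals.")
print(round(flesch_reading_ease(explanation), 1))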
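For the topic on contextual bandit algorithms for timely/relevant explanations, the sketch below illustrates one possible formulation: an epsilon-greedy bandit that, given a user context (e.g. self-reported expertise), picks an explanation style and updates its estimates from explicit feedback such as a helpfulness rating in [0, 1]. The contexts, arm names, and reward signal are assumptions for illustration, not a prescribed design.

import random
from collections import defaultdict

class ExplanationBandit:
    def __init__(self, arms, epsilon=0.1):
        self.arms = arms
        self.epsilon = epsilon
        self.counts = defaultdict(int)    # (context, arm) -> number of pulls
        self.values = defaultdict(float)  # (context, arm) -> mean observed reward

    def select(self, context):
        # Explore with probability epsilon, otherwise exploit the best-known arm.
        if random.random() < self.epsilon:
            return random.choice(self.arms)
        return max(self.arms, key=lambda arm: self.values[(context, arm)])

    def update(self, context, arm, reward):
        # Incremental mean of the feedback observed for this context/arm pair.
        key = (context, arm)
        self.counts[key] += 1
        self.values[key] += (reward - self.values[key]) / self.counts[key]

# Hypothetical interaction loop: show an explanation, collect a rating, learn.
bandit = ExplanationBandit(["short_text", "detailed_text", "visual"])
context = "novice_user"
arm = bandit.select(context)
bandit.update(context, arm, reward=0.8)  # user rated the explanation as helpful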