This special track at XAI 2024 focuses on fairness and trustworthiness in the application of Explainable AI (XAI). The objective is to explore how XAI can be leveraged to ensure equitable AI decision-making across societal domains. We highlight how robust metrics for evaluating explanations can enable the assessment of group fairness in XAI techniques. The track also discusses whether explainers can comply with the normative requirements of legal instruments such as the GDPR and the AI Act. Finally, we aim to address the challenge of embedding ethical principles and trust in AI systems, particularly in sensitive areas such as healthcare, finance, and public policy.

Topics of interest include, but are not limited to:

Advanced techniques for bias detection and mitigation via explanations in AI systems
Cognitive perception of explanations for trustworthy AI (e.g. amount of information in explanations, modalities of delivery)
Context-dependent approaches for explanations (e.g. for personnel/patients in clinical domains, etc.)
Stakeholder-specific explanations of AI systems (ML engineers/developers, auditing entities, etc.)
Compliance of XAI-based systems with regulations (GDPR, AI Act, the USA’s Blueprint for an AI Bill of Rights, etc.)
Mapping of normative terminology to state-of-the-art XAI techniques (e.g. GDPR Recital 63)
Consistency of AI explanations across demographics (groups, minorities, intersectional sub-cohorts)
Standards and protocols for delivering explanations of AI-based systems for transparent/responsible decision-making
Ethical considerations in the use of XAI for predictive analytics (e.g. understandability of explanations, alignment with information sensitivity and protection of its attributes)
Responsible explanations of AI-based systems (e.g. moral implications/standards, observational/interventional explanations)
Development of AI systems with culturally sensitive explanations (e.g. explanations in native languages vs. English, alignment with personal beliefs)
XAI methods for AI systems with subjective explanations (e.g. direct/nuanced explanations, color codes in image-based explanations)
Evaluation of the degree of accessibility of AI-based explanations (e.g. loan applicants/employees in banking)
Explainable methods for uncovering misrepresentation of categories in AI models
Development of approaches for assessing the loss of trust in AI-based automated systems via explanations
Carbon footprint of the synthetic generation of explanations for AI-based models (e.g. hardware requirements, computational complexity, resource consumption)
Investigation of the societal impact of AI-based systems via XAI methods (e.g. trust, reliability, discrimination, environmental impact)
Computational methods of trust for Explainable Artificial Intelligence