The field of Explainable Artificial Intelligence (XAI) is at a pivotal point: the focus is shifting from helping people understand AI model outputs and inner logic toward a new paradigm in which explainability serves as a tool for verifying, mining, and exploring information, including outputs from AI and other automated decision-making systems. This special track emphasizes the integral role that generative AI can play in explainability, enabling the constructive verification of both AI model outputs and human decisions and intuitions, and thereby deepening our comprehension of data and phenomena. Such generative XAI systems should embed epistemological and legal insights, ensuring that they are not only technically sound but also ethically and legally robust, particularly in situations where such qualities are essential. This track aims to catalyze advances in the field by combining computational theories of explainability with philosophical and legal insights, enhancing the capability of XAI systems not only to explain but also to verify and expand our understanding of AI-generated data and decisions and of the world they model.

Topics

Using generative AI to produce verifiable explanations that enhance understanding and knowledge acquisition from data
Using generative AI to drive interaction with task-specific AI models
Integrating philosophical/legal principles into XAI systems to foster the generation of verifiable and informative explanations
Exploring the epistemological aspects of XAI, focusing on the nature of models, their explanatory targets, and the interplay between system transparency and verifiability
Developing theory-driven methodologies for the verifiability of AI outputs
Assessing the quality and verifiability of explainability techniques through interdisciplinary methods
Evaluative AI strategies for critiquing AI-generated options and human intuitions, reducing biases such as automation bias
Intelligent user interfaces for generative XAI
Using generative AI to infer the beliefs and intents of explainees
Resolving misunderstandings/disagreements between machines and explainees using generative language models
XAI as a toolset for knowledge generation/discovery
Explanatory AI (YAI) and generative XAI for customized explanations
Language models for the automated generation of surrogate or explainable-by-design AI models