
Generative Artificial Intelligence (GenAI) is revolutionizing machine learning research and rapidly pushing the boundaries of computer vision, natural language processing, and multi-modal learning. This progress has already raised significant concerns about the extreme complexity of machine learning systems: the effective and safe deployment of GenAI solutions depends on a deeper understanding of their decision-making processes. The eXplainable Artificial Intelligence (XAI) community, historically focused on purely predictive models, has tackled the challenge of understandability for years. However, current XAI methods often constrain human creativity when debugging machine learning systems. The field of XAI is now at a pivotal point where the focus is shifting from simply understanding AI model outputs and inner logic to a new paradigm, in which explainability becomes a tool for verifying, mining, and exploring information, including the outputs of AI and other automated decision-making systems. This special track emphasizes the critical role GenAI can play in enhancing explainability, enabling constructive verification of both AI model outputs and human decisions and intuitions. With this in mind, we distinguish two key themes: i) how GenAI can advance the frontier of XAI (GenAI for XAI), and ii) how the XAI experience can address critical challenges in GenAI (XAI for GenAI). The goal of this track is to bridge the two domains and integrate their development, fostering innovation and collaboration.

Topics of interest include, but are not limited to:
- Using generative AI to produce verifiable explanations that enhance understanding and knowledge acquisition from data.
- Developing theory-driven methodologies for the verifiability of AI outputs.
- Assessing the quality and verifiability of explainability techniques through interdisciplinary methods.
- Evaluative AI strategies for critiquing AI-generated options and human intuitions, reducing biases such as over-reliance on automation.
- Using generative AI to infer the beliefs and intent of explainees.
- XAI as a toolset for knowledge generation and discovery.
- Explanatory AI (YAI) and generative XAI for customized explanations.
- Language models for the automated generation of surrogate or explainable-by-design AI models.
- GenAI and XAI solutions for mitigating over-reliance or under-reliance on AI explanations.
- Using generative AI to create counterfactual, semifactual, and alterfactual explanations for enhanced transparency.
- Developing methods for diagnosing model failures to improve robustness and reliability.
- Designing approaches for model and data correction to address errors and improve accuracy.
- Creating novel sample-based explanation techniques to provide diverse insights into model behavior.
- Generating textual explanations that align with human language and improve interpretability.
- Developing dialogue-based and interactive XAI systems for iterative and personalized explanations.
- Constructing interpretable models that prioritize human understanding while maintaining performance.
- Enhancing latent space understanding to uncover meaningful patterns and relationships in model representations.
- Detecting and eliminating memorization in generative models to improve generalization and safety.
- Establishing data attribution and valuation frameworks to enhance model accountability and provenance.
- Implementing bias mitigation techniques to ensure fairness and equity in AI-generated outcomes.
- Advancing feature attribution methods to provide clearer insights into model decision-making.
- Developing concept-based explanation techniques for deeper interpretability of generative models.
- Improving the interpretability and safety of generative models for ethical deployment.