
Existing explainable AI (XAI) techniques, such as LIME and SHAP, primarily produce feature-level explanations: they identify the input features (or sets of features) most responsible for a given outcome. These techniques work well with machine learning (ML) models that operate on human-interpretable features such as ‘income’ or ‘age’. They are far less effective, however, with contemporary deep learning (DL) models, which typically rely on low-level features, such as the pixels of an image, that carry no human-interpretable meaning. Overcoming this limitation requires a shift from feature-based to concept-based explanations, i.e., explanations expressed in terms of higher-level variables (called concepts) that human users can readily understand and, possibly, manipulate.

In recent years, a variety of concept-based XAI techniques have been proposed, spanning both inherently interpretable models and post-hoc explainability methods. These techniques tend to provide more effective explanations, exhibit greater stability under perturbations, and offer enhanced robustness to adversarial attacks. Despite these benefits, research in concept-based XAI is still at an early stage, with ample room for further advancement, particularly in real-world applications.

This special track seeks to engage the XAI community in advancing concept-based methodologies and in promoting their application in domains where high-level, human-interpretable explanations can improve the interaction between AI systems and their users. We invite submissions that propose novel methods for generating concept-based explanations, explore the application of new and existing concept-based techniques in specific domains, or introduce evaluation frameworks and metrics for assessing the efficacy of such explanations. Topics of interest include, but are not limited to:
- Novel inherently interpretable concept-based architectures, e.g., based on Concept Bottleneck Models and their extensions (see the illustrative sketch after this list)
- Novel techniques for generating post-hoc concept-based explanations
- Generative concept-based architectures
- Multi-modal concept-based architectures
- Concept-based XAI techniques for Large Language Models (LLMs)
- Neural-symbolic concept-based XAI methods
- Architectures integrating concept-based XAI with causal discovery and causal inference techniques
- Architectures combining concept-based XAI with ontologies or semantic structures
- Methods for identifying and retrieving high-level concepts using LLMs
- Concept-based approaches for managing out-of-distribution samples
- Applications of Human-Computer Interaction to concept-based XAI
- Evaluation frameworks for assessing concept-based explanations, focusing on accuracy, robustness, and domain-specific relevance
- Experimental studies on concept-based explanations and user trust, exploring both general and domain-specific contexts
- Experimental studies on cognitive alignment between concept-based explanations and human reasoning across application domains
- Interfaces and tools for non-expert users, facilitating exploration and interaction with concept-based explanations
- Theoretical guarantees on robustness, stability, and generalization in concept-based explanation methods
- Critical analyses of the strengths and limitations of concept-based explainability approaches
- Methods to address information leakage in concept representations
- Methods for improving concept intervention techniques
- Node-concept association methods for unsupervised models
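
To make the shift from feature-level to concept-level explanations concrete, the following is a minimal sketch of a Concept Bottleneck Model, written in PyTorch purely for illustration; the class and parameter names (ConceptBottleneckModel, n_concepts, concept_intervention) are our own placeholders and do not refer to any specific library or required submission format. The model first maps raw inputs to human-interpretable concept activations and then makes its final prediction from those concepts alone, so explanations and test-time interventions can be phrased at the concept level rather than the pixel level.

```python
# Illustrative sketch only, assuming PyTorch is available; all names are hypothetical.
import torch
import torch.nn as nn


class ConceptBottleneckModel(nn.Module):
    """Concept-bottleneck architecture: input x -> concepts c -> prediction y."""

    def __init__(self, n_features: int, n_concepts: int, n_classes: int):
        super().__init__()
        # Concept encoder: predicts concept activations from raw features.
        self.concept_encoder = nn.Sequential(
            nn.Linear(n_features, 64),
            nn.ReLU(),
            nn.Linear(64, n_concepts),
        )
        # Task head: sees only the concepts, never the raw input.
        self.task_head = nn.Linear(n_concepts, n_classes)

    def forward(self, x, concept_intervention=None):
        concepts = torch.sigmoid(self.concept_encoder(x))
        if concept_intervention is not None:
            # Test-time intervention: replace predicted concepts with
            # user-supplied values wherever the boolean mask is True.
            mask, values = concept_intervention
            concepts = torch.where(mask, values, concepts)
        logits = self.task_head(concepts)
        return logits, concepts


if __name__ == "__main__":
    model = ConceptBottleneckModel(n_features=32, n_concepts=5, n_classes=3)
    x = torch.randn(4, 32)
    logits, concepts = model(x)
    # The concept vector itself serves as the explanation of each prediction.
    print("predicted concepts:\n", concepts)
```

Interventions of the kind sketched via the concept_intervention argument are the operations that several topics above, such as concept intervention techniques, information leakage, and out-of-distribution handling, aim to improve or analyze.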





