{"id":5631,"date":"2024-12-30T18:43:01","date_gmt":"2024-12-30T18:43:01","guid":{"rendered":"https:\/\/xaiworldconference.com\/2025\/?page_id=5631"},"modified":"2024-12-30T18:54:56","modified_gmt":"2024-12-30T18:54:56","slug":"concept-based-explainable-ai","status":"publish","type":"page","link":"https:\/\/xaiworldconference.com\/2025\/concept-based-explainable-ai\/","title":{"rendered":"Concept Based Explainable AI"},"content":{"rendered":"\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"1024\" src=\"https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/XAI-2025-concept-based-explainable-AI-1024x1024.png\" alt=\"\" class=\"wp-image-5637\" srcset=\"https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/XAI-2025-concept-based-explainable-AI-1024x1024.png 1024w, https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/XAI-2025-concept-based-explainable-AI-300x300.png 300w, https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/XAI-2025-concept-based-explainable-AI-150x150.png 150w, https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/XAI-2025-concept-based-explainable-AI-768x768.png 768w, https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/XAI-2025-concept-based-explainable-AI-1536x1536.png 1536w, https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/XAI-2025-concept-based-explainable-AI-2048x2048.png 2048w, https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/XAI-2025-concept-based-explainable-AI-470x470.png 470w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow 
wp-block-column-is-layout-flow\">\n<p style=\"font-size:14px\">Existing explainable AI techniques, such as LIME or SHAP, primarily produce feature-level explanations, i.e., they focus on identifying the features (or sets of features) of the input that are most responsible for a given outcome. These techniques are effective with machine learning (ML) models that use human-interpretable features such as &#8216;income&#8217; or &#8216;age&#8217;. However, they are much less effective with more contemporary deep learning (DL) models, which typically rely on low-level features, such as the pixels of an image, that lack human-interpretable meaning. Overcoming this issue requires a shift from feature-based to concept-based explanations, i.e., explanations involving higher-level variables (called concepts) that human users can easily understand and possibly manipulate. In recent years, a variety of concept-based XAI techniques have been proposed, including both inherently interpretable models and post-hoc explainability methods. These techniques tend to provide more effective explanations, exhibit greater stability under perturbations, and offer enhanced robustness to adversarial attacks. Despite these benefits, research in concept-based XAI remains in its early stages, with opportunities for further advancement, particularly in the context of real-world applications. This special track seeks to engage the XAI community in advancing concept-based methodologies and promoting their application in domains where high-level, human-interpretable explanations can enhance user interaction with AI systems. 
Submissions are invited that address novel methods for generating concept-based explanations, explore the application of both new and existing concept-based XAI techniques across specific domains, and propose evaluation frameworks and metrics for assessing the efficacy of such explanations.<\/p>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\"><\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<ul class=\"wp-block-list\">\n<li style=\"font-size:14px\">Novel inherently interpretable concept-based architectures, e.g., based on Concept Bottleneck Models and extensions<\/li>\n\n\n\n<li style=\"font-size:14px\">Novel techniques for generating post-hoc concept-based explanations<\/li>\n\n\n\n<li style=\"font-size:14px\">Generative concept-based architectures<\/li>\n\n\n\n<li style=\"font-size:14px\">Multi-modal concept-based architectures<\/li>\n\n\n\n<li style=\"font-size:14px\">Concept-based XAI techniques for Large Language Models (LLMs)<\/li>\n\n\n\n<li style=\"font-size:14px\">Neural-symbolic concept-based XAI methods<\/li>\n\n\n\n<li style=\"font-size:14px\">Architectures integrating concept-based XAI with causal discovery and causal inference techniques<\/li>\n\n\n\n<li style=\"font-size:14px\">Architectures combining concept-based XAI with ontologies or semantic structures<\/li>\n\n\n\n<li style=\"font-size:14px\">Methods for identifying and retrieving high-level concepts using LLMs<\/li>\n\n\n\n<li style=\"font-size:14px\">Concept-based approaches for managing out-of-distribution samples<\/li>\n\n\n\n<li style=\"font-size:14px\">Applications of Human-Computer Interaction to concept-based 
XAI<\/li>\n<\/ul>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<ul class=\"wp-block-list\">\n<li style=\"font-size:14px\">Evaluation frameworks for assessing concept-based explanations, focusing on accuracy, robustness, and domain-specific relevance<\/li>\n\n\n\n<li style=\"font-size:14px\">Experimental studies on concept-based explanations and user trust, exploring both general and domain-specific contexts<\/li>\n\n\n\n<li style=\"font-size:14px\">Experimental studies on cognitive alignment between concept-based explanations and human reasoning across application domains<\/li>\n\n\n\n<li style=\"font-size:14px\">Interfaces and tools for non-expert users, facilitating exploration and interaction with concept-based explanations<\/li>\n\n\n\n<li style=\"font-size:14px\">Theoretical guarantees on robustness, stability, and generalization in concept-based explanation methods<\/li>\n\n\n\n<li style=\"font-size:14px\">Critical analyses of the strengths and limitations of concept-based explainability approaches<\/li>\n\n\n\n<li style=\"font-size:14px\">Methods to address information leakage in concept representation<\/li>\n\n\n\n<li style=\"font-size:14px\">Methods for improving concept intervention techniques<\/li>\n\n\n\n<li style=\"font-size:14px\">Node-concept association methods for unsupervised models<\/li>\n<\/ul>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\"><div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"500\" height=\"200\" src=\"https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/SUPSI_logo.png\" alt=\"\" class=\"wp-image-5639\" 
srcset=\"https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/SUPSI_logo.png 500w, https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/SUPSI_logo-300x120.png 300w\" sizes=\"auto, (max-width: 500px) 100vw, 500px\" \/><\/figure>\n<\/div><\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\"><div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"253\" height=\"160\" src=\"https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/TUM_Logo.png\" alt=\"\" class=\"wp-image-5640\" style=\"width:253px;height:auto\"\/><\/figure>\n<\/div><\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\"><div class=\"wp-block-image\">\n<figure class=\"aligncenter size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"160\" src=\"https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/UniLI_logo-1024x160.png\" alt=\"\" class=\"wp-image-5641\" srcset=\"https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/UniLI_logo-1024x160.png 1024w, https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/UniLI_logo-300x47.png 300w, https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/UniLI_logo-768x120.png 768w, https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/UniLI_logo-1536x240.png 1536w, https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/UniLI_logo.png 1705w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/div><\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\">\n<figure class=\"wp-block-image 
size-large is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"724\" src=\"https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/PoliTo_logo-1024x724.png\" alt=\"\" class=\"wp-image-5644\" style=\"width:249px;height:auto\" srcset=\"https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/PoliTo_logo-1024x724.png 1024w, https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/PoliTo_logo-300x212.png 300w, https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/PoliTo_logo-768x543.png 768w, https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/PoliTo_logo.png 1268w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\"><div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"800\" height=\"210\" src=\"https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/CAM_logo.png\" alt=\"\" class=\"wp-image-5646\" style=\"width:251px;height:auto\" srcset=\"https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/CAM_logo.png 800w, https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/CAM_logo-300x79.png 300w, https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/CAM_logo-768x202.png 768w\" sizes=\"auto, (max-width: 800px) 100vw, 800px\" \/><\/figure>\n<\/div><\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\"><div class=\"wp-block-image\">\n<figure class=\"aligncenter size-large is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"411\" src=\"https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/IBM_logo-1024x411.png\" alt=\"\" class=\"wp-image-5647\" style=\"width:232px;height:auto\" 
srcset=\"https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/IBM_logo-1024x411.png 1024w, https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/IBM_logo-300x120.png 300w, https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/IBM_logo-768x308.png 768w, https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/IBM_logo-1536x616.png 1536w, https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/IBM_logo-2048x822.png 2048w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/div><\/div>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>Existing explainable AI techniques, such as LIME or SHAP, primarily produce feature-level explanations, i.e., they focus on identifying the features (or sets of features) of the input that are most responsible for a given outcome. These techniques are effective with machine learning (ML) models that use human-interpretable features such as &#8216;income&#8217; or &#8216;age&#8217;. 
However, &hellip; <\/p>\n","protected":false},"author":2,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"_eb_attr":"","footnotes":""},"class_list":["post-5631","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/xaiworldconference.com\/2025\/wp-json\/wp\/v2\/pages\/5631","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/xaiworldconference.com\/2025\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/xaiworldconference.com\/2025\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/xaiworldconference.com\/2025\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/xaiworldconference.com\/2025\/wp-json\/wp\/v2\/comments?post=5631"}],"version-history":[{"count":5,"href":"https:\/\/xaiworldconference.com\/2025\/wp-json\/wp\/v2\/pages\/5631\/revisions"}],"predecessor-version":[{"id":5648,"href":"https:\/\/xaiworldconference.com\/2025\/wp-json\/wp\/v2\/pages\/5631\/revisions\/5648"}],"wp:attachment":[{"href":"https:\/\/xaiworldconference.com\/2025\/wp-json\/wp\/v2\/media?parent=5631"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}