{"id":2834,"date":"2024-01-10T17:28:40","date_gmt":"2024-01-10T17:28:40","guid":{"rendered":"https:\/\/xaiworldconference.com\/2024\/?page_id=2834"},"modified":"2024-06-27T16:03:31","modified_gmt":"2024-06-27T16:03:31","slug":"concept-based-global-explainability","status":"publish","type":"page","link":"https:\/\/xaiworldconference.com\/2025\/concept-based-global-explainability\/","title":{"rendered":"Concept-based global explainability"},"content":{"rendered":"\n<div class=\"wp-block-media-text is-stacked-on-mobile\"><figure class=\"wp-block-media-text__media\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"1024\" src=\"https:\/\/xaiworldconference.com\/2024\/wp-content\/uploads\/2024\/06\/concept-based-global-explainability-1024x1024.png\" alt=\"\" class=\"wp-image-5131 size-full\" srcset=\"https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/06\/concept-based-global-explainability-1024x1024.png 1024w, https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/06\/concept-based-global-explainability-300x300.png 300w, https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/06\/concept-based-global-explainability-150x150.png 150w, https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/06\/concept-based-global-explainability-768x768.png 768w, https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/06\/concept-based-global-explainability-1536x1536.png 1536w, https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/06\/concept-based-global-explainability-2048x2048.png 2048w, https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/06\/concept-based-global-explainability-470x470.png 470w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure><div class=\"wp-block-media-text__content\">\n<p>Deep Neural Networks have demonstrated remarkable success across various disciplines, primarily due to their ability to learn intricate data representations. 
However, the inherent semantic nature of these representations remains elusive, posing challenges for the responsible application of Deep Learning methods, particularly in safety-critical domains. In response to this challenge, this special track delves into global explainability, a subfield of Explainable AI. Global explainability methods aim to interpret which abstractions a network has learned, either by analyzing the network&#8217;s reliance on specific concepts or by examining individual neurons and their functional roles within models. This line of work extends to identifying and interpreting circuits\u2014computational subgraphs that trace the flow of information through complex architectures. Furthermore, global explainability can also be employed to explain the local decision-making of models, an approach termed glocal explainability.<\/p>\n<\/div><\/div>\n\n\n\n<h2 class=\"wp-block-heading\">Topics<\/h2>\n\n\n\n<figure class=\"wp-block-table is-style-stripes has-small-font-size\"><table><tbody><tr><td>quantification of interpretability of deep visual representations via network dissection XAI methods<\/td><\/tr><tr><td>compositional explanations of neurons<\/td><\/tr><tr><td>labelling neural representations with inverse recognition<\/td><\/tr><tr><td>automatic description XAI methods for neuron representations in deep vision networks<\/td><\/tr><tr><td>natural language-based descriptions of deep visual features for XAI<\/td><\/tr><tr><td>identification and analysis of interpretable subspaces in image representations<\/td><\/tr><tr><td>magnitude-constrained optimization methods for feature visualization in deep neural networks<\/td><\/tr><tr><td>human-understandable explanations through concept relevance propagation<\/td><\/tr><tr><td>attribution maps for enhancing the explainability of concept-based 
features<\/td><\/tr><tr><td>concept recursive activation factorization methods for XAI<\/td><\/tr><tr><td>quantitative testing with concept activation vectors<\/td><\/tr><tr><td>completeness-aware concept-based explanations in deep learning<\/td><\/tr><tr><td>non-negative concept activation vectors for invertible concept-based explanations in convolutional neural networks<\/td><\/tr><tr><td>multi-dimensional concept discovery methods for XAI<\/td><\/tr><tr><td>mechanistic interpretability for automated circuit discovery<\/td><\/tr><tr><td>brain-inspired modular training for mechanistic interpretability<\/td><\/tr><tr><td>vision-language mechanistic interpretability XAI methods<\/td><\/tr><tr><td>mechanistic interpretability for grokking measures<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\">Supported by<\/h2>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\"><div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full is-resized\"><a href=\"https:\/\/twitter.com\/umi_lab_ai?s=21&amp;t=_l2dvmNwfXVT_BhNkrvo2w\"><img loading=\"lazy\" decoding=\"async\" width=\"335\" height=\"296\" src=\"https:\/\/xaiworldconference.com\/2024\/wp-content\/uploads\/2024\/02\/umi_lab-e1708463697933.png\" alt=\"\" class=\"wp-image-4031\" style=\"width:192px;height:auto\" srcset=\"https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/02\/umi_lab-e1708463697933.png 335w, https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/02\/umi_lab-e1708463697933-300x265.png 300w\" sizes=\"auto, (max-width: 335px) 100vw, 335px\" \/><\/a><\/figure>\n<\/div><\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\"><div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full 
is-resized\"><a href=\"https:\/\/www.bifold.berlin\/\"><img loading=\"lazy\" decoding=\"async\" width=\"501\" height=\"100\" src=\"https:\/\/xaiworldconference.com\/2024\/wp-content\/uploads\/2024\/02\/image.png\" alt=\"\" class=\"wp-image-4036\" style=\"width:278px;height:auto\" srcset=\"https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/02\/image.png 501w, https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/02\/image-300x60.png 300w\" sizes=\"auto, (max-width: 501px) 100vw, 501px\" \/><\/a><\/figure>\n<\/div><\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\"><div class=\"wp-block-image\">\n<figure class=\"aligncenter size-large is-resized\"><a href=\"https:\/\/www.atb-potsdam.de\/en\/\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"364\" src=\"https:\/\/xaiworldconference.com\/2024\/wp-content\/uploads\/2024\/03\/atb-1024x364.png\" alt=\"\" class=\"wp-image-4217\" style=\"width:347px;height:auto\" srcset=\"https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/03\/atb-1024x364.png 1024w, https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/03\/atb-300x107.png 300w, https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/03\/atb-768x273.png 768w, https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/03\/atb.png 1200w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/a><\/figure>\n<\/div><\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\"><\/div>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>Deep Neural Networks have demonstrated remarkable success across various disciplines, primarily due to their ability to learn intricate data representations. 
However, the inherent semantic nature of these representations remains elusive, posing challenges for the responsible application of Deep Learning methods, particularly in safety-critical domains. In response to this challenge, this special track delves into the &hellip; <\/p>\n","protected":false},"author":2,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"_eb_attr":"","footnotes":""},"class_list":["post-2834","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/xaiworldconference.com\/2025\/wp-json\/wp\/v2\/pages\/2834","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/xaiworldconference.com\/2025\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/xaiworldconference.com\/2025\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/xaiworldconference.com\/2025\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/xaiworldconference.com\/2025\/wp-json\/wp\/v2\/comments?post=2834"}],"version-history":[{"count":16,"href":"https:\/\/xaiworldconference.com\/2025\/wp-json\/wp\/v2\/pages\/2834\/revisions"}],"predecessor-version":[{"id":5132,"href":"https:\/\/xaiworldconference.com\/2025\/wp-json\/wp\/v2\/pages\/2834\/revisions\/5132"}],"wp:attachment":[{"href":"https:\/\/xaiworldconference.com\/2025\/wp-json\/wp\/v2\/media?parent=2834"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}