{"id":2998,"date":"2024-01-19T11:50:10","date_gmt":"2024-01-19T11:50:10","guid":{"rendered":"https:\/\/xaiworldconference.com\/2024\/?page_id=2998"},"modified":"2024-02-17T19:06:54","modified_gmt":"2024-02-17T19:06:54","slug":"software-engineering-for-xai","status":"publish","type":"page","link":"https:\/\/xaiworldconference.com\/2024\/software-engineering-for-xai\/","title":{"rendered":"Software Engineering for XAI"},"content":{"rendered":"\n<div class=\"wp-block-media-text is-stacked-on-mobile\"><figure class=\"wp-block-media-text__media\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"1024\" src=\"https:\/\/xaiworldconference.com\/2024\/wp-content\/uploads\/2024\/01\/ST-Software-Engineering-for-XAI-1-1024x1024.png\" alt=\"\" class=\"wp-image-3079 size-full\" srcset=\"https:\/\/xaiworldconference.com\/2024\/wp-content\/uploads\/2024\/01\/ST-Software-Engineering-for-XAI-1-1024x1024.png 1024w, https:\/\/xaiworldconference.com\/2024\/wp-content\/uploads\/2024\/01\/ST-Software-Engineering-for-XAI-1-300x300.png 300w, https:\/\/xaiworldconference.com\/2024\/wp-content\/uploads\/2024\/01\/ST-Software-Engineering-for-XAI-1-150x150.png 150w, https:\/\/xaiworldconference.com\/2024\/wp-content\/uploads\/2024\/01\/ST-Software-Engineering-for-XAI-1-768x768.png 768w, https:\/\/xaiworldconference.com\/2024\/wp-content\/uploads\/2024\/01\/ST-Software-Engineering-for-XAI-1-1536x1536.png 1536w, https:\/\/xaiworldconference.com\/2024\/wp-content\/uploads\/2024\/01\/ST-Software-Engineering-for-XAI-1-2048x2048.png 2048w, https:\/\/xaiworldconference.com\/2024\/wp-content\/uploads\/2024\/01\/ST-Software-Engineering-for-XAI-1-470x470.png 470w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure><div class=\"wp-block-media-text__content\">\n<p>Explainability is a non-functional requirement for software systems that focuses on providing information about specific aspects to a particular audience. 
It can target models and algorithms at design time (similar to classical XAI) and context-dependent system behaviour at runtime. In the latter case, we call a system self-explainable if it can autonomously decide when, how and to whom it should explain a topic. To develop such self-explainable systems, explainability needs to be considered carefully throughout the whole software engineering process: discovering requirements related to explanations (e.g., comprehension problems, timing, interaction capabilities, different needs of stakeholders), design methodologies, tools and algorithms for (self-)explaining systems, as well as testing explainability. Currently, there is little research on systematic explainability engineering; instead, a long list of open research questions remains, posing challenges for different software engineering disciplines. These challenges must be tackled at an interdisciplinary level, integrating expertise from various areas of computer science (software engineering, AI, requirements engineering, HCI, formal methods, logic) and the social sciences (law, psychology, philosophy). This track addresses software engineering perspectives on explainability, with the aim of joining forces to make complex systems of systems with AI components explainable.<\/p>\n<\/div><\/div>\n\n\n\n<h2 class=\"wp-block-heading\">Topics<\/h2>\n\n\n\n<figure class=\"wp-block-table is-style-stripes has-small-font-size\"><table><tbody><tr><td>Design approaches and modelling mechanisms for self-explaining AI systems (e.g. 
through UML models,&#8230;)<\/td><\/tr><tr><td>Modelling AI-based systems and explanations for temporal uncertainties<\/td><\/tr><tr><td>Event-driven explanations for enhancing situational awareness of AI-based systems<\/td><\/tr><tr><td>Adaptive explanation delivery<\/td><\/tr><tr><td>Temporal dynamics and aspects of model explanations in human-AI interaction<\/td><\/tr><tr><td>Tools for error analysis\/debugging to identify root causes of issues related to explanations at runtime<\/td><\/tr><tr><td>Integration of user\/environment feedback on the quality\/usefulness of explanations into XAI systems<\/td><\/tr><tr><td>Definition\/adjustment of confidence levels of AI models based on the current operating conditions via runtime explanations<\/td><\/tr><tr><td>Tracking\/explaining an AI model&#8217;s performance at runtime (e.g. real-time monitoring, performance\/validation metrics at runtime)<\/td><\/tr><tr><td>Assessing the robustness of explanations of AI-based systems in dynamically changing environments<\/td><\/tr><tr><td>Explaining the dynamic behaviour of runtime environments to assist AI-based technologies in choosing sensible outcomes<\/td><\/tr><tr><td>Explainability of verification methods for machine-learned models (e.g. model checking)<\/td><\/tr><tr><td>Engineering of explanations for system behaviour using machine-learned models (e.g. 
finite automata, state machines) <\/td><\/tr><tr><td>Industry case studies on SW development processes for XAI<\/td><\/tr><tr><td>XAI methods for analysing AI components during system design time (correctness, safety, reliability and robustness)<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\">Supported by<\/h2>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\"><div class=\"wp-block-image\">\n<figure class=\"aligncenter size-large is-resized\"><a href=\"https:\/\/ceti.one\/\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"286\" src=\"https:\/\/xaiworldconference.com\/2024\/wp-content\/uploads\/2024\/02\/kloes_CeTI-1024x286.jpg\" alt=\"\" class=\"wp-image-3790\" style=\"width:317px;height:auto\" srcset=\"https:\/\/xaiworldconference.com\/2024\/wp-content\/uploads\/2024\/02\/kloes_CeTI-1024x286.jpg 1024w, https:\/\/xaiworldconference.com\/2024\/wp-content\/uploads\/2024\/02\/kloes_CeTI-300x84.jpg 300w, https:\/\/xaiworldconference.com\/2024\/wp-content\/uploads\/2024\/02\/kloes_CeTI-768x214.jpg 768w, https:\/\/xaiworldconference.com\/2024\/wp-content\/uploads\/2024\/02\/kloes_CeTI.jpg 1470w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/a><\/figure>\n<\/div><\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\"><div class=\"wp-block-image\">\n<figure class=\"aligncenter size-large is-resized\"><a href=\"https:\/\/www.icm-bw.de\/\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"688\" src=\"https:\/\/xaiworldconference.com\/2024\/wp-content\/uploads\/2024\/02\/innovation_campus-1024x688.png\" alt=\"\" class=\"wp-image-3791\" style=\"width:228px;height:auto\" 
srcset=\"https:\/\/xaiworldconference.com\/2024\/wp-content\/uploads\/2024\/02\/innovation_campus-1024x688.png 1024w, https:\/\/xaiworldconference.com\/2024\/wp-content\/uploads\/2024\/02\/innovation_campus-300x202.png 300w, https:\/\/xaiworldconference.com\/2024\/wp-content\/uploads\/2024\/02\/innovation_campus-768x516.png 768w, https:\/\/xaiworldconference.com\/2024\/wp-content\/uploads\/2024\/02\/innovation_campus.png 1188w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/a><\/figure>\n<\/div><\/div>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>Explainability is a non-functional requirement for software systems that focuses on providing information about specific aspects to a particular audience. It can target models and algorithms at design time (similar to classical XAI) and context-dependent system behaviour at runtime. In the latter case, we call a system self-explainable if it can autonomously decide when, how &hellip; <\/p>\n","protected":false},"author":2,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"_eb_attr":"","footnotes":""},"class_list":["post-2998","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/xaiworldconference.com\/2024\/wp-json\/wp\/v2\/pages\/2998","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/xaiworldconference.com\/2024\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/xaiworldconference.com\/2024\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/xaiworldconference.com\/2024\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/xaiworldconference.com\/2024\/wp-json\/wp\/v2\/comments?post=2998"}],"version-history":[{"count":18,"href":"https:\/\/xaiworldconference.com\/2024\/wp-json\/wp\/v2\/pages\/2998\/revisions"}],"predecessor-version":[{"id":4014,"href":"https:\/\/xaiworldconference.com\/2024\/wp-json\/wp\/v2\/pages\/2998\/revisions\/401
4"}],"wp:attachment":[{"href":"https:\/\/xaiworldconference.com\/2024\/wp-json\/wp\/v2\/media?parent=2998"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}