{"id":5450,"date":"2024-12-21T17:49:41","date_gmt":"2024-12-21T17:49:41","guid":{"rendered":"https:\/\/xaiworldconference.com\/2025\/?page_id=5450"},"modified":"2024-12-24T20:44:39","modified_gmt":"2024-12-24T20:44:39","slug":"xai-engineering","status":"publish","type":"page","link":"https:\/\/xaiworldconference.com\/2025\/xai-engineering\/","title":{"rendered":"XAI engineering"},"content":{"rendered":"\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"1024\" src=\"https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/XAI-engineering-1024x1024.png\" alt=\"\" class=\"wp-image-5452\" srcset=\"https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/XAI-engineering-1024x1024.png 1024w, https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/XAI-engineering-300x300.png 300w, https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/XAI-engineering-150x150.png 150w, https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/XAI-engineering-768x768.png 768w, https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/XAI-engineering-1536x1536.png 1536w, https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/XAI-engineering-2048x2048.png 2048w, https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/XAI-engineering-470x470.png 470w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<p style=\"font-size:14px\">Explainability is the requirement that AI systems provide human-understandable reasons for their outputs and actions. 
To achieve this, it must be taken into account that such AI systems are generally composed of three kinds of components: 1) symbolic AI, also known as classical AI, which uses approaches from logic, programming, and classical engineering; 2) sub-symbolic AI, which uses statistical-learning approaches, often referred to as machine learning; and 3) traditional non-AI software built with deterministic programming. Explainability of such AI systems can target models and algorithms at design time and context-dependent system behaviour at run-time. In the latter case, we call a system self-explainable if it can autonomously decide when, how, and to whom it should explain a topic. To develop such self-explainable systems, explainability needs to be considered carefully throughout the whole systems engineering process: discovering requirements related to explanations (e.g., comprehension problems, timing, interaction capabilities, different needs of stakeholders), design methodologies, tools and algorithms for (self-)explaining systems, as well as testing explainability. Currently, there is little research on systematic explainability engineering; instead, we face a long list of open research questions across different systems engineering disciplines. These challenges must be tackled at an interdisciplinary level, integrating expertise from various areas of computer science (e.g., systems engineering, AI, requirements engineering, HCI, formal methods, logic) and the social sciences (e.g., law, psychology, philosophy). 
This track tackles systems engineering perspectives on explainability, aiming to join forces to make complex systems of systems with AI components explainable.<\/p>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<ul class=\"wp-block-list\">\n<li style=\"font-size:14px\">Design and modeling of self-explaining AI systems: \n<ul class=\"wp-block-list\">\n<li style=\"font-size:14px\">Approaches and mechanisms for modeling explanations <\/li>\n\n\n\n<li style=\"font-size:14px\">Explaining temporal uncertainties <\/li>\n\n\n\n<li style=\"font-size:14px\">Modeling dynamic behaviors in human-AI interactions <\/li>\n\n\n\n<li style=\"font-size:14px\">Explaining system behavior with machine-learned models <\/li>\n\n\n\n<li style=\"font-size:14px\">Reflecting domain constraints, e.g., safety, in the design of self-explaining AI systems <\/li>\n<\/ul>\n<\/li>\n\n\n\n<li style=\"font-size:14px\">Run-time analysis and explanation of AI systems: \n<ul class=\"wp-block-list\">\n<li style=\"font-size:14px\">Tools for error analysis and debugging explanations <\/li>\n\n\n\n<li style=\"font-size:14px\">Monitoring the performance of explanation systems <\/li>\n\n\n\n<li style=\"font-size:14px\">Integrating feedback in the generation of explanations <\/li>\n\n\n\n<li style=\"font-size:14px\">Robustness and adaptivity of explanations in changing environments <\/li>\n\n\n\n<li style=\"font-size:14px\">Determining confidence levels of explanations based on current operating conditions<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<ul class=\"wp-block-list\">\n<li style=\"font-size:14px\">Explainability and its 
relation to observability and monitoring <\/li>\n\n\n\n<li style=\"font-size:14px\">Validating the interaction of AI and traditional software &#8211; explanations across the boundary of AI and traditional software <\/li>\n\n\n\n<li style=\"font-size:14px\">Explanations as an oracle to systematically test AI systems <\/li>\n\n\n\n<li style=\"font-size:14px\">Using explanations to assess the quality of an AI system <\/li>\n<\/ul>\n<\/li>\n\n\n\n<li style=\"font-size:14px\">Practical approaches and (industry) case studies: \n<ul class=\"wp-block-list\">\n<li style=\"font-size:14px\">Impact of XAI on the engineering process <\/li>\n\n\n\n<li style=\"font-size:14px\">Explainability of AI in software engineering with LLMs<\/li>\n\n\n\n<li style=\"font-size:14px\">Applying explainability to correctness, safety, reliability, and robustness <\/li>\n\n\n\n<li style=\"font-size:14px\">Explanation validation metrics <\/li>\n\n\n\n<li style=\"font-size:14px\">Case studies<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\"><div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"969\" height=\"626\" src=\"https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/UOL.png\" alt=\"\" class=\"wp-image-5456\" style=\"width:221px;height:auto\" srcset=\"https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/UOL.png 969w, https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/UOL-300x194.png 300w, https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/UOL-768x496.png 768w\" sizes=\"auto, (max-width: 969px) 100vw, 969px\" \/><\/figure>\n<\/div><\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center 
is-layout-flow wp-block-column-is-layout-flow\"><div class=\"wp-block-image\">\n<figure class=\"aligncenter size-large is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"512\" src=\"https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/kit-1024x512.png\" alt=\"\" class=\"wp-image-5457\" style=\"width:240px;height:auto\" srcset=\"https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/kit-1024x512.png 1024w, https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/kit-300x150.png 300w, https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/kit-768x384.png 768w, https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/kit.png 1200w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/div><\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\"><div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"310\" height=\"93\" src=\"https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/dlr.png\" alt=\"\" class=\"wp-image-5458\" srcset=\"https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/dlr.png 310w, https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/dlr-300x90.png 300w\" sizes=\"auto, (max-width: 310px) 100vw, 310px\" \/><\/figure>\n<\/div><\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\"><div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"340\" height=\"100\" src=\"https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/fortiss.png\" alt=\"\" class=\"wp-image-5459\" 
style=\"width:232px;height:auto\" srcset=\"https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/fortiss.png 340w, https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/fortiss-300x88.png 300w\" sizes=\"auto, (max-width: 340px) 100vw, 340px\" \/><\/figure>\n<\/div><\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\"><div class=\"wp-block-image\">\n<figure class=\"aligncenter size-large is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"724\" src=\"https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/ICM-1024x724.png\" alt=\"\" class=\"wp-image-5460\" style=\"width:216px;height:auto\" srcset=\"https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/ICM-1024x724.png 1024w, https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/ICM-300x212.png 300w, https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/ICM-768x543.png 768w, https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/ICM-1536x1086.png 1536w, https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/ICM-2048x1448.png 2048w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/div><\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\"><div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"830\" height=\"130\" src=\"https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/logo_horizontal.png\" alt=\"\" class=\"wp-image-5557\" srcset=\"https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/logo_horizontal.png 830w, https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/logo_horizontal-300x47.png 300w, https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/logo_horizontal-768x120.png 
768w\" sizes=\"auto, (max-width: 830px) 100vw, 830px\" \/><\/figure>\n<\/div><\/div>\n<\/div>\n\n\n\n<p><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Explainability is the requirement for AI systems to provide human understandable reasons for the output and actions of the system. For this, it must be taken into account that such AI systems are generally comprised of three kinds of components: 1) Symbolic AI, also known as classical AI, uses approaches of logic, programming and classical &hellip; <\/p>\n","protected":false},"author":6,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"_eb_attr":"","footnotes":""},"class_list":["post-5450","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/xaiworldconference.com\/2025\/wp-json\/wp\/v2\/pages\/5450","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/xaiworldconference.com\/2025\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/xaiworldconference.com\/2025\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/xaiworldconference.com\/2025\/wp-json\/wp\/v2\/users\/6"}],"replies":[{"embeddable":true,"href":"https:\/\/xaiworldconference.com\/2025\/wp-json\/wp\/v2\/comments?post=5450"}],"version-history":[{"count":4,"href":"https:\/\/xaiworldconference.com\/2025\/wp-json\/wp\/v2\/pages\/5450\/revisions"}],"predecessor-version":[{"id":5558,"href":"https:\/\/xaiworldconference.com\/2025\/wp-json\/wp\/v2\/pages\/5450\/revisions\/5558"}],"wp:attachment":[{"href":"https:\/\/xaiworldconference.com\/2025\/wp-json\/wp\/v2\/media?parent=5450"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}