Explainability is the requirement for AI systems to provide human-understandable reasons for their outputs and actions. For this, it must be taken into account that such AI systems are generally composed of three kinds of components: 1) symbolic AI, also known as classical AI, which builds on logic, programming, and classical engineering; 2) sub-symbolic AI, which builds on statistical learning, often referred to as machine learning; and 3) traditional non-AI software based on deterministic programming. Explainability of such AI systems can target models and algorithms at design time as well as context-dependent system behavior at run-time. In the latter case, we call a system self-explainable if it can autonomously decide when, how, and to whom it should explain a topic.

To develop such self-explainable systems, explainability needs to be carefully considered throughout the whole systems engineering process: discovering explanation-related requirements (e.g., comprehension problems, timing, interaction capabilities, different stakeholder needs), developing design methodologies, tools, and algorithms for (self-)explaining systems, and testing explainability. Currently, there is little research on systematic explainability engineering; instead, we face a long list of open research questions across systems engineering disciplines. These challenges must be tackled on an interdisciplinary level, integrating expertise from various areas of computer science (e.g., systems engineering, AI, requirements engineering, HCI, formal methods, logic) and the social sciences (e.g., law, psychology, philosophy). This track addresses systems engineering perspectives on explainability, aiming to join forces to make complex systems of systems with AI components explainable.
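As an illustration of this notion of self-explainability, the following minimal Python sketch shows a component that decides when to explain (here, via an assumed confidence threshold), how to phrase the explanation, and to whom it is addressed. The stakeholder roles and the trigger condition are illustrative assumptions, not a prescribed design.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional


class Stakeholder(Enum):
    """Illustrative stakeholder roles with different explanation needs."""
    END_USER = auto()
    OPERATOR = auto()
    AUDITOR = auto()


@dataclass
class Explanation:
    """A minimal explanation payload: what is explained, to whom, and how."""
    topic: str
    audience: Stakeholder
    content: str


class SelfExplainingComponent:
    """Hypothetical component that decides when, how, and to whom to explain.

    The confidence threshold and the audience-specific phrasing below are
    assumptions for illustration only.
    """

    def __init__(self, confidence_threshold: float = 0.7) -> None:
        self.confidence_threshold = confidence_threshold

    def maybe_explain(self, decision: str, confidence: float,
                      audience: Stakeholder) -> Optional[Explanation]:
        # "When": only explain if the decision is uncertain enough to need it.
        if confidence >= self.confidence_threshold:
            return None
        # "How" and "to whom": tailor the wording to the stakeholder.
        if audience is Stakeholder.END_USER:
            content = f"I chose '{decision}' but I am only {confidence:.0%} sure."
        else:
            content = (f"Decision '{decision}' emitted with confidence "
                       f"{confidence:.2f}, below threshold "
                       f"{self.confidence_threshold:.2f}.")
        return Explanation(topic=decision, audience=audience, content=content)


if __name__ == "__main__":
    component = SelfExplainingComponent()
    result = component.maybe_explain("reroute", confidence=0.55,
                                     audience=Stakeholder.END_USER)
    if result is not None:
        print(result.content)
```

In practice, such decisions interact with many of the topics listed below, for example determining confidence levels of explanations based on current operating conditions and reflecting the different needs of stakeholders.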

  • Design and modeling of self-explaining AI systems:
    • Approaches and mechanisms for modeling explanations
    • Explaining temporal uncertainties
    • Modeling dynamic behaviors in human-AI interactions
    • Explaining system behavior with machine-learned models
    • Reflecting domain constraints, e.g., safety, in the design of self-explaining AI systems
  • Run-time analysis and explanation of AI systems:
    • Tools for error analysis and debugging explanations
    • Monitoring the performance of explanation systems
    • Integrating feedback in the generation of explanations
    • Robustness and adaptivity of explanations in changing environments
    • Determining confidence levels of explanations based on current operating conditions
  • Explanations for verification and validation of AI models:
    • Explainability and its relation to observability and monitoring
    • Validating the interaction of AI and traditional software: explanations across the boundary between AI and traditional software
    • Explanations as an oracle to systematically test AI systems
    • Using explanations to assess the quality of an AI system
  • Practical approaches and (industry) case studies:
    • Impact of XAI on the engineering process
    • Explainability of AI in software engineering with LLMs
    • Applying explainability to correctness, safety, reliability, and robustness
    • Explanation validation metrics
    • Case studies