Explainability is a non-functional requirement for software systems that concerns providing information about specific aspects of a system to a particular audience. It can target models and algorithms at design time (as in classical XAI) as well as context-dependent system behaviour at runtime. In the latter case, we call a system self-explainable if it can autonomously decide when, how, and to whom it should explain a topic.

To develop such self-explainable systems, explainability needs to be considered carefully throughout the entire software engineering process: discovering explanation-related requirements (e.g., comprehension problems, timing, interaction capabilities, differing stakeholder needs), design methodologies, tools and algorithms for (self-)explaining systems, and testing explainability. Currently, there is little research on systematic explainability engineering; instead, there is a long list of open research questions that pose challenges for different software engineering disciplines. These challenges must be tackled on an interdisciplinary level, integrating expertise from various areas of computer science (software engineering, AI, requirements engineering, HCI, formal methods, logic) and the social sciences (law, psychology, philosophy). This track addresses software engineering perspectives on explainability, with the aim of joining forces to make complex systems of systems with AI components explainable.
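
As a concrete illustration of what "deciding when, how, and to whom to explain" could mean at runtime, the following minimal Python sketch shows one possible shape of such a decision component. All names (SelfExplanationManager, SystemEvent, Audience) and the trigger and routing rules are hypothetical assumptions for illustration, not taken from any specific system or method.

```python
# Hypothetical sketch of a self-explanation component: it decides WHEN to explain
# (trigger condition), TO WHOM (addressee selection), and HOW (modality).
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional


class Audience(Enum):
    END_USER = auto()
    OPERATOR = auto()
    DEVELOPER = auto()


@dataclass
class SystemEvent:
    kind: str             # e.g. "plan_changed", "confidence_drop" (assumed event kinds)
    confidence: float     # model confidence associated with the event, in [0, 1]
    safety_relevant: bool


@dataclass
class Explanation:
    audience: Audience
    modality: str         # "dialogue", "notification", ... (assumed modalities)
    content: str


class SelfExplanationManager:
    """Decides when, how, and to whom the system explains its behaviour at runtime."""

    def __init__(self, confidence_threshold: float = 0.6):
        self.confidence_threshold = confidence_threshold

    def should_explain(self, event: SystemEvent) -> bool:
        # WHEN: explain on safety-relevant events or when confidence drops too low.
        return event.safety_relevant or event.confidence < self.confidence_threshold

    def select_audience(self, event: SystemEvent) -> Audience:
        # TO WHOM: route safety-relevant issues to the operator, the rest to the end user.
        return Audience.OPERATOR if event.safety_relevant else Audience.END_USER

    def build_explanation(self, event: SystemEvent) -> Optional[Explanation]:
        if not self.should_explain(event):
            return None
        audience = self.select_audience(event)
        # HOW: use an interruptive dialogue only for safety-relevant events.
        modality = "dialogue" if event.safety_relevant else "notification"
        content = f"Event '{event.kind}' occurred (confidence {event.confidence:.2f})."
        return Explanation(audience, modality, content)


if __name__ == "__main__":
    manager = SelfExplanationManager()
    event = SystemEvent(kind="confidence_drop", confidence=0.4, safety_relevant=False)
    print(manager.build_explanation(event))
```

In a real system, the trigger conditions, addressee model, and available modalities would themselves result from the explainability requirements, design, and testing activities sketched above and reflected in the topics below.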

Topics

Design approaches and modelling mechanisms for self-explaining AI systems (e.g., through UML models, …)
Modelling AI-based systems and explanations for temporal uncertainties
Event-driven explanations for enhancing situational awareness of AI-based systems
Adaptive explanation delivery
Temporal dynamics and aspects of model explanations in human-AI interaction
Tools for error analysis/debugging to identify root causes of issues related to explanations at runtime
Integration of user/environment feedback on the quality/usefulness of explanations into the XAI systems
Definition/adjustment of confidence levels of AI models based on the current operating conditions via runtime explanations
Tracking/Explaining an AI model’s performance at runtime (e.g., real-time monitoring, performance/validation metrics)
Assessing the robustness of explanations of AI-based systems in dynamically changing environments
Explaining the dynamic behaviour of runtime environments to assist AI-based technologies in choosing sensible outcomes
Explainability of verification methods for machine-learned models (e.g. model checking)
Engineering of explanations for system behaviour using machine-learned models (e.g. finite automata, state machines)
Industry case studies on software development processes for XAI
XAI methods for analysing AI components during system design time (correctness, safety, reliability and robustness)

Supported by