This special track focuses on explainable autonomous agents: agents that make sequential decisions in their environment, aiming, for example, to reach some kind of goal or to maximize a notion of reward. This includes agents that choose their actions by planning with an environment model or by learning from experience, and agents that prepare plans or policies offline or choose each action while interacting with their environment online.

The focus of this special track contrasts with the extensive body of work on interpretable machine learning, which typically emphasizes understanding individual input-output relationships in "black box" models such as neural networks. While such models are an important tool, intelligent behavior extends over time and needs to be explained and understood as such. The challenge of explaining sequential decision-making, such as that of robots collaborating with humans or software agents engaged in complex ongoing tasks, has only recently gained attention. We may have AI agents that can beat us at chess, but can they teach us how to play? We may have search and rescue robots, but can we work with them effectively and efficiently in the field?

Autonomous agents are becoming an indispensable technology in countless domains, such as manufacturing, (semi-)autonomous cars, and socially assistive robotics. To increase successful AI adoption and acceptance in these fields, we need to better understand the behavior of these sequential decision-making agents: their learning and reasoning, their strengths and limitations.

Topics of interest include, but are not limited to:

  • Explainable (classical) planning
  • Explainable online search
  • Explainable and interpretable reinforcement learning
  • Causal inference in decision sequences
  • Explainable multi-agent systems
  • Evaluation methods and metrics for explainable agents
  • Human-centered evaluation of explainable sequential decision-making models
  • Policy or plan summarization and visualization
  • Explanations through state or feature importance
  • Explanations through counterfactuals for plans or policies
  • Explainable multi-objective planning/scheduling
  • User interfaces for sequential decision-making XAI
  • Interactive explanations and explanatory dialogue
  • Explainability for embodied systems/robotics
  • Formal foundations of explainable agency
  • Contestability of (semi-)automated decisions
  • HCI for explainability of sequential decision-making
  • Practical applications of XAI in goal-oriented tasks, e.g., planning/scheduling and pathfinding
  • Sequential decision-making approaches as models of explanatory dialogue with users