
This Special Session will focus on the explainability of Agentic AI, a rapidly emerging paradigm in which artificial systems exhibit autonomous, goal-directed, and proactive behavior. As AI agents gain the ability to make independent decisions and execute complex tasks in dynamic or collaborative environments, the need for transparency and interpretability becomes critical. The session will explore how Explainable AI (XAI) principles can be integrated into agentic architectures to ensure that autonomous agents remain understandable, trustworthy, and aligned with human values.
Key questions include:
- How can reasoning, planning, and decision-making processes within agentic systems be made transparent to users?
- What forms of explanation are needed for agents that maintain long-term goals and adapt to evolving contexts?
- How can we balance autonomy and explainability in multi-agent systems (MAS), particularly when agents interact or negotiate with humans and other agents?
The session will also examine hybrid approaches that combine large language models (LLMs) with symbolic reasoning, knowledge representation, and human-centered design to achieve both high performance and interpretability. Emphasis will be placed on human-agent collaboration and teamwork, highlighting explainable interaction strategies that enhance user understanding, trust, and control. Through these discussions, the session aims to bridge Agentic AI and XAI, fostering responsible and transparent autonomy in next-generation intelligent systems.
Keywords: Explainable AI, Agentic AI, Human-Agent Interaction, Multi-Agent Systems, Autonomous Agents, Trustworthy AI, Interpretability, Transparency, Cognitive Alignment, Explainable Planning, Human-Centered AI, Adaptive Explanations, Symbolic Reasoning, Large Language Models, Responsible AI, Governance, Ethics, Verification, Evaluation Frameworks, Trust and Safety
Topics
1. Agent Architectures and Multi-Agent Orchestration
- Formal models of agency, autonomy, and coordination
- Design of explainable single-agent and multi-agent systems
- Orchestration frameworks (e.g., CrewAI, AutoGen) with interpretable reasoning flows (see the sketch after this list)
- Explainable task decomposition, planning, and meta-reasoning strategies
- Visualization and explanation of agent decision paths and coordination mechanisms
- Development of inherently explainable agent architectures and orchestration policies
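As a minimal illustration of an interpretable reasoning flow, the sketch below shows an orchestrator that routes subtasks to agents while recording each delegation decision and its rationale in a structured trace. All class names, the routing rule, and the agent stubs are hypothetical assumptions for illustration and do not reflect the APIs of frameworks such as CrewAI or AutoGen.

```python
# Minimal sketch of an orchestration loop that records an interpretable
# reasoning flow. All class names, the routing rule, and the agent stubs are
# hypothetical; frameworks such as CrewAI or AutoGen expose their own APIs.
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class TraceEvent:
    step: int
    agent: str
    decision: str
    rationale: str


@dataclass
class Orchestrator:
    # Maps an agent name to the callable that executes a delegated subtask.
    agents: dict[str, Callable[[str], str]]
    trace: list[TraceEvent] = field(default_factory=list)

    def route(self, step: int, subtask: str) -> str:
        # Toy routing rule; a real system might use learned or symbolic policies.
        agent = "researcher" if "find" in subtask else "writer"
        rationale = f"subtask '{subtask}' matches the capability profile of '{agent}'"
        self.trace.append(TraceEvent(step, agent, f"delegate: {subtask}", rationale))
        return self.agents[agent](subtask)

    def explain(self) -> str:
        # Renders the recorded decision path as a human-readable explanation.
        return "\n".join(
            f"[{e.step}] {e.agent}: {e.decision} -- because {e.rationale}"
            for e in self.trace
        )


if __name__ == "__main__":
    orchestrator = Orchestrator(
        agents={
            "researcher": lambda t: f"notes on '{t}'",
            "writer": lambda t: f"draft for '{t}'",
        }
    )
    for i, subtask in enumerate(["find recent XAI surveys", "draft a summary of the results"]):
        orchestrator.route(i, subtask)
    print(orchestrator.explain())
```

The same trace structure could feed a visualization of the agent decision path, which is one of the directions this topic area invites.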
2. Autonomous Tool Use and Web Navigation
- Agents interacting with APIs, tools, and web environments
- Verification and compositional reasoning for autonomous operations
- Self-explaining tool-use pipelines and transparent decision flows
- Natural language rationales for browsing, data extraction, and tool selection
- Traceable and explainable tool invocation chains (illustrated in the sketch after this list)
- Adaptive explanations based on user context and expertise
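The sketch below illustrates, under assumed names, what a self-explaining tool-use pipeline with a traceable invocation chain might look like: every tool call is wrapped so that its argument, the agent's stated natural-language rationale, and the result are appended to an inspectable record. The tools and the wrapper are placeholders, not an existing API.

```python
# Illustrative sketch of a traceable tool invocation chain with rationales.
# Tool functions and the agent's stated rationales are hypothetical stand-ins.
import json
from datetime import datetime, timezone


def search_web(query: str) -> str:
    return f"3 results for '{query}'"          # stand-in for a real search tool


def extract_table(url: str) -> str:
    return f"table extracted from {url}"       # stand-in for a real extractor


TOOLS = {"search_web": search_web, "extract_table": extract_table}
invocation_chain: list[dict] = []


def invoke(tool: str, argument: str, rationale: str) -> str:
    """Run a tool and append a traceable, human-readable record of the call."""
    result = TOOLS[tool](argument)
    invocation_chain.append(
        {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "tool": tool,
            "argument": argument,
            "rationale": rationale,   # natural-language justification for the call
            "result": result,
        }
    )
    return result


if __name__ == "__main__":
    invoke("search_web", "agentic AI benchmarks",
           "The user asked for benchmarks, so I query the web first.")
    invoke("extract_table", "https://example.org/benchmarks",
           "The first result contains a comparison table worth extracting.")
    print(json.dumps(invocation_chain, indent=2))
```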
3. Benchmarking and Evaluation
- Evaluation frameworks emphasizing reasoning, robustness, and interpretability
- Integration of XAI metrics into existing benchmarks (e.g., AgentBench, SWE-bench, GAIA, WebArena), as illustrated in the sketch after this list
- Quantitative and user-centered evaluation of transparency and trust
- Development of explainability-centered benchmarks for cognitive alignment and comprehension
- Multi-modal evaluation combining behavioral and interpretive performance
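As one hedged example of folding an XAI metric into task-success evaluation, the sketch below scores each benchmark item on both answer correctness and a crude explanation-quality proxy (the fraction of gold evidence cited in the agent's rationale). The dataset structure, the agent stub, and the proxy metric are illustrative assumptions and are not drawn from AgentBench, SWE-bench, GAIA, or WebArena.

```python
# Hypothetical sketch of extending a task-success benchmark with a simple
# explanation-quality score; all components below are illustrative only.
from dataclasses import dataclass


@dataclass
class Item:
    question: str
    gold_answer: str
    gold_evidence: list[str]    # facts a faithful explanation should mention


def toy_agent(question: str) -> tuple[str, str]:
    """Stand-in agent returning (answer, natural-language rationale)."""
    return "42", "I multiplied 6 by 7 after checking the provided table."


def explanation_score(rationale: str, gold_evidence: list[str]) -> float:
    """Crude faithfulness proxy: fraction of gold evidence cited in the rationale."""
    hits = sum(1 for fact in gold_evidence if fact.lower() in rationale.lower())
    return hits / len(gold_evidence) if gold_evidence else 0.0


def evaluate(items: list[Item]) -> dict[str, float]:
    success, xai = 0.0, 0.0
    for item in items:
        answer, rationale = toy_agent(item.question)
        success += float(answer == item.gold_answer)
        xai += explanation_score(rationale, item.gold_evidence)
    n = len(items)
    return {"task_success": success / n, "explanation_quality": xai / n}


if __name__ == "__main__":
    items = [Item("What is 6 * 7?", "42", ["multiplied", "table"])]
    print(evaluate(items))
```

Reporting both scores side by side is the simplest form of the multi-metric evaluation this topic area calls for; user-centered studies would complement such automated proxies.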
4. Human-Agent Teaming and Protocols
- Formal models of trust, delegation, transparency, and cognitive alignment
- Adaptive explanation generation in collaborative settings
- Dialogue-based and context-aware explanatory interfaces
- Interactive explanations supporting joint problem solving and mutual understanding
- Personalization of explanations according to user goals and expertise (see the sketch after this list)
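A minimal sketch of explanation personalization, assuming a simple three-level expertise model: the same agent plan is explained at different levels of detail depending on the user profile. The expertise categories and the templates are illustrative assumptions only.

```python
# Minimal sketch of adapting explanation detail to user expertise.
# The expertise levels and templates are hypothetical.
def explain_plan(plan_steps: list[str], expertise: str) -> str:
    if expertise == "novice":
        # High-level, jargon-free summary.
        return (f"I will complete your request in {len(plan_steps)} steps, "
                f"starting with '{plan_steps[0]}'.")
    if expertise == "expert":
        # Full step-by-step trace for users who want to audit the agent.
        return "\n".join(f"{i + 1}. {step}" for i, step in enumerate(plan_steps))
    # Default: intermediate level of detail.
    return f"Planned steps: {', '.join(plan_steps)}."


if __name__ == "__main__":
    plan = ["retrieve calendar", "detect conflicts", "propose new meeting time"]
    for level in ("novice", "intermediate", "expert"):
        print(f"--- {level} ---")
        print(explain_plan(plan, level))
```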
5. Reliability, Trust, and Safety
- Formal validation, interpretability, and runtime assurance of agentic behavior
- Transparent safety mechanisms and interpretable control models
- Explainable reasoning logs for alignment and verification
- Integration of explainability with formal verification frameworks
- Real-time, interpretable monitoring and adaptive control strategies (a minimal sketch follows this list)
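The following sketch, with assumed rule and action formats, illustrates runtime monitoring that produces an explainable reasoning log: each proposed action is checked against declarative safety rules, and the verdict together with its justification is recorded for later alignment and verification.

```python
# Illustrative runtime-assurance sketch: actions are checked against
# declarative safety rules and every verdict is logged with a justification.
# The rules and the action dictionary format are assumptions for illustration.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Rule:
    name: str
    predicate: Callable[[dict], bool]    # True means the action violates the rule
    message: str


RULES = [
    Rule("no_payment_above_limit",
         lambda a: a.get("type") == "payment" and a.get("amount", 0) > 100,
         "Payments above 100 require human approval."),
    Rule("no_external_email",
         lambda a: a.get("type") == "email" and not a.get("to", "").endswith("@corp.example"),
         "Emails to external addresses are blocked for this agent."),
]

reasoning_log: list[str] = []


def monitor(action: dict) -> bool:
    """Return True if the action is allowed; append a human-readable verdict."""
    for rule in RULES:
        if rule.predicate(action):
            reasoning_log.append(f"BLOCKED {action}: violates '{rule.name}' ({rule.message})")
            return False
    reasoning_log.append(f"ALLOWED {action}: no safety rule triggered")
    return True


if __name__ == "__main__":
    monitor({"type": "payment", "amount": 250})
    monitor({"type": "email", "to": "alice@corp.example"})
    print("\n".join(reasoning_log))
```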
6. Ethics, Security, and Governance of Agentic Systems
- Normative frameworks for accountability, fairness, and privacy protection
- Transparent compliance and explainable ethical reasoning
- Human-readable justifications for agentic decisions
- Explainable audit trails supporting governance and oversight (see the sketch after this list)
- Detection and explanation of biases, risks, and adversarial behavior
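As a hypothetical illustration of an explainable audit trail, the sketch below hash-chains each recorded decision and its human-readable justification so that auditors can both read why an action was taken and verify that the record has not been altered. The field names and the chaining scheme are assumptions for illustration, not a prescribed design.

```python
# Hypothetical sketch of an explainable, tamper-evident audit trail:
# each entry stores a decision, its justification, and a hash linking it
# to the previous entry so that the chain can be verified by auditors.
import hashlib
import json

audit_trail: list[dict] = []


def record(decision: str, justification: str) -> None:
    previous_hash = audit_trail[-1]["entry_hash"] if audit_trail else "genesis"
    entry = {
        "decision": decision,
        "justification": justification,   # human-readable reason, for oversight
        "previous_hash": previous_hash,
    }
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    audit_trail.append(entry)


def verify() -> bool:
    """Recompute every hash to detect tampering with the recorded justifications."""
    previous = "genesis"
    for entry in audit_trail:
        body = {k: v for k, v in entry.items() if k != "entry_hash"}
        recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if body["previous_hash"] != previous or recomputed != entry["entry_hash"]:
            return False
        previous = entry["entry_hash"]
    return True


if __name__ == "__main__":
    record("declined loan application #12", "applicant income below policy threshold")
    record("escalated case #13 to human reviewer", "model confidence under 0.6")
    print(json.dumps(audit_trail, indent=2))
    print("audit trail intact:", verify())
```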




