Argumentation in AI encompasses formal frameworks and computational models that study, replicate, and support reasoning processes based on the construction, evaluation, and comparison of arguments. Rooted in logic, philosophy, and cognitive science, argumentation enables systems to engage in tasks such as decision-making, negotiation, and explanation by presenting structured arguments and counterarguments. This capability plays an important role in enhancing Explainable AI (XAI), as it provides transparent, intuitive, and interpretable justifications for an AI system’s decisions. Key applications include resolving conflicts in multi-agent systems, supporting human-computer interaction through transparent reasoning, and providing clear, intuitive justifications for AI decisions.
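As a minimal, illustrative sketch of what such a formal framework can look like (not a definitive implementation), the snippet below encodes a Dung-style abstract argumentation framework as a set of arguments and an attack relation, computes its grounded extension by iterating the characteristic function, and prints a short dialectical justification for one accepted conclusion; the loan-related argument names are hypothetical.

```python
# Sketch of a Dung-style abstract argumentation framework (AF): a set of
# arguments plus a binary attack relation. The grounded extension is the
# least fixed point of the characteristic function
# F(S) = {a | every attacker of a is attacked by some member of S}.

def grounded_extension(arguments, attacks):
    """Compute the grounded extension of the AF (arguments, attacks)."""
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}
    extension = set()
    while True:
        acceptable = {
            a for a in arguments
            if all(any((d, b) in attacks for d in extension) for b in attackers[a])
        }
        if acceptable == extension:          # fixed point reached
            return extension
        extension = acceptable

# Hypothetical example: 'approve_loan' is attacked by 'low_income',
# which is in turn attacked by the unattacked argument 'stable_employment'.
args = {"approve_loan", "low_income", "stable_employment"}
atts = {("low_income", "approve_loan"), ("stable_employment", "low_income")}

accepted = grounded_extension(args, atts)    # {'approve_loan', 'stable_employment'}

# A simple argumentative justification for one accepted conclusion:
for attacker, target in atts:
    if target == "approve_loan" and attacker not in accepted:
        defenders = [d for (d, t) in atts if t == attacker and d in accepted]
        print(f"'approve_loan' is accepted: attacker '{attacker}' is defeated by {defenders}")
```

Such a justification, tracing which arguments defeat the attackers of a conclusion, is the kind of transparent, structured explanation that argumentation-based XAI aims to provide.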
Integrating XAI with argumentation represents a frontier in enhancing AI systems’ transparency, accountability, and user trust. Open research questions include exploring the synergies between XAI and argumentation theory, emphasising how argumentation frameworks can be leveraged to generate, structure, and present intuitive explanations in AI systems. This entails investigating the development of argumentation-based methods for interpretability, the role of argumentation in human-AI interaction, and the formalisation of explainability using argumentation models. Pivotal to the successful integration of argumentation and XAI are contributions addressing practical challenges, such as the scalability of argumentation-based explanations in large-scale AI models and the evaluation of these explanations in real-world applications. Lastly, encouraging interdisciplinary collaborations and research initiatives can help overcome these challenges and advance the integration of XAI with argumentation, fostering progress towards AI systems that are more understandable, ethical, and socially acceptable.

  • Development and analysis of formal frameworks for argumentation-based explainability.
  • Exploration of theoretical connections between XAI and argumentation theory.
  • Logic-based approaches to explanations in AI.
  • Development of argumentation frameworks for interpretable machine learning.
  • Generation and visualisation of explanations using argumentation.
  • Integration of argument-based explanations with neural networks and other AI algorithms.
  • Argumentation for human-AI interaction and decision support.
  • Cognitive and psychological aspects of understanding argument-based explanations.
  • Adaptive explanations based on user feedback and argumentation schemes.
  • Argumentation-driven explainability in legal, medical, and financial AI systems.
  • Deploying argumentation-based XAI in autonomous single- and multi-agent systems and robotics.
  • Use of argumentation in ethical and social decision-making.
  • Scalability of argumentation frameworks for large-scale AI systems.
  • Managing conflicting arguments in complex decision-making scenarios.
  • Limitations of current XAI and argumentation methods: gaps and future directions.
  • Comparison of argumentation-based explanations with other XAI methods (e.g., Anchors) across domains.
  • Frameworks for assessing the quality of argument-based explanations by addressing their fairness, bias, and accountability.
  • Evaluation metrics for human-centred studies to assess the effectiveness of argumentation in XAI.
  • Combining argumentation with natural language processing for explanation generation.
  • Leveraging argumentation in AI ethics and governance.
  • Cross-disciplinary insights from Philosophy, Logic, and Cognitive Science.
  • Argumentation and explainability in hybrid human-AI systems.
  • Argumentation in explainability for generative AI systems (e.g., large language models).
  • Interactive argumentation for conversational AI explainability.
  • Multi-modal explanations via argumentation frameworks.
  • Argumentation in adversarial scenarios for robust XAI systems.