This track focuses on the emerging field of Computational Argumentation and its intersection with Explainable AI, exploring novel approaches that harness argumentation to enhance the interpretability and explainability of AI systems. Examples include interpretable machine learning models that express rule-based decisions in a structured argumentative form, and models that explain feature importance through argumentation by providing arguments for and against the relevance of each feature. Argumentative structures can also serve as visual explanations, such as interactive graphs or diagrams that represent a decision-making process. Argumentation can further help identify biases within a model by weighing the supporting and opposing evidence for different predictions, and it can improve performance through error analysis: when a model makes mistakes, argumentation can be used to analyze and present the reasons behind them. It can also power interactive explanation interfaces that allow users to engage with a model's explanations and to challenge or question its decisions. Finally, argumentation can be adapted to incorporate expert opinions and domain-specific knowledge into an AI model.

This track invites submissions contributing to the theoretical foundations, methodological advancements, and practical applications of this interdisciplinary domain. Researchers and practitioners are encouraged to share their insights, methodologies, and findings, fostering a collaborative environment that propels the field forward as a tangible tool for Explainable Artificial Intelligence.
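As a concrete illustration of one of the directions above, the following is a minimal Python sketch of explaining feature importance through argumentation: per-feature arguments for and against relevance are aggregated into a signed score. All names, weights, and the aggregation rule are hypothetical simplifications, not a reference implementation of any particular argumentation framework.

```python
# Hypothetical sketch: feature-importance explanation via arguments
# for and against each feature's relevance (bipolar-argumentation style).

from dataclasses import dataclass

@dataclass
class Argument:
    feature: str    # feature the argument is about
    claim: str      # human-readable reason
    polarity: int   # +1 supports relevance, -1 attacks it
    weight: float   # strength of the argument (assumed given)

def feature_relevance(arguments):
    """Aggregate signed argument weights into a per-feature net-support score."""
    scores = {}
    for arg in arguments:
        scores[arg.feature] = scores.get(arg.feature, 0.0) + arg.polarity * arg.weight
    return scores

def explain(scores):
    """Render scores as a textual explanation, strongest evidence first."""
    lines = []
    for feat, score in sorted(scores.items(), key=lambda kv: -abs(kv[1])):
        verdict = "relevant" if score > 0 else "not relevant"
        lines.append(f"{feat}: {verdict} (net support {score:+.2f})")
    return lines

# Illustrative arguments about two (made-up) features of a model.
args = [
    Argument("income", "strongly correlated with outcome", +1, 0.8),
    Argument("income", "correlation confounded by age", -1, 0.3),
    Argument("zip_code", "proxy for a protected attribute", -1, 0.9),
]
for line in explain(feature_relevance(args)):
    print(line)
# zip_code: not relevant (net support -0.90)
# income: relevant (net support +0.50)
```

A richer version would let users add counter-arguments interactively, which is the kind of interactive, contestable explanation interface this track solicits.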


Topics of interest include, but are not limited to:

explainable rule-based structured argumentation
feature importance through computational argumentation for XAI
arguments for/against features’ relevance
argumentative structures for visual explanations
argument-based interactive graphs/diagrams for explainable decision-making
computational argumentation for bias identification
evaluation of machine-learned predictions via supporting/opposing arguments
computational argumentation as a tool for error analysis of predictive performance
interactive explanations via argumentation
interactive visual argumentation for knowledge-base improvement
model enhancement via argumentative expert opinions
model performance via integration with argument-based domain-specific knowledge
