This special track explores the intersection of AI transparency and the challenges posed by cyber and cognitive security threats. Cyberattacks alarm the public, damage the economy, and endanger citizens' safety when they disrupt the distribution networks of essential services. Cognitive security, meanwhile, defends against tactics such as disinformation, fake news, deepfakes, and propaganda that target decision-making in the emerging realm of cognitive warfare. The track delves into the interpretability of AI models as a means to counter these cyber and cognitive threats, collecting original contributions that use or define XAI-based approaches. It also scrutinizes the role of XAI in strengthening the resilience of the methods and models adopted in cyber and cognitive security, which involves developing models capable of withstanding adversarial attacks and incorporating defensive mechanisms. The overarching goal is to capture the role of XAI, including its applications and the characterization of systems that are explainable by design, in addressing cyber and cognitive security, and information disorder in particular, in line with the efforts undertaken in the SERICS project.

Topics of interest include (but are not limited to):
Application of XAI methods in Intrusion Detection Systems
Explainable Malware and Phishing/Spam Detection
Explainable Toxic Speech and Hate Speech Detection
Explainable Sentiment and Stance Analysis
Application of XAI methods in Botnet and Fraud Detection models
XAI-based approaches for Digital/Network Forensics and Zero-Day Vulnerabilities
Application of XAI methods in Deepfake Detection
Application of XAI for Cyber-Physical Systems Security
Explainable Misinformation and Fake News Detection
Explainable Deepfake Detection
Explainable Fact-Checking methods
Application of XAI methods for Cyber Attack Attribution
Application of XAI methods for Misinformation Attack Attribution
XAI-based Models for Propaganda Detection
XAI-based approaches for increasing Robustness of models against Adversarial Attacks
Explainable AI models for preventing Adversarial Attacks
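To make the flavor of the solicited contributions concrete, the sketch below shows one of the simplest model-agnostic XAI techniques, occlusion-based feature attribution, applied to a toy phishing-email scorer. The scorer, its weights, and the feature names are invented for illustration only; they do not correspond to any system or dataset named in this call.

```python
# Minimal sketch of occlusion-based explanation for a toy detector.
# Everything here (features, weights, scoring rule) is a hypothetical example.

FEATURES = ["has_url", "urgent_words", "sender_known", "spelling_errors"]

def phishing_score(x):
    """Toy detector: a weighted sum of binary features plus a 0.5 baseline."""
    weights = {"has_url": 0.4, "urgent_words": 0.3,
               "sender_known": -0.5, "spelling_errors": 0.2}
    return 0.5 + sum(weights[f] * x[f] for f in FEATURES)

def occlusion_explanation(x):
    """Attribute the score to each feature by zeroing it and measuring the drop."""
    base = phishing_score(x)
    attributions = {}
    for f in FEATURES:
        perturbed = dict(x)
        perturbed[f] = 0          # occlude one feature at a time
        attributions[f] = base - phishing_score(perturbed)
    return base, attributions

email = {"has_url": 1, "urgent_words": 1, "sender_known": 0, "spelling_errors": 1}
score, explanation = occlusion_explanation(email)
# 'explanation' now maps each feature to its contribution to the score,
# e.g. has_url -> 0.4, giving an analyst a per-feature rationale for the alert.
```

Submissions are expected to go well beyond such toy settings, but the same question, namely which input evidence drove a security-relevant decision and how that rationale can be communicated and attacked, runs through all the topics listed above.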