This track is dedicated to integrating Explainable AI (XAI) into computational neuroscience, focusing on its application to interpreting complex brain data such as EEG, fMRI, and invasive recordings. The track aims to showcase the latest advances in XAI that deepen our understanding of the neural processes underlying brain function and help identify neural features crucial to brain disorders. It will highlight the importance of accurate, transparent AI interpretations in neuroscience, addressing both the potential and the challenges of AI in unravelling the complexity of brain data for scientific and medical advancement.


Topics of interest include, but are not limited to:
Explainable AI in Neuroimaging: Interpretable Models vs. Black Box Approaches
Feature-based Attributions and Their Alignment with Brain Atlases
XAI Techniques for Reconstructing Visual Stimuli from Neural Data
Enhancing Neurofeedback with Explainable Machine Learning
Explainable Spike Representation and Sorting
Predictive Modeling in Cognitive Neuroscience using XAI
XAI-driven Behavioural Neurostimulation
Privacy and Ethics in AI-Driven Neurological Data Analysis
Real-Time EEG Data Processing with XAI Methods
Neuro-symbolic Latent Feature Disentanglement for Neuroimaging
Interpretable Sleep-State Decoding
Identifying Neurological Biomarkers through Explainable Machine Learning