Electroencephalography (EEG) is central to unravelling dynamic processes in brain activity, providing insights into both healthy and pathological conditions, for example when coupled with Brain-Computer Interface (BCI) technology. Artificial Intelligence (AI) approaches are widely used to analyse and interpret EEG signals, but the results are not always satisfactory. This is mainly because EEG signals have distinctive characteristics: high inter-individual and intra-individual variability, driven by the pronounced non-stationarity of the signal, and limited spatial resolution compared with other biosignals. Identifying EEG signal features and/or feature transformations that generalise across classification, regression and anomaly detection tasks is therefore an exciting challenge in AI. Crucially, eXplainable Artificial Intelligence (XAI) approaches can contribute to this challenge in two ways: improving the interpretation of an AI system's output and optimising its performance. XAI methods can facilitate the identification of EEG signal features and transformations with enhanced stability properties, thereby enabling greater generalisation and interpretability.

Topics

Explainable anomaly detection for EEG signals (e.g., spectral and spatiotemporal explanations for seizure detection)
Identification/selection approaches for stable feature extraction by XAI to tackle the non-stationarity problem of EEG signals
Enhancing EEG analysis robustness via XAI (e.g., building on misclassified data instances together with characteristics identified by XAI techniques)
XAI for EEG biomarker identification (e.g., analysing theta activity as a prospective biomarker of post-stroke pathology using XAI techniques)
Integration of XAI into EEG-Based Clinical Decision Support Systems
Explainable AI methods for improving EEG-based BCI performance (e.g., augmenting classification/regression models with explanations)
Interpretability for deep learning algorithms in passive BCI tasks (e.g., in EEG emotion recognition)
Synthetic EEG data generation via XAI (e.g., using counterfactual XAI methods for data oversampling to tackle the class imbalance problem)
Evaluation of XAI methods on EEG signals (e.g., assessing the effectiveness of Grad-CAM, SHAP and saliency maps for EEG analysis)
Explainable EEG-based human activity recognition models
Development/testing of wearable EEG devices using XAI (e.g., reduction of EEG electrodes for specific tasks through XAI methods)
EEG signal conversion to topographic maps and their processing via XAI methods
XAI methods for biomarker extraction (e.g., from topographic spectral/power maps of EEG data)
XAI methods for EEG-based detection/prediction of neurodegenerative diseases/disorders (e.g., Alzheimer’s disease, frontotemporal dementia, Parkinson’s disease)
New XAI-based approaches tailored for EEG signals in specific scenarios (e.g., sleepXAI and XAI4EEG)
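To give a concrete flavour of the kind of explanation methods the topics above refer to, the following is a minimal, hypothetical sketch of permutation-based feature importance applied to synthetic EEG band-power features. Everything here is an assumption for illustration: the band names, the toy linear classifier and the generated data do not come from any study, and real submissions would use trained models and recorded EEG.

```python
import numpy as np

# Hypothetical sketch: permutation feature importance as a simple
# model-agnostic explanation for a toy EEG band-power classifier.
# All data and the classifier are synthetic assumptions.

rng = np.random.default_rng(0)
bands = ["delta", "theta", "alpha", "beta", "gamma"]

# Synthetic band-power features for 200 trials; class 1 gets raised theta.
X = rng.normal(0.0, 1.0, size=(200, len(bands)))
y = (rng.random(200) < 0.5).astype(int)
X[y == 1, 1] += 2.0  # inject a theta effect for class 1

# A fixed linear "classifier" standing in for a trained model.
w = np.array([0.1, 1.5, 0.2, 0.1, 0.05])

def predict(X):
    return (X @ w > 0.75).astype(int)

def accuracy(X, y):
    return float(np.mean(predict(X) == y))

# Importance of each band = accuracy drop when that feature is shuffled,
# which breaks its association with the labels.
base = accuracy(X, y)
importance = {}
for j, band in enumerate(bands):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    importance[band] = base - accuracy(Xp, y)

ranked = sorted(importance, key=importance.get, reverse=True)
print(ranked[0])  # the band the model relies on most
```

Because the synthetic class difference was injected into theta power, the permutation ranking recovers theta as the dominant feature; the same scheme extends to channels or time windows, which is the basic idea behind electrode-reduction and stability analyses via XAI.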