
In the era of artificial intelligence (AI), machine learning (ML) technology is achieving impressive advances across scientific research. However, the lack of transparency and reliability of many state-of-the-art AI models has been met with substantial criticism, creating barriers to human comprehension and, to some extent, impeding scientific understanding. Recent progress in explainable artificial intelligence (XAI) offers a prospect of bridging the gap between human understanding and model behaviour. This could not only facilitate a deeper understanding of the decision-making processes inherent in ML models but also guide human experts towards discovering new scientific knowledge.
The special track will consider submissions that explore the development of XAI methods and their applications in support of scientific discovery, including, but not limited to, weather and climate, medicine and healthcare, materials science, and the behavioural, cognitive, and social sciences. Topics of interest include:
- interpretability methods (both post-hoc and ante-hoc, e.g., LIME, Deep SHAP, Shapley sampling, ProtoPNet, PIPNet) to discover factors and trends that are important in, e.g., climate change and disease progression, among others.
- interpretability methods (both post-hoc and ante-hoc, as above) to discover spatiotemporal patterns used by machine learning models that may converge with or diverge from the current understanding of a given scientific problem in geoscience, materials science, fluid dynamics, or medicine, among others.
- interpretability methods to select input variables based on feature attribution (e.g., Shapley sampling, attention mechanisms) in the context of geoscience and medicine, such as the discovery of principles in disease diagnosis (symptom correlations and important factors in disease progression), among others.
- interpretability methods to discover spatial and temporal correlations in complex systems, e.g., weather and climate, materials science, earthquake science, and fluid dynamics, among others.
- interpretability methods to discover significant precursors to important events, such as extreme weather events or extreme events in fluid dynamics, among others.
- interpretability methods for multi-modal predictions that allow the discovery of coherent patterns across data modalities in, e.g., geoscience and medicine, among others; such patterns may lead to new principles when they differ from current human knowledge.
- visual interpretability tools (e.g., attribution maps) for tracing spatiotemporal evolution patterns in, e.g., extreme weather events and earthquakes, among others.
- interpretable knowledge graphs to extract intrinsic relationships within the data (e.g., via ontology learning or graph embedding methods) that may reveal novel causal relationships in, e.g., medicine, among others.
- interpretability methods for the discovery of mechanisms behind memory, perception, reasoning, and understanding, such as the discovery of visual and auditory patterns, among others.
- interpretable autoencoders to identify patterns within brain signals, such as the discovery of patterns in EEG and fMRI data that may lead to a better understanding of brain function.
- explainable attention mechanisms for language models to discover group behaviours (e.g., important factors in online consumer and social media behaviour).
- interpretability methods to understand neural network hidden states in the social sciences (e.g., how societal attitudes towards climate change evolve over time).
- explainable modelling of the complex interrelations between human, technical, and organizational factors (e.g., how the interplay between clinicians, AI diagnostic systems, and hospitals affects patient outcomes).
- potential bridges between human knowledge and XAI methods for scientific discovery (e.g., clinicians working with SHAP explanations to understand important factors in disease progression); a minimal illustrative sketch of such a workflow follows this list.
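
For concreteness, the sketch below illustrates the kind of post-hoc feature-attribution workflow referenced in several of the topics above: a tree-based SHAP explainer ranks the inputs of a model trained on purely synthetic data by mean absolute Shapley value. It is a minimal, hypothetical example rather than a submission template; the dataset, feature names, and model choice are invented, and the third-party `shap` package is assumed to be available.

```python
# Minimal illustrative sketch: post-hoc feature attribution with SHAP on a
# synthetic dataset standing in for, e.g., disease-progression measurements.
# Assumes the third-party `shap` package is installed (pip install shap);
# all feature names and data below are hypothetical.
import numpy as np
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-in for tabular clinical measurements (hypothetical names).
X, y = make_regression(n_samples=500, n_features=6, n_informative=3, random_state=0)
feature_names = ["age", "biomarker_a", "biomarker_b", "bmi", "dose", "symptom_score"]

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Post-hoc explanation: estimated Shapley values per feature and sample.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # shape: (n_samples, n_features)

# Global summary: rank features by mean absolute attribution, the kind of
# output a domain expert might inspect for candidate factors worth study.
ranking = np.abs(shap_values).mean(axis=0).argsort()[::-1]
for idx in ranking:
    print(f"{feature_names[idx]:>15}: {np.abs(shap_values[:, idx]).mean():.3f}")
```

In a real submission, the synthetic data would be replaced by domain data and the resulting attributions would be examined together with domain experts, in line with the human-in-the-loop topics listed above.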









