
Recent advances in Artificial Intelligence (AI) have embedded Machine Learning (ML), especially foundation models, into the workflow of scientific discovery, while retaining the intrinsic opacity of these black-box approaches. As this integration deepens, so does the need for transparent, principled, and explainable machine reasoning. Beyond performance gains, the scientific community requires AI systems capable of interpretable hypothesis generation, causal insight, and knowledge-consistent explanations, enabling models to serve as trustworthy collaborators rather than opaque predictors.
Recent advances in eXplainable Artificial Intelligence (XAI) point in this direction, promising to transform opaque prediction engines into partners for reasoning, hypothesis generation, and validation.
The special session will consider submissions that explore the development of XAI methods and frameworks in support of interpretable scientific discovery. Relevant works include, but are not limited to, those that uncover causality, structure, or mechanistic insight from complex data; extract symbolic or conceptual representations aligned with scientific reasoning; integrate human knowledge to guide explainability; or support hypothesis generation, evaluation, and refinement. By fostering dialogue between AI researchers and domain scientists, this special session aims to chart a path toward transparent, explainable, and trustworthy AI for scientific discovery.
Keywords: XAI4Science; scientific discovery; explainable AI; explainable modeling; knowledge-informed XAI; knowledge-informed interpretability; causal-structure discovery; interpretable scientific reasoning; foundation-model interpretability; hypothesis-generation workflows; cross-domain discovery frameworks
Topics
- Interpretable scientific reasoning frameworks, enabling models to articulate mechanistic insight, conceptual links, or structured explanations.
- Causal structure discovery and explanation methods for identifying relationships that generalize across scientific domains.
- Symbolic, rule-based, and concept-based scientific explanations, including hybrid neuro-symbolic approaches.
- Knowledge-informed XAI, where domain-knowledge constraints, scientific laws, or expert priors guide explanations.
- Explainable hypothesis generation and testing, including methods that surface candidate mechanisms or counterfactual scientific scenarios.
- Model-based interpretability for latent dynamics, discovering hidden states or interpretable processes shared across systems.
- Multimodal interpretability frameworks that extract coherent conceptual knowledge across heterogeneous sources (e.g., imagery, text, sensors) without domain specificity.
- Human-in-the-loop explainability, focusing on how experts interpret AI-derived scientific hypotheses or insights.
- Interactive interpretability interfaces that support scientific reasoning, conceptual exploration, and discovery workflows.
- Explainable graph and knowledge-graph reasoning for uncovering cross-domain conceptual or causal structures.
- Foundation-model interpretability for science, including explanation of language or multimodal scientific foundation models in a domain-agnostic manner.







