
Machine learning models, especially deep neural networks, are increasingly used across the natural sciences: from modeling complex systems and accelerating the discovery of novel materials to approximating phenomena governed by differential equations and predicting extreme weather events. Yet, as these systems grow in complexity and predictive power, they remain largely opaque and hard to interpret, often obscuring whether predictions arise from physically meaningful relationships or from spurious correlations in the data. This lack of interpretability hinders trust, understanding, and reproducibility, which are key pillars of the scientific process.
We therefore present a special session that addresses the challenges of aligning models with the principles and representations of the physical sciences. The session explores methods that integrate prior knowledge into interpretability frameworks, extract scientific insights from trained models, and enable the interpretation and improvement of SciML models. Applications may span physics, chemistry, materials science, quantum systems, biology, climate modeling, and other domains where scientific understanding is the ultimate goal.
In particular, the session emphasizes novel XAI approaches for scientific modeling, focusing on the interpretability of emerging model classes, mechanistic and symbolic insight extraction, explanation-guided model verification and refinement, rigorous evaluation grounded in scientific theory, and the philosophical foundations of scientific explanation.
With this initiative, we aim to foster the interplay between the XAI, AI4Science, and physical-sciences communities, bridging interpretability, symbolic reasoning, and scientific modeling. In doing so, the session supports the overarching goal of imbuing machine learning with explanatory and discovery-oriented frameworks capable of yielding physically grounded and human-understandable scientific knowledge.
Keywords: XAI4Science, SciML, physics-informed machine learning, graph neural networks, explainable AI, scientific discovery, climate modeling, physical sciences
Topics
- Post-hoc explainability techniques for characterizing model dynamics and identifying key factors in phase transitions, extreme events, and dynamical behavior in weather prediction, biology, and chemistry (e.g., counterfactual analysis, attribution methods, surrogate models, perturbation analysis).
- Methods that translate neural representations and explanations into human-understandable scientific forms, such as graphs, equations, symbolic rules, causal diagrams, energy landscapes, visualizations, and user interfaces.
- Interpretable-by-design SciML and physics-guided models, including architectures that embed physical structure or prior information by construction, potentially supporting the discovery of new physical laws, governing equations, and causal relationships.
- Philosophical and methodological foundations of interpretability, addressing scientific understanding and the interplay between humans and AI in knowledge discovery.
- Frameworks and benchmarks for evaluating XAI in scientific applications, including protocols that combine simulator ground truth, expert review, and reproducibility checks, as well as ground-truth-validated feature attributions and standardized suites for testing the faithfulness of explanations.
- XAI-based auditing and semantic verification, including methods to validate the physical plausibility of model reasoning, e.g., ensuring alignment between attributions and physical expectations or detecting spurious correlations (Clever-Hans effect).
- Uncertainty quantification and error estimation approaches for bounding prediction errors and certifying models used in scientific, industrial, and real-world decision-making contexts.
- XAI-guided refinement of models, using explanations to diagnose shortcomings, drive adaptive resampling of training data, and improve or debug models, with a particular interest in physics-informed and theory-guided approaches.
- Probing and analysis of internal reasoning in SciML models, including the identification of inductive biases and computational circuits that correspond to known or novel physical and scientific processes.
- Interpretability for emerging scientific architectures and representations, such as mechanistic interpretability for physics and scientific foundation models, XAI for quantum machine learning, XAI for PINNs, neural operators, and GNNs, as well as concept-based analyses (e.g., using sparse autoencoders) in AI4Science.
- Information- and theory-driven XAI for the physical sciences, including physics-informed explanations based on conservation laws and symmetries, information-theoretic and decomposition-based interpretability, causality-aware approaches, and methods that align latent representations with established physical and mathematical theories.
- Explainable inverse design and steering in scientific applications, where XAI supports inverse problems and design tasks by guiding search in parameter or structure space and provides human-understandable and theory-grounded rationales for proposed scientific or engineering designs.
