
While many XAI approaches explain feature influence at the level of individual features, they often fall short of capturing how features interact or influence one another through their dependencies. A fuller understanding of feature influence therefore requires uncovering learned feature interactions and accounting for how feature dependencies shape feature effects. Analyzing these relationships allows us to interpret individual as well as joint contributions and to better understand both the model’s learned relationships and the underlying data-generating process. This special track aims to bring together researchers and practitioners to address the challenges of interpreting and explaining feature interactions and dependencies in machine learning models. We invite contributions showcasing innovative methodologies, theoretical advancements, and practical applications that advance this critical area of XAI, fostering a deeper understanding of machine learning models and their behavior. Topics of interest include, but are not limited to:
- Model-agnostic approaches for feature interactions and dependencies: Generalizable techniques applicable across different machine learning models.
- Model-specific approaches for feature interactions and dependencies: Tailored methods leveraging unique properties of specific model architectures.
- Detection of feature interactions: Novel methods or statistical tests for identifying and analyzing interactions between features.
- New methods to quantify global feature interactions: Frameworks to summarize interaction effects across entire datasets or models.
- New methods to quantify local feature interactions: Instance-specific approaches for explaining interactions in individual predictions.
- New approximation techniques for Shapley interactions: Faster and more scalable methods for computing interaction contributions (an exact brute-force baseline is sketched in the first example after this list).
- New and meaningful visualization techniques for Shapley interactions: Advanced visual representations to explore Shapley interaction effects.
- Innovative applications of Shapley interactions in different domains: Applying Shapley-based interactions to domain-specific problems.
- Functional decomposition into effects of different orders: Techniques for separating effects into main, pairwise, and higher-order components (see the second example after this list).
- New methods to select relevant feature interactions: Strategies for identifying the most impactful and interpretable interactions.
- New visualization methods for feature interactions and decomposition effects: Visual tools that simplify the interpretation of interaction dynamics.
- Insights into how feature dependencies affect contributions: Examining how dependencies alter individual and combined feature contributions.
- Estimation techniques for dependency-aware feature interaction quantification: Methods capturing dependencies in interaction effect estimation.
- Robustness of interaction explanations to perturbations: Evaluating stability of interaction insights under data perturbations or adversarial conditions.
- Feature interactions and dependencies in time series data: Capturing temporal dynamics and dependencies in sequential datasets.
- Interaction analysis for fairness and bias detection: Identifying and addressing biases emerging from interacting features.
- Incorporating domain knowledge into interaction discovery: Using expert-driven constraints or priors to guide interaction identification and interpretation.
- Efficient algorithms for scalable interaction analysis: Methods enabling interaction detection in high-dimensional, large-scale datasets.
- Novel benchmarks and metrics for evaluating explanations: Developing standards for assessing feature interaction and dependency interpretation methods.
- Role of feature interactions in unsupervised learning: Investigating interaction effects in clustering, dimensionality reduction, and other unsupervised settings.
- Exploring interaction effects in semi-supervised models: Uncovering how feature interactions influence learning with limited labeled data.
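To make the Shapley-interaction topics above concrete, the following minimal sketch computes the pairwise Shapley interaction index exactly by brute-force enumeration over coalitions. The value function `v`, feature values `x`, and helper `shapley_interaction` are hypothetical stand-ins (in practice, `v` would be a model restricted to a feature subset); the exponential enumeration is feasible only for toy problems, which is precisely the bottleneck that the approximation techniques solicited above target.

```python
from itertools import combinations
from math import factorial

def shapley_interaction(v, n, i, j):
    """Exact pairwise Shapley interaction index for players i and j,
    summing |S|! (n-|S|-2)! / (n-1)! * delta_ij(S) over all subsets S
    of the remaining players, where
    delta_ij(S) = v(S+{i,j}) - v(S+{i}) - v(S+{j}) + v(S)."""
    others = [k for k in range(n) if k not in (i, j)]
    total = 0.0
    for size in range(len(others) + 1):
        # Shapley interaction weight for coalitions of this size.
        weight = factorial(size) * factorial(n - size - 2) / factorial(n - 1)
        for S in combinations(others, size):
            S = frozenset(S)
            # Discrete second-order derivative of v at S w.r.t. i and j.
            delta = v(S | {i, j}) - v(S | {i}) - v(S | {j}) + v(S)
            total += weight * delta
    return total

# Hypothetical toy game: additive main effects plus one pairwise
# interaction that only pays off when features 0 and 1 are both present.
x = [2.0, 3.0, 1.0]
def v(S):
    return sum(x[k] for k in S) + (x[0] * x[1] if {0, 1} <= S else 0.0)

n = len(x)
for i, j in combinations(range(n), 2):
    print(f"I({i},{j}) = {shapley_interaction(v, n, i, j):.2f}")
# Prints I(0,1) = 6.00 and 0.00 for the other pairs.
```

On this toy game the index recovers exactly the injected interaction x[0] * x[1] = 6 for the pair (0, 1) and zero for all other pairs.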
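Similarly, for functional decomposition into effects of different orders, the second sketch performs a classical functional ANOVA of a hypothetical two-feature model on a grid, assuming independent, uniformly distributed features; under this assumption the recovered second-order component equals the interaction term `x1 * x2` built into the toy model.

```python
import numpy as np

# Hypothetical toy model: two main effects plus a pure interaction term.
def f(x1, x2):
    return 2 * x1 + x2 ** 2 + x1 * x2

# Evaluate on a grid of independent, uniform features on [-1, 1].
x1 = np.linspace(-1.0, 1.0, 201)
x2 = np.linspace(-1.0, 1.0, 201)
F = f(x1[:, None], x2[None, :])           # shape (len(x1), len(x2))

f0 = F.mean()                             # order 0: grand mean
f1 = F.mean(axis=1) - f0                  # order 1: main effect of x1
f2 = F.mean(axis=0) - f0                  # order 1: main effect of x2
f12 = F - f0 - f1[:, None] - f2[None, :]  # order 2: pure interaction

# The components reassemble f exactly, and the second-order component
# matches the x1 * x2 interaction built into the toy model.
assert np.allclose(F, f0 + f1[:, None] + f2[None, :] + f12)
print("max |f12 - x1*x2|:", np.abs(f12 - x1[:, None] * x2[None, :]).max())
```

Note that under correlated features this simple marginal averaging no longer yields a well-separated decomposition, which motivates the dependency-aware estimation techniques listed above.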



