
Retrieval-Augmented Generation (RAG) and Graph-RAG models have introduced new complexities in understanding how outputs are produced. These systems rely on intricate interactions between retrieved evidence, graph-structured data, and the generative processes that integrate them, making their inner workings challenging to interpret. Achieving explainability in such systems requires uncovering the relationships between multi-hop retrieval pathways, the influence of graph nodes and edges, and the dynamics of integrating structured and unstructured information. This effort is critical not only for ensuring trust in these systems but also for aligning their outputs with user needs in knowledge-intensive and high-stakes domains.

This special track addresses these challenges by fostering discussion and innovation in explaining the mechanisms underlying RAG and Graph-RAG. Key areas of interest include advanced methods for provenance tracking, counterfactual reasoning, and multi-hop explainability; novel metrics and frameworks for evaluating the clarity, usability, and factual accuracy of explanations; and techniques for balancing detailed evidence chains with actionable simplicity. Contributions are encouraged that propose scalable algorithms for explaining large-scale graph systems, demonstrate user-centred designs for interactive explanations, and explore the role of explainability in enhancing decision-making and accountability.

The track invites researchers and practitioners to present theoretical advances, practical methodologies, and domain-agnostic applications that further explainability in RAG and Graph-RAG. By tackling these challenges, it aims to illuminate pathways towards more interpretable, trustworthy, and effective AI systems capable of addressing complex, real-world problems.

Topics of interest include, but are not limited to:
- Provenance tracking in retrieval pipelines: tracing multi-hop evidence paths in knowledge-based systems (a minimal sketch follows this list).
- Counterfactual explanations for graph-based generation: modifying graph connections to observe changes in generated outputs (see the edge-edit sketch after this list).
- Personalised explanation mechanisms in retrieval-augmented systems: tailoring outputs to user expertise or preferences.
- Techniques for multi-hop retrieval explainability: explaining intermediate nodes’ roles in reasoning pipelines.
- Graph-based reasoning transparency in augmented systems: clarifying relationships in structured data retrieval.
- Scalable provenance mechanisms in large knowledge graphs: enabling real-time updates for dynamic graph systems.
- Instance-specific explanations for retrieval paths: detailing evidence chains for individual queries.
- Statistical tests for graph structure influence on generation: identifying critical nodes in structured datasets (an ablation-test sketch appears after this list).
- Efficient methods for explainable evidence retrieval: reducing computation costs for large-scale search engines.
- Visualising retrieved evidence in graph-based systems: using interactive tools for explaining system outputs.
- Explainable evidence chains for decision-making pipelines: highlighting reasoning in augmented generation systems.
- Explainability in dynamic graph-based retrieval systems: adapting outputs for evolving knowledge sources.
- Techniques for disentangling multi-modal retrieval contributions: separating the influence of different data modalities.
- Explaining entity resolution in knowledge graph pipelines: resolving ambiguities in retrieved structured data.
- Frameworks for evaluating explainability in graph-augmented generation: combining clarity and trust metrics in pipeline outputs.
- Methods for dependency-aware graph interaction explainability: quantifying interdependencies in retrieved evidence.
- Robustness of graph-RAG explanations to data changes: testing stability under adversarial modifications.
- Interactive graph-based explanations for decision support systems: enabling real-time reasoning through structured data.
- Domain-neutral metrics for evaluating explainability in graph retrieval: trust and usability measures for end-user outputs.
- Scalable graph representation learning with explainability insights: applying interpretable embeddings in large datasets.
- Explainability-driven node pruning for graph retrieval: removing low-importance nodes to improve clarity (a pruning sketch appears after this list).
- Explainable query reformulation in multi-domain RAG systems: tailoring search expansions to align with user needs.
- Temporal dynamics in explainable retrieval for sequential data: tracing trends in time-series analysis.
- Interaction effects of graph augmentation on retrieval outputs: quantifying the impact of structural changes in graphs.
- Explainable graph-based clustering in semi-supervised systems: identifying critical features influencing model outputs.
- Graph-based fairness and bias detection in retrieval systems: mitigating biases emerging from structured data retrieval.
- Efficient algorithms for explainability in large graph retrieval: scalable methods for multi-hop data insights.
- Explainable evidence integration in multi-modal systems: combining graph and text inputs for clearer outputs.
- Explainability for hierarchical graph-based retrieval systems: layered reasoning for complex knowledge pipelines.
- Visualisation techniques for multi-hop graph explanations: interactive dashboards simplifying retrieval dynamics.
- Incorporating domain-neutral constraints into explainable retrieval pipelines: embedding constraints in structured data systems.
- Explainable integration of structured and unstructured data sources: blending database entries with free-text evidence.
- Provenance explanation in unsupervised graph learning systems: detailing learned relationships in clustering tasks.
- Explainable counterfactual edits in knowledge graph augmentation: testing hypothetical changes in graph connections.
- Novel metrics for evaluating graph-RAG explainability: clarity and usability measures in augmented pipelines.
- Explainability in reinforcement learning over graph-based environments: interpreting path choices in decision-making tasks.
- Explaining retrieval outputs in hybrid RAG systems: understanding the roles of combined modalities in reasoning.
- Explainable query expansion for multilingual knowledge graph search: optimising results across languages.
- Explainable integration of retrieval and reasoning chains in RAG systems: tracing outputs through connected structured data.
- Efficient explainability techniques for heterogeneous graph systems: explaining multi-type node roles in complex graphs.
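
To make the first topic above concrete, here is a minimal, illustrative sketch of provenance tracking during multi-hop retrieval. Everything in it (the `ProvenanceRecord` class, the `multi_hop_retrieve` function, and the toy adjacency-list graph) is a hypothetical construction rather than an established API: a real system would retrieve from an index or graph store and attach provenance to far richer evidence objects.

```python
# A minimal sketch of provenance tracking over a toy knowledge graph.
# All names here (ProvenanceRecord, multi_hop_retrieve, the `kg` dict)
# are hypothetical illustrations, not a real Graph-RAG API.
from dataclasses import dataclass

@dataclass
class ProvenanceRecord:
    """One hop in an evidence chain: which edge was followed, and when."""
    hop: int
    source: str
    relation: str
    target: str

def multi_hop_retrieve(graph, start, max_hops=2):
    """Breadth-first expansion that records a ProvenanceRecord per hop,
    so every retrieved node can be traced back to the query entity."""
    frontier = [(start, 0, [])]
    evidence = []
    visited = {start}
    while frontier:
        node, depth, trail = frontier.pop(0)
        if depth == max_hops:
            continue
        for relation, target in graph.get(node, []):
            record = ProvenanceRecord(depth + 1, node, relation, target)
            evidence.append((target, trail + [record]))
            if target not in visited:
                visited.add(target)
                frontier.append((target, depth + 1, trail + [record]))
    return evidence

# Toy graph: adjacency lists of (relation, target) pairs.
kg = {
    "aspirin": [("treats", "headache"), ("interacts_with", "warfarin")],
    "warfarin": [("treats", "thrombosis")],
}

for node, trail in multi_hop_retrieve(kg, "aspirin"):
    chain = " -> ".join(f"{r.source}-[{r.relation}]->{r.target}" for r in trail)
    print(f"{node}: {chain}")
```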
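
Counterfactual explanations for graph-based generation can likewise be sketched with a toy pipeline: delete a single edge, re-run a retriever, and diff the evidence sets. The path-counting retriever below is a deliberately simple stand-in for a full Graph-RAG pipeline; it assumes the `networkx` library and a made-up drug/disease graph.

```python
# A minimal counterfactual edge edit: remove one edge, re-run a simple
# path-based retriever, and compare the evidence sets. Illustrative only.
import networkx as nx

def retrieve_paths(g, query_entity, answer_entity, cutoff=3):
    """Stand-in retriever: all short paths linking query to candidate answer."""
    try:
        return list(nx.all_simple_paths(g, query_entity, answer_entity, cutoff=cutoff))
    except nx.NodeNotFound:
        return []

g = nx.DiGraph()
g.add_edges_from([
    ("drug_a", "protein_x"), ("protein_x", "disease_y"),
    ("drug_a", "pathway_z"), ("pathway_z", "disease_y"),
])

baseline = retrieve_paths(g, "drug_a", "disease_y")

# Counterfactual: remove one edge and observe how the evidence set changes.
g_cf = g.copy()
g_cf.remove_edge("protein_x", "disease_y")
counterfactual = retrieve_paths(g_cf, "drug_a", "disease_y")

print("baseline paths:      ", baseline)
print("counterfactual paths:", counterfactual)
print("edge ('protein_x','disease_y') supported",
      len(baseline) - len(counterfactual), "evidence path(s)")
```

The same delete-and-diff pattern extends to node removals or relation rewrites; the explanatory claim is simply "this connection accounted for these evidence paths".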
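
The bullet on statistical tests for graph structure influence can be illustrated with a simple ablation scheme: remove each node, measure the drop in query-to-answer connectivity, and compare it against a null distribution obtained from random ablations. The connectivity score and the empirical p-value below are illustrative choices under toy assumptions, not a prescribed methodology.

```python
# A minimal node-ablation test for structural influence. Purely illustrative;
# real work would use task-level metrics and properly calibrated tests.
import random
import networkx as nx

def evidence_score(g, src, dst, cutoff=3):
    """Stand-in utility: number of short evidence paths from src to dst."""
    if src not in g or dst not in g:
        return 0
    return sum(1 for _ in nx.all_simple_paths(g, src, dst, cutoff=cutoff))

def ablation_impact(g, node, src, dst):
    h = g.copy()
    h.remove_node(node)
    return evidence_score(g, src, dst) - evidence_score(h, src, dst)

g = nx.gnp_random_graph(30, 0.15, seed=0, directed=True)
src, dst = 0, 29
candidates = [n for n in g if n not in (src, dst)]

# Null distribution: impact of randomly chosen node ablations.
rng = random.Random(0)
null = [ablation_impact(g, rng.choice(candidates), src, dst) for _ in range(200)]

for node in candidates:
    impact = ablation_impact(g, node, src, dst)
    p = sum(1 for x in null if x >= impact) / len(null)  # one-sided empirical p
    if p < 0.05:
        print(f"node {node}: impact={impact}, empirical p={p:.3f}")
```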
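
Finally, explainability-driven node pruning can be sketched by ranking nodes with a generic importance score and keeping only the top fraction, so the evidence subgraph shown to a user stays small. PageRank and the 50% keep fraction here are arbitrary stand-ins for a task-specific attribution score and threshold.

```python
# A minimal sketch of importance-based pruning of an evidence graph.
# PageRank is only a proxy importance score; the keep fraction is arbitrary.
import networkx as nx

def prune_low_importance(g, keep_fraction=0.5):
    scores = nx.pagerank(g)
    ranked = sorted(scores, key=scores.get, reverse=True)
    keep = set(ranked[: max(1, int(len(ranked) * keep_fraction))])
    return g.subgraph(keep).copy()

g = nx.karate_club_graph()
pruned = prune_low_importance(g)
print(f"kept {pruned.number_of_nodes()} of {g.number_of_nodes()} nodes")
```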
