Late-Breaking Work
Tianyi Ren, Juampablo Heras Rivera, Hitender Oswal, Yutong Pan, Agamdeep Chopra, Jacob Ruzevick and Mehmet Kurt | Here Comes the Explanation: A Shapley Perspective on Multi-contrast Medical Image Segmentation
Md Shajalal, Md. Atabuzzaman, Alexander Boden, Gunnar Stevens and Delong Du | Do the Explanations Make Sense? Explainable Fake Review Identification and Users’ Perspectives on Explanations
Prabha M. Kumarage and Mirka Saarela | Explainability in Generative Artificial Intelligence: An Umbrella Review of Current Techniques, Limitations, and Future Directions |
Kseniya Sahatova, Johannes De Smedt and Xuefei Lu | Robustness Analysis of Counterfactual Explanations from Generative Models: An Empirical Study
Amal Saadallah | SHAP-Guided Regularization in Machine Learning Models |
Pierfrancesco Novielli, Donato Romano, Michele Magarelli, Pierpaolo Di Bitonto, Roberto Bellotti and Sabina Tangaro | Bridging Structural and Functional Imaging: Integrated PET/CT Radiomics with Explainable Machine Learning |
Lukasz Sztukiewicz, Ignacy Stepka, Michal Wilinski and Jerzy Stefanowski | Investigating the Relationship Between Debiasing and Artifact Removal using Saliency Maps
Yogachandran Rahulamathavan, Misbah Farooq and Varuna De Silva | PFLex: Perturbation-Free Local Explanations in Language Model-Based Text Classification |
Xenia Demetriou, Sophie Sananikone, Vojislav Westmoreland, Matthia Sabatelli and Marco Zullich | A Study on the Faithfulness of Feature Attribution Explanations in Pruned Vision-Based Multi-Task Learning |
Farnoosh Javar and Kei Wakabayashi | Concept Bottleneck Model with Emergent Communication Framework for Explainable AI |
Nighat Bibi | Multi-Level Explainability in Radiomic-Based Classification of Multiple Sclerosis and Ischemic Lesions |
Timo P. Gros, David Groß, Julius Kamp, Stefan Gumhold and Joerg Hoffmann | Visual Analysis of Action Policy Behavior: A Case Study in Grid-World Driving |
Sakir Furkan Yöndem, Benedikt Schlereth-Groh and Ramin Tavakoli Kolagari | Bridging the Sim-to-Real Gap with Explainability for ML-based Object Detection on Sonar Data |
Christian Dormagen, Jonas Amling, Stephan Scheele and Ute Schmid | Explaining Process Behavior: A Declarative Framework for Interpretable Event Data |
Ashkan Yousefi Zadeh, Xiaomeng Li, Andry Rakotonirainy, Ronald Schroeter and Sebastien Glaser | PsyLingXAV: A Psycholinguistics Design Framework for XAI in Automated Vehicles |
Jacob LaRock, Md Shajalal and Gunnar Stevens | Interpretable Deepfake Voice Detection: A Hybrid Deep-Learning Model and Explanation Evaluation |
Samuele Pe, Tommaso Buonocore, Giovanna Nicora and Enea Parimbelli | SignalGrad-CAM: Beyond Image Explanation
Alec Parise and Brian Mac Namee | Global Interpretability for ProtoPNet Using Rule-Based Explanations |
Paolo Fantozzi, Paolo Junior Fantozzi, Najwa Yousef, Mathilde Casagrande, Gianluca Tenore, Antonella Polimeni, James J. Sciubba, Tiffany Tavares, Umberto Romeo, Ahmed Sultan and Maurizio Naldi | Discriminative Feature Analysis in XAI for Multi-Class Classification of Oral Lesions |
Marija Kopanja, Miloš Savić and Luca Longo | Enhancing Cost-Sensitive Tree-Based XAI Surrogate Method: Exploring Alternative Cost Matrix Formulation |
Fatima Ezzeddine, Rinad Akel, Ihab Sbeity, Silvia Giordano, Marc Langheinrich and Omran Ayoub | On the Interplay of Explainability, Privacy and Predictive Performance with Explanation-Assisted Model Extraction
Franz Motzkus and Ute Schmid | Concepts Guide and Explain Diffusion Visual Counterfactuals |
Fabian Beer, Dimitry Mindlin, Sebastian Kost, Isabel Krause, Katharina Schwarz, Kai Seidensticker, Philipp Cimiano and Elena Esposito | Dialogue-Based XAI for Predictive Policing: A Field Study
Ilaria Vascotto, Valentina Blasone, Alex Rodriguez, Alessandro Bonaita and Luca Bortolussi | Assessing Reliability of Explanations in Unbalanced Datasets: A Use-Case on the Occurrence of Frost Events
Arthur Picard, Yazan Mualla and Franck Gechter | Explaining in Natural Language: A Discussion on Leveraging the Reasoning Capabilities of LLMs for XAI |
Jacek Karolczak and Jerzy Stefanowski | This Part Looks Alike This: Identifying Important Parts of Explained Instances and Prototypes
Gi Woong Choi and Sang Won Bae | MetaCoXAI: A Conceptual Framework Leveraging Explainable AI to Enhance Computational Thinking and Metacognition in Learning Environments |
Orfeas Menis Mastromichalakis, Jason Liartis and Giorgos Stamou | Beyond One-Size-Fits-All: How User Objectives Shape Counterfactual Explanations |
Sarah Seifi, Tobias Sukianto, Cecilia Carbonelli, Lorenzo Servadei and Robert Wille | Learning Interpretable Rules from Neural Networks: Neurosymbolic AI for Radar-Based Hand Gesture Recognition |
Vladimir Marochko and Luca Longo | Explainable AI for Evaluation of the Differences Between Event-Related Potentials Recorded in a Traditional Environment and a Virtual Reality-Based Environment
Md Shajalal, Shamima Rayhana and Gunnar Stevens | Interpretable Sexism Detection with Explainable Transformers |
Tommaso Colafiglio, Angela Lombardi, Paolo Sorino, Domenico Lofù, Danilo Danese, Fedelucio Narducci, Eugenio Di Sciascio and Tommaso Di Noia | Explainable Evaluation of Emotion Recognition with Low-Cost EEG: Feature Engineering and Interpretability Insights |
Demos
Daphne Lenders, Roberto Pellungrini and Fosca Giannotti | A GUI for the Fair & Explainable Selective Classifier IFAC |
Gero Szepannek | Explanation Groves – Controlling the Trade-off between the Degree of Explanation vs. its Complexity |
Enrico Sciacca, Giorgia Grasso, Damiano Verda and Enrico Ferrari | Deriving and Visualizing Predictive Rules for Disease Risk: A Transparent Approach to Medical AI |
Mahdi Dhaini, Kafaite Zahra Hussain, Efstratios Zaradoukas and Gjergji Kasneci | Which Explainability Method Should I Use? EvalxNLP: A Framework for Benchmarking Explainability Methods on NLP Models |
Rebecca Eifler and Guilhem Fouilhe | IPEXCO: A Platform for Iterative Planning with Interactive Explanations |
Christopher Lorenz Werner, Jonas Amling, Christian Dormagen and Stephan Scheele | ClustXRAI: Interactive Cluster Exploration and Explanation for Process Mining with Generative AI |
Tim Katzke, Jan Corazza, Mustafa Yalçıner, Alfio Ventura, Tim-Moritz Bündert and Emmanuel Müller | SkinSplain: An XAI Framework for Trust Calibration in Skin Lesion Analysis |
Doctoral Consortium
Ephrem T. Mekonnen | Explaining Time Series Classifiers Through Post-Hoc XAI Methods Capturing Temporal Dependencies |
Riccardo Giuseppe D’Elia | Interpretable Neural System Dynamics: Combining Deep Learning with System Dynamics Modeling to Support Critical Applications |
Gedeon Hakizimana | Extended Nomological Deductive Reasoning (eNDR) for Transparent AI Outputs |
Emily Schiller | Explaining Uncertainty: Exploring the Synergies of Explainable AI and Uncertainty Quantification |
Craig Pirie | Towards Ensemble Explanation Strategies and Solving the Disagreement Problem in XAI |
Maria Luigia Natalia De Bonis | Beyond Model Trust: Dual XAI for Adaptive and User-Centric Explainability |
Amina Mević | Explainable Virtual Metrology in Semiconductor Industry |
Haadia Amjad | Temporal Explainable AI Models for Surgery Evaluation
Anton Hummel | Human-Centered Explainable AI: Creating Explanations that Address Stakeholder Needs |
Fatima Rabia Yapicioglu | Uncertainty Considerations of Explainable AI in Data-Driven Systems |
Vahidin Hasić | Towards Explainable and Reliable Computer Vision and Pattern Recognition |
Michele Magarelli | Strengthening of the Italian Research Infrastructure for Metrology and Open Access Data in support to the Agrifood (METROFOOD-IT) |
Karim Moustafa | A Novel Approach for Benchmarking Local Binary Classification XAI Methods Using Synthetic Ground Truth Datasets
Caterina Fregosi | How to Explain in XAI? – Investigating Explanation Protocols in Decision Support Systems |