Late-breaking work

Tianyi Ren, Juampablo Heras Rivera, Hitender Oswal, Yutong Pan, Agamdeep Chopra, Jacob Ruzevick and Mehmet Kurt – Here Comes the Explanation: A Shapley Perspective on Multi-contrast Medical Image Segmentation
Md Shajalal, Md. Atabuzzaman, Alexander Boden, Gunnar Stevens and Delong Du – Do the Explanations Make Sense? Explainable Fake Review Identification and Users’ Perspectives on Explanations
Prabha M. Kumarage and Mirka Saarela – Explainability in Generative Artificial Intelligence: An Umbrella Review of Current Techniques, Limitations, and Future Directions
Sahatova Kseniya, Johannes De Smedt and Xuefei Lu – Robustness Analysis of Counterfactual Explanations from Generative Models: An Empirical Study
Amal Saadallah – SHAP-Guided Regularization in Machine Learning Models
Pierfrancesco Novielli, Donato Romano, Michele Magarelli, Pierpaolo Di Bitonto, Roberto Bellotti and Sabina Tangaro – Bridging Structural and Functional Imaging: Integrated PET/CT Radiomics with Explainable Machine Learning
Lukasz Sztukiewicz, Ignacy Stepka, Wilinski Michal and Jerzy Stefanowski – Investigating the Relationship Between Debiasing and Artifact Removal using Saliency Maps
Yogachandran Rahulamathavan, Misbah Farooq and Varuna De Silva – PFLex: Perturbation-Free Local Explanations in Language Model-Based Text Classification
Xenia Demetriou, Sophie Sananikone, Vojislav Westmoreland, Matthia Sabatelli and Marco Zullich – A Study on the Faithfulness of Feature Attribution Explanations in Pruned Vision-Based Multi-Task Learning
Farnoosh Javar and Kei Wakabayashi – Concept Bottleneck Model with Emergent Communication Framework for Explainable AI
Nighat Bibi – Multi-Level Explainability in Radiomic-Based Classification of Multiple Sclerosis and Ischemic Lesions
Timo P. Gros, David Groß, Julius Kamp, Stefan Gumhold and Joerg Hoffmann – Visual Analysis of Action Policy Behavior: A Case Study in Grid-World Driving
Sakir Furkan Yöndem, Benedikt Schlereth-Groh and Ramin Tavakoli Kolagari – Bridging the Sim-to-Real Gap with Explainability for ML-based Object Detection on Sonar Data
Christian Dormagen, Jonas Amling, Stephan Scheele and Ute Schmid – Explaining Process Behavior: A Declarative Framework for Interpretable Event Data
Ashkan Yousefi Zadeh, Xiaomeng Li, Andry Rakotonirainy, Ronald Schroeter and Sebastien Glaser – PsyLingXAV: A Psycholinguistics Design Framework for XAI in Automated Vehicles
Jacob LaRock, Md Shajalal and Gunnar Stevens – Interpretable Deepfake Voice Detection: A Hybrid Deep-Learning Model and Explanation Evaluation
Samuele Pe, Tommaso Buonocore, Giovanna Nicora and Enea Parimbelli – SignalGrad-CAM: Beyond Image Explanation
Alec Parise and Brian Mac Namee – Global Interpretability for ProtoPNet Using Rule-Based Explanations
Paolo Fantozzi, Paolo Junior Fantozzi, Najwa Yousef, Mathilde Casagrande, Gianluca Tenore, Antonella Polimeni, James J. Sciubba, Tiffany Tavares, Umberto Romeo, Ahmed Sultan and Maurizio Naldi – Discriminative Feature Analysis in XAI for Multi-Class Classification of Oral Lesions
Marija Kopanja, Miloš Savić and Luca Longo – Enhancing Cost-Sensitive Tree-Based XAI Surrogate Method: Exploring Alternative Cost Matrix Formulation
Fatima Ezzeddine, Rinad Akel, Ihab Sbeity, Silvia Giordano, Marc Langheinrich and Omran Ayoub – On the Interplay of Explainability, Privacy and Predictive Performance with Explanation-assisted Model Extraction
Franz Motzkus and Ute Schmid – Concepts Guide and Explain Diffusion Visual Counterfactuals
Fabian Beer, Dimitry Mindlin, Sebastian Kost, Isabel Krause, Katharina Schwarz, Kai Seidensticker, Philipp Cimiano and Elena Esposito – Dialogue-based XAI for Predictive Policing: A Field Study
Ilaria Vascotto, Valentina Blasone, Alex Rodriguez, Alessandro Bonaita and Luca Bortolussi – Assessing Reliability of Explanations in Unbalanced Datasets: A Use-Case on the Occurrence of Frost Events
Arthur Picard, Yazan Mualla and Franck Gechter – Explaining in Natural Language: A Discussion on Leveraging the Reasoning Capabilities of LLMs for XAI
Jacek Karolczak and Jerzy Stefanowski – This Part Looks Alike This: Identifying Important Parts of Explained Instances and Prototypes
Gi Woong Choi and Sang Won Bae – MetaCoXAI: A Conceptual Framework Leveraging Explainable AI to Enhance Computational Thinking and Metacognition in Learning Environments
Orfeas Menis Mastromichalakis, Jason Liartis and Giorgos Stamou – Beyond One-Size-Fits-All: How User Objectives Shape Counterfactual Explanations
Sarah Seifi, Tobias Sukianto, Cecilia Carbonelli, Lorenzo Servadei and Robert Wille – Learning Interpretable Rules from Neural Networks: Neurosymbolic AI for Radar-Based Hand Gesture Recognition
Vladimir Marochko and Luca Longo – Explainable AI for Evaluation of the Differences Between Event-Related Potentials Recorded in a Traditional Environment and a Virtual Reality-based Environment
Md Shajalal, Shamima Rayhana and Gunnar Stevens – Interpretable Sexism Detection with Explainable Transformers
Tommaso Colafiglio, Angela Lombardi, Paolo Sorino, Domenico Lofù, Danilo Danese, Fedelucio Narducci, Eugenio Di Sciascio and Tommaso Di Noia – Explainable Evaluation of Emotion Recognition with Low-Cost EEG: Feature Engineering and Interpretability Insights

Demos

Daphne Lenders, Roberto Pellungrini and Fosca Giannotti – A GUI for the Fair & Explainable Selective Classifier IFAC
Gero Szepannek – Explanation Groves: Controlling the Trade-off Between the Degree of Explanation vs. Its Complexity
Enrico Sciacca, Giorgia Grasso, Damiano Verda and Enrico Ferrari – Deriving and Visualizing Predictive Rules for Disease Risk: A Transparent Approach to Medical AI
Mahdi Dhaini, Kafaite Zahra Hussain, Efstratios Zaradoukas and Gjergji Kasneci – Which Explainability Method Should I Use? EvalxNLP: A Framework for Benchmarking Explainability Methods on NLP Models
Rebecca Eifler and Guilhem Fouilhe – IPEXCO: A Platform for Iterative Planning with Interactive Explanations
Christopher Lorenz Werner, Jonas Amling, Christian Dormagen and Stephan Scheele – ClustXRAI: Interactive Cluster Exploration and Explanation for Process Mining with Generative AI
Tim Katzke, Jan Corazza, Mustafa Yalçıner, Alfio Ventura, Tim-Moritz Bündert and Emmanuel Müller – SkinSplain: An XAI Framework for Trust Calibration in Skin Lesion Analysis

Doctoral Consortium

Ephrem T. Mekonnen – Explaining Time Series Classifiers Through Post-Hoc XAI Methods Capturing Temporal Dependencies
Riccardo Giuseppe D’Elia – Interpretable Neural System Dynamics: Combining Deep Learning with System Dynamics Modeling to Support Critical Applications
Gedeon Hakizimana – Extended Nomological Deductive Reasoning (eNDR) for Transparent AI Outputs
Emily Schiller – Explaining Uncertainty: Exploring the Synergies of Explainable AI and Uncertainty Quantification
Craig Pirie – Towards Ensemble Explanation Strategies and Solving the Disagreement Problem in XAI
Maria Luigia Natalia De Bonis – Beyond Model Trust: Dual XAI for Adaptive and User-Centric Explainability
Amina Mević – Explainable Virtual Metrology in Semiconductor Industry
Haadia Amjad – Temporal Explainable AI Models for Surgery Evaluation Research Description
Anton Hummel – Human-Centered Explainable AI: Creating Explanations that Address Stakeholder Needs
Fatima Rabia Yapicioglu – Uncertainty Considerations of Explainable AI in Data-Driven Systems
Vahidin Hasić – Towards Explainable and Reliable Computer Vision and Pattern Recognition
Michele Magarelli – Strengthening of the Italian Research Infrastructure for Metrology and Open Access Data in Support to the Agrifood (METROFOOD-IT)
Karim Moustafa – A Novel Approach for Benchmarking Local Binary Classification XAI Methods Using Synthetic Ground Truth Datasets
Caterina Fregosi – How to Explain in XAI? Investigating Explanation Protocols in Decision Support Systems