{"id":4717,"date":"2024-03-29T22:26:09","date_gmt":"2024-03-29T22:26:09","guid":{"rendered":"https:\/\/xaiworldconference.com\/2024\/?post_type=mp-event&#038;p=4717"},"modified":"2025-06-27T09:11:01","modified_gmt":"2025-06-27T09:11:01","slug":"late-breaking-work-demos-coffee-break-2","status":"publish","type":"mp-event","link":"https:\/\/xaiworldconference.com\/2025\/timetable\/event\/late-breaking-work-demos-coffee-break-2\/","title":{"rendered":"Late-breaking work + Demos + DC (poster session coffee break morning)"},"content":{"rendered":"\n<p><strong>Late-breaking work<\/strong><\/p>\n\n\n\n<figure class=\"wp-block-table is-style-stripes has-small-font-size\"><table class=\"has-fixed-layout\"><tbody><tr><td>Tianyi Ren, Juampablo Heras Rivera, Hitender Oswal, Yutong Pan, Agamdeep Chopra, Jacob Ruzevick and Mehmet Kurt<\/td><td>Here Comes the Explanation: A Shapley Perspective on Multi-contrast Medical Image Segmentation<\/td><\/tr><tr><td>Md Shajalal, Md. Atabuzzaman, Alexander Boden, Gunnar Stevens and Delong Du<\/td><td>Do the Explanations Make Sense? Explainable Fake Review Identification and Users\u2019 Perspectives on Explanations<\/td><\/tr><tr><td>Prabha M. 
Kumarage and Mirka Saarela<\/td><td>Explainability in Generative Artificial Intelligence: An Umbrella Review of Current Techniques, Limitations, and Future Directions<\/td><\/tr><tr><td>Sahatova Kseniya, Johannes De Smedt and Xuefei Lu<\/td><td>Robustness Analysis of Counterfactual Explanations from Generative Models: An Empirical Study<\/td><\/tr><tr><td>Amal Saadallah<\/td><td>SHAP-Guided Regularization in Machine Learning Models<\/td><\/tr><tr><td>Pierfrancesco Novielli, Donato Romano, Michele Magarelli, Pierpaolo Di Bitonto, Roberto Bellotti and Sabina Tangaro<\/td><td>Bridging Structural and Functional Imaging: Integrated PET\/CT Radiomics with Explainable Machine Learning<\/td><\/tr><tr><td>Lukasz Sztukiewicz, Ignacy Stepka, Wilinski Michal and Jerzy Stefanowski<\/td><td>Investigating the Relationship Between Debiasing and Artifact Removal using Saliency Maps<\/td><\/tr><tr><td>Yogachandran Rahulamathavan, Misbah Farooq and Varuna De Silva<\/td><td>PFLex: Perturbation-Free Local Explanations in Language Model-Based Text Classification<\/td><\/tr><tr><td>Xenia Demetriou, Sophie Sananikone, Vojislav Westmoreland, Matthia Sabatelli and Marco Zullich<\/td><td>A Study on the Faithfulness of Feature Attribution Explanations in Pruned Vision-Based Multi-Task Learning<\/td><\/tr><tr><td>Farnoosh Javar and Kei Wakabayashi<\/td><td>Concept Bottleneck Model with Emergent Communication Framework for Explainable AI<\/td><\/tr><tr><td>Nighat Bibi<\/td><td>Multi-Level Explainability in Radiomic-Based Classification of Multiple Sclerosis and Ischemic Lesions<\/td><\/tr><tr><td>Timo P. 
Gros, David Gro\u00df, Julius Kamp, Stefan Gumhold and Joerg Hoffmann<\/td><td>Visual Analysis of Action Policy Behavior: A Case Study in Grid-World Driving<\/td><\/tr><tr><td>Sakir Furkan Y\u00f6ndem, Benedikt Schlereth-Groh and Ramin Tavakoli Kolagari<\/td><td>Bridging the Sim-to-Real Gap with Explainability for ML-based Object Detection on Sonar Data<\/td><\/tr><tr><td>Christian Dormagen, Jonas Amling, Stephan Scheele and Ute Schmid<\/td><td>Explaining Process Behavior: A Declarative Framework for Interpretable Event Data<\/td><\/tr><tr><td>Ashkan Yousefi Zadeh, Xiaomeng Li, Andry Rakotonirainy, Ronald Schroeter and Sebastien Glaser<\/td><td>PsyLingXAV: A Psycholinguistics Design Framework for XAI in Automated Vehicles<\/td><\/tr><tr><td>Jacob LaRock, Md Shajalal and Gunnar Stevens<\/td><td>Interpretable Deepfake Voice Detection: A Hybrid Deep-Learning Model and Explanation Evaluation<\/td><\/tr><tr><td>Samuele Pe, Tommaso Buonocore, Giovanna Nicora and Enea Parimbelli<\/td><td>SignalGrad-CAM: beyond image explanation<\/td><\/tr><tr><td>Alec Parise and Brian Mac Namee<\/td><td>Global Interpretability for ProtoPNet Using Rule-Based Explanations<\/td><\/tr><tr><td>Paolo Fantozzi, Paolo Junior Fantozzi, Najwa Yousef, Mathilde Casagrande, Gianluca Tenore, Antonella Polimeni, James J. 
Sciubba, Tiffany Tavares, Umberto Romeo, Ahmed Sultan and Maurizio Naldi<\/td><td>Discriminative Feature Analysis in XAI for Multi-Class Classification of Oral Lesions<\/td><\/tr><tr><td>Marija Kopanja, Milo\u0161 Savi\u0107 and Luca Longo<\/td><td>Enhancing Cost-Sensitive Tree-Based XAI Surrogate Method: Exploring Alternative Cost Matrix Formulation<\/td><\/tr><tr><td>Fatima Ezzeddine, Rinad Akel, Ihab Sbeity, Silvia Giordano, Marc Langheinrich and Omran Ayoub<\/td><td>On the interplay of Explainability, Privacy and Predictive Performance with Explanation-assisted Model Extraction<\/td><\/tr><tr><td>Franz Motzkus and Ute Schmid<\/td><td>Concepts Guide and Explain Diffusion Visual Counterfactuals<\/td><\/tr><tr><td>Fabian Beer, Dimitry Mindlin, Sebastian Kost, Isabel Krause, Katharina Schwarz, Kai Seidensticker, Philipp Cimiano and Elena Esposito<\/td><td>Dialogue-based XAI for predictive policing: a field study<\/td><\/tr><tr><td>Ilaria Vascotto, Valentina Blasone, Alex Rodriguez, Alessandro Bonaita and Luca Bortolussi<\/td><td>Assessing reliability of explanations in unbalanced datasets: a use-case on the occurrence of frost events<\/td><\/tr><tr><td>Arthur Picard, Yazan Mualla and Franck Gechter<\/td><td>Explaining in Natural Language: A Discussion on Leveraging the Reasoning Capabilities of LLMs for XAI<\/td><\/tr><tr><td>Jacek Karolczak and Jerzy Stefanowski<\/td><td>This part looks alike this: identifying important parts of explained instances and prototypes<\/td><\/tr><tr><td>Gi Woong Choi and Sang Won Bae<\/td><td>MetaCoXAI: A Conceptual Framework Leveraging Explainable AI to Enhance Computational Thinking and Metacognition in Learning Environments<\/td><\/tr><tr><td>Orfeas Menis Mastromichalakis, Jason Liartis and Giorgos Stamou<\/td><td>Beyond One-Size-Fits-All: How User Objectives Shape Counterfactual Explanations<\/td><\/tr><tr><td>Sarah Seifi, Tobias Sukianto, Cecilia Carbonelli, Lorenzo Servadei and Robert Wille<\/td><td>Learning Interpretable Rules 
from Neural Networks: Neurosymbolic AI for Radar-Based Hand Gesture Recognition<\/td><\/tr><tr><td>Vladimir Marochko and Luca Longo<\/td><td>Explainable AI for evaluation of the differences between Event-Related Potentials recorded in a traditional environment and a Virtual Reality-based environment<\/td><\/tr><tr><td>Md Shajalal, Shamima Rayhana and Gunnar Stevens<\/td><td>Interpretable Sexism Detection with Explainable Transformers<\/td><\/tr><tr><td>Tommaso Colafiglio, Angela Lombardi, Paolo Sorino, Domenico Lof\u00f9, Danilo Danese, Fedelucio Narducci, Eugenio Di Sciascio and Tommaso Di Noia<\/td><td>Explainable Evaluation of Emotion Recognition with Low-Cost EEG: Feature Engineering and Interpretability Insights<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p><strong>Demos<\/strong><\/p>\n\n\n\n<figure class=\"wp-block-table is-style-stripes has-small-font-size\"><table class=\"has-fixed-layout\"><tbody><tr><td>Daphne Lenders, Roberto Pellungrini and Fosca Giannotti<\/td><td>A GUI for the Fair &amp; Explainable Selective Classifier IFAC<\/td><\/tr><tr><td>Gero Szepannek<\/td><td>Explanation Groves \u2013 Controlling the Trade-off between the Degree of Explanation vs. its Complexity<\/td><\/tr><tr><td>Enrico Sciacca, Giorgia Grasso, Damiano Verda and Enrico Ferrari<\/td><td>Deriving and Visualizing Predictive Rules for Disease Risk: A Transparent Approach to Medical AI<\/td><\/tr><tr><td>Mahdi Dhaini, Kafaite Zahra Hussain, Efstratios Zaradoukas and Gjergji Kasneci<\/td><td>Which Explainability Method Should I Use? 
EvalxNLP: A Framework for Benchmarking Explainability Methods on NLP Models<\/td><\/tr><tr><td>Rebecca Eifler and Guilhem Fouilhe<\/td><td>IPEXCO: A Platform for Iterative Planning with Interactive Explanations<\/td><\/tr><tr><td>Christopher Lorenz Werner, Jonas Amling, Christian Dormagen and Stephan Scheele<\/td><td>ClustXRAI: Interactive Cluster Exploration and Explanation for Process Mining with Generative AI<\/td><\/tr><tr><td>Tim Katzke, Jan Corazza, Mustafa Yal\u00e7\u0131ner, Alfio Ventura, Tim-Moritz B\u00fcndert and Emmanuel M\u00fcller<\/td><td>SkinSplain: An XAI Framework for Trust Calibration in Skin Lesion Analysis<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p><strong>Doctoral Consortium<\/strong><\/p>\n\n\n\n<figure class=\"wp-block-table is-style-stripes has-small-font-size\"><table class=\"has-fixed-layout\"><tbody><tr><td>Ephrem T. Mekonnen<\/td><td>Explaining Time Series Classifiers Through Post-Hoc XAI Methods Capturing Temporal Dependencies<\/td><\/tr><tr><td>Riccardo Giuseppe D&#8217;Elia<\/td><td>Interpretable Neural System Dynamics: Combining Deep Learning with System Dynamics Modeling to Support Critical Applications<\/td><\/tr><tr><td>Gedeon Hakizimana<\/td><td>Extended Nomological Deductive Reasoning (eNDR) for Transparent AI Outputs<\/td><\/tr><tr><td>Emily Schiller<\/td><td>Explaining Uncertainty: Exploring the Synergies of Explainable AI and Uncertainty Quantification<\/td><\/tr><tr><td>Craig Pirie<\/td><td>Towards Ensemble Explanation Strategies and Solving the Disagreement Problem in XAI<\/td><\/tr><tr><td>Maria Luigia Natalia De Bonis<\/td><td>Beyond Model Trust: Dual XAI for Adaptive and User-Centric Explainability<\/td><\/tr><tr><td>Amina Mevi\u0107<\/td><td>Explainable Virtual Metrology in Semiconductor Industry<\/td><\/tr><tr><td>Haadia Amjad<\/td><td>Temporal Explainable AI Models for Surgery Evaluation Research Description<\/td><\/tr><tr><td>Anton Hummel<\/td><td>Human-Centered Explainable AI: Creating Explanations that 
Address Stakeholder Needs<\/td><\/tr><tr><td>Fatima Rabia Yapicioglu<\/td><td>Uncertainty Considerations of Explainable AI in Data-Driven Systems<\/td><\/tr><tr><td>Vahidin Hasi\u0107<\/td><td>Towards Explainable and Reliable Computer Vision and Pattern Recognition<\/td><\/tr><tr><td>Michele Magarelli<\/td><td>Strengthening of the Italian Research Infrastructure for Metrology and Open Access Data in support to the Agrifood (METROFOOD-IT)<\/td><\/tr><tr><td>Karim Moustafa<\/td><td>A Novel Approach for Benchmarking Local Binary Classification XAI Methods Using Synthetic Ground Truth Datasets<\/td><\/tr><tr><td>Caterina Fregosi<\/td><td>How to Explain in XAI? &#8211; Investigating Explanation Protocols in Decision Support Systems<\/td><\/tr><\/tbody><\/table><\/figure>\n","protected":false},"excerpt":{"rendered":"<p>Late-breaking work Tianyi Ren, Juampablo Heras Rivera, Hitender Oswal, Yutong Pan, Agamdeep Chopra, Jacob Ruzevick and Mehmet Kurt Here Comes the Explanation: A Shapley Perspective on Multi-contrast Medical Image Segmentation Md Shajalal, Md. Atabuzzaman, Alexander Boden, Gunnar Stevens and Delong Du Do the Explanations Make Sense? 
Explainable Fake Review Identification and Users\u2019 Perspectives on Explanations &hellip; <\/p>\n","protected":false},"author":2,"featured_media":0,"menu_order":0,"template":"","mp-event_category":[64],"mp-event_tag":[],"class_list":["post-4717","mp-event","type-mp-event","status-publish","hentry","mp-event_category-18th-of-july","mp-event-item"],"_links":{"self":[{"href":"https:\/\/xaiworldconference.com\/2025\/wp-json\/wp\/v2\/mp-event\/4717","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/xaiworldconference.com\/2025\/wp-json\/wp\/v2\/mp-event"}],"about":[{"href":"https:\/\/xaiworldconference.com\/2025\/wp-json\/wp\/v2\/types\/mp-event"}],"author":[{"embeddable":true,"href":"https:\/\/xaiworldconference.com\/2025\/wp-json\/wp\/v2\/users\/2"}],"wp:attachment":[{"href":"https:\/\/xaiworldconference.com\/2025\/wp-json\/wp\/v2\/media?parent=4717"}],"wp:term":[{"taxonomy":"mp-event_category","embeddable":true,"href":"https:\/\/xaiworldconference.com\/2025\/wp-json\/wp\/v2\/mp-event_category?post=4717"},{"taxonomy":"mp-event_tag","embeddable":true,"href":"https:\/\/xaiworldconference.com\/2025\/wp-json\/wp\/v2\/mp-event_tag?post=4717"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}