Wednesday 26 Jul 2023
8:00 am - 9:00 am (Registration - day 1)
Foyer, Sophia de Mello Breyner Andresen
9:00 am - 9:15 am (Welcome - day 1)
Foyer, Sophia de Mello Breyner Andresen
9:30 am - 11:00 am (Parallel sessions - morning - day 1)
[Interdisciplinary perspectives on xAI]
Chairs: Kevin Baum, Timo Speith
Sophia de Mello Breyner Andresen Theater
Perlocution vs Illocution: How Different Interpretations of the Act of Explaining Impact on the Evaluation of Explanations and XAI
Francesco Sovrano, Fabio Vitali
Speeding things up. Can explainability improve human learning?
Jakob Mannmeusel, Mario Rothfelder, Samaneh Khoshrou
Statutory Professions in AI governance and their consequences for explainable AI
Labhaoise NiFhaolain, Andrew Hines, Vivek Nallur
Dear XAI Community, We Need to Talk! Fundamental Misconceptions in Current XAI Research
Timo Freiesleben, Gunnar König
XAI Requirements in Smart Production Processes: A Case Study
Deborah Baum, Kevin Baum, Timo P. Gros, Verena Wolf
[Model-agnostic explanations]
Chair: Julia Herbinger
Vianna da Motta
iPDP: On Partial Dependence Plots in Dynamic Modeling Scenarios
Maximilian Muschalik, Fabian Fumagalli, Rohit Jagtani, Barbara Hammer, Eyke Hüllermeier
Algorithm-Agnostic Feature Attributions for Clustering
Christian A. Scholbeck, Henri Funk, Giuseppe Casalicchio
Feature Importance versus Feature Influence and What It Signifies for Explainable AI
Kary Främling
SAC-FACT: Soft Actor-Critic Reinforcement Learning for Counterfactual Explanations
Fatima Ezzeddine, Omran Ayoub, Davide Andreoletti, Silvia Giordano
11:00 am - 11:30 am (Coffee break - morning - day 1)
Foyer, Sophia de Mello Breyner Andresen
11:30 am - 12:30 pm (Keynote)
Explainable performance evaluation in machine learning
Prof. Peter Flach
Sophia de Mello Breyner Andresen Theater
1:00 pm - 2:30 pm (Lunch - buffet - day 1)
SALA VM - Vitorino Nemésio
2:30 pm - 5:30 pm (Parallel sessions - afternoon - day 1)
[Actionable eXplainable AI]
Chair: Grégoire Montavon
Vianna da Motta
Explainable Automated Anomaly Recognition in Failure Analysis: is Deep Learning Doing it Correctly?
Leonardo Arrighi, Sylvio Barbon Jr., Felice Andrea Pellegrino, Michele Simonato, Marco Zullich
DExT: Detector Explanation Toolkit
Deepan Chakravarthi Padmanabhan, Paul G. Plöger, Octavio Arriaga, Matias Valdenegro-Toro
Unveiling Black-boxes: Explainable Deep Learning Models for Patent Classification
Md Shajalal, Sebastian Denef, Md. Rezaul Karim, Alexander Boden, Gunnar Stevens
Propaganda Detection Robustness through Adversarial Attacks driven by eXplainable AI
Danilo Cavaliere, Mariacristina Gallo, Claudio Stanzione
[xAI for decision-making & human-AI collaboration - A]
Chair: Jasper van der Waa
Sophia de Mello Breyner Andresen Theater
PERFEX: Classifier Performance Explanations for Trustworthy AI Systems
Erwin Walraven, Ajaya Adhikari, Cor J. Veenman
Explaining Black-Boxes in Federated Learning
Luca Corbucci, Riccardo Guidotti, Anna Monreale
The Duet of Representations and How Explanations Exacerbate It
Charles Wan, Rodrigo Belo, Leid Zejnilovic, Susana Lavado
Closing the Loop: Testing ChatGPT to Generate Model Explanations to Improve Human Labelling of Sponsored Content on Social Media
Thales Bertaglia, Stefan Huber, Catalina Goanta, Gerasimos Spanakis, Adriana Iamnitchi
[Semantics and explainability]
Chair: Christophe Labreuche
Vianna da Motta
HOLMES: HOLonym-MEronym based Semantic inspection for Convolutional Image Classifiers
Francesco Dibitonto, Fabio Garcea, André Panisson, Alan Perotti, Lia Morra
Beyond One-Hot-Encoding: Injecting Semantics to Drive Image Classifiers
Alan Perotti, Simone Bertolotto, Eliana Pastor, André Panisson
Finding Spurious Correlations with Function-Semantic Contrast Analysis
Kirill Bykov, Laura Kopf, Marina M.-C. Höhne
Evaluating the Stability of Semantic Concept Representations in CNNs for Robust Explainability
Georgii Mikriukov, Gesina Schwalbe, Christian Hellert, Korinna Bade
[xAI for decision-making & human-AI collaboration – B]
Chair: Jasper van der Waa
Sophia de Mello Breyner Andresen Theater
Handling Missing Values in Local Post-hoc Explainability
Martina Cinquini, Fosca Giannotti, Riccardo Guidotti, Andrea Mattei
Necessary and Sufficient Explanations of Multi-Criteria Decision Aiding Models, with and without interacting Criteria
Christophe Labreuche, Roman Bresson
Human-Computer Interaction and Explainability: Intersection and Terminology
Arthur Picard, Yazan Mualla, Franck Gechter, Stephane Galland
Explaining Deep Reinforcement Learning-based methods for control of building HVAC systems
Javier Jiménez-Raboso, Antonio Manjavacas, Alejandro Campoy-Nieves, Miguel Molina-Solana, Juan Gómez-Romero
[Explainable AI in Finance & cybersecurity]
Chair: Paolo Giudici
Sophia de Mello Breyner Andresen Theater
Cost of Explainability in AI: an Example with Credit Scoring Models
Jean Dessain, Nora Bentaleb, Fabien Vinas
Explainable Machine Learning for Bag of Words-based Phishing Detection
Maria Carla Calzarossa, Paolo Giudici, Rasha Zieni
Lorenz Zonoids for Trustworthy AI
Paolo Giudici, Emanuela Raffinetti
Evaluating feature relevance XAI in network intrusion detection
Julian Tritscher, Maximilian Wolf, Andreas Hotho, Daniel Schlör
[xAI in biomedicine]
Chairs: Avleen Malhi, Kary Främling
Vianna da Motta
Evaluating explanations of an Alzheimer’s Disease 18F-FDG Brain PET black-box classifier
Lisa Anita De Santi, Filippo Bargagna, Maria Filomena Santarelli, Vincenzo Positano
An Evaluation of Contextual Importance and Utility for Outcome Explanation of Black-box Predictions for medical datasets
Avleen Malhi, Kary Främling
The accuracy and faithfulness of AL-DLIME (Active Learning-based Deterministic Local Interpretable Model-Agnostic Explanations): a comparison with LIME and DLIME in Medicine
Sarah Holm, Luis Macedo
Understanding Unsupervised Learning Explanations using Contextual Importance and Utility
Avleen Malhi, Vlad Apopei, Kary Främling
Thursday 27 Jul 2023
8:00 am - 9:00 am (Registration - day 2)
Foyer, Sophia de Mello Breyner Andresen
9:30 am - 11:00 am (Parallel sessions - morning A - day 2)
[Causality & Explainable AI]
Chair: Tjitze Rienstra
Vianna da Motta
Explaining Model Behavior with Global Causal Analysis
Marcel Robeer, Floris Bex, Ad Feelders, Henry Prakken
The Importance of Time in Causal Algorithmic Recourse
Isacco Beretta, Martina Cinquini
Counterfactual Explanations for Graph Classification Through the Lenses of Density
Carlo Abrate, Giulia Preti, Francesco Bonchi
Ablation Path Saliency
Justus Sagemüller, Olivier Verdier
ABC-GAN: Spatially Constrained Counterfactual Generation for Image Classification Explanations
Dimitry Mindlin, Malte Schilling, Philipp Cimiano
[Human-centered explanations for xAI]
Chair: Roberto Prevete
Maria Helena Vieira da Silva
Adding Why to What? Analyses of an Everyday Explanation
Lutz Terfloth, Michael Schaffer, Heike M. Buhl, Carsten Schulte
Concept Distillation in Graph Neural Networks
Lucie Charlotte Magister, Pietro Barbiero, Dmitry Kazhdan, Federico Siciliano, Gabriele Ciravegna, Fabrizio Silvestri, Mateja Jamnik, Pietro Liò
For Better or Worse: The Impact of Counterfactual Explanations’ Directionality on User Behavior in xAI
Ulrike Kuhl, André Artelt, Barbara Hammer
Towards a Comprehensive Human-Centred Evaluation Framework for Explainable AI
Ivania Donoso-Guzman, Jeroen Ooge, Denis Parra, Katrien Verbert
Development of a human-centred psychometric test for the evaluation of explanations produced by XAI methods
Giulia Vilone, Luca Longo
11:00 am - 11:30 am (POSTER LATE-BREAKING WORKS + coffee break - morning - day 2)
Foyer, Sophia de Mello Breyner Andresen
11:30 am - 1:00 pm (Parallel sessions - morning B - day 2)
[xAI and Natural Language Processing]
Chair: Riccardo Guidotti
Vianna da Motta
Evaluating self-attention interpretability through human-grounded experimental protocol
Milan Bhan, Nina Achache, Victor Legrand, Annabelle Blangero, Nicolas Chesneau
Opening the Black Box: Analyzing Attention Weights and Hidden States in Pre-trained Language Models for Non-language Tasks
Mohamad Ballout, Ulf Krumnack, Gunther Heidemann, Kai-Uwe Kühnberger
From Black Boxes to Conversations: Incorporating XAI in a Conversational Agent
Van Bach Nguyen, Jörg Schlötterer, Christin Seifert
Toward Inclusive Online Environments: Counterfactual-Inspired XAI for Detecting and Interpreting Hateful and Offensive Tweets
Muhammad Deedahwar Mazhar Qureshi, M. Atif Qureshi, Wael Rashwan
Understanding Interpretability: Explainable AI Approaches for Hate Speech Classifiers
Sargam Yadav, Abhishek Kaushik, Kevin McDaid
[xAI for Machine Learning on Graphs with Ontologies & Graph Neural Networks]
Chair: Alan Perotti
Maria Helena Vieira da Silva
XInsight: Revealing Model Insights for GNNs with Flow-based Explanations
Eli Laird, Ayesh Madushanka, Elfi Kraka, Corey Clark
What Will Make Misinformation Spread: An XAI Perspective
Hongbo Bo, Yiwen Wu, Zinuo You, Ryan McConville, Jun Hong, Weiru Liu
MEGAN: Multi-Explanation Graph Attention Network
Jonas Teufel, Luca Torresi, Patrick Reiser, Pascal Friederich
Quantifying the Intrinsic Usefulness of Attributional Explanations for Graph Neural Networks with Artificial Simulatability Studies
Jonas Teufel, Luca Torresi, Pascal Friederich
Evaluating Link Prediction Explanations for Graph Neural Networks
Claudio Borile, Alan Perotti, André Panisson
1:00 pm - 2:30 pm (Stand-up Lunch - day 2)
No workshops in this session.
2:30 pm - 4:30 pm (Parallel sessions - afternoon - day 2)
[Explanations for Advice-Giving Systems]
Chair: Francesco Barile
Maria Helena Vieira da Silva
Explaining Socio-demographic and Behavioral Patterns of Vaccination against the Swine Flu (H1N1) Pandemic
Clara Punzi, Aleksandra Maslennikova, Gizem Gezici, Roberto Pellungrini, Fosca Giannotti
Explaining Search Result Stances to Opinionated People
Zhangyi Wu, Tim Draws, Federico Cau, Francesco Barile, Alisa Rieger, Nava Tintarev
A Co-design Study for Multi-Stakeholder Job Recommender System Explanations
Roan Schellingerhout, Francesco Barile, Nava Tintarev
Semantic Meaningfulness: Evaluating Counterfactual Approaches for Real-World Plausibility and Feasibility
Jacqueline Höllig, Aniek F. Markus, Jef de Slegte, Prachi Bagave
[xAI for Trustworthy and Responsible AI]
Chair: Rocio Gonzalez Diaz
Vianna da Motta
Weighted Mutual Information for Out-Of-Distribution Detection
Giacomo De Bernardi, Sara Narteni, Enrico Cambiaso, Marco Muselli, Maurizio Mongelli
Leveraging Group Contrastive Explanations for Handling Fairness
Alessandro Castelnovo, Nicole Inverardi, Lorenzo Malandri, Fabio Mercorio, Mario Mezzanzanica, Andrea Seveso
LUCID-GAN: Conditional Generative Models to Locate Unfairness
Andres Algaba, Carmen Mazijn, Carina Prunkl, Jan Danckaert, Vincent Ginis
The Importance of Distrust in AI
Tobias M. Peters, Roel W. Visser
[Explainable & Interpretable AI with Argumentation and reasoning]
Chair: Lucas Rizzo
Vianna da Motta
Integrating GPT-technologies with Decision Models for Explainability
Alexandre Goossens, Jan Vanthienen
Explainable Machine Learning via Argumentation
Nicoletta Prentzas, Constantinos Pattichis, Antonis Kakas
A novel structured argumentation framework for improved explainability of classification tasks
Lucas Rizzo
Hardness of Deceptive Certificate Selection
Stephan Wäldchen
[XAI in health-care]
Chair: Ruairi O'Reilly
Maria Helena Vieira da Silva
An Interactive XAI Interface with Application in Healthcare for Non-experts
Jingyu Hu, Yizhu Liang, Weiyu Zhao, Kevin McAreavey, Weiru Liu
Federated Learning of Explainable Artificial Intelligence Models for Predicting Parkinson’s Disease Progression
Jose Luis Corcuera Barcena, Pietro Ducange, Francesco Marcelloni, Alessandro Renda, Fabrizio Ruffini
Color Shadows 2: Assessing the Impact of XAI on Diagnostic Decision-Making
Chiara Natali, Lorenzo Famiglini, Andrea Campagner, Giovanni Andrea La Maida, Enrico Gallazzi, Federico Cabitza
Selecting textural characteristics of chest X-Rays for pneumonia lesions classification with the integrated gradients XAI attribution method
Oleksandr Davydko, Vladimir Pavlov, Luca Longo
Friday 28 Jul 2023
8:00 am - 9:00 am (Registration - day 3)
Foyer, Sophia de Mello Breyner Andresen
9:30 am - 11:00 am (Parallel sessions - morning - day 3)
(Doctoral Consortium A)
Demystifying the research hypothesis
Supervised activity on personalising a research hypothesis
[Approaches and strategies for xAI]
Chair: Alessandro Renda
Sophia de Mello Breyner Andresen Theater
The Xi method: unlocking the mysteries of regression with Statistics
Valentina Ghidini
Do intermediate feature coalitions aid explainability of black-box models?
Minal Suresh Patil, Kary Främling
Strategies to exploit XAI to improve classification systems
Andrea Apicella, Luca Di Lorenzo, Francesco Isgrò, Andrea Pollastro, Roberto Prevete
Unfooling SHAP and SAGE: Knockoff Imputation for Shapley Values
Kristin Blesch, Marvin N. Wright, David Watson
Beyond Prediction Similarity: ShapGAP for Evaluating Faithful Surrogate Models in XAI
Ettore Mariotti, Adarsa Sivaprasad, Jose Maria Alonso Moral
[Methods and techniques for xAI]
Chair: Sibylle Sager
Maria Helena Vieira da Silva
Relating the Partial Dependence Plot and Permutation Feature Importance to the Data Generating Process
Christoph Molnar, Timo Freiesleben, Gunnar König, Julia Herbinger, Tim Reisinger, Giuseppe Casalicchio, Marvin N. Wright, Bernd Bischl
Sanity Checks for Saliency Methods Explaining Object Detectors
Deepan Chakravarthi Padmanabhan, Paul G. Plöger, Octavio Arriaga, Matias Valdenegro-Toro
The Co-12 Recipe for Evaluating Interpretable Part-Prototype Image Classifiers
Meike Nauta, Christin Seifert
IxDRL: A Novel Explainable Deep Reinforcement Learning Toolkit based on Analyses of Interestingness
Pedro Sequeira, Melinda Gervasio
Reason to explain: Interactive contrastive explanations (REASONX)
Laura State, Salvatore Ruggieri, Franco Turini
11:00 am - 11:30 am (Coffee break - morning - day 3)
Foyer, Sophia de Mello Breyner Andresen
11:30 am - 1:00 pm (PANEL - eXplainable Artificial Intelligence: definitions, boundaries, impacts and challenges)
xaiworldconference.com/panel-discussion/
1:00 pm - 3:00 pm (FREE TIME around)
Attendees can take a longer break and explore the "Padrão dos Descobrimentos" and the "Belém Tower" on the beautiful Tagus River, just in front of the conference venue.
https://en.wikipedia.org/wiki/Padrao_dos_Descobrimentos
https://en.wikipedia.org/wiki/Belem_Tower
3:00 pm - 4:00 pm (Parallel sessions - afternoon A - day 3)
(Doctoral Consortium B)
The research aims and objectives
Vianna da Motta
[Representational Learning and concept extraction for xAI]
Chair: Pietro Ducange
Maria Helena Vieira da Silva
An Exploration of the Latent Space of a Convolutional Variational Autoencoder for the Generation of Musical Instrument Tones
Anastasia Natsiou, Sean O’Leary, Luca Longo
Improving local fidelity of LIME by CVAE
Daisuke Yasui, Hiroshi Sato, Masao Kubo
Outcome-Guided Counterfactuals from a Jointly Trained Generative Latent Space
Eric Yeh, Pedro Sequeira, Jesse Hostetler, Melinda Gervasio
Scalable Concept Extraction in Industry 4.0
Andres Felipe Posada-Moreno, Kai Müller, Florian Brillowski, Friedrich Solowjow, Thomas Gries, Sebastian Trimpe
[xAI for time series]
Chair: Jens Lundström
Sophia de Mello Breyner Andresen Theater
State Graph Based Explanation Approach for Black-box Time Series Model
Yiran Huang, Chaofan Li, Hansen Lu, Till Riedel, Michael Beigl
A Deep Dive into Perturbations as Evaluation Technique for Time Series XAI
Udo Schlegel, Daniel A. Keim
Causal-based Spatio-Temporal Graph Neural Networks for Industrial Internet of Things Multivariate Time Series Forecasting
Amir Miraki, Austeja Dapkute, Vytautas Siozinys, Martynas Jonaitis, Reza Arghandeh
Investigating the effect of pre-processing methods on model decision-making in EEG-based person identification
Carlos Gómez Tapia, Bojan Bozic, Luca Longo
4:00 pm - 4:30 pm (Coffee break - afternoon - day 3)
Foyer, Sophia de Mello Breyner Andresen
4:30 pm - 5:30 pm (Parallel sessions - afternoon B - day 3)
(Doctoral Consortium C)
Boundaries of research: scope, assumptions, limitations and delimitations
Vianna da Motta
Questions & answers with professors & post-docs
Vianna da Motta
[Applications for xAI]
Chair: Giulia Vilone
Sophia de Mello Breyner Andresen Theater
Explainability in Practice: Estimating Electrification Rates from Mobile Phone Data in Senegal
Laura State, Hadrien Salat, Stefania Rubrichi, Zbigniew Smoreda
A Novel Architecture for Robust Explainable AI Approaches in Critical Object Detection Scenarios Based on Bayesian Neural Networks
Daniel Gierse, Felix Neuburger, Thomas Kopinski
Is LIME appropriate to explain polypharmacy prediction model?
Lynda Dib, Richard Khoury
Compare-xAI: Toward Unifying Functional Testing Methods for Post-hoc XAI Algorithms into a Multi-dimensional Benchmark
Mohamed Karim Belaid, Richard Bornemann, Maximilian Rabus, Ralf Krestel, Eyke Hüllermeier
[Surveys, benchmarks and visual representations for xAI]
Chair: Weiru Liu
Maria Helena Vieira da Silva
Natural Example-Based Explainability: a Survey
Antonin Poche, Lucas Hervier, Mohamed-Chafik Bakkay
Towards the Visualization of Aggregated Class Activation Maps to Analyse the Global Contribution of Class Features
Igor Cherepanov, David Sessler, Alex Ulmer, Hendrik Lücke-Tieke, Jörn Kohlhammer
Contrastive Visual Explanations for Reinforcement Learning via Counterfactual Rewards
Xiaowei Liu, Kevin McAreavey, Weiru Liu
Explainable Artificial Intelligence in Education: A Comprehensive Review
Blerta Abazi Chaushi, Besnik Selimi, Agron Chaushi, Marika Apostolova
7:30 pm - 10:00 pm (GALA DINNER - Conference closing + AWARDS)
SALA VM - Vitorino Nemésio
*Note: the order of articles might change slightly over time.