• Wednesday 26 Jul 2023
  • Thursday 27 Jul 2023
  • Friday 28 Jul 2023

Wednesday 26 Jul 2023

8:00 am - 9:00 am (Registration - day 1)

Foyer, Sophia de Mello Breyner Andresen

9:00 am - 9:15 am (Welcome - day 1)

Foyer, Sophia de Mello Breyner Andresen

9:30 am - 11:00 am (Parallel sessions - morning - day 1)

[Interdisciplinary perspectives on xAI]

Chairs: Kevin Baum, Timo Speith

Perlocution vs Illocution: How Different Interpretations of the Act of Explaining Impact on the Evaluation of Explanations and XAI

Francesco Sovrano, Fabio Vitali

Wed 9:30 am - 11:00 am
Sophia de Mello Breyner Andresen theater (SALA 7)

Speeding things up. Can explainability improve human learning?

Jakob Mannmeusel, Mario Rothfelder, and Samaneh Khoshrou

Wed 9:30 am - 11:00 am
Sophia de Mello Breyner Andresen theater (SALA 7)

Statutory Professions in AI governance and their consequences for explainable AI

Labhaoise NiFhaolain, Andrew Hines, Vivek Nallur

Wed 9:30 am - 11:00 am
Sophia de Mello Breyner Andresen theater (SALA 7)

Dear XAI Community, We Need to Talk! Fundamental Misconceptions in Current XAI Research

Timo Freiesleben, Gunnar König

Wed 9:30 am - 11:00 am
Sophia de Mello Breyner Andresen theater (SALA 7)

XAI Requirements in Smart Production Processes: A Case Study

Deborah Baum, Kevin Baum, Timo P. Gros, Verena Wolf

Wed 9:30 am - 11:00 am
Sophia de Mello Breyner Andresen theater (SALA 7)

[Model-agnostic explanations]

Chair: Julia Herbinger

iPDP: On Partial Dependence Plots in Dynamic Modeling Scenarios

Maximilian Muschalik, Fabian Fumagalli, Rohit Jagtani, Barbara Hammer, Eyke Hüllermeier

Wed 9:30 am - 11:00 am
Vianna da Motta (SALA 16)

Algorithm-Agnostic Feature Attributions for Clustering

Christian A. Scholbeck, Henri Funk, Giuseppe Casalicchio

Wed 9:30 am - 11:00 am
Vianna da Motta (SALA 16)

Feature Importance versus Feature Influence and What It Signifies for Explainable AI

Kary Främling

Wed 9:30 am - 11:00 am
Vianna da Motta (SALA 16)

SAC-FACT: Soft Actor-Critic Reinforcement Learning for Counterfactual Explanations

Fatima Ezzeddine, Omran Ayoub, Davide Andreoletti, Silvia Giordano

Wed 9:30 am - 11:00 am
Vianna da Motta (SALA 16)

11:00 am - 11:30 am (Coffee break - morning - day 1)

Foyer, Sophia de Mello Breyner Andresen

11:30 am - 12:30 pm (Keynote)

Explainable performance evaluation in machine learning

Prof. Peter Flach

Wed 11:30 am - 12:30 pm
Sophia de Mello Breyner Andresen theater (SALA 7)

1:00 pm - 2:30 pm (Lunch - buffet - day 1)

SALA VM - Vitorino Nemésio

2:30 pm - 5:30 pm (Parallel sessions - afternoon - day 1)

[Actionable eXplainable AI]

Chair: Grégoire Montavon

Explainable Automated Anomaly Recognition in Failure Analysis: is Deep Learning Doing it Correctly?

Leonardo Arrighi, Sylvio Barbon Jr., Felice Andrea Pellegrino, Michele Simonato, Marco Zullich

Wed 2:30 pm - 3:30 pm
Vianna da Motta (SALA 16)

DExT: Detector Explanation Toolkit

Deepan Chakravarthi Padmanabhan, Paul G. Plöger, Octavio Arriaga, Matias Valdenegro-Toro

Wed 2:30 pm - 3:30 pm
Vianna da Motta (SALA 16)

Unveiling Black-boxes: Explainable Deep Learning Models for Patent Classification

Md Shajalal, Sebastian Denef, Md. Rezaul Karim, Alexander Boden, Gunnar Stevens

Wed 2:30 pm - 3:30 pm
Vianna da Motta (SALA 16)

Propaganda Detection Robustness through Adversarial Attacks driven by eXplainable AI

Danilo Cavaliere, Mariacristina Gallo, and Claudio Stanzione

Wed 2:30 pm - 3:30 pm
Vianna da Motta (SALA 16)

[xAI for decision-making & human-AI collaboration - A]

Chair: Jasper van der Waa

PERFEX: Classifier Performance Explanations for Trustworthy AI Systems

Erwin Walraven, Ajaya Adhikari, Cor J. Veenman

Wed 2:30 pm - 3:30 pm
Sophia de Mello Breyner Andresen theater (SALA 7)

Explaining Black-Boxes in Federated Learning

Luca Corbucci, Riccardo Guidotti, Anna Monreale

Wed 2:30 pm - 3:30 pm
Sophia de Mello Breyner Andresen theater (SALA 7)

The Duet of Representations and How Explanations Exacerbate It

Charles Wan, Rodrigo Belo, Leid Zejnilovic, Susana Lavado

Wed 2:30 pm - 3:30 pm
Sophia de Mello Breyner Andresen theater (SALA 7)

Closing the Loop: Testing ChatGPT to Generate Model Explanations to Improve Human Labelling of Sponsored Content on Social Media

Thales Bertaglia, Stefan Huber, Catalina Goanta, Gerasimos Spanakis, Adriana Iamnitchi

Wed 2:30 pm - 3:30 pm
Sophia de Mello Breyner Andresen theater (SALA 7)

[Semantics and explainability]

Chair: Christophe Labreuche

HOLMES: HOLonym-MEronym based Semantic inspection for Convolutional Image Classifiers

Francesco Dibitonto, Fabio Garcea, André Panisson, Alan Perotti, Lia Morra

Wed 3:30 pm - 4:30 pm
Vianna da Motta (SALA 16)

Beyond One-Hot-Encoding: Injecting Semantics to Drive Image Classifiers

Alan Perotti, Simone Bertolotto, Eliana Pastor, André Panisson

Wed 3:30 pm - 4:30 pm
Vianna da Motta (SALA 16)

Finding Spurious Correlations with Function-Semantic Contrast Analysis

Kirill Bykov, Laura Kopf, Marina M.-C. Höhne

Wed 3:30 pm - 4:30 pm
Vianna da Motta (SALA 16)

Evaluating the Stability of Semantic Concept Representations in CNNs for Robust Explainability

Georgii Mikriukov, Gesina Schwalbe, Christian Hellert, Korinna Bade

Wed 3:30 pm - 4:30 pm
Vianna da Motta (SALA 16)

[xAI for decision-making & human-AI collaboration - B]

Chair: Jasper van der Waa

Handling Missing Values in Local Post-hoc Explainability

Martina Cinquini, Fosca Giannotti, Riccardo Guidotti, Andrea Mattei

Wed 3:30 pm - 4:30 pm
Sophia de Mello Breyner Andresen theater (SALA 7)

Necessary and Sufficient Explanations of Multi-Criteria Decision Aiding Models, with and without interacting Criteria

Christophe Labreuche, Roman Bresson

Wed 3:30 pm - 4:30 pm
Sophia de Mello Breyner Andresen theater (SALA 7)

Human-Computer Interaction and Explainability: Intersection and Terminology

Arthur Picard, Yazan Mualla, Franck Gechter, Stephane Galland

Wed 3:30 pm - 4:30 pm
Sophia de Mello Breyner Andresen theater (SALA 7)

Explaining Deep Reinforcement Learning-based methods for control of building HVAC systems

Javier Jiménez-Raboso, Antonio Manjavacas, Alejandro Campoy-Nieves, Miguel Molina-Solana, Juan Gómez-Romero

Wed 3:30 pm - 4:30 pm
Sophia de Mello Breyner Andresen theater (SALA 7)

[Explainable AI in Finance & cybersecurity]

Chair: Paolo Giudici

Cost of Explainability in AI: an Example with Credit Scoring Models

Jean Dessain, Nora Bentaleb, Fabien Vinas

Wed 4:30 pm - 5:30 pm
Sophia de Mello Breyner Andresen theater (SALA 7)

Explainable Machine Learning for Bag of Words-based Phishing Detection

Maria Carla Calzarossa, Paolo Giudici, Rasha Zieni

Wed 4:30 pm - 5:30 pm
Sophia de Mello Breyner Andresen theater (SALA 7)

Lorenz Zonoids for Trustworthy AI

Paolo Giudici, Emanuela Raffinetti

Wed 4:30 pm - 5:30 pm
Sophia de Mello Breyner Andresen theater (SALA 7)

Evaluating feature relevance XAI in network intrusion detection

Julian Tritscher, Maximilian Wolf, Andreas Hotho, Daniel Schlor

Wed 4:30 pm - 5:30 pm
Sophia de Mello Breyner Andresen theater (SALA 7)

[xAI in biomedicine]

Chairs: Avleen Malhi, Kary Främling

Evaluating explanations of an Alzheimer’s Disease 18F-FDG Brain PET black-box classifier

Lisa Anita De Santi, Filippo Bargagna, Maria Filomena Santarelli, Vincenzo Positano

Wed 4:30 pm - 5:30 pm
Vianna da Motta (SALA 16)

An Evaluation of Contextual Importance and Utility for Outcome Explanation of Black-box Predictions for medical datasets

Avleen Malhi, Kary Främling

Wed 4:30 pm - 5:30 pm
Vianna da Motta (SALA 16)

The accuracy and faithfulness of AL-DLIME (Active Learning-based Deterministic Local Interpretable Model-Agnostic Explanations): a comparison with LIME and DLIME in Medicine

Sarah Holm, Luis Macedo

Wed 4:30 pm - 5:30 pm
Vianna da Motta (SALA 16)

Understanding Unsupervised Learning Explanations using Contextual Importance and Utility

Avleen Malhi, Vlad Apopei, Kary Främling

Wed 4:30 pm - 5:30 pm
Vianna da Motta (SALA 16)

Thursday 27 Jul 2023

8:00 am - 9:00 am (Registration - day 2)

Foyer, Sophia de Mello Breyner Andresen

9:30 am - 11:00 am (Parallel sessions - morning A - day 2)

[Causality & Explainable AI]

Chair: Tjitze Rienstra

Explaining Model Behavior with Global Causal Analysis

Marcel Robeer, Floris Bex, Ad Feelders, Henry Prakken

Thu 9:30 am - 11:00 am
Vianna da Motta (SALA 16)

The Importance of Time in Causal Algorithmic Recourse

Isacco Beretta, Martina Cinquini

Thu 9:30 am - 11:00 am
Vianna da Motta (SALA 16)

Counterfactual Explanations for Graph Classification Through the Lenses of Density

Carlo Abrate, Giulia Preti, Francesco Bonchi

Thu 9:30 am - 11:00 am
Vianna da Motta (SALA 16)

Ablation Path Saliency

Justus Sagemuller, Olivier Verdier

Thu 9:30 am - 11:00 am
Vianna da Motta (SALA 16)

ABC-GAN: Spatially Constrained Counterfactual Generation for Image Classification Explanations

Dimitry Mindlin, Malte Schilling, Philipp Cimiano

Thu 9:30 am - 11:00 am
Vianna da Motta (SALA 16)

[Human-centered explanations for xAI]

Chair: Roberto Prevete

Adding Why to What? Analyses of an Everyday Explanation

Lutz Terfloth, Michael Schaffer, Heike M. Buhl, Carsten Schulte

Thu 9:30 am - 11:00 am
Maria Helena Vieira da Silva (SALA 9)

Concept Distillation in Graph Neural Networks

Lucie Charlotte Magister, Pietro Barbiero, Dmitry Kazhdan, Federico Siciliano, Gabriele Ciravegna, Fabrizio Silvestri, Mateja Jamnik, Pietro Lio

Thu 9:30 am - 11:00 am
Maria Helena Vieira da Silva (SALA 9)

For Better or Worse: The Impact of Counterfactual Explanations’ Directionality on User Behavior in xAI

Ulrike Kuhl, Andre Artelt, Barbara Hammer

Thu 9:30 am - 11:00 am
Maria Helena Vieira da Silva (SALA 9)

Towards a Comprehensive Human-Centred Evaluation Framework for Explainable AI

Ivania Donoso-Guzman, Jeroen Ooge, Denis Parra, Katrien Verbert

Thu 9:30 am - 11:00 am
Maria Helena Vieira da Silva (SALA 9)

Development of a human-centred psychometric test for the evaluation of explanations produced by XAI methods

Giulia Vilone, Luca Longo

Thu 9:30 am - 11:00 am
Maria Helena Vieira da Silva (SALA 9)

11:00 am - 11:30 am (POSTER LATE-BREAKING WORKS + coffee break - morning - day 2)

Foyer, Sophia de Mello Breyner Andresen

11:30 am - 1:00 pm (Parallel sessions - morning B - day 2)

[xAI and Natural Language Processing]

Chair: Riccardo Guidotti

Evaluating self-attention interpretability through human-grounded experimental protocol

Milan Bhan, Nina Achache, Victor Legrand, Annabelle Blangero, Nicolas Chesneau

Thu 11:30 am - 1:00 pm
Vianna da Motta (SALA 16)

Opening the Black Box: Analyzing Attention Weights and Hidden States in Pre-trained Language Models for Non-language Tasks

Mohamad Ballout, Ulf Krumnack, Gunther Heidemann, Kai-Uwe Kuhnberger

Thu 11:30 am - 1:00 pm
Vianna da Motta (SALA 16)

From Black Boxes to Conversations: Incorporating XAI in a Conversational Agent

Van Bach Nguyen, Jörg Schlötterer, Christin Seifert

Thu 11:30 am - 1:00 pm
Vianna da Motta (SALA 16)

Toward Inclusive Online Environments: Counterfactual-Inspired XAI for Detecting and Interpreting Hateful and Offensive Tweets

Muhammad Deedahwar Mazhar Qureshi, M. Atif Qureshi, Wael Rashwan

Thu 11:30 am - 1:00 pm
Vianna da Motta (SALA 16)

Understanding Interpretability: Explainable AI Approaches for Hate Speech Classifiers

Sargam Yadav, Abhishek Kaushik, Kevin McDaid

Thu 11:30 am - 1:00 pm
Vianna da Motta (SALA 16)

[xAI for Machine Learning on Graphs with Ontologies & Graph Neural Networks]

Chair: Alan Perotti

XInsight: Revealing Model Insights for GNNs with Flow-based Explanations

Eli Laird, Ayesh Madushanka, Elfi Kraka, Corey Clark

Thu 11:30 am - 1:00 pm
Maria Helena Vieira da Silva (SALA 9)

What Will Make Misinformation Spread: An XAI Perspective

Hongbo Bo, Yiwen Wu, Zinuo You, Ryan McConville, Jun Hong, Weiru Liu

Thu 11:30 am - 1:00 pm
Maria Helena Vieira da Silva (SALA 9)

MEGAN: Multi-Explanation Graph Attention Network

Jonas Teufel, Luca Torresi, Patrick Reiser, Pascal Friederich

Thu 11:30 am - 1:00 pm
Maria Helena Vieira da Silva (SALA 9)

Quantifying the Intrinsic Usefulness of Attributional Explanations for Graph Neural Networks with Artificial Simulatability Studies

Jonas Teufel, Luca Torresi, Pascal Friederich

Thu 11:30 am - 1:00 pm
Maria Helena Vieira da Silva (SALA 9)

Evaluating Link Prediction Explanations for Graph Neural Networks

Claudio Borile, Alan Perotti, and Andre Panisson

Thu 11:30 am - 1:00 pm
Maria Helena Vieira da Silva (SALA 9)

1:00 pm - 2:30 pm (Stand-up Lunch - day 2)

No workshops in this session.

2:30 pm - 4:30 pm (Parallel sessions - afternoon - day 2)

[Explanations for Advice-Giving Systems]

Chair: Francesco Barile

Explaining Socio-demographic and Behavioral Patterns of Vaccination against the Swine Flu (H1N1) Pandemic

Clara Punzi, Aleksandra Maslennikova, Gizem Gezici, Roberto Pellungrini, Fosca Giannotti

Thu 2:30 pm - 3:30 pm
Maria Helena Vieira da Silva (SALA 9)

Explaining Search Result Stances to Opinionated People

Zhangyi Wu, Tim Draws, Federico Cau, Francesco Barile, Alisa Rieger, and Nava Tintarev

Thu 2:30 pm - 3:30 pm
Maria Helena Vieira da Silva (SALA 9)

A Co-design Study for Multi-Stakeholder Job Recommender System Explanations

Roan Schellingerhout, Francesco Barile, and Nava Tintarev

Thu 2:30 pm - 3:30 pm
Maria Helena Vieira da Silva (SALA 9)

Semantic Meaningfulness: Evaluating Counterfactual Approaches for Real-World Plausibility and Feasibility

Jacqueline Hollig, Aniek F. Markus, Jef de Slegte, Prachi Bagave

Thu 2:30 pm - 3:30 pm
Maria Helena Vieira da Silva (SALA 9)

[xAI for Trustworthy and Responsible AI]

Chair: Rocio Gonzalez Diaz

Weighted Mutual Information for Out-Of-Distribution Detection

Giacomo De Bernardi, Sara Narteni, Enrico Cambiaso, Marco Muselli, Maurizio Mongelli

Thu 2:30 pm - 3:30 pm
Vianna da Motta (SALA 16)

Leveraging Group Contrastive Explanations for Handling Fairness

Alessandro Castelnovo, Nicole Inverardi, Lorenzo Malandri, Fabio Mercorio, Mario Mezzanzanica, Andrea Seveso

Thu 2:30 pm - 3:30 pm
Vianna da Motta (SALA 16)

LUCID-GAN: Conditional Generative Models to Locate Unfairness

Andres Algaba, Carmen Mazijn, Carina Prunkl, Jan Danckaert, Vincent Ginis

Thu 2:30 pm - 3:30 pm
Vianna da Motta (SALA 16)

The Importance of Distrust in AI

Tobias M. Peters, Roel W. Visser

Thu 2:30 pm - 3:30 pm
Vianna da Motta (SALA 16)

[Explainable & Interpretable AI with Argumentation and reasoning]

Chair: Lucas Rizzo

Integrating GPT-technologies with Decision Models for Explainability

Alexandre Goossens, Jan Vanthienen

Thu 3:30 pm - 4:30 pm
Vianna da Motta (SALA 16)

Explainable Machine Learning via Argumentation

Nicoletta Prentzas, Constantinos Pattichis, Antonis Kakas

Thu 3:30 pm - 4:30 pm
Vianna da Motta (SALA 16)

A novel structured argumentation framework for improved explainability of classification tasks

Lucas Rizzo

Thu 3:30 pm - 4:30 pm
Vianna da Motta (SALA 16)

Hardness of Deceptive Certificate Selection

Stephan Waldchen

Thu 3:30 pm - 4:30 pm
Vianna da Motta (SALA 16)

[XAI in health-care]

Chair: Ruairi O'Reilly

An Interactive XAI Interface with Application in Healthcare for Non-experts

Jingyu Hu, Yizhu Liang, Weiyu Zhao, Kevin McAreavey, and Weiru Liu

Thu 3:30 pm - 4:30 pm
Maria Helena Vieira da Silva (SALA 9)

Federated Learning of Explainable Artificial Intelligence Models for Predicting Parkinson’s Disease Progression

Jose Luis Corcuera Barcena, Pietro Ducange, Francesco Marcelloni, Alessandro Renda, and Fabrizio Ruffini

Thu 3:30 pm - 4:30 pm
Maria Helena Vieira da Silva (SALA 9)

Color Shadows 2: Assessing the Impact of XAI on Diagnostic Decision-Making

Chiara Natali, Lorenzo Famiglini, Andrea Campagner, Giovanni Andrea La Maida, Enrico Gallazzi, and Federico Cabitza

Thu 3:30 pm - 4:30 pm
Maria Helena Vieira da Silva (SALA 9)

Selecting textural characteristics of chest X-Rays for pneumonia lesions classification with the integrated gradients XAI attribution method

Oleksandr Davydko, Vladimir Pavlov, and Luca Longo

Thu 3:30 pm - 4:30 pm
Maria Helena Vieira da Silva (SALA 9)

Friday 28 Jul 2023

8:00 am - 9:00 am (Registration - day 3)

Foyer, Sophia de Mello Breyner Andresen

9:30 am - 11:00 am (Parallel sessions - morning - day 3)

(Doctoral Consortium A)

Demystifying research hypothesis

Fri 9:30 am - 11:00 am
Vianna da Motta (SALA 16)

Supervised activity on personalising a research hypothesis

Fri 9:30 am - 11:00 am
Vianna da Motta (SALA 16)

[Approaches and strategies for xAI]

Chair: Alessandro Renda

The Xi method: unlocking the mysteries of regression with Statistics

Valentina Ghidini

Fri 9:30 am - 11:00 am
Sophia de Mello Breyner Andresen theater (SALA 7)

Do intermediate feature coalitions aid explainability of black-box models?

Minal Suresh Patil, Kary Främling

Fri 9:30 am - 11:00 am
Sophia de Mello Breyner Andresen theater (SALA 7)

Strategies to exploit XAI to improve classification systems

Andrea Apicella, Luca Di Lorenzo, Francesco Isgro, Andrea Pollastro, Roberto Prevete

Fri 9:30 am - 11:00 am
Sophia de Mello Breyner Andresen theater (SALA 7)

Unfooling SHAP and SAGE: Knockoff Imputation for Shapley Values

Kristin Blesch, Marvin N. Wright, David Watson

Fri 9:30 am - 11:00 am
Sophia de Mello Breyner Andresen theater (SALA 7)

Beyond Prediction Similarity: ShapGAP for Evaluating Faithful Surrogate Models in XAI

Ettore Mariotti, Adarsa Sivaprasad, Jose Maria Alonso Moral

Fri 9:30 am - 11:00 am
Sophia de Mello Breyner Andresen theater (SALA 7)

[Methods and techniques for xAI]

Chair: Sibylle Sager

Relating the Partial Dependence Plot and Permutation Feature Importance to the Data Generating Process

Christoph Molnar, Timo Freiesleben, Gunnar König, Julia Herbinger, Tim Reisinger, Giuseppe Casalicchio, Marvin N. Wright, Bernd Bischl

Fri 9:30 am - 11:00 am
Maria Helena Vieira da Silva (SALA 9)

Sanity Checks for Saliency Methods Explaining Object Detectors

Deepan Chakravarthi Padmanabhan, Paul G. Plöger, Octavio Arriaga, Matias Valdenegro-Toro

Fri 9:30 am - 11:00 am
Maria Helena Vieira da Silva (SALA 9)

The Co-12 Recipe for Evaluating Interpretable Part-Prototype Image Classifiers

Meike Nauta, Christin Seifert

Fri 9:30 am - 11:00 am
Maria Helena Vieira da Silva (SALA 9)

IxDRL: A Novel Explainable Deep Reinforcement Learning Toolkit based on Analyses of Interestingness

Pedro Sequeira, Melinda Gervasio

Fri 9:30 am - 11:00 am
Maria Helena Vieira da Silva (SALA 9)

Reason to explain: Interactive contrastive explanations (REASONX)

Laura State, Salvatore Ruggieri, Franco Turini

Fri 9:30 am - 11:00 am
Maria Helena Vieira da Silva (SALA 9)

11:00 am - 11:30 am (Coffee break - morning - day 3)

Foyer, Sophia de Mello Breyner Andresen

11:30 am - 1:00 pm (PANEL - eXplainable Artificial Intelligence: definitions, boundaries, impacts and challenges)

xaiworldconference.com/panel-discussion/

1:00 pm - 3:00 pm (Free time - day 3)

Attendees can take a longer break and explore the "Padrão dos Descobrimentos" and the "Belém Tower" on the banks of the Tagus River, just in front of the conference venue.

https://en.wikipedia.org/wiki/Padrao_dos_Descobrimentos
https://en.wikipedia.org/wiki/Belem_Tower

3:00 pm - 4:00 pm (Parallel sessions - afternoon A - day 3)

(Doctoral Consortium B)

The research aims and objectives

Fri 3:00 pm - 4:00 pm
Vianna da Motta (SALA 16)

[Representational Learning and concept extraction for xAI]

Chair: Pietro Ducange

An Exploration of the Latent Space of a Convolutional Variational Autoencoder for the Generation of Musical Instrument Tones

Anastasia Natsiou, Sean O’Leary, and Luca Longo

Fri 3:00 pm - 4:00 pm
Maria Helena Vieira da Silva (SALA 9)

Improving local fidelity of LIME by CVAE

Daisuke Yasui, Hiroshi Sato, Masao Kubo

Fri 3:00 pm - 4:00 pm
Maria Helena Vieira da Silva (SALA 9)

Outcome-Guided Counterfactuals from a Jointly Trained Generative Latent Space

Eric Yeh, Pedro Sequeira, Jesse Hostetler, Melinda Gervasio

Fri 3:00 pm - 4:00 pm
Maria Helena Vieira da Silva (SALA 9)

Scalable Concept Extraction in Industry 4.0

Andres Felipe Posada-Moreno, Kai Muller, Florian Brillowski, Friedrich Solowjow, Thomas Gries, Sebastian Trimpe

Fri 3:00 pm - 4:00 pm
Maria Helena Vieira da Silva (SALA 9)

[xAI for time series]

Chair: Jens Lundström

State Graph Based Explanation Approach for Black-box Time Series Model

Yiran Huang, Chaofan Li, Hansen Lu, Till Riedel, Michael Beigl

Fri 3:00 pm - 4:00 pm
Sophia de Mello Breyner Andresen theater (SALA 7)

A Deep Dive into Perturbations as Evaluation Technique for Time Series XAI

Udo Schlegel, Daniel A. Keim

Fri 3:00 pm - 4:00 pm
Sophia de Mello Breyner Andresen theater (SALA 7)

Causal-based Spatio-Temporal Graph Neural Networks for Industrial Internet of Things Multivariate Time Series Forecasting

Amir Miraki, Austeja Dapkute, Vytautas Siozinys, Martynas Jonaitis, Reza Arghandeh

Fri 3:00 pm - 4:00 pm
Sophia de Mello Breyner Andresen theater (SALA 7)

Investigating the effect of pre-processing methods on model decision-making in EEG-based person identification

Carlos Gómez Tapia, Bojan Bozic, Luca Longo

Fri 3:00 pm - 4:00 pm
Sophia de Mello Breyner Andresen theater (SALA 7)

4:00 pm - 4:30 pm (Coffee break - afternoon - day 3)

Foyer, Sophia de Mello Breyner Andresen

4:30 pm - 5:30 pm (Parallel sessions - afternoon B - day 3)

(Doctoral Consortium C)

Boundaries of research: scope, assumptions, limitations and delimitations

Fri 4:30 pm - 5:30 pm
Vianna da Motta (SALA 16)

Questions & answers with professors & post-docs

Fri 4:30 pm - 5:30 pm
Vianna da Motta (SALA 16)

[Applications for xAI]

Chair: Giulia Vilone

Explainability in Practice: Estimating Electrification Rates from Mobile Phone Data in Senegal

Laura State, Hadrien Salat, Stefania Rubrichi, Zbigniew Smoreda

Fri 4:30 pm - 5:30 pm
Sophia de Mello Breyner Andresen theater (SALA 7)

A Novel Architecture for Robust Explainable AI Approaches in Critical Object Detection Scenarios Based on Bayesian Neural Networks

Daniel Gierse, Felix Neuburger, Thomas Kopinski

Fri 4:30 pm - 5:30 pm
Sophia de Mello Breyner Andresen theater (SALA 7)

Is LIME appropriate to explain a polypharmacy prediction model?

Lynda Dib, Richard Khoury

Fri 4:30 pm - 5:30 pm
Sophia de Mello Breyner Andresen theater (SALA 7)

Compare-xAI: Toward Unifying Functional Testing Methods for Post-hoc XAI Algorithms into a Multi-dimensional Benchmark

Mohamed Karim Belaid, Richard Bornemann, Maximilian Rabus, Ralf Krestel, Eyke Hüllermeier

Fri 4:30 pm - 5:30 pm
Sophia de Mello Breyner Andresen theater (SALA 7)

[Surveys, benchmarks and visual representations for xAI]

Chair: Weiru Liu

Natural Example-Based Explainability: a Survey

Antonin Poche, Lucas Hervier, Mohamed-Chafik Bakkay

Fri 4:30 pm - 5:30 pm
Maria Helena Vieira da Silva (SALA 9)

Towards the Visualization of Aggregated Class Activation Maps to Analyse the Global Contribution of Class Features

Igor Cheperanov, David Sessler, Alex Ulmer, Hendrik Lücke-Tieke, Jörn Kohlhammer

Fri 4:30 pm - 5:30 pm
Maria Helena Vieira da Silva (SALA 9)

Contrastive Visual Explanations for Reinforcement Learning via Counterfactual Rewards

Xiaowei Liu, Kevin McAreavey, Weiru Liu

Fri 4:30 pm - 5:30 pm
Maria Helena Vieira da Silva (SALA 9)

Explainable Artificial Intelligence in Education: A Comprehensive Review

Blerta Abazi Chaushi, Besnik Selimi, Agron Chaushi, Marika Apostolova

Fri 4:30 pm - 5:30 pm
Maria Helena Vieira da Silva (SALA 9)

7:30 pm - 10:00 pm (GALA DINNER - Conference closing + AWARDS)

SALA VM - Vitorino Nemésio

*Note: the order of articles might change slightly over time.