2nd World Conference on eXplainable Artificial Intelligence

Call for papers

Artificial intelligence has seen a significant shift in focus towards designing and developing intelligent systems that are interpretable and explainable. This shift is driven by the complexity of models built from data and by the legal requirements imposed by various national and international parliaments. It has echoed in both the research literature and the press, attracting scholars worldwide as well as a lay audience. An emerging field within AI is eXplainable Artificial Intelligence (xAI), devoted to producing intelligent systems that allow humans to understand their inferences, assessments, predictions, recommendations and decisions. Initially devoted to designing post-hoc methods for explainability, xAI is rapidly expanding its boundaries to neuro-symbolic methods for producing self-interpretable models. Research has also shifted its focus to the structure of explanations and to human-centred Artificial Intelligence, since the ultimate users of interactive technologies are humans.

The World Conference on eXplainable Artificial Intelligence is an annual event that aims to bring together researchers, academics, and professionals, promoting the sharing and discussion of knowledge, new perspectives, experiences, and innovations in the field of eXplainable Artificial Intelligence (xAI). The event is multidisciplinary and interdisciplinary, bringing together academics and scholars of different disciplines, including Computer Science, Psychology, Philosophy, Law and Social Science, to name a few, as well as industry practitioners interested in the practical, social and ethical aspects of explaining the models emerging from the discipline of Artificial Intelligence (AI).

The conference organisation encourages submissions related to eXplainable AI, with contributions from academia, industry, and other organisations discussing open challenges or novel research approaches related to the explainability and interpretability of AI systems. Topics include, but are not limited to:

Technical methods for XAI

Action Influence Graphs
Agent-based explainable systems
Ante-hoc approaches for interpretability
Argumentative-based approaches for xAI
Argumentation theory for xAI
Attention mechanisms for xAI
Automata for explaining RNN models
Auto-encoders & latent spaces explainability
Bayesian modelling for interpretability
Black-boxes vs white-boxes
Case-based explanations for AI systems
Causal inference & explanations
Constraints-based explanations
Decomposition of NNET-models for XAI
Deep learning & XAI methods
Defeasible reasoning for explainability
Evaluation approaches for XAI-based systems
Explainable methods for edge computing
Expert systems for explainability
Sample-centric & dataset-centric explanations
Explainability of signal processing methods
Finite state machines for explainability
Fuzzy systems & logic for explainability
Graph neural networks for explainability
Hybrid & transparent black box modelling
Interpreting & explaining CNNs
Interpretable representational learning
Explainability & the Semantic Web
Model-specific vs model-agnostic methods
Neuro-symbolic reasoning for XAI
Natural language processing for explanations
Ontologies & taxonomies for supporting XAI
Pruning methods with XAI
Post-hoc methods for explainability
Reinforcement learning for enhancing XAI
Reasoning under uncertainty for explanations
Rule-based XAI systems
Robotics & explainability
Self-explainable methods for XAI
Sentence embeddings to xAI semantic features
Transparent & explainable learning methods
User interfaces for explainability
Visual methods for representational learning
XAI benchmarking
XAI methods for neuroimaging & neural signals
XAI & reservoir computing

Ethical Considerations for XAI

Accountability & responsibility in XAI
Addressing user-centric requirements for XAI
Trade-off between model accuracy & interpretability
Bias & fairness of XAI systems
Explainability for discovering, improving, controlling & justifying
Moral principles & dilemmas for XAI
Explainability & data fusion
Explainability/responsibility in policy guidelines
Explainability pitfalls & dark patterns in XAI
Historical foundations of XAI
Multimodal XAI approaches
Philosophical considerations of synthetic explanations
Prevention/detection of deceptive AI explanations
Social implications of synthetic explanations
Theoretical foundations of XAI
Trust & explainable AI
The logic of scientific explanation for/in AI
Expected epistemic & moral goods for XAI
XAI for fairness checking
XAI for time series-based approaches

Psychological Notions & concepts for XAI

Algorithmic transparency & actionability
Cognitive approaches for explanations
Cognitive relief in explanations
Contrastive nature of explanations
Comprehensibility vs interpretability
Counterfactual explanations
Designing new explanation styles
Explanations for correctability
Faithfulness & intelligibility of explanations
Interpretability vs traceability
Interestingness & informativeness of explanations
Irrelevance of probabilities to explanations
Iterative dialogue explanations
Local vs global interpretability & explainability
Methods for assessing explanation quality
Non-technical explanations in AI systems
Notions and metrics of/for explainability
Persuasiveness & robustness of explanations
Psychometrics of human explanations
Qualitative approaches for explainability
Questionnaires & surveys for explainability
Scrutability & diagnosis of XAI methods
Soundness & stability of XAI methods

Social examinations of XAI

Adaptive explainable systems
Backward- & forward-looking forms of responsibility for XAI
Data provenance & explainability
Explainability for reputation
Epistemic and non-epistemic values for XAI
Human-centric explainable AI
Person-specific XAI systems
Presentation & personalisation of AI explanations for target groups
Social nature of explanations

Legal & administrative considerations of/for XAI

Black-box model auditing & explanation
Explainability in regulatory compliance
Human rights for explanations in AI systems
Policy-based systems of explanations
The potential harm of explainability in AI
Trustworthiness of XAI for clinicians/patients
XAI methods for model governance
XAI in policy development
XAI for situational awareness/compliance behaviour

Safety & security approaches for XAI

Explanations of adversarial attacks
Explanations for risk assessment
Explainability of federated learning
Explainable IoT malware detection
Privacy & agency of explanations
XAI for privacy-preserving systems
XAI techniques for stealing attacks & defence
XAI for human-AI cooperation
XAI & model output confidence estimation

Applications of XAI-based systems

Application of XAI in cognitive computing
Dialogue systems for enhancing explainability
Explainable methods for medical diagnosis
Business & marketing
XAI systems for healthcare
Explainable methods for HCI
Explainability in decision-support systems
Explainable recommender systems
Explainable methods for finance & automatic trading systems
Explainability in agricultural AI-based methods
Explainability in transportation systems
Explainability for unmanned aerial vehicles
Explainability in brain-computer interfaces
Interactive applications for XAI
Manufacturing chains & application of XAI
Models of explanations in criminology, cybersecurity & defence
XAI approaches in Industry 4.0
XAI technologies for autonomous driving
XAI methods for bioinformatics
XAI methods for linguistics/machine translation
XAI methods for neuroscience
XAI models & applications for IoT
XAI methods for terrestrial, atmospheric & ocean remote sensing
XAI in sustainable finance & climate finance
XAI in bio-signals analysis


Submitted manuscripts must be novel and must not substantially duplicate existing work. Manuscripts must be written using Springer’s Lecture Notes in Computer Science (LNCS) style, in the format provided here. LaTeX and Word files are accepted; however, the former is preferred. All submissions and reviews will be handled electronically. The conference has a no dual submission policy, so submitted manuscripts must not be under review at another publication venue.

Articles must be submitted via the EasyChair platform here.

While registering on the platform, the contact author must provide the following information: paper title, all author names, affiliations, postal address, e-mail address, and at least three keywords.

The conference does not enforce a strict page count, as we believe authors have different writing styles and may wish to present scientific material differently. However, the following types of articles are admitted:

Full articles: between 12 and 24 pages (including references)
Short articles: between 8 and 12 pages (including references)

Full articles should report on original and substantial contributions of lasting value, and the work should concern the theory and/or practice of Explainable Artificial Intelligence (xAI). Moreover, manuscripts showcasing the innovative use of xAI methods, techniques, and approaches and exploring the benefits and challenges of applying xAI-based technology in real-life applications and contexts are welcome. Evaluations of proposed solutions and applications should be commensurate with the claims made in the article. Full articles should reflect more complex innovations or studies and have a more thorough discussion of related work. Research procedures and technical methods should be presented sufficiently to ensure scrutiny and reproducibility. We recognise that user data may be proprietary or confidential; therefore, we encourage sharing (anonymized, cleaned) data sets, data collection procedures, and code. Results and findings should be communicated clearly, and implications of the contributions for xAI as a field and beyond should be explicitly discussed.
Shorter articles should generally report on advances that can be described, set into context, and evaluated concisely. These articles are not ‘work-in-progress’ reports but complete studies of smaller scope that are simple to describe. For these articles, the discussion of related work and contextualisation within the wider body of knowledge can be briefer than that of full articles.

Appendices and supplemental material

Appendices and supplemental material must be placed within the article and count towards the page limits mentioned above. In other words, everything must fit within 24 pages (for full articles) or 12 pages (for short articles).

Special track articles

Articles submitted to the special tracks follow the submission procedure of the main track and must be submitted via EasyChair, as mentioned above. The admitted article types are full and short, as described above. Authors of an article associated with a special track must select the name of that special track in the EasyChair list of topics, along with other relevant topics.

Authors commit to reviewing

By submitting to the conference, each senior author of a manuscript volunteers to be added to the pool of potential PC members/reviewers for the conference and may be asked to review manuscripts. This does not apply to authors who have already agreed to contribute to the conference in some capacity (e.g., as PC/SPC members of the main conference or special tracks, area chairs, or members of the organising committee), or to authors who are not qualified to serve on the programme committee.

Ethical & Human Subjects Considerations

The conference organisers expect authors to discuss the ethical considerations and the impact of the presented work and/or its intended application, where appropriate. Additionally, all authors must comply with ethical standards and regulatory guidelines associated with human subjects research, including the use of personally identifiable data and research involving human participants. Manuscripts reporting on human subjects research must include a statement identifying any regulatory review the research is subject to (and identifying the form of approval provided), or explaining the lack of required review.

Submission and publication of multiple articles

Each author is limited to a combined total of four submissions to the main conference track, and authors may not be added to or removed from papers after submission.

Important dates

*All dates are Anywhere on Earth (AoE) time

Articles (main track & special tracks)

Authors/title registration deadline on submission platform (EasyChair)*: March 8th, 2024 (extended)
Article upload deadline on submission platform (EasyChair)*: March 15th, 2024 (extended)
Notification of acceptance*: April 5th, 2024
Registration (payment) and camera-ready upload (EasyChair)*: April 15th, 2024 (extended)
Article presentation instructions notification: June 2024
Accepted article presentation (at xAI-2024): 17-19 July 2024
Publication (Springer CCIS series): September/October 2024
*full, short and special track articles

Late-breaking work & demos

Late-breaking work & demo author/title/abstract registration on submission platform (EasyChair): April 20th, 2024 (extended)
Late-breaking work & demo article upload deadline on submission platform (EasyChair): April 20th, 2024 (extended)
Notification of acceptance: May 5th, 2024 (extended)
Registration (payment) & late-breaking work & demo camera-ready upload (EasyChair): May 15th, 2024 (extended)
Late-breaking work & demo presentation instructions notification: June 2024
Late-breaking work & demo presentations (at xAI-2024): 17-19 July 2024
Publication (planned with CEUR-WS.org*): September/October 2024
*Proceedings shall be submitted to CEUR-WS.org for online publication

Doctoral consortium (DC) proposals

DC proposal author/title registration deadline on submission platform (EasyChair): April 16th, 2024 (extended)
DC proposal upload deadline on submission platform (EasyChair): April 26th, 2024 (extended)
Notification of acceptance: May 3rd, 2024
Registration (payment): May 7th, 2024
DC presentation and meeting instructions notification: June 2024
Doctoral consortium meeting (at xAI-2024): 17-19 July 2024
Publication (planned with CEUR-WS.org*): September/October 2024
*Proceedings shall be submitted to CEUR-WS.org for online publication

Special track proposals

Proposal submission (contact): January 17th, 2024 (extended)


The World Conference on eXplainable AI: 17-19 July 2024

Review process

The Peer-Review process

All articles submitted within the deadlines and in accordance with the guidelines will undergo a single-blind review. Authors can also opt out of disclosing their names; however, authors will not know the names of their reviewers. Papers that are out of scope, incomplete, or lacking sufficient evidence to support their basic claims may be rejected without full review. Manuscripts that do not conform to the specified formatting style will be desk-rejected. A no dual submission policy applies, and articles submitted in parallel to another conference/journal will be desk-rejected. Furthermore, reviewers will be asked to comment on whether the length is appropriate for the contribution. Each submitted article will be reviewed by at least two appropriate committee members (main/special track programme committee, late-breaking work/demo/DC committee).

After completion of the review process, the authors will be informed of the acceptance or rejection of the submitted work. The reviewers’ comments will be made available to the authors of all articles that are not desk-rejected. In case of acceptance, authors must address the recommendations for improvement and prepare and submit the definitive version of the work by the camera-ready submission deadline. If the reviewers’ recommendations are not addressed, the organising committee, the chairs and the editors reserve the right not to include the work in any of the planned conference proceedings.

The article’s final version must follow the appropriate style guide and contain the authors’ data (names, institutions and emails) and ORCID details. Submitted articles will be evaluated according to their originality, technical soundness, significance of findings, contribution to knowledge, clarity of exposition, organisation, and replicability.

Code of Ethics

Inspired by the code of ethics put forward by the Association for Computing Machinery, the programme committee, supervised by the general conference chairs and organisers, reserves the right to desk-reject manuscripts that perpetuate harmful stereotypes, employ unethical research practices, or uncritically present outcomes or implications that disadvantage minoritised communities. Furthermore, reviewers of the scientific committee will be explicitly asked to consider whether the research was conducted in compliance with professional ethical standards and applicable regulatory guidelines. Non-compliance could lead to a desk rejection.

Publication & indexing

Each accepted full or short paper (for the main and special tracks) presented at the conference, either orally or as a poster, will be included in the conference proceedings published by Springer in the Communications in Computer and Information Science (CCIS) series, edited by the general/PC chairs. At least one author of each accepted paper must pay the related fees and register for the conference by the deadline. The official publication date is the date the publisher makes the proceedings available online; this will be after the conference and may take some weeks.

Authors who wish to publish their article open access (for a fee, with Springer) should refer to this page.