{"id":593,"date":"2022-12-23T11:48:20","date_gmt":"2022-12-23T11:48:20","guid":{"rendered":"http:\/\/xaiworldconference.org\/?page_id=593"},"modified":"2023-10-05T21:00:56","modified_gmt":"2023-10-05T21:00:56","slug":"call-for-papers","status":"publish","type":"page","link":"https:\/\/xaiworldconference.com\/2023\/call-for-papers\/","title":{"rendered":"Call for papers"},"content":{"rendered":"\n<h3 class=\"wp-block-heading has-text-align-center has-large-font-size\"><strong>1st International Conference on eXplainable Artificial Intelligence (xAI 2023)<\/strong><\/h3>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"75\" height=\"72\" src=\"https:\/\/xaiworldconference.com\/2023\/wp-content\/uploads\/2022\/12\/XAIConferenceIcon.png\" alt=\"\" class=\"wp-image-409\"\/><\/figure>\n<\/div>\n\n\n<p class=\"has-text-align-center has-vivid-cyan-blue-color has-text-color has-medium-font-size\"><strong>Call for papers<\/strong><\/p>\n\n\n\n<p class=\"has-text-align-center has-medium-font-size\">(26\/28 July 2023, Lisbon, Portugal)<\/p>\n\n\n\n<p style=\"font-size:14px\">Artificial intelligence has seen a significant shift in focus towards designing and developing intelligent systems that are interpretable and explainable. This is due to the complexity of the models, built from data, and the legal requirements imposed by various national and international parliaments. This has echoed both in the research literature and in the press, attracting scholars from around the world and a lay audience. An emerging field with AI is <strong>eXplainable Artificial Intelligence (xAI)<\/strong>, devoted to the production of intelligent systems that allow humans to understand their inferences, assessments, prediction, recommendation and decisions. 
Initially devoted to designing post-hoc methods for explainability, <strong>eXplainable Artificial Intelligence (xAI)<\/strong> is rapidly expanding its boundaries to neuro-symbolic methods for producing self-interpretable models. Research focus has also shifted to the structure of explanations and to human-centred Artificial Intelligence, since the ultimate users of interactive technologies are humans. <\/p>\n\n\n\n<p style=\"font-size:14px\"><strong>The&nbsp;World Conference on Explainable Artificial Intelligence (xAI 2023)<\/strong> is an annual event that aims to bring together researchers, academics, and professionals, promoting the sharing and discussion of knowledge, new perspectives, experiences, and innovations in the field of Explainable Artificial Intelligence (xAI). This event is multidisciplinary and interdisciplinary, bringing together academics and scholars of different disciplines, including Computer Science, Psychology, Philosophy, Law and Social Science, to mention a few, and industry practitioners interested in the practical, social and ethical aspects of the explanation of the models emerging from the discipline of Artificial Intelligence (AI).<\/p>\n\n\n\n<p style=\"font-size:14px\"><strong>xAI 2023<\/strong> encourages submissions related to eXplainable AI and contributions from academia, industry, and other organizations discussing open challenges or novel research approaches related to the explainability and interpretability of AI systems. 
Topics include, but are not limited to:<\/p>\n\n\n\n<p class=\"has-small-font-size\"><strong>Technical methods for XAI<\/strong><\/p>\n\n\n\n<figure class=\"wp-block-table is-style-stripes\" style=\"font-size:12px\"><table class=\"has-fixed-layout\"><tbody><tr><td>Action Influence Graphs<\/td><td>Agent-based explainable systems<\/td><td>Ante-hoc approaches for interpretability<\/td><\/tr><tr><td>Argumentative-based approaches for xAI<\/td><td>Argumentation theory for xAI<\/td><td>Attention mechanisms for xAI<\/td><\/tr><tr><td>Automata for explaining RNN models<\/td><td>Auto-encoders &amp; latent spaces explainability<\/td><td>Bayesian modelling for interpretability<\/td><\/tr><tr><td>Black-boxes vs white-boxes<\/td><td>Case-based explanations for AI systems<\/td><td>Causal inference &amp; explanations<\/td><\/tr><tr><td>Constraints-based explanations<\/td><td>Decomposition of NNET-models for XAI<\/td><td>Deep learning &amp; XAI methods<\/td><\/tr><tr><td>Defeasible reasoning for explainability<\/td><td>Evaluation approaches for XAI-based systems<\/td><td>Explainable methods for edge computing<\/td><\/tr><tr><td>Expert systems for explainability<\/td><td>Explainability &amp; the semantic web<\/td><td>Explainability of signal processing methods<\/td><\/tr><tr><td>Finite state machines for explainability<\/td><td>Fuzzy systems &amp; logic for explainability<\/td><td>Graph neural networks for explainability<\/td><\/tr><tr><td>Hybrid &amp; transparent black box modelling<\/td><td>Interpreting &amp; explaining CNN Networks<\/td><td>Interpretable representational learning<\/td><\/tr><tr><td>Methods for latent spaces interpretations<\/td><td>Model-specific vs model-agnostic methods <\/td><td>Neuro-symbolic reasoning for XAI<\/td><\/tr><tr><td>Natural language processing for explanations<\/td><td>Ontologies &amp; taxonomies for supporting XAI<\/td><td>Pruning methods with XAI<\/td><\/tr><tr><td>Post-hoc methods for explainability<\/td><td>Reinforcement learning for 
enhancing XAI<\/td><td>Reasoning under uncertainty for explanations<\/td><\/tr><tr><td>Rule-based XAI systems<\/td><td>Robotics &amp; explainability<\/td><td>Sample-centric &amp; Dataset-centric explanations<\/td><\/tr><tr><td>Self-explainable methods for XAI<\/td><td>Sentence embeddings to xAI semantic features<\/td><td>Transparent &amp; explainable learning methods<\/td><\/tr><tr><td>User interfaces for explainability<\/td><td>Visual methods for representational learning<\/td><td>XAI Benchmarking<\/td><\/tr><tr><td>XAI methods for neuroimaging &amp; neural signals<\/td><td>XAI &amp; reservoir computing<\/td><td><\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p class=\"has-small-font-size\"><strong>Ethical considerations for XAI<\/strong><\/p>\n\n\n\n<figure class=\"wp-block-table is-style-stripes\" style=\"font-size:12px\"><table class=\"has-fixed-layout\"><tbody><tr><td>Accountability &amp; responsibility in XAI<\/td><td>Addressing user-centric requirements for XAI<\/td><td>Trade-off model accuracy &amp; interpretability<\/td><\/tr><tr><td>Explainable Bias &amp; fairness of XAI systems<\/td><td>Explainability for discovering, improving, controlling &amp; justifying<\/td><td>Explainability as prerequisite for responsible AI<\/td><\/tr><tr><td>Explainability &amp; data fusion<\/td><td>Explainability\/responsibility in policy guidelines<\/td><td>Explainability pitfalls &amp; dark patterns in XAI<\/td><\/tr><tr><td>Historical foundations of XAI<\/td><td>Moral principles &amp; dilemma for XAI<\/td><td>Multimodal XAI approaches<\/td><\/tr><tr><td>Philosophical consideration of synthetic explanations<\/td><td>Prevention\/detection of deceptive AI explanations<\/td><td>Social implications of synthetic explanations<\/td><\/tr><tr><td>Theoretical foundations of XAI<\/td><td>Trust &amp; explainable AI<\/td><td>The logic of scientific explanation for\/in AI<\/td><\/tr><tr><td>Expected epistemic &amp; moral goods for XAI <\/td><td>XAI for fairness checking<\/td><td>XAI for 
time series-based approaches<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p class=\"has-small-font-size\"><strong>Psychological notions &amp; concepts for XAI<\/strong><\/p>\n\n\n\n<figure class=\"wp-block-table is-style-stripes\" style=\"font-size:12px\"><table class=\"has-fixed-layout\"><tbody><tr><td>Algorithmic transparency &amp; actionability<\/td><td>Cognitive approaches for explanations<\/td><td>Cognitive relief in explanations<\/td><\/tr><tr><td>Contrastive nature of explanations<\/td><td>Comprehensibility vs interpretability<\/td><td>Counterfactual explanations<\/td><\/tr><tr><td>Designing new explanation styles<\/td><td>Explanations for correctability<\/td><td>Faithfulness &amp; intelligibility of explanations<\/td><\/tr><tr><td>Interpretability vs traceability<\/td><td>Interestingness &amp; informativeness of explanations<\/td><td>Irrelevance of probabilities to explanations<\/td><\/tr><tr><td>Iterative dialogue explanations<\/td><td>Justification &amp; explanations in AI systems<\/td><td>Local vs global interpretability &amp; explainability<\/td><\/tr><tr><td>Methods for assessing explanation quality<\/td><td>Non-technical explanations in AI systems<\/td><td>Notions and metrics of\/for explainability<\/td><\/tr><tr><td>Persuasiveness &amp; robustness of explanations<\/td><td>Psychometrics of human explanations<\/td><td>Qualitative approaches for explainability<\/td><\/tr><tr><td>Questionnaires &amp; surveys for explainability<\/td><td>Scrutability &amp; diagnosis of XAI methods<\/td><td>Soundness &amp; stability of XAI methods<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p class=\"has-small-font-size\"><strong>Social examinations of XAI<\/strong><\/p>\n\n\n\n<figure class=\"wp-block-table is-style-stripes\" style=\"font-size:12px\"><table class=\"has-fixed-layout\"><tbody><tr><td>Adaptive explainable systems<\/td><td>Backwards &amp; forward-looking responsibility forms to XAI<\/td><td>Data provenance &amp; 
explainability<\/td><\/tr><tr><td>Explainability for reputation<\/td><td>Epistemic and non-epistemic values for XAI<\/td><td>Human-centric explainable AI<\/td><\/tr><tr><td>Person-specific XAI systems<\/td><td>Presentation &amp; personalization of AI explanations for target groups<\/td><td>Social nature of explanations<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p class=\"has-small-font-size\"><strong>Legal &amp; administrative considerations of\/for XAI<\/strong><\/p>\n\n\n\n<figure class=\"wp-block-table is-style-stripes\" style=\"font-size:11px\"><table class=\"has-fixed-layout\"><tbody><tr><td>Black-box model auditing &amp; explanation<\/td><td>Explainability in regulatory compliance<\/td><td>Human rights for explanations in AI systems<\/td><\/tr><tr><td>Policy-based systems of explanations<\/td><td>The potential harm of explainability in AI<\/td><td>Trustworthiness of XAI for clinicians\/patients<\/td><\/tr><tr><td>XAI methods for model governance<\/td><td>XAI in policy development<\/td><td>XAI for situational awareness\/compliance behavior<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p class=\"has-small-font-size\"><strong>Safety &amp; security approaches for XAI<\/strong><\/p>\n\n\n\n<figure class=\"wp-block-table is-style-stripes\" style=\"font-size:12px\"><table class=\"has-fixed-layout\"><tbody><tr><td>Adversarial attacks explanations<\/td><td>Explanations for risk assessment<\/td><td>Explainability of federated learning<\/td><\/tr><tr><td>Explainable IoT malware detection<\/td><td>Privacy &amp; agency of explanations<\/td><td>XAI for Privacy-Preserving Systems<\/td><\/tr><tr><td>XAI techniques of stealing attack &amp; defence<\/td><td>XAI for human-AI cooperation<\/td><td>XAI &amp; models output confidence estimation<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p class=\"has-small-font-size\"><strong>Applications of XAI-based systems<\/strong><\/p>\n\n\n\n<figure class=\"wp-block-table is-style-stripes\" style=\"font-size:12px\"><table 
class=\"has-fixed-layout\"><tbody><tr><td>Application of XAI in cognitive computing<\/td><td>Dialogue systems for enhancing explainability<\/td><td><\/td><\/tr><tr><td>Explainable methods for medical diagnosis<\/td><td>Business &amp; Marketing<\/td><td>Biomedical knowledge discovery &amp; explainability<\/td><\/tr><tr><td>Explainable methods for HCI<\/td><td>Explainability in decision-support systems<\/td><td>Explainable recommender systems<\/td><\/tr><tr><td>Explainable methods for finance &amp; automatic trading systems<\/td><td>Explainability in agricultural AI-based methods<\/td><td>Explainability in transportation systems<\/td><\/tr><tr><td>Explainability for unmanned aerial vehicles <\/td><td>Explainability in brain-computer interfaces<\/td><td>Interactive applications for XAI<\/td><\/tr><tr><td>Manufacturing chains &amp; application of XAI<\/td><td>Models of explanations in criminology, cybersecurity &amp; defence<\/td><td>XAI approaches in Industry 4.0<\/td><\/tr><tr><td>XAI systems for health-care<\/td><td>XAI technologies for autonomous driving<\/td><td>XAI methods for bioinformatics<\/td><\/tr><tr><td>XAI methods for linguistics\/machine translation<\/td><td>XAI methods for neuroscience<\/td><td>XAI models &amp; applications for IoT<\/td><\/tr><tr><td>XAI methods for XAI for terrestrial, atmospheric, &amp; ocean remote sensing<\/td><td>XAI in sustainable finance &amp; climate finance<\/td><td>XAI in bio-signals analysis<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p class=\"has-large-font-size\"><strong>Important dates<\/strong><\/p>\n\n\n<div data-post-id=\"23\" class=\"insert-page insert-page-23 \">\n<p style=\"font-size:10px\">*<em>All dates are Anywhere on Heart time (<strong>AoE<\/strong><a href=\"https:\/\/en.wikipedia.org\/wiki\/Anywhere_on_Earth\">)<\/a><\/em><\/p>\n\n\n\n<p class=\"has-medium-font-size\"><strong>Article<\/strong> (main track &amp; special tracks)<\/p>\n\n\n\n<figure class=\"wp-block-table is-style-stripes\" 
style=\"font-size:14px\"><table class=\"has-fixed-layout\"><tbody><tr><td class=\"has-text-align-left\" data-align=\"left\">Authors\/title\/abstract registration deadline on submission platform (<em>easy-chair<\/em>)*:<\/td><td><s>April 15<\/s>, <s>April 20, 2023<\/s><\/td><\/tr><tr><td class=\"has-text-align-left\" data-align=\"left\">Article upload deadline on submission platform (<em>easy-chair<\/em>)*:<\/td><td><s>April 20<\/s> <s>April 25, 2023<\/s><\/td><\/tr><tr><td class=\"has-text-align-left\" data-align=\"left\">Notification of acceptance:<\/td><td><s>May 12<\/s>, <s>May 19th,2023<\/s><\/td><\/tr><tr><td class=\"has-text-align-left\" data-align=\"left\">Registration (<strong><a href=\"https:\/\/xaiworldconference.com\/registration\/\" data-type=\"page\" data-id=\"39\">payment)<\/a><\/strong> and camera-ready (<em>upload to easy-chair<\/em>):<\/td><td><s>May 19<\/s>, <s>May 29<\/s>, <s>June 6th, 2023<\/s><\/td><\/tr><tr><td class=\"has-text-align-left\" data-align=\"left\">Article presentation instructions notification<\/td><td><s>June, 26th, 2023<\/s><\/td><\/tr><tr><td class=\"has-text-align-left\" data-align=\"left\">Accepted article presentation (at xAI-2023)<\/td><td>July, 26-28th, 2023<\/td><\/tr><tr><td class=\"has-text-align-left\" data-align=\"left\">Publication (Springer CCIS series)<\/td><td>August 2023<\/td><\/tr><\/tbody><\/table><figcaption class=\"wp-element-caption\">*full, short and extended abstract articles<\/figcaption><\/figure>\n\n\n\n<p class=\"has-medium-font-size\"><strong>Doctoral consortium (DC) proposal<\/strong><\/p>\n\n\n\n<figure class=\"wp-block-table is-style-stripes\" style=\"font-size:14px\"><table class=\"has-fixed-layout\"><tbody><tr><td>DC Proposal author\/title registration deadline on submission platform (<em>easy-chair<\/em>):<\/td><td><s>April 16th<\/s>, <s>April 25, 2023<\/s><\/td><\/tr><tr><td>DC Proposal upload deadline on submission platform (<em>easy-chair<\/em>):<\/td><td><s>April 30,<\/s> <s>May, 5, 
2023<\/s><\/td><\/tr><tr><td>Notification of acceptance:<\/td><td><s>April 30,<\/s> <s>May 19th<\/s>, <s>May, 22nd, 2023<\/s><\/td><\/tr><tr><td>Registration (<strong><a href=\"https:\/\/xaiworldconference.com\/registration\/\" data-type=\"page\" data-id=\"39\">payment)<\/a><\/strong><\/td><td><s>May 7,<\/s> <s>May 29, 2023<\/s><\/td><\/tr><tr><td>DC proposal camera-ready (<em>upload to easy-chair<\/em>): <\/td><td><s>June 16, 2023<\/s><\/td><\/tr><tr><td>DC presentation and meeting instructions notification<\/td><td><s>June, 26th, 2023<\/s><\/td><\/tr><tr><td>Doctoral consortium meeting (at xAI-2023)<\/td><td>July, 26-28th, 2023<\/td><\/tr><tr><td>Publication (planned with CEUR-WS.org*)<\/td><td>August 2023<\/td><\/tr><\/tbody><\/table><figcaption class=\"wp-element-caption\"><em>*Proceedings shall be submitted to CEUR-WS.org for online publication<\/em><\/figcaption><\/figure>\n\n\n\n<p class=\"has-medium-font-size\"><strong>Late-breaking work &amp; demos<\/strong><\/p>\n\n\n\n<figure class=\"wp-block-table is-style-stripes\" style=\"font-size:14px\"><table class=\"has-fixed-layout\"><tbody><tr><td>Late-breaking work &amp; demo author\/title\/abstract registration on submission platform (<em>easy-chair<\/em>):<\/td><td><s>May 23, 2023<\/s><\/td><\/tr><tr><td>Late-breaking work &amp; demo article upload deadline on submission platform (<em>easy-chair<\/em>):<\/td><td><s>May 28, 2023<\/s><\/td><\/tr><tr><td>Notification of acceptance:<\/td><td><s>June 06,<\/s> <s>June 10<\/s>, <s>June 13, 2023<\/s><\/td><\/tr><tr><td>Registration (<strong><a href=\"https:\/\/xaiworldconference.com\/registration\/\" data-type=\"page\" data-id=\"39\">payment)<\/a><\/strong> &amp; Late-breaking work &amp; demo camera-ready (<em>upload to easy-chair<\/em>):<\/td><td><s>June 16, June 20, 2023<\/s><\/td><\/tr><tr><td>Late-breaking work &amp; demo presentation instructions notification<\/td><td><s>June, 26th, 2023<\/s><\/td><\/tr><tr><td>Late-breaking work &amp; demo presentations (at 
xAI-2023)<\/td><td>July, 26-28th, 2023<\/td><\/tr><tr><td>Publication (planned with CEUR-WS.org*)<\/td><td>August 2023<\/td><\/tr><\/tbody><\/table><figcaption class=\"wp-element-caption\"><em>*Proceedings shall be submitted to CEUR-WS.org for online publication<\/em><\/figcaption><\/figure>\n\n\n\n<p class=\"has-medium-font-size\"><strong>Special track proposal<\/strong><\/p>\n\n\n\n<figure class=\"wp-block-table is-style-stripes\" style=\"font-size:14px\"><table class=\"has-fixed-layout\"><tbody><tr><td>Proposal submission (<em><a href=\"https:\/\/xaiworldconference.com\/contacts\/\" data-type=\"page\" data-id=\"819\">contact<\/a>)<\/em>:<\/td><td>Anytime before <s>February 08<\/s> <s>February 15 2023<\/s><\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p class=\"has-medium-font-size\"><strong>Panel discussion<\/strong><\/p>\n\n\n\n<figure class=\"wp-block-table is-style-stripes\" style=\"font-size:14px\"><table class=\"has-fixed-layout\"><tbody><tr><td>Panel Discussion proposals:<\/td><td><s>May 21, 2023<\/s><\/td><\/tr><tr><td>Notification of acceptance:<\/td><td><s>May 28, 2023<\/s><\/td><\/tr><tr><td>Registration of Panel Discussions facilitators:<\/td><td><s>June 06, 2023<\/s><\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p class=\"has-medium-font-size\"><strong>Conference<\/strong><\/p>\n\n\n\n<figure class=\"wp-block-table is-style-stripes\" style=\"font-size:14px\"><table class=\"has-fixed-layout\"><tbody><tr><td>The World Conference on eXplainable AI<\/td><td>26-28 July 2023<\/td><\/tr><\/tbody><\/table><\/figure>\n<\/div>\n\n\n<p class=\"has-large-font-size\"><strong>Submission<\/strong><\/p>\n\n\n<div data-post-id=\"27\" class=\"insert-page insert-page-27 \">\n<p style=\"font-size:14px\">Submitted manuscripts must be novel and not substantially duplicate existing work.  
Manuscripts must be written in Springer&#8217;s Lecture Notes in Computer Science (LNCS) style, in the&nbsp;<strong><a href=\"http:\/\/www.springer.com\/computer\/lncs?SGWID=0-164-6-793341-0\">format provided here<\/a>.<\/strong> LaTeX and Word files are admitted; however, the former is preferred (<a href=\"https:\/\/resource-cms.springernature.com\/springer-cms\/rest\/v1\/content\/19238706\/data\/v1\">Word template<\/a>, <a href=\"https:\/\/resource-cms.springernature.com\/springer-cms\/rest\/v1\/content\/19238648\/data\/v6\">LaTeX template<\/a>, <a href=\"https:\/\/www.overleaf.com\/latex\/templates\/springer-lecture-notes-in-computer-science\/kzwwpvhwnvfj\">LaTeX in Overleaf<\/a>). All submissions and reviews will be handled electronically. The conference has a <strong>no dual submission policy<\/strong>, so submitted manuscripts must not be under review at another publication venue. <\/p>\n\n\n\n<figure class=\"wp-block-table is-style-stripes\"><table><tbody><tr><td>Articles must be submitted using the easy-chair platform <a href=\"https:\/\/easychair.org\/conferences\/?conf=xai2023\"><strong>here<\/strong><\/a>. <\/td><td><img decoding=\"async\" style=\"\" src=\"https:\/\/xaiworldconference.com\/2023\/wp-content\/uploads\/2022\/12\/easychair.png\" alt=\"\"><\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p style=\"font-size:14px\">The contact author must provide the following information: paper title, all author names, affiliations, postal address, e-mail address, and at least three keywords.<\/p>\n\n\n\n<p style=\"font-size:14px\">The conference does not impose a strict page count, as we believe authors have different writing styles and may wish to present scientific material differently. 
However, the following types of articles are admitted:<\/p>\n\n\n\n<figure class=\"wp-block-table is-style-stripes\" style=\"font-size:14px\"><table><tbody><tr><td><strong>full articles<\/strong><\/td><td>between 12 and 24 pages (including references)<\/td><\/tr><tr><td><strong>short articles<\/strong><\/td><td>between 8 and 12 pages (including references)<\/td><\/tr><tr><td><strong>extended abstracts<\/strong><\/td><td>between 4 and 8 pages (including references)<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p><\/p>\n\n\n\n<figure class=\"wp-block-table is-style-stripes\" style=\"font-size:14px\"><table><tbody><tr><td><img decoding=\"async\" src=\"https:\/\/xaiworldconference.com\/2023\/wp-content\/uploads\/2023\/01\/full_article-1.png\" alt=\"\" style=\"width: 400px;\"><\/td><td><strong>Full articles<\/strong> should report on <strong>original and substantial contributions of lasting value<\/strong>, and the work should concern the theory and\/or practice of Explainable Artificial Intelligence (xAI). Moreover, manuscripts showcasing the innovative use of xAI methods, techniques, and approaches and exploring the benefits and challenges of applying xAI-based technology in real-life applications and contexts are welcome. Evaluations of proposed solutions and applications should be commensurate with the claims made in the article. <strong>Full articles should reflect more complex innovations or studies and have a more thorough discussion of related work<\/strong>. Research procedures and technical methods should be presented sufficiently to ensure <strong>scrutiny and reproducibility<\/strong>. We recognise that user data may be proprietary or confidential, therefore we encourage sharing (anonymized, cleaned) data sets, data collection procedures, and code. 
Results and findings should be communicated clearly, and implications of the contributions for xAI as a field and beyond should be explicitly discussed.<\/td><\/tr><tr><td><img decoding=\"async\" src=\"https:\/\/xaiworldconference.com\/2023\/wp-content\/uploads\/2023\/01\/short_article-3.png\" alt=\"\" style=\"width: 400px;\"><\/td><td><strong>Shorter articles <\/strong>should generally report on <strong>advances that can be described, set into context, and evaluated concisely<\/strong>. These articles <strong>are not &#8216;work-in-progress&#8217;<\/strong> reports but complete studies of smaller, self-contained research work that is simple to describe. For these articles, the discussion of related work and contextualisation in the wider body of knowledge can be briefer than in full articles. <\/td><\/tr><tr><td><img decoding=\"async\" src=\"https:\/\/xaiworldconference.com\/2023\/wp-content\/uploads\/2023\/01\/extended_abstract.png\" alt=\"\" style=\"width: 400px;\"><\/td><td><strong>Extended abstracts<\/strong> should contain the definition of a problem and the presentation of a solution, comparisons to related work, and other details expected in a research manuscript but not in an abstract. They <strong>are not simply long abstracts or &#8216;work-in-progress&#8217;.<\/strong> An extended abstract is a research article whose ideas and significance can be understood in less than an hour. Producing an extended abstract can be more demanding than producing a full or short research article. Some things can be omitted from an extended abstract: future work, details of proofs or implementation that should seem plausible to reviewers, and ramifications not relevant to its key ideas. 
It should also contain enough bibliographic references to follow the main argument of the proposed research.<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p style=\"font-size:14px\"><\/p>\n\n\n\n<p class=\"has-small-font-size\"><strong>Special track articles<\/strong><\/p>\n\n\n\n<p style=\"font-size:14px\">Articles submitted to the special tracks follow the normal procedure and must be submitted via easy-chair, as mentioned above. The types of articles admitted are full articles, shorter articles and extended abstracts, as described above. The author of an article for a special track must select the name of that special track in the list of topics in easy-chair, along with other relevant topics.<\/p>\n\n\n\n<h3 class=\"wp-block-heading has-small-font-size\"><strong>Ethical &amp; Human Subjects Considerations<\/strong><\/h3>\n\n\n\n<p style=\"font-size:14px\">The conference organisers expect authors to discuss the ethical considerations and the impact of the presented work and\/or its intended application, where appropriate. Additionally, all authors must comply with ethical standards and regulatory guidelines associated with human subjects research, including the use of personally identifiable data and research involving human participants. Manuscripts reporting on human subjects research must include a statement identifying any regulatory review the research is subject to (and identifying the form of approval provided) or explaining the lack of required review.<\/p>\n\n\n\n<p class=\"has-small-font-size\"><strong>Further style instructions<\/strong><\/p>\n\n\n\n<p style=\"font-size:14px\">We ask the authors to start the reference section on a new page. Appendices count toward the page limit. 
Use an even number of pages (4, 6, 8, ..., 22, 24).<\/p>\n<\/div>\n\n\n<p class=\"has-large-font-size\">Review process<\/p>\n\n\n<div data-post-id=\"385\" class=\"insert-page insert-page-385 \">\n<p style=\"font-size:16px\"><strong>The Peer-Review process<\/strong><\/p>\n\n\n\n<p style=\"font-size:14px\">All articles submitted within the deadlines and per the guidelines will undergo a <strong>single-blind<\/strong> review. Papers that are out of scope or incomplete, or that lack sufficient evidence to support their basic claims, may be rejected without full review. Furthermore, reviewers will be asked to comment on whether the length is appropriate for the contribution.\u00a0Each of the submitted articles will be reviewed by <strong>at least three members of the Scientific Committee<\/strong>. <\/p>\n\n\n\n<p style=\"font-size:14px\">After completion of the review process, the authors will be informed about the <strong>acceptance or rejection <\/strong>of the submitted work. The reviewers\u2019 comments will be available to the authors in both cases. In case of acceptance, authors must address the recommendations for improvement and submit the definitive version of the work by the camera-ready submission deadline. If the reviewers\u2019 recommendations are not addressed, the <strong>organizing committee and the editors reserve the right not to include these works<\/strong> in the conference proceedings.<\/p>\n\n\n\n<p style=\"font-size:14px\">The article&#8217;s final version <strong>must follow the appropriate style guide <\/strong>and contain the authors\u2019 data (names, institutions and emails) and the ORCID details. 
Submitted articles will be evaluated according to their originality, technical soundness, significance of findings, contribution to knowledge, and clarity of exposition and organisation.<\/p>\n\n\n\n<p style=\"font-size:14px\">Depending on the quality of an accepted article and its rank among the other accepted manuscripts, it may be accepted for a full or short presentation or as a poster.<\/p>\n\n\n\n<p style=\"font-size:16px\"><strong>Code of Ethics<\/strong><\/p>\n\n\n\n<p style=\"font-size:14px\">Inspired by the <a href=\"https:\/\/ethics.acm.org\/\">code of ethics<\/a> put forward by the Association for Computing Machinery, the programme committee, supervised by the general conference chairs and organisers, has the right to desk-reject manuscripts that perpetuate harmful stereotypes, employ unethical research practices, or uncritically present outcomes or implications that disadvantage minoritized communities. Further, reviewers of the scientific committee will be explicitly asked to consider whether the research was conducted in compliance with professional ethical standards and applicable regulatory guidelines. Failure to do so could lead to a desk-rejection.<\/p>\n<\/div>\n\n\n<p class=\"has-large-font-size\">Publication<\/p>\n\n\n<div data-post-id=\"29\" class=\"insert-page insert-page-29 \">\n<p class=\"has-medium-font-size\">Proceedings publication<\/p>\n\n\n\n<p style=\"font-size:14px\">Each accepted and presented full, short and extended abstract manuscript, either as an oral presentation or as a poster, will be included in the conference proceedings published by Springer in <a href=\"https:\/\/www.springer.com\/series\/7899\">Communications in Computer and Information Science<\/a>, edited by the general chair.&nbsp; At least one author must register for the conference by the early registration deadline. The official publication date is when the publisher makes the proceedings available online. 
This date will be after the conference and may be several weeks later. <strong><mark style=\"background-color:rgba(0, 0, 0, 0)\" class=\"has-inline-color has-vivid-red-color\">If authors would like to publish their article open access (for a fee with Springer), please refer to <a href=\"https:\/\/xaiworldconference.com\/springer-open-access\/\" data-type=\"page\" data-id=\"2121\">this page<\/a>.<\/mark><\/strong><\/p>\n\n\n\n<figure class=\"wp-block-table is-style-stripes\"><table><tbody><tr><td><img decoding=\"async\" src=\"https:\/\/xaiworldconference.com\/2023\/wp-content\/uploads\/2023\/01\/springer-1.png\" alt=\"\"><\/td><td><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/xaiworldconference.com\/2023\/wp-content\/uploads\/2023\/01\/image.png\" alt=\"\" width=\"165\" height=\"139\"><\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p class=\"has-medium-font-size\">Indexing<\/p>\n\n\n\n<figure class=\"wp-block-table is-style-stripes\"><table class=\"has-fixed-layout\"><tbody><tr><td class=\"has-text-align-center\" data-align=\"center\"><img decoding=\"async\" src=\"https:\/\/xaiworldconference.com\/2023\/wp-content\/uploads\/2023\/01\/Scopus_logo.png\" alt=\"\"><\/td><td class=\"has-text-align-center\" data-align=\"center\"><img decoding=\"async\" src=\"https:\/\/xaiworldconference.com\/2023\/wp-content\/uploads\/2023\/01\/EI_COMPENDEX.png\" alt=\"\"><\/td><td class=\"has-text-align-center\" data-align=\"center\"><img decoding=\"async\" src=\"https:\/\/xaiworldconference.com\/2023\/wp-content\/uploads\/2023\/01\/scimago-1.png\" alt=\"\"><\/td><\/tr><tr><td class=\"has-text-align-center\" data-align=\"center\"><img decoding=\"async\" src=\"https:\/\/xaiworldconference.com\/2023\/wp-content\/uploads\/2023\/01\/google-scholar.png\" alt=\"\"><\/td><td class=\"has-text-align-center\" data-align=\"center\"><img decoding=\"async\" src=\"https:\/\/xaiworldconference.com\/2023\/wp-content\/uploads\/2023\/01\/NSDlogo.png\" alt=\"\"><\/td><td 
class=\"has-text-align-center\" data-align=\"center\"><img decoding=\"async\" src=\"https:\/\/xaiworldconference.com\/2023\/wp-content\/uploads\/2023\/01\/DBLP.png\" alt=\"\"><\/td><\/tr><tr><td class=\"has-text-align-center\" data-align=\"center\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/xaiworldconference.com\/2023\/wp-content\/uploads\/2023\/01\/zbMATH@2x.gif\" alt=\"\" width=\"288\" height=\"59\"><\/td><td class=\"has-text-align-center\" data-align=\"center\"><img decoding=\"async\" src=\"https:\/\/xaiworldconference.com\/2023\/wp-content\/uploads\/2023\/01\/jst-1024x229.png\" alt=\"\"><\/td><td class=\"has-text-align-center\" data-align=\"center\"><img decoding=\"async\" src=\"https:\/\/xaiworldconference.com\/2023\/wp-content\/uploads\/2023\/01\/inspec-1024x172.png\" alt=\"\"><\/td><\/tr><\/tbody><\/table><\/figure>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>1st International Conference on eXplainable Artificial Intelligence (xAI 2023) Call for papers (26\/28 July 2023, Lisbon, Portugal) Artificial intelligence has seen a significant shift in focus towards designing and developing intelligent systems that are interpretable and explainable. 
This is due to the complexity of the models, built from data, and the legal requirements imposed by &hellip; <\/p>\n","protected":false},"author":1,"featured_media":1012,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"_eb_attr":"","footnotes":""},"class_list":["post-593","page","type-page","status-publish","has-post-thumbnail","hentry"],"_links":{"self":[{"href":"https:\/\/xaiworldconference.com\/2023\/wp-json\/wp\/v2\/pages\/593","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/xaiworldconference.com\/2023\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/xaiworldconference.com\/2023\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/xaiworldconference.com\/2023\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/xaiworldconference.com\/2023\/wp-json\/wp\/v2\/comments?post=593"}],"version-history":[{"count":29,"href":"https:\/\/xaiworldconference.com\/2023\/wp-json\/wp\/v2\/pages\/593\/revisions"}],"predecessor-version":[{"id":2432,"href":"https:\/\/xaiworldconference.com\/2023\/wp-json\/wp\/v2\/pages\/593\/revisions\/2432"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/xaiworldconference.com\/2023\/wp-json\/wp\/v2\/media\/1012"}],"wp:attachment":[{"href":"https:\/\/xaiworldconference.com\/2023\/wp-json\/wp\/v2\/media?parent=593"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}