{"id":16,"date":"2022-12-16T21:00:29","date_gmt":"2022-12-16T21:00:29","guid":{"rendered":"http:\/\/xaiconference.lucalongo.eu\/?page_id=16"},"modified":"2024-10-21T17:06:14","modified_gmt":"2024-10-21T17:06:14","slug":"topics","status":"publish","type":"page","link":"https:\/\/xaiworldconference.com\/2026\/topics\/","title":{"rendered":"Topics"},"content":{"rendered":"\n<p class=\"has-medium-font-size\"><strong>Technical methods for XAI<\/strong><\/p>\n\n\n\n<figure class=\"wp-block-table is-style-stripes has-small-font-size\"><table><tbody><tr><td>Action Influence Graphs<\/td><\/tr><tr><td>Agent-based explainable systems<\/td><\/tr><tr><td>Ante-hoc approaches for interpretability<\/td><\/tr><tr><td>Argumentation-based approaches for explanations<\/td><\/tr><tr><td>Argumentation theory for explainable AI<\/td><\/tr><tr><td>Attention mechanisms for XAI<\/td><\/tr><tr><td>Automata for explaining Recurrent Neural Network models<\/td><\/tr><tr><td>Auto-encoders &amp; explainability of latent spaces<\/td><\/tr><tr><td>Bayesian modelling for interpretable models<\/td><\/tr><tr><td>Black-boxes vs white-boxes<\/td><\/tr><tr><td>Case-based explanations for AI systems<\/td><\/tr><tr><td>Causal inference &amp; explanations<\/td><\/tr><tr><td>Constraint-based explanations<\/td><\/tr><tr><td>Decomposition of neural network-based models for XAI<\/td><\/tr><tr><td>Deep learning &amp; XAI methods<\/td><\/tr><tr><td>Defeasible reasoning for explainability<\/td><\/tr><tr><td>Evaluation approaches for XAI-based systems<\/td><\/tr><tr><td>Explainable methods for edge computing<\/td><\/tr><tr><td>Expert systems for explainability<\/td><\/tr><tr><td>Explainability &amp; the semantic web<\/td><\/tr><tr><td>Explainability of signal processing methods<\/td><\/tr><tr><td>Finite state machines for enabling explainability<\/td><\/tr><tr><td>Fuzzy systems &amp; logic for explainability<\/td><\/tr><tr><td>Graph neural networks for explainability<\/td><\/tr><tr><td>Hybrid &amp; transparent 
black box modelling<\/td><\/tr><tr><td>Interpreting &amp; explaining Convolutional Neural Networks<\/td><\/tr><tr><td>Interpretable representational learning<\/td><\/tr><tr><td>Methods for latent space interpretation<\/td><\/tr><tr><td>Model-specific vs model-agnostic methods for XAI<\/td><\/tr><tr><td>Neuro-symbolic reasoning for XAI<\/td><\/tr><tr><td>Natural language processing for explanations<\/td><\/tr><tr><td>Ontologies &amp; taxonomies for supporting XAI<\/td><\/tr><tr><td>Pruning methods with XAI<\/td><\/tr><tr><td>Post-hoc methods for explainability<\/td><\/tr><tr><td>Reinforcement learning for enhancing XAI systems<\/td><\/tr><tr><td>Reasoning under uncertainty for explanation<\/td><\/tr><tr><td>Rule-based XAI systems<\/td><\/tr><tr><td>Robotics &amp; explainability<\/td><\/tr><tr><td>Sample-centric &amp; Dataset-centric explanations<\/td><\/tr><tr><td>Self-explainable methods for XAI<\/td><\/tr><tr><td>Sentence embeddings for explainable semantic features<\/td><\/tr><tr><td>Transparent &amp; explainable learning methods<\/td><\/tr><tr><td>User interfaces for explainability<\/td><\/tr><tr><td>Visual methods for representational learning<\/td><\/tr><tr><td>XAI Benchmarking<\/td><\/tr><tr><td>XAI methods for neuroimaging &amp; neural signals<\/td><\/tr><tr><td>XAI &amp; reservoir computing<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p class=\"has-medium-font-size\"><strong>Ethical considerations for XAI<\/strong><\/p>\n\n\n\n<figure class=\"wp-block-table is-style-stripes has-small-font-size\"><table><tbody><tr><td>Accountability &amp; responsibility in XAI-based technologies<\/td><\/tr><tr><td>Addressing user-centric requirements for XAI systems<\/td><\/tr><tr><td>Assessment of model accuracy &amp; interpretability trade-off<\/td><\/tr><tr><td>Explainable Bias &amp; fairness of XAI-based systems<\/td><\/tr><tr><td>Explainability for discovering, improving, controlling &amp; justifying<\/td><\/tr><tr><td>Explainability as a prerequisite for 
responsible AI systems<\/td><\/tr><tr><td>Explainability &amp; data fusion<\/td><\/tr><tr><td>Explainability &amp; responsibility in policy guidelines<\/td><\/tr><tr><td>Explainability pitfalls &amp; dark patterns in XAI<\/td><\/tr><tr><td>Historical foundations of XAI<\/td><\/tr><tr><td>Moral principles &amp; dilemmas for XAI-based systems<\/td><\/tr><tr><td>Multimodal XAI approaches<\/td><\/tr><tr><td>Philosophical considerations of synthetic explanations<\/td><\/tr><tr><td>Prevention &amp; detection of deceptive AI explanations<\/td><\/tr><tr><td>Social implications of automatically-generated explanations<\/td><\/tr><tr><td>Theoretical foundations of XAI<\/td><\/tr><tr><td>Trust &amp; explainable AI<\/td><\/tr><tr><td>The logic of scientific explanation for\/in AI<\/td><\/tr><tr><td>The epistemic &amp; moral goods expected from explaining AI<\/td><\/tr><tr><td>XAI for fairness checking<\/td><\/tr><tr><td>XAI for time series-based approaches<\/td><\/tr><tr><td>XAI for transparency &amp; unbiased decision making<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p class=\"has-medium-font-size\"><strong>Psychological notions and concepts for XAI<\/strong><\/p>\n\n\n\n<figure class=\"wp-block-table is-style-stripes has-small-font-size\"><table><tbody><tr><td>Algorithmic transparency &amp; actionability<\/td><\/tr><tr><td>Cognitive approaches &amp; architectures for explanations<\/td><\/tr><tr><td>Cognitive relief in explanations<\/td><\/tr><tr><td>Contrastive nature of explanations<\/td><\/tr><tr><td>Comprehensibility vs interpretability vs explainability<\/td><\/tr><tr><td>Counterfactual explanations<\/td><\/tr><tr><td>Designing new explanation styles<\/td><\/tr><tr><td>Explanations for correctability<\/td><\/tr><tr><td>Faithfulness &amp; intelligibility of explanations<\/td><\/tr><tr><td>Interpretability vs traceability<\/td><\/tr><tr><td>Interestingness &amp; informativeness of explanations<\/td><\/tr><tr><td>Irrelevance of probabilities to 
explanations<\/td><\/tr><tr><td>Iterative dialogue explanations<\/td><\/tr><tr><td>Justification &amp; explanations in AI-based systems<\/td><\/tr><tr><td>Local vs global interpretability &amp; explainability<\/td><\/tr><tr><td>Methods for assessing the quality of explanations<\/td><\/tr><tr><td>Non-technical explanations in AI-based systems<\/td><\/tr><tr><td>Notions and metrics of\/for explainability<\/td><\/tr><tr><td>Persuasiveness &amp; robustness of explanations<\/td><\/tr><tr><td>Psychometrics of human explanations<\/td><\/tr><tr><td>Qualitative approaches for explainability<\/td><\/tr><tr><td>Questionnaires &amp; surveys for explainability<\/td><\/tr><tr><td>Scrutability &amp; diagnosis of XAI methods<\/td><\/tr><tr><td>Soundness &amp; stability of XAI methods<\/td><\/tr><tr><td>Theories of explanation<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p class=\"has-medium-font-size\"><strong>Social examinations of XAI<\/strong><\/p>\n\n\n\n<figure class=\"wp-block-table is-style-stripes has-small-font-size\"><table><tbody><tr><td>Adaptive explainable systems<\/td><\/tr><tr><td>Backward- &amp; forward-looking forms of responsibility in XAI<\/td><\/tr><tr><td>Data provenance &amp; explainability<\/td><\/tr><tr><td>Explainability for reputation<\/td><\/tr><tr><td>Epistemic and non-epistemic values for XAI<\/td><\/tr><tr><td>Human-centric explainable AI<\/td><\/tr><tr><td>Person-specific XAI systems<\/td><\/tr><tr><td>Presentation &amp; personalization of AI explanations for target groups<\/td><\/tr><tr><td>Social nature of explanations<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p class=\"has-medium-font-size\"><strong>Legal and administrative considerations of\/for XAI<\/strong><\/p>\n\n\n\n<figure class=\"wp-block-table is-style-stripes has-small-font-size\"><table><tbody><tr><td>Black-box model auditing &amp; explanation<\/td><\/tr><tr><td>Explainability in regulatory compliance<\/td><\/tr><tr><td>Human rights for explanations in AI 
systems<\/td><\/tr><tr><td>Policy-based systems of explanations<\/td><\/tr><tr><td>The potential harm of explainability in AI<\/td><\/tr><tr><td>Trustworthiness of explanations for clinicians &amp; patients<\/td><\/tr><tr><td>XAI methods for model governance<\/td><\/tr><tr><td>XAI in policy development<\/td><\/tr><tr><td>XAI to increase situational awareness &amp; compliance behaviour<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p class=\"has-medium-font-size\"><strong>Safety &amp; security approaches for XAI<\/strong><\/p>\n\n\n\n<figure class=\"wp-block-table is-style-stripes has-small-font-size\"><table><tbody><tr><td>Adversarial attack explanations<\/td><\/tr><tr><td>Explanations for risk assessment<\/td><\/tr><tr><td>Explainability of federated learning<\/td><\/tr><tr><td>Explainable IoT malware detection<\/td><\/tr><tr><td>Privacy &amp; agency of explanations<\/td><\/tr><tr><td>XAI for Privacy-Preserving Systems<\/td><\/tr><tr><td>XAI techniques for stealing attacks &amp; defence<\/td><\/tr><tr><td>XAI for human-AI cooperation<\/td><\/tr><tr><td>XAI &amp; model output confidence estimation<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p class=\"has-medium-font-size\"><strong>Applications of XAI-based systems<\/strong><\/p>\n\n\n\n<figure class=\"wp-block-table is-style-stripes has-small-font-size\"><table><tbody><tr><td>Application of XAI in cognitive computing<\/td><\/tr><tr><td>Dialogue systems for enhancing explainability<\/td><\/tr><tr><td>Explainable methods for medical diagnosis<\/td><\/tr><tr><td>Business &amp; Marketing applications of XAI<\/td><\/tr><tr><td>Biomedical knowledge discovery &amp; explainability<\/td><\/tr><tr><td>Explainable methods for Human-computer Interaction<\/td><\/tr><tr><td>Explainability in decision-support systems<\/td><\/tr><tr><td>Explainable recommender systems<\/td><\/tr><tr><td>Explainable methods for finance &amp; automatic trading systems<\/td><\/tr><tr><td>Explainability in agricultural AI-based methods<\/td><\/tr><tr><td>Explainability in 
transportation systems<\/td><\/tr><tr><td>Explainability for unmanned aerial vehicles (UAV)<\/td><\/tr><tr><td>Explainability in brain-computer interface systems<\/td><\/tr><tr><td>Interactive applications for XAI<\/td><\/tr><tr><td>Manufacturing chains &amp; application of XAI systems<\/td><\/tr><tr><td>Models of explanations in criminology, cybersecurity &amp; defence<\/td><\/tr><tr><td>XAI approaches in Industry 4.0<\/td><\/tr><tr><td>XAI systems for health-care<\/td><\/tr><tr><td>XAI technologies for autonomous driving<\/td><\/tr><tr><td>XAI methods for bioinformatics<\/td><\/tr><tr><td>XAI methods for linguistics &amp; machine translation<\/td><\/tr><tr><td>XAI methods for neuroscience<\/td><\/tr><tr><td>XAI models &amp; applications for IoT<\/td><\/tr><tr><td>XAI methods for terrestrial, atmospheric, &amp; ocean remote sensing<\/td><\/tr><tr><td>XAI in sustainable finance &amp; climate finance<\/td><\/tr><tr><td>XAI in bio-signals analysis<\/td><\/tr><\/tbody><\/table><\/figure>\n","protected":false},"excerpt":{"rendered":"<p>Technical methods for XAI Action Influence Graphs Agent-based explainable systems Ante-hoc approaches for interpretability Argumentation-based approaches for explanations Argumentation theory for explainable AI Attention mechanisms for XAI Automata for explaining Recurrent Neural Network models Auto-encoders &amp; explainability of latent spaces Bayesian modelling for interpretable models Black-boxes vs white-boxes Case-based explanations for AI systems Causal inference &hellip; 
<\/p>\n","protected":false},"author":1,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"_eb_attr":"","footnotes":""},"class_list":["post-16","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/xaiworldconference.com\/2026\/wp-json\/wp\/v2\/pages\/16","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/xaiworldconference.com\/2026\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/xaiworldconference.com\/2026\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/xaiworldconference.com\/2026\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/xaiworldconference.com\/2026\/wp-json\/wp\/v2\/comments?post=16"}],"version-history":[{"count":149,"href":"https:\/\/xaiworldconference.com\/2026\/wp-json\/wp\/v2\/pages\/16\/revisions"}],"predecessor-version":[{"id":5198,"href":"https:\/\/xaiworldconference.com\/2026\/wp-json\/wp\/v2\/pages\/16\/revisions\/5198"}],"wp:attachment":[{"href":"https:\/\/xaiworldconference.com\/2026\/wp-json\/wp\/v2\/media?parent=16"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}