{"id":6601,"date":"2025-11-29T10:43:06","date_gmt":"2025-11-29T10:43:06","guid":{"rendered":"https:\/\/xaiworldconference.com\/2026\/?page_id=6601"},"modified":"2025-12-09T10:08:42","modified_gmt":"2025-12-09T10:08:42","slug":"uncertainty-aware-explainable-ai","status":"publish","type":"page","link":"https:\/\/xaiworldconference.com\/2026\/uncertainty-aware-explainable-ai\/","title":{"rendered":"Uncertainty-Aware Explainable AI"},"content":{"rendered":"\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"1024\" src=\"https:\/\/xaiworldconference.com\/2026\/wp-content\/uploads\/2025\/12\/10-XAI-2026-uncertainty-aware-explainable-ai-1024x1024.png\" alt=\"\" class=\"wp-image-6740\" srcset=\"https:\/\/xaiworldconference.com\/2026\/wp-content\/uploads\/2025\/12\/10-XAI-2026-uncertainty-aware-explainable-ai-1024x1024.png 1024w, https:\/\/xaiworldconference.com\/2026\/wp-content\/uploads\/2025\/12\/10-XAI-2026-uncertainty-aware-explainable-ai-300x300.png 300w, https:\/\/xaiworldconference.com\/2026\/wp-content\/uploads\/2025\/12\/10-XAI-2026-uncertainty-aware-explainable-ai-150x150.png 150w, https:\/\/xaiworldconference.com\/2026\/wp-content\/uploads\/2025\/12\/10-XAI-2026-uncertainty-aware-explainable-ai-768x768.png 768w, https:\/\/xaiworldconference.com\/2026\/wp-content\/uploads\/2025\/12\/10-XAI-2026-uncertainty-aware-explainable-ai-470x470.png 470w, https:\/\/xaiworldconference.com\/2026\/wp-content\/uploads\/2025\/12\/10-XAI-2026-uncertainty-aware-explainable-ai.png 1200w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<p>Uncertainty-Aware Explainable AI (UAXAI) is an emerging, fast-moving 
frontier in AI that tackles a crucial gap in today\u2019s explainability techniques. While traditional XAI focuses on why a model made a decision, recent insights highlight that this alone is insufficient for trustworthy AI \u2013 users also need to know how sure the model is about its explanation and prediction.&nbsp;<\/p>\n<\/div>\n<\/div>\n\n\n\n<p>There is growing recognition that conveying a model\u2019s confidence alongside its reasoning is essential for robust, human-centered AI. However, the young field of UAXAI shows clear signs of a maturing discipline in need of synthesis and standards. Evaluation practices remain heterogeneous and largely model-centered, with few user studies and no agreed-upon metrics for uncertainty in explanations. Reliability measures such as calibration, coverage, explanation variance, and stability are inconsistently reported, and often conflated with raw model performance. These gaps underscore the need for community convergence and visibility.&nbsp;<\/p>\n\n\n\n<p>This special session will bring together researchers and practitioners to accelerate progress in UAXAI, emphasizing both theoretical advances and real-world applications. It will bridge interpretability and reliability by showcasing methods that tell users not only what the model says, but also how certain it is, fostering appropriate trust in AI. 
By highlighting new frameworks, evaluation protocols, uncertainty-aware deferral\/abstention strategies, and domain-specific case studies, the session aims to chart a roadmap for uncertainty-aware explanations that improve transparency, safety, and user trust in high-stakes AI systems.<\/p>\n\n\n\n<p class=\"has-small-font-size\"><code><strong>Keywords<\/strong>: <mark style=\"background-color:rgba(0, 0, 0, 0)\" class=\"has-inline-color has-vivid-green-cyan-color\">Uncertainty-Aware XAI, Uncertainty Quantification, Aleatoric Uncertainty, Epistemic Uncertainty, Calibration, Coverage, Conformal Prediction, Bayesian Methods, Monte Carlo Dropout, Counterfactual Explanations, Abstention\/Deferral, Explanation Stability, Explainer Variance, Evaluation Protocols, Human-in-the-Loop, Appropriate Trust, Trust Calibration, High-Stakes AI, Autonomous Systems.<\/mark><\/code><\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Topics<\/strong><\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Aleatoric vs epistemic uncertainty in explanations. Clarify and communicate their roles. Applications: diagnosis, credit risk. Novel: decomposition at feature or modality level.<\/li>\n\n\n\n<li>Confidence-calibrated explanations. Pair each rationale with calibrated confidence. Applications: clinical decision support, fraud. Novel: explanation-level reliability reporting.<\/li>\n\n\n\n<li>Integrated frameworks for prediction quality, explanation quality, and trust. Jointly optimize performance, calibration, and interpretability. Applications: regulated finance, safety-critical ops. Novel: multi-objective training using UQ signals.<\/li>\n\n\n\n<li>Evaluation protocols for UAXAI. Standardize coverage, calibration, stability, and human impact. Applications: cross-domain benchmarks. Novel: unified triad of metrics plus user studies.<\/li>\n\n\n\n<li>Explanation method variance and stability. Quantify explainer uncertainty and aggregate robustly. 
Applications: SHAP\/LIME in healthcare, GNN explainers. Novel: variance-aware explainer selection.<\/li>\n\n\n\n<li>Reject and defer with explanations. Explain abstention and handoff rules under high uncertainty. Applications: clinical triage, credit decisions. Novel: policy-aware deferral calibrated to user risk.<\/li>\n\n\n\n<li>Conformal prediction and set-valued explanations. Distribution-free intervals and sets paired with explanations. Applications: diagnosis with guarantees, time-series forecasting. Novel: Venn-Abers and predictive distributions linked to XAI.<\/li>\n\n\n\n<li>Bayesian and probabilistic approaches. Use BNNs, GPs, ensembles to express epistemic uncertainty in explanations. Applications: forecasting, perception. Novel: posteriors over attributions.<\/li>\n\n\n\n<li>Counterfactuals with uncertainty. Generate recourse with confidence bounds. Applications: loans, safety cases. Novel: robust counterfactuals under epistemic constraints.<\/li>\n\n\n\n<li>Human-in-the-loop and appropriate trust calibration. Design UIs that communicate uncertainty to prevent over\/under-trust. Applications: HCI studies, operational consoles. Novel: adaptive uncertainty presentation by user role.<\/li>\n\n\n\n<li>Domain applications and deployment. Case studies in healthcare, finance, autonomous systems, energy. Focus on data shift, latency, and governance. Novel: field-validated UAXAI playbooks.<\/li>\n\n\n\n<li>Emerging trends and scalability. UAXAI for LLMs, streaming and real-time systems. 
Novel: online calibration and latency-aware explanations at scale.<\/li>\n<\/ul>\n\n\n\n<p><\/p>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\"><div class=\"wp-block-image\">\n<figure class=\"aligncenter size-large is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"757\" src=\"https:\/\/xaiworldconference.com\/2026\/wp-content\/uploads\/2025\/11\/allesandra_stramiglio_institution-1024x757.png\" alt=\"\" class=\"wp-image-6605\" style=\"width:248px;height:auto\" srcset=\"https:\/\/xaiworldconference.com\/2026\/wp-content\/uploads\/2025\/11\/allesandra_stramiglio_institution-1024x757.png 1024w, https:\/\/xaiworldconference.com\/2026\/wp-content\/uploads\/2025\/11\/allesandra_stramiglio_institution-300x222.png 300w, https:\/\/xaiworldconference.com\/2026\/wp-content\/uploads\/2025\/11\/allesandra_stramiglio_institution-768x568.png 768w, https:\/\/xaiworldconference.com\/2026\/wp-content\/uploads\/2025\/11\/allesandra_stramiglio_institution-1536x1136.png 1536w, https:\/\/xaiworldconference.com\/2026\/wp-content\/uploads\/2025\/11\/allesandra_stramiglio_institution-2048x1515.png 2048w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/div><\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\"><div class=\"wp-block-image\">\n<figure class=\"aligncenter size-large is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"914\" height=\"1024\" src=\"https:\/\/xaiworldconference.com\/2026\/wp-content\/uploads\/2025\/11\/henrik_bostrom_institution-914x1024.png\" alt=\"\" class=\"wp-image-6609\" style=\"width:227px;height:auto\" srcset=\"https:\/\/xaiworldconference.com\/2026\/wp-content\/uploads\/2025\/11\/henrik_bostrom_institution-914x1024.png 914w, 
https:\/\/xaiworldconference.com\/2026\/wp-content\/uploads\/2025\/11\/henrik_bostrom_institution-268x300.png 268w, https:\/\/xaiworldconference.com\/2026\/wp-content\/uploads\/2025\/11\/henrik_bostrom_institution-768x861.png 768w, https:\/\/xaiworldconference.com\/2026\/wp-content\/uploads\/2025\/11\/henrik_bostrom_institution-1371x1536.png 1371w, https:\/\/xaiworldconference.com\/2026\/wp-content\/uploads\/2025\/11\/henrik_bostrom_institution-1828x2048.png 1828w, https:\/\/xaiworldconference.com\/2026\/wp-content\/uploads\/2025\/11\/henrik_bostrom_institution.png 1867w\" sizes=\"auto, (max-width: 914px) 100vw, 914px\" \/><\/figure>\n<\/div><\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\">\n<figure class=\"wp-block-image size-large is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"568\" src=\"https:\/\/xaiworldconference.com\/2026\/wp-content\/uploads\/2025\/11\/helena_lofstrom-cavallin_institution-1024x568.png\" alt=\"\" class=\"wp-image-6606\" style=\"width:264px;height:auto\" srcset=\"https:\/\/xaiworldconference.com\/2026\/wp-content\/uploads\/2025\/11\/helena_lofstrom-cavallin_institution-1024x568.png 1024w, https:\/\/xaiworldconference.com\/2026\/wp-content\/uploads\/2025\/11\/helena_lofstrom-cavallin_institution-300x167.png 300w, https:\/\/xaiworldconference.com\/2026\/wp-content\/uploads\/2025\/11\/helena_lofstrom-cavallin_institution-768x426.png 768w, https:\/\/xaiworldconference.com\/2026\/wp-content\/uploads\/2025\/11\/helena_lofstrom-cavallin_institution-1536x853.png 1536w, https:\/\/xaiworldconference.com\/2026\/wp-content\/uploads\/2025\/11\/helena_lofstrom-cavallin_institution-2048x1137.png 2048w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/div>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>Uncertainty-Aware Explainable AI (UAXAI) is an emerging, fast-moving frontier in AI that tackles a crucial gap in today\u2019s 
explainability techniques. While traditional XAI focuses on why a model made a decision, recent insights highlight that this alone is insufficient for trustworthy AI \u2013 users also need to know how sure the model is about its &hellip; <\/p>\n","protected":false},"author":2,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"_eb_attr":"","footnotes":""},"class_list":["post-6601","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/xaiworldconference.com\/2026\/wp-json\/wp\/v2\/pages\/6601","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/xaiworldconference.com\/2026\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/xaiworldconference.com\/2026\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/xaiworldconference.com\/2026\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/xaiworldconference.com\/2026\/wp-json\/wp\/v2\/comments?post=6601"}],"version-history":[{"count":6,"href":"https:\/\/xaiworldconference.com\/2026\/wp-json\/wp\/v2\/pages\/6601\/revisions"}],"predecessor-version":[{"id":6741,"href":"https:\/\/xaiworldconference.com\/2026\/wp-json\/wp\/v2\/pages\/6601\/revisions\/6741"}],"wp:attachment":[{"href":"https:\/\/xaiworldconference.com\/2026\/wp-json\/wp\/v2\/media?parent=6601"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}