{"id":5546,"date":"2024-12-24T20:32:54","date_gmt":"2024-12-24T20:32:54","guid":{"rendered":"https:\/\/xaiworldconference.com\/2025\/?page_id=5546"},"modified":"2024-12-24T20:34:49","modified_gmt":"2024-12-24T20:34:49","slug":"uncertainty-in-explainable-ai","status":"publish","type":"page","link":"https:\/\/xaiworldconference.com\/2025\/uncertainty-in-explainable-ai\/","title":{"rendered":"Uncertainty in Explainable AI"},"content":{"rendered":"\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"1024\" src=\"https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/XAI-2025-uncertainty-in-explainable-ai-1024x1024.png\" alt=\"\" class=\"wp-image-5554\" srcset=\"https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/XAI-2025-uncertainty-in-explainable-ai-1024x1024.png 1024w, https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/XAI-2025-uncertainty-in-explainable-ai-300x300.png 300w, https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/XAI-2025-uncertainty-in-explainable-ai-150x150.png 150w, https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/XAI-2025-uncertainty-in-explainable-ai-768x768.png 768w, https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/XAI-2025-uncertainty-in-explainable-ai-1536x1536.png 1536w, https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/XAI-2025-uncertainty-in-explainable-ai-2048x2048.png 2048w, https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/XAI-2025-uncertainty-in-explainable-ai-470x470.png 470w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow 
wp-block-column-is-layout-flow\">\n<p style=\"font-size:14px\">Understanding and managing uncertainty in AI models is increasingly important to ensure transparency, appropriate trust, and reliability. This track explores diverse approaches to integrating uncertainty quantification into explainable AI (XAI) frameworks, emphasizing how explanations can communicate both model confidence and prediction reliability. Topics of interest include methods for representing and interpreting aleatoric and epistemic uncertainties, as well as techniques that use these insights to guide decision-making processes in high-stakes environments. Submissions addressing uncertainty-aware explanations in areas such as reject\/defer options, time series, and human-centric systems are particularly welcome. The track also seeks novel evaluation metrics and domain-specific applications of uncertainty-aware explanations, with the goal of advancing actionable and interpretable AI systems.<\/p>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<ul class=\"wp-block-list\">\n<li style=\"font-size:14px\">Aleatoric and epistemic uncertainty representation in model explanations<\/li>\n\n\n\n<li style=\"font-size:14px\">Methods for uncertainty quantification in inherently interpretable models<\/li>\n\n\n\n<li style=\"font-size:14px\">Explanations for reject and defer options in predictive systems<\/li>\n\n\n\n<li style=\"font-size:14px\">Time-series uncertainty explanation methods<\/li>\n\n\n\n<li style=\"font-size:14px\">Frameworks for multi-level uncertainty representation in hierarchical systems<\/li>\n\n\n\n<li style=\"font-size:14px\">Human-centric approaches to calibrating uncertainty explanations<\/li>\n\n\n\n<li style=\"font-size:14px\">Uncertainty-aware explanations based on Bayesian theory<\/li>\n\n\n\n<li 
style=\"font-size:14px\">Conformal prediction-based explanations<\/li>\n\n\n\n<li style=\"font-size:14px\">Metrics for evaluating uncertainty-aware explanations in XAI<\/li>\n\n\n\n<li style=\"font-size:14px\">Feature importance explanations incorporating model uncertainty<\/li>\n\n\n\n<li style=\"font-size:14px\">Methods for explaining uncertainty in real-time systems<\/li>\n\n\n\n<li style=\"font-size:14px\">Exploring uncertainty dynamics in sequential decision systems<\/li>\n\n\n\n<li style=\"font-size:14px\">Scalable frameworks for large-scale uncertainty-aware explanations<\/li>\n\n\n\n<li style=\"font-size:14px\">Domain-specific studies of uncertainty-aware XAI (in domains where decisions hinge on confidence)<\/li>\n\n\n\n<li style=\"font-size:14px\">Contextual uncertainty explanations tailored to fairness and bias mitigation<\/li>\n<\/ul>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<ul class=\"wp-block-list\">\n<li style=\"font-size:14px\">Calibration strategies for enhancing trust through uncertainty representation<\/li>\n\n\n\n<li style=\"font-size:14px\">Integration of uncertainty in federated and decentralized AI systems<\/li>\n\n\n\n<li style=\"font-size:14px\">Techniques for simultaneous explanation and uncertainty quantification in AI pipelines<\/li>\n\n\n\n<li style=\"font-size:14px\">Practical tools for uncertainty-aware model debugging<\/li>\n\n\n\n<li style=\"font-size:14px\">Uncertainty-aware explanations for multimodal data systems<\/li>\n\n\n\n<li style=\"font-size:14px\">Leveraging uncertainty explanations to improve model validation processes<\/li>\n\n\n\n<li style=\"font-size:14px\">Impact of uncertainty-aware XAI on appropriate trust and reliance<\/li>\n\n\n\n<li style=\"font-size:14px\">Novel visualization techniques for uncertainty in explanations<\/li>\n\n\n\n<li style=\"font-size:14px\">Combining uncertainty with counterfactual explanations<\/li>\n\n\n\n<li style=\"font-size:14px\">Frameworks 
for domain adaptation with uncertainty-aware interpretations<\/li>\n\n\n\n<li style=\"font-size:14px\">Explainable reinforcement learning under uncertainty<\/li>\n\n\n\n<li style=\"font-size:14px\">Enhancing safety in critical AI systems through uncertainty explanations<\/li>\n\n\n\n<li style=\"font-size:14px\">Tools for user-adaptive explanations based on uncertainty or confidence levels<\/li>\n\n\n\n<li style=\"font-size:14px\">Explanation methods focusing on uncertainty in low-resource settings<\/li>\n<\/ul>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\"><div class=\"wp-block-image\">\n<figure class=\"aligncenter size-large is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"914\" height=\"1024\" src=\"https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/KTH-914x1024.png\" alt=\"\" class=\"wp-image-5547\" style=\"width:165px;height:auto\" srcset=\"https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/KTH-914x1024.png 914w, https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/KTH-268x300.png 268w, https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/KTH-768x861.png 768w, https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/KTH-1371x1536.png 1371w, https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/KTH-1828x2048.png 1828w, https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/KTH.png 1867w\" sizes=\"auto, (max-width: 914px) 100vw, 914px\" \/><\/figure>\n<\/div><\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\"><div class=\"wp-block-image\">\n<figure class=\"aligncenter size-large is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" 
height=\"568\" src=\"https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/Jonkoping-University-1024x568.png\" alt=\"\" class=\"wp-image-5548\" style=\"width:218px;height:auto\" srcset=\"https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/Jonkoping-University-1024x568.png 1024w, https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/Jonkoping-University-300x167.png 300w, https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/Jonkoping-University-768x426.png 768w, https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/Jonkoping-University-1536x853.png 1536w, https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/Jonkoping-University-2048x1137.png 2048w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/div><\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\"><div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"600\" height=\"366\" src=\"https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/ETH-zurich.png\" alt=\"\" class=\"wp-image-5549\" style=\"width:236px;height:auto\" srcset=\"https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/ETH-zurich.png 600w, https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/ETH-zurich-300x183.png 300w\" sizes=\"auto, (max-width: 600px) 100vw, 600px\" \/><\/figure>\n<\/div><\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\"><div class=\"wp-block-image\">\n<figure class=\"aligncenter size-large is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"757\" 
src=\"https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/University-of-Bologna-1024x757.png\" alt=\"\" class=\"wp-image-5550\" style=\"width:222px;height:auto\" srcset=\"https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/University-of-Bologna-1024x757.png 1024w, https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/University-of-Bologna-300x222.png 300w, https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/University-of-Bologna-768x568.png 768w, https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/University-of-Bologna-1536x1136.png 1536w, https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/University-of-Bologna-2048x1515.png 2048w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/div><\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\"><div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"388\" height=\"140\" src=\"https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/Collegium_Helveticum.jpg\" alt=\"\" class=\"wp-image-5551\" style=\"width:261px;height:auto\" srcset=\"https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/Collegium_Helveticum.jpg 388w, https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/Collegium_Helveticum-300x108.jpg 300w, https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/Collegium_Helveticum-384x140.jpg 384w\" sizes=\"auto, (max-width: 388px) 100vw, 388px\" \/><\/figure>\n<\/div><\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\"><div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"258\" height=\"257\" 
src=\"https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/saab.png\" alt=\"\" class=\"wp-image-5552\" style=\"width:192px;height:auto\" srcset=\"https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/saab.png 258w, https:\/\/xaiworldconference.com\/2025\/wp-content\/uploads\/2024\/12\/saab-150x150.png 150w\" sizes=\"auto, (max-width: 258px) 100vw, 258px\" \/><\/figure>\n<\/div><\/div>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>Understanding and managing uncertainty in AI models is increasingly important to ensure transparency, appropriate trust, and reliability. This track explores diverse approaches to integrating uncertainty quantification into explainable AI (XAI) frameworks, emphasizing how explanations can communicate both model confidence and prediction reliability. Topics of interest include methods for representing and interpreting aleatoric and epistemic uncertainties, &hellip; <\/p>\n","protected":false},"author":2,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"_eb_attr":"","footnotes":""},"class_list":["post-5546","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/xaiworldconference.com\/2025\/wp-json\/wp\/v2\/pages\/5546","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/xaiworldconference.com\/2025\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/xaiworldconference.com\/2025\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/xaiworldconference.com\/2025\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/xaiworldconference.com\/2025\/wp-json\/wp\/v2\/comments?post=5546"}],"version-history":[{"count":2,"href":"https:\/\/xaiworldconference.com\/2025\/wp-json\/wp\/v2\/pages\/5546\/revisions"}],"predecessor-version":[{"id":5555,"href":"https:\/\/xaiworldconference.com\/2025\/wp-json\/wp\/v2\/pages\/5546\/revisions\/5555"}],"wp:attac
hment":[{"href":"https:\/\/xaiworldconference.com\/2025\/wp-json\/wp\/v2\/media?parent=5546"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}