{"id":2725,"date":"2023-12-30T14:36:51","date_gmt":"2023-12-30T14:36:51","guid":{"rendered":"https:\/\/xaiworldconference.com\/2024\/?page_id=2725"},"modified":"2024-02-17T19:03:55","modified_gmt":"2024-02-17T19:03:55","slug":"computational-argumentation-for-xai","status":"publish","type":"page","link":"https:\/\/xaiworldconference.com\/2024\/computational-argumentation-for-xai\/","title":{"rendered":"Computational Argumentation for xAI"},"content":{"rendered":"\n<div class=\"wp-block-media-text is-stacked-on-mobile\"><figure class=\"wp-block-media-text__media\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"1024\" src=\"https:\/\/xaiworldconference.com\/2024\/wp-content\/uploads\/2023\/12\/ST-computational_argumentation_for_xai-1024x1024.png\" alt=\"\" class=\"wp-image-2731 size-full\" srcset=\"https:\/\/xaiworldconference.com\/2024\/wp-content\/uploads\/2023\/12\/ST-computational_argumentation_for_xai-1024x1024.png 1024w, https:\/\/xaiworldconference.com\/2024\/wp-content\/uploads\/2023\/12\/ST-computational_argumentation_for_xai-300x300.png 300w, https:\/\/xaiworldconference.com\/2024\/wp-content\/uploads\/2023\/12\/ST-computational_argumentation_for_xai-150x150.png 150w, https:\/\/xaiworldconference.com\/2024\/wp-content\/uploads\/2023\/12\/ST-computational_argumentation_for_xai-768x768.png 768w, https:\/\/xaiworldconference.com\/2024\/wp-content\/uploads\/2023\/12\/ST-computational_argumentation_for_xai-470x470.png 470w, https:\/\/xaiworldconference.com\/2024\/wp-content\/uploads\/2023\/12\/ST-computational_argumentation_for_xai.png 1500w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure><div class=\"wp-block-media-text__content\">\n<p>This track focuses on the emerging field of Computational Argumentation and its intersection with Explainable AI, exploring novel approaches that harness the power of argumentation to enhance the interpretability and explainability of AI systems. 
Examples include implementing interpretable machine learning models using rule-based approaches, with rules expressed in a structured argumentative form, as well as models that explain feature importance through argumentation by providing arguments for and against the relevance of each feature. Other examples include using argumentative structures for visual explanations, such as interactive graphs or diagrams that represent a decision-making process. Argumentation can also help identify biases within a model by evaluating the supporting and opposing evidence for different predictions, and it can improve predictive performance via error analysis: when a model makes errors, argumentation can be used to analyze and present the reasons behind them. Argumentation can further be employed to create interactive explanation interfaces, allowing users to engage with a model&#8217;s explanations and to challenge or question its decisions. Finally, argumentation can be adapted to incorporate expert opinions and domain-specific knowledge into an AI model. This track invites submissions contributing to the theoretical foundations, methodological advancements, and practical applications of this interdisciplinary domain. Researchers and practitioners are encouraged to share their insights, methodologies, and findings, fostering a collaborative environment that propels the field forward and establishes computational argumentation as a tangible tool for Explainable Artificial Intelligence. 
<\/p>\n<\/div><\/div>\n\n\n\n<h2 class=\"wp-block-heading\">Topics<\/h2>\n\n\n\n<figure class=\"wp-block-table is-style-stripes has-small-font-size\"><table><tbody><tr><td>explainable rule-based structured argumentation<\/td><\/tr><tr><td>feature importance through computational argumentation for xAI<\/td><\/tr><tr><td>arguments for\/against features&#8217; relevance<\/td><\/tr><tr><td>argumentative structures for visual explanations <\/td><\/tr><tr><td>argument-based interactive graphs\/diagrams for explainable decision-making<\/td><\/tr><tr><td>computational argumentation for bias identification<\/td><\/tr><tr><td>evaluation of machine-learned predictions via supporting\/opposing arguments<\/td><\/tr><tr><td>computational argumentation as a tool for error analysis of predictive performance<\/td><\/tr><tr><td>interactive explanations via argumentation<\/td><\/tr><tr><td>interactive visual argumentation for knowledge-base improvement<\/td><\/tr><tr><td>model enhancement via argumentative expert opinions<\/td><\/tr><tr><td>model performance via integration with argument-based domain-specific knowledge <\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\">Supported by<\/h2>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\"><div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full is-resized\"><a href=\"https:\/\/www.ucy.ac.cy\/\"><img loading=\"lazy\" decoding=\"async\" width=\"450\" height=\"183\" src=\"https:\/\/xaiworldconference.com\/2024\/wp-content\/uploads\/2024\/02\/bio_center_ucy.png\" alt=\"\" class=\"wp-image-3778\" style=\"width:307px;height:auto\" srcset=\"https:\/\/xaiworldconference.com\/2024\/wp-content\/uploads\/2024\/02\/bio_center_ucy.png 450w, 
https:\/\/xaiworldconference.com\/2024\/wp-content\/uploads\/2024\/02\/bio_center_ucy-300x122.png 300w\" sizes=\"auto, (max-width: 450px) 100vw, 450px\" \/><\/a><\/figure>\n<\/div><\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\"><div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full is-resized\"><a href=\"http:\/\/lucalongo.eu\/LongoLab.php\"><img loading=\"lazy\" decoding=\"async\" width=\"320\" height=\"320\" src=\"https:\/\/xaiworldconference.com\/2024\/wp-content\/uploads\/2024\/02\/logo_AICL_Labs_transparent.png\" alt=\"\" class=\"wp-image-3420\" style=\"width:164px;height:auto\" srcset=\"https:\/\/xaiworldconference.com\/2024\/wp-content\/uploads\/2024\/02\/logo_AICL_Labs_transparent.png 320w, https:\/\/xaiworldconference.com\/2024\/wp-content\/uploads\/2024\/02\/logo_AICL_Labs_transparent-300x300.png 300w, https:\/\/xaiworldconference.com\/2024\/wp-content\/uploads\/2024\/02\/logo_AICL_Labs_transparent-150x150.png 150w\" sizes=\"auto, (max-width: 320px) 100vw, 320px\" \/><\/a><\/figure>\n<\/div><\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\"><div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full is-resized\"><a href=\"https:\/\/intelligence.csd.auth.gr\/\"><img loading=\"lazy\" decoding=\"async\" width=\"150\" height=\"150\" src=\"https:\/\/xaiworldconference.com\/2024\/wp-content\/uploads\/2024\/02\/isl_logo_blue-3.png\" alt=\"\" class=\"wp-image-3970\" style=\"width:152px;height:auto\"\/><\/a><\/figure>\n<\/div><\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\"><div class=\"wp-block-image\">\n<figure class=\"aligncenter size-large is-resized\"><a 
href=\"https:\/\/aircresearch.ie\/\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"373\" src=\"https:\/\/xaiworldconference.com\/2024\/wp-content\/uploads\/2024\/02\/AIRC-Logo-1024x373.jpg\" alt=\"\" class=\"wp-image-3808\" style=\"width:357px;height:auto\" srcset=\"https:\/\/xaiworldconference.com\/2024\/wp-content\/uploads\/2024\/02\/AIRC-Logo-1024x373.jpg 1024w, https:\/\/xaiworldconference.com\/2024\/wp-content\/uploads\/2024\/02\/AIRC-Logo-300x109.jpg 300w, https:\/\/xaiworldconference.com\/2024\/wp-content\/uploads\/2024\/02\/AIRC-Logo-768x280.jpg 768w, https:\/\/xaiworldconference.com\/2024\/wp-content\/uploads\/2024\/02\/AIRC-Logo.jpg 1462w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/a><\/figure>\n<\/div><\/div>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>This track focuses on the emerging field of Computational Argumentation and its intersection with Explainable AI, exploring novel approaches that harness the power of argumentation to enhance the interpretability and explainability of AI systems. 
Examples include implementing interpretable machine learning models using rule-based approaches and expressing rules in a structured argumentative form, as well as &hellip; <\/p>\n","protected":false},"author":2,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"_eb_attr":"","footnotes":""},"class_list":["post-2725","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/xaiworldconference.com\/2024\/wp-json\/wp\/v2\/pages\/2725","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/xaiworldconference.com\/2024\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/xaiworldconference.com\/2024\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/xaiworldconference.com\/2024\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/xaiworldconference.com\/2024\/wp-json\/wp\/v2\/comments?post=2725"}],"version-history":[{"count":12,"href":"https:\/\/xaiworldconference.com\/2024\/wp-json\/wp\/v2\/pages\/2725\/revisions"}],"predecessor-version":[{"id":4008,"href":"https:\/\/xaiworldconference.com\/2024\/wp-json\/wp\/v2\/pages\/2725\/revisions\/4008"}],"wp:attachment":[{"href":"https:\/\/xaiworldconference.com\/2024\/wp-json\/wp\/v2\/media?parent=2725"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}