{"id":2828,"date":"2024-01-10T17:02:42","date_gmt":"2024-01-10T17:02:42","guid":{"rendered":"https:\/\/xaiworldconference.com\/2024\/?page_id=2828"},"modified":"2024-02-17T19:08:28","modified_gmt":"2024-02-17T19:08:28","slug":"explainable-ai-for-privacy-preserving-machine-learning","status":"publish","type":"page","link":"https:\/\/xaiworldconference.com\/2024\/explainable-ai-for-privacy-preserving-machine-learning\/","title":{"rendered":"Explainable AI for Privacy-Preserving Machine Learning"},"content":{"rendered":"\n<div class=\"wp-block-media-text is-stacked-on-mobile\"><figure class=\"wp-block-media-text__media\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"1024\" src=\"https:\/\/xaiworldconference.com\/2024\/wp-content\/uploads\/2024\/01\/ST-Explainable-AI-for-Privacy-Preserving-Machine-Learning-1-1024x1024.png\" alt=\"\" class=\"wp-image-2886 size-full\" srcset=\"https:\/\/xaiworldconference.com\/2024\/wp-content\/uploads\/2024\/01\/ST-Explainable-AI-for-Privacy-Preserving-Machine-Learning-1-1024x1024.png 1024w, https:\/\/xaiworldconference.com\/2024\/wp-content\/uploads\/2024\/01\/ST-Explainable-AI-for-Privacy-Preserving-Machine-Learning-1-300x300.png 300w, https:\/\/xaiworldconference.com\/2024\/wp-content\/uploads\/2024\/01\/ST-Explainable-AI-for-Privacy-Preserving-Machine-Learning-1-150x150.png 150w, https:\/\/xaiworldconference.com\/2024\/wp-content\/uploads\/2024\/01\/ST-Explainable-AI-for-Privacy-Preserving-Machine-Learning-1-768x768.png 768w, https:\/\/xaiworldconference.com\/2024\/wp-content\/uploads\/2024\/01\/ST-Explainable-AI-for-Privacy-Preserving-Machine-Learning-1-1536x1536.png 1536w, https:\/\/xaiworldconference.com\/2024\/wp-content\/uploads\/2024\/01\/ST-Explainable-AI-for-Privacy-Preserving-Machine-Learning-1-2048x2048.png 2048w, https:\/\/xaiworldconference.com\/2024\/wp-content\/uploads\/2024\/01\/ST-Explainable-AI-for-Privacy-Preserving-Machine-Learning-1-470x470.png 470w\" sizes=\"auto, (max-width: 1024px) 100vw, 
1024px\" \/><\/figure><div class=\"wp-block-media-text__content\">\n<p>After a decade of deep learning applications, it is no surprise that the resulting models often suffer from limited interpretability. Yet a possibly more imminent risk is the leakage of training data from models. Years of research have shown that, by systematically exploiting pre-trained models at various levels of detail, including output confidence values, statistical assumptions about the training data, direct access to neural network weights, or access to data that shares characteristics with the original training data, it is in surprisingly many cases possible to infer whether an individual data observation was included in the training data (membership inference attacks). Increasingly severe model inference attack methods demonstrate reconstruction of the actual values of training data. Despite the continuous cycle of new model attack methods and subsequently released risk mitigation strategies, there is a clear need for explainable AI methods for assessing and understanding the risk of successful attacks. Moreover, research on explainable AI methods is also needed to build knowledge of the effectiveness and influence of model risk mitigation strategies in various respects, including vulnerabilities in parts of the input feature space and risks associated with different data modalities. Legal frameworks and guidelines regarding AI link strongly to preserving the privacy of individuals&#8217; training data; therefore, research on explainable AI methods for understanding how legal\/technical challenges can be handled is welcome.<\/p>\n<\/div><\/div>\n\n\n\n<h2 class=\"wp-block-heading\">Topics<\/h2>\n\n\n\n<figure class=\"wp-block-table is-style-stripes has-small-font-size\"><table><tbody><tr><td>Explainable ML-based data\/model inference attack detection (e.g. 
explainable supervised\/unsupervised ML-based detection methods, statistical change-point detection methods)<\/td><\/tr><tr><td>Privacy-preserving explainable models (e.g. models resistant to model inversion attack methods, membership attack methods)<\/td><\/tr><tr><td>Applications of xAI methods for understanding\/quantifying attack risk mitigation strategies for individual predictions (e.g. via local xAI methods, individual Shapley values)<\/td><\/tr><tr><td>Applications of xAI methods for understanding privacy attacks in federated learning (e.g. visualizations of data leakage under attacks for understanding vulnerabilities, error bounds, similarity-based metrics)<\/td><\/tr><tr><td>Novel development of xAI methods for understanding\/quantifying privacy attack risk &amp; performance<\/td><\/tr><tr><td>Explainability-based defence mechanisms for attacks on anonymization processes (e.g. neuro-symbolic techniques with automated-reasoning capabilities)<\/td><\/tr><tr><td>Explainable models to facilitate the &#8216;right to be forgotten&#8217; for data contributors (e.g. 
via models able to forget, unlearning methods, model-specific\/agnostic approximate unlearning)<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\">Supported by<\/h2>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\"><div class=\"wp-block-image\">\n<figure class=\"aligncenter size-large is-resized\"><a href=\"https:\/\/www.hh.se\/english\/research\/our-research\/research-at-the-school-of-information-technology\/center-for-applied-intelligent-systems-research-caisr.html\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"207\" src=\"https:\/\/xaiworldconference.com\/2024\/wp-content\/uploads\/2024\/02\/CAISR-1024x207.png\" alt=\"\" class=\"wp-image-3962\" style=\"width:388px;height:auto\" srcset=\"https:\/\/xaiworldconference.com\/2024\/wp-content\/uploads\/2024\/02\/CAISR-1024x207.png 1024w, https:\/\/xaiworldconference.com\/2024\/wp-content\/uploads\/2024\/02\/CAISR-300x61.png 300w, https:\/\/xaiworldconference.com\/2024\/wp-content\/uploads\/2024\/02\/CAISR-768x156.png 768w, https:\/\/xaiworldconference.com\/2024\/wp-content\/uploads\/2024\/02\/CAISR-1536x311.png 1536w, https:\/\/xaiworldconference.com\/2024\/wp-content\/uploads\/2024\/02\/CAISR.png 1748w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/a><\/figure>\n<\/div><\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\"><div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full is-resized\"><a href=\"https:\/\/www.hh.se\/caisr-health\/caisr-health.html\"><img loading=\"lazy\" decoding=\"async\" width=\"890\" height=\"204\" src=\"https:\/\/xaiworldconference.com\/2024\/wp-content\/uploads\/2024\/02\/CAISR_health-1.png\" alt=\"\" class=\"wp-image-3965\" 
style=\"width:383px;height:auto\" srcset=\"https:\/\/xaiworldconference.com\/2024\/wp-content\/uploads\/2024\/02\/CAISR_health-1.png 890w, https:\/\/xaiworldconference.com\/2024\/wp-content\/uploads\/2024\/02\/CAISR_health-1-300x69.png 300w, https:\/\/xaiworldconference.com\/2024\/wp-content\/uploads\/2024\/02\/CAISR_health-1-768x176.png 768w\" sizes=\"auto, (max-width: 890px) 100vw, 890px\" \/><\/a><\/figure>\n<\/div><\/div>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>After a decade of deep learning applications, it is no surprise that the resulting models often suffer from limited interpretability. Yet a possibly more imminent risk is the leakage of training data from models. Years of research have shown that, by systematically exploiting pre-trained models at various levels of detail, including confidence &hellip; <\/p>\n","protected":false},"author":2,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"_eb_attr":"","footnotes":""},"class_list":["post-2828","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/xaiworldconference.com\/2024\/wp-json\/wp\/v2\/pages\/2828","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/xaiworldconference.com\/2024\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/xaiworldconference.com\/2024\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/xaiworldconference.com\/2024\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/xaiworldconference.com\/2024\/wp-json\/wp\/v2\/comments?post=2828"}],"version-history":[{"count":37,"href":"https:\/\/xaiworldconference.com\/2024\/wp-json\/wp\/v2\/pages\/2828\/revisions"}],"predecessor-version":[{"id":4018,"href":"https:\/\/xaiworldconference.com\/2024\/wp-json\/wp\/v2\/pages\/2828\/revisions\/4018"}],"wp:attachment":[{"href":"https:\/\/xaiworldconference.com\/2024\/wp-json\/wp\/v2\/media?par
ent=2828"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}