
The digitization of public administration increasingly involves AI applications that are integrated into a wide variety of processes. However, this integration has to be done with the utmost care for safety and security, because faulty decision-making can cause serious harm to citizens and governmental bodies. XAI has the potential to foster AI applications built around principles such as human oversight, transparency, and the possibility of objection. This potential can only be realized if XAI methods are developed to address the needs of administrative employees working on tasks within an administrative process and of citizens interacting with public administrations. Authorities also have to ensure that their AI-supported processes comply with legal requirements. To this end, they can use XAI methods, which in turn have to withstand judicial review. Furthermore, applicable methods have to be developed with safety in mind, ensuring that the explanations they provide do not cause harm or violate rights. Of equal importance is the security of the methods themselves. Many XAI methods are susceptible to manipulation, and a manipulated explanation cannot be guaranteed to accurately depict the decision process of an AI model. Thus, XAI can only be integrated into public administration if the systems and explanations cannot easily be attacked or manipulated by malicious actors. Such actors may be located outside the administrative system, but XAI methods also need to be robust against attacks and manipulation from within it.

Topics of interest include, but are not limited to:
- XAI methods in tax audit processes (e.g. PDP, ICE, LIME, SHAP, Explanation Graphs)
- design approaches for the public sector with XAI (e.g. User-Centric XAI Framework)
- co-creative application design and XAI techniques for public administration (e.g. user studies with target group)
- explaining models for predicting sewer overflows (e.g. LIME, SHAP, RuleMatrix)
- application of XAI methods to integrated quality management processes (e.g. analyzing compliance with XAI)
- the tradeoff between transparency and explainability in rigorous AI audits (e.g. white-box access, outside-the-box access)
- enhancement of information governance with explainable AI (e.g. SHAP, LIME)
- explainability methods for cybersecurity in the public sector (e.g. SHAP, LIME, ELI5, Skater, DALEX, ALE)
- enhancing trust and transparency in handling customer data with XAI methods (e.g. enhancing information governance frameworks with XAI)
- deceptive XAI in the public sector (e.g. Perturbed Counterfactuals, Score-Based Deception)
- XAI methods for quantifying risk of fair washing (e.g. FaiRS)
- proposing solutions to the XAI disagreement problem (e.g. FD-Trees, REPID)
- model extraction attacks with explainable AI methods (e.g. AUTOLYCUS)
- XAI for adversarial attacks and defences (e.g. Data Poisoning, Backdoor Attack, Neighbourhood SHAP, Smoothed SHAP)
- novel XAI methods for specific processes in public administration (e.g. plausibility checks for requests)
- XAI methods in administration processes considering the needs of administrative employees (e.g. grounding XAI with domain knowledge)
- improvement of digital citizen services with XAI methods (e.g. XAgent)
- attack-proof XAI methods from inside/outside the administrative system (e.g. TSP-Explanation)
- application of XAI methods to fulfill transparency obligations (e.g. SHAP, SAGE, shapiq, SHAP-IQ; see the sketch after this list)
- the limitations of XAI techniques (such as the disagreement problem) in the event of a judicial review (e.g. techno-legal analysis)
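To illustrate the kind of post-hoc attribution methods that recur in the topics above (PDP, ICE, LIME, SHAP), the following is a minimal sketch of a SHAP explanation for a single case of a tabular classifier. The data, model, and feature names are hypothetical placeholders chosen for illustration and are not taken from any of the listed works.

```python
# A minimal, hypothetical sketch: explaining one decision of a tabular classifier
# with SHAP (one of the attribution methods named in the topics above).
# The data, model, and feature names are illustrative placeholders only.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
feature_names = ["income", "household_size", "prior_requests", "region_code"]  # hypothetical
X = rng.normal(size=(500, len(feature_names)))
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)  # synthetic labels, no real-world meaning

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes per-feature Shapley-value attributions for a single case,
# i.e. how much each feature pushed the prediction away from the baseline.
explainer = shap.TreeExplainer(model)
attributions = explainer.shap_values(X[:1])
print(attributions)
```

Such attributions are exactly the kind of explanation artifact whose safety, security, and legal robustness the topics above ask contributors to examine.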