This special track brings together researchers, policymakers, and legal experts to address the complex challenges and opportunities at the intersection of explainable artificial intelligence (XAI) and the legal domain. Its focus is to promote the development of innovative XAI methodologies that integrate technical and legal perspectives. By fostering interdisciplinary collaboration, the track highlights technical advances in model interpretability for complex model architectures and the creation of robust legal compliance frameworks for such XAI tools. A central theme is how XAI techniques such as SHAP, LIME, causal models, and partial dependence plots can be adapted to industry-specific interpretability needs while ensuring compliance with EU regulations in areas such as healthcare, finance, and autonomous systems. Key topics include alignment with the GDPR and the AI Act, and strategies to bridge gaps between developers, regulators, industry professionals, and users, working towards unified frameworks for explainability reports that address both technical and legal requirements across sectors.
The track will also focus on advancing innovative XAI methodologies that transform complex AI models into interpretable, compliance-ready systems aligned with regulatory requirements. It will explore the interoperability of XAI tools to enable seamless integration across diverse industrial applications while ensuring adherence to legal and regulatory standards. Finally, the track will address the establishment of universal benchmarks for XAI by defining global technical and legal standards. These collaborative efforts aim to ensure the reliable and ethical deployment of XAI, fostering trust and legitimacy in AI-driven legal reasoning and governance systems, and to generate quantitative guidelines for developing legally sound and trustworthy XAI solutions.
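
To make the scope concrete, the following minimal sketch (in Python, assuming the shap and scikit-learn packages and a purely synthetic, hypothetical credit-scoring-style dataset) illustrates two of the tool families named above: SHAP for local, per-decision attributions and partial dependence for a global view of a single feature's effect.

    # Minimal, purely illustrative sketch: local (SHAP) and global (partial
    # dependence) explanations for a model trained on synthetic data.
    # Feature names and data are hypothetical, not drawn from any real case.
    import numpy as np
    import shap
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.inspection import partial_dependence

    rng = np.random.default_rng(0)
    feature_names = ["income", "debt_ratio", "age"]   # hypothetical features
    X = rng.normal(size=(500, 3))
    y = (X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

    model = GradientBoostingClassifier(random_state=0).fit(X, y)

    # Local explanation: signed per-feature contributions for one decision.
    explainer = shap.TreeExplainer(model)
    contributions = explainer.shap_values(X[:1])[0]
    for name, value in zip(feature_names, contributions):
        print(f"{name}: {value:+.3f} contribution to the model score")

    # Global explanation: average effect of one feature on the prediction.
    pdp = partial_dependence(model, X, features=[0])
    print("Partial dependence of the score on 'income':", np.round(pdp["average"][0][:5], 3))

Whether such raw attributions constitute a legally adequate explanation, and how they should be standardised and communicated, is exactly the kind of question this track invites.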

  • Analysis of laws and regulations such as the GDPR and their impact on explainable AI development.
  • Evaluation metrics, benchmarks, and toolkits for testing explainability and interpretability tailored to legal standards.
  • Methods to comprehensively communicate model uncertainty, reliability, and rationale to judges, lawyers, and law enforcement.
  • Developing standardised XAI reports for communicating with industry professionals and lay users (see the illustrative sketch after this list).
  • Cross-disciplinary design frameworks integrating legal theory and technical XAI methodologies for robust, domain-informed model explanations.
  • Proposed legal standards, codes of conduct, or certification schemes for AI models in legal contexts.
  • Proposals for legal reforms to support explainable AI adoption in diverse legal processes.
  • Governance frameworks for ensuring ongoing oversight, auditing, and compliance of AI-based legal tools.
  • Human-centric studies on how different stakeholders (e.g., judges and lawyers) interact with explanations.
  • Philosophical analyses of “explanation” in legal contexts — when is an explanation valid, sufficient, or persuasive?
  • Empirical studies on effectiveness, usability, and acceptance of legal XAI systems in real-world contexts.
  • Bias and discrimination issues in AI: XAI tools and solutions for identifying and mitigating them.
  • Tools for aiding judges in explaining sentencing decisions transparently.
  • XAI applications elucidating risk factors and supporting due process in bail, parole, and probation determinations.
  • Regulatory bodies’ use of systems with integrated explainability features to enable transparent rule application and compliance checks.
  • Explainable tools for contract analysis, property rights disputes, or consumer protection issues.
  • Financial regulation and compliance: Explainable AI in fraud detection, anti-money laundering systems, and credit scoring.
  • Healthcare law and medical malpractice claims: diagnostic XAI clarifying the causal reasoning behind treatments or liability decisions.
  • Competition law and antitrust investigations: Explainable predictive modelling and market analysis aiding regulators in enforcement.
  • Employment law: Explainable AI-driven tools that illustrate the rationale behind candidate recommendations.
  • Practical barriers to implementing XAI solutions in legal practice (e.g. lack of technical expertise in law firms).
  • Producing human-readable legal justifications using NLP, knowledge representation, user experience (UX), and interface design.
  • Benchmark datasets and simulation environments that encourage community-wide collaboration and reproducible research in XAI for law.
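
In connection with the topics on standardised XAI reports and human-readable justifications above, the following hypothetical sketch (plain Python; the field names, weights, and wording are made up for illustration and are not a proposed standard) shows one way per-feature contributions could be rendered as a short, readable justification.

    # Hypothetical sketch of a standardised explanation report; all names,
    # weights, and phrasing are illustrative only.
    from dataclasses import dataclass

    @dataclass
    class ExplanationReport:
        decision: str
        model_version: str
        contributions: dict  # feature name -> signed contribution to the score

        def to_text(self) -> str:
            # Rank features by the magnitude of their contribution.
            ranked = sorted(self.contributions.items(), key=lambda kv: -abs(kv[1]))
            lines = [
                f"Decision: {self.decision} (model {self.model_version})",
                "Main factors, ordered by influence:",
            ]
            for name, value in ranked:
                direction = "increased" if value > 0 else "decreased"
                lines.append(f"  - {name} {direction} the model score (weight {value:+.2f})")
            return "\n".join(lines)

    report = ExplanationReport(
        decision="credit application declined",
        model_version="demo-0.1",
        contributions={"debt_ratio": -0.42, "payment_history": -0.21, "income": +0.10},
    )
    print(report.to_text())

What such a report must contain, and in what language, to satisfy both sector regulators and lay users is one of the open questions the track aims to address.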