The World Conference on Explainable Artificial Intelligence is an annual event that brings together researchers, academics, and professionals to share and discuss knowledge, new perspectives, experiences, and innovations in explainable Artificial Intelligence (XAI). The event is multidisciplinary and interdisciplinary, gathering academics and scholars from disciplines including Computer Science, Psychology, Philosophy, and Social Science, to name a few, alongside industry practitioners interested in the practical, social, and ethical aspects of explaining the models emerging from the discipline of Artificial Intelligence (AI).

The World Conference on Explainable Artificial Intelligence takes place within a rapidly evolving global regulatory landscape. In the European Union, the AI Act (Regulation (EU) 2024/1689) establishes a risk-based regime with transparency, documentation, and human oversight obligations for high-risk systems. In the United States, federal policy is steering AI governance through the Executive Order of October 30, 2023, on “Safe, Secure, and Trustworthy” AI, while states have moved faster on narrower protections (for example, Tennessee’s ELVIS Act to curb AI voice and likeness misuse, and California’s recent SB-53 proposals on frontier model transparency). Canada has been developing the Artificial Intelligence and Data Act (AIDA) alongside the Consumer Privacy Protection Act (CPPA) to require accountability, safety, and transparency in AI use, while provinces and agencies are already applying explainability as part of procedural fairness requirements. Japan follows a “human-centric”, soft-law approach: the AI Promotion Act and national governance guidelines emphasise voluntary compliance, explainability, and human oversight rather than heavy statutory penalties.

Other jurisdictions are also addressing explainability. The United Kingdom pursues a principles-based model (the 2023 White Paper), with regulators such as the ICO, Ofcom, and the FCA applying transparency and explainability within their sectors; Brazil has advanced Bill 2338/2023 (the proposed “Marco Legal da IA”) and leverages LGPD data-protection rights to require information and review for automated decisions; China issued the Interim Measures for the Management of Generative AI Services (2023), imposing provider duties such as labelling and content controls; and Australia has released voluntary standards and agency guidance stressing risk-based guardrails, transparency, and privacy protections.

Taken together, these laws, orders, and guidelines make clear that explainability, and XAI more broadly, is no longer only a technical pursuit: it is central to legal accountability, consumer rights protections, and cross-sector trust. This regulatory diversity, ranging from binding rules to sectoral and state measures and soft-law approaches, makes a multidisciplinary forum on XAI both timely and necessary.