
The rapid expansion of Artificial Intelligence (AI) applications in finance necessitates the introduction of statistical methods that can assess their quality, not only from a technical viewpoint (accuracy, sustainability) but also from an ethical viewpoint (explainability, fairness).
In this special track, we contribute to filling this gap by calling for papers that develop consistent statistical metrics to measure the sustainability, accuracy, fairness, and explainability of AI applications in finance. We also welcome work that demonstrates their practical application and showcases software packages that implement them.
All areas of finance are considered, including credit lending, asset management, and insurance. Topics of interest include, but are not limited to:
- XAI methods aimed at reconciling predictive forecasting models and evaluating their robustness
- general model-agnostic tools to measure accuracy and the distance between predictive distributions
- techniques for measuring explainability and fairness with post-processing tools such as Shapley values and feature importance (see the illustrative sketch after this list)
- XAI methods aimed at measuring the sustainability of AI models for financial investment
- cyber and operational risk assessment of financial investments with XAI methods
- explainability vs. accuracy, robustness, fairness, and privacy in finance
- explainable AI for financial time series
- XAI techniques for financial geographical data
- applications of XAI methods to credit lending, credit scoring, credit ratings, and financial inclusion
- applications of XAI methods to AI-based models for asset management, robo-advisory, and portfolio allocation
- XAI methods for explaining AI-based models built on distributed ledger technology in finance, crypto assets, stablecoins, and central bank digital currencies
- applications of explainability principles to insurance pricing, customer segmentation in insurance, and insurance claims management
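As an illustration of the post-processing tools mentioned above, the sketch below approximates Shapley values for a single prediction of a credit-scoring model via Monte-Carlo sampling over feature orderings. The synthetic data, the `GradientBoostingClassifier` model, and the `shapley_values` helper are illustrative assumptions rather than a prescribed implementation.

```python
# Illustrative only: Monte-Carlo approximation of Shapley values for one
# prediction of a credit-scoring model. Data, model, and sample sizes are
# assumptions made purely for this sketch.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for credit-scoring data (features could represent
# income, debt ratio, credit history length, etc.).
X, y = make_classification(n_samples=1000, n_features=6, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)


def shapley_values(f, x, background, n_samples=200):
    """Monte-Carlo Shapley attribution of f(x) to each feature of x.

    f          : maps a 2-D array of instances to predicted probabilities
    x          : 1-D instance to explain
    background : 2-D array used to represent 'absent' features
    """
    d = x.shape[0]
    phi = np.zeros(d)
    for _ in range(n_samples):
        order = rng.permutation(d)                  # random feature ordering
        z = background[rng.integers(len(background))].copy()
        prev = f(z[None, :])[0]
        for j in order:                             # switch features to x one by one
            z[j] = x[j]
            curr = f(z[None, :])[0]
            phi[j] += curr - prev                   # marginal contribution of feature j
            prev = curr
    return phi / n_samples


predict = lambda a: model.predict_proba(a)[:, 1]    # e.g. estimated default probability
phi = shapley_values(predict, X[0], X[:100])
print("Per-feature Shapley attribution:", np.round(phi, 3))
# The attributions sum (approximately) to f(x) minus the average baseline prediction.
print("Sum of attributions:", round(phi.sum(), 3))
```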
