In the landscape of Artificial Intelligence (AI), the pursuit of Explainable AI (XAI) has predominantly thrived on methodologies reliant on labelled data, enabling the elucidation of model decisions in supervised settings. This reliance on annotations poses a formidable challenge in unsupervised learning: without explicit labels, conventional XAI techniques, which lean heavily on annotated data for explanation generation, become largely infeasible. This specialized track aims to spotlight the intricate challenges and pioneering advancements crucial for unravelling the opaque workings of unsupervised learning systems. We cordially invite researchers, practitioners, and experts across diverse domains to delve into the complexities inherent in rendering unsupervised learning both transparent and interpretable. The goal is to foster discussion and explore methodologies that address the intrinsic opacity of unsupervised learning algorithms, seeking novel ways to provide meaningful explanations without the crutch of labelled data.

Topics

Approaches to providing interpretability to clustering algorithms (e.g., K-Means, Hierarchical Clustering, DBSCAN; see the surrogate-tree sketch after this list)
Novel interpretable clustering algorithms
Novel interpretable methods for neural network-based clustering approaches
Explainable unsupervised anomaly and outlier detection
Explainable extensions of unsupervised methods such as Isolation Forest, Local Outlier Factor, and Angle-Based Outlier Detection
Novel explainable unsupervised anomaly and outlier detection approaches
Visual explanations to aid comprehension of unsupervised outcomes (feature importance, effects on anomaly scores; see the permutation sketch after this list)
Interpretability of unsupervised models via a gradual shift to semi- and weakly-supervised models
Transparent dimensionality reduction in unsupervised settings
XAI methods for denoising, sparse, variational, and stacked autoencoders
XAI for feature relevance in unsupervised tasks (model-agnostic approaches and model-specific ones, such as depth-based feature importance in Isolation Forest)
Development of XAI methods for word embeddings (GloVe, Word2Vec) and conversational agents
Feature extraction methods for local interpretability in unsupervised settings
Cluster interpretation in customer segmentation via XAI methods
Explainable topic modelling in unsupervised document analysis
XAI unsupervised approaches in Internet of Things applications
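
To ground the clustering topics above, the following is a minimal sketch, not a prescribed method, of one common pattern: fitting a shallow decision tree as a post-hoc surrogate for K-Means assignments, so that clusters can be summarised as axis-aligned rules. The dataset, tree depth, and parameter choices are illustrative assumptions.

```python
# Illustrative sketch: a shallow decision tree as a post-hoc surrogate
# explaining K-Means cluster assignments. Dataset and settings are
# assumptions for demonstration only.
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
X = iris.data

# Step 1: run the unsupervised model; its labels act as pseudo-targets.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Step 2: fit a shallow, human-readable tree to mimic the assignments.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, kmeans.labels_)

# Step 3: read off axis-aligned rules that approximate each cluster.
print(export_text(surrogate, feature_names=iris.feature_names))
print("fidelity:", surrogate.score(X, kmeans.labels_))  # agreement with K-Means
```

The fidelity score reports how faithfully the surrogate reproduces the clustering; a shallow tree trades some fidelity for rules a human can actually read, which is precisely the tension many of the topics above address.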
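Likewise, for the "effects on anomaly scores" item, here is a minimal model-agnostic sketch: permute one feature at a time and measure how much an Isolation Forest's anomaly scores shift. The synthetic data and settings are assumptions for illustration; work submitted on this topic would naturally go beyond such a baseline.

```python
# Illustrative sketch: model-agnostic feature effect on anomaly scores,
# measured by permuting one feature at a time. Data is synthetic and
# chosen purely for demonstration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
X[:10] += 6  # inject a few obvious outliers

forest = IsolationForest(random_state=0).fit(X)
base = forest.score_samples(X)  # higher = more normal

# Importance of feature j = mean absolute shift in anomaly scores
# when feature j is randomly permuted (breaking its information).
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    shift = np.mean(np.abs(forest.score_samples(Xp) - base))
    print(f"feature {j}: mean |score shift| = {shift:.4f}")
```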