Rule-based systems offer transparency and clarity in decision-making. Because their behaviour is defined by explicit rules, human users can trace each decision back to the specific conditions that triggered it. In contrast to black-box machine learning models, a rule-based system is understandable not only to AI practitioners but also to a broader audience, including domain experts, who can draw on their experience to help build more effective models. This collaboration ensures a faithful representation of domain knowledge, aligning the system's decisions with human expertise. Moreover, because root causes are transparently identified, predictions from rule-based systems can be readily translated into actionable insights, which in turn facilitates concrete actions to improve the studied system. Various approaches have been proposed to extract rules from machine learning models or to generate them directly from data, such as Decision Trees and Fuzzy Systems. This special track focuses on methods that achieve explainability through rule-based models. In particular, it encourages contributions on new methods to generate rules from data, on integrating domain-related rules (expert systems) with machine learning approaches, on transforming rule predictions into actionable insights, and on applications of rule-based systems in specific domains.
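The traceability described above can be illustrated with a minimal, hypothetical sketch: each rule is an explicit IF-THEN pair, so every prediction carries the name of the rule that produced it. All rule names, thresholds, and the fraud-screening scenario below are invented for illustration.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str                           # identifier used as the explanation
    condition: Callable[[dict], bool]   # IF: predicate over a feature record
    outcome: str                        # THEN: the decision

def predict(rules: list[Rule], record: dict, default: str = "review") -> tuple[str, str]:
    """Apply rules in order; return (decision, fired_rule_name).

    The fired rule's name is the explanation: the prediction is fully
    traceable to the condition that triggered it.
    """
    for rule in rules:
        if rule.condition(record):
            return rule.outcome, rule.name
    return default, "default"

# Illustrative rule base (invented thresholds, not from any real system)
rules = [
    Rule("high_amount", lambda r: r["amount"] > 10_000, "flag"),
    Rule("new_account", lambda r: r["account_age_days"] < 30, "flag"),
    Rule("trusted", lambda r: r["amount"] <= 10_000, "approve"),
]

decision, why = predict(rules, {"amount": 15_000, "account_age_days": 400})
# decision is "flag"; why is "high_amount", the rule that fired
```

Unlike a black-box score, the returned rule name is itself the actionable explanation a domain expert can inspect, challenge, or refine.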

Topics of interest include:
Transparent decision-making with prescriptive rule-based systems (e.g. Rule-based Control)
Methods to extract rules from trained models (e.g. decompositional rule extraction, pedagogical rule extraction)
IF-THEN format rule generation methods (e.g., Genetic Rule Extraction, Logic Learning Machine)
Explainable ensemble methods using rule-based systems (e.g. Random Forests, Gradient-boosted Trees)
Voting mechanisms in Random Forests to improve interpretability (e.g. cautious weighted random forests, adjusted weight voting algorithm for random forests)
Fuzzy rule-based systems (IF-THEN rules with fuzzy variables as antecedents/consequents)
Functional fuzzy rule-based XAI (for subspaces of the input with the output as a function)
Expert systems models defined by a priori knowledge (e.g. expert rule-based antifraud engines in insurance, expert rule-based systems for control of machinery)
Hybrid explainable rule-based systems (e.g. ML pre-processes input data for Rule-Based inference (MLRB), Parallel Ensemble of Rules and ML (PERML))
Feature engineering of rule-based methods for improved explainability (e.g. Deep Feature Synthesis, One Button Machines)
Explainable rule-based techniques for credit-related applications in banking (e.g. transparent bankruptcy prediction, credit scoring, non-performing loans handling)
Transparent anomaly detection via rule-based systems for data quality
Explainable predictive maintenance via rules (e.g. for critical situation identification in industrial plants, for promoting maintenance actions)
Improving the reliability of dependable systems via rule-based explainable methods (e.g. in autonomous safety-critical contexts)
Explainable rule-based methods for time series (for derivation of rules about temporal patterns/abstractions, e.g. seasonality)
Transparent detection of cyber threats and their mitigations via rule-based methods
Traceable rule-based methods for computer-aided diagnosis in healthcare
Explainable rule-based fraud detection (e.g. fraud in insurance claims and financial transactions)
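Several of the topics above involve fuzzy rule-based systems, where IF-THEN rules have fuzzy variables as antecedents. A minimal sketch of Mamdani-style inference with weighted-average defuzzification is given below; the triangular membership functions, temperature/fan-speed rules, and all numeric values are invented for illustration.

```python
def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function: 0 outside [a, c], peak of 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def infer(temp: float) -> float:
    """Fuzzy inference with two illustrative rules:

    IF temp is LOW  THEN fan_speed = 20
    IF temp is HIGH THEN fan_speed = 80
    """
    low = tri(temp, 0, 10, 25)    # degree to which temp is LOW
    high = tri(temp, 15, 30, 40)  # degree to which temp is HIGH
    firing = [(low, 20.0), (high, 80.0)]
    total = sum(w for w, _ in firing)
    # Weighted-average defuzzification; fall back to a neutral value
    # when no rule fires at all.
    return sum(w * out for w, out in firing) / total if total else 50.0
```

Each output remains traceable: the firing strengths `low` and `high` show exactly how much each linguistic rule contributed to the final crisp value.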
