
Rule-based systems ensure transparency and clarity in decision-making: because such systems are defined by explicit rules, human users can trace each decision back to the specific conditions that triggered it. In contrast to black-box machine learning models, the behavior of a rule-based system is understandable not only to AI practitioners but also to a broader audience, including domain experts, who can draw on their experience to contribute to more effective models. This collaboration ensures a faithful representation of domain knowledge, aligning the system's decisions with human expertise. Furthermore, because root causes are transparently identified, predictions from rule-based systems can be readily translated into actionable insights, which in turn facilitates concrete actions aimed at improving the effectiveness of the studied system. Various approaches have been proposed to extract rules from trained machine learning models or to generate them directly from data, such as Decision Trees and Fuzzy Systems.
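The traceability described above can be made concrete with a minimal sketch: a classifier that returns each decision together with the explicit IF-THEN rule that produced it. The rule names, features, and thresholds below are purely illustrative assumptions, not part of any specific system.

```python
def classify(record, rules, default=("no_action", "default rule")):
    """Return (decision, explanation) from the first matching rule."""
    for condition, decision, explanation in rules:
        if condition(record):
            # The fired rule itself is the explanation of the decision.
            return decision, explanation
    return default

# Hypothetical credit-scoring rules in IF-THEN form (illustrative only).
RULES = [
    (lambda r: r["debt_ratio"] > 0.6, "reject",
     "IF debt_ratio > 0.6 THEN reject"),
    (lambda r: r["income"] >= 30000 and r["late_payments"] == 0, "approve",
     "IF income >= 30000 AND late_payments == 0 THEN approve"),
]

decision, why = classify(
    {"debt_ratio": 0.7, "income": 45000, "late_payments": 0}, RULES)
print(decision, "|", why)  # prints: reject | IF debt_ratio > 0.6 THEN reject
```

Unlike a black-box score, the output carries its own justification, which a domain expert can audit or amend by editing the rule list directly.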
This special track focuses on methods that achieve explainability through rule-based models. Specifically, it encourages contributions on new methods to generate rules from data, to integrate domain-related rules (expert systems) with machine learning approaches, to transform rule-based predictions into actionable insights, and to apply rule-based systems in specific domains.
Topics
- Methods to improve accuracy/interpretability of tree-based models (e.g. gradient-based trees, optimal trees)
- Methods to extract rules from trained models (e.g. decompositional rule extraction, pedagogical rule extraction)
- IF-THEN format rule generation methods (e.g., Genetic Rule Extraction, Logic Learning Machine)
- Explainable ensemble methods using rule-based systems (e.g. Random Forests, gradient-boosted trees)
- Voting mechanisms in Random Forests to improve interpretability (e.g. cautious weighting, adjusted weight voting)
- Fuzzy rule-based systems (IF-THEN rules with fuzzy variables as antecedents/consequents)
- Functional fuzzy rule-based XAI (for subspaces of the input with the output as a function)
- Expert systems defined by a priori knowledge (e.g. anti-fraud engines in insurance, machinery control systems)
- Hybrid explainable rule-based systems (e.g. ML pre-processing of input data for rule-based inference (MLRB), Parallel Ensemble of Rules and ML (PERML))
- Feature engineering of rule-based methods for improved explainability (e.g. Deep Feature Synthesis, One Button Machines)
- Transparent decision-making with prescriptive rule-based systems (e.g. rule-based control)
- Explainable rule-based techniques for credit-related applications (e.g. bankruptcy prediction, credit scoring, non-performing loans handling)
- Transparent anomaly detection via rule-based systems for data quality
- Explainable predictive maintenance via rules (e.g. for identifying critical situations or promoting maintenance actions)
- Improving the reliability of dependable systems via rule-based explainable methods (e.g. in autonomous safety-critical contexts)
- Explainable rule-based methods for time series (for derivation of rules about temporal patterns/abstractions, e.g. seasonality)
- Transparent detection of cyber threats and their mitigations via rule-based methods
- Traceable rule-based methods for computer-aided diagnosis in healthcare
- Explainable rule-based fraud detection (e.g. fraud in insurance claims and financial transactions)
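Several of the topics above concern turning tree-structured models into flat IF-THEN rule sets. A minimal sketch of the idea, enumerating root-to-leaf paths of a hand-built toy tree, is shown below; the node layout (internal nodes as `(feature, threshold, left, right)` tuples, leaves as labels) and the predictive-maintenance features are assumptions for illustration, not a reference implementation of any particular extraction method.

```python
def tree_to_rules(node, path=()):
    """Recursively collect one IF-THEN rule per root-to-leaf path."""
    if not isinstance(node, tuple):  # leaf: emit the accumulated rule
        conds = " AND ".join(path) or "TRUE"
        return [f"IF {conds} THEN {node}"]
    feature, threshold, left, right = node
    return (tree_to_rules(left, path + (f"{feature} <= {threshold}",)) +
            tree_to_rules(right, path + (f"{feature} > {threshold}",)))

# Toy decision tree for a hypothetical predictive-maintenance task.
TREE = ("temperature", 80,
        ("vibration", 0.5, "normal", "inspect"),
        "shutdown")

for rule in tree_to_rules(TREE):
    print(rule)
# IF temperature <= 80 AND vibration <= 0.5 THEN normal
# IF temperature <= 80 AND vibration > 0.5 THEN inspect
# IF temperature > 80 THEN shutdown
```

The same traversal applies to trees learned from data, which is what makes tree-based models a natural bridge between predictive accuracy and human-readable rules.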

