
Rule-based systems play a central role in ensuring transparency and interpretability in decision-making processes, especially in domains where understanding why a system reaches a certain conclusion is as important as the conclusion itself. Their defining feature is the presence of explicit, human-readable rules: each decision results from well-identified conditions that can be examined individually. Users can therefore trace the logical pathway behind any output without relying on complex mathematical abstractions or opaque model representations.
Unlike black-box machine learning models—whose internal mechanisms often remain inaccessible even to experienced practitioners—rule-based systems provide a level of comprehensibility that extends beyond the AI community. Domain experts, such as clinicians, engineers or policy makers, can directly understand the logic that drives the system and therefore actively participate in its construction and refinement. This collaborative development fosters models that reflect real-world knowledge more faithfully, ensuring that the encoded rules align with established professional practices and human reasoning.
Another major advantage of rule-based approaches lies in their ability to translate predictions into clear, actionable insights. Because each outcome is linked to a transparent set of conditions, users can immediately identify the root causes that triggered a result, leading to interventions that are both targeted and effective. This direct connection between cause and effect is particularly valuable in operational contexts, where decisions must be justified and supported by easily interpretable evidence.
Over the years, various techniques have been proposed to extract rules from complex machine learning systems or to learn them directly from data. Classical approaches such as Decision Trees offer a structured way to partition the feature space into understandable segments, each mapped to a specific decision. Fuzzy Systems further extend this concept by allowing rules to incorporate degrees of uncertainty and linguistic variables, making them suitable for modelling more nuanced relationships. Beyond these traditional paradigms, more recent research explores hybrid models capable of combining symbolic reasoning with statistical learning, allowing rule-based systems to benefit from the flexibility and predictive power of modern AI while preserving interpretability.
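As a minimal illustration of the first of these paradigms, the sketch below fits a shallow decision tree and prints each root-to-leaf path as a human-readable IF-THEN rule. It assumes scikit-learn; the bundled Iris dataset and the depth limit are arbitrary choices made only to keep the rule set short.

# A minimal sketch, assuming scikit-learn; dataset and depth are placeholders.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
# A shallow tree keeps the extracted rule set small and readable.
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(iris.data, iris.target)

# Each root-to-leaf path is an explicit IF-THEN rule over feature thresholds.
print(export_text(tree, feature_names=list(iris.feature_names)))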
This special session is dedicated to methodologies that use rule-based representations as a means to achieve explainability. It welcomes contributions proposing innovative mechanisms for generating rule sets from data, advancing rule-extraction techniques, or integrating expert-defined rules with machine learning frameworks. Submissions focusing on the transformation of rule-based predictions into actionable recommendations are also encouraged, as this step is crucial for bridging the gap between explainability and operational decision support. Finally, the session is open to domain-specific applications of rule-based explainable models, providing concrete demonstrations of how such systems can enhance trust, transparency and accountability across different fields—from healthcare and robotics to environmental monitoring, finance, and industrial processes.
Keywords: Rule-Based Decision Systems, Rule Extraction, Fuzzy Rule-Based Reasoning, Hybrid Models, Actionable AI, Expert-Guided Rule Integration, Human-Centric Decision Logic, Rule-Based Systems Applications
Topics
- Methods to improve accuracy/interpretability of tree-based models (e.g. gradient-based trees, optimal trees)
- Methods to extract rules from trained models (e.g. decompositional rule extraction, pedagogical rule extraction; a minimal pedagogical sketch follows this list)
- Methods for generating rules in IF-THEN format (e.g. Genetic Rule Extraction, Logic Learning Machine)
- Explainable ensemble methods using rule-based systems (e.g. Random Forests, gradient-boosted trees)
- Voting mechanisms in Random Forests to improve interpretability (e.g. cautious weighting, adjusted weight voting)
- Fuzzy rule-based systems (IF-THEN rules with fuzzy variables as antecedents/consequents)
- Functional fuzzy rule-based XAI (rules that express the output as a function of the inputs over subspaces of the input space)
- Expert systems defined by a priori knowledge (e.g. anti-fraud engines in insurance, machinery control systems)
- Hybrid explainable rule-based systems (e.g. ML pre-processing of input data for rule-based inference (MLRB), Parallel Ensemble of Rules and ML (PERML))
- Feature engineering for rule-based methods to improve explainability (e.g. Deep Feature Synthesis, One Button Machines)
- Transparent decision-making with prescriptive rule-based systems (e.g. rule-based control)
- Explainable rule-based techniques for credit-related applications (e.g. bankruptcy prediction, credit scoring, and non-performing loans handling)
- Transparent anomaly detection via rule-based systems for data quality
- Explainable predictive maintenance via rules (e.g. for identifying critical situations or promoting maintenance actions)
- Improving the reliability of dependable systems via rule-based explainable methods (e.g. in autonomous safety-critical contexts)
- Explainable rule-based methods for time series (deriving rules about temporal patterns and abstractions, e.g. seasonality)
- Transparent detection of cyber threats and their mitigations via rule-based methods
- Traceable rule-based methods for computer-aided diagnosis in healthcare
- Explainable rule-based fraud detection (e.g. fraud in insurance claims and financial transactions)
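For the rule-extraction topic above, the following minimal sketch illustrates pedagogical rule extraction under stated assumptions (scikit-learn; the black-box model and dataset are arbitrary placeholders): the trained model is treated as an oracle, and a shallow surrogate decision tree is fitted to its predictions rather than to the original labels, yielding readable rules that approximate the black box's behaviour.

# Pedagogical rule extraction: fit an interpretable surrogate to the
# black box's predictions, not to the raw labels.
# A minimal sketch assuming scikit-learn; models and data are placeholders.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X, y = data.data, data.target

black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# The surrogate's rules describe the model's behaviour, not the data directly.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))
print(export_text(surrogate, feature_names=list(data.feature_names)))

# Fidelity: how often the surrogate's rules agree with the black box.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"surrogate fidelity to the black box: {fidelity:.2%}")

A natural refinement is to measure fidelity on held-out data, since agreement on the training set alone can overstate how faithfully the extracted rules describe the model.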

