
As Machine Learning (ML) and Artificial Intelligence (AI) become increasingly embedded in daily life, the need to hold ML/AI systems accountable grows with them. This is particularly true in sensitive domains such as healthcare and law, as well as in business applications such as lending, where bias mitigation and fairness are critical. Interpretability is also paramount in industrial applications, such as fault detection and machinery maintenance, where root cause analysis is a goal.
Neuro-Symbolic AI (NeSy) promises to deliver inherently interpretable models while preserving the strengths of purely sub-symbolic ones (i.e., deep learning). This promise is not always met, however, due to challenges such as computational complexity and architectural constraints. For example, systems that allow post hoc logical reasoning, and thus inherent interpretability, are often limited in the expressivity of their logic to avoid combinatorial explosion.
Domains involving time-series data and temporal logic have received comparatively little attention in the NeSy literature. These areas are particularly challenging yet increasingly relevant in the IoT era, where large volumes of often unlabeled, noisy data must be processed. They include industrial process automation, digital twins, robotics, self-driving vehicles, and wearable devices, among others.
Keywords: Neuro-symbolic AI, Knowledge Integration, Differentiable Logic, Rule Extraction, Cognitive AI, Neuro-symbolic AI Benchmarks, Neuro-symbolic AI Validation
Topics
- Interpretable time-series analysis (e.g., IoT device data, biomedical signals) using logic- or constraint-based NeSy models; Signal Temporal Logic (STL) or other temporal logics integrated into the model architecture (a minimal STL robustness sketch is given after this list).
- NeSy integration of expert domain knowledge, where symbolic knowledge (rules, ontologies, physics-based PDEs, etc.) is built into the model architecture, resulting in inherently interpretable models.
- Interpretable NeSy models for root cause analysis, including but not limited to fault detection and diagnosis (e.g., Bayesian probabilistic models and other graphical models).
- Uncertainty-aware explanations grounded in symbolic dependencies (e.g., probabilistic logic integrated with neural networks).
- Logic-guided neural networks with explicit logical operators, e.g., via logical activation functions (first-order logic), for enhanced explainability (e.g., LNN, NRN).
- Constraint-driven neuro-symbolic models (e.g., physics-informed neural networks, PINNs).
- Differentiable logic and differentiable knowledge bases with interpretable soft-logic rules and structured reasoning paths (e.g., Logic Tensor Networks, DeepProbLog, Neural Theorem Provers); a minimal soft-logic sketch is given after this list.
- Modular NeSy, such as cooperative and nested systems in which logical and neural modules are learned jointly (e.g., via reinforcement learning, as in AlphaGo), resulting in explainable inferences.
- Program synthesis guided by neural models (e.g., neural program induction, neural program synthesis, DreamCoder) for interpretable neuro-symbolic reasoning (submissions should not include general program induction).
- Cognitive architectures with neural and symbolic components (e.g., ACT-R combined with deep learning), resulting in explainable inferences.
- Neuro-fuzzy systems with interpretable symbolic fuzzy rules (submissions should not include standalone symbolic XAI).
- Integration of monotonic rule-based systems with non-monotonic paradigms (e.g., expert systems with computational argumentation), resulting in explainable inferences.
- Neuro-symbolic planning with neural networks for environment understanding (e.g., STRIPS, PDDL); submissions should not include agentic AI or sequential decision-making.
- Benchmarks and validation methods for NeSy XAI.
- Human-in-the-loop and active learning from domain knowledge.
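
For illustration only (not part of the call itself), here is a minimal sketch of the STL-style temporal reasoning referenced in the first topic: quantitative robustness of two simple formulas over a discrete-time signal, plus a smooth approximation that can back-propagate through a model. The signal values, threshold, and temperature are illustrative assumptions.

```python
import numpy as np

def rob_always_gt(x, c):
    """Robustness of G (x > c): minimum margin; positive iff x stays above c."""
    return np.min(x - c)

def rob_eventually_gt(x, c):
    """Robustness of F (x > c): maximum margin; positive iff x exceeds c at least once."""
    return np.max(x - c)

def soft_rob_always_gt(x, c, temp=10.0):
    """Smooth lower bound on the min via -logsumexp(-temp*r)/temp, usable as a training loss."""
    r = x - c
    return -np.log(np.sum(np.exp(-temp * r))) / temp

signal = np.array([1.2, 1.1, 0.9, 1.3, 1.5])  # illustrative sensor readings
print(rob_always_gt(signal, 1.0))      # -0.1 -> "always above 1.0" is violated
print(rob_eventually_gt(signal, 1.0))  #  0.5 -> "eventually above 1.0" holds
print(soft_rob_always_gt(signal, 1.0)) # ~ -0.12, a smooth surrogate of the hard min
```

The sign and magnitude of the robustness value is what makes such models interpretable: it states which property failed and by how much.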
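Similarly, a minimal sketch of the soft-logic style behind the differentiable-logic topic, using a product t-norm conjunction and a Reichenbach implication over truth degrees in [0, 1] to turn a symbolic rule into a differentiable penalty. The rule and the predicted truth degrees are hypothetical, chosen only to illustrate the mechanism.

```python
import numpy as np

def soft_and(a, b):
    """Product t-norm: differentiable conjunction over truth degrees in [0, 1]."""
    return a * b

def soft_implies(a, b):
    """Reichenbach (probabilistic) implication: 1 - a + a*b."""
    return 1.0 - a + a * b

# Truth degrees a network might predict for a batch of three examples
# (hypothetical predicates for an industrial fault-detection rule).
high_temp  = np.array([0.90, 0.20, 0.80])
high_vib   = np.array([0.80, 0.10, 0.90])
fault_pred = np.array([0.95, 0.05, 0.30])

# Rule: high_temp AND high_vib -> fault, evaluated per example.
rule_truth = soft_implies(soft_and(high_temp, high_vib), fault_pred)

# Logic loss: distance from full satisfaction of the rule; adding it to the
# usual data loss nudges the network toward rule-consistent predictions.
logic_loss = np.mean(1.0 - rule_truth)
print(rule_truth)  # the third example violates the rule most strongly
print(logic_loss)
```

Because the logical structure stays explicit, satisfied and violated rules can be read off directly from the computed truth degrees, which is the kind of inherent interpretability this session targets.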




