
Explainable AI (XAI) for time series data is an emerging yet rapidly growing research area that seeks to make deep learning models more transparent in domains such as healthcare monitoring, finance, industry, and environmental systems. Unlike images or text, time series data are characterized by temporal dependencies, irregular sampling, and multivariate interactions, making traditional explanation methods, originally developed for tabular or visual data, only partially applicable. Current approaches to time series explainability can be characterized along several complementary dimensions: scope, integration, model dependence, and granularity. Scope distinguishes local (instance-level) from global (model-level) explanations; model dependence separates model-specific from model-agnostic methods; and integration distinguishes ante-hoc from post-hoc approaches.
Post-hoc attribution methods such as saliency maps, LIME, and SHAP variants have been adapted to highlight influential time points or sensor channels. Ante-hoc approaches integrate interpretability directly into the architecture, for instance through attention mechanisms, temporal decomposition, or concept-based layers that associate learned features with human-understandable patterns. Granularity is a multi-faceted dimension describing the level of detail at which attributions are given: individual time points, channels, frequencies, concepts, or whole instances. Recent work integrates these characteristics, for example by attributing importance in the frequency domain (e.g., DFT-LRP) or by providing counterfactual and instance-based explanations that let users explore how small changes to a sequence affect model predictions, offering intuitive "what-if" insights.
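To make the perturbation-based family of post-hoc attribution methods concrete, here is a minimal sketch (the toy model and all names are illustrative, not taken from any specific method): each time step is occluded with a baseline value, and the resulting drop in the model's score is taken as that step's attribution. The choice of baseline is exactly the kind of "meaningful perturbation" problem discussed below.

```python
import numpy as np

def occlusion_attribution(model, x, baseline=0.0):
    """Perturbation-based attribution for a univariate time series.

    Replaces one time step at a time with a baseline value and records
    how much the model's score changes; larger drops indicate more
    important time steps.
    """
    base_score = model(x)
    attributions = np.zeros_like(x)
    for t in range(len(x)):
        x_pert = x.copy()
        x_pert[t] = baseline          # occlude a single time step
        attributions[t] = base_score - model(x_pert)
    return attributions

# Toy "model": scores a series mainly by the value at time step 3.
def toy_model(x):
    return 2.0 * x[3] + 0.1 * x.mean()

x = np.array([0.0, 0.0, 0.0, 5.0, 0.0])
attr = occlusion_attribution(toy_model, x)
print(attr)  # step 3 receives by far the largest attribution
```

Note that replacing a value with a constant baseline can itself distort temporal structure (e.g., break trends or periodicity), which is one reason such adaptations to time series remain non-trivial.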
Despite promising progress, several challenges remain. The sequential nature of time series data makes it difficult to define meaningful perturbations or reference points without distorting temporal structure. Temporal dependencies can obscure causal attribution, as the importance of one point in time often depends on its historical or future context. Furthermore, systematic evaluation of explanation quality is still lacking — metrics for faithfulness, stability, and human interpretability are far less mature than in vision or text domains.
Recent trends point toward domain-aware and concept-level explanations that go beyond identifying important time-steps to describing interpretable patterns, motifs, or regimes that drive model behavior. Moreover, different predictive tasks — such as classification, anomaly detection, and forecasting — pose distinct explanatory and temporal requirements that are still insufficiently addressed. There is also a growing interest in ante-hoc explainable architectures that integrate interpretability from the outset, and in human-centered evaluation that assesses whether explanations support expert decision-making.
In summary, XAI for time series data has progressed from adapting general-purpose methods to developing task-specific, domain-aware solutions. However, a unified methodological framework and standardized evaluation protocols are still missing. The field now stands at a pivotal stage — ready to move from adaptation to principled design of trustworthy, interpretable, and context-sensitive models for temporal data.
Keywords: Time series, ante-hoc and post-hoc explanations, domain-aware explanations, evaluation and benchmarking, human-centered evaluation
Topics
- Novel XAI methods specifically targeted at or applicable to time series data.
- Novel evaluation methods or approaches targeting time series XAI methods or models.
- Evaluation of explanatory performance of XAI methods applied to time series tasks such as classification, anomaly detection or forecasting.
- Evaluation benchmarks, frameworks, datasets, or contributions to existing benchmarks (e.g., Quantus and OpenXAI).
- Applications of XAI to time series models, e.g., in biomedical or industrial signal analysis, for classification, anomaly detection or forecasting.
- Application of XAI with respect to different data modalities, such as multivariate time series.
- Applications of intrinsically interpretable models, e.g., for biosignals such as ProtoEEG-kNN.
- XAI for time series and their interpretation, including user studies.
- Counterfactual and example-based explanations for time series models.







Submit an article
Submitted manuscripts must be novel and not substantially duplicate existing work. Manuscripts must be written in Springer's Lecture Notes in Computer Science (LNCS) format, provided here. LaTeX and Word files are accepted; however, the former is preferred. All submissions and reviews will be handled electronically. The conference has a no-dual-submission policy, so submitted manuscripts must not be under review at another publication venue.
Articles must be submitted via the EasyChair platform here.
While registering on the platform, the contact author must provide the following information: paper title, all author names, affiliations, postal address, e-mail address, and at least three keywords.
The conference does not impose a strict page count, as we believe authors have different writing styles and ways of presenting scientific material. However, the following types of articles are admitted:
| full articles | between 14 and 24 pages (including references) |
| short articles | between 10 and 14 pages (including references) |
Full articles should report on original and substantial contributions of lasting value, and the work should concern the theory and/or practice of Explainable Artificial Intelligence (xAI). Moreover, manuscripts showcasing the innovative use of xAI methods, techniques, and approaches and exploring the benefits and challenges of applying xAI-based technology in real-life applications and contexts are welcome. Evaluations of proposed solutions and applications should be commensurate with the claims made in the article. Full articles should reflect more complex innovations or studies and have a more thorough discussion of related work. Research procedures and technical methods should be presented sufficiently to ensure scrutiny and reproducibility. We recognise that user data may be proprietary or confidential; therefore, we encourage sharing (anonymized, cleaned) data sets, data collection procedures, and code. Results and findings should be communicated clearly, and implications of the contributions for xAI as a field and beyond should be explicitly discussed.
Shorter articles should generally report on advances that can be described, set into context, and evaluated concisely. These articles are not 'work-in-progress' reports but complete studies of smaller scope that are simple to describe. For these articles, the discussion of related work and contextualisation in the wider body of knowledge can be briefer than in full articles.
Appendixes and supplemental material
Appendices and supplemental material must be included within the article and count toward the maximum page limits given above.
Special session articles
Articles submitted to special sessions follow the submission procedure of the main track and must be submitted via EasyChair, as mentioned above. The types of articles admitted are full and short, as described above. Authors of an article associated with a special session must select the name of that special session in the list of topics in EasyChair, along with other relevant topics.
Authors commit to reviewing
By submitting to the conference, each senior manuscript author (holding at least a PhD) volunteers to be added to the pool of potential PC members/reviewers for the conference and may be asked to review manuscripts. This does not apply to authors who have already agreed to contribute to the conference in some capacity (e.g., as PC/SPC members of the main conference or special tracks, area chairs, or members of the organising committee) and authors who are not qualified to be on the programme committee.
Ethical & Human Subjects Considerations
The conference organisers expect authors to discuss the ethical considerations and the impact of the presented work and/or its intended application, where appropriate. Additionally, all authors must comply with the ethical standards and regulatory guidelines associated with human subjects research, including the use of personally identifiable data and research involving human participants. Manuscripts reporting on human subjects research must include a statement identifying any regulatory review the research is subject to (and identifying the form of approval provided) or explaining the lack of required review.
Submission and publication of multiple articles
Each author is limited to a combined maximum of 4 submissions to the main conference track, and authors may not be added or deleted from papers following submission.
Use of Generative AI
Generative AI models, such as LLMs (e.g., ChatGPT, Bard, LLaMA), do not meet the criteria for authorship of scientific manuscripts submitted to and published by the conference. If authors use any of these tools while writing their manuscript, they assume full responsibility for all content, including verifying its correctness and checking any part of their work for plagiarism. If text generated by such models is itself the subject of scientific inquiry as part of the manuscript's methodology or analysis, it must be adequately described, documented, and made explicit in the paper.
Important dates
*All dates are Anywhere on Earth time (AoE)
Articles (main track & special sessions)
| Authors/title registration on submission platform (EasyChair)* (remains open until the paper submission deadline below): | January 15, 2026 |
| Article upload deadline on submission platform (EasyChair)*: | February 1, 2026 |
| Paper bidding for reviewers: | February 2-4, 2026 |
| Review submission deadline for reviewers: | February 20, 2026 |
| Notification of acceptance*: | February 22, 2026 |
| Registration (payment) and camera-ready* (upload to EasyChair): | February 28, 2026 |
| Article presentation instructions notification: | June 2026 |
| Publication (Springer CCIS series): | September/October 2026 |
Late-breaking work & demos
| Late-breaking work & demo author/title/abstract registration opens on submission platform (EasyChair): | March 1, 2026 |
| Late-breaking work & demo article upload deadline on submission platform (EasyChair): | March 7, 2026 |
| Notification of acceptance: | March 31, 2026 |
| Registration (payment) & late-breaking work & demo camera-ready (upload to EasyChair): | April 10, 2026 |
| Late-breaking work & demo presentation instructions notification: | June 2026 |
| Publication (planned with CEUR-WS.org*): | September/October 2026 |
Doctoral consortium (DC) proposals
| DC proposal author/title registration opens on submission platform (EasyChair): | March 1, 2026 |
| DC proposal upload deadline on submission platform (EasyChair): | March 7, 2026 |
| Notification of acceptance: | March 31, 2026 |
| Registration (payment): | April 10, 2026 |
| DC presentation and meeting instructions notification: | June 2026 |
| Publication (planned with CEUR-WS.org*): | September/October 2026 |
Special session proposals
| Proposal submission (contact): | November 15, 2025 |
| Notification of acceptance & final instructions: | November 30, 2025 |



