About the workshop
Quantitative systems—such as timed systems, weighted automata, hybrid automata, and probabilistic models—are fundamental in modeling and analyzing real-world systems where time, resource constraints, or probabilistic behavior are central. The design and verification of such systems demand rigorous formal methods to ensure correctness, efficiency, and robustness in the presence of quantitative parameters.
Formal reasoning techniques in this space have matured significantly over the past two decades. These include automata-theoretic methods for weighted and timed systems, logical foundations for hybrid and probabilistic systems, and algorithmic tools for synthesis and verification. The recent surge of interest in learning-based methods for system modeling and controller synthesis has further enriched the landscape. Moreover, quantitative verification is increasingly being used to evaluate and ensure the trustworthiness and safety of AI-driven systems, helping provide strong guarantees in applications where reliability is essential.
This workshop aims to bring together researchers and practitioners interested in the interplay between formal methods and quantitative models. By focusing on foundational issues, tool development, and emerging directions, this event will provide a platform for exchanging ideas, fostering collaboration, and showcasing advances in formal analysis of quantitative systems.
Schedule
TBA
Topics of Interest
The workshop welcomes contributions from both theoretical and applied perspectives, including but not limited to:
- Formal verification for timed, hybrid, and probabilistic models such as timed automata, hybrid automata, weighted automata, and Markov decision processes
- Active and passive learning of models
- Reinforcement learning in formal verification
- Logics for quantitative systems (e.g., quantitative temporal logics, probabilistic computation tree logic)
- Automata-theoretic and game-theoretic approaches to quantitative reasoning
- Applications and case studies involving tools for verification or synthesis of quantitative systems
- Benchmarks, challenge problems, and experience reports
- Applications of quantitative verification to safe and trustworthy AI
Confirmed Speakers
Neural Stochastic Control and Verification with Supermartingale Certificates
Abstract
Learning-based methods, such as reinforcement learning, are receiving increasing attention for solving challenging control tasks. However, the lack of safety assurances about learned controllers poses a significant barrier to their practical deployment. In this talk, we will present a framework for learning and formally verifying neural controllers. The framework is applicable to stochastic control systems, thus also accounting for environment uncertainty. Given a specification, the framework jointly learns and formally verifies a controller together with a formal certificate that the specification is satisfied, both parametrized as neural networks. Certificates are supermartingale-like objects that can be used effectively to reason about stochastic systems in a fully automated fashion. We will show how the framework can be applied to solve reachability, safety, reach-avoidance, and stability tasks, and will also present a compositional framework that allows reasoning about a more expressive logic of specifications.
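As background on the kind of certificate involved (an illustration of one standard formulation, not drawn from the talk itself): a ranking supermartingale for almost-sure reachability is a nonnegative function \(V\) whose expected value strictly decreases outside the target set \(T\):

```latex
\[
  V(x) \;\ge\; 0 \quad \text{for all } x,
  \qquad
  \mathbb{E}_{\omega}\!\left[ V\bigl(f(x, \pi(x), \omega)\bigr) \right]
  \;\le\; V(x) - \varepsilon
  \quad \text{for all } x \notin T,
\]
```

where \(f\) is the stochastic transition function, \(\pi\) the controller, \(\omega\) the environment disturbance, and \(\varepsilon > 0\) a fixed decrease. These two conditions imply that \(T\) is reached with probability 1; in the neural setting, both \(\pi\) and \(V\) are parametrized as networks and the conditions are checked by a formal verifier.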
Specification-Based Testing of Cyber-Physical Systems via Black-Box Checking
Abstract
Cyber-physical systems (CPS), which integrate computational control with physical processes, require high reliability because they are often deployed in safety-critical domains. Ensuring that such systems satisfy their formal specifications is challenging, since modern CPS often contain complex or opaque components, such as deep neural networks, whose internal structure is not fully accessible. Black-box checking (BBC) is a workflow for systematically testing black-box systems: the system under test is approximated as an automaton via active automata learning, and the learned automaton is then verified by model checking. This talk presents how to utilize BBC for specification-based testing of CPS, several techniques for improving its efficiency, and FalCAuN, a Java toolkit for applying BBC to CPS. FalCAuN is publicly available at https://github.com/MasWag/FalCAuN. An extension of BBC to probabilistic systems will also be discussed.
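As a toy illustration of the BBC loop described above (a minimal sketch under simplifying assumptions: the black-box system, alphabet, and property here are invented for the example, and real tools such as FalCAuN target timed and quantitative models rather than plain DFAs):

```python
from collections import deque
from itertools import product

ALPHABET = ("a", "b")

def sut_accepts(word):
    # Stand-in for an opaque system under test: accepts exactly the
    # words containing "aa". In BBC we only query it, never inspect it.
    return "aa" in word

def learn_dfa(membership, alphabet, max_cex_len=6):
    """L* variant with suffix-closed E, so only table closedness is needed."""
    S, E = [""], [""]
    row = lambda s: tuple(membership(s + e) for e in E)
    while True:
        while True:  # close the observation table
            rows = {row(s) for s in S}
            new = next((s + a for s in S for a in alphabet
                        if row(s + a) not in rows), None)
            if new is None:
                break
            S.append(new)
        reps = {}
        for s in S:  # one representative access string per distinct row
            reps.setdefault(row(s), s)
        init = row("")
        delta = {(r, a): row(s + a) for r, s in reps.items() for a in alphabet}
        accepting = {r for r in reps if r[0]}  # because E[0] == ""
        hyp = (init, delta, accepting)
        cex = find_cex(hyp, membership, alphabet, max_cex_len)
        if cex is None:
            return hyp
        for i in range(len(cex) + 1):  # add all suffixes of the cex to E
            if cex[i:] not in E:
                E.append(cex[i:])

def dfa_accepts(hyp, word):
    init, delta, accepting = hyp
    q = init
    for a in word:
        q = delta[(q, a)]
    return q in accepting

def find_cex(hyp, membership, alphabet, max_len):
    # Approximate equivalence query: bounded exhaustive conformance testing.
    for n in range(max_len + 1):
        for tup in product(alphabet, repeat=n):
            w = "".join(tup)
            if dfa_accepts(hyp, w) != membership(w):
                return w
    return None

def model_check(hyp):
    """Check the spec "every accepted word contains 'aa'" on the hypothesis,
    via BFS over its product with a 3-state monitor for "contains 'aa'"."""
    init, delta, accepting = hyp
    def mon(m, a):  # 0: no trailing 'a', 1: one trailing 'a', 2: saw "aa"
        return 2 if m == 2 or (m == 1 and a == "a") else (1 if a == "a" else 0)
    seen, queue = {(init, 0)}, deque([((init, 0), "")])
    while queue:
        (q, m), w = queue.popleft()
        if q in accepting and m != 2:
            return w  # spec violated on the hypothesis
        for a in ALPHABET:
            nxt = (delta[(q, a)], mon(m, a))
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, w + a))
    return None  # the hypothesis satisfies the spec

hyp = learn_dfa(sut_accepts, ALPHABET)
print("spec violation on hypothesis:", model_check(hyp))
```

In the full BBC loop, a counterexample returned by the model checker would first be replayed on the real system: if the system also violates the specification, a genuine bug has been found; otherwise the counterexample is spurious and is fed back to the learner to refine the hypothesis.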
Bio
Masaki Waga, Ph.D., is an Assistant Professor at the Graduate School of Informatics, Kyoto University, Japan. He received his Ph.D. in Informatics from the Graduate University for Advanced Studies (SOKENDAI), Tokyo, under the supervision of Prof. Ichiro Hasuo, earning the Dean's Award in 2020 for his doctoral research. His research focuses on formal methods for cyber-physical and AI systems, spanning automata theory, runtime verification, testing, and model checking. He is a recipient of Best Paper Awards at ICECCS 2018 (with Étienne André and Ichiro Hasuo) and FORMATS 2019 (the Oded Maler Award), and has held JST research fellowships including ACT-X (2020–2023), PRESTO (2022–2026), and BOOST (2025–2030).
More speakers to be added...
Registration
TBA