QuantFormal

Formal Verification and Learning for Quantitative Systems
An FSTTCS 2025 pre-conference workshop
Date: December 15-16, 2025
Venue: Lecture Hall DLT10, BITS Goa

About the workshop

Quantitative systems—such as timed systems, weighted automata, hybrid automata, and probabilistic models—are fundamental in modeling and analyzing real-world systems where time, resource constraints, or probabilistic behavior are central. The design and verification of such systems demand rigorous formal methods to ensure correctness, efficiency, and robustness in the presence of quantitative parameters.

Formal reasoning techniques in this space have matured significantly over the past two decades. These include automata-theoretic methods for weighted and timed systems, logical foundations for hybrid and probabilistic systems, and algorithmic tools for synthesis and verification. The recent surge in interest around learning-based methods for system modeling and controller synthesis has further enriched the landscape. Moreover, quantitative verification is increasingly being used to evaluate and ensure the trustworthiness and safety of AI-driven systems, helping provide strong guarantees in applications where reliability is essential.

This workshop aims to bring together researchers and practitioners interested in the interplay between formal methods and quantitative models. By focusing on foundational issues, tool development, and emerging directions, this event will provide a platform for exchanging ideas, fostering collaboration, and showcasing advances in formal analysis of quantitative systems.

Topics of Interest

The workshop welcomes contributions from both theoretical and applied perspectives, including but not limited to:

Schedule

Monday, December 15

09:15 - 09:30   Opening Remarks
09:30 - 10:30   Invited talk 1 (Chair: S. Akshay)
10:30 - 11:00   Coffee break
11:00 - 12:00   Invited talk 2 (Chair: Shibashis Guha)
12:15 - 12:45   Short talk 1 (Chair: Shibashis Guha)
12:45 - 14:30   Lunch break
14:30 - 15:30   Invited talk 3 (Chair: Pascal Weil)
15:30 - 16:00   Coffee break
16:00 - 16:30   Short talk 2 (Chair: Rajarshi Ray)
16:30 - 17:00   Short talk 3 (Chair: Rajarshi Ray)

Tuesday, December 16

09:30 - 10:30   Invited talk 4 (Chair: C. Aiswarya)
10:30 - 11:00   Coffee break
11:00 - 12:00   Invited talk 5 (Chair: Supratik Chakraborty)
12:15 - 12:45   Short talk 4 (Chair: Kumar Madhukar)
12:45 - 14:30   Lunch break
14:30 - 15:30   Invited talk 6 (Chair: Bernd Finkbeiner)
15:30 - 16:00   Short talk 5 (Chair: Bernd Finkbeiner)
16:00 - 16:30   Coffee break

Invited talks

IIT Goa

Learning One-Counter Automata

Abstract

Active automata learning is a key technique in formal methods, particularly for model inference and verification. In the classical setting introduced by Angluin's L* algorithm, a learner interacts with a teacher through membership and equivalence queries to infer a minimal deterministic finite automaton (DFA) for an unknown regular language L. Membership queries check whether a word belongs to L, while equivalence queries check whether a proposed hypothesis DFA matches L, returning a counterexample if it does not. This approach requires only semantic information about L and runs in time polynomial in the size of the minimal DFA and the length of the longest counterexample.
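
As a rough illustration of this query protocol (not taken from the talk), the sketch below shows the two oracles an L*-style learner interacts with, here for the toy target language (ab)*; the function names are illustrative and not part of any real library.

    import itertools

    ALPHABET = "ab"

    def in_target(word: str) -> bool:
        """Membership query: is `word` in the target language (ab)* ?"""
        return len(word) % 2 == 0 and all(word[i] == "ab"[i % 2] for i in range(len(word)))

    def equivalence_query(hypothesis, max_len: int = 8):
        """Equivalence query, approximated here by bounded exhaustive testing:
        return a word on which `hypothesis` and the target disagree, or None."""
        for n in range(max_len + 1):
            for letters in itertools.product(ALPHABET, repeat=n):
                word = "".join(letters)
                if hypothesis(word) != in_target(word):
                    return word
        return None

    # A wrong hypothesis ("all words of even length") is refuted by a counterexample.
    print(equivalence_query(lambda w: len(w) % 2 == 0))    # prints 'aa'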

In this talk, we extend this framework to deterministic one-counter automata (DOCA). DOCAs are finite-state machines equipped with a non-negative integer counter that can be incremented, decremented, or reset. The presence of the counter allows DOCAs to recognise certain non-regular, context-free languages, making them a natural candidate for modelling program behaviours that arise in formal verification.
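
To make the model concrete, here is a toy deterministic one-counter acceptor (illustrative only, not an example from the talk) for the non-regular language { a^n b^n : n >= 1 }, which no finite automaton can recognise:

    def accepts(word: str) -> bool:
        """Accept exactly the words a^n b^n with n >= 1, using one counter."""
        counter = 0
        state = "reading_a"
        for ch in word:
            if state == "reading_a":
                if ch == "a":
                    counter += 1                 # increment on each 'a'
                elif ch == "b" and counter > 0:
                    counter -= 1                 # start matching 'b's
                    state = "reading_b"
                else:
                    return False
            else:                                # state == "reading_b"
                if ch == "b" and counter > 0:
                    counter -= 1
                else:
                    return False
        return state == "reading_b" and counter == 0

    print(accepts("aaabbb"), accepts("aabbb"))   # True False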

We first present an active learning algorithm for DOCAs, called OL*, which is the first polynomial-time algorithm for this problem. However, OL* does not scale well in practice. We then present a few practical algorithms for learning DOCAs; though exponential in the worst case, they scale better than OL* on data sets.

Finally, we will discuss approaches to, and challenges in, extending these results to weighted and probabilistic models.

Bio

Sreejith is an Associate Professor at IIT Goa. He is interested in theoretical computer science, especially in the areas of logic, automata theory, and computational complexity. Sreejith did his Ph.D. under Prof. Kamal Lodaya at the Institute of Mathematical Sciences, Chennai, in 2014. The thesis received an honorable mention for the ACM Doctoral Dissertation Award.

Slides: Download PDF
Singapore Management University

Neural Stochastic Control and Verification with Supermartingale Certificates

Abstract

Learning-based methods, such as reinforcement learning, are receiving increasing attention for solving challenging control tasks. However, the lack of safety assurances about learned controllers poses a significant barrier to their practical deployment. In this talk, we will present a framework for learning and formally verifying neural controllers. The framework is applicable to stochastic control systems, thus also accounting for environment uncertainty. Given a specification, the framework jointly learns and formally verifies a controller together with a formal certificate that the specification is satisfied, both parametrized as neural networks. Certificates are supermartingale-like objects that can be used effectively to reason formally about stochastic systems in a fully automated fashion. We will show how the framework can be applied to solve reachability, safety, reach-avoidance and stability tasks, and we will present a compositional framework that allows reasoning about a more expressive logic of specifications.
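
As a minimal sketch of the kind of certificate involved (one standard formulation, stated in our notation; the precise conditions in the talk may differ), a nonnegative function $V$ over states is a supermartingale certificate for safety when, for a bound $\lambda$ on its initial value and a level $\beta > \lambda$,

    V(x_0) \le \lambda, \qquad
    \mathbb{E}\big[\, V(x_{t+1}) \mid x_t = x \,\big] \le V(x) \ \text{ for every reachable safe state } x, \qquad
    V(x) \ge \beta \ \text{ for every unsafe state } x .

By Ville's inequality for nonnegative supermartingales, the probability of ever reaching an unsafe state is then at most $\lambda/\beta$. In the framework above, both the controller and the certificate are parametrized as neural networks and are learned and verified jointly.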

Bio

Djordje Zikelic is an assistant professor at the School of Computing and Information Systems, Singapore Management University. Prior to joining SMU, he obtained his PhD in computer science at the Institute of Science and Technology Austria (ISTA) in 2023, and bachelor’s and master’s degrees in mathematics at the University of Cambridge in 2018. His research focuses on formal verification and synthesis of software and AI systems and lies at the intersection of formal methods, trustworthy AI, and programming languages research. His work received the Distinguished Paper Award at FM 2024 and the Distinguished Tool Paper Award at ATVA 2025. For his PhD work, he received the Outstanding PhD Thesis award and the Outstanding Scientific Achievement award at ISTA.

Chalmers University of Technology

Runtime Monitoring under Uncertainty in Cyber‑Physical Systems

Abstract

Runtime monitors (RMs) are essential for ensuring the safety and reliability of cyber-physical systems (CPS) throughout their operational lifetime. They play a crucial role in detecting irregularities during execution and in assessing system performance. In modern autonomous CPS, RMs become even more important for ensuring safety, in particular when machine learning (ML) components are integrated whose behavior may be difficult to predict or verify formally. A central challenge in designing effective runtime monitors lies in achieving robustness to uncertainty in the information obtained from the environment. Monitors rely on sensor data, which is often noisy or incomplete, and since system dynamics are typically probabilistic, the true state of the environment may only be partially observable. Consequently, runtime monitors must account for these uncertainties when predicting potential safety violations.
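
As a rough illustration of monitoring under partial observability (a generic belief-update sketch, not the algorithms presented in the talk), a monitor can maintain a probability distribution over the hidden states of a Markov model and update it after every noisy observation:

    # Toy hidden-Markov monitor: states 0 = OK, 1 = ERROR; observations 0 = "normal", 1 = "alarm".
    T = [[0.95, 0.05],               # transition probabilities T[s][s']
         [0.20, 0.80]]
    O = [[0.90, 0.10],               # observation probabilities O[s][obs] (noisy sensor)
         [0.30, 0.70]]

    def update_belief(belief, obs):
        """One filtering step: belief'[s'] is proportional to O[s'][obs] * sum_s belief[s] * T[s][s']."""
        n = len(belief)
        new = [O[s2][obs] * sum(belief[s1] * T[s1][s2] for s1 in range(n)) for s2 in range(n)]
        total = sum(new)
        return [p / total for p in new]

    belief = [1.0, 0.0]              # the system is initially known to be OK
    for obs in [0, 1, 1]:            # an observed trace
        belief = update_belief(belief, obs)
    print(f"P(ERROR) = {belief[1]:.2f}")   # the monitor's risk estimate, about 0.74 here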

In this talk, we address the problem of runtime monitoring under uncertainty, focusing on settings where uncertainty is modeled using Markov models. We present recent results, including algorithmic approaches for monitoring in such probabilistic settings and methods for learning uncertainty models to enhance monitoring accuracy and robustness.

Bio

Hazem Torfah is a Tenure-Track Assistant Professor at Chalmers University of Technology in Gothenburg, Sweden, where he leads the Safe and Trustworthy Autonomous Reasoning Lab (starlab.systems). His research interests include the formal specification, verification, and synthesis of learning-enabled cyber-physical systems, with a focus on quantitative methods for verification and runtime assurance. Hazem's research is supported by the Wallenberg AI, Autonomous Systems, and Software Program (WASP). Before joining Chalmers, he was a postdoctoral researcher in the EECS Department at UC Berkeley. He earned his Ph.D. in Computer Science from Saarland University, Germany, in December 2019.

IIT Bombay

Expressive Timed Logics and Automata‑Based Verification of Real‑Time Systems

Abstract

Timed logics provide a foundational framework for specifying and verifying quantitative properties of real-time systems. This talk discusses recent progress on the expressive power, decidability, and verification algorithms for such logics, with an emphasis on metric temporal formalisms and their extensions. We examine the relationships between metric temporal logic (MTL), its past-time and Pnueli-modality variants, and freeze-quantifier extensions that enable binding and comparing time values. I will briefly talk about expressive equivalence and separation theorems for these formalisms. On the algorithmic front, I will touch upon an automata-based model-checking procedure for MITL with past and Pnueli modalities over finite and infinite timed words. The corresponding implementation is significantly faster than the state-of-the-art MightyL tool on instances involving only future modalities.
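
Two small example specifications, for readers unfamiliar with these formalisms (illustrative only, not taken from the talk):

    \Box\,(\mathit{req} \rightarrow \Diamond_{[0,5]}\,\mathit{grant})                      % MITL: every request is granted within 5 time units
    \Box\, x.\big(\mathit{alarm} \rightarrow \Diamond(\mathit{ack} \wedge x \le 10)\big)   % freeze quantifier: x records the time of the alarm

In the second formula, under the common convention that $x$ measures the time elapsed since it was frozen, every alarm must be acknowledged within 10 time units; this is the kind of binding and comparison of time values enabled by the freeze extensions mentioned above.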

Bio

Krishna is a faculty member in the Department of Computer Science and Engineering at IIT Bombay. Her main areas of interest are concurrency, automata theory, logic, and games, as well as quantitative verification.

Slides: Download PDF
Kyoto University

Specification‑Based Testing of Cyber‑Physical Systems via Black‑Box Checking

Abstract

Cyber-physical systems (CPS), which integrate computational control with physical processes, require high reliability because they are often deployed in safety-critical domains. Ensuring that such systems satisfy their formal specifications is challenging, since modern CPS often contain complex or opaque components, such as deep neural networks, whose internal structures are not fully accessible. Black-box checking (BBC) is a workflow for systematically testing black-box systems: the system under test is approximated as an automaton via active automata learning, and the learned automaton is then verified by model checking. This talk presents how BBC can be used for specification-based testing of CPS, several techniques for improving its efficiency, and FalCAuN, a Java toolkit for applying BBC to CPS. FalCAuN is publicly available at https://github.com/MasWag/FalCAuN. An extension of BBC to probabilistic systems will also be discussed.
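
A minimal sketch of the BBC loop just described (our simplification; the function names below are placeholders, not the FalCAuN API):

    def black_box_check(learn_hypothesis, model_check, violates_on_system, max_rounds=100):
        """Alternate active automata learning and model checking until either a genuine
        violation is found on the real system or the learned hypothesis satisfies the spec."""
        samples = []                                   # traces collected so far
        for _ in range(max_rounds):
            hypothesis = learn_hypothesis(samples)     # active learning (e.g. an L*-style learner)
            cex = model_check(hypothesis)              # None if the hypothesis satisfies the spec
            if cex is None:                            # (in practice, further conformance testing follows)
                return "no violation found", hypothesis
            if violates_on_system(cex):                # replay the counterexample on the real system
                return "violation found", cex
            samples.append(cex)                        # spurious: refine the hypothesis and repeat
        return "inconclusive", None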

Bio

Masaki Waga, Ph.D., is an Assistant Professor at the Graduate School of Informatics, Kyoto University, Japan. He received his Ph.D. in Informatics from the Graduate University for Advanced Studies (SOKENDAI), Tokyo, under the supervision of Prof. Ichiro Hasuo, earning the Dean's Award in 2020 for his doctoral research. His research focuses on formal methods for cyber-physical and AI systems, spanning automata theory, runtime verification, testing, and model checking. He is a recipient of Best Paper Awards at ICECCS 2018 (with Étienne André and Ichiro Hasuo) and FORMATS 2019 (the Oded Maler Award), and has held JST research fellowships including ACT-X (2020–2023), PRESTO (2022–2026), and BOOST (2025–2030).

Slides: Download PDF
CNRS & ENS Paris-Saclay

Weighted first-order logic, weighted aperiodic automata, and more

Abstract

The goal is to lift to the quantitative setting the famous equivalences between

  1. Aperiodic languages (AP)
  2. Star-free languages (SF)
  3. Languages definable in first-order logic (FO)
  4. Languages definable in linear temporal logic (LTL)

In this talk, we define weighted first-order logic (wFO) and aperiodic weighted automata (wA). We give examples showing that, surprisingly, aperiodic wA are more expressive than wFO over various classical semirings (natural numbers with (+,×) operations, or (min,+) operations, or (max,+) operations). To recover equivalence with wFO we have to restrict to polynomially ambiguous aperiodic wA. We also study fragments with lower ambiguity. Then, we introduce a weighted extension of linear temporal logic (wLTL). We discuss some of its properties and explain the difficulties that arise when trying to extend Kamp's theorem (FO=LTL) to the weighted setting.
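
As a small self-contained illustration of weighted automata over the (min,+) semiring (our example, not one from the talk), the automaton sketched below computes min(#a, #b), the smaller of the numbers of a's and b's in a word; the value of a word is the minimum over all runs of the sum of the transition weights, computed here by matrix-vector products over the tropical semiring.

    # Illustrative (min,+) weighted automaton with two states:
    # state 0 charges 1 for each 'b', state 1 charges 1 for each 'a'.
    INF = float("inf")

    WEIGHTS = {                      # WEIGHTS[letter][p][q] = weight of p --letter--> q
        "a": [[0.0, INF], [INF, 1.0]],
        "b": [[1.0, INF], [INF, 0.0]],
    }
    INITIAL = [0.0, 0.0]             # both states may start at no extra cost
    FINAL = [0.0, 0.0]               # both states may accept at no extra cost

    def value(word: str) -> float:
        """Weight of `word`: minimum over runs of the sum of transition weights."""
        vec = INITIAL[:]
        for ch in word:
            m = WEIGHTS[ch]
            vec = [min(vec[p] + m[p][q] for p in range(2)) for q in range(2)]
        return min(vec[q] + FINAL[q] for q in range(2))

    print(value("aabab"))            # min(#a, #b) = min(3, 2) -> 2.0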

Bio

Paul Gastin is an Emeritus Professor of Computer Science at École Normale Supérieure Paris-Saclay (Univ. Paris-Saclay), France. His research domain is the verification and control of safety-critical systems, with a focus on concurrent systems, real-time systems, and quantitative systems. He develops formal methods (models, logics, algorithms, and tools) for these systems. He is a member of the Formal Methods Lab (LMF, CNRS and Univ. Paris-Saclay) and of the Indo-French International Research Lab in Computer Science (ReLaX, IRL CNRS). He is a long-term visiting faculty member at IIT Bombay.

Short talks

TIFR Mumbai

Synthesizing POMDP Policies: Sampling Meets Model-checking via Learning

Abstract

Partially Observable Markov Decision Processes (POMDPs) are a central model for decision-making under uncertainty. While sampling-based techniques achieve scalability in solving them, they provide no formal correctness guarantees and are thus less suitable for safety-critical settings. Conversely, formal synthesis methods offer correctness by construction but typically do not scale to realistic POMDPs. To combine the strengths of these two approaches, we propose a synthesis framework that integrates sampling, automata learning, and model checking. Inspired by Angluin's L* algorithm, our approach uses sampling as a membership oracle and model checking as an equivalence oracle, enabling the synthesis of finite-state controllers with formal guarantees whenever the sampling-induced policy is regular. We establish a relative completeness result and demonstrate that our method effectively solves threshold-safety problems.
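
Stated formally (our notation, not necessarily the paper's): given a POMDP $M$, a set $B$ of bad states, and a threshold $\lambda \in [0,1]$, the threshold-safety problem asks for a finite-state controller $\sigma$ such that

    \Pr^{\sigma}_{M}\big[\, \Box\, \neg B \,\big] \;\ge\; \lambda .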

This talk is based on joint work (currently under submission) with Debraj Chakraborty (Nanyang Technological University, Singapore), Sayan Mukherjee (INRIA Rennes, France), and Jean-François Raskin (Université Libre de Bruxelles, Belgium).

Bio

Anirban Majumdar is currently a postdoctoral researcher at TIFR Mumbai. Earlier, he was a postdoctoral researcher in the Verification group at Université Libre de Bruxelles, working with Jean-François Raskin. He obtained his PhD from ENS Paris-Saclay under the supervision of Patricia Bouyer-Decitre and Nathalie Bertrand. His research interests include synthesis of reactive systems, automata learning, and distributed systems.

Slides: Download PDF

Institute of Mathematical Sciences

Model Checking for Real-time Systems using Generalized Timed Automata

Abstract

The model checking problem for real-time systems asks whether a system (modelled as a network of automata) satisfies a logical specification. Metric Interval Temporal Logic (MITL) is one of the most widely used formalisms for expressing real-time properties, and a central challenge lies in obtaining efficient translations from MITL formulas to automata. Recently, we introduced Generalized Timed Automata (GTAs), a unified model that incorporates features of several established automata formalisms for real-time systems such as timed automata, event-clock automata, and automata with timers. Despite these powerful features, GTAs still support zone-based reachability and liveness with the same complexity as in classical timed automata.
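
For readers unfamiliar with zone-based analysis, a zone is a set of clock valuations described by difference constraints, for instance (an illustrative zone over two clocks $x$ and $y$, not taken from the talk)

    Z \;=\; \{\, v \mid 1 \le v(x) \le 3 \ \wedge\ v(x) - v(y) \le 2 \,\} .

Such sets can be represented by difference bound matrices and manipulated symbolically during reachability analysis; as noted above, GTAs retain this style of zone-based analysis.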

In this talk, we present a new, direct, and concise translation from MITL to GTAs. A key feature of our translation is that the resulting automata are largely deterministic: the past fragment of MITL translates in linear time into synchronous networks of deterministic timed automata, while determinism is also preserved under certain top-level future modalities. Moreover, for the timed until modality, our translation offers an exponential improvement over the state of the art. We implement our translation in a prototype tool that supports both satisfiability checking and model checking over finite and infinite timed words, and experimental evaluation demonstrates that it significantly outperforms the current state-of-the-art tool.

Bio

Govind is a faculty member at the Institute of Mathematical Sciences (IMSc), Chennai. Before joining IMSc, he was a postdoctoral researcher in the Algorithmic Verification group of the Department of Information Technology at Uppsala University, and at IIT Bombay. He obtained a joint PhD degree from LaBRI, Université de Bordeaux, and Chennai Mathematical Institute (CMI). His research is in formal methods; in particular, he is interested in automata theory, logic, and automated synthesis, and their applications in formal verification.

Max Planck Institute for Software Systems (MPI-SWS)

LP-Based Weighted Model Integration over Non-Linear Real Arithmetic

Abstract

Weighted Model Integration (WMI) has emerged as a powerful unifying framework for probabilistic inference, allowing complex probabilistic reasoning tasks to be reduced to the integration of weight functions over regions defined by logical constraints. While existing WMI methods provide strong support for linear and polynomial weight functions, many real-world applications naturally give rise to rational or radical functions, for which current tools offer limited applicability and often no formal guarantees.
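
As a rough sketch of the quantity being computed (our notation; Boolean variables are omitted for simplicity): given an SMT formula $\varphi$ over real variables $x$ and a weight function $w$, the weighted model integral is

    \mathrm{WMI}(\varphi, w) \;=\; \int_{\{x \,:\, x \models \varphi\}} w(x)\, \mathrm{d}x ,

and many probabilistic inference queries reduce to ratios of such integrals; the work described next targets the case where $w$ may be rational or radical rather than polynomial.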

In this talk, I will present our recent work that significantly broadens the expressive reach of WMI by introducing a novel approximation algorithm that handles general semi-algebraic weight functions, including rational and radical forms. Our method leverages classical results from real algebraic geometry, like Farkas’ lemma and Handelman’s theorem, to reduce WMI queries to a sequence of linear programming problems, yielding an approximation with a formally controlled error bound that can be made arbitrarily small.

To illustrate the necessity of this extended expressiveness, I will discuss a simple example of Lévy flights used in describing asset price dynamics. Even in this simple probabilistic program, the inference queries naturally involve rational functions, highlighting the limitations of current polynomial-only methods and emphasizing the importance of our generalization.

This talk is based on our paper accepted at IJCAI 2025, which is joint work with S. Akshay, Supratik Chakraborty, Soroush Farokhnia, Amir Goharshady, and Đorđe Žikelić.

Bio

Harshit Jitendra Motwani completed his bachelor’s and master’s degrees in Mathematics and Computing at IIT Kharagpur, graduating in 2020. He received the Best Master’s Thesis Award and the Institute Silver Medal for achieving the highest GPA in his graduating batch. He then pursued a Ph.D. in Applied Algebraic Geometry at Ghent University, Belgium, where his doctoral work focused on developing algebro-geometric algorithms for program synthesis, tensor networks, and conditional independence models. During his Ph.D., he was awarded both the BOF PhD Fellowship and the FWO Sofina-Boël Fellowship.

After completing his Ph.D. in 2023, he joined the Department of Computer Science and Engineering at HKUST, Hong Kong, as a postdoctoral researcher supported by the HKUST Postdoctoral Matching Fund, before moving to the Max Planck Institute for Software Systems (MPI-SWS) as a postdoctoral researcher. His current work explores the application of algebraic geometry to formal verification, with a particular emphasis on the verification of probabilistic programs. In addition, he has been developing parameterized algorithms for applications in computational chemistry and quantum physics.

Namrita Varshney
IIT Bombay

Towards trustworthy AI

Abstract

Machine-learning models are increasingly being deployed in safety-critical and high-stakes environments, such as healthcare, finance, transportation, and legal decision-making, where even small errors can lead to serious real-world consequences. Ensuring the reliability and trustworthiness of these models is therefore essential to guarantee consistent and fair behavior.

In this talk, we present a data-aware and scalable framework for sensitivity analysis, which enables us to understand how model predictions change when specific input features vary, and to generate meaningful counterexamples. We then shift our focus to glitches, a newly identified class of output inconsistencies that reveal abrupt and unexpected behavior in ML models. Together, these methods enable us to reason formally about reliability in machine-learning systems and contribute to the pursuit of trustworthy AI.
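
One generic way to make the sensitivity question precise (our formulation, not necessarily the one used in this work): given a model $f$, an input $x$, a set $S$ of features allowed to vary, and a bound $\delta$, ask whether

    \exists\, x'.\ \big(\forall i \notin S:\ x'_i = x_i\big) \wedge \big(\forall i \in S:\ |x'_i - x_i| \le \delta\big) \wedge f(x') \ne f(x) ,

where any witness $x'$ is a counterexample of the kind mentioned above.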

Bio

Namrita Varshney is a PhD student in Computer Science at IIT Bombay, supervised by Prof. Ashutosh Gupta and Prof. Krishna S. Her research focuses on the formal verification and reliability of machine-learning models.

Vaishnavi Vishwanath
Chennai Mathematical Institute

Model-checking real-time systems: revisiting the alternating automaton route

Abstract

Alternating Timed Automata (ATA) are an extension of timed automata where the effect of each transition is described by a positive boolean combination of (state-reset) pairs. Unlike classical timed automata, ATAs are closed under complementation and are therefore better suited for logic-to-automata translations. Several timed logics can be converted to equivalent 1-clock ATA (1-ATAs). But currently, model-checking tools do not use 1-ATAs directly for analysis, and instead use sophisticated conversions of logical fragments to timed automata. A reason for this could be that zone-based methods do not exist for 1-ATAs, unlike those extensively studied for timed automata.
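
To make the model concrete, here is a sample 1-ATA transition over the single clock $x$ (illustrative only, not taken from the talk):

    \delta(q, a) \;=\; \big(x < 1 \wedge q_1\big) \ \vee\ \big(x.q_2 \wedge q_3\big) ,

read as: on letter $a$ in state $q$, either continue in $q_1$ provided $x < 1$, or spawn two copies of the automaton, one in $q_2$ with the clock reset (written $x.q_2$) and one in $q_3$; all spawned copies must accept.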

In this talk, we present a direct zone-based algorithm for model-checking timed automata models against 1-ATA specifications. A key ingredient in zone-based algorithms is an entailment relation between zones that guarantees termination of the algorithm. We study this entailment relation closely, prove NP-hardness in general, and identify subclasses of models and specifications that result in a polynomial-time complexity.

This is joint work with Patricia Bouyer and B. Srivathsan.

Bio

Vaishnavi Vishwanath is a PhD student at Chennai Mathematical Institute working under B. Srivathsan and Patricia Bouyer. Her research is in formal verification, specifically in timed automata and related models like alternating timed automata and weighted timed automata.

Slides: Download PDF

Registration

Please register for the workshop at the following link:

https://guptasid.bitbucket.io/fsttcs2025/registration.html

Organizers

S. Akshay
IIT Bombay
Shibashis Guha
TIFR Mumbai
Suguman Bansal
Georgia Tech