FMAS 2023 is a two-day peer-reviewed international workshop that brings together researchers working on a range of techniques for the formal verification of autonomous systems, to present recent work in the area, discuss key challenges, and stimulate collaboration between autonomous systems and formal methods researchers. Previous editions are listed on DBLP:

Our Twitter account posts about this year’s workshop using the tag #FMAS2023

Autonomous systems present unique challenges for formal methods. They are often embodied in robotic systems that can interact with the real world, and they make independent decisions. Amongst other categories, they can be viewed as safety-critical, cyber-physical, hybrid, and real-time systems.

Key challenges for applying formal methods to autonomous systems include:

  • the system’s dynamic deployment environment;
  • verifying the system’s decision making capabilities – including planning, ethical, and reconfiguration choices; and
  • using formal methods results as evidence given to certification or regulatory organisations.

FMAS welcomes submissions that use formal methods to specify, model, or verify autonomous systems, in whole or in part. We are especially interested in work using integrated formal methods, where multiple (formal or non-formal) methods are combined during the software engineering process. We encourage submissions that advance the applicability of formal methods for autonomous systems, for example by improving the integration, explainability, automation, or knowledge transfer of these techniques; a wider discussion of these principles can be found in ‘A Manifesto for Applicable Formal Methods’.

Autonomous systems are often embedded in robotic or cyber-physical systems, and they share many features (and verification challenges) with automated systems. FMAS welcomes submissions with applications to:

  • automated systems,
  • semi-autonomous systems, or
  • fully-autonomous systems.


Topics of interest include (but are not limited to):

  • Applicable, tool-supported Formal Methods that are suited to Autonomous Systems,
  • Runtime Verification or other formal approaches to deal with the reality gap (the gap between models/simulations and the real world),
  • Verification against safety assurance arguments or standards documents,
  • Formal specification and requirements engineering approaches for autonomous systems,
  • Case Studies that identify challenges when applying formal methods to autonomous systems,
  • Experience Reports that provide guidance for tackling challenges with formal methods or tools, or
  • Discussions of the future directions of the field.

Because the above list is not exhaustive, if you are unsure if your paper is in scope for FMAS please feel free to email us (details below) to discuss it.


The workshop’s proceedings are now available via EPTCS.

Submission and Publication

There are four categories of submission:

  • Vision papers (6 pages, excluding references) describe directions for research into Formal Methods for Autonomous Systems;
  • Research previews (6 pages, excluding references) describe well-defined research ideas that are in their early stages and may not be fully developed yet. Work from PhD students is particularly welcome;
  • Experience report papers (15 pages, excluding references) report on practical experiences in applying Formal Methods to Autonomous Systems, focussing on the experience and the lessons to be learnt;
  • Regular papers (15 pages, excluding references) describe completed applications of Formal Methods to an Autonomous System, new or improved approaches, evaluations of existing approaches, and so on.

You will need to select one of the four submission categories during submission.

These categories are intended to help you signal your intent for your paper, and to allow a fairer comparison of papers. For example, a Research Preview won’t be judged as too underdeveloped for acceptance simply because it is being compared to a Regular Paper. The category descriptions are not exhaustive and should be interpreted broadly. If you are unsure whether your paper clearly fits into one of these categories, please feel free to email us (details below) to discuss it.

Submissions must be prepared using the EPTCS LaTeX style. Each submission will receive at least three single-blind reviews. If a paper is accepted, at least one of the authors must attend the workshop to present their work.

Important Dates

  • Submission: ~~17th of August 2023~~ Extended to 31st of August 2023 (AOE)
  • Notification: ~~15th of September 2023~~ Extended to 29th of September 2023
  • Final Version due: 20th of October 2023
  • Workshop: 15th and 16th of November 2023

Venue and Registration

FMAS 2023 will be held on the 15th and 16th of November 2023, co-located with the International Conference on Integrated Formal Methods (iFM) 2023, hosted by Leiden University, The Netherlands.

We will accept participation both in person and remotely:

  • At least one author per paper must register for on-site attendance at FMAS through the iFM registration system, even if the paper will be presented remotely – this is to ensure we cover the costs of running FMAS as a satellite workshop at iFM.
  • If you want to participate on-line, please register for a free “On-line ticket FMAS 2023 workshop” through the iFM registration system.
  • If you are presenting your paper and wish to do so remotely, please contact us to let us know, so that we can make the necessary arrangements.


FMAS 2023 is able to reimburse the FMAS registration fee (FMAS only) for authors who are currently students. This is possible thanks to sponsorship by the Formal Methods Europe association. Please contact us to organise reimbursement. We are only able to offer a limited number of reimbursements, so please contact us as soon as possible.

Programme Information


To Be Announced.

Invited Talks

Prof. Alice Miller is a Professor of Computing Science who has worked in Glasgow’s School of Computing Science since 1997, as a Postdoc, Daphne Jackson Fellow, Lecturer, and Senior Lecturer. Before that, she worked at the Universities of East Anglia, Western Australia, and Stirling. Her PhD is in Number Theory, but she has worked in Formal Verification for most of her research career. Essentially, Formal Verification uses mathematical techniques to try to prove things about systems - in particular, to try to reduce hard problems to tractable ones. She is a member of the London Mathematical Society and the IET, and is a Chartered Engineer.

  • Title: Formal methods within the TAS Governance node
  • Abstract: The TAS Governance Node is part of the £33M Trustworthy Autonomous Systems Programme funded by the UKRI Strategic Priorities Fund. The aim of the node is to explore how to make autonomous systems aware of — and responsive to — changing regulations. Led by the University of Edinburgh, it brings together researchers from the universities of Edinburgh, Glasgow, Nottingham, Heriot-Watt, and Sussex, as well as King’s College London and multiple industrial partners. In this talk I will highlight some of the activities within the node, focussing on those that use Formal Methods. These include:
    • an automatic theory repair system for a legal responsibility framework for autonomous vehicles,
    • accident anticipation through reasoned simulation,
    • robot planning using in-situ model checking; and,
    • (formal aspects of) a node-wide automotive case study investigation.

Prof. Erika Ábrahám was born in Hungary and moved to Germany to study Computer Science at the University of Kiel. After her diploma studies, she started to work on deductive proof systems and received her Ph.D. from the University of Leiden in 2005. As a postdoctoral researcher, she was active in different areas of formal methods at the University of Freiburg and at Forschungszentrum Jülich, before being appointed to a junior professorship at RWTH Aachen University in 2008 and becoming a full professor in 2013. Ábrahám’s main research interests are formal methods for the synthesis and analysis of discrete, hybrid, and probabilistic systems. Furthermore, she is also active in the development of algorithms and tools (solvers) for satisfiability checking.

  • Title: SMT: Something you Must Try [Joint talk with iFM]
  • Abstract: SMT (Satisfiability Modulo Theories) solving is a technology for the fully automated solution of logical formulas. Due to their impressive efficiency, SMT solvers are nowadays frequently used in a wide variety of applications: they are general-purpose, off-the-shelf solvers that can be integrated directly into other tools. A typical application encodes real-world problems as logical formulas, whose solutions can be decoded to solutions of the real-world problem. In this talk we give some insights into the mechanisms of SMT solving, discuss some areas of application, and present a novel application from the domain of simulation.
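The encode-solve-decode loop the abstract describes can be illustrated with a toy example. The sketch below uses brute-force enumeration over Boolean assignments — this is emphatically not how real SMT solvers work internally (they use CDCL search combined with theory solvers), and the scheduling scenario and variable names are invented purely for illustration:

```python
# Toy illustration of the SAT/SMT workflow: encode a real-world problem
# as a formula, find a satisfying assignment (a "model"), decode it back.
from itertools import product

# Problem: assign two workshop talks to two time slots.
# Variables: a1 = "talk A is in slot 1", b1 = "talk B is in slot 1".
# Constraints: the talks must not share a slot, and talk A must be
# in slot 1 (its speaker leaves early).
def formula(a1: bool, b1: bool) -> bool:
    no_clash = a1 != b1   # talks occupy different slots
    a_early = a1          # talk A is in slot 1
    return no_clash and a_early

def solve():
    """Enumerate all assignments; return the first model, or None if UNSAT."""
    for a1, b1 in product([True, False], repeat=2):
        if formula(a1, b1):
            return {"a1": a1, "b1": b1}
    return None

model = solve()
if model:
    # Decode the model back into a real-world schedule.
    print("talk A in slot", 1 if model["a1"] else 2)  # talk A in slot 1
    print("talk B in slot", 1 if model["b1"] else 2)  # talk B in slot 2
```

A real solver replaces the exponential enumeration with clever search, and the Boolean variables can be enriched with theory atoms (arithmetic, arrays, bit-vectors) — that enrichment is what the "Modulo Theories" part of SMT refers to.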

Accepted Papers

3vLTL: a Tool to Generate Automata for Three-valued LTL
  • Authors: Francesco Belardinelli, Angelo Ferrando, and Vadim Malvone
  • Abstract: Multi-valued logics have a long tradition in the literature on system verification, including run-time verification. However, comparatively fewer model-checking tools have been developed for multi-valued specification languages. We present 3vLTL, a tool to generate Büchi automata from formulas in Linear-time Temporal Logic (LTL) interpreted under a three-valued semantics. Given an LTL formula, a set of atomic propositions as the automaton’s alphabet, and a truth value, our procedure generates a Büchi automaton that accepts all the words that assign the chosen truth value to the LTL formula. The tool’s output format also allows it to be processed seamlessly by third-party libraries; that is, the Büchi automaton can then be used in the context of formal verification to check whether an LTL formula is true, false, or undefined on a given model.
  • Paper:
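The three-valued verdicts mentioned in the abstract can be sketched with a toy finite-trace evaluator. This is not the 3vLTL tool (which works with Büchi automata over infinite words); it only illustrates, for two operators, why a finite prefix sometimes cannot decide an LTL formula and a third "unknown" value is needed:

```python
# Minimal sketch of three-valued LTL verdicts on a finite trace prefix.
# A verdict is "true"/"false" only if every infinite extension of the
# prefix must agree; otherwise it is "unknown".
TRUE, FALSE, UNKNOWN = "true", "false", "unknown"

def always(trace, p):
    """G p: false as soon as p fails; never definitively true on a prefix."""
    if any(not p(state) for state in trace):
        return FALSE
    return UNKNOWN  # p held so far, but a future state could still violate it

def eventually(trace, p):
    """F p: true as soon as p holds; never definitively false on a prefix."""
    if any(p(state) for state in trace):
        return TRUE
    return UNKNOWN  # p has not held yet, but a future state still might

# Example: states are dicts of atomic propositions.
trace = [{"p": True}, {"p": True}, {"p": False}]
print(always(trace, lambda s: s["p"]))      # "false": p fails in the third state
print(eventually(trace, lambda s: s["p"]))  # "true": p holds in the first state
```

Nesting operators (e.g. G F p) requires the automata-based machinery the paper describes; this sketch handles only the top-level cases.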
Trust Modelling and Verification Using Event-B
  • Authors: Asieh Salehi Fathabadi and Vahid Yazdanpanah
  • Abstract: Trust is a crucial component in collaborative multiagent systems (MAS) involving humans and autonomous AI agents. Rather than assuming trust based on past system behaviours, it is important to formally verify trust by modelling the current state and capabilities of agents. We argue for verifying actual trust relations based on agents' abilities to deliver intended outcomes in specific contexts. To enable reasoning about different notions of trust, we propose using the refinement-based formal method Event-B. Refinement allows progressively introducing new aspects of trust - from abstract to concrete models incorporating knowledge and runtime states. We demonstrate modelling three trust concepts and verifying associated trust properties in MAS. The formal, correctness-by-construction approach allows deducing guarantees about trustworthy autonomy in human-AI partnerships. Overall, our contribution facilitates rigorous verification of trust in multiagent systems.
  • Paper:
Runtime Verification of Learning Properties for Reinforcement Learning Algorithms
  • Authors: Tommaso Mannucci and Julio de Oliveira Filho
  • Abstract: Reinforcement learning (RL) algorithms interact with their environment in a trial-and-error fashion. Such interactions can be expensive, inefficient, and time-consuming when learning on a physical system rather than in a simulation. This work develops new runtime verification techniques to predict when the learning phase has not met, or will not meet, qualitative and timeliness expectations. This paper presents three verification properties concerning the quality and timeliness of learning in RL algorithms. With each property, we propose design steps for monitoring and assessing the properties during the system's operation.
  • Paper:
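To make the idea of monitoring learning progress concrete, here is a hypothetical sketch of the kind of timeliness property such a monitor might check. The property, thresholds, and names below are invented for illustration and are not taken from the paper:

```python
# Hypothetical runtime monitor for an RL learning phase: flag the run if
# the latest episode reward fails to improve on the recent average by at
# least `min_gain`. (Property and thresholds are illustrative only.)
def learning_monitor(rewards, window=5, min_gain=0.1):
    """Return the 1-based episode at which progress stalls, or None."""
    for i in range(window, len(rewards)):
        baseline = sum(rewards[i - window:i]) / window  # recent average
        if rewards[i] < baseline + min_gain:
            return i + 1  # learning did not progress as expected here
    return None

improving = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0, 1.2, 1.4]
stalled   = [0.0, 0.2, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4]
print(learning_monitor(improving))  # None: reward keeps climbing
print(learning_monitor(stalled))    # an episode index: learning has stalled
```

Running such a check online lets a system abort or adjust an expensive physical learning run early, which is the motivation the abstract gives.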
Online Reachability Analysis and Space Convexification for Autonomous Racing
  • Authors: Sergiy Bogomolov, Taylor T Johnson, Diego Manzanas, Patrick Musau, and Paulius Stankaitis
  • Abstract: This paper presents an optimisation-based approach for an obstacle avoidance problem within an autonomous vehicle racing context. Our control regime leverages online reachability analysis and sensor data to compute the maximal safe traversable region that an agent can traverse within the environment. The idea is to first compute a non-convex safe region, which then can be convexified via a novel coupled separating hyperplane algorithm. This derived safe area is then used to formulate a nonlinear model-predictive control problem that seeks to find an optimal and safe driving trajectory. We evaluate the proposed approach through a series of diverse experiments and assess the runtime requirements of our proposed approach through an analysis of the effects of a set of varying optimisation objectives for generating these coupled hyperplanes.
  • Paper:
Towards proved formal specification and verification of STL operators as synchronous observers
  • Authors: Céline Bellanger, Pierre-Loïc Garoche, Matthieu Martel, and Célia Picard
  • Abstract: Signal Temporal Logic (STL) is a convenient formalism to express bounded horizon properties of autonomous critical systems. STL extends LTL to real-valued signals and associates a non-singleton bound interval to each temporal operators. In this work we provide a rigorous encoding of non-nested discrete-time STL formulas into Lustre synchronous observers. Our encoding provides a three-valued online semantics for the observers and therefore enables both the verification of the property and the search of counter-examples. A key contribution of this work is instrumented proof of the validity of the implementation. Each node is proved correct with respect to the original STL semantics. All the experiments are automated with the Kind2 model-checker and the Z3 SMT solver.
  • Paper:
What to tell when? -- Information Provision as a Game
  • Authors: Astrid Rakow, Akhila Bairy, and Mehrnoush Hajnorouzi
  • Abstract: Constantly informing systems (CIS), that is, technical systems that provide us with information over a long period of time, face the challenge of providing information that is actually helpful. The information base of a human model changes over time, but so do the human's mood and ability to accept information. An information provision strategy should hence take such aspects into account. In this paper, we describe our vision of an approach to aid the design of CIS. We envision using psychological models of the human mind and emotions in an information provision game, whose analysis gives comparative insights into design variants.
  • Paper:
Enforcing Timing Properties in Motorway Traffic
  • Authors: Christopher Bischopink
  • Abstract: In previous work, we proposed a runtime enforcement approach to deal with timing properties in motorway traffic, expressed as formulae of Timed Multi-Lane Spatial Logic (TMLSL), a logic tailored to express both spatial and timing properties. Employing communication between the cars, we utilised a nondeterministic controller that guesses which actions to execute next for each car, before asking the cars' local monitors for permission to execute the announced actions. In this contribution, we consider a more sensible controller that only considers sequences satisfying its own properties. This is done using region automata that can be generated from the cars' specifications. Along the way, we also obtained a small decidability result for TMLSL.
  • Paper:
Model checking for Robot Reactive Planning
  • Authors: Christopher Chandler, Bernd Porr, Alice Miller, and Giulia Lafratta
  • Abstract: In this paper, we show how model checking can be used to create multi-step plans for a differential drive wheeled robot so that it can avoid immediate danger. Using a small, purpose built model checking algorithm in situ we generate plans in real time in a way that reflects the egocentric reactive response of simple biological agents. Our approach is based on chaining temporary control systems which are spawned to eliminate disturbances in the local environment that disrupt an autonomous agent from its preferred action (or resting state). The method involves a novel discretization of 2D LiDAR data which is sensitive to bounded stochastic variations in the immediate environment. We operationalise multi-step planning using invariant checking by forward depth-first search, using a cul-de-sac scenario as a first test case. Our results demonstrate that model checking can be used to plan efficient trajectories for local obstacle avoidance, improving on the performance of a reactive agent which can only plan one step. We achieve this in near real-time using no pre-computed data. While our method has limitations, we believe our approach shows promise as an avenue for the development of safe, reliable and transparent trajectory planning in the context of autonomous vehicles.
  • Paper:
Certified Control for Train Sign Classification
  • Authors: Jan Roßbach and Michael Leuschel
  • Abstract: The study of fully autonomous train systems for specialized applications is of high economic interest. The KI-LOK research project is involved in developing new methods for certifying such systems. Here we explore the utility of a certified control architecture for a runtime monitor that prevents false-positive detection of traffic signs in an AI-based perception system. The monitor uses classical computer vision algorithms to check if the signs -- detected by an AI object detection model -- fit pre-defined specifications. We provide such specifications for some critical signs and integrate a Python prototype of the monitor with a popular object detection model to measure relevant performance metrics on generated data. Our initial results are promising, achieving considerable precision gains with only minor recall reduction; however, further investigation into generalization possibilities will be necessary.
  • Paper:
Towards Formal Fault Injection for Safety Assessment of Automated Systems
  • Authors: Ashfaq Farooqui and Behrooz Sangchoolie
  • Abstract: Reasoning about the safety, security, and other dependability attributes of autonomous systems is a challenge that needs to be addressed before the adoption of such systems in day-to-day life. _Formal methods_ contain a class of methods that can mathematically reason about the system's behavior. Since these methods employ mathematical techniques, a correctness proof is sufficient to conclude the system's dependability. However, these methods are usually applied to abstract models of the system, which might not fully represent the actual system. _Fault injection_, on the other hand, is a class of testing methods that are often used to evaluate the dependability of systems at different phases of the development lifecycle. However, the amount of testing required to evaluate the system is rather large and often a problem. This vision paper presents the concept of _formal fault injection_, a fusion of these two techniques. Our vision involves integrating them throughout the development lifecycle to enhance the dependability of autonomous systems. We advocate for a more cohesive approach by identifying five areas of mutual support between formal methods and fault injection. By forging stronger ties between the two fields, we pave the way for developing safe and dependable autonomous systems. This paper delves into the integration's potential and outlines future research avenues, addressing open challenges along the way.
  • Paper:
Formal Verification of Long Short-Term Memory based Audio Classifiers: A Star based Approach
  • Authors: Neelanjana Pal and Taylor T Johnson
  • Abstract: Formally verifying audio classification systems is essential to ensure accurate signal classification across real-world applications like surveillance, automotive voice commands, and multimedia content management, preventing potential errors with serious consequences. Drawing from recent research, this study advances the utilization of star-set-based formal verification, extended through reachability analysis, tailored explicitly for Long Short-Term Memory architectures and their Convolutional variations within the audio classification domain. By conceptualizing the classification process as a sequence of set operations, the star set-based reachability approach streamlines the exploration of potential operational states attainable by the system. The paper serves as an encompassing case study, validating and verifying sequence audio classification analytics within real-world contexts. It accentuates the necessity for robustness verification to ensure precise and dependable predictions, particularly in light of the impact of noise on the accuracy of output classifications.
  • Paper:
Correct-by-Construction Control for Stochastic and Uncertain Dynamical Models via Formal Abstractions
  • Authors: Thom Badings, Licio Romao, Alessandro Abate, and Nils Jansen
  • Abstract: Automated synthesis of correct-by-construction controllers for autonomous systems is crucial for their deployment in safety-critical scenarios. Such autonomous systems are naturally modeled as stochastic dynamical models. The general problem is to compute a controller that provably satisfies a given task, represented as a probabilistic temporal logic specification. However, factors such as stochastic uncertainty, imprecisely known parameters, and hybrid features make this problem challenging. We have developed an abstraction framework that can be used to solve this problem under various modeling assumptions. Our approach is based on a robust finite-state abstraction of the stochastic dynamical model in the form of a Markov decision process with intervals of probabilities (iMDP). We use state-of-the-art verification techniques to compute an optimal policy on the iMDP with guarantees for satisfying the given specification. We then show that, by construction, we can refine this policy into a feedback controller for which these guarantees carry over to the dynamical model. In this short paper, we survey our recent research in this area and highlight two challenges (related to scalability and dealing with nonlinear dynamics) that we aim to address with our ongoing research.
  • Paper:
Automatic Generation of Scenarios for System-level Simulation-based Verification of Autonomous Driving Systems
  • Authors: Srajan Goyal, Alberto Griggio, Jacob Kimblad, and Stefano Tonetta
  • Abstract: With the increasing complexity of Automated Driving Systems (ADS), ensuring their safety and reliability has become a critical challenge. The Verification and Validation (V&V) of these systems is particularly demanding when AI components are employed to implement perception and/or control functions. In the ESA-funded project VIVAS, we developed a generic framework for system-level simulation-based V&V of autonomous systems. The approach is based on a simulation model of the system, an abstract model that describes symbolically the system behavior, and formal methods to generate scenarios and to verify the simulation executions. Various coverage criteria can be defined to guide the automated generation of the scenarios. In this paper, we describe the instantiation of the VIVAS framework for an ADS case study. This is based on the integration of CARLA, a widely-used driving simulator, and its ScenarioRunner tool, which enables the creation of diverse and complex driving scenarios. This is also used in the CARLA Autonomous Driving Challenge to validate different ADS agents for perception and control based on AI, shared by the CARLA community. We describe the development of an abstract ADS model and the formulation of a coverage criterion that focuses on the behaviors of vehicles relative to the vehicle with ADS under verification. Leveraging the VIVAS framework, we generate and execute various driving scenarios, thus testing the capabilities of the AI components. The results show the effectiveness of VIVAS in automatically generating scenarios for system-level simulation-based V&V of an automated driving system using CARLA and ScenarioRunner. Therefore, they highlight the potential of the approach as a powerful tool in the future of ADS V&V methodologies.
  • Paper:
Comparing Differentiable Logics for Learning Systems: A Research Preview
  • Authors: Thomas Flinkow, Barak A. Pearlmutter, and Rosemary Monahan
  • Abstract: Extensive research on formal verification of machine learning (ML) systems indicates that learning from data alone often fails to capture underlying background knowledge. A variety of verifiers have been developed to ensure that a machine-learnt model satisfies correctness and safety properties; however, these verifiers typically assume a trained network with fixed weights. ML-enabled autonomous systems are required to not only detect incorrect predictions, but should also possess the ability to self-correct, continuously improving and adapting. A promising approach for creating ML models that inherently satisfy constraints is to encode background knowledge as logical constraints that guide the learning process via so-called differentiable logics. In this research preview, we compare and evaluate various logics from the literature in weakly-supervised contexts, presenting our findings and highlighting open problems for future work. Our experimental results are broadly consistent with results previously reported in the literature; however, learning with differentiable logics introduces a new hyperparameter that is difficult to tune and has significant influence on the effectiveness of the logics.
  • Paper:
Extending Neural Network Verification to a Larger Family of Piece-wise Linear Activation Functions
  • Authors: László Antal, Hana Masara, and Erika Ábrahám
  • Abstract: In this paper, we extend an available neural network verification technique to support a wider class of piece-wise linear activation functions. Furthermore, we extend the algorithms, which in their original form provide exact and over-approximative results, respectively, for bounded input sets represented as star sets, to also allow unbounded input sets. We implemented our algorithms and demonstrate their effectiveness on some case studies.
  • Paper:

Programme Committee

Organising Committee

Dr Matt Luckcuck University of Nottingham, UK

Dr Marie Farrell University of Manchester, UK

Jun.-Prof. Dr Maike Schwammberger Karlsruhe Institute of Technology, Germany

Dr Mario Gleirscher University of Bremen, Germany