FMAS 2025
The Seventh International Workshop on Formal Methods for Autonomous Systems will be a 2.5-day, peer-reviewed international workshop that brings together researchers working on a range of techniques for the formal verification of autonomous systems, to present recent work in the area, discuss key challenges, and stimulate collaboration between autonomous systems and formal methods researchers. Previous editions are listed on DBLP: https://dblp.dagstuhl.de/db/conf/fmas/index.html.
Scope
Autonomous systems present unique challenges for formal methods. They are often embodied in robotic systems that can interact with the real world, and they make independent decisions. Amongst other categories, they can be viewed as safety-critical, cyber-physical, hybrid, and real-time systems.
Key challenges for applying formal methods to autonomous systems include:
- the system’s dynamic deployment environment;
- verifying the system’s decision-making capabilities – including planning, ethical, and reconfiguration choices; and
- using formal methods results as evidence given to certification or regulatory organisations.
FMAS welcomes submissions that use formal methods to specify, model, or verify autonomous systems, in whole or in part. We are especially interested in work using integrated formal methods, where multiple (formal or non-formal) methods are combined during the software engineering process. We encourage submissions that advance the applicability of formal methods for autonomous systems, for example by improving the integration, explainability, automation, or knowledge transfer of these techniques; a wider discussion of these principles can be found in ‘A Manifesto for Applicable Formal Methods’.
Autonomous systems are often embedded in robotic or cyber-physical systems, and they share many features (and verification challenges) with automated systems. FMAS welcomes submissions with applications to:
- automated systems,
- semi-autonomous systems, or
- fully-autonomous systems.
Topics
Topics of interest include (but are not limited to):
- Applicable, tool-supported Formal Methods that are suited to Autonomous Systems,
- Runtime Verification or other formal approaches to deal with the reality gap (the gap between models/simulations and the real world),
- Verification against safety assurance arguments or standards documents,
- Formal specification, modelling and requirements engineering approaches for autonomous systems,
- Case Studies that identify challenges when applying formal methods to autonomous systems,
- Experience Reports that provide guidance for tackling challenges with formal methods or tools, or
- Discussions of the future directions of the field.
For inspiration, please see the Mural board documenting our discussion session at FMAS 2024.
You can also see the open access FMAS Special Issue with previously accepted papers.
Special Topic: Human-AI Teams
In addition to the topics above, we would like to invite work on formal methods for human-AI teamwork. We interpret AI broadly, as autonomous, intelligent, or self-learning systems.
Papers intended for this special topic could include (but are not limited to):
- Formal modelling of human behaviour,
- Joint/shared cognition,
- Human-Robot Interaction and communication,
- Robot-driven teams,
- Human-controlled swarms,
- Application areas could include: aerospace, firefighting/search and rescue, infrastructure monitoring, or hazardous environment inspection.
Papers submitted for this special topic should be within the usual scope of FMAS.
Submission and Publication
Submissions must be prepared using the EPTCS LaTeX style.
There are four categories of submission:
- Vision papers (6 pages, excluding references) describe directions for research into Formal Methods for Autonomous Systems;
- Research previews (6 pages, excluding references) describe well-defined research ideas that are in their early stages and may not be fully developed yet. Work from PhD students is particularly welcome;
- Experience report papers (15 pages, excluding references) report on practical experiences in applying Formal Methods to Autonomous Systems, focussing on the experience and lessons to be learnt;
- Regular papers (15 pages, excluding references) describe completed applications of Formal Methods to an Autonomous System, new or improved approaches, evaluations of existing approaches, and so on.
You will need to select one of the four submission categories during submission.
These categories are intended to help you signal the intent of your paper, and to allow a fairer comparison of papers. For example, a Research Preview won't be judged as not developed enough for acceptance just because it is being compared to a Regular Paper. The category descriptions are not exhaustive and should be interpreted broadly. If you are unsure whether your paper clearly fits into one of these categories, please feel free to email us (details below) to discuss it.
Submission Details
- Please submit your paper via EasyChair: https://easychair.org/conferences/?conf=fmas2025.
- Submissions must be prepared using the EPTCS LaTeX style.
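For convenience, a minimal paper skeleton using the EPTCS class is sketched below. This is only illustrative, based on the publicly available EPTCS sample file: the title, author details, and the \event value are placeholders, and the current EPTCS author instructions remain the authoritative reference.

    \documentclass[submission]{eptcs}
    \providecommand{\event}{FMAS 2025} % name of the event you are submitting to
    \usepackage{breakurl}              % used in the EPTCS sample for URLs when not compiling with pdflatex

    \title{Your Paper Title}
    \author{An Author
      \institute{An Institution\\ A City, A Country}
      \email{author@example.org}
    }
    \def\titlerunning{Your Paper Title} % short title for page headers
    \def\authorrunning{A. Author}       % short author list for page headers

    \begin{document}
    \maketitle
    \begin{abstract}
    A short abstract of the paper.
    \end{abstract}

    \section{Introduction}
    % Remember the page limits: 6 pages for vision papers and research
    % previews, 15 pages for experience reports and regular papers
    % (both excluding references).

    \end{document}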
Best Paper
FMAS 2025 will present an award to the best paper, selected based on reviews, programme committee discussions, and workshop presentations.
Special Issue
We will organise a journal special issue to collect extended versions of papers accepted at FMAS 2025. Look out for more details as we announce them.
Important Dates
- Extended Submission: 28th August 2025 (AoE)
- Notification: 6th October 2025
- Final Version due: 17th October 2025
- Workshop: 17th-19th November 2025
Poster Session
FMAS 2025 will have a co-located session for demos and posters with iFM 2025. More details will be given closer to the time.
Venue and Registration
FMAS 2025 will be held from the 17th to the 19th of November 2025, co-located with the International Conference on Integrated Formal Methods (iFM) 2025, hosted this year by the Inria Paris Center.
For travel and accommodation suggestions, see the iFM venue page.
We will accept both in-person and remote participation:
- At least one author per paper must register and pay for on-site attendance at FMAS, even if the paper will be presented remotely – this is to ensure we cover the costs of running FMAS as a satellite workshop at iFM. Register for FMAS via iFM.
- Presenting or participating online will be free.
- If you are presenting your paper remotely, please contact us at FMASWorkshop@tutanota.com to let us know, so that we can make the necessary arrangements.
Student Registration Fee Support

We are pleased to offer registration waivers to a limited number of students attending FMAS 2025. This is only possible thanks to the kind sponsorship of Formal Methods Europe. Application details will be available here after the paper notification deadline (6th October 2025).
Programme Information
Schedule
(All times are in Central European Time)
Day One:
09:00 - 12:00 : Registration
12:00 - 13:00 : Lunch
Session 0: Opening and FMAS Keynote
Chair: TBA
- 13:30 - 13:45 Welcome
- 13:45 - 14:30 Keynote
- 14:30 - 15:00 Discussion & Introduction of breakout sessions
15:00 - 15:30 : Coffee Break
Session 1: Robot Architectures
Chair: TBA
- 15:30 - 16:00 Mutation Testing for Industrial Robotic Systems - Marcela Gonçalves dos Santos, Sylvain Hallé and Fabio Petrillo
- 16:00 - 16:30 Safe-ROS: An Architecture for Autonomous Robots in Safety-Critical Domains - Diana Benjumea Hernandez, Marie Farrell and Louise Dennis
- 16:30 - 17:00 Author Panel & Discussion
Day Two:
Session 2: Sub-Symbolic AI meets Verification at Run-time
Chair: TBA
- 9:00 - 9:30 Achieving Safe Control Online through Integration of Harmonic Control Lyapunov–Barrier Functions with Unsafe Object-Centric Action Policies - Marlow Fawn and Matthias Scheutz
- 9:30 - 9:50 Watchdogs and Oracles: Runtime Verification Meets Large Language Models for Autonomous Systems - Angelo Ferrando
- 9:50 - 10:00 Author Panel & Discussion
10:00 - 10:30 : Coffee Break
Session 3: Formal Verification of Cyber-Physical Systems
Chair: TBA
- 10:30 - 11:00 Formal Verification of Local Robustness of a Classification Algorithm for a Spatial Use Case - Delphine Longuet, Amira Elouazzani, Alejandro Penacho Riveiros and Nicola Bastianello
- 11:00 - 11:20 Abstract Scene Graphs: Formalizing and Monitoring Spatial Properties of Automated Driving Functions - Ishan Saxena, Bernd Westphal and Martin Fränzle
- 11:20 - 11:40 Analyzing many simulations of hybrid programs in Lince - Reydel Arrieta Olano, Renato Neves, José Proença and Patrick Meumeu Yomsi
- 11:40 - 12:00 Author Panel & Discussion
12:00 - 13:30 : Lunch
Session 4: Formal Modelling of Human Factors
Chair: TBA
- 13:30 - 14:00 Model Learning for Adjusting the Level of Automation in HCPS - Mehrnoush Hajnorouzi, Astrid Rakow and Martin Fränzle
- 14:00 - 14:30 Context-aware, Ante-hoc Explanations of Driving Behaviour - Dominik Grundt, Ishan Saxena, Malte Petersen, Bernd Westphal and Eike Möhlmann
- 14:30 - 14:40 Author Panel & Discussion
- 14:40 - 15:00 Preparation for Breakout Session
15:00 - 15:30 : Coffee Break
15:30 - 17:00 : Breakout Activity
Day Three:
9:00 - 10:00 : Invited Talk: Integrated Formal Methods for the Verification of Cyber-Physical and Autonomous Systems - Prof. Paula Herber
Chair: TBA
10:00 - 10:30 : Coffee Break
Session 6: Autonomous Systems (shared with iFM)
Chair: TBA
- 10:30 - 11:00 (iFM) Formal Modelling of Trust in Autonomous Delivery Vehicles - Manar Altamimi, Asieh Salehi Fathabadi and Vahid Yazdanpanah
- 11:00 - 11:30 Towards A Catalogue of Formalised Requirement Patterns for Robotic Space Missions - Mahdi Etumi, Hazel Taylor and Marie Farrell
- 11:30 - 12:00 (iFM) The CAISAR Platform: Extending the Reach of Machine Learning Specification and Verification - Michele Alberti, François Bobot, Julien Girard-Satabin, Alban Grastien, Aymeric Varasse and Zakaria Chihani
- 12:00 - 12:10 FMAS Wrap-Up
12:10 - 13:30 : Lunch Break & Joint FMAS/iFM Poster Session
Invited Talks
Prof. Paula Herber
Prof. Paula Herber is a full professor at the University of Münster, Germany. Her main research interests are quality assurance for intelligent embedded systems, test automation, and formal methods.
- Title: Integrated Formal Methods for the Verification of Cyber-Physical and Autonomous Systems
- Abstract: Cyber-physical systems pose significant challenges for formal methods, due to their inherent heterogeneity and interaction with a physical environment. In addition, we see a tremendous increase in the use of learning to make autonomous decisions in dynamic environments, and an increasing demand to cope with uncertainties and unforeseen events. These developments call for new theories and tools, but also for the integration of existing formal methods. In this talk, I will summarize some of our recent efforts towards developing methods for the integration of formal methods to support reusable specification and scalable verification of cyber-physical and autonomous systems.
Invited Tutorials
TBA
Accepted Papers
Abstract Scene Graphs: Formalizing and Monitoring Spatial Properties of Automated Driving Functions
- Authors: Ishan Saxena, Bernd Westphal and Martin Fränzle
- Abstract:
Automated Driving Functions (ADFs) need to comply with spatial properties of varied complexity while driving on public roads. Since such situations are safety-critical in nature, it is necessary to continuously check ADFs for compliance with their spatial properties. Due to their complexity, such spatial properties need to be formalized to enable their automated checking. Scene Graphs (SGs) allow for an explicit structured representation of objects present in a traffic scene and their spatial relationships to each other. In this paper, we build upon the SG construct and propose the Abstract Scene Graph (ASG) formalism to formalize spatial properties of ADFs. We show using real-world examples how spatial properties can be formalized using ASGs. Finally, we present a framework that uses ASGs to perform Runtime Monitoring of ADFs. To this end, we also show algorithmically how a spatial property formalized as an ASG can be satisfied by ADF system behaviour.
Achieving Safe Control Online through Integration of Harmonic Control Lyapunov–Barrier Functions with Unsafe Object-Centric Action Policies
- Authors: Marlow Fawn and Matthias Scheutz
- Abstract:
We propose a method for combining Harmonic Control Lyapunov–Barrier Functions (HCLBFs) derived from Signal Temporal Logic (STL) specifications with any given robot policy to turn an unsafe policy into a safe one with formal guarantees. The two components are combined via HCLBF-derived safety certificates, thus producing commands that preserve both safety and task-driven behavior. We demonstrate the operation of the principles with a simple proof-of-concept implementation for an object-centric force-based policy trained through reinforcement learning for a movement task of a stationary robot arm that is able to avoid colliding with obstacles on a table top after combining the policy with the safety constraints. The proposed method can be generalized to more complex specifications and dynamic task settings.
Analyzing many simulations of hybrid programs in Lince
- Authors: Reydel Arrieta Olano, Renato Neves, José Proença and Patrick Meumeu Yomsi
- Abstract:
Hybrid systems are increasingly used in critical applications such as medical devices, infrastructure systems, and autonomous vehicles. Lince is an academic tool for specifying and simulating such systems using a C-like language with differential equations. This paper presents recent experiments that enhance Lince with mechanisms for executing multiple simulation variants and generating histograms that quantify the frequency with which a given property holds. We illustrate our extended Lince using variations of an adaptive cruise control system.
Context-aware, Ante-hoc Explanations of Driving Behaviour
- Authors: Dominik Grundt, Ishan Saxena, Malte Petersen, Bernd Westphal and Eike Möhlmann
- Abstract:
Autonomous vehicles (AVs) must be both safe and trustworthy to gain social acceptance and become a viable option for everyday public transportation. Explanations about the system behaviour can increase safety and trust in AVs. However, the integration of AI-based driving functions in AVs makes explainability challenging, as decision-making processes are often opaque. The field of Explainability Engineering addresses this challenge by designing a system’s explanation model already at design time, based on analysing requirements, operational domains, and stakeholder needs. Its goal is to provide explanations that are both correct and good, considering system implementation, the current traffic context, and explanation goals for different stakeholders. To address these challenges, we propose an approach that enables context-aware, ante-hoc explanations of (un)expectable driving manoeuvres at runtime. The visual yet formal language Traffic Sequence Charts (TSC) is used to formalise explanation contexts, as well as corresponding (un)expectable driving manoeuvres. A dedicated runtime monitoring enables context-recognition and ante-hoc presentation of explanations at runtime. In combination, we aim to support the bridging of correct and good explanations. Our method is demonstrated in a simulated overtaking scenario.
Formal Verification of Local Robustness of a Classification Algorithm for a Spatial Use Case
- Authors: Delphine Longuet, Amira Elouazzani, Alejandro Penacho Riveiros and Nicola Bastianello
- Abstract:
Failures in satellite components are costly and challenging to address, often requiring significant human and material resources. Embedding a hybrid AI-based system for fault detection directly in the satellite can greatly reduce this burden by allowing earlier detection. However, such systems must operate with extremely high reliability. To ensure this level of dependability, we employ the formal verification tool Marabou to verify the local robustness of the neural network models used in the AI-based algorithm. This tool allows us to quantify how much a model’s input can be perturbed before its output behavior becomes unstable, thereby improving trustworthiness with respect to its performance under uncertainty.
Model Learning for Adjusting the Level of Automation in HCPS
- Authors: Mehrnoush Hajnorouzi, Astrid Rakow and Martin Fränzle
- Abstract:
The ever-increasing level of automation in human-centered systems leads to a demand for advanced design methods for automation control at the human-machine interface. The complex, multi-layered interaction between the automated system and the human must be thoroughly checked, especially in safety-critical applications, to identify potential risks to overall safety. This requires a strategy that not only optimizes the capabilities of the automation, but also takes into account the variability of human input. In this paper, we present a model-based approach for the development of a shared control that takes into account human behavior and the impact of automation on humans. The approach is based on a cognitive architecture that is transformed into a finite-state automaton through active model learning. The abstraction is then integrated into a game-theoretic framework aimed at control synthesis for automation with respect to safety requirements. The learned model is incrementally refined until it represents a sufficiently precise model of human interaction.
Mutation Testing for Industrial Robotic Systems
- Authors: Marcela Gonçalves dos Santos, Sylvain Hallé and Fabio Petrillo
- Abstract:
Industrial robotic systems (IRS) are increasingly deployed in diverse environments, where failures can result in severe accidents and costly downtime. Ensuring the reliability of the software controlling these systems is therefore critical. Mutation testing, a technique widely used in software engineering, evaluates the effectiveness of test suites by introducing small faults, or mutants, into the code. However, traditional mutation operators are poorly suited to robotic programs, which involve message-based commands and interactions with the physical world. This paper explores the adaptation of mutation testing to IRS by defining domain-specific mutation operators that capture the semantics of robot actions and sensor readings. We propose a methodology for generating meaningful mutants at the level of high-level read and write operations, including movement, gripper actions, and sensor noise injection. An empirical study on a pick-and-place scenario demonstrates that our approach produces more informative mutants and reduces the number of invalid or equivalent cases compared to conventional operators. Results highlight the potential of mutation testing to enhance test suite quality and contribute to safer, more reliable industrial robotic systems.
Safe-ROS: An Architecture for Autonomous Robots in Safety-Critical Domains
- Authors: Diana Benjumea Hernandez, Marie Farrell and Louise Dennis
- Abstract:
Deploying autonomous robots in safety-critical domains requires architectures that ensure operational effectiveness and safety compliance. In this paper, we contribute the Safe-ROS architecture for developing reliable and verifiable autonomous robots in such domains. It features two distinct subsystems: (1) an intelligent control system that is responsible for normal/routine operations, and (2) a Safety System consisting of Safety Instrumented Functions (SIFs) that provide formally verifiable independent oversight. We demonstrate Safe-ROS on an AgileX Scout Mini robot performing autonomous inspection in a nuclear environment. One safety requirement is selected and instantiated as a SIF. To support verification, we implement the SIF as a cognitive agent, programmed to stop the robot whenever it detects that it is too close to an obstacle. We verify that the agent meets the safety requirement and integrate it into the autonomous inspection. This integration is also verified, and the full deployment is validated in a Gazebo simulation, and lab testing. We evaluate this architecture in the context of the UK nuclear sector, where safety and regulation are crucial aspects of deployment. Success criteria include the development of a formal property from the safety requirement, implementation, and verification of the SIF, and the integration of the SIF into the operational robotic autonomous system. Our results demonstrate that the Safe-ROS architecture can provide safety verifiable oversight while deploying autonomous robots in safety-critical domains, offering a robust framework that can be extended to additional requirements and various applications.
Towards A Catalogue of Formalised Requirement Patterns for Robotic Space Missions
- Authors: Mahdi Etumi, Hazel Taylor and Marie Farrell
- Abstract:
In the development of safety and mission-critical systems, including autonomous space robotic missions, complex behaviour is expressed during the requirements elicitation phase. Requirements are typically expressed using natural language which is ambiguous and not amenable to formal verification methods that can provide robust guarantees of system behaviour. To support the definition of formal requirements, specification patterns provide reusable, logic-based templates. However, there remains a gap between the natural language requirements and logical specification patterns. Tools like NASA's Formal Requirements Elicitation Tool (FRET) aim to bridge this gap. A suite of robotic specification patterns, along with their FRET formalisation already exists in the literature. These pre-existing requirement patterns are domain agnostic and, in this paper we seek to explore their applicability for space missions. To achieve this we carried out a literature review of existing space missions and formalised their requirements using FRET, contributing a corpus of space mission requirements. We categorised these requirements using existing specification patterns which demonstrated the applicability of existing patterns in space missions. However, not all of the requirements that we formalised could be assigned an existing pattern so we have contributed 5 new requirement specification patterns as well as several variants of the existing and new patterns. We also conducted an expert evaluation of the new patterns, highlighting their benefits and limitations.
Watchdogs and Oracles: Runtime Verification Meets Large Language Models for Autonomous Systems
- Authors: Angelo Ferrando
- Abstract:
Assuring the safety and trustworthiness of autonomous systems is particularly difficult when learning-enabled components and open environments are involved. Formal methods provide strong guarantees but depend on complete models and static assumptions. Runtime verification (RV) complements them by monitoring executions at run time and, in its predictive variants, by anticipating potential violations. Large language models (LLMs), meanwhile, excel at translating natural language into formal artefacts and recognising patterns in data, yet they remain error-prone and lack formal guarantees. This vision paper argues for a symbiotic integration of RV and LLMs. RV can serve as a guardrail for LLM-driven autonomy, while LLMs can extend RV by assisting specification capture, supporting anticipatory reasoning, and helping to handle uncertainty. We outline how this mutual reinforcement differs from existing surveys and roadmaps, discuss challenges and certification implications, and identify future research directions towards dependable autonomy.
Programme Committee
- Oana Andrei University of Glasgow (UK)
- Akhila Bairy Karlsruhe Institute of Technology (Germany)
- Siddhartha Bhattacharyya Florida Institute of Technology (USA)
- Rafael C. Cardoso University of Aberdeen (UK)
- Priyank Desai Amazon.com (USA)
- Marie Farrell University of Manchester (UK)
- Fatma Faruq University of Manchester (UK)
- Angelo Ferrando University of Modena and Reggio Emilia (Italy)
- Michael Fisher University of Manchester (UK)
- Thomas Flinkow Maynooth University (Ireland)
- Mallory S. Graydon NASA Langley Research Center (USA)
- Jérémie Guiochet University of Toulouse (France)
- Ichiro Hasuo National Institute of Informatics (Japan)
- Christian Heinzemann Robert Bosch GmbH Corporate Research (Germany)
- Asmae Heydari Tabar Karlsruhe Institute of Technology (Germany)
- Taylor T. Johnson Vanderbilt University (USA)
- Verena Klös Carl von Ossietzky University Oldenburg (Germany)
- Livia Lestingi Politecnico di Milano (Italy)
- Sven Linker Kernkonzept (Germany)
- Matt Luckcuck University of Nottingham (UK)
- Anastasia Mavridou KBR Wyle/NASA Ames Research Center (USA)
- Claudio Menghi University of Bergamo (Italy)
- Alice Miller University of Glasgow (UK)
- Alvaro Miyazawa University of York (UK)
- Rosemary Monahan Maynooth University (Ireland)
- Natasha Neogi NASA Langley Research Center (USA)
- Colin Paterson University of York (UK)
- Baptiste Pelletier ONERA - The French Aerospace Lab (France)
- Andrea Pferscher University of Oslo (Norway)
- Astrid Rakow German Aerospace Center (DLR) e.V. (Germany)
- Maike Schwammberger Karlsruhe Institute of Technology (Germany)
- Sanaz Sheikhi Stony Brook University (USA)
- Oisín Sheridan Maynooth University (Ireland)
- Neeraj Kumar Singh INPT-ENSEEIHT / IRIT, University of Toulouse (France)
- James Stovold Lancaster University Leipzig (Germany)
- Silvia Lizeth Tapia Tarifa University of Oslo (Norway)
- Stefano Tonetta Fondazione Bruno Kessler (FBK) (Italy)
- Elena Troubitsyna KTH Royal Institute of Technology (Sweden)
- Hao Wu Maynooth University (Ireland)
- Mengwei Xu University of Newcastle (UK)
Organising Committee
- General Chair: Dr Matt Luckcuck University of Nottingham, UK
- Programme Committee Co-chair: Jun.-Prof. Dr Maike Schwammberger Karlsruhe Institute of Technology, Germany
- Programme Committee Co-chair: Dr Mengwei Xu University of Newcastle, UK
- Dr Marie Farrell University of Manchester, UK
- Dr Mario Gleirscher University of Bremen, Germany
- Diana Carolina Benjumea Hernandez University of Manchester, UK
- Akhila Bairy Karlsruhe Institute of Technology, Germany
- Ray Limarga University of Manchester, UK
- Thomas Flinkow Maynooth University, Ireland
- Simon Kolker University of Manchester, UK
- LinkedIn: Join the LinkedIn Group
- BlueSky: https://bsky.app/profile/fmasworkshop.bsky.social
- Mastodon: https://mastodon.acm.org/@FMASWorkshop
- Twitter: https://twitter.com/FMASWorkshop