Citing these papers

When citing a paper published at FMAS through the Electronic Proceedings in Theoretical Computer Science (EPTCS), please use the full title of the proceedings; do not cite it as an arXiv paper. The BibTeX citation from arXiv hides the workshop’s name, so it should not be used.

Instead, please use the FMAS proceedings index on DBLP, which provides a BibTeX export with the full proceedings title. Alternatively, the EPTCS page linked for each paper below provides a BibTeX entry with the full proceedings title.

FMAS 2025

The seventh edition, co-located with the International Conference on Integrated Formal Methods (iFM) 2025, hosted by the Inria Paris Center, France.

Proceedings upcoming.

Abstract Scene Graphs: Formalizing and Monitoring Spatial Properties of Automated Driving Functions (short)
  • Authors: Ishan Saxena, Bernd Westphal and Martin Fränzle
  • Paper: https://dx.doi.org/10.4204/EPTCS.436.3
  • Abstract Automated Driving Functions (ADFs) need to comply with spatial properties of varied complexity while driving on public roads. Since such situations are safety-critical in nature, it is necessary to continuously check ADFs for compliance with their spatial properties. Due to their complexity, such spatial properties need to be formalized to enable their automated checking. Scene Graphs (SGs) allow for an explicit structured representation of objects present in a traffic scene and their spatial relationships to each other. In this paper, we build upon the SG construct and propose the Abstract Scene Graph (ASG) formalism to formalize spatial properties of ADFs. We show using real-world examples how spatial properties can be formalized using ASGs. Finally, we present a framework that uses ASGs to perform Runtime Monitoring of ADFs. To this end, we also show algorithmically how a spatial property formalized as an ASG can be satisfied by ADF system behaviour.
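
For a rough intuition of how matching a concrete scene against an abstract spatial pattern can work, here is a small Python sketch (ours, not the authors' ASG formalism; every object, type, and relation name below is invented): a scene graph is a set of typed objects plus relation triples, and a pattern is matched by searching for a type-respecting assignment of scene objects to placeholders.

    from itertools import permutations

    # Concrete scene graph: objects with types, edges with spatial relations.
    scene_objects = {"ego": "car", "c1": "cyclist", "l1": "lane"}
    scene_edges = {("ego", "on", "l1"), ("c1", "on", "l1"),
                   ("ego", "lateral_clear_of", "c1")}

    # Abstract pattern: typed placeholders plus required relations.
    pattern_objects = {"E": "car", "X": "cyclist"}
    pattern_edges = {("E", "lateral_clear_of", "X")}

    def matches(scene_obj, scene_edg, pat_obj, pat_edg):
        """True if some type-respecting assignment of scene objects to
        pattern placeholders realises every pattern edge."""
        placeholders = list(pat_obj)
        for chosen in permutations(scene_obj, len(placeholders)):
            binding = dict(zip(placeholders, chosen))
            if any(scene_obj[binding[p]] != pat_obj[p] for p in placeholders):
                continue  # type mismatch
            if all((binding[s], r, binding[t]) in scene_edg
                   for (s, r, t) in pat_edg):
                return True
        return False

    print(matches(scene_objects, scene_edges, pattern_objects, pattern_edges))

A runtime monitor in this spirit would rebuild the scene graph from sensor data at each step and re-run the match; the paper gives the actual formal construction and algorithm.
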
Achieving Safe Control Online through Integration of Harmonic Control Lyapunov–Barrier Functions with Unsafe Object-Centric Action Policies (regular)
  • Authors: Marlow Fawn and Matthias Scheutz
  • Paper: https://dx.doi.org/10.4204/EPTCS.436.7
  • Abstract We propose a method for combining Harmonic Control Lyapunov–Barrier Functions (HCLBFs) derived from Signal Temporal Logic (STL) specifications with any given robot policy to turn an unsafe policy into a safe one with formal guarantees. The two components are combined via HCLBF-derived safety certificates, thus producing commands that preserve both safety and task-driven behavior. We demonstrate the operation of the principles with a simple proof-of-concept implementation for an object-centric force-based policy trained through reinforcement learning for a movement task of a stationary robot arm that is able to avoid colliding with obstacles on a table top after combining the policy with the safety constraints. The proposed method can be generalized to more complex specifications and dynamic task settings.
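
For background, safety filters of this general kind are often phrased as a pointwise optimisation that minimally corrects the policy's command subject to a certificate constraint. The following is the standard control-barrier-style formulation, not the paper's HCLBF construction, whose details differ:

    \[
    \begin{aligned}
    u^{*}(x) \;=\; \arg\min_{u}\; & \lVert u - u_{\pi}(x) \rVert^{2} \\
    \text{s.t.}\quad & \nabla h(x)^{\top} f(x,u) \;\ge\; -\alpha\, h(x),
    \end{aligned}
    \]

where h is the safety certificate (safe states satisfy h(x) >= 0), u_pi is the possibly unsafe policy's command, f is the system dynamics, and alpha > 0 tunes how early the filter intervenes.
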
Analyzing many simulations of hybrid programs in Lince (short)
  • Authors: Reydel Arrieta Olano, Renato Neves, José Proença and Patrick Meumeu Yomsi
  • Paper: https://dx.doi.org/10.4204/EPTCS.436.9
  • Abstract Hybrid systems are increasingly used in critical applications such as medical devices, infrastructure systems, and autonomous vehicles. Lince is an academic tool for specifying and simulating such systems using a C-like language with differential equations. This paper presents recent experiments that enhance Lince with mechanisms for executing multiple simulation variants and generating histograms that quantify the frequency with which a given property holds. We illustrate our extended Lince using variations of an adaptive cruise control system.
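
The idea of running many simulation variants and tallying how often a property holds is easy to mock up; here is a minimal Python sketch (illustrative only; Lince itself uses a C-like language with differential equations, and the controller, parameters, and property below are invented):

    import random

    def simulate(gap, v_ego, v_lead=10.0, steps=400, dt=0.05):
        """One simulation variant; True iff 'no collision' holds."""
        for _ in range(steps):
            a = -3.0 if gap < 15.0 else 0.0   # brake when the gap is small
            v_ego = max(0.0, v_ego + a * dt)
            gap += (v_lead - v_ego) * dt
            if gap <= 0.0:
                return False                  # property violated
        return True

    random.seed(0)
    runs = [simulate(gap=random.uniform(5.0, 40.0),
                     v_ego=random.uniform(10.0, 20.0)) for _ in range(1000)]
    print(f"'no collision' held in {sum(runs)} of {len(runs)} runs")

Binning the sampled parameters instead of just counting gives exactly the kind of histogram the paper describes.
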
Context-aware, Ante-hoc Explanations of Driving Behaviour (regular)
  • Authors: Dominik Grundt, Ishan Saxena, Malte Petersen, Bernd Westphal and Eike Möhlmann
  • Paper: https://dx.doi.org/10.4204/EPTCS.436.11
  • Abstract Autonomous vehicles (AVs) must be both safe and trustworthy to gain social acceptance and become a viable option for everyday public transportation. Explanations about the system behaviour can increase safety and trust in AVs. However, the integration of AI-based driving functions in AVs makes explainability challenging, as decision-making processes are often opaque. The field of Explainability Engineering addresses this challenge by designing a system’s explanation model already at design time, based on analysing requirements, operational domains, and stakeholder needs. Its goal is to provide explanations that are both correct and good, considering system implementation, the current traffic context, and explanation goals for different stakeholders. To address these challenges, we propose an approach that enables context-aware, ante-hoc explanations of (un)expectable driving manoeuvres at runtime. The visual yet formal language Traffic Sequence Charts (TSC) is used to formalise explanation contexts, as well as corresponding (un)expectable driving manoeuvres. A dedicated runtime monitor enables context recognition and the ante-hoc presentation of explanations at runtime. In combination, we aim to support the bridging of correct and good explanations. Our method is demonstrated in a simulated overtaking scenario.
Formal Verification of Local Robustness of a Classification Algorithm for a Spatial Use Case (regular)
  • Authors: Delphine Longuet, Amira Elouazzani, Alejandro Penacho Riveiros and Nicola Bastianello
  • Paper: https://dx.doi.org/10.4204/EPTCS.436.4
  • Abstract Failures in satellite components are costly and challenging to address, often requiring significant human and material resources. Embedding a hybrid AI-based system for fault detection directly in the satellite can greatly reduce this burden by allowing earlier detection. However, such systems must operate with extremely high reliability. To ensure this level of dependability, we employ the formal verification tool Marabou to verify the local robustness of the neural network models used in the AI-based algorithm. This tool allows us to quantify how much a model’s input can be perturbed before its output behavior becomes unstable, thereby improving trustworthiness with respect to its performance under uncertainty.
Model Learning for Adjusting the Level of Automation in HCPS (regular)
  • Authors: Mehrnoush Hajnorouzi, Astrid Rakow and Martin Fränzle
  • Paper: https://dx.doi.org/10.4204/EPTCS.436.10
  • Abstract The ever-increasing level of automation in human-centered systems leads to a demand for advanced design methods for automation control at the human-machine interface. The complex, multi-layered interaction between the automated system and the human must be thoroughly checked, especially in safety-critical applications, to identify potential risks to overall safety. This requires a strategy that not only optimizes the capabilities of the automation, but also takes into account the variability of human input. In this paper, we present a model-based approach for development of a shared control that takes into account human behavior and the impact of automation on humans. The approach is based on a cognitive architecture that is transformed into a finite-state automaton through active model learning. The abstraction is then integrated into a game-theoretic framework aimed at control synthesis for automation with respect to safety requirements. The learned model is incrementally refined until it represents a sufficiently precise model of human interaction.
Mutation Testing for Industrial Robotic Systems (regular)
  • Authors: Marcela Gonçalves dos Santos, Sylvain Hallé and Fabio Petrillo
  • Paper: https://dx.doi.org/10.4204/EPTCS.436.5
  • Abstract Industrial robotic systems (IRS) are increasingly deployed in diverse environments, where failures can result in severe accidents and costly downtime. Ensuring the reliability of the software controlling these systems is therefore critical. Mutation testing, a technique widely used in software engineering, evaluates the effectiveness of test suites by introducing small faults, or mutants, into the code. However, traditional mutation operators are poorly suited to robotic programs, which involve message-based commands and interactions with the physical world. This paper explores the adaptation of mutation testing to IRS by defining domain-specific mutation operators that capture the semantics of robot actions and sensor readings. We propose a methodology for generating meaningful mutants at the level of high-level read and write operations, including movement, gripper actions, and sensor noise injection. An empirical study on a pick-and-place scenario demonstrates that our approach produces more informative mutants and reduces the number of invalid or equivalent cases compared to conventional operators. Results highlight the potential of mutation testing to enhance test suite quality and contribute to safer, more reliable industrial robotic systems.
Safe-ROS: An Architecture for Autonomous Robots in Safety-Critical Domains (regular)
  • Authors: Diana Benjumea Hernandez, Marie Farrell and Louise Dennis
  • Paper: https://dx.doi.org/10.4204/EPTCS.436.6
  • Abstract Deploying autonomous robots in safety-critical domains requires architectures that ensure operational effectiveness and safety compliance. In this paper, we contribute the Safe-ROS architecture for developing reliable and verifiable autonomous robots in such domains. It features two distinct subsystems: (1) an intelligent control system that is responsible for normal/routine operations, and (2) a Safety System consisting of Safety Instrumented Functions (SIFs) that provide formally verifiable independent oversight. We demonstrate Safe-ROS on an AgileX Scout Mini robot performing autonomous inspection in a nuclear environment. One safety requirement is selected and instantiated as a SIF. To support verification, we implement the SIF as a cognitive agent, programmed to stop the robot whenever it detects that it is too close to an obstacle. We verify that the agent meets the safety requirement and integrate it into the autonomous inspection. This integration is also verified, and the full deployment is validated in a Gazebo simulation and in lab testing. We evaluate this architecture in the context of the UK nuclear sector, where safety and regulation are crucial aspects of deployment. Success criteria include the development of a formal property from the safety requirement, the implementation and verification of the SIF, and the integration of the SIF into the operational robotic autonomous system. Our results demonstrate that the Safe-ROS architecture can provide verifiable safety oversight while deploying autonomous robots in safety-critical domains, offering a robust framework that can be extended to additional requirements and various applications.
Towards A Catalogue of Formalised Requirement Patterns for Robotic Space Missions (regular)
  • Authors: Mahdi Etumi, Hazel Taylor and Marie Farrell
  • Paper: https://dx.doi.org/10.4204/EPTCS.436.12
  • Abstract In the development of safety- and mission-critical systems, including autonomous space robotic missions, complex behaviour is expressed during the requirements elicitation phase. Requirements are typically expressed using natural language, which is ambiguous and not amenable to formal verification methods that can provide robust guarantees of system behaviour. To support the definition of formal requirements, specification patterns provide reusable, logic-based templates. However, there remains a gap between the natural language requirements and logical specification patterns. Tools like NASA's Formal Requirements Elicitation Tool (FRET) aim to bridge this gap. A suite of robotic specification patterns, along with their FRET formalisation, already exists in the literature. These pre-existing requirement patterns are domain agnostic and, in this paper, we seek to explore their applicability for space missions. To achieve this we carried out a literature review of existing space missions and formalised their requirements using FRET, contributing a corpus of space mission requirements. We categorised these requirements using existing specification patterns, which demonstrated the applicability of existing patterns in space missions. However, not all of the requirements that we formalised could be assigned an existing pattern, so we have contributed 5 new requirement specification patterns as well as several variants of the existing and new patterns. We also conducted an expert evaluation of the new patterns, highlighting their benefits and limitations.
Watchdogs and Oracles: Runtime Verification Meets Large Language Models for Autonomous Systems (short)
  • Authors: Angelo Ferrando
  • Paper: https://dx.doi.org/10.4204/EPTCS.436.8
  • Abstract Assuring the safety and trustworthiness of autonomous systems is particularly difficult when learning-enabled components and open environments are involved. Formal methods provide strong guarantees but depend on complete models and static assumptions. Runtime verification (RV) complements them by monitoring executions at run time and, in its predictive variants, by anticipating potential violations. Large language models (LLMs), meanwhile, excel at translating natural language into formal artefacts and recognising patterns in data, yet they remain error-prone and lack formal guarantees. This vision paper argues for a symbiotic integration of RV and LLMs. RV can serve as a guardrail for LLM-driven autonomy, while LLMs can extend RV by assisting specification capture, supporting anticipatory reasoning, and helping to handle uncertainty. We outline how this mutual reinforcement differs from existing surveys and roadmaps, discuss challenges and certification implications, and identify future research directions towards dependable autonomy.

FMAS 2024

The sixth edition, co-located with the International Conference on Integrated Formal Methods (iFM) 2024, hosted by the University of Manchester, UK.

A Case Study on Numerical Analysis of a Path Computation Algorithm
  • Authors: Grégoire Boussu, Nikolai Kosmatov and Franck Vedrine
  • Paper: https://dx.doi.org/10.4204/EPTCS.411.8
  • Abstract Lack of numerical precision in control software --- in particular, related to trajectory computation --- can lead to incorrect results with costly or even catastrophic consequences. Various tools have been proposed to analyze the precision of program computations. This paper presents a case study on numerical analysis of an industrial implementation of the fast marching algorithm, a popular path computation algorithm frequently used for trajectory computation. We briefly describe the selected tools, present the applied methodology, highlight some attention points, summarize the results and outline future work directions.
Autonomous System Safety Properties with Multi-Machine Hybrid Event-B
  • Authors: Richard Banach
  • Paper: https://dx.doi.org/10.4204/EPTCS.411.1
  • Abstract Event-B is a well known methodology for the verified design and development of systems that can be characterised as discrete transition systems. Hybrid Event-B is a conservative extension that interleaves the discrete transitions of Event-B (assumed to be temporally isolated) with episodes of continuously varying state change. While a single Hybrid Event-B machine is sufficient for applications with a single locus of control, it will not do for autonomous systems, which have several loci of control by default. Multi-machine Hybrid Event-B is designed to allow the specification of systems with several loci of control. The formalism is succinctly surveyed, pointing out the subtle semantic issues involved. The multi-machine formalism is then used to specify a relatively simple incident response system, involving a controller, two drones and three responders, working in a partly coordinated and partly independent fashion to manage a putative hazardous scenario.
Creating a Formally Verified Neural Network for Autonomous Navigation: An Experience Report
  • Authors: Syed Ali Asadullah Bukhari, Thomas Flinkow, Medet Inkarbekov, Barak A. Pearlmutter and Rosemary Monahan
  • Paper: https://dx.doi.org/10.4204/EPTCS.411.12
  • Abstract The increased reliance of self-driving vehicles on neural networks opens up the challenge of their verification. In this paper we present an experience report, describing a case study which we undertook to explore the design and training of a neural network on a custom dataset for vision-based autonomous navigation. We are particularly interested in the use of machine learning with differentiable logics to obtain networks satisfying basic safety properties by design, guaranteeing the behaviour of the neural network after training. We motivate the choice of a suitable neural network verifier for our purposes and report our observations on the use of neural network verifiers for self-driving systems.
Cross-layer Formal Verification of Robotic Systems
  • Authors: Sylvain Raïs, Julien Brunel, David Doose, and Frédéric Herbreteau
  • Paper: https://dx.doi.org/10.4204/EPTCS.411.9
  • Abstract The design and verification of robotic systems are often divided into several layers for reasons of system modularity and complexity. However, some system properties can only be studied by considering several layers simultaneously. We propose a cross-layer verification method to check the expected properties of concrete robotic systems. Our method verifies a layer using abstractions of other layers. We propose two approaches: refining the models of the abstract layers and refining the property under verification. A combination of these two approaches seems the most promising to ensure model genericity and avoid the problem of state-space explosion.
Formal simulation and visualisation of hybrid programs
  • Authors: Pedro Mendes, Ricardo Correia, Renato Neves and José Proença
  • Paper: https://dx.doi.org/10.4204/EPTCS.411.2
  • Abstract The design and analysis of systems that combine computational, discrete behaviour with physical processes' continuous dynamics – such as movement, velocity, and voltage – is a famously challenging task. Several theoretical results from programming theory emerged in the last decades to tackle the issue, some of which are the basis of a proof-of-concept tool, called Lince, that aids in the analysis of such systems, by presenting simulations of their respective behaviours. Being a proof-of-concept, the tool is quite limited w.r.t. usability, and when attempting to apply it to a set of common, concrete problems, involving autonomous driving and others, it either simply cannot simulate them or fails to provide a satisfactory user-experience. This work thus complements the aforementioned theoretical approaches with a more practical perspective by improving Lince along several dimensions: to name a few, richer syntactic constructs, more operations, more informative plotting systems and error messages, and better overall performance. We illustrate our improvements via different small case-studies that involve both autonomous driving and electrical systems.
Formalizing Stateful Behavior Trees
  • Authors: Serena Serbinowska, Preston Robinette, Gabor Karsai and Taylor T Johnson
  • Paper: https://dx.doi.org/10.4204/EPTCS.411.14
  • Abstract Behavior Trees (BTs) are high-level controllers that are useful in a variety of planning tasks and are gaining traction in robotic mission planning. As they gain popularity in safety-critical domains, it is important to formalize their syntax and semantics, as well as verify properties for them. In this paper, we formalize a class of BTs we call Stateful Behavior Trees (SBTs) that have auxiliary variables and operate in an environment that can change over time. SBTs have access to persistent shared memory—often known as a blackboard—that keeps track of these auxiliary variables. We demonstrate that SBTs are equivalent in computational power to Turing Machines when the blackboard can store mathematical (unbounded) integers. We also identify conditions where SBTs have computational power equivalent to finite state automata, specifically where the auxiliary variables are of finitary types. We present a domain specific language (DSL) for writing SBTs and adapt the tool BehaVerify for use with this DSL. This new DSL in BehaVerify supports interfacing with popular BT libraries in Python, and also provides generation of Haskell code and nuXmv models, the latter of which are used for model checking temporal logic specifications for the SBTs. We include examples and scalability results where BehaVerify outperforms another verification tool (MoVe4BT) by a factor of 100.
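
To see what makes such trees stateful, consider this minimal Python sketch (ours, not the paper's DSL or BehaVerify): nodes return SUCCESS or FAILURE and share a blackboard of auxiliary variables, so the outcome of a tick can depend on earlier ticks.

    SUCCESS, FAILURE = "SUCCESS", "FAILURE"

    class Sequence:
        def __init__(self, *children):
            self.children = children
        def tick(self, bb):
            for child in self.children:
                if child.tick(bb) == FAILURE:
                    return FAILURE
            return SUCCESS

    class Condition:
        def __init__(self, pred):
            self.pred = pred
        def tick(self, bb):
            return SUCCESS if self.pred(bb) else FAILURE

    class Action:
        def __init__(self, effect):
            self.effect = effect
        def tick(self, bb):
            self.effect(bb)
            return SUCCESS

    # "Retry at most 3 times" needs a counter, an auxiliary blackboard variable.
    tree = Sequence(
        Condition(lambda bb: bb["retries"] < 3),
        Action(lambda bb: bb.update(retries=bb["retries"] + 1)),
    )

    blackboard = {"retries": 0}
    for step in range(5):
        print(step, tree.tick(blackboard), blackboard)

The retry counter here is finitary, which is the regime where the paper shows SBTs stay within finite-automaton power; with unbounded integers on the blackboard they reach Turing-machine power.
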
Model Checking for Reinforcement Learning in Autonomous Driving: One Can Do More Than You Think!
  • Authors: Rong Gu
  • Paper: https://dx.doi.org/10.4204/EPTCS.411.11
  • Abstract Most reinforcement learning (RL) platforms use high-level programming languages, such as OpenAI Gymnasium using Python. These frameworks provide various APIs and benchmarks for testing RL algorithms in different domains, such as autonomous driving (AD) and robotics. These platforms often emphasise the design of RL algorithms and the training performance but neglect the correctness of models and reward functions, which can be crucial for the successful application of RL. This paper proposes using formal methods to model AD systems and demonstrates how model checking (MC) can be used in RL for AD. Most studies combining MC and RL focus on safety, such as safety shields. However, this paper shows different facets where MC can strengthen RL. First, an MC-based model pre-analysis can reveal bugs with respect to sensor accuracy and learning step size. This step serves as preparation for RL, which saves time if bugs exist and deepens users' understanding of the target system. Second, reward automata can greatly improve learning performance, especially when there are multiple learning objectives. All these findings are supported by experiments.
Model checking and verification of synchronisation properties of cobot welding
  • Authors: Yvonne Murray, Henrik Nordlie, David A. Anisi, Pedro Ribeiro and Ana Cavalcanti
  • Paper: https://dx.doi.org/10.4204/EPTCS.411.6
  • Abstract This work describes the use of model checking to verify synchronisation properties of an industrial welding system consisting of a cobot arm and an external turntable. The robots move synchronously, but sometimes get out of synchronisation. This results in inadequate weld quality in problem areas, such as around corners. These mistakes are costly, since time is lost both in the robotic welding and in the manual repairs needed to improve the weld. Verification of the synchronisation properties showed that they were fulfilled as long as assumptions of correctness were made about parts outside the scope of the model, indicating limitations in the hardware. These results motivated a re-calibration of the real-life system, drastically improving the welding results.
Open Challenges in the Formal Verification of Autonomous Driving
  • Authors: Paolo Burgio, Angelo Ferrando, and Marco Villani
  • Paper: https://dx.doi.org/10.4204/EPTCS.411.13
  • Abstract In the realm of autonomous driving, the development and integration of highly complex and heterogeneous systems are standard practice. Modern vehicles are not monolithic systems; instead, they are composed of diverse hardware components, each running its own software systems. An autonomous vehicle comprises numerous independent components, often developed by different and potentially competing companies. This diversity poses significant challenges for the certification process, as it necessitates certifying components that may not disclose their internal behaviour (black-boxes). In this paper, we present a real-world case study of an autonomous driving system, identify key open challenges associated with its development and integration, and explore how formal verification techniques can address these challenges to ensure system reliability and safety.
ROSMonitoring 2.0: Extending ROS Runtime Verification to Services and Ordered Topics
  • Authors: Maryam Ghaffari Saadat, Angelo Ferrando, Louise A. Dennis, and Michael Fisher
  • Paper: https://dx.doi.org/10.4204/EPTCS.411.3
  • Abstract Formal verification of robotic applications presents challenges due to their hybrid nature and distributed architecture. This paper introduces ROSMonitoring 2.0, an extension of ROSMonitoring designed to facilitate the monitoring of both topics and services while considering the order in which messages are published and received. The framework has been enhanced to support these novel features for both ROS1 and ROS2 environments, offering improved real-time support, security, scalability, and interoperability. We discuss the modifications made to accommodate these advancements and present results obtained from a case study involving the runtime monitoring of specific components of a fire-fighting Uncrewed Aerial Vehicle (UAV).
RV4Chatbot: Are Chatbots Allowed to Dream of Electric Sheep?
  • Authors: Andrea Gatti, Viviana Mascardi, and Angelo Ferrando
  • Paper: https://dx.doi.org/10.4204/EPTCS.411.5
  • Abstract Chatbots have become integral to various application domains, including those with safety-critical considerations. As a result, there is a pressing need for methods that ensure chatbots consistently adhere to expected, safe behaviours. In this paper, we introduce RV4Chatbot, a Runtime Verification framework designed to monitor deviations in chatbot behaviour. We formalise expected behaviours as interaction protocols between the user and the chatbot. We present the RV4Chatbot design and describe two implementations that instantiate it: RV4Rasa, for monitoring chatbots created with the Rasa framework, and RV4Dialogflow, for monitoring Dialogflow chatbots. Additionally, we detail experiments conducted in a factory automation scenario using both RV4Rasa and RV4Dialogflow.
Synthesising Robust Controllers for Robot Collectives with Recurrent Tasks: A Case Study
  • Authors: Till Schnittka and Mario Gleirscher
  • Paper: https://dx.doi.org/10.4204/EPTCS.411.7
  • Abstract When designing correct-by-construction controllers for autonomous collectives, three key challenges are the task specification, the modelling, and its use at practical scale. In this paper, we focus on a simple yet useful abstraction for high-level controller synthesis for robot collectives with optimisation goals (e.g., maximum cleanliness, minimum energy consumption) as well as recurrence (e.g., maintain a contamination threshold, avoid running out of charge) and safety (e.g., mutually exclusive room occupation) constraints. Due to technical limitations (related to scalability and using constraints in the synthesis), we simplify a graph-based setting using stochastic two-player games into a single-player game on partially observable Markov decision processes (POMDPs). Robustness against environmental uncertainty is encoded via partial observability. Linear-time correctness properties are verified separately after synthesising the POMDP strategy. We contribute at-scale guidance on POMDP modelling and controller synthesis for tasked robot collectives, exemplified by the scenario of battery-driven robots responsible for cleaning public buildings with utilisation constraints.
Using Formal Models, Safety Shields and Certified Control to Validate AI-Based Train Systems
  • Authors: Jan Gruteser, Jan Roßbach, Fabian Vu and Michael Leuschel
  • Paper: https://dx.doi.org/10.4204/EPTCS.411.10
  • Abstract The certification of autonomous systems is an important concern in science and industry. The KI-LOK project explores new methods for certifying and safely integrating AI components into autonomous trains. We pursued a two-layered approach: (1) ensuring the safety of the steering system by formal analysis using the B method, and (2) improving the reliability of the perception system with a runtime certificate checker. This work links both strategies within a demonstrator that runs simulations on the formal model, controlled by the real AI output and the real certificate checker. The demonstrator is integrated into the formal method tool ProB. This enables runtime monitoring, runtime verification, and statistical validation of formal safety properties using a formal B model. Consequently, one can detect and analyse potential vulnerabilities and weaknesses of the AI and the certificate checker. We apply these techniques to a signal detection case study, demonstrating the effectiveness and presenting initial findings.
Verification of Behavior Trees with Contingency Monitors
  • Authors: Serena Serbinowska, Nicholas Potteiger, Anne Tumlin and Taylor T Johnson
  • Paper: https://dx.doi.org/10.4204/EPTCS.411.4
  • Abstract Behavior Trees (BTs) are high level controllers that have found use in a wide range of robotics tasks. As they grow in popularity and usage, it is crucial to ensure that the appropriate tools and methods are available for ensuring they work as intended. To that end, we created a new methodology by which to create Runtime Monitors for BTs. These monitors can be used by the BT to correct when undesirable behavior is detected and are capable of handling LTL specifications. We demonstrate that in terms of runtime, the generated monitors are on par with monitors generated by existing tools and highlight certain features that make our method more desirable in various situations. We note that our method allows for our monitors to be swapped out with alternate monitors with fairly minimal user effort. Finally, our method ties in with our existing tool, BehaVerify, allowing for the verification of BTs with monitors.

FMAS 2023

The fifth edition, co-located with the International Conference on Integrated Formal Methods (iFM) 2023, hosted by Leiden University, The Netherlands.

3vLTL: a Tool to Generate Automata for Three-valued LTL
  • Authors: Francesco Belardinelli, Angelo Ferrando, and Vadim Malvone
  • Paper: https://dx.doi.org/10.4204/EPTCS.395.13
  • Abstract Multi-valued logics have a long tradition in the literature on system verification, including run-time verification. However, comparatively fewer model-checking tools have been developed for multi-valued specification languages. We present 3vLTL, a tool to generate Büchi automata from formulas in Linear-time Temporal Logic (LTL) interpreted on a three-valued semantics. Given an LTL formula, a set of atomic propositions as alphabet for the automaton, and a truth value, our procedure generates a Büchi automaton that accepts all the words that assign the chosen truth value to the LTL formula. Given the particular type of the output of the tool, it can also be seamlessly processed by third-party libraries in a natural way. That is, the Büchi automaton can then be used in the context of formal verification to check whether an LTL formula is true, false, or undefined on a given model.
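
For orientation, the usual three-valued reading of LTL over a finite prefix u (standard in runtime verification; the tool's precise semantics is defined in the paper) is:

    \[
    \llbracket u \models \varphi \rrbracket \;=\;
    \begin{cases}
    \top & \text{if } uw \models \varphi \text{ for every infinite extension } w,\\
    \bot & \text{if } uw \not\models \varphi \text{ for every infinite extension } w,\\
    ?    & \text{otherwise.}
    \end{cases}
    \]
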
Automatic Generation of Scenarios for System-level Simulation-based Verification of Autonomous Driving Systems
  • Authors: Srajan Goyal, Alberto Griggio, Jacob Kimblad, and Stefano Tonetta
  • Paper: https://dx.doi.org/10.4204/EPTCS.395.8
  • Abstract With the increasing complexity of Automated Driving Systems (ADS), ensuring their safety and reliability has become a critical challenge. The Verification and Validation (V&V) of these systems is particularly demanding when AI components are employed to implement perception and/or control functions. In the ESA-funded project VIVAS, we developed a generic framework for system-level simulation-based V&V of autonomous systems. The approach is based on a simulation model of the system, an abstract model that describes symbolically the system behavior, and formal methods to generate scenarios and to verify the simulation executions. Various coverage criteria can be defined to guide the automated generation of the scenarios. In this paper, we describe the instantiation of the VIVAS framework for an ADS case study. This is based on the integration of CARLA, a widely-used driving simulator, and its ScenarioRunner tool, which enables the creation of diverse and complex driving scenarios. This is also used in the CARLA Autonomous Driving Challenge to validate different ADS agents for perception and control based on AI, shared by the CARLA community. We describe the development of an abstract ADS model and the formulation of a coverage criterion that focuses on the behaviors of vehicles relative to the vehicle with ADS under verification. Leveraging the VIVAS framework, we generate and execute various driving scenarios, thus testing the capabilities of the AI components. The results show the effectiveness of VIVAS in automatically generating scenarios for system-level simulation-based V&V of an automated driving system using CARLA and ScenarioRunner. Therefore, they highlight the potential of the approach as a powerful tool in the future of ADS V&V methodologies.
Certified Control for Train Sign Classification
  • Authors: Jan Roßbach and Michael Leuschel
  • Paper: https://dx.doi.org/10.4204/EPTCS.395.5
  • Abstract The study of fully autonomous train systems for specialized applications is of high economic interest. The KI-LOK research project is involved in developing new methods for certifying such systems. Here we explore the utility of a certified control architecture for a runtime monitor that prevents false positive detection of traffic signs in an AI-based perception system. The monitor uses classical computer vision algorithms to check if the signs -- detected by an AI object detection model -- fit pre-defined specifications. We provide such specifications for some critical signs and integrate a Python prototype of the monitor with a popular object detection model to measure relevant performance metrics on generated data. Our initial results are promising, achieving considerable precision gains with only minor recall reduction; however, further investigation into generalization possibilities will be necessary.
Comparing Differentiable Logics for Learning Systems: A Research Preview
  • Authors: Thomas Flinkow, Barak A. Pearlmutter, and Rosemary Monahan
  • Paper: https://dx.doi.org/10.4204/EPTCS.395.3
  • Abstract Extensive research on formal verification of machine learning (ML) systems indicates that learning from data alone often fails to capture underlying background knowledge. A variety of verifiers have been developed to ensure that a machine-learnt model satisfies correctness and safety properties; however, these verifiers typically assume a trained network with fixed weights. ML-enabled autonomous systems are required to not only detect incorrect predictions, but should also possess the ability to self-correct, continuously improving and adapting. A promising approach for creating ML models that inherently satisfy constraints is to encode background knowledge as logical constraints that guide the learning process via so-called differentiable logics. In this research preview, we compare and evaluate various logics from the literature in weakly-supervised contexts, presenting our findings and highlighting open problems for future work. Our experimental results are broadly consistent with results reported previously in the literature; however, learning with differentiable logics introduces a new hyperparameter that is difficult to tune and has a significant influence on the effectiveness of the logics.
Correct-by-Construction Control for Stochastic and Uncertain Dynamical Models via Formal Abstractions
  • Authors: Thom Badings, Licio Romao, Alessandro Abate, and Nils Jansen
  • Paper: https://dx.doi.org/10.4204/EPTCS.395.10
  • Abstract Automated synthesis of correct-by-construction controllers for autonomous systems is crucial for their deployment in safety-critical scenarios. Such autonomous systems are naturally modeled as stochastic dynamical models. The general problem is to compute a controller that provably satisfies a given task, represented as a probabilistic temporal logic specification. However, factors such as stochastic uncertainty, imprecisely known parameters, and hybrid features make this problem challenging. We have developed an abstraction framework that can be used to solve this problem under various modeling assumptions. Our approach is based on a robust finite-state abstraction of the stochastic dynamical model in the form of a Markov decision process with intervals of probabilities (iMDP). We use state-of-the-art verification techniques to compute an optimal policy on the iMDP with guarantees for satisfying the given specification. We then show that, by construction, we can refine this policy into a feedback controller for which these guarantees carry over to the dynamical model. In this short paper, we survey our recent research in this area and highlight two challenges (related to scalability and dealing with nonlinear dynamics) that we aim to address with our ongoing research.
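
For intuition (notation ours, not necessarily the authors'), an iMDP constrains each transition probability to an interval, and the synthesised policy pi must meet the specification phi with probability at least lambda no matter how the intervals are resolved:

    \[
    \underline{P}(s,a,s') \;\le\; P(s,a,s') \;\le\; \overline{P}(s,a,s'),
    \qquad \sum_{s'} P(s,a,s') = 1,
    \qquad
    \min_{P \in [\underline{P},\,\overline{P}]} \Pr\nolimits^{\pi}_{P}(\varphi) \;\ge\; \lambda.
    \]
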
Enforcing Timing Properties in Motorway Traffic
  • Authors: Christopher Bischopink
  • Paper: https://dx.doi.org/10.4204/EPTCS.395.9
  • Abstract In previous work, we proposed a runtime enforcement approach to deal with timing properties in motorway traffic, which are present in the form of Timed Multi-Lane Spatial Logic (TMLSL) formulae, a logic tailored to express both spatial and timing properties. Employing communication between the cars, we utilised a nondeterministic controller guessing which actions to execute next for each car, before asking the local monitors of the cars for permission to execute the announced actions. In this contribution, we consider a more sensible controller that only considers sequences that satisfy its own properties. This is done utilising region automata that one can generate from the cars' specifications. Along the way, we also obtained a small decidability result for TMLSL.
Extending Neural Network Verification to a Larger Family of Piece-wise Linear Activation Functions
  • Authors: László Antal, Hana Masara, and Erika Ábrahám
  • Paper: https://cgi.cse.unsw.edu.au/~eptcs/paper.cgi?FMAS2023.4
  • Abstract In this paper, we extend an available neural network verification technique to support a wider class of piece-wise linear activation functions. Furthermore, we extend the algorithms, which in their original form provide exact or over-approximative results, respectively, for bounded input sets represented as star sets, so that they also allow unbounded input sets. We implemented our algorithms and demonstrate their effectiveness on some case studies.
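
For context, a star set (as used in this line of neural-network verification work) represents an input set as an affine image of constrained coefficients; schematically:

    \[
    \Theta \;=\; \Bigl\{\, x = c + \sum_{i=1}^{m} \alpha_i v_i \;\Bigm|\; P(\alpha_1,\dots,\alpha_m) \,\Bigr\},
    \]

with centre c, basis vectors v_i, and a predicate P given by linear constraints over the coefficients. Propagating Theta through a piece-wise linear layer splits it into finitely many stars, one per activation region, which is what the extended algorithms manipulate.
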
Formal Verification of Long Short-Term Memory based Audio Classifiers: A Star based Approach
  • Authors: Neelanjana Pal and Taylor T Johnson
  • Paper: https://dx.doi.org/10.4204/EPTCS.395.12
  • Abstract Formally verifying audio classification systems is essential to ensure accurate signal classification across real-world applications like surveillance, automotive voice commands, and multimedia content management, preventing potential errors with serious consequences. Drawing from recent research, this study advances the utilization of star-set-based formal verification, extended through reachability analysis, tailored explicitly for Long Short-Term Memory architectures and their Convolutional variations within the audio classification domain. By conceptualizing the classification process as a sequence of set operations, the star set-based reachability approach streamlines the exploration of potential operational states attainable by the system. The paper serves as an encompassing case study, validating and verifying sequence audio classification analytics within real-world contexts. It accentuates the necessity for robustness verification to ensure precise and dependable predictions, particularly in light of the impact of noise on the accuracy of output classifications.
Model checking for Robot Reactive Planning
  • Authors: Christopher Chandler, Bernd Porr, Alice Miller, and Giulia Lafratta
  • Paper: https://dx.doi.org/10.4204/EPTCS.395.6
  • Abstract In this paper, we show how model checking can be used to create multi-step plans for a differential drive wheeled robot so that it can avoid immediate danger. Using a small, purpose-built model checking algorithm in situ, we generate plans in real time in a way that reflects the egocentric reactive response of simple biological agents. Our approach is based on chaining temporary control systems which are spawned to eliminate disturbances in the local environment that disrupt an autonomous agent from its preferred action (or resting state). The method involves a novel discretization of 2D LiDAR data which is sensitive to bounded stochastic variations in the immediate environment. We operationalise multi-step planning using invariant checking by forward depth-first search, using a cul-de-sac scenario as a first test case. Our results demonstrate that model checking can be used to plan efficient trajectories for local obstacle avoidance, improving on the performance of a reactive agent which can only plan one step. We achieve this in near real-time using no pre-computed data. While our method has limitations, we believe our approach shows promise as an avenue for the development of safe, reliable and transparent trajectory planning in the context of autonomous vehicles.
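
A toy version of invariant checking by bounded forward depth-first search is easy to write down; the Python sketch below (ours) replaces the paper's LiDAR-based discretisation with a hand-made grid and an invented obstacle layout:

    # States are (x, y) cells; the invariant is "never enter an obstacle
    # cell"; we search depth-first for a bounded plan that reaches the exit
    # while every visited state keeps the invariant.
    OBSTACLES = {(1, 0), (1, 1)}   # a wall forming a small cul-de-sac
    EXIT = (2, 2)

    def moves(s):
        x, y = s
        for nxt in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]:
            if 0 <= nxt[0] <= 2 and 0 <= nxt[1] <= 2 and nxt not in OBSTACLES:
                yield nxt          # successor satisfies the invariant

    def dfs_plan(state, bound, path=()):
        if state == EXIT:
            return path
        if bound == 0:
            return None
        for nxt in moves(state):
            if nxt in path:        # avoid revisiting along this branch
                continue
            plan = dfs_plan(nxt, bound - 1, path + (nxt,))
            if plan is not None:
                return plan
        return None

    print(dfs_plan((0, 0), bound=6))
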
Online Reachability Analysis and Space Convexification for Autonomous Racing
  • Authors: Sergiy Bogomolov, Taylor T Johnson, Diego Manzanas, Patrick Musau, and Paulius Stankaitis
  • Paper: https://dx.doi.org/10.4204/EPTCS.395.7
  • Abstract This paper presents an optimisation-based approach for an obstacle avoidance problem within an autonomous vehicle racing context. Our control regime leverages online reachability analysis and sensor data to compute the maximal safe traversable region that an agent can traverse within the environment. The idea is to first compute a non-convex safe region, which then can be convexified via a novel coupled separating hyperplane algorithm. This derived safe area is then used to formulate a nonlinear model-predictive control problem that seeks to find an optimal and safe driving trajectory. We evaluate the proposed approach through a series of diverse experiments and assess the runtime requirements of our proposed approach through an analysis of the effects of a set of varying optimisation objectives for generating these coupled hyperplanes.
Runtime Verification of Learning Properties for Reinforcement Learning Algorithms
  • Authors: Tommaso Mannucci and Julio de Oliveira Filho
  • Paper: https://dx.doi.org/10.4204/EPTCS.395.15
  • Abstract Reinforcement learning (RL) algorithms interact with their environment in a trial-and-error fashion. Such interactions can be expensive, inefficient, and time-consuming when learning on a physical system rather than in a simulation. This work develops new runtime verification techniques to predict when the learning phase has not met or will not meet expectations of quality and timeliness. This paper presents three verification properties concerning the quality and timeliness of learning in RL algorithms. With each property, we propose design steps for monitoring and assessing the properties during the system's operation.
Towards Formal Fault Injection for Safety Assessment of Automated Systems
  • Authors: Ashfaq Farooqui and Behrooz Sangchoolie
  • Paper: https://dx.doi.org/10.4204/EPTCS.395.11
  • Abstract Reasoning about the safety, security, and other dependability attributes of autonomous systems is a challenge that needs to be addressed before the adoption of such systems in day-to-day life. _Formal methods_ contain a class of methods that can mathematically reason about the system's behavior. Since these methods employ mathematical techniques, a correctness proof is sufficient to conclude the system's dependability. However, these methods are usually applied to abstract models of the system, which might not fully represent the actual system. _Fault injection_, on the other hand, is a class of testing methods that are often used to evaluate the dependability of systems at different phases of the development lifecycle. However, the amount of testing required to evaluate the system is rather large and often a problem. This vision paper presents the concept of _formal fault injection_, a fusion of these two techniques. Our vision involves integrating them throughout the development lifecycle to enhance the dependability of autonomous systems. We advocate for a more cohesive approach by identifying five areas of mutual support between formal methods and fault injection. By forging stronger ties between the two fields, we pave the way for developing safe and dependable autonomous systems. This paper delves into the integration's potential and outlines future research avenues, addressing open challenges along the way.
Towards proved formal specification and verification of STL operators as synchronous observers
  • Authors: Céline Bellanger, Pierre-Loïc Garoche, Matthieu Martel, and Célia Picard
  • Paper: https://dx.doi.org/10.4204/EPTCS.395.14
  • Abstract Signal Temporal Logic (STL) is a convenient formalism to express bounded horizon properties of autonomous critical systems. STL extends LTL to real-valued signals and associates a non-singleton bound interval to each temporal operator. In this work we provide a rigorous encoding of non-nested discrete-time STL formulas into Lustre synchronous observers. Our encoding provides a three-valued online semantics for the observers and therefore enables both the verification of the property and the search for counter-examples. A key contribution of this work is an instrumented proof of the validity of the implementation. Each node is proved correct with respect to the original STL semantics. All the experiments are automated with the Kind2 model-checker and the Z3 SMT solver.
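
For orientation, the discrete-time semantics of the bounded "always" operator (standard STL; the paper's contribution is the proved Lustre encoding) reduces to a sliding-window conjunction that a synchronous observer can evaluate online:

    \[
    (w, i) \models \Box_{[a,b]}\,\varphi
    \quad\Longleftrightarrow\quad
    (w, j) \models \varphi \;\text{ for all } j \in \{i+a, \dots, i+b\}.
    \]
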
Trust Modelling and Verification Using Event-B
  • Authors: Asieh Salehi Fathabadi and Vahid Yazdanpanah
  • Paper: https://dx.doi.org/10.4204/EPTCS.395.2
  • Abstract Trust is a crucial component in collaborative multiagent systems (MAS) involving humans and autonomous AI agents. Rather than assuming trust based on past system behaviours, it is important to formally verify trust by modelling the current state and capabilities of agents. We argue for verifying actual trust relations based on agents' abilities to deliver intended outcomes in specific contexts. To enable reasoning about different notions of trust, we propose using the refinement-based formal method Event-B. Refinement allows progressively introducing new aspects of trust - from abstract to concrete models incorporating knowledge and runtime states. We demonstrate modelling three trust concepts and verifying associated trust properties in MAS. The formal, correctness-by-construction approach allows deducing guarantees about trustworthy autonomy in human-AI partnerships. Overall, our contribution facilitates rigorous verification of trust in multiagent systems.
What to tell when? -- Information Provision as a Game
  • Authors: Astrid Rakow, Akhila Bairy, and Mehrnoush Hajnorouzi
  • Paper: https://dx.doi.org/10.4204/EPTCS.395.1
  • Abstract Constantly informing systems (CIS), that is, technical systems that provide us with information over a long period of time, face the challenge of providing helpful information. Not only does the information base of a human change over time, but so do their mood and their ability to accept information. An information provision strategy should hence take such aspects into account. In this paper, we describe our vision of an approach to aid the design of CIS. We envision using psychological models of the human mind and emotions in an information provision game. Its analysis gives comparative insights into design variants.

FMAS 2022

The fourth edition, co-located with the International Conference on Software Engineering and Formal Methods (SEFM) 2022, hosted by the Humboldt University of Berlin in Berlin-Adlershof, Germany.

A Doxastic Framework for Autonomous Systems
  • Authors: Astrid Rakow
  • Paper: https://dx.doi.org/10.4204/EPTCS.371.8
  • Abstract A highly autonomous system (HAS) has to assess the situation it is in and derive beliefs, based on which it decides what to do next. The beliefs are not solely based on the observations the HAS has made so far, but also on general insights about the world in which the HAS operates. Although the beliefs may be imprecise and sometimes bear substantial flaws, the HAS will have to extrapolate the possible futures in order to evaluate the consequences of its actions and then take its decisions autonomously. In this paper, we formalize an autonomous decisive system as a system that always chooses actions that it currently believes belong to a winning strategy. We show that it can be checked whether an autonomous decisive system can be built for a given list of linear-time temporal goals and a given design context. Moreover, we can synthesize a belief formation for such an autonomous decisive system. To formally characterize autonomous decisive systems, we use a doxastic framework for safety-critical HASs where the belief formation depends on the extrapolation.
Active vs. Passive: A Comparison of Automata Learning Paradigms for Network Protocols
  • Authors: Bernhard K. Aichernig, Edi Muskardin, and Andrea Pferscher
  • Paper: https://dx.doi.org/10.4204/EPTCS.371.1
  • Abstract Active automata learning became a popular tool for the behavioral analysis of communication protocols. The main advantage is that no manual modeling effort is required since a behavioral model is automatically inferred from a black-box system. However, several real-world applications of this technique show that the overhead for the establishment of an active interface might hamper the practical applicability. Our recent work on the active learning of Bluetooth Low Energy (BLE) protocol found that the active interaction creates a bottleneck during learning. Considering the automata learning toolset, passive learning techniques appear as a promising solution since they do not require an active interface to the system under learning. Instead, models are learned based on a given data set. In this paper, we evaluate passive learning for two network protocols: BLE and Message Queuing Telemetry Transport (MQTT). Our results confirm that a passive technique can correctly learn if the data set provides behavioral coverage. However, random data generation for passive learning is more expensive compared to the costs of active learning.
Advising Autonomous Cars about the Rules of the Road
  • Authors: Joe Collenette, Louise Dennis, and Michael Fisher
  • Paper: https://dx.doi.org/10.4204/EPTCS.371.5
  • Abstract This paper describes (R)ules (o)f (T)he (R)oad (A)dvisor, an agent that provides recommended and possible actions to be generated from a set of human-level rules. We describe the architecture and design of RoTRA, both formally and with an example. Specifically, we use RoTRA to formalise and implement the UK Rules of the Road, and describe how this can be incorporated into autonomous cars such that they can reason internally about obeying the rules of the road. In addition, the possible actions generated are annotated to indicate whether the rules state that the action must be taken or that they only recommend that the action should be taken, as per the UK Highway Code (Rules of The Road). The benefits of utilising this system include being able to adapt to different regulations in different jurisdictions; allowing clear traceability from rules to behaviour, and providing an external automated accountability mechanism that can check whether the rules were obeyed in some given situation. A simulation of an autonomous car shows, via a concrete example, how trust can be built by putting the autonomous vehicle through a number of scenarios which test the car's ability to obey the rules of the road. Autonomous cars that incorporate this system are able to ensure that they are obeying the rules of the road and external (legal or regulatory) bodies can verify that this is the case, without the vehicle or its manufacturer having to expose their source code or make their working transparent, thus allowing greater trust between car companies, jurisdictions, and the general public.
Bounded Invariant Checking for Stateflow
  • Authors: Predrag Filipovikj, Gustav Ung, Dilian Gurov, and Mattias Nyberg
  • Paper: https://dx.doi.org/10.4204/EPTCS.371.3
  • Abstract Stateflow models are complex software models, often used as part of industrial safety-critical software solutions designed with Matlab Simulink. Being part of safety-critical solutions, these models require the application of rigorous verification techniques for assuring their correctness. In this paper, we propose a refutation-based formal verification approach for analyzing Stateflow models against invariant properties, based on bounded model checking (BMC). The crux of our technique is: i) a representation of the state space of Stateflow models as a symbolic transition system (STS) over the symbolic configurations of the model, and ii) application of incremental BMC, to generate verification results after each unrolling of the next-state relation of the transition system. To this end, we develop a symbolic structural operational semantics (SSOS) for Stateflow, starting from an existing structural operational semantics (SOS), and show the preservation of invariant properties between the two. We define bounded invariant checking for STS over symbolic configurations as a satisfiability problem. We develop an automated procedure for generating the initial and next-state predicates of the STS, and a prototype implementation of the technique in the form of a tool utilising standard, off-the-shelf satisfiability solvers. Finally, we present preliminary performance results by applying our tool on an illustrative example and two industrial models.
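
The k-step query behind incremental BMC has a familiar generic shape (schematic form, not the paper's exact SSOS encoding): unroll the next-state predicate k times and ask the solver for a path that violates the invariant,

    \[
    I(s_0) \;\wedge\; \bigwedge_{i=0}^{k-1} T(s_i, s_{i+1}) \;\wedge\; \bigvee_{i=0}^{k} \neg \mathit{Inv}(s_i),
    \]

which is satisfiable iff the invariant can be violated within k steps; incremental solving then increases k while reusing the solver's learned clauses.
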
Extending Attack-Fault Trees with Runtime Verification
  • Authors: Rafael C. Cardoso, Angelo Ferrando, and Michael Fisher
  • Paper: https://dx.doi.org/10.4204/EPTCS.371.14
  • Abstract Autonomous systems are often complex and prone to software failures and cyber-attacks. We introduce RVAFTs, an extension of Attack-Fault Trees (AFTs) with runtime events that can be used to construct runtime monitors. These monitors are able to detect when failures, that can be caused either by an attack or by a fault, occur. The safety and security properties monitored are, in turn, derived from the hierarchical decomposition of RVAFTs. Our approach not only provides further use of AFTs, but also improves the process of instrumentation often required in runtime verification. We explain the principles and provide a simple case study demonstrating how RVAFTs can be used in practice. Through this we are also able to evaluate the detection of faults and attacks as well as assessing the computational overhead of the monitors.
From Specification Models to Explanation Models: An Extraction and Refinement Process for Timed Automata
  • Authors: Maike Schwammberger and Verena Klös
  • Paper: https://dx.doi.org/10.4204/EPTCS.371.2
  • Abstract Autonomous systems control many tasks in our daily lives. To increase trust in those systems and the safety of the interaction between humans and autonomous systems, the system behaviour and the reasons for autonomous decisions should be explained to users, experts and public authorities. One way to provide such explanations is to use behavioural models to generate context- and user-specific explanations at run-time. However, this comes at the cost of higher modelling effort, as additional models need to be constructed. In this paper, we propose an approach to extract such explanation models from system models, and to subsequently refine these for users, explanation purposes and situations. By this, we enable the reuse of specification models for integrating explanation capabilities into systems. We showcase our approach using a running example from the autonomous driving domain.
Generating Safe and Intelligent Autonomous Decision-Making in ROS
  • Authors: Yi Yang and Tom Holvoet
  • Paper: https://dx.doi.org/10.4204/EPTCS.371.13
  • Cite:
  • Abstract The Robot Operating System (ROS) is a widely used framework for building robotic systems. It offers a wide variety of reusable packages and a pattern for new developments. It is up to developers how to combine these elements and integrate them with decision-making for autonomous behaviour. The features of such decision-making that are in general valued the most are safety assurance and long-term autonomy. In this research preview, we present a formal approach for generating safe and intelligent autonomous decision-making in the Robot Operating System. To achieve runtime verification, we rely on our previous work on static verification for autonomous decision-making, and integrate a way of dealing with online learning. The implementation of the research proposal is currently under development, yet an initial implementation yields promising results.
Monitoring ROS2: from Requirements to Autonomous Robots
  • Authors: Ivan Perez, Anastasia Mavridou, Thomas Pressburger, Alexander Will, and Patrick Martin
  • Paper: https://dx.doi.org/10.4204/EPTCS.371.15
  • Cite:
  • Abstract Runtime verification (RV) has the potential to enable the safe operation of safety-critical systems that are too complex to formally verify, such as Robot Operating System 2 (ROS2) applications. Writing correct monitors can itself be complex, and errors in the monitoring subsystem threaten the mission as a whole. This paper provides an overview of a formal approach to generating runtime monitors for autonomous robots from requirements written in a structured natural language. Our approach integrates the Formal Requirement Elicitation Tool (FRET) with Copilot, a runtime verification framework, through the Ogma integration tool. FRET is used to specify requirements with unambiguous semantics, which are then automatically translated into temporal logic formulæ. Ogma generates monitor specifications from the FRET output, which are compiled into hard real-time C99. To facilitate integration of the monitors in ROS2, we have extended Ogma to generate ROS2 packages defining monitoring nodes, which run the monitors when new data becomes available, and publish the results of any violations. The goal of our approach is to treat the generated ROS2 packages as black boxes and integrate them into larger ROS2 systems with minimal effort.
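The generated monitors themselves are Copilot-produced C99; purely to show the shape of a ROS2 monitoring node, here is a hand-written Python/rclpy sketch. The topic names and the checked property are invented, and this is not the Ogma-generated code.

```python
# Illustrative ROS2 monitoring node (not generated by Ogma): subscribes to
# a data topic, checks a property on each sample, publishes any violations.
import rclpy
from rclpy.node import Node
from std_msgs.msg import Float64, String

class MonitorNode(Node):
    def __init__(self):
        super().__init__('monitor')
        self.sub = self.create_subscription(Float64, '/vehicle/speed',
                                            self.on_sample, 10)
        self.pub = self.create_publisher(String, '/monitor/violations', 10)

    def on_sample(self, msg):
        if msg.data > 2.0:                 # hypothetical speed bound
            out = String()
            out.data = f'violation: speed {msg.data:.2f} > 2.0'
            self.pub.publish(out)

def main():
    rclpy.init()
    node = MonitorNode()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()

if __name__ == '__main__':
    main()
```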
Scheduling for Urban Air Mobility using Safe Learning
  • Authors: Natasha Neogi, Surya Murthy, and Suda Bharadwaj
  • Paper: https://dx.doi.org/10.4204/EPTCS.371.7
  • Cite:
  • Abstract This work considers the scheduling problem for Urban Air Mobility (UAM) vehicles travelling between origin-destination pairs with both hard and soft trip deadlines. Each route is described by a discrete probability distribution over trip completion times (or delay) and over inter-arrival times of requests (or demand) for the route, along with a fixed hard or soft deadline. Soft deadlines carry a cost that is incurred when the deadline is missed. An online, safe scheduler is developed that ensures that hard deadlines are never missed and that the average cost of missing soft deadlines is minimized. The system is modelled as a Markov Decision Process (MDP) and safe model-based learning is used to find the probabilistic distributions over route delays and demand. Monte Carlo Tree Search (MCTS) is used to safely explore the learned models in an online fashion and develop the near-optimal non-preemptive scheduling policy. These results are compared with Value Iteration (VI) and Earliest Deadline First (EDF) scheduling solutions.
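As a point of reference for the EDF baseline mentioned above, here is a minimal non-preemptive EDF sketch with a hard-deadline guard and a summed penalty for missed soft deadlines. The trip data and penalty values are our toy assumptions, not the paper's MCTS scheduler.

```python
# Toy non-preemptive EDF baseline (illustrative): hard deadlines must be
# met; missed soft deadlines accumulate a penalty.
from dataclasses import dataclass

@dataclass
class Trip:
    name: str
    duration: int      # expected trip completion time
    deadline: int
    hard: bool
    cost: int = 0      # penalty if a soft deadline is missed

def edf_schedule(trips):
    t, penalty, order = 0, 0, []
    for trip in sorted(trips, key=lambda tr: tr.deadline):
        t += trip.duration
        if t > trip.deadline:
            if trip.hard:
                raise RuntimeError(f'hard deadline missed: {trip.name}')
            penalty += trip.cost
        order.append((trip.name, t))
    return order, penalty

trips = [Trip('A', 3, 4, hard=True), Trip('B', 2, 9, hard=False, cost=5),
         Trip('C', 2, 8, hard=True)]
print(edf_schedule(trips))   # -> ([('A', 3), ('C', 5), ('B', 7)], 0)
```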
Scheduling of Missions with Constrained Tasks for Heterogeneous Robot Systems
  • Authors: Gricel Vazquez, Radu Calinescu, and Javier Camara
  • Paper: https://dx.doi.org/10.4204/EPTCS.371.11
  • Cite:
  • Abstract We present a formal task allocation and scheduling approach for multi-robot missions (KANOA). KANOA supports two important types of task constraints: task ordering, which requires the execution of several tasks in a specified order; and joint tasks, which must be performed by more than one robot. To mitigate the complexity of robotic mission planning, KANOA handles the allocation of the mission tasks to robots and the scheduling of the allocated tasks separately. To that end, the task allocation problem is formalised in first-order logic and resolved using the Alloy model analyzer, and the task scheduling problem is encoded as a Markov decision process and resolved using the PRISM probabilistic model checker. We illustrate the application of KANOA through a case study in which a heterogeneous robotic team is assigned a hospital maintenance mission.
SkiNet: A Petri Net Generation Tool for the Verification of Skillset-based Autonomous Systems
  • Authors: Baptiste Pelletier, Charles Lesire, David Doose, Karen Godary-Dejean, and Charles Dramé-Maigné
  • Paper: https://dx.doi.org/10.4204/EPTCS.371.9
  • Cite:
  • Abstract The need for high-level autonomy and robustness of autonomous systems for missions in dynamic and remote environments has pushed developers to come up with new software architectures. A common architecture style is to summarize the capabilities of the robotic system into elementary actions, called skills, on top of which a skill management layer is implemented to structure, test and control the functional layer. However, currently available verification tools only provide either mission-specific verification or verification on a model that does not replicate the actual execution of the system, which makes it difficult to ensure its robustness to unexpected events. To that end, a tool, SkiNet, has been developed to transform the skill-based architecture of a system into a Petri Net modeling the state-machine behaviors of the skills and the resources they handle. The Petri Net allows the use of model-checking, such as Linear Temporal Logic (LTL) or Computation Tree Logic (CTL), for the user to analyze and verify the model of the system.
Towards Adaptive Planning of Assistive-care Robot Tasks
  • Authors: Jordan Hamilton, Ioannis Stefanakos, Radu Calinescu and Javier Cámara
  • Paper: https://dx.doi.org/10.4204/EPTCS.371.12
  • Cite:
  • Abstract This 'research preview' paper introduces an adaptive path planning framework for robotic mission execution in assistive-care applications. The framework provides a graph-based environment modelling approach, with dynamic path finding performed using Dijkstra's algorithm. A secondary graph-based representation of the mission profile is established and solved using a travelling salesman approach, to ensure Pareto-optimal paths are found based on minimising risk to the human and maximising progress with the mission. A predictive module that uses probabilistic model checking is applied to estimate the human's movement through the environment, allowing run-time re-planning of the robot's path. We illustrate the use of the framework for a simulated assistive-care case study in which a mobile robot monitors an end user with mild physical or cognitive impairments.
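Since the framework's dynamic path finding rests on Dijkstra's algorithm over a graph-based environment model, a self-contained sketch of that building block may help; the room graph and weights below are invented for illustration.

```python
# Minimal Dijkstra's algorithm over a weighted graph environment model.
import heapq

def dijkstra(graph, source, target):
    """graph: {node: [(neighbour, cost), ...]}; returns (cost, path)."""
    queue = [(0, source, [source])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == target:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nxt, w in graph.get(node, []):
            if nxt not in visited:
                heapq.heappush(queue, (cost + w, nxt, path + [nxt]))
    return float('inf'), []

rooms = {'hall': [('kitchen', 2), ('lounge', 5)],
         'kitchen': [('lounge', 1)], 'lounge': []}
print(dijkstra(rooms, 'hall', 'lounge'))  # -> (3, ['hall', 'kitchen', 'lounge'])
```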
Towards Runtime Monitoring of Complex System Requirements for Autonomous Driving Functions
  • Authors: Dominik Grundt, Anna Köhne, Ishan Saxena, Ralf Stemmer, Bernd Westphal, and Eike Möhlmann
  • Paper: https://dx.doi.org/10.4204/EPTCS.371.4
  • Cite:
  • Abstract Autonomous driving functions (ADFs) in public traffic have to comply with complex system requirements that are based on knowledge of experts from different disciplines, e.g., lawyers, safety experts, psychologists. In this paper, we present a research preview regarding the validation of ADFs w.r.t. such requirements. We investigate the suitability of Traffic Sequence Charts (TSCs) for the formalization of such requirements and present a concept for monitoring system compliance during validation runs. We find TSCs, with their intuitive visual syntax over symbols from the traffic domain, to be a promising choice for the collaborative formalization of such requirements. For an example TSC, we build a runtime monitor according to our novel structural concept that exploits the separation of spatial and temporal aspects in TSCs, and successfully apply the monitor on exemplary runs. The monitor continuously provides verdicts at runtime, which is particularly beneficial in ADF validation, where validation runs are expensive. The next open research questions concern the generalization of our monitor construction, the identification of the limits of TSC monitorability, and the investigation of the monitor performance in practical applications. Perspectively, TSC runtime monitoring could provide a useful technique in other emerging application areas such as AI training, safeguarding ADFs during operation, and gathering meaningful traffic data in the field.
Towards a Digital Highway Code using Formal Modelling and Verification of Timed Automata
  • Authors: Gleifer Alves and Maike Schwammberger
  • Paper: https://dx.doi.org/10.4204/EPTCS.371.6
  • Cite:
  • Abstract One of the challenges in designing safe, reliable and trustworthy Autonomous Vehicles (AVs) is to ensure that the AVs abide by traffic rules. For this, the AVs need to be able to understand and reason about traffic rules. In previous work, we introduced the spatial traffic logic USL-TR to allow for the unambiguous, machine-readable formalisation of traffic rules. This is only the first step towards autonomous traffic agents that verifiably follow traffic rules. In this research preview, we focus on two further steps: a) retrieving behaviour diagrams directly from traffic rules and b) converting the behaviour diagrams into timed automata that use formulae of USL-TR in guards and invariants. With this, we have a formal representation of traffic rules and can move towards the establishment of a Digital Highway Code. We briefly envision further steps, which include adding environment and agent models to the timed automata and finally implementing these traffic rule models using a selection of formal verification tools, so that the rules endowed in the AV can be formally verified.
Verify Safety of Behaviour Trees in Event-B
  • Authors: Matteo Tadiello and Elena Troubitsyna
  • Paper: https://dx.doi.org/10.4204/EPTCS.371.10
  • Cite:
  • Abstract Behavior Trees (BTs) are becoming increasingly popular in the robotics community. The BT formalism is well suited for decision-making applications, allowing a robot to perform complex behavior while remaining explainable to humans. Verifying that the BTs used are well constructed with respect to safety and reliability requirements is essential, especially for robots operating in critical environments. In this work, we propose a formal specification of Behavior Trees and a methodology to prove invariants of already used trees, while keeping the complexity of the formalization simple for the end user. This allows testing a particular instance of a behavior tree without the need to know the more abstract levels of the formalization.
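To fix intuitions about the objects being verified, here is a tiny behaviour-tree interpreter, a generic sketch of standard BT semantics rather than the paper's Event-B model; the leaf actions are invented.

```python
# Tiny behaviour-tree interpreter (generic sketch): Sequence succeeds iff
# all children succeed; Fallback succeeds iff some child succeeds.
SUCCESS, FAILURE, RUNNING = 'success', 'failure', 'running'

class Action:
    def __init__(self, name, fn):
        self.name, self.fn = name, fn
    def tick(self):
        return self.fn()

class Sequence:
    def __init__(self, *children):
        self.children = children
    def tick(self):
        for c in self.children:
            status = c.tick()
            if status != SUCCESS:
                return status
        return SUCCESS

class Fallback:
    def __init__(self, *children):
        self.children = children
    def tick(self):
        for c in self.children:
            status = c.tick()
            if status != FAILURE:
                return status
        return FAILURE

battery_ok = Action('battery_ok', lambda: FAILURE)
recharge = Action('recharge', lambda: SUCCESS)
do_task = Action('do_task', lambda: SUCCESS)
tree = Sequence(Fallback(battery_ok, recharge), do_task)
print(tree.tick())   # -> 'success': recharges first, then performs the task
```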

FMAS 2021

The third edition, held online.

Assuring Increasingly Autonomous Systems in Human-Machine Teams: An Urban Air Mobility Case Study
  • Authors: Siddhartha Bhattacharyya, Jennifer Davis, Anubhav Gupta, Nandith Narayan, and Michael Matessa
  • Paper: https://dx.doi.org/10.4204/EPTCS.348.11
  • Cite:
  • Abstract As aircraft systems become increasingly autonomous, the human-machine role allocation changes and opportunities for new failure modes arise. This necessitates an approach to identify the safety requirements for the increasingly autonomous system (IAS), as well as a framework and techniques to verify and validate that the IAS meets its safety requirements. We use Crew Resource Management techniques to identify requirements and behaviors for safe human-machine teaming. We provide a methodology to verify that an IAS meets its requirements. We apply the methodology to a case study in Urban Air Mobility, which includes two contingency scenarios: unreliable sensor and aborted landing. For this case study, we implement an IAS agent in the Soar language that acts as a copilot for the selected contingency scenarios and performs takeoff and landing preparation, while the pilot maintains final decision authority. We develop a formal human-machine team architecture model in the Architectural Analysis and Design Language (AADL), with operator and IAS requirements formalized in the Assume Guarantee REasoning Environment (AGREE) Annex to AADL. We formally verify safety requirements for the human-machine team given the requirements on the IAS and operator. We develop an automated translator from Soar to the nuXmv model checking language and formally verify the IAS requirements for the IAS agent using nuXmv. We share the design and requirements errors found in the process as well as our lessons learned.
Complete Agent-driven Model-based System Testing for Autonomous Systems
  • Authors: Kerstin Eder, Wen-Ling Huang, and Jan Peleska
  • Paper: https://dx.doi.org/10.4204/EPTCS.348.4
  • Cite:
  • Abstract In this position paper, a novel approach to testing complex autonomous transportation systems (ATS) in the automotive, avionic, and railway domains is described. It is well suited to overcome the problem of verification and validation (V&V) effort, which is known to become infeasible for complex ATS when trying to perform V&V with conventional methods. The approach advocated here uses complete testing methods at module level, because these establish formal proofs for the logical correctness of the software. Having established logical correctness, system-level tests are performed in simulated cloud environments and on the target system. To give evidence that "sufficiently many" system tests have been performed with the target system, a formally justified coverage criterion is introduced. To optimise the execution of very large system test suites, we advocate an online testing approach where multiple tests are executed in parallel, and test steps are identified on-the-fly. The coordination and optimisation of these executions is achieved by an agent-based approach. Each aspect of the testing approach advocated here is shown to be consistent with existing standards for development and V&V of safety-critical transportation systems, or it is justified why it should become acceptable in future revisions of the applicable standards.
Complete Test of Synthesised Safety Supervisors for Robots and Autonomous Systems
  • Authors: Mario Gleirscher and Jan Peleska
  • Paper: https://dx.doi.org/10.4204/EPTCS.348.7
  • Cite:
  • Abstract Verified controller synthesis uses world models that comprise all potential behaviours of humans, robots, further equipment, and the controller to be synthesised. A world model enables quantitative risk assessment, for example, by stochastic model checking. Such a model describes a range of controller behaviours, some of which, when implemented correctly, guarantee that the overall risk in the actual world is acceptable, provided that the stochastic assumptions have been made to the safe side. Synthesis then selects an acceptable-risk controller behaviour. However, because of crossing abstraction, formalism, and tool boundaries, verified synthesis for robots and autonomous systems has to be accompanied by rigorous testing. In general, standards and regulations for safety-critical systems require testing as a key element to obtain certification credit before entry into service. This work-in-progress paper presents an approach to the complete testing of synthesised supervisory controllers that enforce safety properties in domains such as human-robot collaboration and autonomous driving. Controller code is generated from the selected controller behaviour. The code generator, however, is hard, if not infeasible, to verify in a formal and comprehensive way. Instead, utilising testing, an abstract test reference is generated: a symbolic finite state machine with simpler semantics than code semantics. From this reference, a complete test suite is derived and applied to demonstrate the observational equivalence between the test reference and the controller code.
Extending Urban Multi-Lane Spatial Logic to Formalise Road Junction Rules
  • Authors: Maike Schwammberger and Gleifer Alves
  • Paper: https://dx.doi.org/10.4204/EPTCS.348.1
  • Cite:
  • Abstract During the design of autonomous vehicles (AVs), several stages should include a verification process to guarantee that the AV is driving safely on the roads. One of these stages is to assure the AVs abide by the road traffic rules. To include road traffic rules in the design of an AV, a precise and unambiguous formalisation of these rules is needed. However, this has only recently been pointed out as an issue for the design of AVs, and the few works on this only capture the temporal aspects of the rules, leaving behind the spatial aspects. Here, we extend the spatial traffic logic Urban Multi-lane Spatial Logic to formalise a subset of the UK road junction rules, where both temporal and spatial aspects of the rules are captured. Our approach has an abstraction level for urban road junctions that could easily promote the formalisation of the whole set of road junction rules, and as examples we formalise three of the UK road junction rules. Once we have the whole set formalised, we will model, implement, and formally verify the behaviour of an AV against road traffic rules, so that guidelines for the creation of a Digital Highway Code for AVs can be established.
Formal Guarantees of Timely Progress for Distributed Knowledge Propagation
  • Authors: Saswata Paul, Stacy Patterson, and Carlos A. Varela
  • Paper: https://dx.doi.org/10.4204/EPTCS.348.5
  • Cite:
  • Abstract Autonomous air traffic management (ATM) operations for urban air mobility (UAM) will necessitate the use of distributed protocols for decentralized coordination between aircraft. As UAM operations are time-critical, it will be imperative to have formal guarantees of progress for the distributed protocols used in ATM. Under asynchronous settings, message transmission and processing delays are unbounded, making it impossible to provide deterministic bounds on the time required to make progress. We present an approach for formally guaranteeing timely progress in a Two-Phase Acknowledge distributed knowledge propagation protocol by probabilistically modeling the delays using theory about the Multicopy Two-Hop Relay protocol and the M/M/1 queue model. The guarantee states a probabilistic upper bound to the time for progress as a function of the probabilities of the total transmission and processing delays being less than two given values. We also showcase the development of a library of formal theory in the Athena proof assistant, tailored towards reasoning about timely progress in distributed protocols deployed in airborne networks.
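For intuition about the kind of probabilistic delay bound involved, one standard queueing fact this style of argument can build on (a textbook M/M/1 property, not the paper's exact theorem):

```latex
% Sojourn time T in an M/M/1 queue with arrival rate \lambda and service
% rate \mu (\lambda < \mu) is exponentially distributed, so a delay
% budget t is met with probability
P(T \le t) \;=\; 1 - e^{-(\mu - \lambda)\,t}.
```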
Improving Online Railway Deadlock Detection using a Partial Order Reduction
  • Authors: Bjørnar Luteberget
  • Paper: https://dx.doi.org/10.4204/EPTCS.348.8
  • Cite:
  • Abstract Although railway dispatching on large national networks is gradually becoming more computerized, there are still major obstacles to retrofitting (semi-)autonomous control systems. In addition to requiring extensive and detailed digitalization of infrastructure models and information systems, exact optimization for railway dispatching is computationally hard. Heuristic algorithms and manual overrides are likely to be required for semi-autonomous railway operations for the foreseeable future. In this context, being able to detect problems such as deadlocks can be a valuable part of a runtime verification system. If bound-for-deadlock situations are correctly recognized as early as possible, human operators will have more time to better plan for recovery operations. Deadlock detection may also be useful for verification in a feedback loop with a heuristic or semi-autonomous dispatching algorithm if the dispatching algorithm cannot itself guarantee a deadlock-free plan. We describe a SAT-based planning algorithm for online detection of bound-for-deadlock situations. The algorithm exploits parallel updates of train positions and a partial order reduction technique to significantly reduce the number of state transitions (and correspondingly, the sizes of the formulas) in the SAT instances needed to prove whether a deadlock situation is bound to happen in the future. Implementation source code and benchmark instances are supplied, and a direct comparison against another recent study demonstrates significant performance gains.
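For a concrete feel of what "bound-for-deadlock" means, here is a toy explicit-state check (an exhaustive search, unlike the paper's SAT encoding and without its partial order reduction): trains occupy track nodes and follow fixed routes, and a state is bound for deadlock if every possible sequence of moves eventually gets stuck. The routes are invented.

```python
# Toy bound-for-deadlock check over train configurations (illustrative).
from functools import lru_cache

routes = {'T1': ('a', 'b', 'c'), 'T2': ('c', 'b', 'a')}   # head-on routes

@lru_cache(maxsize=None)
def bound_for_deadlock(positions):
    pos = dict(positions)                  # train -> index into its route
    if all(i == len(routes[t]) - 1 for t, i in pos.items()):
        return False                       # everyone arrived: not a deadlock
    occupied = {routes[t][i] for t, i in pos.items()}
    moves = [t for t, i in pos.items()
             if i + 1 < len(routes[t]) and routes[t][i + 1] not in occupied]
    if not moves:
        return True                        # stuck before completion
    return all(
        bound_for_deadlock(tuple(sorted((u, pos[u] + (u == t)) for u in pos)))
        for t in moves)

start = tuple(sorted({'T1': 0, 'T2': 0}.items()))
print(bound_for_deadlock(start))   # -> True: the two trains must meet head-on
```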
Observable and Attention-Directing BDI Agents for Human-Autonomy Teaming
  • Authors: Blair Archibald, Muffy Calder, Michele Sevegnani, and Mengwei Xu
  • Paper: https://dx.doi.org/10.4204/EPTCS.348.12
  • Cite:
  • Abstract Human-autonomy teaming (HAT) scenarios feature humans and autonomous agents collaborating to meet a shared goal. For effective collaboration, the agents must be transparent and able to share important information about their operation with human teammates. We address the challenge of transparency for Belief-Desire-Intention agents defined in the Conceptual Agent Notation (CAN) language. We extend the semantics to model agents that are observable (i.e. the internal state of tasks is available), and attention-directing (i.e. specific states can be flagged to users), and provide an executable semantics via an encoding in Milner's bigraphs. Using an example of unmanned aerial vehicles, the BigraphER tool, and PRISM, we show and verify how the extensions work in practice.
Online Strategy Synthesis for Safe and Optimized Control of Steerable Needles
  • Authors: Sascha Lehmann, Antje Rogalla, Maximilian Neidhardt, Alexander Schlaefer, and Sibylle Schupp
  • Paper: https://dx.doi.org/10.4204/EPTCS.348.9
  • Cite:
  • Abstract Autonomous systems are often applied in uncertain environments, which require prospective action planning and retrospective data evaluation for future planning to ensure safe operation. Formal approaches may support these systems with safety guarantees, but are usually expensive and do not scale well with growing system complexity. In this paper, we introduce online strategy synthesis based on classical strategy synthesis to derive formal safety guarantees while reacting and adapting to environment changes. To guarantee safety online, we split the environment into region types which determine the acceptance of action plans and trigger local correcting actions. Using model checking on a frequently updated model, we can then derive locally safe action plans (prospectively), and match the current model against new observations via reachability checks (retrospectively). As use case, we successfully apply online strategy synthesis to medical needle steering, i.e., navigating a (flexible and beveled) needle through tissue towards a target without damaging its surroundings.
QuantifyML: How good is my machine learning model?
  • Authors: Muhammad Usman, Divya Gopinath, and Corina S. Pasareanu
  • Paper: https://dx.doi.org/10.4204/EPTCS.348.6
  • Cite:
  • Abstract The efficacy of machine learning models is typically determined by computing their accuracy on test data sets. However, this may often be misleading, since the test data may not be representative of the problem that is being studied. With QuantifyML we aim to precisely quantify the extent to which machine learning models have learned and generalized from the given data. Given a trained model, QuantifyML translates it into a C program and feeds it to the CBMC model checker to produce a formula in Conjunctive Normal Form (CNF). The formula is analyzed with off-the-shelf model counters to obtain precise counts with respect to different model behavior. QuantifyML enables i) evaluating learnability by comparing the counts for the outputs to ground truth, expressed as logical predicates, ii) comparing the performance of models built with different machine learning algorithms (decision-trees vs. neural networks), and iii) quantifying the safety and robustness of models.
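A much-simplified sketch of the counting idea: over a small discrete input space, count how often a learned model agrees with a ground-truth predicate by exhaustive enumeration, where QuantifyML would use CBMC and off-the-shelf model counters. The predicates and domain below are our toy assumptions.

```python
# QuantifyML-flavoured toy: exhaustive enumeration instead of model counting.
from itertools import product

def ground_truth(x, y):         # the property the model should have learned
    return x + y > 10

def learned_model(x, y):        # stand-in for e.g. a trained decision tree
    return x > 5

domain = range(11)              # x, y in 0..10
total = agree = 0
for x, y in product(domain, domain):
    total += 1
    agree += ground_truth(x, y) == learned_model(x, y)
print(f'{agree}/{total} inputs classified as the ground truth prescribes')
```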
Simulation and Model Checking for Close to Realtime Overtaking Planning
  • Authors: Daumantas Pagojus, Alice Miller, Bernd Porr, and Ivaylo Valkov
  • Paper: https://dx.doi.org/10.4204/EPTCS.348.2
  • Cite:
  • Abstract Fast and reliable trajectory planning is a key requirement of autonomous vehicles. In this paper we introduce a novel technique for planning the route of an autonomous vehicle on a straight rural road using the Spin model checker. We show how we can combine Spin’s ability to identify paths violating temporal properties with sensor information from a 3D Unity simulation of an autonomous vehicle, to plan and perform consecutive overtaking manoeuvres on a traffic-heavy road. This involves discretising the sensory information and combining multiple sequential Spin models with a Linear Time Temporal Logic specification to generate an error path. This path provides the autonomous vehicle with an action plan. The entire process takes place in close to real time (using no pre-computed data) and the action plan is specifically tailored to individual scenarios. Our experiments demonstrate that the simulated autonomous vehicle implementing our approach can drive on average at least 40 km and overtake 214 vehicles before experiencing a collision, which is usually caused by inaccuracies in the sensory system. While the proposed system has some drawbacks, we believe that our novel approach demonstrates a potentially powerful future tool for efficient trajectory planning for autonomous vehicles.
Towards Partial Monitoring: It is Always too Soon to Give Up
  • Authors: Angelo Ferrando and Rafael C. Cardoso
  • Paper: https://dx.doi.org/10.4204/EPTCS.348.3
  • Cite:
  • Abstract Runtime Verification is a lightweight formal verification technique. It is used to verify at runtime whether the system under analysis behaves as expected. The expected behaviour is usually formally specified by means of properties, which are used to automatically synthesise monitors. A monitor is a device that, given a sequence of events representing a system execution, returns a verdict symbolising the satisfaction or violation of the formal property. Properties that can (resp. cannot) be verified at runtime by a monitor are called monitorable and non-monitorable, respectively. In this paper, we revise the notion of monitorability from a practical perspective, where we show how non-monitorable properties can still be used to generate partial monitors, which can partially check the properties. Finally, we present the implications from both theoretical and practical perspectives.
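A simplified sketch of why partial verdicts are still worth having (our illustration, not the paper's construction): for "never crash and eventually goal", a violation (a crash) is detectable on a finite prefix, but satisfaction never is, so a classical monitor would reject the property as non-monitorable, while a partial monitor still reports the informative False verdicts. Event names are invented.

```python
# Three-valued partial monitor (illustrative): per-prefix verdicts are
# True, False, or '?' (undetermined). For this property, True is never
# reachable on a finite prefix, but False verdicts remain useful.
def partial_monitor(trace, bad='crash'):
    violated = False
    for i, event in enumerate(trace, start=1):
        violated = violated or event == bad
        yield i, False if violated else '?'

for step, verdict in partial_monitor(['move', 'goal', 'move', 'crash']):
    print(step, verdict)            # ?, ?, ?, False
```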
Towards a formalization of justification and justifiability
  • Authors: Willem Hagemann
  • Paper: https://dx.doi.org/10.4204/EPTCS.348.10
  • Cite:
  • Abstract We introduce the logic QKSD, a normal multi-modal logic over finitely many modalities that additionally supports bounded quantification of modalities. An important feature of this logic is that it allows quantifying over the information components of systems. It can be used to derive justifications and has a notion of justifiability. We also report on a prototypical implementation of a satisfiability solver for this logic and show some examples.

FMAS 2020

The second edition, held online.

A Formal Model for Quality-Driven Decision Making in Self-Adaptive Systems
  • Authors: Fatma Kachi, Chafia Bouanaka, and Souheir Merkouche
  • Paper: https://dx.doi.org/10.4204/EPTCS.329.5
  • Video:
  • Abstract Maintaining an acceptable level of quality of service in modern complex systems is challenging, particularly in the presence of various forms of uncertainty caused by changing execution contexts, unpredicted events, etc. Self-adaptability is a well-established approach for modelling such systems, enabling them to achieve functional and/or quality of service objectives by autonomously modifying their behavior at runtime. Guaranteeing continuous satisfaction of quality objectives needs a rigorous definition and analysis of system behavioral properties. Formal methods constitute a promising and effective solution in this direction, making it possible to rigorously specify mathematical models of a software system and to analyze its behavior. They are also largely adopted to analyze and provide guarantees on the required properties of self-adaptive systems. Therefore, in the present work we introduce a formal model for quality-driven self-adaptive systems under uncertainty. We combine high-level Petri nets and plausible Petri nets in order to model complex data structures enabling the quantification of system quality attributes, and to improve the decision-making process by selecting the most plausible plans with regard to the system's actual context.
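To pin down the basic Petri-net notions the paper builds on (ordinary nets only, far simpler than the high-level and plausible nets it combines), here is a minimal firing-rule sketch; the place and transition names are invented.

```python
# Minimal Petri-net marking/enabledness/firing sketch (illustrative).
def enabled(marking, pre):
    return all(marking.get(p, 0) >= n for p, n in pre.items())

def fire(marking, pre, post):
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m

# Transition 'adapt': consumes a degraded-QoS token, produces a plan token.
marking = {'qos_degraded': 1, 'plan_ready': 0}
pre, post = {'qos_degraded': 1}, {'plan_ready': 1}
if enabled(marking, pre):
    marking = fire(marking, pre, post)
print(marking)   # -> {'qos_degraded': 0, 'plan_ready': 1}
```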
From Requirements to Autonomous Flight: An Overview of the Monitoring ICAROUS Project
  • Authors: Esther Conrad, Alwyn Goodloe, Laura Titolo, Ivan Perez, Swee Balachandran, Dimitra Giannakopoulou, Anastasia Mavridou and Thomas Pressburger
  • Paper: https://dx.doi.org/10.4204/EPTCS.329.3
  • Video:
  • Abstract The Independent Configurable Architecture for Reliable Operations of Unmanned Systems (ICAROUS) is a software architecture incorporating a set of algorithms to enable autonomous operations of unmanned aircraft applications. This paper provides an overview of Monitoring ICAROUS, a project whose objective is to provide a formal approach to generating runtime monitors for autonomous systems from requirements written in a structured natural language. This approach integrates FRET, a formal requirement elicitation and authoring tool, and Copilot, a runtime verification framework. FRET is used to specify formal requirements in structured natural language. These requirements are translated into temporal logic formulae. Copilot is then used to generate executable runtime monitors from these temporal logic specifications. The generated monitors are directly integrated into ICAROUS to perform runtime verification during flight.
How to Formally Model Human in Collaborative Robotics
  • Authors: Mehrnoosh Askarpour
  • Paper: https://dx.doi.org/10.4204/EPTCS.329.1
  • Video:
  • Abstract Human-robot collaboration (HRC) is an emerging trend of robotics that promotes the co-presence and cooperation of humans and robots in common workspaces. Physical vicinity and interaction between humans and robots, combined with the uncertainty of human behavior, could lead to undesired situations where humans are injured. Thus, safety is a primary priority for HRC applications. Safety analysis via formal modeling and verification techniques could considerably help avoid dangerous consequences, but only if the models of HRC systems are sufficiently comprehensive, which requires reasonably realistic models of human behavior. This paper explores state-of-the-art solutions for modeling humans and discusses which ones are suitable for HRC scenarios.
Towards Compositional Verification for Modular Robotic Systems
  • Authors: Rafael C. Cardoso, Louise A. Dennis, Marie Farrell and Matt Luckcuck
  • Paper: https://dx.doi.org/10.4204/EPTCS.329.2
  • Video:
  • Abstract Software engineering of modular robotic systems is a challenging task; verifying that the developed components all behave as they should, individually and as a whole, presents its own unique set of challenges. In particular, distinct components in a modular robotic system often require different verification techniques to ensure that they behave as expected. Ensuring whole-system consistency when individual components are verified using a variety of techniques and formalisms is difficult. This paper discusses how to use compositional verification to integrate the various verification techniques that are applied to modular robotic software, using a First-Order Logic (FOL) contract that captures each component's assumptions and guarantees. These contracts can then be used to guide the verification of the individual components, be it by testing or by the use of a formal method. We provide an example of the former using an illustrative example of an autonomous robot used in remote inspection. We also discuss a way of defining a confidence metric for the verification associated with each component.
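A toy illustration of the assume-guarantee contract idea, checked by testing as the abstract suggests for some components: each component carries an assumption and a guarantee over observations, and a composition is consistent if every component's assumption holds whenever its upstream neighbour's guarantee does, on all sampled observations. The component names, predicates, and observations are our assumptions.

```python
# Toy FOL-contract check by testing (illustrative, not the paper's method).
contracts = {
    'sensor':  {'assume': lambda o: True,
                'guarantee': lambda o: 0.0 <= o['range'] <= 30.0},
    'planner': {'assume': lambda o: 0.0 <= o['range'] <= 30.0,
                'guarantee': lambda o: o['speed'] <= 1.0 if o['range'] < 2.0 else True},
}

def check_chain(order, observations):
    for upstream, downstream in zip(order, order[1:]):
        for o in observations:
            if contracts[upstream]['guarantee'](o) and not contracts[downstream]['assume'](o):
                return f"{downstream} assumption not met after {upstream}"
    return 'contracts consistent on all sampled observations'

obs = [{'range': 1.5, 'speed': 0.8}, {'range': 25.0, 'speed': 2.0}]
print(check_chain(['sensor', 'planner'], obs))
```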
YAP: Tool Support for Deriving Safety Controllers from Hazard Analysis and Risk Assessments
  • Authors: Mario Gleirscher
  • Paper: https://dx.doi.org/10.4204/EPTCS.329.4
  • Video:
  • Abstract This tool paper provides a brief overview of YAP, a research tool for risk modelling and discrete-event safety controller design. For an example from the collaborative robotics domain, it is illustrated how one can use YAP in combination with the stochastic model checker PRISM to derive, verify, and synthesise controllers responsible for switching between activities and safety modes.

FMAS 2019

The first edition, hosted as a satellite workshop at Formal Methods 2019.

A Mission Definition, Verification and Validation Toolchain
  • Authors: Louis Viard, Laurent Ciarletta and Pierre-Etienne Moreau
  • Paper: https://link.springer.com/chapter/10.1007/978-3-030-54994-7_20
  • Abstract Safe operation of Cyber-Physical Systems such as Unmanned Aircraft Systems requires guarantees not only on the system, but also on the mission. Following approaches that are used to produce robust Cyber-Physical Controllers, we present the architecture of a mission definition, verification and validation toolchain. We conclude by reporting on the current state of the authors' implementation of this framework.
A Temporal Logic Semantics for Teleo-Reactive Procedures
  • Authors: Keith Clark
  • Paper: https://link.springer.com/chapter/10.1007/978-3-030-54994-7_1
  • Abstract Teleo-Reactive (TR) procedures are a robotic agent programming language proposed by Nils Nilsson. Each parameterised procedure comprises a sequence of guarded action rules where the guards of later rules are sub-goals of the guard of the first rule. We informally describe the evaluation semantics of a TR procedure call. We then formalise the concept of a TR procedure call evaluation by expressing key properties in LTL. Finally we show how the LTL statements can be used to prove key properties of a simple TR program making use of assumptions about the correspondence between the agent’s Belief Store percept facts and properties of the perceived world, and between the agent’s actions and environment change.
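To make the evaluation semantics concrete, here is a minimal one-tick sketch (ours, not Nilsson's full semantics or the paper's LTL formalisation): on each tick, scan the rules top-down and perform the action of the first rule whose guard holds in the current Belief Store. Guards and actions are invented.

```python
# Minimal Teleo-Reactive rule evaluation sketch (illustrative).
rules = [
    (lambda b: b['holding'],    'stop'),            # goal condition reached
    (lambda b: b['at_object'],  'grasp'),
    (lambda b: True,            'move_to_object'),  # default rule
]

def tick(beliefs):
    """Return the action of the first rule whose guard holds."""
    for guard, action in rules:
        if guard(beliefs):
            return action

beliefs = {'holding': False, 'at_object': False}
print(tick(beliefs))               # -> 'move_to_object'
beliefs['at_object'] = True
print(tick(beliefs))               # -> 'grasp'
```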
CriSGen: Constraint-based Generation of Critical Scenarios for Autonomous Vehicles
  • Authors: Andreas Nonnengart, Matthias Klusch, and Christian Mueller
  • Paper: https://link.springer.com/chapter/10.1007/978-3-030-54994-7_17
  • Abstract Ensuring pedestrian safety is paramount to the acceptance and success of autonomous cars. The scenario-based training and testing of such self-driving vehicles in virtual driving simulation environments has increasingly gained attention in recent years. A key challenge is the automated generation of critical traffic scenarios, which are usually rare in real-world traffic, while computing and testing all possible scenarios is infeasible in practice. In this paper, we present a formal-method-based approach, CriSGen, for the automated and complete generation of critical traffic scenarios for the virtual training of self-driving cars. These scenarios are determined as close variants of given but uncritical and formally abstracted scenarios via reasoning on their non-linear arithmetic constraint formulas, such that the original maneuver of the self-driving car in them will no longer be pedestrian-safe, forcing it to further adapt its behavior during training.
Formalisation and Implementation of Road Junction Rules on an Autonomous Vehicle Modelled as an Agent
  • Authors: Gleifer Alves, Louise Dennis and Michael Fisher
  • Paper: https://link.springer.com/chapter/10.1007/978-3-030-54994-7_16
  • Abstract The design of autonomous vehicles includes obstacle detection and avoidance, route planning, speed control, etc. However, there is a gap concerning how the rules of the road should be represented in an autonomous vehicle. Additionally, it is necessary to understand the behaviour of autonomous systems in order to check whether or not they work according to the rules of the road. Here, we propose a model-checking agent-based architecture to embed the rules of the road in an agent representing the behaviour of an autonomous vehicle.
Verification of Fair Controllers for Urban Traffic Manoeuvres at Intersections
  • Authors: Maike Schwammberger and Christopher Bischopink
  • Paper: https://link.springer.com/chapter/10.1007/978-3-030-54994-7_18
  • Abstract Nowadays, autonomous crossing manoeuvres at intersections are especially challenging. In related work, a crossing controller for provably safe autonomous urban traffic manoeuvres was introduced. We extend this controller by a decentralised scheduling mechanism that ensures fair behaviour of the controller and also guarantees bounded liveness. We verify the correctness of our extension by an implementation and analysis with UPPAAL Stratego.