
This one-day workshop will bring together researchers working on a range of techniques for the formal verification of autonomous systems, to present recent work in the area, discuss key difficulties, and stimulate collaboration between the robotics and formal methods communities. The workshop will include invited speakers, contributed papers, and experience reports.

FMAS 2020 will be held online, due to the ongoing disruption caused by COVID-19. Since many of the conferences that we had considered for hosting the workshop are not accepting satellite events, FMAS 2020 will run as a stand-alone event.

Scope

Autonomous and robotic systems present unique challenges for formal methods. They are embodied entities that can interact with the real world and make autonomous decisions. Amongst others, they can be viewed as safety-critical, cyber-physical, hybrid, and real-time systems. Key issues for formal methods applied to autonomous systems include capturing how the system deals with a dynamic external environment and verifying the system's decision-making capabilities, including planning, ethical, and reconfiguration choices. Some autonomous systems require certification before deployment, while others require public trust for wide adoption; formal methods are being used to tackle both of these scenarios.

The goals of this workshop are to bring together leading researchers in this area to present recent and ongoing work, including experience reports and case studies, and to identify future directions for this emerging application of formal methods. The workshop is concerned with the use of formal methods to specify, model, or verify autonomous or robotic systems, in whole or in part. Submissions may focus on case studies that identify the challenges for formal methods in this area, or experience reports that provide guidelines for tackling these challenges.

We are especially interested in work that uses integrated formal methods, discusses the future directions of the field, uses runtime verification or other approaches to deal with the reality gap, addresses the crossover between safety and security, or verifies systems against safety assurance arguments or standards documents.

Programme Information

Below you can find the link to the proceedings and the details of the invited talks.

The details of the papers accepted at FMAS 2020 are available in the Accepted Papers section.

Proceedings

The proceedings for FMAS 2020 are available online via EPTCS. We would like to thank our editor and the rest of the team at EPTCS for their help with publishing these proceedings.

Invited Talks

There were two invited speakers for FMAS 2020.

  • Ivan Perez is a research scientist at the National Institute of Aerospace, and has been a member of the NASA Formal Methods Group since 2018. As a contractor assigned to NASA Langley Research Center, he investigates the application of formal methods to problems in aerospace, with particular focus on runtime verification of unmanned aerial vehicles. In the past, Dr. Perez also worked as a programmer and researcher for the High Performance Computing Center (Germany), IMDEA Software (Spain), the Technical University of Madrid (Spain), and the University of Twente (Netherlands), as well as for multiple functional programming companies. Dr. Perez completed his PhD in Computer Science at the University of Nottingham (UK), which focused on testing and functional programming applied to games and user interfaces. He also holds a Master’s Degree in Computational Logic and a Degree of Engineer in Computer Science, both from the Technical University of Madrid.
  • Title: Runtime Verification with Copilot 3
  • Abstract: Ultra-critical systems require high-level assurance, which cannot always be guaranteed at compile time. Runtime verification (RV) enables monitoring these systems at runtime, to detect property violations early and limit their potential consequences. However, introducing monitors into ultra-critical systems poses a challenge, as failures and delays in the RV subsystem could affect other subsystems and threaten the mission as a whole. In this talk we present Copilot 3, a runtime verification framework for real-time embedded systems. Copilot monitors are written in a compositional, stream-based language with support for a variety of temporal logics, which results in robust, high-level specifications that are easier to understand than low-level imperative implementations. The framework then translates monitor specifications into C code with static memory requirements, which can be compiled to run on embedded hardware. We also discuss how Copilot has been used in experimental research by NIA, NASA and other organizations, its place in relation to other RV frameworks, and possible future directions for RV in autonomous systems. (A minimal illustrative monitor sketch appears after this list.)
  • Louise Dennis is from the University of Manchester. Her background is in artificial intelligence, and more specifically in agent and autonomous systems and automated reasoning. She has worked on the development of several automated reasoning and theorem-proving tools, most notably the Agent JPF model checker for BDI agent languages; the lambda-clam proof planning system; and the PROSPER Toolkit for integrating an interactive theorem prover (HOL) with automated reasoning tools (such as SAT solvers) and CASE/CAD tools. More recently she has investigated rational agent programming languages and architectures for autonomous systems, with a particular emphasis on verifiable systems and ethical reasoning.
  • Title: Verifying Machine Ethics
  • Abstract: Machine ethics is concerned with the challenge of constructing ethical and ethically behaving artificial agents and systems. One important theme within machine ethics concerns explicitly ethical agents – those which are not ethical simply because they are constrained by their programming or deployment to be so, but which use a concept of ethics in some way as part of their operation. Normally this requires the provision of rules, utilities or priorities by a programmer, knowledge engineer or user. In this talk I will address the question of how such explicitly ethical programs can be verified. What kind of properties can we consider and what kind of errors might we find? (A toy sketch of a checkable ethical-selection property appears after this list.)
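
To make the stream-based style of the first talk concrete, here is a minimal monitor sketch in the style of Copilot's Haskell-embedded specification language. It is a sketch only: the stream name altitude, the handler name on_too_low, the 100 m threshold, and the sample values are our own inventions, and exact API details may differ between Copilot versions.

    {-# LANGUAGE RebindableSyntax #-}
    module Main where

    import Language.Copilot

    -- External sensor stream; the Just list supplies sample values so
    -- the specification can be run in Copilot's interpreter.
    altitude :: Stream Double
    altitude = extern "altitude" (Just [120, 110, 95, 80, 102])

    -- The monitored property: altitude must stay at or above 100 m.
    tooLow :: Stream Bool
    tooLow = altitude < 100

    -- Fire the handler "on_too_low" (a C function, once compiled)
    -- whenever the property is violated, passing the offending value.
    spec :: Spec
    spec = trigger "on_too_low" tooLow [arg altitude]

    -- Simulate 5 steps of the monitor in the interpreter.
    main = interpret 5 spec

Interpreting the specification simulates the monitor; as the abstract describes, Copilot's C back end can instead translate the same specification into C code with static memory requirements, with on_too_low supplied as a callback by the host system.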
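
The second talk concerns verifying agents that choose among options using programmer-supplied ethical rules or priorities. As a toy illustration only (not Dennis's formalism or tools), the sketch below shows the shape of one verifiable property: the plan an agent selects is never strictly worse, ethically, than an available alternative. All names and the severity encoding are hypothetical.

    import Data.List (minimumBy)
    import Data.Ord (comparing)

    -- A plan annotated with the severities of the ethical principles it
    -- violates (higher = worse); the encoding is hypothetical.
    data Plan = Plan { name :: String, violations :: [Int] }

    -- The worst violation a plan commits (0 if it commits none).
    worst :: Plan -> Int
    worst p = maximum (0 : violations p)

    -- The agent's selection rule: pick a plan whose worst violation is
    -- least severe (input assumed non-empty).
    choose :: [Plan] -> Plan
    choose = minimumBy (comparing worst)

    -- The property a model checker or test suite could verify: no
    -- available plan is ethically preferable to the chosen one.
    ethicalChoice :: [Plan] -> Bool
    ethicalChoice ps = all (\p -> worst (choose ps) <= worst p) ps

    main :: IO ()
    main = print (ethicalChoice [Plan "wait" [2], Plan "swerve" [5, 1]])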

Accepted Papers

We accepted the following papers for FMAS 2020.

  • Authors: Mehrnoosh Askarpour
  • Abstract: Human-robot collaboration (HRC) is an emerging trend in robotics that promotes the co-presence and cooperation of humans and robots in shared workspaces. Physical vicinity and interaction between humans and robots, combined with the uncertainty of human behavior, could lead to undesired situations where humans are injured. Thus, safety is a primary priority for HRC applications. Safety analysis via formal modeling and verification techniques can help avoid dangerous consequences, but only if the models of HRC systems are sufficiently comprehensive, which requires reasonably realistic models of human behavior. This paper explores state-of-the-art solutions for modeling humans and discusses which ones are suitable for HRC scenarios.
  • Authors: Rafael C. Cardoso, Louise A. Dennis, Marie Farrell, Michael Fisher, and Matt Luckcuck
  • Abstract: Software engineering of modular robotic systems is a challenging task; verifying that the developed components all behave as they should, individually and as a whole, presents its own unique set of challenges. In particular, distinct components in a modular robotic system often require different verification techniques to ensure that they behave as expected. Ensuring whole-system consistency when individual components are verified using a variety of techniques and formalisms is difficult. This paper discusses how to use compositional verification to integrate the various verification techniques that are applied to modular robotic software, using a First-Order Logic (FOL) contract that captures each component's assumptions and guarantees. These contracts can then be used to guide the verification of the individual components, be it by testing or by the use of a formal method. In this paper, we provide an example of the former, using an illustrative example of an autonomous robot used in remote inspection. We also discuss a way of defining a confidence metric for the verification associated with each component. (A minimal contract sketch appears after this list.)
  • Authors: Aaron Dutle, Cesar Munoz, Esther Conrad, Alwyn Goodloe, Laura Titolo, Ivan Perez, Swee Balachandran, Dimitra Giannakopoulou, Anastasia Mavridou and Thomas Pressburger
  • Abstract: The Independent Configurable Architecture for Reliable Operations of Unmanned Systems (ICAROUS) is a software architecture incorporating a set of algorithms to enable autonomous operations of unmanned aircraft applications. This paper provides an overview of Monitoring ICAROUS, a project whose objective is to provide a formal approach to generating runtime monitors for autonomous systems from requirements written in a structured natural language. This approach integrates FRET, a formal requirement elicitation and authoring tool, and Copilot, a runtime verification framework. FRET is used to specify formal requirements in structured natural language. These requirements are translated into temporal logic formulae. Copilot is then used to generate executable runtime monitors from these temporal logic specifications. The generated monitors are directly integrated into ICAROUS to perform runtime verification during flight. (A toy illustration of the requirement-to-monitor step appears after this list.)
  • Authors: Mario Gleirscher
  • Abstract: This tool paper provides a brief overview of YAP, a research tool for risk modelling and discrete-event safety controller design. Using an example from the collaborative robotics domain, it illustrates how YAP can be used in combination with the stochastic model checker PRISM to derive, verify, and synthesise controllers responsible for switching between activities and safety modes. (A toy mode-switching sketch appears after this list.)
  • Authors: Fatma Kachi, Chafia Bouanaka, and Souheir Merkouche
  • Abstract: Maintaining an acceptable level of quality of service in modern complex systems is challenging, particularly in the presence of various forms of uncertainty caused by a changing execution context, unpredicted events, etc. Self-adaptability is a well-established approach for modelling such systems, enabling them to achieve functional and/or quality-of-service objectives by autonomously modifying their behavior at runtime. Guaranteeing continuous satisfaction of quality objectives requires a rigorous definition and analysis of the system's behavioral properties. Formal methods are a promising and effective means of rigorously specifying mathematical models of a software system and analyzing its behavior, and they are widely adopted to analyze and provide guarantees on the required properties of self-adaptive systems. We therefore introduce a formal model for quality-driven self-adaptive systems under uncertainty. We combine high-level Petri nets and plausible Petri nets in order to model complex data structures that enable the quantification of system quality attributes, and to improve the decision-making process by selecting the most plausible plans with regard to the system's actual context. (A toy Petri net stepper appears after this list.)
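
To illustrate the assume-guarantee idea behind Cardoso et al.'s FOL contracts, here is a minimal sketch. It is not the authors' formalism: the record type, the remote-inspection state, and checking compatibility by evaluating predicates on sample states (rather than by first-order proof) are simplifications of ours.

    -- An assume-guarantee contract over some system state s.
    data Contract s = Contract
      { assumes    :: s -> Bool  -- what the component relies on
      , guarantees :: s -> Bool  -- what it promises in return
      }

    -- A component satisfies its contract on an observed state if the
    -- assumption implies the guarantee there.
    satisfies :: Contract s -> s -> Bool
    satisfies c s = not (assumes c s) || guarantees c s

    -- Crude compatibility check over sample states: the upstream
    -- component's guarantee must establish the downstream assumption.
    -- (A formal approach would prove this implication in FOL instead.)
    compatible :: [s] -> Contract s -> Contract s -> Bool
    compatible samples up down =
      all (\s -> not (guarantees up s) || assumes down s) samples

    -- Hypothetical state for a remote-inspection robot.
    data State = State { battery :: Double, speed :: Double }

    planner, motors :: Contract State
    planner = Contract (\s -> battery s > 0.1) (\s -> speed s <= 2.0)
    motors  = Contract (\s -> speed s <= 2.0)  (\_ -> True)

    main :: IO ()
    main = print (compatible [State 0.5 1.0, State 0.9 2.5] planner motors)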
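
The Dutle et al. paper turns structured natural-language requirements into temporal logic and then into executable monitors. The plain-Haskell sketch below is not FRET or Copilot output, and the requirement and all names are invented; it only shows the essential step, where a past-time "historically" operator turns a pointwise property into a step-by-step verdict stream.

    -- Illustrative requirement (our own, FRETish in flavour):
    -- "the aircraft shall always satisfy altitude >= 100".
    prop :: Double -> Bool
    prop alt = alt >= 100

    -- Past-time "historically": at each step, True iff the property has
    -- held at every step so far; the first violation makes the verdict
    -- False from that point onwards.
    historically :: [Bool] -> [Bool]
    historically = scanl1 (&&)

    -- Run the monitor over a sample trace of altitude readings.
    main :: IO ()
    main = print (historically (map prop [120, 110, 95, 130]))
    -- prints [True,True,False,False]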
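
For Gleirscher's paper, the sketch below gives a toy discrete-event controller that switches between safety modes on workspace events. It is not YAP or PRISM output, and the modes and events are hypothetical; it only illustrates the kind of mode-switching controller whose properties (for example, "a close-proximity event always leads to EStop") a model checker could verify.

    -- Safety modes and workspace events (hypothetical).
    data Mode  = Normal | SlowDown | EStop deriving (Show, Eq)
    data Event = HumanEnters | HumanClose | HumanLeaves

    step :: Mode -> Event -> Mode
    step _ HumanClose  = EStop     -- proximity breach: stop immediately
    step _ HumanLeaves = Normal    -- workspace clear: resume normal mode
    step _ HumanEnters = SlowDown  -- human present: reduce speed

    -- Run the controller over an event sequence, recording each mode.
    run :: Mode -> [Event] -> [Mode]
    run = scanl step

    main :: IO ()
    main = print (run Normal [HumanEnters, HumanClose, HumanLeaves])
    -- prints [Normal,SlowDown,EStop,Normal]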
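
Finally, for readers unfamiliar with the substrate of the Kachi et al. paper, here is a toy Petri net stepper in plain Haskell. It implements only ordinary place/transition nets, not the paper's high-level or plausible Petri nets: markings assign tokens to places, and a transition fires only when its input places hold enough tokens.

    import qualified Data.Map as M

    type Place   = String
    type Marking = M.Map Place Int

    data Transition = Transition
      { consumes :: [(Place, Int)]  -- tokens required and removed
      , produces :: [(Place, Int)]  -- tokens added on firing
      }

    -- A transition is enabled when every input place has enough tokens.
    enabled :: Marking -> Transition -> Bool
    enabled m t = all (\(p, n) -> M.findWithDefault 0 p m >= n) (consumes t)

    -- Fire a transition, if enabled, producing the successor marking.
    fire :: Marking -> Transition -> Maybe Marking
    fire m t
      | enabled m t = Just (foldr add (foldr sub m (consumes t)) (produces t))
      | otherwise   = Nothing
      where
        sub (p, n) = M.adjust (subtract n) p
        add (p, n) = M.insertWith (+) p n

    main :: IO ()
    main = print (fire (M.fromList [("idle", 1)]) adapt)
      where adapt = Transition [("idle", 1)] [("adapting", 1)]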

Chairs

Programme Committee