FMAS 2021 is a one-day workshop that brings together researchers working on a range of techniques for the formal verification of autonomous systems, to present recent work in the area, discuss key challenges, and stimulate collaboration between autonomous systems and formal methods researchers.
FMAS 2021 will be an online event, with the possibility of being a hybrid event if COVID-19 restrictions allow. The decision between hybrid and purely online will be made on the author notification date (see Important Dates).
Autonomous Systems present unique challenges for formal methods. They are often embodied in robotic systems that can interact with the real world, and they make independent decisions. Amongst other categories, they can be viewed as safety-critical, cyber-physical, hybrid, and real-time systems.
Key challenges for applying formal methods to autonomous systems include:
- the system’s dynamic deployment environment;
- verifying the system’s decision making capabilities – including planning, ethical, and reconfiguration choices; and
- using formal methods results as evidence given to certification or regulatory organisations.
FMAS welcomes submissions that use formal methods to specify, model, or verify autonomous systems, in whole or in part. We are especially interested in work using Integrated Formal Methods, where multiple (formal or non-formal) methods are combined.
Autonomous systems are often embedded in robotic or cyber-physical systems, and they share many features (and verification challenges) with automated systems. FMAS welcomes submissions with applications to:
- automated systems,
- semi-autonomous systems,
- fully-autonomous systems, or
- systems that have the potential to be autonomous in the future.
Topics of interest include (but are not limited to):
- Novel or Integrated Formal Methods that are suited to Autonomous or Automated Systems,
- Runtime Verification or other formal approaches to deal with the reality gap (the gap between models/simulations and the real world),
- Verification against safety assurance arguments or standards documents,
- Case Studies that identify challenges when applying formal methods to autonomous systems,
- Experience Reports that provide guidance for tackling challenges with formal methods or tools, or
- Discussions of the future directions of the field.
Submission and Publication
There are two categories of submission:
- Short papers – 6 pages (excluding references), suitable for overviews of an approach or work-in-progress. Work from PhD students is particularly welcome;
- Long papers – 15 pages (excluding references), suitable for descriptions of completed studies, new approaches, or new directions.
Submission will be via EasyChair: https://easychair.org/conferences/?conf=fmas2021.
FMAS has a single track that accepts both categories of submission. Submissions must be prepared using the EPTCS LaTeX style.
Each submission will receive at least three, single-blind reviews. If a paper is accepted, at least one of the authors must attend the workshop to present their work. We intend that accepted papers will be published via EPTCS.
- Paper Submission: 13th of August 2021 (AOE), extended from the 30th of July 2021
- Author Notification: 17th of September 2021, extended from the 3rd of September 2021
- Final Version due: 8th of October 2021, extended from the 24th of September 2021
- Workshop: 21st of October 2021
FMAS 2021 will be held online via a video conference system. We will circulate details closer to the event, when the number of attendees is clearer. If a hybrid event is possible, the venue for physical participants will be Maynooth University, Ireland.
(All times are GMT)
To Be Announced
The proceedings for FMAS 2021 will be made available online via EPTCS.
Clare Dixon is Professor of Computer Science at the University of Manchester (UK) and leader of the Autonomy and Verification research group. Her research interests include the verification of robot and autonomous systems.
- Title : Help or Hazard: Towards Verifying Autonomous Robot Systems
- Abstract: Autonomous robotic systems are being developed to operate in a range of environments: industrial, transport, healthcare, and domestic. They have the potential to be of great benefit to society. Further, they may be assisting, collaborating with, or working near humans. We must make sure that they are safe, robust, reliable, and trustworthy, and that they do not themselves become the hazard. In this talk I will discuss ways to verify such systems, relating to research carried out over a number of projects with different types of robot, operating in a range of scenarios.
Divya Gopinath is a Researcher in Formal Verification in the Robust Software Engineering (RSE) group at the NASA Ames Research Center. Her research focuses on applying formal methods to analyze and verify neural networks and to debug software programs. Divya is also a member of the FMAS 2021 Programme Committee.
- Title : Understanding and Verifying Deep Neural Networks
- Abstract: Deep Neural Networks (DNNs) have gained immense popularity in recent times and are widely used in applications such as image classification, sentiment analysis, and speech recognition, as well as in safety-critical applications such as autonomous driving. However, they suffer from limitations such as a lack of explainability and robustness, which raise safety and security concerns about their usage. Further, the complex structure and large input spaces of DNNs act as an impediment to thorough verification and testing. The SafeDNN project in the Robust Software Engineering (RSE) group at NASA aims at exploring techniques to ensure that systems that use deep neural networks are safe, robust, and interpretable. In this talk, I will present our technique Prophecy, which automatically infers formal properties of deep neural network models. The tool extracts patterns based on neuron activations as preconditions that imply certain desirable output properties of the model. I will highlight case studies that use Prophecy in obtaining explanations for network decisions, understanding correct and incorrect behaviour, providing formal guarantees with respect to safety and robustness, and debugging neural network models. We have applied the tool to image classification networks, neural network controllers providing turn advisories in unmanned aircraft, regression models used for autonomous centre-line tracking in aircraft, and neural network object detectors.
To Be Announced
- Stylianos Stelios Basagiannis Collins Advanced Technology Centre (Ireland)
- Christoph Bischopink University of Oldenburg (Germany)
- Rafael C. Cardoso University of Manchester (UK)
- Louise A. Dennis University of Manchester (UK)
- Angelo Ferrando University of Genova (Italy)
- Michael Fisher University of Manchester (UK)
- Georgios Giantamidis Collins Advanced Technology Centre (Ireland)
- Mario Gleirscher University of York (UK)
- Divya Gopinath NASA Ames Research Center (USA)
- Mallory S. Graydon NASA Langley Research Center (USA)
- Jérémie Guiochet University of Toulouse (France)
- Rob Hierons University of Sheffield (UK)
- Taylor T. Johnson Vanderbilt University (USA)
- Raluca Lefticaru University of Bradford (UK)
- Sven Linker Lancaster University Leipzig (Germany)
- Lina Marsso University of Toronto (Canada)
- Anastasia Mavridou NASA Ames Research Center (USA)
- Claudio Menghi University of Luxembourg (Luxembourg)
- Alice Miller University of Glasgow (UK)
- Alvaro Miyazawa University of York (UK)
- Rosemary Monahan Maynooth University (Ireland)
- Dominique Méry Université de Lorraine (France)
- Colin Paterson University of York (UK)
- Ivan Perez NASA Langley/National Institute of Aerospace (USA)
- Chiara Picardi University of York (UK)
- Maike Schwammberger University of Oldenburg (Germany)
- Marija Slavkovik University of Bergen (Norway)
- Silvia Lizeth Tapia Tarifa University of Oslo (Norway)
- Laura Titolo National Institute of Aerospace (USA)
- Hao Wu Maynooth University (Ireland)