
CyPhyAssure School Modules

This page has titles and abstracts for each of the school modules, along with slides for some of them. More will be added as they become available. The timetable for the school can be found here.

RoboChart and RoboSim: Verified Simulation for Robotics

Prof. Ana Cavalcanti (University of York)

Simulation is a favoured technique for analysis of robotic systems. Currently, however, simulations are programmed in an ad hoc way, for specific simulators, using either proprietary languages or general languages like C or C++. Even when a higher-level language is used, no clear relation between the simulation and a design model is established. We present a tool-independent notation called RoboSim, designed specifically for modelling of (verified) simulations. We also show how we can use RoboSim models to check if a simulation is consistent with a functional design written in a UML-like notation akin to those often used by practitioners on an informal basis. We show how to check whether the design enables a feasible scheduling of behaviours in cycles as needed for a simulation, and formalise implicit assumptions routinely made when programming simulations. Tool support is also briefly discussed, with an extra session devoted to work with the tool. Our results enable the description of simulations using tool-independent diagrammatic models amenable to verification and automatic generation of code.

Challenges of Modelling, Verifying, and Assuring a Robot Assistant

Dr. Clare Dixon (University of Liverpool) – [Lecture Slides]

Robot assistants are being developed to help and work closely with humans in industrial, domestic and health-care environments. The robots will need to act autonomously, make decisions, and choose between a range of activities, yet will need to operate close to, or in collaboration with, humans. We need to make sure that such robot assistants are reliable, safe and trustworthy. I will discuss our experiences from the EPSRC-funded Trustworthy Robot Assistants project in developing and applying three approaches to the verification and validation of robot assistants. In particular, we focus on two scenarios: a personal robot assistant located in a domestic-style house, and a robot co-worker for a cooperative manufacturing task.

Gaining Confidence in the Correctness of Robotic and Autonomous Systems

Prof. Kerstin Eder (University of Bristol) – [Lecture Slides]

Because no single technique is adequate to cover a whole system in practice, a variety of complementary verification techniques will be needed to gain confidence in the correctness of robotic and autonomous systems. In this talk I will concentrate on three aspects.

First, we will look into how to ensure the correctness of the code that implements robotic navigation algorithms. While much research has been invested in proving navigation algorithms correct, little has been done to demonstrate that the implementations are bug-free. I will present how encoding these algorithms in SPARK, an imperative, strongly typed language used mainly in the safety-critical systems domain, enables formal proof to be used, with 97% of the proof obligations discharged fully automatically and performance comparable to that of the original (buggy) implementations.

Second, we will concentrate on the design stage, as it is important to be able to distinguish between design and implementation errors. In the case of control systems, designs are often written in Simulink before implementation in a programming or hardware design language. Using stability as an example, I will show how assertions can be used to capture the designer’s intent (i.e. the system requirements) in Simulink, and how simulation combined with automated formal proof can be used to gain confidence that the control system meets these requirements.

Third, we will turn to simulation-based testing, focusing on the problem of efficient and effective test generation. In the context of a Human-Robot Interaction scenario, I will present our approach of exploiting planning in multi-agent systems to generate tests that achieve a high level of coverage as part of a coverage-driven verification method.

Finally, I will briefly outline the challenges of verifying robotic and autonomous systems, including specification, (more) automation, creatively combining techniques and exploring the use of AI for verification.

The research presented in this session is based on collaborations within the EPSRC-funded projects “Robust Integrated Verification of Autonomous Systems” (EP/J01205X/1) and “Trustworthy Robotic Assistants” (EP/K006320/1).

The Landscape of Formal Methods for Robotics (Breakout)

Dr. Marie Farrell and Dr. Matt Luckcuck (University of Liverpool)

Autonomous robotic systems are complex, hybrid, and often safety-critical, which makes their formal specification and verification uniquely challenging. Though commonly used, testing and simulation alone are insufficient to ensure the correctness of, or provide sufficient evidence for the certification of, autonomous robotics. Formal methods for autonomous robotics have received some attention in the literature, but no resource provides a current overview. In this session we will give an overview of our recent work that systematically surveys the state of the art in formal specification and verification for autonomous robotics, and focus on some of the individual subdomains to which formal methods are being applied. Specifically, we identify and categorise the challenges posed by, the formalisms aimed at, and the formal approaches for the specification and verification of autonomous robotic systems.

Robotics - Autonomy, Verification, and Responsibility

Prof. Michael Fisher (University of Liverpool)

Robots will increasingly make their own decisions about what to do and when to do it without necessarily consulting humans. How can we trust these systems if we do not know what they will choose to do and, crucially, why they make these choices?

In this talk, I will describe how the use of rational agents at the core of autonomous systems forms the basis not only for transparency and explainability, but also for verifiability of behaviour and decision-making. Once we have robotic systems based on such hybrid agent architectures, we can analyse them rigorously and move towards not only trustworthiness, but also safe, ethical and responsible behaviour.

I will provide some examples from our current and previous projects examining the deployment of hybrid agent architectures in both robots (domestic and industrial) and autonomous vehicles (road and air), covering the verification of safety, ethics, fault tolerance, etc.

Risk Assessment for Autonomous Systems (Breakout)

Dr. Mario Gleirscher (University of York)

This talk will discuss concepts used in risk assessment, describe the application of these concepts in reliability and safety assessment, indicate in which form these concepts can be applied in practical techniques, and build a bridge to rigorous computer-assisted assurance of machines, particularly mobile machines with a very high degree of automation.

Model-based hazard analysis of human-robot interactions with HAZOP-UML

Prof. Jérémie Guiochet (University of Toulouse)

HAZOP-UML is a hazard analysis technique combining the risk analysis technique HAZOP (Hazard and Operability) with the system description language UML (Unified Modelling Language). It is developed at LAAS-CNRS (https://www.laas.fr/projects/HAZOPUML/) and applied in industrial contexts, mainly for robot safety analysis. A selection of guidewords is applied to specific elements of UML diagrams in order to guide deviation analysis. The outputs of HAZOP-UML are a list of hazardous situations, a set of recommendations, and a list of hypotheses. The main advantages of HAZOP-UML are that it is simple, applicable at an early step of the development process, systematic, model-based (sharing models with the development process) and user-centred. The proposed lecture will introduce risk analysis and HAZOP-UML. Examples from robotics applications will be used to illustrate the approach.
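To illustrate the style of analysis described above, the following minimal editorial sketch (in Python, not material from the lecture) enumerates candidate deviations by crossing a handful of HAZOP-style guidewords with elements extracted from a hypothetical UML sequence diagram; the guidewords, model elements, and the hand-over use case shown are illustrative assumptions rather than the actual HAZOP-UML tables.

from itertools import product

# Illustrative subset of HAZOP-style guidewords (hypothetical, not the official HAZOP-UML tables).
GUIDEWORDS = ["No/None", "Other than", "Early", "Late"]

# Elements extracted from a hypothetical UML sequence diagram for a
# "hand over object to person" use case (illustrative only).
UML_ELEMENTS = [
    "message: robot sends 'release gripper' to arm controller",
    "precondition: person's hand detected under gripper",
    "postcondition: object held by person",
]

def candidate_deviations(guidewords, elements):
    # Cross each guideword with each model element; the analyst then judges
    # which combinations describe plausible, hazardous deviations.
    for gw, el in product(guidewords, elements):
        yield f"{gw} applied to [{el}]"

for deviation in candidate_deviations(GUIDEWORDS, UML_ELEMENTS):
    print(deviation)

Each deviation judged plausible would then be documented with its causes, consequences, recommendations, and any hypotheses made, matching the outputs listed above.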

Dynamic Reasoning for Safety Assurance

Dr. Ibrahim Habli (University of York) – [Lecture Slides]

Robotics and Autonomous Systems (RAS) are expected to learn and evolve in order to deal with circumstances that cannot be predicted prior to their deployment. This means that the safety evidence and the overall safety case also have to evolve in order to reflect the nature of the actual risk and to explain and justify the means for managing this dynamic risk. In this session, we will explore mechanisms for incorporating dynamism in safety cases, particularly for systems that use machine learning. We will address this challenge empirically by focusing on real-world applications and analysing the assurance factors that are expected to evolve (e.g. for healthcare, this could be changes in clinical practice, scanning technology, and the frequency of retraining). Further, we will discuss the notion of explainability and the extent to which this could be provided dynamically, and evaluate, from safety and ethical perspectives, how this could support an RAS safety case.

Structuring and Potentially Formalising (Assurance) Case Arguments

Prof. Tim Kelly (University of York) – [Lecture Slides]

In many safety-critical industries, developers and operators are required to construct and present well-reasoned arguments that their systems achieve acceptable levels of safety. These arguments (together with supporting evidence) are typically referred to as an “assurance case”. Safety arguments have historically been communicated through narrative text, often leading to problems of comprehension and communication. Over the last thirty years, there has been increasing interest in using structured argumentation notations such as GSN (Goal Structuring Notation), CAE (Claims-Argument-Evidence), or SACM (Structured Assurance Case Metamodel) to communicate the structure of the argument. Whilst such arguments are structured, they remain informal. There is increasing interest in exploring how these informal arguments may be modelled in formal logic, potentially opening up the benefits of forms of analysis not possible with an informally recorded argument. In this lecture, we will explore how evidence originating from formal methods can be integrated into an assurance case, whilst retaining substantive connections with the necessary informal argumentation.

Critical Barriers to Assuring Autonomous Robots

Prof. John McDermid (University of York)

There are challenges in assuring the safety of Robots and Autonomous Systems (RAS), particularly where their functionality is dependent on Artificial Intelligence (AI) or Machine Learning (ML). The Assuring Autonomy International Programme is developing approaches to the assurance and regulation of RAS, with the intention of producing a Body of Knowledge (BoK) that codifies both the challenges and the approaches to resolving them. In particular, the Programme has identified Critical Barriers to Assurance and Regulation (CBARs) which, if not overcome, could lead to the inability to deploy beneficial systems or, worse, the deployment of unsafe systems. The talk will illustrate some of the CBARs and identify approaches to overcoming them. The intent is to provide a systems engineering view of the issues, not focusing on particular technologies or analysis methods.

Second Generation Model-based Testing – Provably Strong Testing Methods for the Certification of Autonomous Systems

Prof. Jan Peleska (University of Bremen) – [Lecture 1 Slides, Lecture 2 Slides, Breakout Slides]

Starting with a short review of the state of the art in model-based testing, major challenges for testing autonomous systems of systems are highlighted. In particular, we explain why the current approach of basing automated test case and test procedure generation on complete behavioural models will become infeasible in the autonomous systems domain. Promising methods to overcome this problem are presented. We discuss the challenge of how to approximate complete fault coverage - that is, the capability to uncover all system failures under certain hypotheses - in the presence of incomplete behavioural models. This aspect is of high importance, because test suites guaranteeing complete fault coverage are optimally suited for obtaining certification credit for safety-critical systems. Examples are based on a vision developed by the Airbus Urban Mobility division, where small passenger cabins can alternatively be placed on autonomous car chassis, attached to autonomous drones, or inserted into likewise autonomous trains. Cars, drones, and trains may collaborate as autonomous constituents of a system-of-systems in order to solve complex multi-party/multi-destination transportation problems.

Assurance challenges for robotics and AI in nuclear fusion

Robert Skilton (CEng, MIET) (RACE, UK Atomic Energy Authority)

The world needs new, cleaner ways to meet our increasing energy demand as concerns grow over climate change and declining supplies of fossil fuels, and nuclear fusion is one of the most promising options for generating large amounts of carbon-free energy in the future. A number of significant challenges are involved in the commercialisation of fusion energy, including the development of viable maintenance strategies. Such strategies would likely incorporate large numbers of robotic and AI systems, which would need to work within the context of commercial productivity demands as well as nuclear regulation. This talk will explore some of the challenges associated with the maintenance of future fusion power plants, focussing on those associated with robotics and AI, where assurance plays a key role.

Practically Formal Development and Assurance of Complex Software-Intensive Safety-Critical Systems

Prof. Alan Wassyng (McMaster University, Canada) – [Lecture 1 Slides]

Many modern systems are incredibly complex and are also potentially dangerous. Formal methods have a long history and play an ever-increasing role in the (software) development process. The talk will present a personal view of how and when to use mathematics in both the development and assurance of such systems. The emphasis in the talk will be on building systems such that we can be adequately confident that they are safe - when used as intended. The talk will include a brief look at a new way of assuring safety-critical systems that promises to be more rigorous and less ad hoc than current assurance case practice.

Formal Methods for Certification of Control Software and Potential for their Successful Application in Autonomous Robot Control

Prof. Alan Wassyng (McMaster University, Canada) – [Lecture 2 Slides]

There are various critical properties we have to assure in different autonomous systems - including robots. Obvious amongst these are safety, security and dependability. State-of-the-practice robotics development uses a variety of formal methods, and very often this includes verification activities. A common misconception is that verification is synonymous with certification. It is not. Certification of a system can be thought of as the confirmation of specific characteristics of that system, usually provided by an external review. This could make use of verification results, but is much more than verification. Traditionally, certification of software-intensive systems has involved a myriad of expert reviews. The use of assurance or safety cases has not really changed that. These cases still rely on expert opinion for many components in the case. The “explicit rigorous argument” originally promised for assurance cases has proven to be quite elusive. Modern assurance cases are still quite effective in that they encourage the team developing the assurance case to question the validity of steps in enough detail to develop adequate confidence in the outcome of the case. This talk will discuss key concepts of assuring system safety, why assurance cases are important, how they can be used to satisfy certification standards, and how formal methods should play an important role in both the evidence and the assurance itself.

Towards Verified and Certifiable Subsystems

Prof. Burkhart Wolff (University Paris-Sud, France)

This talk addresses the problem of comprehensive verification of (safety-critical) subsystems including the processor, OS, and application functions. Modelling, refinement proofs, code generation, and document generation are done in Isabelle/HOL. Particular emphasis is placed on the aspect of document generation targeting a formal certification process; the approach is centred around a central document from which all artefacts are generated, in order to ensure their coherence in both formal and semi-formal aspects. The approach is demonstrated for the Odometric Function of a railway system implemented on top of seL4 and a SabreLight Board. The toolchain built around Isabelle is called CVCE.
