Our research focuses on safety-critical software development and assurance across multiple domains, including emerging autonomous technologies. It ranges from fundamental principles through to trial implementation, linking core computer science with disciplines including engineering, law, mathematics, philosophy and psychology.

Contact us

Professor John McDermid

High Integrity Systems Research Group lead

john.mcdermid@york.ac.uk

The group is well known for its work on assurance cases, having developed the widely used Goal Structuring Notation (GSN). Its work on assurance of machine learning (ML) includes argument patterns supporting each stage of the assurance process, such as training data selection. There is ongoing work with lawyers and philosophers (ethicists) to develop principled ethical assurance arguments and to find ways of representing and reasoning about responsibilities for the decisions and actions of autonomous systems.

There are several strands of work focused on the safety assurance and sociotechnical resilience of robotics and autonomous systems (RAS). Such systems move decision-making responsibility from human to machine, and we're developing analysis methods to identify hazards and to derive safety requirements to control those hazards. Similarly, we are developing ways of using mathematical methods to develop decision-making algorithms together with proofs of their soundness. The group is also exploring the use of explanations of ML-based decision-making systems to support validation, and hence assurance, of such algorithms.

The group is involved in several major projects. The six-year Assuring Autonomy International Programme (AAIP), supported by Lloyd’s Register Foundation, focuses on the assurance and regulation of RAS; working with its partners, the AAIP develops and evaluates assurance methods in domains as diverse as agriculture, autonomous driving, healthcare, maritime and space. The group leads the Trustworthy Autonomous Systems Node in Resilience, one node of the EPSRC-funded Trustworthy Autonomous Systems (TAS) programme, and is involved in two further TAS-funded projects concerned with improving trust in autonomous systems, including the Assuring Responsibility for Trusted Autonomous Systems project. The group also has projects funded directly by companies in several different sectors and by government bodies, including NHS Digital and Volvo Construction Equipment.

We advance the state of the art and practice of assuring the safety of complex computer-based systems, including autonomous systems such as self-driving cars. This is a critical step to enable the promised benefits of robotics and autonomy to be realised whilst ensuring the health and safety of the public. The group provides the conceptual underpinnings and key methods to design, implement, verify and assure robots and autonomous systems, with a special emphasis on assurance of safety. As the systems progressively embrace ML and become capable of operating with greater degrees of autonomy, this introduces ethical challenges, and the group’s work is expanding to embrace assurance of ethical as well as safe behaviour of ML-based systems. 

More specific objectives include: 

  • providing verification and assurance processes for demonstrating the safety of ML-based systems
  • developing methods for analysing robots and autonomous systems to derive safety requirements for the system as a whole, as well as for the sensing and decision-making subsystems
  • developing theory and tools for the verification of these safety requirements
  • developing methods for the synthesis of correct-by-construction decision-making control systems
  • developing principles and guidelines for ethically assured development of resilient robotics and autonomous systems
  • understanding how to ensure the safety of the interactions between humans and autonomous systems
  • evaluating the efficacy of safety engineering methods. 

The group influences industrial practice and regulations by engaging with industry and government across a range of sectors including aerospace, automotive, healthcare and maritime.

The group’s research has influenced national and international safety standards, guidelines and regulations. These include ISO 26262, the international functional safety standard for electrical and electronic systems in road vehicles, which is followed by automotive suppliers worldwide. The GSN method, which is used to document and present structured arguments that safety goals have been achieved, has advanced safety case practice within regulatory structures and processes.

The group’s ongoing work on regulation and standards for autonomous driving will benefit much of UK society by establishing norms for assuring that the vehicles are safe and behave ethically. This work will also benefit citizens of other countries through influence on international standards. The group’s work in the healthcare sector is already influencing policy and is likely to benefit many hospital patients, for example by assuring and explaining the safety of recommendations from ML-based decision-support systems. The work will also benefit clinicians and is already helping to provide a sound technical basis for regulating such systems. This form of impact will increasingly be seen in other sectors including aviation, maritime and defence. The group also provides specialist research-led teaching to industry and government departments to support the transition of emerging principles and methods from research into practice.

Group members

  • Dr Rob Alexander, Academic staff, rob.alexander@york.ac.uk
  • Professor Radu Calinescu, Academic staff, radu.calinescu@york.ac.uk
  • Dr Ibrahim Habli, Academic staff, ibrahim.habli@york.ac.uk
  • Professor John McDermid, Academic staff (group lead), john.mcdermid@york.ac.uk
  • Dr Colin Paterson, Academic staff, colin.paterson@york.ac.uk
  • Dr Richard Hawkins, Research staff, richard.hawkins@york.ac.uk
  • Dr Kester Clegg, Research Associate, kester.clegg@york.ac.uk
  • Dr Jordan Hamilton, Research Associate, jordan.hamilton@york.ac.uk
  • Dr Vicky Hodge, Research Associate, victoria.hodge@york.ac.uk
  • Dr Calum Imrie, Research Associate, calum.imrie@york.ac.uk
  • Dr Yan Jia, Research Fellow, yan.jia@york.ac.uk
  • Marten Kaas, Research Associate, marten.kaas@york.ac.uk
  • Dr John Molloy, Research Fellow, john.molloy@york.ac.uk
  • Matt Osborne, Research Fellow, matthew.osborne@york.ac.uk
  • Mike Parsons, Research Fellow, m.parsons@york.ac.uk
  • Dr Chiara Picardi, Research Associate, chiara.picardi@york.ac.uk
  • Dr Zoe Porter, Research Associate, zoe.porter@york.ac.uk
  • Ioannis Stefanakos, Research Associate, ioannis.stefanakos@york.ac.uk
  • Dr Simos Gerasimou, Academic affiliate
  • Dr Claire Ingram, Teaching & Scholarship affiliate, claire.ingram@york.ac.uk
