Safe machine learning

As safety-critical systems increasingly utilise machine learning (ML) and autonomy, the group is focusing its research on this area. Some of the group's members are part of the Assuring Autonomy International Programme (AAIP) and authored the first standardised process for assuring the safety of machine learning used in autonomous systems.

The Assurance of Machine Learning for use in Autonomous Systems (AMLAS) methodology incorporates a set of safety case patterns and a process for systematically integrating safety assurance into the development of ML components. It helps developers construct a compelling safety argument about the ML model that can feed into a wider system safety case. The group has recently launched a tool to help users work through the AMLAS process and create a safety case for their ML component.

Contact us

Professor John McDermid

High Integrity Systems Research Group lead

john.mcdermid@york.ac.uk
