When an autonomous system, such as a self-driving car or a healthcare diagnosis app, takes or recommends an action that affects you, how do we establish who is responsible for the outcome?

Establishing who is responsible for the decisions and outcomes of autonomous systems is a crucial element of their trustworthy governance. 

This project will develop an interdisciplinary methodology for tracing and allocating responsibility for the decisions and outcomes of autonomous systems across their lifecycle, under the conditions of complexity, uncertainty, and change that characterise these systems and their operating environments.

Built on a rigorous conceptual foundation and on well-formed assurance argument structures, the methodology's primary output will be a responsibility assurance case. This assurance case will present a clear and defensible argument, supported by evidence, that the tracing and allocation of responsibility has been addressed, is sufficient, and is complete.
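To make the shape of such an argument concrete, the following is a purely illustrative sketch, not the project's own notation or tooling, of how a responsibility assurance case might be represented as a simple claim-and-evidence tree, loosely in the spirit of structured assurance notations such as GSN. All class names, fields, and example claims here are hypothetical.

```python
# Illustrative sketch only: a minimal claim/evidence tree for a
# responsibility assurance case. All names are hypothetical and
# do not come from the project itself.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Evidence:
    """An item of evidence supporting a claim (e.g. a review record)."""
    description: str
    source: str


@dataclass
class Claim:
    """A claim about responsibility, backed by evidence and sub-claims."""
    statement: str
    evidence: List[Evidence] = field(default_factory=list)
    subclaims: List["Claim"] = field(default_factory=list)

    def is_supported(self) -> bool:
        """A claim counts as supported if it has direct evidence and
        every one of its sub-claims is supported in turn."""
        return bool(self.evidence) and all(c.is_supported() for c in self.subclaims)


# Example: a top-level claim that responsibility allocation is complete,
# decomposed across two lifecycle stages.
case = Claim(
    statement="Responsibility for system outcomes is traced and allocated",
    evidence=[Evidence("Responsibility allocation matrix", "design documentation")],
    subclaims=[
        Claim(
            statement="Design-time responsibilities are allocated",
            evidence=[Evidence("Signed-off design review", "review minutes")],
        ),
        Claim(
            statement="Operational responsibilities are allocated",
            evidence=[Evidence("Operator training records", "operations log")],
        ),
    ],
)

print(case.is_supported())  # True: every claim in the tree has evidence
```

In a real assurance case the support check would be a matter of expert and regulatory judgement rather than a boolean, but the tree structure conveys the core idea: a top-level responsibility claim decomposed into sub-claims, each traceable to evidence.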

The overall aim is that all stakeholders - designers, engineers, developers, regulators, operators, and users - will be able to use this methodology to achieve responsibility-by-design (RBD) and responsibility-through-lifecycle (RTL).

About the project

Contact us

Professor Nick Pears

Deputy Head of Department (Research)

nick.pears@york.ac.uk

Funders

This work is supported by the UK Research and Innovation (UKRI) Trustworthy Autonomous Systems programme [EPSRC ref: EP/W011239/1].

Visit UKRI.org
