Verifiable Autonomy - how can you trust your robots?

Michael Fisher

University of Liverpool

Abstract

As the use of autonomous robotic systems spreads, the need for their activities to be not only understandable and explainable, but also verifiable, is increasing. While the idea of autonomy is appealing and powerful, developing autonomous systems that are reliable is far from simple. In particular, how can we be sure what such a system will decide to do, and can we formally guarantee that behaviour? An important aspect, therefore, is the ability to verify the genuinely autonomous decision-making that forms the core of many contemporary systems. In this course, we describe a particular approach to the formal verification of decision-making in agent-based autonomous systems, incorporating material on practical autonomous systems, agent programming languages, formal verification, agent model-checking, and the practical analysis of autonomous systems. [This material was developed jointly with Louise Dennis.]
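To give a flavour of what model-checking an agent's decision-making involves, the following is a minimal Python sketch (not the toolchain used in the course) that exhaustively explores every sequence of choices a toy agent could make and checks a safety property over all of them; the agent, its actions, the environment model, and the property are all hypothetical illustrations.

    from itertools import product

    # Hypothetical toy agent: over two decision steps it picks one action
    # per step. Exhaustively exploring every combination of choices is the
    # essence of model-checking the agent's decision-making.
    ACTIONS = ["move", "wait", "warn"]

    def human_nearby(step):
        # Assumed environment model: a human is present at step 1 only.
        return step == 1

    def safe(trace):
        # Safety property: the agent must never "move" while a human is nearby.
        return all(not (action == "move" and human_nearby(step))
                   for step, action in enumerate(trace))

    # Explore every possible sequence of decisions the agent could make.
    violations = [trace for trace in product(ACTIONS, repeat=2) if not safe(trace)]

    print(f"Explored {len(ACTIONS) ** 2} decision traces; "
          f"{len(violations)} violate the safety property:")
    for trace in violations:
        print("  counterexample:", trace)

In the approach covered in the course, the same idea is applied to agents written in an agent programming language, with properties stated formally and counterexample traces returned when a property fails.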
