Robots will increasingly make their own decisions about what to do and when to do it without necessarily consulting humans. How can we trust these systems if we do not know what they will choose to do and, crucially, why they make these choices?
In this talk, we describe how this move to “autonomy” involves distinct approaches to software engineering and formal methods. In particular, our use of rational agents at the core of autonomous systems forms the basis not only for transparency and explainability, but also for verifiability of behaviour and decision-making. By engineering systems based on such hybrid agent architectures, we can analyse them rigorously and move towards not only trustworthiness and predictability, but safe, ethical, and responsible behaviour.
From these principles, we provide examples of the deployment of hybrid agent architectures in both robots (domestic and industrial) and autonomous vehicles (road and air), covering the verification of properties such as safety, ethics, and fault tolerance.
Michael Fisher, Louise A. Dennis, Matthew P. Webster: Verifying Autonomous Systems. Communications of the ACM 56(9):84-93 (2013).
Louise A. Dennis, Michael Fisher, Nicholas Lincoln, Alexei Lisitsa, Sandor M. Veres: Practical Verification of Decision-making in Agent-based Autonomous Systems. Automated Software Engineering 23(3):305-359 (2016).
Paul Bremner, Louise A. Dennis, Michael Fisher, Alan F. T. Winfield: On Proactive, Transparent, and Verifiable Ethical Reasoning for Robots. Proceedings of the IEEE 107(3):541-561 (2019).