Artificial Intelligence (AI) systems have vast potential to enhance human knowledge and capabilities and to drive economic growth. But they must also be deployed with deep consideration of, and advance mitigation of, their potentially adverse impacts on people, organisations and society.
AI Safety is the primary focus of the UKRI AI CDT in Lifelong Safety Assurance of AI-enabled Autonomous Systems (SAINTS).
Into the future
The vision of SAINTS is to train the next generation of leaders with the research expertise and skills necessary to ensure that the benefits of AI systems are realised without introducing harm as the systems and their environments evolve.
Research will be focused on the lifelong safety assurance of increasingly autonomous AI systems in dynamic and uncertain contexts, building on methodologies and concepts in disciplines spanning AI, safety science, philosophy, law and the social sciences.
The focus of the SAINTS CDT is safety assurance: the actions, arguments and evidence that justify confidence that AI systems are acceptably safe in their operating contexts.
The next generation
SAINTS will bring together students from a diversity of backgrounds and sectors to deliver a new generation of experts who make leading contributions to the safety of AI.
Artificial Intelligence is a multi-billion pound industry set to revolutionise how we live, work and behave. The UKRI Centre for Doctoral Training in Lifelong Safety Assurance of AI-enabled Autonomous Systems (SAINTS) is preparing the next generation of professionals with the skills and knowledge to ensure AI systems are deployed responsibly and safely.
The CDT’s vision can be split into two core research themes:
Lifelong safety of AI Systems
Responsible innovation for AI cannot be achieved by viewing safety as a bolt-on activity or a tick-box exercise to be completed prior to deployment. It needs to be reflectively and iteratively woven through the full engineering life cycle (‘through-life’), from design to post-deployment.
The research under this theme will cover: safety-driven design and training for evolving contexts; testing for open and uncertain operating environments; safe retraining and continual learning; proactive monitoring procedures and dynamic safety cases; ongoing assurance of societal and ethical acceptability.
Safety of increasingly autonomous AI Systems
The transfer of decision-making functions from humans to AI systems subverts traditional safety practices. It places new demands on design and assurance, complicates causal analyses and makes explainability more important. It also disrupts current frameworks for moral and legal accountability.
The research under this theme will cover: understanding Human-AI interaction to design safe joint cognitive systems; the assurance of safe transition between human and AI control; achieving effective human oversight and AI explainability; preserving human autonomy and responsibility.
Why choose a Safe AI PhD?
Society urgently needs professionals with the skills and knowledge to assure the safety of AI systems in their real-world contexts.
Our CDT brings together people from diverse disciplines, such as computer science, philosophy, law, sociology and economics, to meet this demand and unlock the benefits of AI for all, across sectors including health, transport and aerospace.
The SAINTS training programme is rich with opportunities for its students. In addition to expert teaching in AI, safety, philosophical ethics, law and sociology, students will work with our committed industrial, regulatory and public sector partners, giving them real-world experience. Students will pursue their doctoral research within multidisciplinary research teams focused on ‘grand challenges’ that align with the CDT’s two research themes outlined above.
Our Safe AI doctoral programme is creating the next generation of Safe AI experts and a lasting community of professionals who will pioneer evidence-based policy and practices for Safe AI.
We are committed to creating and sustaining a diverse, inclusive community of leaders in AI safety who uphold equality in their work. SAINTS will embed an ethos of Equality, Diversity and Inclusion (EDI) from the outset, creating a culture in which researchers collaborate with, reflect on and respond to the diversity of people affected by the deployment of AI, and whose voices need to be included in its design, development and lifelong assurance.
The recruitment process will start in the coming weeks, and more information on the application process will be available shortly. If you have any questions or would like to register your interest, please contact us.
- Prof Ibrahim Habli (CDT director & AI Safety lead)
- Dr Ana MacIntosh (Operations lead & Partnerships lead)
- Prof Tom Stoneham (Training co-lead & Ethics lead)
- Dr Colin Paterson (Training co-lead & Professional and Academic Skills lead)
- Dr Jo Iacovides (Careers lead)
- Prof John McDermid OBE, FREng (Research co-lead)
- Dr Jenn Chubb (Research co-lead)
- Prof Richard Wilson (AI lead)
- Dr Phillip Morgan (Law lead)
- Dr Zoe Porter (Responsible AI lead)
- Prof Cynthia Iglesias Urrutia (EDI lead)
Co-creation and collaboration with and between CDT partners is key.
The SAINTS community, with 34 confirmed partners, stretches across the following sectors: aerospace (NATS and Craft Prospect), automotive (Jaguar Land Rover, Thatcham, Oxa and HORIBA MIRA), defence (Thales, MBDA, Naimuri, BAE Systems), healthcare (Healthcare Safety Investigation Branch, Welsh Ambulance Services NHS Trust, Bradford Teaching Hospitals NHS Trust, HFMA) and maritime (Lloyd's Register Group), alongside underpinning software/AI (BT, Wolfram, Ufonia, PageAI, OpenWeb) and regulation (Health and Safety Executive, Office for Product Safety and Standards, MHRA, BSI). Our partnerships will provide SAINTS doctoral students with unparalleled opportunities to learn from practitioners, and will enable our partners to have exceptional access to cutting-edge research.