New research project will create AI audit tools to combat misinformation

Posted on 8 May 2024

New project will work with stakeholders including non-experts to develop tools that will stop AI presenting false or invented information as fact.

The Participatory Harm Auditing Workbenches and Methodologies (PHAWM) project will develop new methods for maximising the potential benefits of predictive and generative AI while minimising their potential for harm arising from bias and misinformation.

The project will pioneer participatory AI auditing, in which non-experts, including regulators, end-users and those most likely to be affected by decisions made by AI systems, will play a role in ensuring that those systems provide fair and reliable outputs.

New tools to support the auditing process will be developed in partnership with these stakeholders. The project will also create new training resources to help encourage widespread adoption of the tools.

The Department of Computer Science is part of the project consortium, which is led by the University of Glasgow and includes the Universities of Edinburgh, Sheffield, Stirling and Strathclyde and King’s College London. Funded by Responsible AI UK, the project brings together 25 researchers from these universities with 23 partner organisations.

York’s expertise in cyber security and privacy will be central to the project. The auditing tools the project creates will help develop more robust and reliable AI systems, but they must be designed with security and privacy in mind from the start.

Dr Siamak Shahandashti, Senior Lecturer in the Department of Computer Science, will lead this element of the project. “The underlying training data used to train the AI models may contain sensitive private information such as health records,” he said. “We’re taking a 'privacy-by-design' approach to developing AI auditing mechanisms and tools, ensuring that the preservation of privacy is taken into account from the outset, rather than patched on as an afterthought.”

Dr Simone Stumpf, of the University of Glasgow’s School of Computing Science, is the project’s principal investigator. She said: “Auditing the outputs of AI can be a powerful tool to help develop more robust and reliable systems, but until now auditing has been unevenly applied and left mainly in the hands of experts. The PHAWM project will put auditing power in the hands of people who best understand the potential impact… That will help produce fairer and more robust outcomes for end-users and help ensure that AI technologies meet their regulatory obligations.”

Societal impact

Professor of Artificial Intelligence Gopal Ramchurn, from the University of Southampton and CEO of RAi UK, said the projects are multi-disciplinary and bring together computer and social scientists, alongside other specialists.

He added: “These projects are the keystones of the Responsible AI UK programme and have been chosen because they address the most pressing challenges that society faces with the rapid advances in AI. The projects will deliver interdisciplinary research that looks to address the complex socio-technical challenges that already exist or are emerging with the use of generative AI and other forms of AI deployed in the real world.

“The concerns around AI are not just for governments and industry to deal with – it is important that AI experts engage with researchers and policymakers to ensure we can better anticipate the issues that will be caused by AI.”