Risk/reward – the AI lessons

By Elliot Jones | 20 February 2024

Local authorities have collected and analysed data as an essential part of their work for decades. However, in the context of reductions to real-terms local government funding, data analytics and AI have become particularly attractive propositions. Companies are offering technological solutions which they claim can deliver efficiency improvements, better forecasting and more effective services.

For a long time, AI systems in the public sector have been about predictive analytics, risk-scoring and prioritisation, for example in social welfare or traffic management. However, since the release of ChatGPT more than a year ago, we have seen a growing interest from local authorities in integrating these newer types of AI systems and tools into their operations.

Underlying tools such as OpenAI’s ChatGPT and Microsoft’s Bing is a technology known as ‘foundation models’. Foundation models also underpin many image-generation tools, such as Midjourney and the generative features within Adobe Photoshop.

Foundation models are powerful AI models, trained on large datasets, and designed to produce a wide variety of outputs. They are capable of a range of tasks, such as text, image or audio generation. They can be used as standalone systems or they can be built on and adapted for many different applications.

There is some optimism – from ministers, local government, the tech industry and other stakeholders – about the potential for using these models to enhance public services.

For example, there are suggestions they could be used for automating the review of complex contracts and case files, drafting policies, powering public-facing chatbots and consolidating information spread across databases into useful insights.

In response, the Ada Lovelace Institute undertook a rapid review of foundation models in the public sector to identify some key considerations for their use.

Firstly, it is essential that local authorities carefully consider the alternatives before adopting these models to support service delivery. What are the opportunity costs? Are there more mature, better-tested solutions that might be more effective or provide better value for money?

The evaluation of these alternatives should be guided by the principles of public life, which include accountability, openness and objectivity. These principles provide a long-established and widely accepted standard and a solid foundation for the governance of foundation models in the public sector.

At the moment, local authorities’ main interaction with these models will be use by individual staff members, such as for drafting a briefing or writing code. In the short term, foundation models are most likely to enter local government through existing IT tools and infrastructure, as they get integrated into Microsoft Office, Google Workspace, search engines and so on.

Whether through specialist tools or existing IT, foundation models are likely to pose a number of risks for local government, including bias, privacy breaches, misinformation, security threats, overreliance on particular services, workforce harms and unequal access.

The Post Office scandal demonstrates the risk of overreliance on automated systems and misplaced institutional trust in those systems over frontline workers. The scandal also demonstrates some of the challenges for procurement. There are clear risks around dependency on a small number of providers, including a potential lack of alignment between their commercial incentives and the needs of local government.

Local authorities need to consider these risks and they should require information about them in any procurement and implementation process. It is vital that these risks are mitigated through effective monitoring, internal and independent oversight, and engagement with those affected by the technologies.

Existing guidance and data protection and equality impact assessments can provide a baseline for governance. Local authorities should be supported to run pilots and experiment before they invest heavily. Independent auditing and public participation can also be effective ways to ensure these systems are safe, reliable and meet the needs of the public.

Since we published our report, the Government’s Central Digital and Data Office has published guidance for the use of generative AI in government and the London Office of Technology and Innovation has published specific guidance for local authorities. These offer a strong starting point for any local authority leader considering the use of foundation models in their own organisation.

The Government Digital Service has also experimented with an OpenAI-powered chatbot, with some positive outcomes but also issues of accuracy and reliability.

Local authority leaders should also be wary of adopting cutting-edge AI systems that are unproven, often over-hyped, and may create as many new problems as they solve. One potential ‘next generation’ of foundation models is ‘text-to-action’ systems, capable of performing an action on your computer (like browsing the web to buy plane tickets). These tools could pose serious questions about accountability, redress and liability when things go wrong.

The responsible integration of foundation models into local government will not and should not be a one-time exercise. Instead, it will require a process of continual learning and iteration, developing new technical and institutional innovations, to ensure these systems can serve the public and uphold the principles of public life.

Elliot Jones is a senior researcher at the Ada Lovelace Institute

