EDUCATION

Learning from cancelled systems

Research examining the use of predictive data in public services in countries across the world has identified more than 50 systems that have been paused or cancelled. Dr Joanna Redden and Anna Grant look at the emerging findings.

The education sector has faced no end of challenges since the outbreak of the COVID-19 pandemic, not least how to assess a cohort that could not sit the normal exam diet.

The subsequent course of action, particularly the use of a centrally developed algorithm to assign A Level grades, will likely be studied for years to come.

This case moved beyond education: it exposed and exemplified many of our deepest concerns about the way bias can be embedded in algorithmic systems, and it sparked one of the most significant debates to date about the use of predictive data in public services.

The use of data systems to determine the allocation of finite resources is not a new phenomenon. Globally, we know that government agencies are making ever greater use of expanded data sources and advanced computing power to inform decision-making.

Despite the goal of doing more and better with the resources available, researchers across various sectors have, since these systems were introduced, been documenting the ways that people are actually being harmed by them, most often unintentionally. Seldom does this harm generate the kind of public debate and challenge seen in the case of the A Level algorithm.

In mapping and analysing how and why these systems are being introduced, we have been hearing about government bodies that are now suspending or entirely stopping their use of automated and predictive systems.

At the Data Justice Lab and the Carnegie UK Trust we decided to investigate the rationales and factors leading public bodies to pause or cancel their use of these kinds of systems.

For the past six months we have been undertaking scoping research, document analysis and interviews with people across sectors in multiple European countries, New Zealand, Australia, the US and Canada. We have identified more than 50 systems that have been paused or cancelled. These range from automated systems for identifying benefit fraud to risk-scoring systems for individuals and families, predictive policing tools and performance evaluation systems.

Our intention is not simply to map the scale of these cancellations, but to give public officials a better understanding of the potential issues when considering, planning, procuring or piloting these interventions.

Our project, Automating Public Services: Learning from Cancelled Systems, is ongoing, with the full analysis due for release in October, but we can already point to some interesting preliminary results.

We are finding the reasons for cancelling these systems vary, intersect and overlap. In some cases those implementing the systems do not find them as useful as expected.

Counter to the popular efficiency narrative, some systems end up producing too many errors and actually create more work, rather than less. Pressure from civil society organisations and public attention has resulted in internal audits or reviews that have led agencies to pause or cancel some systems.

We have also found a number of cases where legal challenges have been won on the basis of rights infringements.

In other examples, individuals most affected by the results of these algorithms have successfully argued that the black-box nature of the systems, and their inability to know how decisions are being made, constitutes a due process issue.

Our research results, while still preliminary, lead to some observations that are timely given current debates about the appropriate uses of algorithmic systems in education.

These findings also reinforce existing research, which has long emphasised the importance of acknowledging the complexity involved in technological projects and the ways that new systems can lead to unintended consequences.

Those considering making use of automated and predictive systems would benefit from consulting those who will be most affected by the systems they plan to introduce.

Involving communities with knowledge and experience of identifying and challenging discrimination is essential, and globally we are seeing some public sector agencies working to enhance opportunities for meaningful public engagement.

The public sector also needs support to ask better questions of suppliers and developers, so that a system's risks and impacts are truly understood.

This support also needs to come with recognition of the time and resource pressures being put on public bodies.

In a policy context of widespread calls for greater transparency and accountability around the use of predictive and automated systems, the motivation behind this project is that we can all learn a great deal from those with direct experience of trying to implement these systems, and from their reasons for not proceeding with, or abandoning, their use.

Dr Joanna Redden is co-director of the Data Justice Lab and Anna Grant is senior policy and development officer at the Carnegie UK Trust