Automating public services – a careful approach


By Anna Dent

Published: 09 Jul 2024

The use of AI and automation in the public sector involves both public-facing systems and behind-the-scenes tools aimed at streamlining operations and freeing up staff time. But we know that these benefits are not guaranteed. Public sector organisations must approach automation with caution to avoid the pitfalls of overhyped claims. Missteps can have severe consequences for citizens and the public bodies themselves. 

A wide range of public sector functions are being automated, showcasing the breadth of these technologies and their potential to reshape how services are delivered. Local authorities, for example, are using chatbots, automating social care assessments, and employing data analytics to identify at-risk families.

It is critical to note, however, that these systems do not affect everyone equally: marginalised communities, people with uncertain immigration status and those reliant on state support are more likely to be subject to automated decisions. This exposes them to significant harms, including bias and privacy violations.

Risks of public service automation 

Automated systems must be scrutinised for their ability to achieve intended outcomes. Numerous examples, such as Hackney Council and Bristol’s risk-based verification (RBV) system, show that automated systems can fail to deliver as promised, proving ineffective and causing adverse impacts.

Bias in automated systems often stems from the data used to train them. Predictive policing tools and the 2020 A-level algorithm are prime examples where biases resulted in unfair outcomes. Concerns about bias led to the discontinuation of certain systems by West Midlands Police and the Home Office. 

Automation relies on vast data, raising concerns about data sources and citizen consent. The controversy surrounding the Data Protection and Digital Information Bill highlights the delicate balance between data usage and privacy rights. 

Many automated systems are developed by private companies, resulting in limited transparency. Public sector bodies sometimes lack detailed knowledge of these systems, complicating accountability and citizen recourse. The Public Law Project’s register of automated systems highlights widespread opacity. 

Trust is crucial. Without it, citizens may resist data collection and the use of automated systems. The NHS’s digital transformation efforts and the Post Office Horizon scandal illustrate the challenges in maintaining public confidence in technology. 

Guiding principles for automation 

To balance automation benefits and risks, it is essential to follow a guiding set of principles: 

  1. Interrogate the need – examine the underlying reasons for automation. Ensure that it does not replicate or exacerbate existing issues. 

  2. Impact vs. risk – focus on areas where automation can deliver significant benefits with minimal risks. 

  3. Establish “red lines” – identify decisions too risky for automation due to their potential impact on individuals’ lives. 

  4. Thorough safeguarding and evaluation – implement consistent impact evaluations and empower authorities to modify harmful systems. 

  5. Transparency – make the use and mechanics of automated systems public and ensure comprehensive impact assessments. 

  6. Upskill staff – equip public sector employees with the knowledge to understand and assess automated systems. 

  7. Citizen involvement – engage citizens and civil society groups in decision-making and monitoring of automated systems, building trust and adoption.

Only with a careful, principled approach can we leverage automation’s benefits without compromising citizens’ rights and trust. Care is critical, and something we should all get behind.

Anna Dent is a researcher and public policy consultant, and head of research for Promising Trouble.
