The hype around artificial intelligence (AI) has spread across all of society and government. 

The UK government has set up a central incubator to develop the use of AI in government, and many departments are piloting AI tools in areas such as drafting ministerial correspondence and automating administrative casework processes. 

There is rightly a lot of optimism at the prospect of AI helping public services to solve the productivity challenge they face.  

But AI is not a silver bullet to better value in government. It faces the same difficulties that other digital transformation initiatives have faced over the years – navigating complex government processes and building on legacy technology. 

Baringa has already highlighted that government needs to invest in the right areas to ensure that technology can lead to better services. Public sector organisations need a clear purpose that focuses their efforts on the areas that will deliver outcomes for service users – and should use this clarity of purpose to prioritise which technology and data innovations to invest in. 

The same is true of AI. To make it work, departments need to be agile in developing and governing AI use cases – through testing, iteration, and then deployment – in a way that is safe, cost-effective, and ultimately delivers the intended productivity (or other) benefits. 

Baringa has identified five key mistakes organisations make in this process – and how government departments can avoid them. 

 

1. Don’t forget the purpose you’re trying to achieve

It’s tempting to start with the productivity improvements we want to achieve through AI:

  • fully automating objective processes, freeing up civil servants for the processes that need human intervention 
  • supporting professional judgement by making recommendations which can be reviewed by a human decision-maker 
  • automating the collection and exploration of departmental data, and service delivery itself. 

However, focusing on the operational benefit alone, rather than the problem we’re trying to solve for the service user, risks an over-engineered solution – or worse, a solution that doesn’t improve things for users at all. 

Government must be sure about the problem it is looking to use AI to solve. Taking a user-centred approach to all technology investment – for both citizen- and staff-facing services – ensures that the investment links to the organisation’s purpose and the key outcomes its service users want to achieve. 

Put simply: can you articulate what service users want that the current service isn’t providing for them?  

Once you know what needs to be done differently, assess whether there’s an easier option than implementing an AI solution: 

  • Ensure that staff understand and are using the best available tools in the organisation 
  • Change or improve the information that users provide, for example by improving the user interface or service documentation 
  • Simplify or remove steps in the process or service. 

 

2. Don’t miss the opportunity to transform your wider operation while you’re transforming the tech 

The service is the sum of its technology, people, processes, and users. Technology may grab the headlines, but transformation is also a chance to review the service across the board. 

It’s now standard practice to simplify processes before automating them. Root cause analysis will identify processes that can be eliminated. Prioritise your efforts by measuring how much time the team spends on different elements of the process, and focus on the largest buckets of operational time to identify areas where AI can unlock the greatest productivity benefits. 

We recommend applying this same approach to the overall operation, including the process, team structure, ways of working, and commercial operations. Implementing AI offers an opportunity to radically transform the service. 

Where possible, build time, flow, and quality measurement into your operational systems. Where this isn’t possible, or you don’t have a good baseline, ‘day in the life of’ (DILO) studies, and delay studies (analysing activity volumes at each stage in the process at a point in time) are great ways to rapidly identify opportunities to improve. 

 

3. Don’t ignore your data issues – but also don’t get stuck on them 

AI decision making relies on good quality underlying data. If data is not collected or updated consistently and accurately, then the information provided by AI will not be useful. 

Government must think about how to collect and manage the core data for its services, so that automated processes always have access to the most up-to-date information. 

As above, investment should be focused on the data sources which underpin the most valuable services and impact the operational areas where your team spends the most time. 

However, don’t let limited data maturity become an excuse that stops the organisation from pursuing AI opportunities. Outcome-focused goals will help sharpen focus on the underlying data maturity and legacy challenges that need to be addressed. 

Use cases should drive investment in data maturity, improving data where we know there is potential value for users. 

 

4. Don’t start thinking about responsible AI the day before you launch 

All government organisations looking to use AI need to demonstrate they are applying it responsibly. They need to ensure it is safe, ethical, auditable and sustainable.  

This is arguably even more important for public services, which can impact everyone in the UK, and where service users don’t necessarily have a choice of alternative services (or a choice as to whether they use a service at all). 

Don’t assume that having a “human in the loop” means you have made AI safe. There is a growing body of evidence that demonstrates that humans are prone to adopting the bias of algorithms used to guide their decisions.  

Organisations also need to reassure users and other stakeholders that they’ve done this work. The benefits of AI can only be unlocked if it is used – and AI will only be used if it is trusted. 

One of the most significant blockers to any government AI opportunity is securing leaders’ understanding and trust that an AI solution is responsible. 

Without that trust, the AI solution can be prevented from reaching its potential – diluting what it could do for your operations, and in some cases even blocking its release entirely, sometimes after months of work. 

You can learn more about implementing Responsible AI approaches in government, and our views on this topic here.

 

5. Don’t underestimate the critical role of operational expertise in delivering sustainable AI-driven transformation 

Lastly, and most vitally, don’t forget the role of people. AI has potentially huge implications for service users, the teams delivering public services, and public service leaders – but people sometimes have the wrong idea of what this impact may be. 

There are two pitfalls we see around people: some projects remain too fixed to the current service and miss the opportunity to really transform outcomes; whilst others develop independently from the existing teams and end up with a service that misunderstands its users, or isn’t accepted by key stakeholders. 

We recommend a model office approach: bringing together a representative set of stakeholders from the existing service with a team of transformation experts. Aim to build a small-scale, end-to-end service, implementing an AI solution with the people who will actually deliver the service. 

AI-based technologies often require far more direct operational engagement. They need greater ongoing oversight, management, and continuous improvement than typical operational systems.  

This is partly from an AI safety perspective: ensuring that changes in process, inputs, or external factors do not unintentionally introduce new AI outcomes such as bias or hallucinations. But it is also of critical importance from an operational management perspective, especially where AI is responsible for a significant proportion of an end-to-end process previously executed by people.  

AI and operational delivery professionals co-owning operational performance and quality assurance requires new ways of working and new practical tools to maintain operational grip. 

 

To find out how Baringa could help your organisation to unlock productivity through automation and AI technologies, please contact our public sector AI specialists Henry Holms, Callum Sparrowhawk, or Sam Atkins.  

Over the last few months we have been exploring the topic of 'Public Sector Productivity' through a recent knowledge series. Subscribe here to be the first to receive these insights. 
