Departments must not obsess over ‘input measures’ when developing productivity plans – especially when it comes to tracking time
18 March 2024
Chancellor Jeremy Hunt launched a revamped public sector productivity drive in the Budget to improve the efficiency of public spending.
Public service productivity is estimated by the Office for National Statistics to be 5.9% below its pre-pandemic level, and returning it to that level would deliver up to £20bn of benefits a year.
The Treasury has therefore launched the Public Sector Productivity Programme to help close this gap. Plans will be developed ahead of the next Spending Review and are intended to provide the foundation for significant improvements in how crucial public services operate.
As departments develop their proposals, though, they must resist the urge to over-focus on the input side of the productivity equation.
Too much focus on controlling costs can overshadow work to make processes more effective at delivering the outcomes that departments need to meet their purpose.
The pitfalls to avoid
Properly understanding the inputs that government departments and agencies have at their disposal – in terms of people, technology and finances – is important to how well they work.
In particular, a clear picture of inputs helps an organisation plan how it will deliver its purpose – the outcomes that are required, and the steps to make them happen.
However, there are some common mistakes when using inputs to make government more effective, and these should be avoided.
Input metrics do not measure performance
Organisational metrics can be split into two categories – planning and performance.
Input metrics help us with planning assumptions and contextualising performance. But they are never performance measures in their own right.
This is why the public sector must get the level of detail for input measures right: detailed enough to support planning, but not so detailed that collecting the data becomes a burden in its own right.
Don’t conflate granularity with accuracy
In government efficiency programmes, operational leaders often look at the proxies they have for inputs and bemoan their limited accuracy – and a common response is to push for greater granularity (such as moving towards individual self-reporting).
It can be tempting to chase ever-increasing detail on time and effort – not just what tasks teams are undertaking, but the utilisation of different grades within a team, or even reports from individual employees. Yet each extra layer of granularity increases the number of people entering data into the system, and with them the number of errors, so the added detail often undermines data quality rather than improving it.
Input metrics are very hard to automate
Collecting granular data on organisational inputs creates onerous time-reporting requirements for employees, and the cost of collection is rarely matched by usable insight into organisational performance.
And these costs are hard to avoid, because the work is also difficult to automate. Time-capture requirements need to be built into workflow tools at product conception, and this almost never happens, so greater granularity usually means more manual work for civil servants. Many products on the market promise to close this gap, but almost all of them still require significant self-reporting of time.
Granular input data demands organisational behaviour change
What’s more, providing overly detailed input data demands a huge amount of organisational behaviour change. Launching and maintaining a self-reporting solution for individual workloads, for example, requires sustained leadership attention and sponsorship. That focus inevitably gives the impression that controlling inputs matters more than achieving outcomes – and the metric gaming that follows runs counter to the overall productivity improvement the work is trying to achieve.
How to get input measures right
For all these pitfalls, input measures are still useful in government, so organisations must focus on how to get them right.
Governments should work out which input measures they actually need, rather than focusing on getting the data 100% right. It is typically much better value for money to use proxies for inputs, such as headcount by team or service, than to ask for self-reporting at an individual level. Baringa has worked with government organisations to support analysts in building proxies from full-time equivalent (FTE) headcounts – these help to contextualise performance and allow for better performance discussions that focus on outputs and outcomes.
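To make the proxy idea concrete, here is a minimal sketch of the kind of context an analyst might assemble from an FTE headcount proxy instead of individual time reports. The team names, figures and field names are entirely hypothetical and this is not a description of any specific Baringa tool – it simply shows output contextualised by FTE at team level.

```python
# Minimal sketch: contextualising an output measure with an FTE headcount proxy,
# rather than asking individuals to self-report their time.
# All team names, figures and field names below are hypothetical.

from dataclasses import dataclass

@dataclass
class TeamQuarter:
    team: str          # team or service name
    fte: float         # full-time equivalent headcount (the input proxy)
    cases_closed: int  # an output measure for the same period

quarter = [
    TeamQuarter("Casework North", fte=42.5, cases_closed=5100),
    TeamQuarter("Casework South", fte=38.0, cases_closed=5320),
    TeamQuarter("Appeals",        fte=12.0, cases_closed=960),
]

for t in quarter:
    # Output per FTE gives context for a performance conversation;
    # it is not a performance score in its own right.
    per_fte = t.cases_closed / t.fte
    print(f"{t.team:15s} {t.fte:6.1f} FTE  {per_fte:6.1f} cases closed per FTE")
```

The point of figures like these is to frame a discussion about outputs and outcomes, not to rank teams or to stand in for a performance measure.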
The key lesson is that government organisations need to be mindful of the cost they create around input measures. Think about how you can best measure plans and performance – and don’t chase the unicorn of perfect data at any expense.
Find out more about how to solve the public sector productivity puzzle or get in touch with Matt Jones to discover how Baringa could help your organisation unlock productivity improvements.