Oflog consultation on new draft metrics

Introduction 

We thank you for consulting with us in advance of the finalisation and publication of the next set of metrics for Oflog’s Data Explorer. In future, we would welcome the opportunity to be involved in selecting the metrics on the list, but we nonetheless appreciate the chance to comment before finalisation. We have consulted internally within the LGA, as well as with our member councils and partners in the sector, and have produced one collective response to support Oflog in refining and improving the metrics where necessary, and in providing relevant caveats and context to aid their interpretation.

We welcome the opportunity to work alongside Oflog to achieve the two principal aims set out in the accompanying letter. However, one overarching observation is that it is difficult to see how this new draft set of metrics supports those aims.

With each area of council activity, we would suggest that Oflog be clear about what it is trying to measure and why, and focus discussion on how far the metrics meet these aims rather than on the technical merits and drawbacks of each metric. This response provides feedback on each theme but, overall, the lag on the data limits the power of these metrics as “early warning signs”.

Planning

The metrics for planning generally seem sensible, as they are metrics that are currently used when determining and monitoring local authority planning performance.

We would, however, highlight that capacity and resourcing pressures are the primary reason some local authorities may not meet application determination periods. We would refer back to the LGA submission to DLUHC’s consultation on the proposed changes to planning funding and fees earlier this year, linked here, which set out the funding and resourcing challenges that council planning departments face.

We have also highlighted previously that it is too simplistic to equate the speed of decision-making with the quality of decisions from local authorities. For example, taking an additional two weeks to determine an application to ensure better design, input from a statutory consultee or community buy-in is a good thing, yet it is recorded as a ‘negative’.

Metric-specific comments

7. Date when a Local Plan was formally adopted by an authority [lower tier, unitary authorities, London and metropolitan boroughs]

Given the ‘digitisation’ of the process, in the long-term, it might be appropriate to be able to highlight when local authorities reach certain points of the proposed new 30-month timetable, e.g., scoping phases, gateways, examination. This would help illustrate what stage plan preparation is at, rather than simply when the last one was adopted. 

Roads

Considering the metrics for roads, we would express concern about the extent to which meaningful comparisons can be made between different local authorities. 

Metrics 8 and 9 tell you about the conditions of different categories of roads, but they do not easily facilitate meaningful comparisons between any and every local authority. It may be possible to compare like with like – e.g., Nottingham with Leicester – but not rural and urban areas. Some places may be much more exposed to extremes of weather, and therefore much more difficult to maintain, or may carry more lorry traffic, making damage more severe. To facilitate comparisons and to give context to metrics 8 and 9, we would suggest a measure of the percentage of roads in each local authority that fall into each of the three categories.

Business and economic growth

We appreciate the efforts to standardise the metrics to allow meaningful comparisons to be made between authorities. We would, however, suggest that a starting point for any of the selected metrics should be whether local authorities have direct control or influence over the outcome to which the metric refers. It would be wholly inappropriate to include measures that could depict a council as underperforming in areas that it cannot change at all, or without very significant additional resources. Many of the metrics fall into this category and would not therefore allow meaningful comparisons to be made.

In addition, we would argue that local authorities should not be assessed on the local impacts of policy that is implemented nationally. We also need to maintain the balance between striving for growth through enterprise, which these metrics measure, and accommodating other indicators of economic growth, for example residential growth requirements.

Metric-specific comments

11. Births of new enterprises [upper tier, unitary authorities, MCAs]

This is a reasonable metric as local authorities are able to nurture and support new enterprises to an extent through direct support and longer-term policy approaches. 

There is, however, a considerable data lag, which limits how valuable this data is as a metric of council performance. We would also add the caveat that small and medium-sized enterprises (SMEs) may be registered at the address of their accountant. We would point to an incident affecting Wiltshire Council in 2016, where the relocation of a finance firm caused the apparent loss of 4,000 businesses from the council’s area.

12. Deaths of enterprises [upper tier, unitary authorities, MCAs]

This metric is less useful, because far too many of the factors affecting enterprise deaths are outside a local authority’s control. Furthermore, as with Metric 11, the data lag must be considered, and we would add the same caveat offered above.

13. Number of high growth enterprises [upper tier, unitary authorities, MCAs]

We would argue that this is not necessarily something that all local authorities can influence. There are many fixed factors, for example, land and property availability, skills, and transport connectivity.

14. Gross Value Added (GVA) per hour worked [upper tier, unitary authorities, MCAs]

Although GVA is a useful comparator between local authority areas, it is not one on which councils should be assessed, as too many factors outside their control influence it. While it would be useful to compare local authority areas, it would not be fair to judge authorities on these levels, for example in ranked tables.

15. Gross median weekly pay (£) [upper tier, unitary authorities, MCAs]

While we may be able to promote policies such as the Living Wage through our supply chains, there are, again, too many factors affecting pay for it to provide a meaningful, comparative assessment of local authority performance.

In addition, if we were to use mechanisms such as supply chains and S106 agreements to encourage payment of the Living Wage, the benefits would often accrue to non-residents, as many organisations ‘import’ large numbers of lower-paid workers from neighbouring areas. Improving their pay is of course a good thing, but it would boost the ranking of the neighbouring areas; in this way, the metric could be counterproductive. One suggestion would be to base pay data on the workplace of the employee rather than their residence.

16. Employment rate for 16-64 year olds [upper tier, unitary authorities, MCAs]

Again, while local authorities can mitigate the impacts of unemployment and underemployment, its drivers are largely beyond the scope of local authority influence. While it would be useful to compare local authority areas, it wouldn’t be fair to judge authorities on these levels, such as in ranked tables.

Suggestions

We have highlighted that many of the proposed metrics are too far outside the control of local authorities to be valuable representations of their performance. In light of this, we offer below some suggestions from our members for metrics over which local authorities have more control:

  • Numbers of apprenticeships or similar early careers support provided within the local authority workforce (as a percentage of total employee base)
  • Reduction of carbon emissions from direct business operations 
  • Proportion of care leavers supported to gain and maintain employment.

Corporate and finance

We understand that the present live Data Explorer finance metrics may be replaced with the four capital risk metrics proposed in the Levelling Up and Regeneration Bill (LURB) consultation. We welcome a change to the finance metrics included in the first set of Data Explorer metrics; however, we would like to reiterate the serious concerns about the LURB capital risk metrics which we communicated in our response to the LURB consultation, linked here. We understand that the present Oflog Data Explorer consultation does not wish to replicate or cut across the LURB consultation; however, we wish to ensure that these comments are considered in both contexts.

Metric-specific comments

17. Percentage of Ombudsman complaints upheld [all tiers]

This result is partly determined by the denominator, which is the number of complaints to the Ombudsman. Good authorities work very hard to reach a local resolution with complainants, resulting in a relatively low number of complaints to the Ombudsman. For these authorities only the most serious complaints reach the Ombudsman, of which a higher percentage are then upheld. This indicator would therefore show authorities that work hard for local resolutions in a bad light.

18. Number of upheld Ombudsman complaints per 10,000 population [all tiers]

Compared to Metric 17, this metric shows upheld complaints in a much more equitable way by using population size as the denominator. 

As a caveat, we would suggest that comparisons be encouraged only between authorities of the same structure as this metric would not allow for a fair comparison between unitary and two-tier authorities due to the difference in the number of services provided. 

19. Council tax collection rates [lower tier, unitary authorities, London and metropolitan boroughs] AND 20. Non-domestic rates collection rates [lower tier, unitary authorities, London and metropolitan boroughs]

This data is already collected as part of the Quarterly Return of Council Taxes and Non-Domestic Rates (QRC), so it does not present an additional data burden. We would, however, question how useful collection rates are as a metric of local authority performance: debt collection is not a true reflection of a council’s financial management, as collection rates are affected by the demographics of the local area and the level of deprivation.

Waste management (fly-tipping)

We have some concern about the standardisation of the fly-tipping metrics and how far meaningful comparisons can be made between authorities. This is because there is currently no national definition of fly-tipping, and the type of incidents that get recorded can vary between councils. Furthermore, enforcement strategies also vary between councils and can depend on the fly-tipped material; in some cases, anti-social behaviour powers and community protection notices are used in place of fixed penalty notices. Alternative enforcement strategies would therefore not be reflected in the current proposed metrics. 

There is a risk that these metrics may encourage more badly issued fixed penalty notices, as the metrics prioritise quantity over quality. Some councils may also have a strategy in place to focus on bigger fly-tips and criminal gangs, and consequently issue fewer fixed penalty notices. Furthermore, judging councils on enforcement action does a disservice to the preventative action some councils may be taking, for example target hardening and education.

It is also worth emphasising that there is split responsibility for enforcement against fly-tipping. Councils do have powers to take action against small-scale fly-tipping, but larger-scale offences fall under the responsibility of the Environment Agency. We would argue that this section should be called ‘small-scale fly-tipping’; otherwise it risks perpetuating the myth that councils carry full responsibility for fly-tipping, ignoring the role of the Environment Agency.

Lastly, the number of indicators proposed for fly-tipping seems hugely out of proportion to the level of resource allocated to this by local authorities. It is difficult to understand why so many indicators have been allocated for this topic.

Metric-specific comments

21. Fly-tipping incidents per 1,000 people

This metric is particularly problematic, as deprivation and housing density, rather than population size, underpin high levels of fly-tipping. Furthermore, the metric would not be fair to areas that experience high levels of tourism, or to districts on city borders which may suffer the impact of criminals travelling out of the area to fly-tip.