Category: Policy and analysis
Fixing Whitehall’s broken policy machine
Jonathan Slater – The Policy Institute
There is a recursiveness to Whitehall’s examinations of its own shortcomings. The construction and analysis of arguments, the slight distancing from the subject matter at hand, the elegant and occasionally self-deprecating prose, the focus on the quantity of delivery rather than the quality of service can all be deployed in looking at how government operates as well as at what it does and should do. And those are powerful tools, deployed by smart and thoughtful people, so real insight can be expected to come from them.
Jonathan Slater’s paper is very much of this genre. It is clear about the problems and about how those problems have persisted despite attempts to solve them. It is clear about examples of where better ways of doing things at the very least point to some hope of improvement. But at the same time it is oddly disengaged. Slater draws heavily on his experience, on which he is thoughtful and illuminating; he has clear ideas about how things could be improved, and can point to clear evidence of using his leadership of the Department for Education to test and develop some of those ideas. But he is now an observer, not a player. It is hard for the reader not to wonder what stopped apparently good ideas from getting the traction they deserved, but while the point is acknowledged, it is not much developed:
The good news is that civil servants responded with genuine enthusiasm to my call to put “the user” at the heart of their work, however hard it might appear, although we only made limited progress.
To some extent that may be because DfE is not a good example of the kind of department Slater wants to see, as its policy and operational responsibilities do not sit comfortably together. I remember hearing Slater talk frustratedly about having to spend his – and ministers’ – time on decisions about the replacement of individual schools’ boilers – decisions which it is quite absurd to be taking in that way and at that level. But the much bigger and much more general question remains. And interestingly I can best express it in the same words I used to sum up King and Crewe’s The Blunders of our Governments (published in 2013, but still essential reading):
We know what goes wrong. We know many of the factors which result in things going wrong. But we don’t know why, knowing those things, it has proved so hard to break the cycle.
But this is still a paper well worth reading by someone whose record of doing this for real deserves great respect. The Policy Institute has also recorded a discussion of the paper, with Gus O’Donnell, Justine Greening and Bobby Duffy, as well as Jonathan Slater himself. Andrew Greenway has a good Twitter thread reflecting on the paper from a slightly different angle.
‘Government as a system’ for COVID-19
This post takes the ‘government as a system’ approach which the Policy Lab has been developing and applies it to the policy challenges created by COVID-19, less in terms of the immediate response, more in terms of emphasising three areas where modern policy approaches are likely to be critical.
The first is thinking about the future in conditions of particular uncertainty. Doing that creatively, radically, realistically and usefully is hard enough at the best of times, and these are not they. The more self-consciously and the more collaboratively that is done, the better the chance that the results will survive contact with developing reality.
The second is data, critically recognising that it is not just a matter of collecting updated answers to existing questions, but of identifying new questions and the data needed to answer them. The Policy Lab analytics ladder provides a really useful framework for thinking not just about how approaches relate to high level questions, but how their relative emphasis will change as we go through the crisis.
The third is relating all that and more back to the whole system and providing leadership and direction – a reminder that policy is an approach and an activity for those who want to change the world, not for those who wish merely to observe it.
A catalogue of things that are stopping change
Following Simon Parker’s challenge that nothing can change until we change the rules inhibiting change, along come James Reeve and Rose Mortada drilling down a level to explore how policy, politics and delivery come together in fundamentally unproductive ways to make change harder. The intermediate output of policy is neither good policy (because it hasn’t been tested against reality) nor a good input for delivery planning, because too often the elements which make policy work good – not least imprecision and uncertainty – are stripped out before a delivery team is asked to make sense of them.
This post focuses on the civil service, but the issues are much wider ones (though the political yearnings for certainty do tend to make things harder). And in one sense the answer is trivial: blend policy and delivery together, build shared respect for different skills appropriate to different problem spaces, resist the temptation to wish away politics, and perhaps above all:
the policy should never be considered done until the outcome has been achieved
Trivial does not mean easy, of course. There’s nothing new about the problems described here and there are no glib solutions on offer. But there are great insights from the lived experience of trying to do it better infused with realism about the scope and pace of change.
This is the first of a three part series – part 2 and part 3 are well worth reading as well (with an interesting difference of tone between the first two, written pre-COVID-19 and the third written while it is in full spate).
Data as institutional memory
There’s more to this deceptively self-deprecating piece than meets the eye. Fragmented data cannot support integrated services, still less integrated organisations. Deep understanding and effective management of data are therefore not a minor issue for techie obsessives, but are fundamental to organisational success.
As so often, the diagnosis is simple (which of course doesn’t stop it being hard), while acting on that diagnosis is more complicated, and harder still. This post brings the two together through an account of making it work in one part of government.
Most of government is mostly service design most of the time. Discuss.
Unusually for Strategic Reading, this post earns its place not by being new and timely but because it has become an essential point of reference in an important debate. It makes a very powerful argument – but one that is slightly undermined by the conclusion it draws.
It is a measure of continuing progress in the four years since the post was written that the proposition that service design is important in government has become less surprising and less contentious, as well as much more widely practised. It is a measure of how much more needs to be done that the problems described are still very recognisable.
So it’s absolutely right to say that service design is critically important for government and that much of what happens in government is better illuminated by service design thinking. But to assert further that that is most of government most of the time is to miss something important. Much of government is not service design and much of what is service-related is an aspect of a wider public purpose. The function of many government services is only in part to deliver a service, even where there is a service being delivered at all. So the five gaps which are at the heart of this post are all real and all can and should be addressed by service design approaches – but they are not the only gaps, so a solution which addresses only those is at risk of missing something important.
Revisiting the study of policy failures
Mark Bovens & Paul ‘t Hart – Journal of European Public Policy
What is a policy success? What is a policy failure? It feels as though that ought to be a straightforward question, but the answer looks more uncertain the more closely we look. There is a gung-ho – but still very valuable – approach of finding fairly big and fairly obvious blunders, but that’s a way of avoiding the question, rather than answering it.
This paper takes a more reflective approach, distinguishing between ‘programmatic’ and ‘political’ success and failure, arguing that neither determines the other and that the subject attracts analytical confusion as much as clarity. None of that may sound helpful to the jobbing policymaker, struggling to find practically and politically effective solutions to complicated problems, but there is a clear conclusion (even though, perhaps in parallel with some of the policies used as examples, it is not entirely clear how the conclusion follows from the evidence): that open policy making is better than closed, that the messiness of democratic challenge is more effective than the apparent virtues of pure analytical precision.
But it also follows that policy failure is a political construct, as much as it is anything:
there is no ‘just world’ of policy assessment in which reputation naturally reflects performance. The nexus between the two is constructed, negotiated and therefore contingent, and often variable over time
It further follows, perhaps, that that jobbing policymaker needs to have a political sensibility well beyond what a more managerialist approach might think necessary, being ready to recognise and operate in ‘the world of impressions: lived experiences, stories, frames, counter-frames, heroes and villains’.
How the World Conceals its Secrets
We like to think of ourselves as rational decision makers, using patterns of evidence to discern meaning and to understand and shape our environment. The case made in this video is that that is at best a half-truth. The reality is that our powers of explanation are much weaker than we tend to recognise or care to admit, and that in looking for patterns we are too ready to overlook random variation.
That’s not just an abstract or theoretical concern: the crisis of replication in science is a real and alarming symptom of the problem; the challenge to the very concept of statistical significance is closely related.
This video is a thirty minute summary by Michael Blastland of the ideas in his recent book, followed by a discussion with Matthew Taylor which is also well worth watching. That’s a rather bland description of a talk which was anything but – these are challenging ideas, powerfully presented, which anybody who creates or uses evidence for public policy needs to understand.
Mind the Gender Gap: The Hidden Data Gap in Transport
Nicole Badstuber – London Reconnections
Algorithmic bias doesn’t start with the algorithms, it starts with the bias. That bias comes in two basic forms, one more active and one more passive; one about what is present and one about what is absent. Both forms matter and often both come together. If we examine a data set, we might see clear differences between groups but be slower to spot – if we spot it at all – skews caused by the representation of those groups in the data set in the first place. If we survey bus passengers, we may find out important things about the needs of women travelling with small children (and their pushchair and paraphernalia), but we may overlook those who have been discouraged from travelling that way at all. That’s a very simple example, and many are more subtle than that – but the essential point is that bias of absence is pervasive.
This post systematically identifies and addresses those biases in the context of transport. It draws heavily on the approach of Caroline Criado Perez’s book, Invisible Women: Exposing the Data Bias in a World Designed for Men, illustrating the general point with pointers to a vast range of data and analysis. It should be compelling reading for anybody involved with transport planning, but it’s included here for two other reasons as well.
The first is that it provides a clear explanation of why it is essential to be intensely careful about even apparently objective and neutral data – the seductive objectivity of computerised algorithmic decision making is too often anything but – and of why those problems won’t be solved by better code if the deeper causes discussed here are not addressed.
The second is prompted by a tweet about the post by Peter Hendy, the former Transport Commissioner for London and currently the chairman of Network Rail, who comments:
This is brilliant! It’s required reading at Network Rail already.
That’s good, of course – a senior leader in the industry acknowledging the problem if not quite promising to do anything about it. But it’s also quite alarming: part of the power of this post is that in an important sense there is nothing new about it – it’s a brilliant survey of the landscape, but there isn’t much new about the landscape itself. So Hendy’s tweet leaves us wondering when it becomes acceptable to know something – and when it becomes essential. Or in the oddly appropriately gendered line of Upton Sinclair:
It is difficult to get a man to understand something, when his salary depends upon his not understanding it!
The Surprising Value of Obvious Insights
Adam Grant – Sloan Management Review
It is counterintuitive that insights don’t have to be counterintuitive.
There is excitement and recognition in grand discoveries, uncovering what we didn’t know as a critical step towards doing a better thing. The bigger the surprise, the better the achievement. And at the other end of the spectrum, the time-honoured way of sneering at consultants is to say that they have borrowed your watch so that they can tell you the time. Over and over again, though, big organisations pay expensive consultancies to do exactly that. There are various reasons why that might be rational (or at least understandable) behaviour; one is perhaps that the obvious is not actually obvious until it is made obvious.
This interesting article expands on the power of obviousness made obvious as an enabler and driver of change. Its focus is on internal management practices, but the approach clearly has wider application:
Findings don’t have to be earth-shattering to be useful. In fact, I’ve come to believe that in many workplaces, obvious insights are the most powerful forces for change.
How service ownership works in DfE
Rachel Hope – DfE Digital and Transformation
Most of government is mostly service design most of the time. That’s a pithy and powerful assertion, and has been deservedly influential since Matt Edgar coined it a few years ago. But influential is not the same as right – and indeed the title of Matt’s original blog post ended more tentatively with ‘…Discuss.’
This post, which is in effect a case study of acting as if the assertion were true, throws useful light on what it could mean. In doing so it makes it easier to see that there is a risk of eliding two questions and that it is worth answering them separately. The easy first question is whether policy and delivery should understand and respect each other and expect to work in close partnership – to which the answer must be yes. The harder second question is whether the Venn diagram does – or should – eventually consume itself to become a single all-encompassing circle. Verbally and visually, the argument of this post is that it does, and that argument is powerfully made in respect of the service it describes. But that still leaves open the question of whether the model works as well when the service is less specific or delivered less directly.
Sub-prime evidence: Is evidence-based policy facing a crisis?
Adrian Brown – Centre for Public Impact
Everybody is in favour of evidence-based policy – by definition it must be far superior to the policy-based evidence with which it is often contrasted. This post is a brave challenge to the assertion that there is an evidence base for evidence-based policy. In particular, it argues first that weak evidence can be unwittingly assembled to appear misleadingly strong and in doing so close down policy options which should at the very least be kept open; and secondly that experimentation is a better approach, precisely because it avoids forcing complex issues into simple binary choices.
That’s not an argument that evidence is unimportant, of course. But it’s a good reminder that evidence should be scrutinised and that simple conclusions can often be simplistic.