Adam Locker – Medium
There’s more to this deceptively self-deprecating piece than meets the eye. Fragmented data cannot support integrated services, still less integrated organisations. Deep understanding and effective management of data are therefore not a minor issue for techie obsessives, but are fundamental to organisational success.
As so often, the diagnosis is simple (which of course doesn’t stop it being hard); acting on that diagnosis is complicated, and even harder. This post brings the two together through an account of making it work in one part of government.
Matt Edgar writes here
Unusually for Strategic Reading, this post earns its place not by being new and timely but because it has become an essential point of reference in an important debate. It makes a very powerful argument – but one that is slightly undermined by the conclusion it draws.
It is a measure of continuing progress in the four years since the post was written that the proposition that service design is important in government has become less surprising and less contentious, as well as much more widely practised. It is a measure of how much more needs to be done that the problems described are still very recognisable.
So it’s absolutely right to say that service design is critically important for government and that much of what happens in government is better illuminated by service design thinking. But to assert further that that is most of government most of the time is to miss something important. Much of government is not service design and much of what is service-related is an aspect of a wider public purpose. The function of many government services is only in part to deliver a service, even where there is a service being delivered at all. So the five gaps which are at the heart of this post are all real and all can and should be addressed by service design approaches – but they are not the only gaps, so a solution which addresses only those is at risk of missing something important.
Mark Bovens & Paul ‘t Hart – Journal of European Public Policy
What is a policy success? What is a policy failure? It feels as though that ought to be a straightforward question, but the answer looks more uncertain the more closely we look. There is a gung-ho – but still very valuable – approach of finding fairly big and fairly obvious blunders, but that’s a way of avoiding the question rather than answering it.
This paper takes a more reflective approach, distinguishing between ‘programmatic’ and ‘political’ success and failure, arguing that neither determines the other and that the subject attracts analytical confusion as much as clarity. None of that may sound helpful to the jobbing policymaker, struggling to find practically and politically effective solutions to complicated problems, but there is a clear conclusion (even though, perhaps in parallel with some of the policies used as examples, it is not entirely clear how the conclusion follows from the evidence): that open policy making is better than closed, that the messiness of democratic challenge is more effective than the apparent virtues of pure analytical precision.
But it also follows that policy failure is a political construct, as much as it is anything:
there is no ‘just world’ of policy assessment in which reputation naturally reflects performance. The nexus between the two is constructed, negotiated and therefore contingent, and often variable over time
It further follows, perhaps, that that jobbing policymaker needs to have a political sensibility well beyond what a more managerialist approach might think necessary, being ready to recognise and operate in ‘the world of impressions: lived experiences, stories, frames, counter-frames, heroes and villains’.
Michael Blastland – the RSA
We like to think of ourselves as rational decision makers, using patterns of evidence to discern meaning and to understand and shape our environment. The case made in this video is that that is at best a half truth. The reality is that our powers of explanation are much weaker than we tend to recognise or care to admit and that in looking for patterns we are too ready to overlook random variation.
That’s not just an abstract or theoretical concern: the crisis of replication in science is a real and alarming symptom of the problem; the challenge to the very concept of statistical significance is closely related.
This video is a thirty-minute summary by Michael Blastland of the ideas in his recent book, followed by a discussion with Matthew Taylor which is also well worth watching. That’s a rather bland description of a talk which was anything but – these are challenging ideas, powerfully presented, which anybody who creates or uses evidence for public policy needs to understand.
Nicole Badstuber – London Reconnections
Algorithmic bias doesn’t start with the algorithms, it starts with the bias. That bias comes in two basic forms, one more active and one more passive; one about what is present and one about what is absent. Both forms matter and often both come together. If we examine a data set, we might see clear differences between groups but be slower to spot – if we spot them at all – skews caused by the representation of those groups in the data set in the first place. If we survey bus passengers, we may find out important things about the needs of women travelling with small children (and their pushchairs and paraphernalia), but we may overlook those who have been discouraged from travelling that way at all. That’s a very simple example; many are more subtle than that – but the essential point is that bias of absence is pervasive.
This post systematically identifies and addresses those biases in the context of transport. It draws heavily on the approach of Caroline Criado Perez’s book, Invisible Women: Exposing the Data Bias in a World Designed for Men, illustrating the general point with pointers to a vast range of data and analysis. It should be compelling reading for anybody involved with transport planning, but it’s included here for two other reasons as well.
The first is that it provides a clear explanation of why it is essential to be intensely careful about even apparently objective and neutral data – the seductive objectivity of computerised algorithmic decision making is too often anything but – and of why those problems won’t be solved by better code if the deeper causes discussed here are not addressed.
The second is prompted by a tweet about the post by Peter Hendy, the former Transport Commissioner for London and currently chairman of Network Rail, who comments
This is brilliant! It’s required reading at Network Rail already.
That’s good, of course – a senior leader in the industry acknowledging the problem if not quite promising to do anything about it. But it’s also quite alarming: part of the power of this post is that in an important sense there is nothing new about it – it’s a brilliant survey of the landscape, but there isn’t much new about the landscape itself. So Hendy’s tweet leaves us wondering when it becomes acceptable to know something – and when it becomes essential. Or in the oddly appropriately gendered line of Upton Sinclair:
It is difficult to get a man to understand something, when his salary depends upon his not understanding it!
Adam Grant – Sloan Management Review
It is counterintuitive that insights don’t have to be counterintuitive.
There is excitement and recognition in grand discoveries, uncovering what we didn’t know as a critical step towards doing a better thing. The bigger the surprise, the better the achievement. And at the other end of the spectrum, the time-honoured way of sneering at consultants is to say that they have borrowed your watch so that they can tell you the time. Over and over again, though, big organisations pay expensive consultancies to do exactly that. There are various reasons why that might be rational (or at least understandable) behaviour; one is perhaps that the obvious is not actually obvious until it is made obvious.
This interesting article expands on the power of obviousness made obvious as an enabler and driver of change. Its focus is on internal management practices, but the approach clearly has wider application:
Findings don’t have to be earth-shattering to be useful. In fact, I’ve come to believe that in many workplaces, obvious insights are the most powerful forces for change.
Rachel Hope – DfE Digital and Transformation
Most of government is mostly service design most of the time. That’s a pithy and powerful assertion, and has been deservedly influential since Matt Edgar coined it a few years ago. But influential is not the same as right – and indeed the title of Matt’s original blog post ended more tentatively with ‘…Discuss.’
This post, which is in effect a case study of acting as if the assertion were true, throws useful light on what it could mean. In doing so it makes it easier to see that there is a risk of eliding two questions and that it is worth answering them separately. The easy first question is whether policy and delivery should understand and respect each other and expect to work in close partnership – to which the answer must be yes. The harder second question is whether the Venn diagram does – or should – eventually consume itself to become a single all-encompassing circle. Verbally and visually, the argument of this post is that it does, and that argument is powerfully made in respect of the service it describes. But that still leaves open the question of whether the model works as well when the service is less specific or delivered less directly.
Adrian Brown – Centre for Public Impact
Everybody is in favour of evidence-based policy – by definition it must be far superior to the policy-based evidence with which it is often contrasted. This post is a brave challenge to the assertion that there is an evidence base for evidence-based policy. In particular, it argues first that weak evidence can be unwittingly assembled to appear misleadingly strong and in doing so close down policy options which should at the very least be kept open; and second that experimentation is a better approach, precisely because it avoids forcing complex issues into simple binary choices.
That’s not an argument that evidence is unimportant, of course. But it’s a good reminder that evidence should be scrutinised and that simple conclusions can often be simplistic.