Michael Blastland – the RSA
We like to think of ourselves as rational decision makers, using patterns of evidence to discern meaning and to understand and shape our environment. The case made in this video is that that is at best a half-truth. The reality is that our powers of explanation are much weaker than we tend to recognise or care to admit, and that in looking for patterns we are too ready to overlook random variation.
That’s not just an abstract or theoretical concern: the replication crisis in science is a real and alarming symptom of the problem, and the challenge to the very concept of statistical significance is closely related.
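For readers who prefer to see the point rather than take it on trust, here is a small sketch of how easily pure noise produces apparently striking patterns once we make enough comparisons. It is mine, not Blastland’s: the `count_striking` helper, the sample sizes and the threshold are all invented purely for illustration.

```python
import random

def count_striking(trials=20, n=10, threshold=0.5, seed=42):
    """Compare pairs of groups of pure noise and count how many
    comparisons produce a 'striking' gap between the group means."""
    rng = random.Random(seed)
    striking = 0
    for _ in range(trials):
        # Two groups drawn from exactly the same distribution:
        # any difference between them is random variation.
        a = [rng.gauss(0, 1) for _ in range(n)]
        b = [rng.gauss(0, 1) for _ in range(n)]
        gap = abs(sum(a) / n - sum(b) / n)
        if gap > threshold:
            striking += 1
    return striking

print(count_striking(), "of 20 comparisons of pure noise look 'striking'")
```

With small samples, a fair proportion of these entirely meaningless comparisons clear the arbitrary threshold – which is the pattern-hunting trap, and the engine of the replication problem, in miniature.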
This video is a thirty-minute summary by Michael Blastland of the ideas in his recent book, followed by a discussion with Matthew Taylor which is also well worth watching. That’s a rather bland description of a talk which was anything but – these are challenging ideas, powerfully presented, which anybody who creates or uses evidence for public policy needs to understand.
Nicole Badstuber – London Reconnections
Algorithmic bias doesn’t start with the algorithms, it starts with the bias. That bias comes in two basic forms, one more active and one more passive; one about what is present and one about what is absent. Both forms matter and often both come together. If we examine a data set, we might see clear differences between groups but be slower to spot – if we spot them at all – skews caused by the representation of those groups in the data set in the first place. If we survey bus passengers, we may find out important things about the needs of women travelling with small children (and their pushchairs and paraphernalia), but we may overlook those who have been discouraged from travelling that way at all. That’s a very simple example; many are more subtle than that – but the essential point is that bias of absence is pervasive.
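To make the bias of absence concrete, here is a deliberately artificial sketch; the percentages and the boarding model are invented for illustration and are not taken from the post or from any real survey. If travellers who need step-free access are discouraged from boarding in the first place, a survey of those on board will understate their numbers:

```python
import random

random.seed(1)

# Invented numbers: 30% of would-be passengers travel with a pushchair
# and need step-free access, but they are discouraged from travelling,
# so only 40% of them board, versus 90% of everyone else.
population = [{"needs_step_free": random.random() < 0.30}
              for _ in range(100_000)]

def boards(person):
    p = 0.40 if person["needs_step_free"] else 0.90
    return random.random() < p

# The survey only ever sees those who actually boarded.
surveyed = [p for p in population if boards(p)]

true_share = sum(p["needs_step_free"] for p in population) / len(population)
observed_share = sum(p["needs_step_free"] for p in surveyed) / len(surveyed)

print(f"share needing step-free access: {true_share:.0%} in the population, "
      f"{observed_share:.0%} among surveyed passengers")
```

The survey is perfectly accurate about the people it reaches, and still badly wrong about the population: the absent travellers leave no trace in the data at all.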
This post systematically identifies and addresses those biases in the context of transport. It draws heavily on the approach of Caroline Criado Perez’s book, Invisible Women: Exposing the Data Bias in a World Designed for Men, illustrating the general point with pointers to a vast range of data and analysis. It should be compelling reading for anybody involved with transport planning, but it’s included here for two other reasons as well.
The first is that it provides a clear explanation of why it is essential to be intensely careful about even apparently objective and neutral data – the seductive objectivity of computerised algorithmic decision making is too often anything but – and of why those problems won’t be solved by better code if the deeper causes discussed here are not addressed.
The second is prompted by a tweet about the post by Peter Hendy, the former Transport Commissioner for London and currently the chairman of Network Rail, who comments:
This is brilliant! It’s required reading at Network Rail already.
That’s good, of course – a senior leader in the industry acknowledging the problem if not quite promising to do anything about it. But it’s also quite alarming: part of the power of this post is that in an important sense there is nothing new about it – it’s a brilliant survey of the landscape, but there isn’t much new about the landscape itself. So Hendy’s tweet leaves us wondering when it becomes acceptable to know something – and when it becomes essential. Or in the oddly appropriately gendered line of Upton Sinclair:
It is difficult to get a man to understand something, when his salary depends upon his not understanding it!
Adam Grant – Sloan Management Review
It is counter-intuitive that insights don’t have to be counter-intuitive.
There is excitement and recognition in grand discoveries, uncovering what we didn’t know as a critical step towards doing a better thing. The bigger the surprise, the better the achievement. And at the other end of the spectrum, the time-honoured way of sneering at consultants is to say that they have borrowed your watch so that they can tell you the time. Over and over again, though, big organisations pay expensive consultancies to do exactly that. There are various reasons why that might be rational (or at least understandable) behaviour; one is perhaps that the obvious is not actually obvious until it is made obvious.
This interesting article expands on the power of obviousness made obvious as an enabler and driver of change. Its focus is on internal management practices, but the approach clearly has wider application:
Findings don’t have to be earth-shattering to be useful. In fact, I’ve come to believe that in many workplaces, obvious insights are the most powerful forces for change.
Rachel Hope – DfE Digital and Transformation
Most of government is mostly service design most of the time. That’s a pithy and powerful assertion, and has been deservedly influential since Matt Edgar coined it a few years ago. But influential is not the same as right – and indeed the title of Matt’s original blog post ended more tentatively with ‘…Discuss.’
This post, which is in effect a case study of acting as if the assertion were true, throws useful light on what it could mean. In doing so it makes it easier to see that there is a risk of eliding two questions and that it is worth answering them separately. The easy first question is whether policy and delivery should understand and respect each other and expect to work in close partnership – to which the answer must be yes. The harder second question is whether the Venn diagram does – or should – eventually consume itself to become a single all-encompassing circle. Verbally and visually, the argument of this post is that it does, and that argument is powerfully made in respect of the service it describes. But that still leaves open the question of whether the model works as well when the service is less specific or delivered less directly.
Adrian Brown – Centre for Public Impact
Everybody is in favour of evidence-based policy – by definition it must be far superior to the policy-based evidence with which it is often contrasted. This post is a brave challenge to the assertion that there is an evidence base for evidence-based policy. In particular, it argues first that weak evidence can be unwittingly assembled to appear misleadingly strong, and in doing so close down policy options which should at the very least be kept open; and second that experimentation is a better approach, precisely because it avoids forcing complex issues into simple binary choices.
That’s not an argument that evidence is unimportant, of course. But it’s a good reminder that evidence should be scrutinised and that simple conclusions can often be simplistic.