Thomas Davenport – MIT Sloan Management Review
It’s generally far easier to make decisions badly than to make them well, even at the best of times. Knowing that is the first step towards countering it, and this post gives a pretty standard account of a range of cognitive biases which may be relevant in the context of COVID-19.
Nine biases are covered in the post, some more obviously relevant to present circumstances than others. The last two are perhaps most pertinent. Neglect of probability is essentially the point made in Scott Alexander's much more detailed argument: structuring thinking in terms of probabilities is harder than the attractive simplicity of binary choices. And perhaps most challenging of all is normalcy bias. What is normal is a really useful guide to what is to come, until it isn't. There is a lot of rhetoric around at present about things not going back to the way they were, and about the need for and desirability of a new normal. But we have seen from other crises that the sense of what is normal, the sense of there being a natural order of things (often reinforced, as it happens, by a poor grasp of probability), can too easily overwhelm the sense of opportunity and possibility which the crisis itself has created. Normalcy bias is part of what made the crisis what it is, but it will also be part of how we manage the aftermath, and it risks becoming part of why fewer lessons will have been learned and applied when we look back at this period in years to come.
Ross Anderson – Light Blue Touchpaper
This post is interesting at three levels. It is a meticulous case study of why contact tracing, and particularly pseudonymous contact tracing, and particularly app-based pseudonymous contact tracing is a hard problem (maybe even a wicked problem). It is an example of a more general phenomenon that describing a policy aspiration generally turns out to be much easier than describing, let alone implementing, a way of meeting that aspiration. And it illustrates the adage (distorted from an original by Mencken) that for every complex problem there is an answer that is clear, simple, and wrong.
And there is a fourth, which is perhaps most pertinent of all, which is that for problems of any complexity, technology cannot wish away human behaviour. Even if a contact tracing app were to work perfectly in technical terms (whatever that might mean), the individual and social behavioural responses may be far from what is desired. Or as Anderson puts it:
We cannot field an app that will cause more worried well people to phone 999.
That’s an insight relevant to many more problems than this one.
Adam Grant – Sloan Management Review
It is counterintuitive that insights don’t have to be counterintuitive.
There is excitement and recognition in grand discoveries: uncovering what we didn’t know is a critical step towards doing a better thing. The bigger the surprise, the better the achievement. And at the other end of the spectrum, the time-honoured way of sneering at consultants is to say that they have borrowed your watch so that they can tell you the time. Over and over again, though, big organisations pay expensive consultancies to do exactly that. There are various reasons why that might be rational (or at least understandable) behaviour; one is perhaps that the obvious is not actually obvious until it is made obvious.
This interesting article expands on the power of obviousness made obvious as an enabler and driver of change. Its focus is on internal management practices, but the approach clearly has wider application:
Findings don’t have to be earth-shattering to be useful. In fact, I’ve come to believe that in many workplaces, obvious insights are the most powerful forces for change.
‘Start with user needs’ has been the mantra of digital government since the early heady days of GDS. It’s a thought which is simple, powerful and – this post argues – wrong. Or, more accurately, unhelpful: it’s a concept which both lacks precision in its own right and risks being too tightly coupled to the construction of solutions.
This is not some random hit job, but a deeply reflective post which brings out clearly where user research most adds value, by being still clearer about where it doesn’t. In doing so it draws out a point which is relevant and important to a much wider audience. It positions user research as a means of reducing risk – and that is important not just as a way of helping senior decision makers see the value in it, but because basing decisions on unexamined and untested assumptions leads to bad consequences (and has been doing since long before government was digital).
Tom Vanderbilt – Nautilus
This post is more a string of examples than a fully constructed argument but is none the worse for that. The thread which holds the examples together is an important one: predicting the future goes wrong because we misunderstand behaviour, not because we misunderstand technology.
A couple of points stand out. One is the mismatch between social change and technology change: the shift of technology into the workplace turned out to be much easier to predict than the movement of women into the workplace. That’s a specific instance of the more general point that we both under- and over-predict the future. A second is that we over-weight the innovative in thinking about the future (and about the past and present); as Charlie Stross describes it, the near future comprises three parts: 90% of it is just like the present, 9% is new but foreseeable developments and innovations, and 1% is utterly bizarre and unexpected.
None of that is a reason for abandoning attempts to think about the future. But the post is a strong – and necessary – reminder of the need to keep in mind the biases and distortions which all too easily skew the attempt.
Koen Smets – Behavioral Scientist
There are plenty of places you can find lists of biases, capturing human behaviour at the edge of human rationality. All too often they get used to reinforce a tendency to play cognitive bias I-spy, prompting us to spot the many ways in which actual messy humans fall short of tidy economic assumptions. This post gives a beautifully clear account of why interpreting them in that way risks missing the point, imposing a dangerously inaccurate determinism on human behaviour.
Armed with a sparkling new vocabulary of cognitive and behavioral effects, it’s easy to see examples of biases all around us, and we fool ourselves into believing that we have become experts. We risk falling prey to confirmation bias. The outcomes of experiments appear obvious to us because we overlook the intricate nature of the full picture (or fail to notice unsuccessful replications). By simplifying human behavior into a collection of easily identified, neatly separate irrationalities, we strengthen our misguided self-perception of expertise.
And as Carla Groom noted in drawing attention to the article, seductive science isn’t a good basis for effective policy making.
Jennifer Guay – Apolitical
To describe something as ‘policy based evidence making’ is to be deliberately rude at two levels: first because it implies the use of evidence to conceal rather than illuminate, and second because it implies a failure to recognise that evidence should drive policy (and thus, though often less explicitly, politics).
Evidence-based policy, on the other hand, is a thing of virtue for which we should all be striving. That much is obvious to right-thinking people. In recent times, the generality of that thought has been reinforced by very specific approaches. If you haven’t tested your approach through randomised controlled trials, how can you know that your policy making has reached the necessary level of objective rigour?
This post is a thoughtful critique of that position. At one level, the argument is that RCTs tell you less than they might first appear to. At another level, that fact is a symptom of a wider problem, that human life is messy and multivariate and that optimising a current approach may at best get you to a local maximum. That is of course why the social sciences are so much harder than the so-called hard sciences, but that is probably a battle for another day.
David Weinberger – Los Angeles Review of Books
At one level, this is an entertainingly polite but damning book review. At another, it is a case study in how profound expertise in one academic domain does not automatically translate into the distillation of wisdom in another. But beyond both of those, the real value of this piece is in drawing out the point that in the realm of ideas, as with so many others, the internet is a place where new things are happening, not just the old things being done a bit better. We need to get better not just at knowing things, but at how to know things. How, in this new world, do we take advantage of its strengths to come at knowledge in different ways?
I had got to the end of reading this before noticing that it was by David Weinberger. That would have been endorsement enough – he has been sharing deep insights about how all this works for many years and is always a name to look out for.
There is a caricature of policy making in which it is presented as an exercise in introspection, free of evidence and free in particular of contact with those who might experience and understand the context and impact of its delivery. Like all good caricatures, there is something recognisable in that, and like all good caricatures, those caricatured more easily see the distortion than the likeness.
The underlying challenge in this post is a good one. Emphasis on things which can be measured distorts attention from things which may be just as important but are more elusive. Understanding the variation around a central figure is as important as understanding the central figure itself – and can tell you very different things. Broader and more qualitative approaches are an essential complement to narrower and more quantitative ones.
But policy makers are people too. Dismissing them as ivory towered elitists is too easy. It would be good to have more empathetic policy makers, but more empathy with policy makers is part of what we need to get there. Policy making is itself the product of a system – and understanding the drivers and behaviours of that system is the essential first step to changing it.
Emmanuel Lee – LSE Impact Blog
Behavioural science meets service design meets engineering. Some interesting ideas (though the experimental guinea pigs are, as so often, students – that might, or might not, tell us much about the wider population).
Piyush Tantia – Stanford Social Innovation Review