Tom Vanderbilt – Nautilus
This post is more a string of examples than a fully constructed argument but is none the worse for that. The thread which holds the examples together is an important one: predicting the future goes wrong because we misunderstand behaviour, not because we misunderstand technology.
A couple of points stand out. One is the mismatch between social change and technology change: the shift of technology into the workplace turned out to be much easier to predict than the movement of women into the workplace. That’s a specific instance of the more general point that we both under- and over-predict the future. A second is that we over-weight the innovative in thinking about the future (and about the past and present); as Charlie Stross describes it, the near-future comprises three parts: 90% of it is just like the present, 9% is new but foreseeable developments and innovations, and 1% is utterly bizarre and unexpected.
None of that is a reason for abandoning attempts to think about the future. But the post is a strong – and necessary – reminder of the need to keep in mind the biases and distortions which all too easily skew the attempt.
Koen Smets – Behavioral Scientist
There are plenty of places you can find lists of biases, capturing human behaviour at the edge of human rationality. All too often they get used to reinforce a tendency to play cognitive bias I-spy, prompting us to spot the many ways in which actual messy humans fall short of tidy economic assumptions. This post gives a beautifully clear account of why interpreting them in that way risks missing the point, imposing a dangerously inaccurate determinism on human behaviour.
Armed with a sparkling new vocabulary of cognitive and behavioral effects, it’s easy to see examples of biases all around us, and we fool ourselves into believing that we have become experts. We risk falling prey to confirmation bias. The outcomes of experiments appear obvious to us because we overlook the intricate nature of the full picture (or fail to notice unsuccessful replications). By simplifying human behavior into a collection of easily identified, neatly separate irrationalities, we strengthen our misguided self-perception of expertise.
And as Carla Groom noted in drawing attention to the article, seductive science isn’t a good basis for effective policy making.
Jennifer Guay – Apolitical
To describe something as ‘policy-based evidence making’ is to be deliberately rude at two levels: first because it implies the use of evidence to conceal rather than illuminate, and second because it implies a failure to recognise that evidence should drive policy (and thus, though often less explicitly, politics).
Evidence based policy, on the other hand, is a thing of virtue for which we should all be striving. That much is obvious to right-thinking people. In recent times, the generality of that thought has been reinforced by very specific approaches. If you haven’t tested your approach through randomised controlled trials, how can you know that your policy making has reached the necessary level of objective rigour?
This post is a thoughtful critique of that position. At one level, the argument is that RCTs tell you less than they might first appear to. At another level, that fact is a symptom of a wider problem, that human life is messy and multivariate and that optimising a current approach may at best get you to a local maximum. That is of course why the social sciences are so much harder than the so-called hard sciences, but that is probably a battle for another day.
David Weinberger – Los Angeles Review of Books
At one level, this is an entertainingly polite but damning book review. At another, it is a case study in how profound expertise in one academic domain does not automatically translate into the distillation of wisdom in another. But beyond both of those, the real value of this piece is in drawing out the point that in the realm of ideas, as with so many others, the internet is a place where new things are happening, not just the old things being done a bit better. We need to get better not just at knowing things, but at how to know things. How, in this new world, do we take advantage of its strengths to come at knowledge in different ways?
I had got to the end of reading this before noticing that it was by David Weinberger. That would have been endorsement enough – he has been sharing deep insights about how all this works for many years and is always a name to look out for.
There is a caricature of policy making in which it is presented as an exercise in introspection, free of evidence and free in particular of contact with those who might experience and understand the context and impact of its delivery. Like all good caricatures, there is something recognisable in that, and like all good caricatures, those caricatured more easily see the distortion than the likeness.
The underlying challenge in this post is a good one. Emphasis on things which can be measured distorts attention from things which may be just as important but are more elusive. Understanding the variation around a central figure is as important as understanding the central figure itself – and can tell you very different things. Broader and more qualitative approaches are an essential complement to narrower and more quantitative ones.
But policy makers are people too. Dismissing them as ivory towered elitists is too easy. It would be good to have more empathetic policy makers, but more empathy with policy makers is part of what we need to get there. Policy making is itself the product of a system – and understanding the drivers and behaviours of that system is the essential first step to changing it.
Emmanuel Lee – LSE Impact Blog
Behavioural science meets service design meets engineering. Some interesting ideas (though the experimental guinea pigs are, as so often, students – that might, or might not, tell us much about the wider population).
Piyush Tantia – Stanford Social Innovation Review