Strategy

The perils of bad strategy

Richard Rumelt – McKinsey Quarterly

If we want to create a good strategy, there is some value in understanding what makes a bad one. This paper sets out to do exactly that and ends even more helpfully by reversing that into three key characteristics of a good strategy – understanding the problem; describing a guiding approach to addressing it; and setting out a coherent set of actions to deliver the approach. This is a classic article – which is a way of saying both that it’s a few years old and that it’s pretty timeless. It derives from a book, but as is not uncommon, the book is very much longer without adding value in proportion.

Data and AI

The Risk of Machine-Learning Bias (and How to Prevent It)

Chris DeBrusk – MIT Sloan Management Review

This article is a good complement to the previous post, providing some pragmatic rigour on the risk of bias in machine learning and ways of countering it. Perhaps the most important point is one of the simplest:

It is safe to assume that bias exists in all data. The question is how to identify it and remove it from the model.

There is some good practical advice on how to do just that. But there is an obvious corollary: if human bias is endemic in data, it risks being no less endemic in attempts to remove it. That’s not a counsel of despair; this is an area where good intentions really do count for something. But it does underline the importance of being alert to the converse: unless it is clear that bias has been thought about and countered, the probability is high that it remains. And of course it will be hard to calibrate the residual risk, whatever its level might be, particularly for the individual on the receiving end of the computer saying ‘no’.
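
To give a flavour of what identifying bias can look like in practice, here is a minimal sketch – not taken from the article, and with illustrative column names and made-up data – which simply compares outcome rates across a sensitive attribute before any model is trained. A large gap is not proof of bias in itself, but it is exactly the sort of signal that warrants a closer look at the data and the labelling process.

```python
# Minimal sketch: surface a possible bias signal by comparing outcome rates
# across a sensitive attribute. Column names and data are illustrative only.
import pandas as pd

def outcome_rates_by_group(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Return the mean outcome (e.g. loan approval rate) for each group."""
    return df.groupby(group_col)[outcome_col].mean()

# Made-up example data: the gap between groups is a prompt to investigate,
# not proof of bias by itself.
data = pd.DataFrame({
    "gender": ["f", "f", "m", "m", "m", "f", "m", "f"],
    "approved": [0, 1, 1, 1, 1, 0, 1, 0],
})
print(outcome_rates_by_group(data, "gender", "approved"))
# gender
# f    0.25
# m    1.00
# Name: approved, dtype: float64
```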

Data and AI

Computer Says No: Part 1 Algorithmic Bias and Part 2 Explainability

These two (of a planned three) posts take an interesting approach to the ethical problems of algorithmic decision making, arriving at a much more optimistic view than most writers on the subject. They are very much worth reading, even though the arguments don’t seem quite as strong as they are made to appear.

Part 1 essentially sidesteps the problem of bias in decision making by asserting that automated decision systems don’t actually make decisions (humans still mostly do that), but should instead be thought of as prediction systems – and the test of a prediction system is in the quality of its predictions, not in the operations of its black box. The human dimension is a bit of a red herring, as it’s not hard to think of examples where in practice the prediction outputs are all the decision maker has to go on, even if in theory the system is advisory. More subtly, there is an assumption that prediction quality can easily be assessed and an assertion that machine predictions can be made independent of the biases of those who create them, both of which are harder problems than the post implies.
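
On the first of those, one way of seeing why assessing prediction quality is less straightforward than it sounds: an aggregate accuracy figure can look perfectly respectable while concealing very different performance for different groups. The sketch below is not drawn from the posts; it assumes scikit-learn and uses toy data purely for illustration.

```python
# Minimal sketch: compute accuracy per group rather than in aggregate, since
# a single overall figure can hide large differences between groups.
import numpy as np
from sklearn.metrics import accuracy_score

def accuracy_by_group(y_true: np.ndarray, y_pred: np.ndarray, groups: np.ndarray) -> dict:
    """Accuracy computed separately for each group value."""
    return {
        str(g): float(accuracy_score(y_true[groups == g], y_pred[groups == g]))
        for g in np.unique(groups)
    }

# Toy data: overall accuracy is 0.75, but it is 1.0 for group "a"
# and only 0.5 for group "b".
y_true = np.array([1, 0, 1, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 1, 1, 0, 0])
groups = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
print(accuracy_by_group(y_true, y_pred, groups))
# {'a': 1.0, 'b': 0.5}
```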

The second post goes on to address explainability, with the core argument being that it is a red herring (an argument Ed Felten has developed more systematically): we don’t really care whether a decision can be explained; we care whether it can be justified, and the source of justification is in its predictive power, not in the detail of its generation. There are two very different problems with that. One is that not all individual decisions are testable in that way: if I am turned down for a mortgage, it’s hard to falsify the prediction that I wouldn’t have kept up the payments. The second is that what needs explaining may be different for AI decisions than for human ones. The recent killing of a pedestrian by an autonomous Uber car illustrates the point: it is alarming precisely because it is inexplicable (or at least so far unexplained), but whatever went wrong, it seems most unlikely that a generally low propensity to kill people will be thought sufficiently reassuring.

None of that should be taken as a reason for not reading these posts. Quite the opposite: the different perspective is a good challenge to the emerging conventional wisdom on this and is well worth reflecting on.