This is clever. Deliver a one-hour training session on agile for policy makers (or, presumably, others thought to be deficient in agility) in the form of a one-hour agile project. People often worry about whether agile is scalable, but they usually mean upwards – this really does seem to put the minimum into viable.
At one level, this is an entertainingly polite but damning book review. At another, it is a case study in how profound expertise in one academic domain does not automatically translate into the distillation of wisdom in another. But beyond both of those, the real value of this piece is in drawing out the point that in the realm of ideas, as with so many others, the internet is a place where new things are happening, not just the old things being done a bit better. We need to get better not just at knowing things, but at how to know things. How, in this new world, do we take advantage of its strengths to come at knowledge in different ways?
I had got to the end of reading this before noticing that it was by David Weinberger. That would have been endorsement enough – he has been sharing deep insights about how all this works for many years and is always a name to look out for.
Another good provocation from Paul Taylor, arguing this time that solitary thinking is a better source of creative breakthroughs than collaborative activities. It’s not all or nothing – there is a useful distinction drawn between problems where collaboration is valuable (complex, strategic, needing engagement) and those where it isn’t (deep, radical, disruptive, urgent).
But almost more important than that is the observation that few organisations actually value purposeful thinking in the first place – or at least, they don’t create the conditions in which such thinking can readily take place.
This is a really interesting attempt to set out a set of regulatory principles for the use of algorithms in the public sector. It brings what can easily be quite an abstract debate down to earth: we can talk about open data and open decision making, but what actually needs to be open (and to whom) to make that real?
The suggested principles mostly look like a sensible starting point for debate. Two of them though seem a little problematic, one trivially, the other much more significantly. The trivial one is principle 9, that public sector organisations should insure against errors, which isn’t really a principle at all, though the provision of compensation might be. The important one is principle 5, “Citizens must be informed when their treatment has been informed wholly or in part by an algorithm”. On the face of it, that’s innocuous and reasonable. Arguably though, it’s the equivalent of having a man with a red flag walking in front of a car. Government decisions are already either based on algorithms (often called “laws” or “regulations”) or they are based on human judgements, likely to be more opaque than any computer algorithm. Citizens should absolutely be entitled to an explanation and a justification for any decision affecting them – but the means by which the decision at issue was made should have no bearing on that right.
This is a useful summary of the limitations of automation in service design. Only humans can be genuinely emotional, humans are still preferred to resolve problems, and automation doesn’t always stop human work: it can just shift it from provider to customer. So far, so good. But this has the feel of an article which could have been written almost at any time in the last decade or more, and it does not touch at all on whether these attributes are absolute, situational or (for example) generational. People who design services are always at risk of over-representing their personal preferences, which are often to automate and streamline. Conversely though, there is no doubt that what is widely seen as normal changes over time, and there is no very obvious reason to think that the balance of preferences has become more stable than it was in the past.
One of the reasons why large organisations find change hard is that inevitably new things are at first small in relation to established things. That’s not – in the short term – a problem for the established things: they can just ignore the new thing. It’s very much a problem for the new things: they need to find ways of operating in a system optimised for the old things.
This post is the distillation of a number of discussions about how to do design from the inside. It’s interesting in its own right in suggesting some responses to the challenge of making things happen from the inside. But it’s doubly interesting precisely because it is about making things happen: design on the inside is a very close relation of change on the inside.
Neither, it seems, is the answer to the question posed by the title. But if one thing AIs do is filter the world for us, the question of who does the filtering and in whose interest they do it becomes very important. As with other free services, free algorithms will be provided in expectation of a benefit to somebody, and that somebody may very well not be the end user. So far so unexceptional (and putting it under the heading of AI doesn’t change the substance of an issue which has been around a good while). But if this is a problem, what are the pressures and processes which will work to relieve it rather than reinforce it? Here, the argument rather fades away: we are told we need clear laws and well-accepted procedures to regulate AI, but there is little suggestion here about what they would say or how we would get to them. It’s slightly unfair to single this piece out for what is quite a common problem: when challenges are technology driven, but solutions need to be socially driven, it’s a lot easier to talk about the first than the second.
This opinion piece gives a punchy account of reasons to oppose the introduction of a universal basic income – which makes it useful as a clear statement of one side of the argument, though at the expense of a more balanced assessment. Interestingly, three of the five reasons given are to do with work incentives and the role of work in supporting self-worth. In the context of the wage-based economy which has dominated industrial societies that may make sense – but it prompts interesting further questions about whether those reasons are universally applicable and whether they will survive the next rounds of work automation.
This post is just a teaser, but a teaser for something interesting in both style and substance. Starting next Monday, we are promised a five day intensive course on artificial intelligence. AI is for many people at the stage where there is lots of fragmented insight and understanding, but little which brings the fragments together to form a coherent whole. So there is a gap here well worth filling – and this looks to be a neat low key way of filling it. Watch that space.
If you had to write down a list of innovation methods and techniques, how many could you come up with? However long your list, it’s a fair bet that it won’t have as much on it as this landscape of innovation approaches (also available as a more legible PDF to cut out and keep).
Methods are grouped into four overlapping ‘spaces’. There’s room for debate about what best fits where and there is a broad range from mainstream to eclectic – but that in itself is a good start in challenging assumptions about methods which appear natural and obvious and indeed about the kind of innovation being sought.
A recurrent criticism of governments’ approach to digital services has been that they have been over-focused on the final stage of online interaction, leaving the fundamental organisation and operation of government services unchanged. More recently, design has more often gone deeper, looking at all elements of the service and the systems which support it, but still largely leaving the underlying concept of the service in question unchallenged and unchanged. This post takes that a stage further to look at options for the underlying operating model. Eight are set out in the post, but it is probably still true that most government service design and delivery happens under the first two headings. What are the prospects for the other six – and for all the others which haven’t made it onto this list?
This is a follow up to a post covered here a few days ago which looked critically at outsourcing, starting from the fundamental question first posed by Coase on what organisations should do and what they should buy. This second post is at one level a short summary of the first one, but it’s also rather more than that. It puts forward a slightly different way of framing the question, making the point that time and uncertainty are relevant to the decision, as well as pure transaction cost narrowly defined.
There are transactions, which are in the moment, and imply no further commitment or relationship. There are contracts, which are a commitment to future transactions, and depend on shared assumptions about the future conditions in which those transactions will happen. And there are organisations, which exist in the space beyond contractual precision and certainty.
To complete the hat trick, there is also a separate post applying this thinking to Capita. Even for those less interested in the company, it’s worth reading to the end to get to the punch in the punchline:
In important ways, this is the service that Capita provided and still provides: the ability to blame problems on computers and computer people, while ignoring the physical reality of policy
How many design innovation toolkits are there? The answer seems to be that there are more than you might think possible. Over a hundred are brought together on this page, which makes it an extraordinarily rich collection. There are lots of interesting-looking things here, some well known, others more obscure – though it’s hard not to come away with the thought that the world’s need for innovation toolkits has now over abundantly been met.
Being a subversive is hard work. That’s partly because being the odd one out takes more energy than going with the flow, but it’s also because subversion decays: yesterday’s radicalism is today’s fashion and tomorrow’s received wisdom, to be challenged by the next round of subversion. If that sounds a bit like the innovator’s dilemma, that’s perhaps because it is, with some of the same consequences: you can ride the S-curve to the top, but if you don’t flip to the next curve, your subversion-fu will be lost.
The reciprocal effect – which is more the focus of this post – is the effect on the organisation being subverted. Just yesterday, I heard ‘minimum viable product’ being used to mean ‘best quick fix we can manage in the time’. The good intention was still there, as was an echo of the original meaning, but the hard edge of the concept had been lost, partly, I suspect, because it had become dissociated from the conceptual context which gave it its original meaning. That’s not deliberate degradation but – as the post notes – is the consequence of a virtuous attempt to bring in new thinking, only for it to get absorbed by the wider culture.
So the challenge for subversives remains: how to keep subverting themselves, how to stay one curve ahead.
The idea of the black box pervades a lot of thinking and writing about AI. Mysterious algorithms do inscrutable things which impinge on people’s lives in inexplicable ways. That is alarming in its own right, but doubly so because this is new and uncharted territory. Except that, as this post painstakingly points out, it’s not actually new at all. People have been writing software about which they could not predict the outputs from the inputs pretty much since they have been writing software at all – in a sense, that’s precisely the point of it. And if you want to look at it that way, the ultimate black box is the human brain, where the evidence that we don’t understand the reasons for our own decisions, never mind anybody else’s, is pretty overwhelming.
The need for precision at one level – software doesn’t cope well with typos and syntax errors – doesn’t translate into precision at a higher level, of understanding what that precisely written software will actually do. That thought came from Marvin Minsky in 1967, but people had been writing about black boxes for years before that, when the complexity of software was a tiny fraction of what is normal now.
The fact that this is neither new nor newly recognised doesn’t in itself change the nature of the challenge. What it does perhaps suggest, though, is that strategies developed for coping with these uncertainties in the past may well still be relevant for the future.
The bigger the underlying change, the bigger the second (and higher) order effects. Those effects often get overlooked in looking at the impact of change (and in trying to understand why expected impacts haven’t happened). Benedict Evans has always been good at spotting and exploring the more distant consequences of technology-driven change, for example in his recent piece on ten-year futures. ‘Cascading collapse’ is a good way of putting it: if the long-heralded but slow to materialise collapse of physical retail is beginning to appear, what consequences flow from that?
Today HMRC announced that 92.5% of this year’s tax returns were submitted online. That too has been a slow but inexorable growth, taking twenty years to go from expensive sideshow to near complete dominance. There is more to do to reflect on the cascading collapses that that and other changes will wreak not just on government, but through government to society and the economy more widely.
Organisations, including governments, follow fashions. Some of those fashions change on short cycles, others move more slowly, sometimes creating the illusion of permanence. The fashion for outsourcing, for buying rather than making, has been in place in government for many years, but there are some interesting signs that change may be coming. One immediate cause and signal of that change is the collapse of Carillion, but that happened at a point when the debate was already beginning to change.
This post goes back to the roots of the make or buy choice in the work of Ronald Coase on the nature of the firm. The principle is simple enough: it makes sense to buy things when the overhead of creating and managing contracts is low, and to make them when it is high. The mistake, it is argued here, is that organisations, particularly governments, have systematically misunderstood the cost and complexity of contract management, resulting in the creation of large businesses and networks of businesses whose primary competence is the creation and management of contracts.
One consequence of that is that it becomes difficult or impossible to understand the true level of costs within a contractual system (because prices quickly stop carrying that information) or to understand how the system works (because tacit knowledge is not costed or paid for).
All very thought provoking, and apparently the first in a series of posts. It will be worth looking out for the others.
Issues of data aggregation and de-anonymisation are hardly new, but there’s nothing like a good example to make an issue more visible – and secret US bases revealed through aggregated data from fitness trackers are about as good as it gets.
The real issue though is less such revelations and more the implications for data and privacy more generally. This article argues powerfully that to see this as an issue of individuals and clickthrough privacy policies is to miss a very important point. People can’t consent to the ways their personal data will be used and the risks that carries, because service providers don’t and can’t understand those things themselves, and so can’t explain them in a way which makes consent meaningful. That has some important data policy implications, including much stronger liability for data breaches, and keeping the amount of data captured and held to a minimum in the first place. Those are not new suggestions, of course, so as ever the real question is not how the risks could be managed better, but how incentives can be aligned to ensure that the risks are in fact managed. And that is a political and social problem, not a technical one.