This rather mundane title is the gateway to a rich set of resources – a compendium of tools for public sector innovation and transformation, as the site’s subtitle has it. It’s a library organised by topics and actions, as well as supporting connections between people working on public sector innovation round the world. Its very richness has the potential to be a bit overwhelming, so it’s well worth starting with a very clear blog post by Angela Hanson which introduces the approach OECD has taken.
Geoff Mulgan has written a book about the power of collective intelligence. Martin Stewart-Weeks has amplified and added to Geoff’s work by writing a review. And now this note may spread attention and engagement a little further.
That is a ridiculously trite introduction to a deeply serious book. Spreading, amplifying, challenging and engaging with ideas and the application of those ideas are all critically important, and it’s hard to imagine serious disagreement with the proposition that it’s the right thing to do. But the doing of it is hard, to put it mildly. More importantly, that’s only one side of the driving problem: how do unavoidably collective problems get genuinely collective solutions? And in the end, that question is itself just such a problem, demanding just such a solution. Collectively, we need to find it. It’s well worth reading the book, but this review is a pretty good substitute.
Tim Berners-Lee didn’t invent the internet. But he did invent the world wide web, and he does not altogether like what it has become. This post is his manifesto for reversing one of the central power relationships of the web, the giving and taking of data. Instead of giving data to other organisations and having to watch them abuse it, lose it and compromise it, people should keep control of their personal data and allow third parties to see and use it only under their control.
This is not a new idea. Under the names ‘vendor relationship management’ (horrible) and ‘volunteered personal information’ (considerably better but not perfect), the thinking stretches back a decade and more, developing steadily, but without getting much traction. If nothing else, attaching Berners-Lee’s name to it could start to change that, but more substantively it’s clear that there is money and engineering behind this, as well as thoughts and words.
But one of the central problems of this approach from a decade ago also feels just as real today, perhaps more so. As so often with better futures, it’s fairly easy to describe what they should look like, but remarkably difficult to work out how to get there from here. This post briefly acknowledges the problem, but says nothing about how to address it. The web itself is, of course, a brilliant example of how a clear and powerful idea can transform the world without the ghost of an implementation plan, so this may not feel as big a challenge to Berners-Lee as it would to any more normal person. But the web filled what was in many ways a void, while the data driven business models of the modern internet are anything but, and those who have accumulated wealth and power through those models will not go quietly.
It’s nearly ten years since Tim Wu wrote The Master Switch, a meticulous account of how every wave of communications technology has started with dispersed creativity and ended with centralised industrial scale. In 2010, it was possible to treat the question of whether that was also the fate of the internet as still open, though with a tipping point visible ahead. The final sentence of the book sets out the challenge:
If we do not take this moment to secure our sovereignty over the choices our information age has allowed us to enjoy, we cannot reasonably blame its loss on those who are free to enrich themselves by taking it from us in a way history has foretold.
A decade on, the path dependence is massively stronger and will need to be recognised if it is to be addressed. Technological creativity based on simple views of data ownership is unlikely to be enough by itself.
Here are a hundred disruptive technologies, set out in waves of innovation, with time to ubiquity on one axis and potential for disruption on the other. On that basis, smart nappies appear in the bottom left corner, as imminent and not particularly disruptive (though perhaps that depends on just how smart they are and on who is being disrupted), while towards the other end of the diagonal we get to transhuman technologies – and then who knows what beyond.
The authors are firm that this is scientific foresight, not idle futurism, though that’s an assertion which doesn’t always stand up to close scrutiny. Planetary colonisation is further into the future than implantable phones, but will apparently be less disruptive when it comes. Dream recording falls into the distant future category (rather than fringe science, where it might appear more at home), rather oddly on the same time scale but three levels of disruption higher than fusion power.
The table itself demonstrates that dreams are powerful. But perhaps not quite that powerful. And it’s a useful reminder, yet again, that technology change is only ever partly about the technology, and is always about a host of other things as well.
Governments should move slowly and try not to break things. That’s a suggestion slightly contrary to the fashionable wisdom in some quarters, but has some solid reasoning behind it. There are good reasons for governments not to be leading edge adopters – government services should work; innovation is not normally a necessary counter to existential threats; service users are not able to trade stability for excitement.
That’s not an argument against innovation, but it is an argument for setting pace and risk appropriately. As a result, this post argues, the skills government needs are less to do with cutting edge novelty, and much more to do with identifying and adopting innovations from elsewhere.
The idea that it should be possible to capture legislative rules as code and that good things might result from doing so is not a new one. It sounds as though it should be simple: the re-expression of what has already been captured in one structured language in another. It turns out though not to be at all simple, partly because of what John Sheridan calls the intertwingling of law: the idea that law often takes effect through reference and amendment and that the precise effect of its doing so can be hard to discern.
There is interesting work going on in New Zealand experimenting with the idea of law and code in some limited domains, and this post is prompted by that work. What makes it distinctive is that it is written from a policy perspective, asking questions such as whether the discipline of producing machine consumable rules is a route to better policy development. It’s still unclear how far this approach might take us – but the developments in New Zealand are definitely worth keeping an eye on.
Innovation is perhaps second only to transformation as a word to convey excitement and derring do to the messy business of getting things done in organisations – a view promoted not least by people whose derring may be rather stronger than their do.
The assumption that disruption and iconoclasm are the best – or even only – way of making significant and sustained change happen is a curiously pervasive one. The problem with it is not that it’s always wrong, but that there is good reason to think that it’s not always right. As this post argues, sometimes deep experience can be just as powerful, in part because intractable problems often respond better to sustained incremental efforts than to a single transformational solution.
This article suffers a little from a rather patronising view of government, and some of the examples used tend to the trivial. But the underlying point remains a good one: people who understand and care about the problem may be the people best placed to solve it – if they are given the licence to do so.
The word ‘digital’ has long been both powerful and problematic. It’s powerful because new technologies and, in some ways even more so, new ways of developing and applying new technologies have made many things better, faster, cheaper – and often very different. And as Tom Loosemore, perhaps the leading proponent of using ‘digital’ in a sense which transcends a narrow, technical meaning puts it:
You find you need a name to describe that new way of working that is shorter than user centric multi disciplinary iterative open agile etc…
— Tom Loosemore (@tomskitomski) May 24, 2018
But it’s also problematic, because it stretches the meaning of ‘digital’ so far as to drain it of content. It has become a vague word, implying modernity and goodness and not much more. More seriously, it puts the emphasis in the wrong place: digital is a means, not an end, and there is always a risk that in focusing too much on means, we lose sight of the ends.
That’s not to say that ‘digital’ has not been a useful word. In many ways it has been. It’s more to say that the time has come – and is arguably long past – when we should move beyond it. That makes this post a really interesting sign of what may be to come: Citizens Advice has chosen to replace its Chief Digital Officer with a Director of Customer Journey, and its reasons for doing so are well worth reading.
Sometimes a simple tweet says all there is to say. Though in this case it’s well worth reading the replies as well.
The main stages of resistance to / adoption of new things:
1 Ignore it
2 Ridicule it and the horse it rode in on
3 Say yes, but do nothing at all differently
4 'Comply' (maliciously)
5 Openly attack it
6 Claim it as your own idea that you had ages ago and have always believed in
— Janet Hughes (@JanetHughes) May 17, 2018
Beyond even the bonus points for talking about laws being ‘intertwingled’, this is an important and interesting post at the intersection of law, policy and automation. It neatly illustrates why the goal of machine-interpretable legislation, such as the recent work by the New Zealand government, is a much harder challenge than it first appears – law can have tacit external interpretation rules, which means that the highly structured interpretation which is normal, and indeed necessary, for software just doesn’t work. Which is why legal systems have judges and programming languages generally don’t – and why the New Zealand project is so interesting.
The rather dry title of this post belies the importance and interest of its content. Lots of people have spotted that laws are systems of rules, that computer code is systems of rules, and that somehow these two facts should illuminate each other. Quite how that should happen is much less clear. Ideas have ranged from developing systems to turn law into code to adapting software testing tools to check legislative compliance. This post records an experiment with a different approach again, exploring the possibility of creating legislative rules in a way which is designed to make them machine consumable. That’s an approach with some really interesting possibilities, but also some very deep challenges. As John Sheridan has put it, law is deeply intertwingled: the meaning of legislation is only partly conveyed by the words of a specific measure, which means that transcoding the literal letter of the law will never be enough. And beyond that again, the process of delivering and experiencing a service based on a particular set of legal rules will include a whole set of rules and norms which are not themselves captured in law.
That makes it sensible to start, as the work by the New Zealand government reported here has done, with exploratory thinking, rather than jumping too quickly to assumptions about the best approach. The recommendations for areas to investigate further set out in their full report are an excellent set of questions, which will be of interest to governments round the world.
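To make the idea of machine consumable rules slightly more concrete, here is a minimal sketch of what one might look like. The rule, the names and the thresholds are entirely invented for illustration – they are loosely in the spirit of the eligibility provisions the New Zealand experiments have explored, not drawn from any actual statute or from their published work. The point of the sketch is that a rule expressed this way can be both evaluated and explained, though it deliberately ignores the hard intertwingling problems of reference and amendment discussed above.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative threshold only - not taken from any real legislation.
QUALIFYING_AGE = 65

@dataclass
class Person:
    date_of_birth: date
    ordinarily_resident: bool

def age_on(person: Person, at: date) -> int:
    """Whole years of age on a given date."""
    had_birthday = (at.month, at.day) >= (person.date_of_birth.month,
                                          person.date_of_birth.day)
    return at.year - person.date_of_birth.year - (0 if had_birthday else 1)

def is_eligible(person: Person, at: date):
    """Evaluate the invented rule: a person qualifies if they are 65 or
    over and ordinarily resident. Returns the outcome together with the
    reasons relied on, so the decision is explainable as well as
    computable."""
    reasons = []
    age = age_on(person, at)
    if age >= QUALIFYING_AGE:
        reasons.append(f"aged {age}, which meets the qualifying age of {QUALIFYING_AGE}")
    else:
        reasons.append(f"aged {age}, below the qualifying age of {QUALIFYING_AGE}")
    if person.ordinarily_resident:
        reasons.append("ordinarily resident")
    else:
        reasons.append("not ordinarily resident")
    eligible = age >= QUALIFYING_AGE and person.ordinarily_resident
    return eligible, reasons
```

Even at this toy scale, the design choice matters: returning the reasons alongside the result is what makes the rule auditable, which is one of the things the policy perspective in the post cares about. What the sketch cannot capture is everything the surrounding entries warn about – tacit interpretation, amendment by reference, and the norms of service delivery that never appear in the legislative text at all.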
This is a good post on the very practical difficulties in establishing secure digital identity, in this case for the purpose of voting in elections. It’s included here mainly as a timely but inadvertent illustration of the point in the previous post that even technology fixes are harder than they look. Implementing some form of online voting wouldn’t be too difficult; implementing a secure and trustworthy electoral system would be very hard indeed.
Digital identity (like digital voting) sounds as though it ought to be a problem with a reasonably straightforward solution, but which looks a lot more complicated when it comes to actually doing it. Like everything with the word ‘digital’ attached to it, that’s partly a problem of technical implementation. But also like everything with the word ‘digital’ attached to it, particularly in the public and political space, it’s a problem with many social aspects too.
This post makes a brave attempt at offering a solution to some of the technical challenges. But the reason why the introduction of identity cards has been highly politically contentious in the UK, but not in other countries, has a lot to do with history and politics and very little to do with technology. So better technology may indeed be better, but that doesn’t in itself constitute a new approach to identity. Even if the better technology is in fact better (and as Paul Clarke spotted, ‘attestation’ is doing a lot more work as a word than it first appears), there are some much wider issues (some flagged by Peter Wells) which would also need to be addressed as part of an overall approach.
This is close to the beginning of what is billed as a series of indefinite length on agility and Agility, which we are promised will at times be polemical and curmudgeonly, and which is tangentially illustrated with references to Alice (the one in Wonderland, not the cryptographic exemplar). The first post in the series set some context; this second one focuses on the question of whether short-cycle software production techniques translate to business strategy. In particular, the argument is that scrum-based approaches to agile work best when the problem space is reasonably well understood, and that this will be the case to different extents at different stages of an overall development cycle.
Dave Snowden is best known as the originator of the Cynefin framework, which is probably enough to guarantee that this series will be thought provoking. He positions scrum approaches within the Cynefin complex domain and as a powerful approach – but not the only or uniquely appropriate one. It will be well worth watching his arguments develop.
Another good provocation from Paul Taylor, arguing this time that solitary thinking is a better source of creative breakthroughs than collaborative activities. It’s not all or nothing – there is a useful distinction drawn between problems where collaboration is valuable (complex, strategic, needing engagement) and those where it isn’t (deep, radical, disruptive, urgent).
But almost more important than that is the observation that few organisations actually value purposeful thinking in the first place – or at least, they don’t create the conditions in which such thinking can readily take place.
If you had to write down a list of innovation methods and techniques, how many could you come up with? However long your list, it’s a fair bet that it won’t have as much on it as this landscape of innovation approaches (also available as a more legible PDF to cut out and keep).
Methods are grouped into four overlapping ‘spaces’. There’s room for debate about what best fits where, and there is a broad range from mainstream to eclectic – but that in itself is a good start in challenging assumptions about methods which appear natural and obvious, and indeed about the kind of innovation being sought.
How many design innovation toolkits are there? The answer seems to be that there are more than you might think possible. Over a hundred are brought together on this page, which makes it an extraordinarily rich collection. There are lots of interesting-looking things here, some well known, others more obscure – though it’s hard not to come away with the thought that the world’s need for innovation toolkits has now been over-abundantly met.
The bigger the underlying change, the bigger the second (and higher) order effects. Those effects often get overlooked in looking at the impact of change (and in trying to understand why expected impacts haven’t happened). Benedict Evans has always been good at spotting and exploring the more distant consequences of technology-driven change, for example in his recent piece on ten-year futures. ‘Cascading collapse’ is a good way of putting it: if the long-heralded but slow to materialise collapse of physical retail is beginning to appear, what consequences flow from that?
Today HMRC announced that 92.5% of this year’s tax returns were submitted online. That too has been a slow but inexorable growth, taking twenty years to go from expensive sideshow to near complete dominance. There is more to do to reflect on the cascading collapses that that and other changes will wreak not just on government, but through government to society and the economy more widely.
Interesting ideas on how to think about the future seem to come in clumps. So alongside Ben Hammersley’s reflections, it’s well worth watching and listening to this presentation of a ten year view of emerging technologies and their implications. The approaches of the two talks are very different, but interestingly, they share the simple but powerful technique of looking backwards as a good way of understanding what we might be seeing when we look forwards.
They also both talk about the multiplier effect of innovation: the power of steam engines is not that they replace one horse, it is that each one replaces many horses, and in doing so makes it possible to do things which would be impossible for any number of horses. In the same way, machine learning is a substitute for human learning, but operating at a scale and pace which any number of humans could not imitate.
This one is particularly good at distinguishing between the maturity of the technology and the maturity of the use and impact of the technology. Machine learning, and especially the way it allows computers to ‘see’ as well as to ‘learn’ and ‘count’, is well along a technology development S-curve, but at a much earlier point of the very different technology deployment S-curve, and the same broad pattern applies to other emerging technologies.
There are some who argue that the only test of progress is delivery and that the only thing which can be iterated is a live service. That is a horribly misguided approach. There is no point in producing a good answer to a bad question, and lots to be gained from investing time and energy in understanding the question before attempting to answer it. Even for pretty simple problems, badly formed initial questions can generate an endless – and expensive – chain of solutions which would never have needed to exist if that first question had been a better one. Characteristically, Paul Taylor asks some better questions about asking better questions.