Real-time government

Richard Pope – Platform Land

New writing from Richard Pope is always something to look out for: he has been thinking about and working at the intersection of digital and government more creatively and for longer than most. This post is about the myriad ways in which government is not real time – you can’t track the progress of your benefit claim in anything like the way you can track your Amazon delivery. Conversely, at any given moment, Amazon has a very clear picture of who its active customers are and what they are doing, in a way which is rather less true of the operators of government services.

He is absolutely right to make the point that many services would be improved if they operated – or at least conveyed information – in real time, and he is just as right that converted (rather than transformed) paper processes and overnight batch updates account for some of that. So it shouldn’t detract from his central point to note that some of his examples are slightly odd ones, which may come from an uncharacteristic confusion between real time and event triggered. There is a notification to potential school leavers of their new national insurance number – but since children’s sixteenth birthdays are highly predictable, that notification doesn’t need to be real time in the sense meant here. It was very useful to be told that my passport was about to expire – but since they were helpfully giving me three months’ notice, the day and the hour of the message were pretty immaterial.

Of course there are government services which should operate on less leisurely cycles than that, and of course those services should be as fast and as transparent as they reasonably can be. But perhaps the real power of real-time government comes from the other side: less in shortening the cycle times of service delivery and much more in shortening the cycle times of service improvement.

The Ultimate Guide to Making Smart Decisions

Shane Parrish – Farnam Street

Who could not want not just any guide to making smart decisions, but the ultimate guide? That’s a big promise, but there is some substance to what is delivered. The post itself briskly covers categories of bad decisions before moving on to extensive sets of links to material on thinking in general and decision making in particular. I can’t imagine anyone wanting to work through all of that systematically, but if you need a way of homing in on an aspect of or approach to the subject, this could be a very good place to start.

The four types of strategy work you need for the digital revolution

Josef Oehmen – LSE Business Review

The world is probably not crying out for another 2×2 typology of strategy, but nevertheless still they come. This one is interesting less for its cells than for its axes. Degree of uncertainty is fairly standard, but degree of people impact is slightly more surprising. The people in question are those within the organisation being strategised about – is the relevant change marginal to business as usual, are jobs and careers at risk, how much emotional stress can be expected? All those are good questions, of course, and the approach is certainly a good counter to the tendency to see people as machine components in change, and then to be surprised when they turn out not to be. But it risks muddling up two rather different aspects of the people impact of strategy – those who conceive of the strategy and execute its projects on one hand, and those who are affected by it on the other – and raises the bigger question of whether an internal people focus is the best way of understanding strategy in the first place. And the answer to that feels more likely to be situational than universal.

Perhaps though it is the matrix itself which gets slightly in the way of understanding. This is not an argument that organisations choose or discover which cell to be in or by what route to move between them. Instead:

Our impression was that the most successful companies had learned to execute activities in all four quadrants, all the time, and had robust processes for managing the transition of an activity from one quadrant to the other.

One Small Step for the Web…

Tim Berners-Lee – Medium

Tim Berners-Lee didn’t invent the internet. But he did invent the world wide web, and he does not altogether like what it has become. This post is his manifesto for reversing one of the central power relationships of the web, the giving and taking of data. Instead of giving data to other organisations and having to watch them abuse it, lose it and compromise it, people should keep control of their personal data and allow third parties to see and use it only on their terms.

This is not a new idea. Under the names ‘vendor relationship management’ (horrible) and ‘volunteered personal information’ (considerably better but not perfect), the thinking stretches back a decade and more, developing steadily, but without getting much traction. If nothing else, attaching Berners-Lee’s name to it could start to change that, but more substantively it’s clear that there is money and engineering behind this, as well as thoughts and words.

But one of the central problems of this approach from a decade ago also feels just as real today, perhaps more so. As so often with better futures, it’s fairly easy to describe what they should look like, but remarkably difficult to work out how to get there from here. This post briefly acknowledges the problem, but says nothing about how to address it. The web itself is, of course, a brilliant example of how a clear and powerful idea can transform the world without the ghost of an implementation plan, so this may not feel as big a challenge to Berners-Lee as it would to any more normal person. But the web filled what was in many ways a void, while the data driven business models of the modern internet are anything but, and those who have accumulated wealth and power through those models will not go quietly.

It’s nearly ten years since Tim Wu wrote The Master Switch, a meticulous account of how every wave of communications technology has started with dispersed creativity and ended with centralised industrial scale. In 2010, it was possible to treat the question of whether that was also the fate of the internet as still open, though with a tipping point visible ahead. The final sentence of the book sets out the challenge:

If we do not take this moment to secure our sovereignty over the choices our information age has allowed us to enjoy, we cannot reasonably blame its loss on those who are free to enrich themselves by taking it from us in a way history has foretold.

A decade on, the path dependence is massively stronger, and it will need to be recognised if it is to be addressed. Technological creativity based on simple views of data ownership is unlikely to be enough by itself.

How to be Strategic

Julie Zhuo – Medium

This is a post which earns itself a place here just by its title, though that’s not all that can be said in its favour. It doesn’t start very promisingly, setting up the shakiest of straw men in order to knock them down – does anybody really think that ‘writing long documents’ is a good test of being strategic? – but it improves after the first third, to focus much more usefully on doing three things which actually make for good strategy. As the post acknowledges, the suggestions are very much in the spirit of Richard Rumelt’s good and bad strategy approach. So you can read the book, read Rumelt’s HBR article, which is an excellent summary of the book, or read this post. Rumelt’s article is probably the best of the three, but this shorter and simpler post isn’t a bad alternative starting point.

Is Estonia the Silicon Valley of digital government?

Rainer Kattel and Ines Mergel – UCL Institute for Innovation and Public Purpose

The story of how Estonia became the most e of e-governments is often told, but often pretty superficially and often with an implied – or even explicit – challenge to everybody else to measure themselves and their governments against the standard set by Estonia and despair. This post provides exactly the context which is missing from such accounts: Estonia’s position is certainly the result of visionary leadership, which at least in principle could be found anywhere, but it is also the result of some very particular circumstances which can’t simply be copied or assumed to be universal. There is also a hint of the question behind Solow’s paradox: the real test is not the implementation of technology, but the delivery of better outcomes.

None of that is to knock Estonia’s very real achievements, but yet again to make clear that the test of the effectiveness of technology is not a technological one.

10 questions to answer before using AI in public sector algorithmic decision making

Eddie Copeland – NESTA

A few months ago, Eddie Copeland shared 10 Principles for Public Sector use of Algorithmic Decision Making. They later apparently morphed into twenty questions to address, and now the twenty have been slimmed down to ten. They are all good questions, but one very important one seems to be missing: how can decisions based on the algorithm be challenged (and what, therefore, do people affected by a decision need to understand about how it was reached)?

Why Technology Favors Tyranny

Yuval Noah Harari – The Atlantic

The really interesting effects of technology are often the second and third order ones. The invention of electricity changed the design of factories. The invention of the internal combustion engine changed the design of cities. The invention of social media shows signs of changing the design of democracy.

This essay is a broader and bolder exploration of the consequences of today’s new technologies. That AI will destroy jobs is a common argument; that it might destroy human judgement and the ability to make decisions is a rather bolder one (apparently a really creative human chess move is now seen as an indicator of potential cheating, since creativity in chess is now overwhelmingly the province of computers).

The most intriguing argument is that new technologies destroy the comparative advantage of democracy over dictatorship. The important difference between the two, it asserts, lies not in their ethics but in their data processing models. Centralised data and decision making used to be a weakness; increasingly it is a strength.

There is much to debate in all that, of course. But the underlying point, that those later order effects are important to recognise, understand and address, is powerfully made.

Identifiers and data sharing

Leigh Dodds

This post – which is actually a set of tweets gathered together – is a beautifully short and simple explanation of why some basic stuff really matters in efficiently integrating data and the services it supports (and matters as well in ensuring that things don’t get joined up which shouldn’t be). Without common identifiers, simple and value-adding connections get difficult, expensive and unreliable – a point powerfully made in a post linked from that one, which sets out a bewildering array of unique identifiers for property in the UK – definitely unique in the sense that there is a one-to-one mapping between identifier and place, but ludicrously far from unique in their proliferation.
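The point about common identifiers is easy to see in miniature. The sketch below is not taken from the post – the property records and the UPRN-style keys are invented purely for illustration – but it shows how a shared identifier turns integration into a reliable lookup, while matching on free-text addresses fails silently:

```python
# A minimal sketch, not taken from the post: the property records and the
# UPRN-style keys below are invented purely for illustration.

# Two datasets keyed on a shared identifier for the same properties.
energy_ratings = {
    "100023336956": "C",
    "100023336957": "E",
}
council_tax_bands = {
    "100023336956": "D",
    "100023336957": "B",
}

# With a common identifier, joining the two is a trivial, reliable lookup.
for uprn, rating in energy_ratings.items():
    print(uprn, rating, council_tax_bands.get(uprn))

# Without one, we fall back on matching free-text addresses, which is
# fragile: trivially different spellings mean the join silently fails.
ratings_by_address = {"1 High Street, Anytown": "C"}
bands_by_address = {"1 High St, Anytown": "D"}

for address, rating in ratings_by_address.items():
    print(address, rating, bands_by_address.get(address))  # prints None
```

Multiply that silent failure across millions of records and a proliferation of competing identifier schemes, and the cost of getting the basic stuff wrong becomes clear.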

There is a huge appetite for making more effective use of data. The appetite needs to be as strong for creating the conditions which make that possible.

Data as property

Jeni Tennison – Exponential View

Azeem Azhar’s Exponential View is one of the very few weekly emails which earns regular attention, and it is no disrespect to him to say that the occasional guest authors he invites add further to the attraction. This edition is by Jeni Tennison, bringing her very particular eye to the question of data ownership.

Is owning data just like owning anything else? The simple answer to that is ‘no’. But if it isn’t, what does it mean to talk about data as property? To which the only simple answer is that there is no simple answer. This is not the place to look for detailed exposition and analysis, but it is very much the place to look for a set of links to a huge range of rich content, curated by somebody who is herself a real expert in the field.

Fading out the Echo of Consumer Protection: An empirical study at the intersection of data protection and trade secrets

Guido Noto La Diega

This is by way of a footnote to the previous post – a bit more detail on one small part of the enormous ecosystem described there.

If you buy an Amazon Echo then, partly depending on what you intend to do with it, you may be required to accept 17 different contracts, amounting to close to 50,000 words, not very far short of the length of a novel. You will also be deemed to be monitoring them all for any changes, and to have accepted any such changes by default.

That may be extreme in length and complexity, but the basic approach has become normal to the point of invisibility. That raises a question about the reasonableness of Amazon’s approach. But it raises a much more important question about our wider approach to merging new technologies into existing social, cultural and legal constructs. This suggests, to put it mildly, that there is room for improvement.

(note that the link is to a conference agenda page rather than directly to the presentation, as that is a 100MB download, but if needed this is the direct link)

Anatomy of an AI System

Kate Crawford and Vladan Joler

An Amazon Echo is a simple device. You ask it to do things, and it does them. Or at least it does something which quite a lot of the time bears some relation to the thing you ask it to do. But of course in order to be that simple, it has to be massively complicated. This essay, accompanied by an amazing diagram (or perhaps better to say this diagram, accompanied by an explanatory essay), is hard to describe and impossible to summarise. It’s a map of the context and antecedents which make the Echo possible, covering everything from rare earth geology to the ethics of gathering training data.

It’s a story told in a way which underlines how much seemingly inexorable technology in fact depends on social choices and assumptions, where invisibility should not be confused with inevitability. In some important ways, though, invisibility is central to the business model – one aspect of which is illustrated in the next post.

MIT taught a neural network how to show its work

Tristan Greene – The Next Web

One of the many concerns about automated decision making is its lack of transparency. Particularly (but by no means only) for government services, accountability requires not just that decisions are well based, but that they can be challenged and explained. AI black boxes may be efficient and accurate, but they are not accountable or transparent.

This is an interesting early indication that those tensions might be reconciled. It’s in the special – and much researched – area of image recognition, so a long way from a general solution, but it’s encouraging to see systematic thought being applied to the problem.

Table of Disruptive Technologies

Richard Watson and Anna Cupani – Imperial Tech Foresight

Here are a hundred disruptive technologies, set out in waves of innovation, with time to ubiquity on one axis and potential for disruption on the other. On that basis, smart nappies appear in the bottom left corner, as imminent and not particularly disruptive (though perhaps that depends on just how smart they are and on who is being disrupted), while towards the other end of the diagonal we get to transhuman technologies – and then who knows what beyond.

The authors are firm that this is scientific foresight, not idle futurism, though that’s an assertion which doesn’t always stand up to close scrutiny. Planetary colonisation is further into the future than implantable phones, but will apparently be less disruptive when it comes. Dream recording falls into the distant future category (rather than fringe science, where it might appear more at home), rather oddly on the same time scale as fusion power but three levels of disruption higher.

The table itself demonstrates that dreams are powerful. But perhaps not quite that powerful. And it’s a useful reminder, yet again, that technology change is only ever partly about the technology, and is always about a host of other things as well.

The Fast-Follower Strategy for Technology in Government

David Eaves and Ben McGuire – Governing

Governments should move slowly and try not to break things. That’s a suggestion slightly contrary to the fashionable wisdom in some quarters, but it has some solid reasoning behind it. There are good reasons for governments not to be leading edge adopters – government services should work; innovation is not normally a necessary counter to existential threats; service users are not able to trade stability for excitement.

That’s not an argument against innovation, but it is an argument for setting pace and risk appropriately. As a result, this post argues, the skills government needs have less to do with cutting edge novelty, and much more to do with identifying and adopting innovations from elsewhere.

Self-Driving Cars Are Not the Future

Paris Marx – Medium

If you fall into the trap of thinking that technology-driven change is about the technology, you risk missing something important. No new technology arrives in a pristine environment: there are always complex interactions with the existing social, political, cultural, economic, environmental and no doubt other contexts. This post is a polemic challenging the inevitability – and practicality – of self-driving cars, drawing very much on that perspective.

The result is something which is interesting and entertaining in its own right, but which also makes a wider point. Just as it’s not technology that’s disrupting our jobs, it’s not technology which determines how self-driving cars disrupt our travel patterns and land use. And over and over again, the hard bit of predicting the future is not the technology but the sociology.

Too Many Projects

Rose Hollister and Michael Watkins – Harvard Business Review

The hardest bit of strategy is not thinking up the goal and direction in the first place. It’s not even identifying the set of activities which will move things in the desired direction. The hardest bit is stopping all the things which aren’t heading in that direction, or which distract attention and energy from the most valuable activities. Stopping things is hard. Stopping things which aren’t necessarily failing to do the thing they were set up to do, but are nevertheless not the most important things to be doing, is harder. In principle, it’s easier to stop things before they have started than to rein them in once they have got going, but even that is pretty hard.

In all of that, ‘hard’ doesn’t mean hard in principle: the need, and often the intention, is clear enough. It means instead that observation of organisations, and particularly of larger and older organisations, provides strong reason to think that it’s hard in practice. Finding ways of doing it better is important for many organisations.

This article clearly and systematically sets out what underlies the problem, what doesn’t work in trying to solve it – and offers some very practical suggestions for what does. Practical does not, of course, mean easy. But if we don’t start somewhere, project sclerosis will only get worse.

Spoiler alert – there are now 9 tribes of digital

Catherine Howe – Curious?

The eight tribes of digital (which were once seven) have become nine.

The real value of the tribes – other than that they are the distillation of four years of observation, reflection and synthesis – lies not so much in whether they are definitively right (which pretty self-evidently they aren’t, and can’t be) as in their use as a prompt for understanding why individuals and groups might behave as they do. And of course, the very fact that there can be nine kinds of digital is another way of saying that there is no such thing as digital.