Digital By Default, Lonely By Design

Rich Denyer-Bewick – CitizensOnline

We are better connected than ever before, through a bewildering array of devices and networks. And loneliness is an acute problem, undermining wellbeing and health. This post both explores that paradox and focuses more directly on its implications for the design of public services.

There is an apparently happy alignment between the improvements to quality which come from putting services online and the consequential efficiency savings which accrue to hard-pressed public sector delivery organisations. But the reduction in human interaction which follows is a fundamental and deliberate feature of the new service design. It surely can’t be right that an occasional conversation with a harried bureaucrat will stave off the adverse effects of loneliness – but it is always worth remembering that making services more impersonal is likely disproportionately to affect those who are most vulnerable and most in need of support.

Why do so many digital transformations fail?

Michael Graber – Innovation Excellence

This short post asks a question which falls to be answered all too often. The answer it gives is that failure comes from the misperception that the most important thing about digital transformation is that it is digital:

Digital transformations are actually transformations of mindset, business model, culture, and operations. These are people problems, in the main, not technology issues.

AI, work and ‘outcome-thinking’

Richard Susskind – The British Academy Review

The debate about the scale of the impact of automation on employment rumbles on. Opinions vary enormously both on the numbers and types of jobs affected and on the more esoteric question of whether jobs or tasks are the more useful unit of measurement.

This short article neatly sidesteps that debate altogether. Its focus is on outcomes, the things we want to achieve. They will remain unchanged even as the means of achieving them changes radically. So the core question is not whether the way humans achieve the outcome can be replicated by robots and AI, but rather whether there is an alternative – and perhaps very different – way of achieving the same outcome in a way which is optimised for machines, not people.

Framing the question that way does two things. The first is that it brings some much needed clarity to a complex issue. The second is that all of us who have been congratulating ourselves on our irreplaceability need to start worrying much sooner than we might have thought.

Internet-era ways of working

Tom Loosemore – Public Digital

This is a deceptively simple list which describes ways of working in internet-era organisations. The GDS design principles are clearly among its antecedents, but this is a broader and deeper approach, setting out how to make things work – and work well – in organisations. It’s hard to argue with the thrust of the advice given here, and in any case it ends with an admonition to do sensible things in the right way rather than stick rigidly to the rules.

That doesn’t put the approach beyond criticism, in both detail and principle, though it does have the happy consequence that challenge and consequent improvement are themselves part of the model being advocated. With that starting point, there are a couple of places where a further iteration could improve things.

One is the instruction to treat data as infrastructure. The thought behind that is a good one: data matters, and it matters that it is managed well. Well ordered data is part of infrastructure at every level from the national (and international) downwards. But data is also part of the superstructure. Managing, processing, and creating value out of data are fundamental to the purpose and activities of organisations. Both aspects need to be understood and integrated.

A more subtle issue is that while it might be clear what counts as good internet-era ways of working, much of that work happens in organisations which are barely of the internet era at all. Precisely because it does challenge established approaches, established power structures and established infrastructure of every kind, the path to adoption is far from straightforward. Looked at in that light, this list is oddly impersonal: it is couched in the imperative, but without being clear who the orders are addressed to. There is a dimension of behavioural and organisational change which never quite makes it to the centre of the narrative, but which for organisations which are not native to the internet era is critically important.

None of that is a reason for not following the advice given here. But some of it might be part of the explanation of why it needs to be given in the first place.


Show Me Your Data and I’ll Tell You Who You Are

Sandra Wachter – Oxford Internet Institute

The ethical and legal issues around even relatively straightforward, objectively factual personal data are complicated enough. But they seem simple beside the further complexity brought in by inferences derived from that data. Inferences are not new, of course: human beings have been drawing inferences about each other since long before they had the assistance of machines. But as in other areas, big data makes a big difference.

Inferences are tricky for several reasons. The ownership of an inference is clearly something different from ownership of the information from which the inference is drawn (even supposing that it is meaningful to talk about ownership in this context at all). An inference is often a propensity, which can be wrong without being falsifiable – ‘people who do x tend to like y’ may remain true even if I do x and don’t like y. And all that gets even more tricky over time – ‘people who do x tend to become y in later life’ can’t even be denied or contradicted at the individual level.

This lecture explores those questions and more, examining them at the intersection of law, technology and ethics – and then asks what rights we, as individuals, should have about the inferences which are made about us.

The same arguments are also explored in a blog post written by Wachter with her collaborator Brent Mittelstadt and in very much more detail in an academic paper, also written with Mittelstadt.

How solid is Tim’s plan to redecentralize the web?

Irina Bolychevsky – Medium

As a corollary to the comment here a few weeks back on Tim Berners-Lee’s ideas for shifting the power balance of the web away from data-exploiting conglomerates and back towards individuals, this post is a good clear-headed account of why his goal – however laudable – may be hard to achieve in practice.

What makes it striking and powerful is that it is not written from the perspective of somebody critical of the approach. On the contrary, it is by a long-standing advocate of redecentralising the internet, but one who has a hard-headed appreciation of what would be involved. It is a good critique, for example addressing the need to recognise that data does not map perfectly to individuals (and therefore what data counts as mine is nowhere near as straightforward as might be thought) and that for many purposes the attributes of the data, including the authority with which it is asserted, can be as important as the data itself.

One response to that and other problems could be to give up on the ambition for change in this area, and leave control (and thus power) with the incumbents. Instead, the post takes the more radical approach of challenging current assumptions about data ownership and control at a deeper level, arguing that governments should be providing the common, open infrastructure which would allow very different models of data control to emerge and flourish.

Clearing the fog: Using outcomes to focus organisations

Kate Tarling and Matti Keltanen – Medium

This post is a deep and thoughtful essay on why large organisations struggle to find a clear direction and to sustain high quality delivery. At one level the solution is disarmingly simple: define what success looks like, work out how well the organisation is configured to deliver that success, and change the configuration if necessary – but in the meantime, since reconfiguration is slow and hard, be systematic and practical at developing and working through change.

If it were that easy, of course, everybody would have done it by now and all large organisations would be operating in a state of near perfection. Simple observation tells us that that is not the case, and simple experience tells us that it is not at all easy to fix. This post avoids the common trap of suggesting a simple – often simplistic – single answer, but instead acknowledges the need to find ways of moving forward despite the aspects of the organisational environment which hold things back. Even more usefully, it sets out an approach for doing that in practice based on real (and no doubt painful) experience.

If there were a weakness in this approach, it would be in appearing to underestimate some of the behavioural challenges, partly because the post notes, but doesn’t really address, the different powers and perspectives which come from different positions. The options – and frustrations – of a chief executive or board member are very different from those elsewhere in the organisation who may feel some of the problems more viscerally but find it harder to identify points of leverage to change things. The argument that in the absence of structures aligned to outcomes and goals we should fall back to alignment around purpose is a strong one, but the challenge of even achieving the fallback shouldn’t be underestimated.

It’s a pretty safe bet though that anybody struggling to find ways of helping large organisations to become fully effective will find ideas and insights here which are well worth reflecting on.

The art and practice of intelligence design

Martin Stewart-Weeks – Public Purpose

Geoff Mulgan has written a book about the power of collective intelligence. Martin Stewart-Weeks has amplified and added to Geoff’s work by writing a review. And now this note may spread attention and engagement a little further.

That is a ridiculously trite introduction to a deeply serious book. Spreading, amplifying, challenging and engaging with ideas and the application of those ideas are all critically important, and it’s hard to imagine serious disagreement with the proposition that it’s the right thing to do. But the doing of it is hard, to put it mildly. More importantly, that’s only one side of the driving problem: how do unavoidably collective problems get genuinely collective solutions? And in the end, that question is itself just such a problem, demanding just such a solution. Collectively, we need to find it. It’s well worth reading the book, but this review is a pretty good substitute.

Payday loans and the missed opportunity?

Jerry Fishenden – ntouk

Don’t be misled by the title: this isn’t really a post about payday loans. Instead, it explores the fascinating contrast between the approaches HMRC (for tax) and DWP (for benefits) have taken to opening their services to third parties. The basic story is pretty simple: HMRC has a long pre-internet history of working with third party intermediaries which it carried forward into its thinking about online services (at one stage its ambition was not to offer an online tax return service directly at all); DWP’s history is much more about direct delivery, and that tradition has similarly been carried forward into the online world. The post makes no pretence to neutrality on the central question of which was the better choice: HMRC is clearly seen to have won that argument hands down.

The post is good on the advantages of the open method and the opportunities that could create (including the ethical payday loans of the title). But it doesn’t address the fairly central question of whether there is a reason for the difference. After all, HMRC’s administration of tax credits, which are a benefit in everything but name, didn’t get the same open treatment as their revenue-raising lines of business. The question of whether a version of HMRC’s trust model for taxpayers and their agents could be translated to the benefits system is one well worth further reflection.

Why Futurism Has a Cultural Blindspot

Tom Vanderbilt – Nautilus

This post is more a string of examples than a fully constructed argument but is none the worse for that. The thread which holds the examples together is an important one: predicting the future goes wrong because we misunderstand behaviour, not because we misunderstand technology.

A couple of points stand out. One is the mismatch between social change and technology change: the shift of technology into the workplace turned out to be much easier to predict than the movement of women into the workplace. That’s a specific instance of the more general point that we both under- and over-predict the future. A second is that we over-weight the innovative in thinking about the future (and about the past and present); as Charlie Stross describes it, the near future comprises three parts: 90% of it is just like the present, 9% is new but foreseeable developments and innovations, and 1% is utterly bizarre and unexpected.

None of that is a reason for abandoning attempts to think about the future. But the post is a strong – and necessary – reminder of the need to keep in mind the biases and distortions which all too easily skew the attempt.

Real-time government

Richard Pope – Platform Land

New writing from Richard Pope is always something to look out for: he has been thinking about and doing the intersection of digital and government more creatively and for longer than most. This post is about the myriad ways in which government is not real time – you can’t track the progress of your benefit claim in anything like the way in which you can track your Amazon delivery. And conversely, at any given moment, Amazon has a very clear picture of who its active customers are and what they are doing, in a way which is rather less true of operators of government services.

He is absolutely right to make the point that many services would be improved if they operated – or at least conveyed information – in real time, and he is just as right that converted (rather than transformed) paper processes and overnight batch updates account for some of that. So it shouldn’t detract from his central point to note that some of his examples are slightly odd ones, which may come from an uncharacteristic confusion between real time and event triggered. There is a notification to potential school leavers of their new national insurance number – but since children’s sixteenth birthdays are highly predictable, that notification doesn’t need to be real time in the sense meant here. It was very useful to be told that my passport was about to expire – but since they were helpfully giving me three months’ notice, the day and the hour of the message were pretty immaterial.

Of course there are government services which should operate on less leisurely cycles than that, and of course those services should be as fast and as transparent as they reasonably can be. But perhaps the real power of real-time government is from the other side, less in shortening the cycle times of service delivery and much more in shortening the cycle times of service improvement.

The Ultimate Guide to Making Smart Decisions

Shane Parrish – Farnam Street

Who could not want not just any guide to making smart decisions, but the ultimate guide? That’s a big promise, but there is some substance to what is delivered. The post itself briskly covers categories of bad decisions before moving on to extensive sets of links to material on thinking in general and decision making in particular. I can’t imagine anyone wanting to work through all of that systematically, but if you need a way of homing in on an aspect of or approach to the subject, this could be a very good place to start.

The four types of strategy work you need for the digital revolution

Josef Oehmen – LSE Business Review

The world is probably not crying out for another 2×2 typology of strategy, but nevertheless still they come. This one is interesting less for its cells than for its axes. Degree of uncertainty is fairly standard, but degree of people impact is slightly more surprising. The people in question are those within the organisation being strategised about – is the relevant change marginal to business as usual, are jobs and careers at risk, how much emotional stress can be expected? All those are good questions, of course, and the approach is certainly a good counter to the tendency to see people as machine components in change, and then to be surprised when they turn out not to be. But it risks muddling up two rather different aspects of the people impact of strategy – those who conceive of the strategy and execute its projects on one hand, and those who are affected by it on the other – and raises the bigger question of whether an internal people focus is the best way of understanding strategy in the first place. And the answer to that feels more likely to be situational than universal.

Perhaps though it is the matrix itself which gets slightly in the way of understanding. This is not an argument that organisations choose or discover which cell to be in or by what route to move between them. Instead:

Our impression was that the most successful companies had learned to execute activities in all four quadrants, all the time, and had robust processes for managing the transition of an activity from one quadrant to the other.

One Small Step for the Web…

Tim Berners-Lee – Medium

Tim Berners-Lee didn’t invent the internet. But he did invent the world wide web, and he does not altogether like what it has become. This post is his manifesto for reversing one of the central power relationships of the web, the giving and taking of data. Instead of giving data to other organisations and having to watch them abuse it, lose it and compromise it, people should keep control of their personal data and allow third parties to see and use it only under their control.

This is not a new idea. Under the names ‘vendor relationship management’ (horrible) and ‘volunteered personal information’ (considerably better but not perfect), the thinking stretches back a decade and more, developing steadily, but without getting much traction. If nothing else, attaching Berners-Lee’s name to it could start to change that, but more substantively it’s clear that there is money and engineering behind this, as well as thoughts and words.

But one of the central problems of this approach from a decade ago also feels just as real today, perhaps more so. As so often with better futures, it’s fairly easy to describe what they should look like, but remarkably difficult to work out how to get there from here. This post briefly acknowledges the problem, but says nothing about how to address it. The web itself is, of course, a brilliant example of how a clear and powerful idea can transform the world without the ghost of an implementation plan, so this may not feel as big a challenge to Berners-Lee as it would to any more normal person. But the web filled what was in many ways a void, while the data driven business models of the modern internet are anything but, and those who have accumulated wealth and power through those models will not go quietly.

It’s nearly ten years since Tim Wu wrote The Master Switch, a meticulous account of how every wave of communications technology has started with dispersed creativity and ended with centralised industrial scale. In 2010, it was possible to treat the question of whether that was also the fate of the internet as still open, though with a tipping point visible ahead. The final sentence of the book sets out the challenge:

If we do not take this moment to secure our sovereignty over the choices our information age has allowed us to enjoy, we cannot reasonably blame its loss on those who are free to enrich themselves by taking it from us in a way history has foretold.

A decade on, the path dependence is massively stronger and will need to be recognised if it is to be addressed. Technological creativity based on simple views of data ownership is unlikely to be enough by itself.

How to be Strategic

Julie Zhuo – Medium

This is a post which earns itself a place here just by its title, though that’s not all that can be said in its favour. It doesn’t start very promisingly, setting up the shakiest of straw men in order to knock them down – does anybody really think that ‘writing long documents’ is a good test of being strategic? – but it improves after the first third, to focus much more usefully on doing three things which actually make for good strategy. As the post acknowledges, the suggestions are very much in the spirit of Richard Rumelt’s good and bad strategy approach. So you can read the book, read Rumelt’s HBR article which is an excellent summary of the book, or read this post. Rumelt’s article is probably the best of the three, but this shorter and simpler post isn’t a bad alternative starting point.

Is Estonia the Silicon Valley of digital government?

Rainer Kattel and Ines Mergel – UCL Institute for Innovation and Public Purpose

The story of how Estonia became the most e of e-governments is often told, but often pretty superficially and often with an implied – or even explicit – challenge to everybody else to measure themselves and their governments against the standard set by Estonia and despair. This post provides exactly the context which is missing from such accounts: Estonia is certainly the result of visionary leadership, which at least in principle could be found anywhere, but it is also the result of some very particular circumstances which can’t simply be copied or assumed to be universal. There is also a hint of the question behind Solow’s paradox: the real test is not the implementation of technology, but the delivery of better outcomes.

None of that is to knock Estonia’s very real achievements, but yet again to make clear that the test of the effectiveness of technology is not a technological one.

10 questions to answer before using AI in public sector algorithmic decision making

Eddie Copeland – NESTA

A few months ago, Eddie Copeland shared 10 Principles for Public Sector use of Algorithmic Decision Making. They later apparently morphed into twenty questions to address, and now the twenty have been slimmed down to ten. They are all good questions, but one very important one seems to be missing – how can decisions based on the algorithm be challenged? (and what, therefore, do people affected by a decision need to understand about how it was reached?)


Why Technology Favors Tyranny

Yuval Noah Harari – The Atlantic

The really interesting effects of technology are often the second and third order ones. The invention of electricity changed the design of factories. The invention of the internal combustion engine changed the design of cities. The invention of social media shows signs of changing the design of democracy.

This essay is a broader and bolder exploration of the consequences of today’s new technologies. That AI will destroy jobs is a common argument, that it might destroy human judgement and ability to make decisions is a rather bolder one (apparently a really creative human chess move is now seen as an indicator of potential cheating, since creativity in chess is now overwhelmingly the province of computers).

The most intriguing argument is that new technologies destroy the comparative advantage of democracy over dictatorship. The important difference between the two, it asserts, is not between their ethics but between their data processing models. Centralised data and decision making used to be a weakness; increasingly it is a strength.

There is much to debate in all that, of course. But the underlying point, that those later order effects are important to recognise, understand and address, is powerfully made.

Identifiers and data sharing

Leigh Dodds

This post – which is actually a set of tweets gathered together – is a beautifully short and simple explanation of why some basic stuff really matters in efficiently integrating data and the services it supports (and is actually quite important as well in ensuring that things don’t get joined up which shouldn’t be). Without common identifiers, simple and value adding connections get difficult, expensive and unreliable – a point powerfully made in a post linked from that one which sets out a bewildering array of unique identifiers for property in the UK – definitely unique in the sense that there is a one to one mapping between identifier and place, but ludicrously far from unique in their proliferation.
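The point about identifiers can be sketched in a few lines of Python. This is purely illustrative, with made-up records and a hypothetical UPRN-style key: with a shared identifier the join is a trivial lookup, without one we fall back on matching free-text addresses, which breaks on the smallest formatting difference.

```python
# Two records for the same property, held by different organisations.
# The key is a hypothetical UPRN-style identifier; the data is invented.
land_registry = {"UPRN-100023336956": {"tenure": "freehold"}}
council_tax = {"UPRN-100023336956": {"band": "D"}}

# With a common identifier, joining the two datasets is a simple lookup
# over the keys both sides share.
joined = {
    uprn: {**land_registry[uprn], **council_tax[uprn]}
    for uprn in land_registry.keys() & council_tax.keys()
}
# joined now holds both tenure and band against the one identifier.

# Without a common identifier, matching falls back on free-text addresses,
# which fails on trivial differences in formatting and abbreviation.
addr_a = "1 High Street, Anytown"
addr_b = "1, High St, ANYTOWN"
same_place = addr_a == addr_b  # False, even though both describe one property
```

The one-to-one mapping the post describes is exactly what makes the first join cheap and reliable; the proliferation of different-but-unique identifiers forces every connection down the second, unreliable path.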

There is a huge appetite for making more effective use of data. The appetite needs to be as strong for creating the conditions which make that possible.