One Small Step for the Web…

Tim Berners-Lee – Medium

Tim Berners-Lee didn’t invent the internet. But he did invent the world wide web, and he does not altogether like what it has become. This post is his manifesto for reversing one of the central power relationships of the web: the giving and taking of data. Instead of giving data to other organisations and having to watch them abuse it, lose it and compromise it, people should keep control of their personal data and allow third parties to see and use it only under their control.

This is not a new idea. Under the names ‘vendor relationship management’ (horrible) and ‘volunteered personal information’ (considerably better but not perfect), the thinking stretches back a decade and more, developing steadily, but without getting much traction. If nothing else, attaching Berners-Lee’s name to it could start to change that, but more substantively it’s clear that there is money and engineering behind this, as well as thoughts and words.

But one of the central problems of this approach from a decade ago also feels just as real today, perhaps more so. As so often with better futures, it’s fairly easy to describe what they should look like, but remarkably difficult to work out how to get there from here. This post briefly acknowledges the problem, but says nothing about how to address it. The web itself is, of course, a brilliant example of how a clear and powerful idea can transform the world without the ghost of an implementation plan, so this may not feel as big a challenge to Berners-Lee as it would to any more normal person. But the web filled what was in many ways a void, while the data driven business models of the modern internet are anything but, and those who have accumulated wealth and power through those models will not go quietly.

It’s nearly ten years since Tim Wu wrote The Master Switch, a meticulous account of how every wave of communications technology has started with dispersed creativity and ended with centralised industrial scale. In 2010, it was possible to treat the question of whether that was also the fate of the internet as still open, though with a tipping point visible ahead. The final sentence of the book sets out the challenge:

If we do not take this moment to secure our sovereignty over the choices our information age has allowed us to enjoy, we cannot reasonably blame its loss on those who are free to enrich themselves by taking it from us in a way history has foretold.

A decade on, the path dependence is massively stronger and will need to be recognised if it is to be addressed. Technological creativity based on simple views of data ownership is unlikely to be enough by itself.

Why Technology Favors Tyranny

Yuval Noah Harari – The Atlantic

The really interesting effects of technology are often the second and third order ones. The invention of electricity changed the design of factories. The invention of the internal combustion engine changed the design of cities. The invention of social media shows signs of changing the design of democracy.

This essay is a broader and bolder exploration of the consequences of today’s new technologies. That AI will destroy jobs is a common argument, that it might destroy human judgement and ability to make decisions is a rather bolder one (apparently a really creative human chess move is now seen as an indicator of potential cheating, since creativity in chess is now overwhelmingly the province of computers).

The most intriguing argument is that new technologies destroy the comparative advantage of democracy over dictatorship. The important difference between the two, it asserts, is not between their ethics but between their data processing models. Centralised data and decision making used to be a weakness; increasingly it is a strength.

There is much to debate in all that, of course. But the underlying point, that those later order effects are important to recognise, understand and address, is powerfully made.

Fading out the Echo of Consumer Protection: An empirical study at the intersection of data protection and trade secrets

Guido Noto La Diega

This is by way of a footnote to the previous post – a bit more detail on one small part of the enormous ecosystem described there.

If you buy an Amazon Echo then, partly depending on what you intend to do with it, you may be required to accept 17 different contracts, amounting to close to 50,000 words, not very far short of the length of a novel. You will also be deemed to be monitoring them all for any changes, and to have accepted any such changes by default.

That may be extreme in length and complexity, but the basic approach has become normal to the point of invisibility. That raises a question about the reasonableness of Amazon’s approach. But it raises a much more important question about our wider approach to merging new technologies into existing social, cultural and legal constructs. This suggests, to put it mildly, that there is room for improvement.

(note that the link is to a conference agenda page rather than directly to the presentation, as that is a 100Mb download, but if needed this is the direct link)

Anatomy of an AI System

Kate Crawford and Vladan Joler

An Amazon Echo is a simple device. You ask it to do things, and it does them. Or at least it does something which quite a lot of the time bears some relation to the thing you ask it to do. But of course in order to be that simple, it has to be massively complicated. This essay, accompanied by an amazing diagram (or perhaps better to say this diagram, accompanied by an explanatory essay), is hard to describe and impossible to summarise. It’s a map of the context and antecedents which make the Echo possible, covering everything from rare earth geology to the ethics of gathering training data.

It’s a story told in a way which underlines how much seemingly inexorable technology in fact depends on social choices and assumptions, where invisibility should not be confused with inevitability. In some important ways, though, invisibility is central to the business model – one aspect of which is illustrated in the next post.

Self-Driving Cars Are Not the Future

Paris Marx – Medium

If you fall into the trap of thinking that technology-driven change is about the technology, you risk missing something important. No new technology arrives in a pristine environment: there are always complex interactions with the existing social, political, cultural, economic, environmental and no doubt other contexts. This post is a polemic challenging the inevitability – and practicality – of self-driving cars, drawing very much on that perspective.

The result is something which is interesting and entertaining in its own right, but which also makes a wider point. Just as it’s not technology that’s disrupting our jobs, it’s not technology which determines how self-driving cars disrupt our travel patterns and land use. And over and over again, the hard bit of predicting the future is not the technology but the sociology.

It’s Not Technology That’s Disrupting Our Jobs

Louis Hyman – The New York Times

This is a good reminder that the development and, even more, the application of technology are always driven by their social, economic and political context. There is a tendency to see technological change as somehow natural and unstoppable, which is dangerous not because it is wholly wrong, but because it is partly right and so can easily be confused with being wholly right.

New technologies cannot be uninvented (usually) or ignored, but how they are developed and deployed is always a matter of choice, even if that choice isn’t always self-evident. This article focuses on the implications for employment, where too often the destruction of jobs is assumed to be both inevitable and undesirable (leaving only the numbers up for debate). But the nature of the change, and the way the benefits of greater efficiency and the costs of disruption and transition are shared out, are all social choices. That’s a very helpful reframing – one which creates the space to ask how we might retain the benefits of traditional employment structures, while adding (rather than substituting) the advantages which come from new ways of working.

Q & A with Ellen Broad – Author of Made by Humans

Ellen Broad – Melbourne University Publishing

Ellen Broad’s new book is high on this summer’s reading list. Both provenance and subject matter mean that confidence in its quality can be high. But while waiting to read it, this short interview gives a sense of the themes and approach. Among many other virtues, Ellen recognises the power of language to illuminate the issues, but also to obscure them. As she says, what is meant by AI is constantly shifting, a reminder of one of the great definitions of technology, ‘everything which doesn’t work yet’ – because as soon as it does it gets called something else.

The book itself is available in the UK, though Amazon only has it in Kindle form (but perhaps a container load of hard copies is even now traversing the globe).

Team Human – mastering the digital

Douglas Rushkoff

This video is twenty minutes of fireworks from Douglas Rushkoff, pushing back on the technocratic view of technology. He is speaking at the recent FutureFest, and starts by describing a long-ago talk he had given called ‘why futurists suck’, thereby establishing his credentials and biting the hand that was feeding him in one simple line.

Once he gets going though, he gets onto the very interesting idea that technological determinism has led to a view that describing the future is essentially an exercise in prediction (and watching him act out a two by two matrix is a joy in itself) – when instead we should see the future as a thing we are creating. The central line, on which much of his argument then hangs, is that ‘We have been trained to see humanity as the problem, and technology as the solution’ – and that that is precisely the wrong way round.

This was more barnstorming than developed argument – but there are some really interesting implications, and there was enough there to suggest that it will be worth looking out for the book of the talk – also called Team Human – when it comes out next year.

Looking at historical parallels to inform digital rights policy

Justine Leblanc – IF

Past performance, it is often said, is not a guide to future performance. That may be sound advice in some circumstances, but is more often than not a sign that people are paying too little attention to history, over too short a period, rather than that there is in fact nothing to learn from the past. To take a random but real example, there are powerful insights to be had on contemporary digital policy from looking at the deployment of telephones and carrier pigeons in the trenches of the First World War.

That may be an extreme example, but it’s a reason why the idea of explicitly looking for historical parallels for current digital policy questions is a good one. This post introduces a project to do exactly that, which promises to be well worth keeping an eye on.

The value of understanding history, in part to avoid having to repeat it, is not limited to digital policy, of course. That’s a reason for remembering the value of the History and Policy group, which is based on “the belief that history can and should improve public policy making, helping to avoid reinventing the wheel and repeating past mistakes.”

Pivoting ‘the book’ from individuals to systems

Pia Waugh – Pipka

It’s a sound generalisation that people do the best they can within the limits of the systems they find themselves in. That best may include pushing at those limits, but even if it does, that doesn’t make them any less real. Two things follow from that. The first is that it is pointless blaming individuals for operating within the constraints of the system. The second is that if you want to change the system, you have to change the system.

That’s not to say that people are powerless or that we can all resign personal and moral accountability. On the contrary, the systems are themselves human constructs and can only be challenged and changed by the humans who are actors within them. That’s where this post comes in, which is in effect a prospectus for a not yet written book. What different systems do changes in social, economic and technological contexts demand? Where are the contradictions which need to be resolved? The book, when it comes, promises to be fascinating; the post is well worth reading in its own right in the meantime.

TV, retail, advertising and cascading collapses

Benedict Evans

The bigger the underlying change, the bigger the second (and higher) order effects. Those effects often get overlooked in looking at the impact of change (and in trying to understand why expected impacts haven’t happened). Benedict Evans has always been good at spotting and exploring the more distant consequences of technology-driven change, for example in his recent piece on ten-year futures. ‘Cascading collapse’ is a good way of putting it: if the long-heralded but slow to materialise collapse of physical retail is beginning to appear, what consequences flow from that?

Today HMRC announced that 92.5% of this year’s tax returns were submitted online. That too has been a slow but inexorable growth, taking twenty years to go from expensive sideshow to near complete dominance. There is more to do to reflect on the cascading collapses that that and other changes will wreak not just on government, but through government to society and the economy more widely.

Future Historians Probably Won’t Understand Our Internet

Alexis Madrigal – The Atlantic

Archiving documents is easy. You choose which ones to keep and put them somewhere safe. Archiving the digital equivalents of those documents throws up different practical problems, but is conceptually not very different. But often, and increasingly, our individual and collective digital footprints don’t fit neatly into that model. The relationships between things and the experience of consuming them become very different, less tangible and less stable. As this article discusses, there is an archive of Twitter in theory, but not in any practical sense, and not one of Facebook at all. And even if there were, the constant tweaking of interfaces and algorithms and increasingly finely tuned individualisation make it next to impossible to get hold of in any meaningful way.

So in this new world, perhaps archivists need to map, monitor and even create both views of the content and records of what it means to experience it. And that will be true not just of social media but increasingly of knowledge management in government and other organisations.

Ten Year Futures

Benedict Evans

Interesting ideas on how to think about the future seem to come in clumps. So alongside Ben Hammersley’s reflections, it’s well worth watching and listening to this presentation of a ten year view of emerging technologies and their implications. The approaches of the two talks are very different, but interestingly, they share the simple but powerful technique of looking backwards as a good way of understanding what we might be seeing when we look forwards.

They also both talk about the multiplier effect of innovation: the power of steam engines is not that they replace one horse, it is that each one replaces many horses, and in doing so makes it possible to do things which would be impossible for any number of horses. In the same way, machine learning is a substitute for human learning, but operating at a scale and pace which any number of humans could not imitate.

This one is particularly good at distinguishing between the maturity of the technology and the maturity of the use and impact of the technology. Machine learning, and especially the way it allows computers to ‘see’ as well as to ‘learn’ and ‘count’, is well along a technology development S-curve, but at a much earlier point of the very different technology deployment S-curve, and the same broad pattern applies to other emerging technologies.


Thinking about the future

Ben Hammersley

This is a video of Ben Hammersley talking about the future for 20 minutes, contrasting the rate of growth of digital technologies with the much slower growth in effectiveness of all previous technologies – and the implications that has for social and economic change. It’s easy to do techno gee-whizzery, but Ben goes well beyond that in reflecting about the wider implications of technology change, and how that links to thinking about organisational strategies. He is clear that predicting the future for more than the very short term is impossible, suggesting a useful outer limit of two years. But even being in the present is pretty challenging for most organisations, prompting the question, when you go to work, what year are you living in?

His recipe for then getting to and staying in the future is disarmingly simple. For every task and activity, ask what problem you are solving, and then ask yourself this question. If I were to solve this problem today, for the first time, using today’s modern technologies, how would I do it? And that question scales: how can new technologies make entire organisations, sectors and countries work better?

It’s worth hanging on for the ten minutes of conversation which follows the talk, in which Ben makes the arresting assertion that the problem is not that organisations which can change have to make an effort to change, it is that organisations which can’t or won’t change must be making a concerted effort to prevent the change.

It’s also well worth watching Benedict Evans’s different approach to thinking about some very similar questions – the two are interestingly different and complementary.

A Sociology of the Smartphone

Adam Greenfield – Longreads

Sometimes the best way of thinking about something completely familiar is to treat it as wholly alien. If you had to explain a smartphone to somebody recently arrived from the 1990s, how would you describe what it is and, even more importantly, what it does?

In a way, that’s what this article is doing, painstakingly describing both the very familiar, and the aspects of its circumstances we prefer not to know – cheap phones have a high human and environmental price. An arresting starting point is to consider what people routinely carried around with them in 2005, and how much of that is now subsumed in a single ubiquitous device.

That’s fascinating in its own right, but it’s also an essential perspective for any kind of strategic thinking about government (or any other) services, for reasons pithily explained by Benedict Evans:

Smartphones are technological marvels. But they are also powerful instruments of sociological change. Understanding them as both is fundamental to understanding them at all.

Technology for the Many: A Public Policy Platform for a Better, Fairer Future

Chris Yiu – Institute for Global Change

This wide ranging and fast moving report hits the Strategic Reading jackpot. It provides a bravura tour of more of the topics covered here than is plausible in a single document, ticking almost every category box along the way. It moves at considerable speed, but without sacrificing coherence or clarity. That sets the context for a set of radical recommendations to government, based on the premise established at the outset that incremental change is a route to mediocrity, that ‘status quo plus’ is a grave mistake.

Not many people could pull that off with such aplomb. The pace and fluency sweep the reader along through the recommendations, which range from the almost obvious to the distinctly unexpected. There is a debate to be had about whether they are the best (or the right) ways forward, but it’s a debate well worth having, for which this is an excellent provocation.


Tales from three disruption “sherpas”

Martin Stewart-Weeks – Public Purpose

This is an artful piece – the first impression is of a slightly unstructured stream of consciousness, but underneath the beguilingly casual style, some great insights are pulled out, as if effortlessly. Halfway down, we are promised ‘three big ideas’, and the fulfilment does not disappoint. The one which struck home most strongly is that we design institutions not to change (or, going further still, the purpose of institutions is not to change). There is value in that – stability and persistence bring real benefits – but it’s then less surprising that those same institutions struggle to adapt to rapidly changing environments. A hint of an answer comes with the next idea: if everything is the product of a design choice, albeit sometimes an unspoken and unacknowledged one, then it is within the power of designers to make things differently.

How do you solve a problem like technology? A systems approach to digital regulation

Rachel Coldicutt – doteveryone

It is increasingly obvious that ways of regulating and controlling digital technologies struggle to keep pace with the technologies themselves. Not only are they ever more pervasive, but their control is ever more consolidated. Regulations – such as the EU cookie consent rules – deal with real problems, but in ways which somehow fail to get to the heart of the issue, and which are circumvented or superseded by fast-moving developments.

This post takes a radical approach to the problem: rather than focusing on specific regulations, might we get to a better place if we take a systems approach, identifying (and nurturing) a number of complementary levers, rather than relying on a single, brittle, rules-based approach? Optimistically, that’s a good way of creating a more flexible and responsive way of integrating technology regulation into wider social and political change. More pessimistically, the coalition of approaches required may be hard to sustain, and is itself very vulnerable to the influence of the technology providers. So this isn’t a panacea – but it is a welcome attempt to ask some of the right questions.

Reclaiming personal data for the common good

Theo Bass – DECODE

The internet runs on personal data. It is the price we pay for apparently free services and for seamless integration. That’s a bargain most have been willing to make – or at least one which we feel we have no choice but to consent to. But the consequences of personal data powering the internet reverberate ever more widely, and much of the value has been captured by a small number of large companies.

That doesn’t just have the effect of making Google and Facebook very rich, it means that other potential approaches to managing – and getting value from – personal data are made harder, or even impossible. This post explores some of the challenges and opportunities that creates – and perhaps more importantly serves as an introduction to a much longer document – Me, my data and I: The future of the personal data economy – which does an excellent job both of surveying the current landscape and of telling a story about how the world might be in 2035 if ideas about decentralisation and personal control were to take hold – and what it might take to get there.