Kate Crawford and Vladan Joler
An Amazon Echo is a simple device. You ask it to do things, and it does them. Or at least it does something which quite a lot of the time bears some relation to the thing you ask it to do. But of course in order to be that simple, it has to be massively complicated. This essay, accompanied by an amazing diagram (or perhaps better to say this diagram, accompanied by an explanatory essay), is hard to describe and impossible to summarise. It’s a map of the context and antecedents which make the Echo possible, covering everything from rare earth geology to the ethics of gathering training data.
It’s a story told in a way which underlines how much seemingly inexorable technology in fact depends on social choices and assumptions, where invisibility should not be confused with inevitability. In some important ways, though, invisibility is central to the business model – one aspect of which is illustrated in the next post.
Paris Marx – Medium
If you fall into the trap of thinking that technology-driven change is about the technology, you risk missing something important. No new technology arrives in a pristine environment: there are always complex interactions with the existing social, political, cultural, economic, environmental and no doubt other contexts. This post is a polemic challenging the inevitability – and practicality – of self-driving cars, drawing very much on that perspective.
The result is something which is interesting and entertaining in its own right, but which also makes a wider point. Just as it’s not technology that’s disrupting our jobs, it’s not technology which determines how self-driving cars disrupt our travel patterns and land use. And over and over again, the hard bit of predicting the future is not the technology but the sociology.
Louis Hyman – The New York Times
This is a good reminder that the development and, even more, the application of technology are always driven by their social, economic and political context. There is a tendency to see technological change as somehow natural and unstoppable, which is dangerous not because it is wholly wrong, but because it is partly right and so can easily be confused with being wholly right.
New technologies cannot be uninvented (usually) or ignored, but how they are developed and deployed is always a matter of choice, even if that choice isn’t always self-evident. This article focuses on the implications for employment, where too often the destruction of jobs is assumed to be both inevitable and undesirable (leaving only the numbers up for debate). But the nature of the change, the accrual of the benefits of greater efficiency and of the costs of disruption and transition are all social choices. That’s a very helpful reframing – which creates the space to ask how we might retain the benefits of traditional employment structures, while adding (rather than substituting) the advantages which come from new ways of working.
Ellen Broad – Melbourne University Publishing
Ellen Broad’s new book is high on this summer’s reading list. Both provenance and subject matter mean that confidence in its quality can be high. But while waiting to read it, this short interview gives a sense of the themes and approach. Among many other virtues, Ellen recognises the power of language to illuminate the issues, but also to obscure them. As she says, what is meant by AI is constantly shifting, a reminder of one of the great definitions of technology, ‘everything which doesn’t work yet’ – because as soon as it does it gets called something else.
The book itself is available in the UK, though Amazon only has it in Kindle form (but perhaps a container load of hard copies is even now traversing the globe).
This video is twenty minutes of fireworks from Douglas Rushkoff, pushing back on the technocratic view of technology. He is speaking at the recent FutureFest, and starts by describing a long-ago talk he had given called ‘why futurists suck’, thereby establishing his credentials and biting the hand that was feeding him in one simple line.
Once he gets going though, he gets onto the very interesting idea that technological determinism has led to a view that describing the future is essentially an exercise in prediction (and watching him act out a two by two matrix is a joy in itself) – when instead we should see the future as a thing we are creating. The central line, on which much of his argument then hangs, is that ‘We have been trained to see humanity as the problem, and technology as the solution’ – and that that is precisely the wrong way round.
This was more barnstorming than developed argument – but there are some really interesting implications, and there was enough there to suggest that it will be worth looking out for the book of the talk – also called Team Human – when it comes out next year.
Justine Leblanc – IF
Past performance, it is often said, is not a guide to future performance. That may be sound advice in some circumstances, but is more often than not a sign that people are paying too little attention to history, over too short a period, rather than that there is in fact nothing to learn from the past. To take a random but real example, there are powerful insights to be had on contemporary digital policy from looking at the deployment of telephones and carrier pigeons in the trenches of the First World War.
That may be an extreme example, but it’s a reason why the idea of explicitly looking for historical parallels for current digital policy questions is a good one. This post introduces a project to do exactly that, which promises to be well worth keeping an eye on.
The value of understanding history, in part to avoid having to repeat it, is not limited to digital policy, of course. That’s a reason for remembering the value of the History and Policy group, which is based on “the belief that history can and should improve public policy making, helping to avoid reinventing the wheel and repeating past mistakes.”
Pia Waugh – Pipka
It’s a sound generalisation that people do the best they can within the limits of the systems they find themselves in. That best may include pushing at those limits, but even if it does, that doesn’t make them any less real. Two things follow from that. The first is that it is pointless blaming individuals for operating within the constraints of the system. The second is that if you want to change the system, you have to change the system.
That’s not to say that people are powerless or that we can all resign personal and moral accountability. On the contrary, the systems are themselves human constructs and can only be challenged and changed by the humans who are actors within them. That’s where this post comes in, which is in effect a prospectus for a not yet written book. What different systems do changes in social, economic and technological contexts demand, and where are the contradictions which need to be resolved? The book, when it comes, promises to be fascinating; the post is well worth reading in its own right in the meantime.
The bigger the underlying change, the bigger the second (and higher) order effects. Those effects often get overlooked in looking at the impact of change (and in trying to understand why expected impacts haven’t happened). Benedict Evans has always been good at spotting and exploring the more distant consequences of technology-driven change, for example in his recent piece on ten-year futures. ‘Cascading collapse’ is a good way of putting it: if the long-heralded but slow to materialise collapse of physical retail is beginning to appear, what consequences flow from that?
Today HMRC announced that 92.5% of this year’s tax returns were submitted online. That too has been a slow but inexorable growth, taking twenty years to go from expensive sideshow to near complete dominance. There is more to do to reflect on the cascading collapses that that and other changes will wreak not just on government, but through government to society and the economy more widely.
Alexis Madrigal – The Atlantic
Archiving documents is easy. You choose which ones to keep and put them somewhere safe. Archiving the digital equivalents of those documents throws up different practical problems, but is conceptually not very different. But often, and increasingly, our individual and collective digital footprints don’t fit neatly into that model. The relationships between things and the experience of consuming them become very different, less tangible and less stable. As this article discusses, there is an archive of Twitter in theory, but not in any practical sense, and not one of Facebook at all. And even if there were, the constant tweaking of interfaces and algorithms and increasingly finely tuned individualisation make it next to impossible to get hold of in any meaningful way.
So in this new world, perhaps archivists need to map, monitor and even create both views of the content and records of what it means to experience it. And that will be true not just of social media but increasingly of knowledge management in government and other organisations.
Interesting ideas on how to think about the future seem to come in clumps. So alongside Ben Hammersley’s reflections, it’s well worth watching and listening to this presentation of a ten year view of emerging technologies and their implications. The approaches of the two talks are very different, but interestingly, they share the simple but powerful technique of looking backwards as a good way of understanding what we might be seeing when we look forwards.
They also both talk about the multiplier effect of innovation: the power of steam engines is not that they replace one horse, it is that each one replaces many horses, and in doing so makes it possible to do things which would be impossible for any number of horses. In the same way, machine learning is a substitute for human learning, but operating at a scale and pace which any number of humans could not imitate.
This one is particularly good at distinguishing between the maturity of the technology and the maturity of the use and impact of the technology. Machine learning, and especially the way it allows computers to ‘see’ as well as to ‘learn’ and ‘count’, is well along a technology development S-curve, but at a much earlier point of the very different technology deployment S-curve, and the same broad pattern applies to other emerging technologies.
This is a video of Ben Hammersley talking about the future for 20 minutes, contrasting the rate of growth of digital technologies with the much slower growth in effectiveness of all previous technologies – and the implications that has for social and economic change. It’s easy to do techno gee-whizzery, but Ben goes well beyond that in reflecting about the wider implications of technology change, and how that links to thinking about organisational strategies. He is clear that predicting the future for more than the very short term is impossible, suggesting a useful outer limit of two years. But even being in the present is pretty challenging for most organisations, prompting the question, when you go to work, what year are you living in?
His recipe for then getting to and staying in the future is disarmingly simple. For every task and activity, ask what problem you are solving, and then ask yourself this question. If I were to solve this problem today, for the first time, using today’s modern technologies, how would I do it? And that question scales: how can new technologies make entire organisations, sectors and countries work better?
It’s worth hanging on for the ten minutes of conversation which follows the talk, in which Ben makes the arresting assertion that the problem is not that organisations which can change have to make an effort to change, it is that organisations which can’t or won’t change must be making a concerted effort to prevent the change.
It’s also well worth watching Benedict Evans’s different approach to thinking about some very similar questions – the two are interestingly different and complementary.
Adam Greenfield – Longreads
Sometimes the best way of thinking about something completely familiar is to treat it as wholly alien. If you had to explain a smartphone to somebody recently arrived from the 1990s, how would you describe what it is and, even more importantly, what it does?
In a way, that’s what this article is doing, painstakingly describing both the very familiar, and the aspects of its circumstances we prefer not to know – cheap phones have a high human and environmental price. An arresting starting point is to consider what people routinely carried around with them in 2005, and how much of that is now subsumed in a single ubiquitous device.
That’s fascinating in its own right, but it’s also an essential perspective for any kind of strategic thinking about government (or any other) services, for reasons pithily explained by Benedict Evans:
Smartphones are technological marvels. But they are also powerful instruments of sociological change. Understanding them as both is fundamental to understanding them at all.
Chris Yiu – Institute for Global Change
This wide-ranging and fast-moving report hits the Strategic Reading jackpot. It provides a bravura tour of more of the topics covered here than is plausible in a single document, ticking almost every category box along the way. It moves at considerable speed, but without sacrificing coherence or clarity. That sets the context for a set of radical recommendations to government, based on the premise established at the outset that incremental change is a route to mediocrity, that ‘status quo plus’ is a grave mistake.
Not many people could pull that off with such aplomb. The pace and fluency sweep the reader along through the recommendations, which range from the almost obvious to the distinctly unexpected. There is a debate to be had about whether they are the best (or the right) ways forward, but it’s a debate well worth having, for which this is an excellent provocation.
Laura Gardiner – Resolution Foundation
This is a good post in both form and function: a complex and important policy area, neatly summarised in a set of well chosen charts, mixing objective and attitudinal data, and quietly prompting some very big strategic questions.
Martin Stewart-Weeks – Public Purpose
This is an artful piece – the first impression is of a slightly unstructured stream of consciousness, but underneath the beguilingly casual style, some great insights are pulled out, as if effortlessly. Halfway down, we are promised ‘three big ideas’, and the fulfilment does not disappoint. The one which struck home most strongly is that we design institutions not to change (or, going further still, the purpose of institutions is not to change). There is value in that – stability and persistence bring real benefits – but it’s then less surprising that those same institutions struggle to adapt to rapidly changing environments. A hint of an answer comes with the next idea: if everything is the product of a design choice, albeit sometimes an unspoken and unacknowledged one, then it is within the power of designers to make things differently.
Rachel Coldicutt – doteveryone
It is increasingly obvious that ways of regulating and controlling digital technologies struggle to keep pace with the technologies themselves. Not only are they ever more pervasive, but their control is ever more consolidated. Regulations – such as the EU cookie consent rules – deal with real problems, but in ways which somehow fail to get to the heart of the issue, and which are circumvented or superseded by fast-moving developments.
This post takes a radical approach to the problem: rather than focusing on specific regulations, might we get to a better place if we take a systems view, identifying (and nurturing) a number of complementary interventions, rather than relying on a single, brittle, rules-based approach? Optimistically, that’s a good way of creating a more flexible and responsive way of integrating technology regulation into wider social and political change. More pessimistically, the coalition of approaches required may be hard to sustain, and is itself very vulnerable to the influence of the technology providers. So this isn’t a panacea – but it is a welcome attempt to ask some of the right questions.
Theo Bass – DECODE
The internet runs on personal data. It is the price we pay for apparently free services and for seamless integration. That’s a bargain most have been willing to make – or at least one which we feel we have no choice but to consent to. But the consequences of personal data powering the internet reverberate ever more widely, and much of the value has been captured by a small number of large companies.
That doesn’t just have the effect of making Google and Facebook very rich, it means that other potential approaches to managing – and getting value from – personal data are made harder, or even impossible. This post explores some of the challenges and opportunities that creates – and perhaps more importantly serves as an introduction to a much longer document – Me, my data and I: The future of the personal data economy – which does an excellent job both of surveying the current landscape and of telling a story about how the world might be in 2035 if ideas about decentralisation and personal control were to take hold – and what it might take to get there.
It’s tempting to try to understand social change in terms of generations – and it is a temptation widely succumbed to. Millennials are pitted against baby boomers, generation X is succeeded by generation Y, lost generations are found again, and stereotypes abound. This article and an earlier more detailed one are an attempt to challenge that framing. A part of that is recognising that people criticise younger generations for the same faults which their elders once ascribed to them; the more interesting part is challenging the idea that there are shared experiences which are best understood in terms of generations. That’s not to say of course that there are no social and economic changes to which governments and others need to respond. But it is a useful reminder that focusing on differences between generations tends to obscure differences within them (and differences which aren’t generational at all).
John Quiggin – Inside Story
All the signs that you would expect to see in the labour market and wider economy if robots were displacing jobs are absent: productivity is not growing rapidly, labour turnover is not going up, and employment remains high.
That’s not to say, of course, that automation isn’t happening – and Surowiecki is careful not to say it – or that what has happened up to now is an infallible guide to what will happen in the future. But this article does contribute to the recognition that technological progress, the social and economic adoption of that progress, and the wider impact of that adoption are all very different things, potentially with very significant lags between them. That perspective is now coming through more strongly elsewhere as well – which should mean that the debate can be more balanced.
James Surowiecki – Wired
Does the power of big data combined with location awareness result in our being supported by butlers or harassed by stalkers? There’s a fine line (or perhaps not such a fine line) between being helpful and being intrusive. Quite where it will be drawn is a function of commercial incentives, consumer responses and legal constraints (not least the new GDPR). In the public sector, the balance of those forces may well be different, but versions of the same factors will be in play. All of that, of course, is ultimately based on how we answer the question of whose data it is in the first place and whether we will switch much more to sharing state information rather than the underlying data.
Nicola Millard – mycustomer