Data and AI

The wired brain: How not to talk about an AI-powered future

It’s tempting to think that artificial intelligence is on a trajectory to becoming like human intelligence except, well, artificial. But just as vacuum cleaners aren’t motorised brooms, as the futurists of 1900 imagined, AI isn’t about automated humans: self-driving cars don’t have a robot sitting behind the steering wheel; they solve the driving problem very differently. So we need to find better questions before we can assume we are finding better answers – and this post argues that we will be better off doing that by avoiding misleading analogies with brains and robots, and indeed avoiding artificial intelligence as a label.

Ines Montani

Data and AI Service design

Making it clear when machines make decisions

Most writing about algorithmic decision making is at a very high level, and often implies that a very wide range of decisions and processes will be affected in very similar ways. This article – originally a submission to a select committee inquiry on the subject – takes a more granular approach, looking at how different approaches might be appropriate for different aspects of a hypothetical National Benefits Service, with an emphasis on ensuring that it is always clear how a decision has been reached, as well as what that decision is.

Matthew Sheret – Projects by IF

Data and AI

Our Machines Now Have Knowledge We’ll Never Understand

The transition from systems based on explicit rules to systems based on emergent algorithms is much more than a new generation of technology. It raises questions about what it is for humans to know things and about how we decide whether the output of a system is right or wrong (and whether that is even a meaningful question). That may sound esoteric and abstract, but it is vitally important to any organisation which operates rules-based processes and to any person who may be the subject of such processes – which is pretty much everybody. David Weinberger has been writing clearly and thoughtfully about knowledge and technology for a long time; this article ranges from Galileo to Heidegger, and from flood control to credit scores, to get at some important issues about how we understand and use the technology of the future.

David Weinberger – Backchannel

Data and AI

Cars and second order consequences

Working out the implications of technological change is hard. It’s hard because guessing which technologies will mature when is not straightforward more than a short time ahead; it’s even harder because the real impact comes from second and third order effects which may not be immediately (or at all) obvious just from understanding the technology itself. This post explores the implications of autonomous vehicles for the design of the vehicles themselves and the roads they run on, as well as for land use, employment, in-car entertainment and murder investigations – interesting not just as a case study, but as a way of thinking about these kinds of uncertainties.

Benedict Evans

Data and AI

Governments are recklessly putting their heads in the sand about automation

Is the sudden surge of interest in automation a sign that we are on the cusp of major change, or is it another fad which will blow over, leaving everything pretty much unchanged? There are some good reasons for thinking it’s more the former than the latter, but we can’t know for sure. This is partly an argument that people slip too easily into the less threatening assumption, but perhaps more importantly it is about the need to plan for uncertainty rather than assuming it away.

Martin Bryant – Medium

Data and AI Service design

Baidu’s Artificial Intelligence Lab Unveils Synthetic Speech System

Communicating with computers by natural speech is a dream which goes back to Star Trek and 2001 (and well beyond, but this is not a history of science fiction). Recently, there have been clear advances in how computers understand humans – Alexa, Siri and their friends, as well as the new levels of call centre hell. But computers speaking to humans still sound robotic, because they are chaining words together and the intonation never sounds quite right. But now that’s changing too, with speech being constructed on the computationally intensive fly. And that may be important not just in its own right but as a step towards more fundamental changes in how people and machines interact with each other.

MIT Technology Review

Data and AI

How to Hypnotise an Artificial Intelligence

This is fiction. Sort of. One of the problems with artificial intelligence is that if it is trained on real world human data, it will reflect the prejudices, foibles and distortions of real world humans. And if it isn’t trained that way, its usefulness in the real world will be pretty limited. But what if that training could be deliberately gamed?

Terence Eden

Data and AI

A Hippocratic Oath for AI developers? It may only be a matter of time

Developing artificial intelligence is often seen as essentially a technical problem. But it also raises difficult ethical issues about accountability, transparency, discrimination and privacy. Does that mean that the developers of such systems should be subject to some form of Hippocratic oath? That seems both to be an important question and a rather naive one. In the month where we learned that Uber deployed software to evade regulatory oversight, it’s clear that this is about organisations and their culture and about social norms and expectations as much as it is about the behaviour of individual developers.

Benedict Dellot – RSA

Data and AI Service design

Data and service design

A great set of slides on the need for data-driven service design – a set of pithy one-liners, but adding up to a powerful manifesto for doing things differently.

Kit Collingwood – Service Design in Government

Data and AI

How much faith should we have in data?

An excellent talk by the chief executive of the Open Data Institute, reflecting on how to increase our safeguards against algorithmic bias in big data applications.

Jeni Tennison – ODI Fridays

Data and AI Future of work Innovation

JPMorgan Software Does in Seconds What Took Lawyers 360,000 Hours

A telling example of the kinds of work automation is now reaching: automated interpretation of complex legal documents removing the need for skilled human scrutiny. Also interesting for its focus on technology innovation – high levels of investment and explicit recognition that some initiatives will fail.

Hugh Son – Bloomberg

Data and AI Future of work

Would life be better if robots did all the work?

A Socratic dialogue on Radio 4, exploring the ethical issues around the automation of work. In a world where so much social, as well as economic, value comes from work, what happens if the humans aren’t needed any more? And would that be an improvement (and if so, for whom)?

Michael Sandel – The Public Philosopher

Data and AI Service design

In praise of cash

Does paying for things by card (and phone and watch and…) represent liberation from the need to carry coins around and enable faster, simpler transactions? Or is it a dangerous slide towards the privatisation of money and the advent of universal financial surveillance? And, most importantly it seems, can you get a coke from a machine when you want one?

Brett Scott – Aeon

Data and AI

Machine intelligence makes human morals more important

Zeynep Tufekci is a computer programmer turned sociologist, whose book Twitter and Tear Gas is coming out in a couple of months. This TED talk is the video parallel of Cathy O’Neil’s book, Weapons of Math Destruction. The core point is the same – that machines we don’t understand, trained on imperfect data, are as likely to be amplifying human biases as to be embodying objectivity.

Zeynep Tufekci – TED

Data and AI

Weapons of Math Destruction

A polemic against the misuse of big data models by a reformed hedge fund quant – the book’s subtitle, ‘how big data increases inequality and threatens democracy’, is a pretty good indicator of what is to come. Using examples from policing to insurance and teacher evaluation, she shows that the underlying models often encode and reinforce prejudices, rather than being the embodiment of objectivity often claimed for them. It’s very US focused, both in its examples and in its style (a halfway decent copy editor could easily make it a third shorter), but it’s a good, simple and readable introduction to some important issues.

Cathy O’Neil – Weapons of Math Destruction