Data and AI, Government and politics

YouTube, the Great Radicalizer

Zeynep Tufekci – New York Times

This article has been getting extensive and well-deserved coverage over the last few days. Essentially, it demonstrates that the YouTube recommendation engine tends to lead to more extreme material, more or less whatever your starting point. In short, “YouTube leads viewers down a rabbit hole of extremism, while Google racks up the ad sales.”

The reason for including it here is not because of the specific algorithm or the specific behaviour it generates. It is because it’s a very clear example of a wider phenomenon. It’s a pretty safe assumption that the observed behaviour is not the result of a cabal of fringe conspirators deep in the secret basements of Google setting out a trail to recruit people into extremist groups or attitudes. The much more obvious motivation is that Google is simply trying to tempt people into spending as long as possible watching YouTube videos, because that’s how it can put the most advertising in front of the most eyeballs.
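To make that mechanism concrete, here is a minimal sketch in Python – a toy, not YouTube’s actual system, whose details are not public – of a recommender that greedily maximises predicted watch time. Extremity appears nowhere in the objective; but if more extreme videos hold attention longer, the optimiser drifts towards them anyway.

```python
# A toy recommender illustrating the incentive, not YouTube's real system.
# Assumption: predicted watch time is the only signal being optimised.

videos = [
    {"title": "Mainstream news clip",   "extremity": 0.1, "predicted_watch_mins": 4.0},
    {"title": "Provocative commentary", "extremity": 0.5, "predicted_watch_mins": 9.0},
    {"title": "Fringe conspiracy",      "extremity": 0.9, "predicted_watch_mins": 14.0},
]

def recommend_next(candidates):
    """Pick whatever keeps the viewer watching longest.

    Extremity never appears in the objective, so the algorithm is
    indifferent to it -- but if extremity and watch time are correlated,
    recommendations drift towards the extreme anyway.
    """
    return max(candidates, key=lambda v: v["predicted_watch_mins"])

print(recommend_next(videos)["title"])  # -> Fringe conspiracy
```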

In other words, algorithmic tools can have radically unintended consequences. That’s made worse in this case because the unintended consequences are not a sign of the intended goal not being achieved; on the contrary, they are the very means by which that goal is being achieved. So it is not just that YouTube has strong incentives not to fix the problem; the problem may not be obvious to the company in the first place.

This is a clear example. But we need to keep asking the same questions about other systems: what are the second order effects, will we recognise them when we see them, and will we be ready to – and able to – address them?

Data and AI, Government and politics

A roadmap for AI: 10 ways governments will change (and what they risk getting wrong)

Geoff Mulgan – NESTA

This is a great summary of where AI stands in the hype cycle. Its focus is the application to government, but most of it is more generally relevant. It’s really helpful in drawing out what ought to be the obvious point that AI is not one thing and that it therefore doesn’t have a single state of development maturity.

The last of the list of ten is perhaps the most interesting. Using AI to apply more or less current rules in more or less current contexts and systems is one thing (and is a powerful driver of change in its own right). But the longer term opportunity is to change the nature of the game. That could be a black box dystopia, but it could instead be an opportunity to break away from incremental change and find more radical opportunities to change the system. But that depends, as this post rightly concludes, on not getting distracted by the technology as a goal in its own right, but focusing instead on what better government might look like.

Data and AI

UK police are using AI to make custodial decisions – but it could be discriminating against the poor

Matt Burgess – Wired

In abstract, AI is a transformational technology. It may bring perfect and rigorous decision analysis, sweeping away human foibles. Or it may displace human sensitivity and judgement – and indeed the humans themselves – and usher in an era of opaque and arbitrary decision making.

This article, which focuses on the introduction of AI to Durham Constabulary, is a good antidote to those caricature extremes. Reality is, as ever, messier than that. Predictability and accountability are not straightforward. Humans tend to revert, perhaps unwisely, to confidence in their own judgements. It is not clear that some kinds of data are appropriately used in prediction models at all (though the black boxes of human brains are equally problematic). In short, the application of AI to policing decisions isn’t simple and clear cut; it is instead a confused and uncertain set of policy problems. That shouldn’t be surprising.

Data and AI, Future of work

How AI will transform the Digital Workplace (and how it already is)

Sharon O’Dea – Intranetizen

AI is often written about in terms of sweeping changes resulting in the wholesale automation of tasks and jobs. But as this post sets out, there is also a lower key version, where forms of AI appear as feature enhancements (and thus may not be apparent at all). Perhaps self-generating to-do lists are the real future – though whether that will be experienced as liberation or enslavement is very much a matter of taste. Either way, AI won’t be experienced as robots breaking into the building to take our jobs; instead tasks will melt away, enhanced in ways which never quite feel revolutionary.

Data and AI

10 Principles for Public Sector use of Algorithmic Decision Making

Eddie Copeland – NESTA

This is a really interesting attempt to set out a set of regulatory principles for the use of algorithms in the public sector. It brings what can easily be quite an abstract debate down to earth: we can talk about open data and open decision making, but what actually needs to be open (and to whom) to make that real?

The suggested principles mostly look like a sensible starting point for debate. Two of them though seem a little problematic, one trivially, the other much more significantly. The trivial one is principle 9, that public sector organisations should insure against errors, which isn’t really a principle at all, though the provision of compensation might be. The important one is principle 5, “Citizens must be informed when their treatment has been informed wholly or in part by an algorithm”. On the face of it, that’s innocuous and reasonable. Arguably though, it’s the equivalent of having a man with a red flag walking in front of a car. Government decisions are already either based on algorithms (often called “laws” or “regulations”) or they are based on human judgements, likely to be more opaque than any computer algorithm. Citizens should absolutely be entitled to an explanation and a justification for any decision affecting them – but the means by which the decision at issue was made should have no bearing on that right.
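The observation that a regulation is already an algorithm is easy to make concrete. Here is a hypothetical eligibility rule – the thresholds are invented for illustration – written as a Python function: whether such a rule lives in a statute book or in a codebase, the citizen’s entitlement to an explanation ought to be exactly the same.

```python
# A hypothetical benefit rule with invented thresholds -- structurally
# indistinguishable from any other decision algorithm.

def eligible_for_benefit(age: int, annual_income: float, savings: float) -> bool:
    """Encodes the kind of rule a statute states in prose."""
    return age >= 65 and annual_income < 12_000 and savings < 6_000

# The decision is explainable regardless of where the rule is written down.
print(eligible_for_benefit(age=70, annual_income=9_500, savings=2_000))  # -> True
```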

Data and AI

Will We Own AIs, or Will They Own Us?

Tim Weber – Techonomy

‘Neither’ seems to be the answer to the question posed by the title. But if one thing AIs do is filter the world for us, the question of who does the filtering and in whose interest they do it becomes very important. As with other free services, free algorithms will be provided in expectation of a benefit to somebody, and that somebody may very well not be the end user. So far so unexceptional (and putting it under the heading of AI doesn’t change the substance of an issue which has been around a good while). But if this is a problem, what are the pressures and processes which will work to relieve it rather than reinforce it? Here, the argument rather fades away: we are told we need clear laws and well-accepted procedures to regulate AI, but there is little suggestion here about what they would say or how we would get to them. It’s slightly unfair to single this piece out for what is quite a common problem: when challenges are technology driven but solutions need to be socially driven, it’s a lot easier to talk about the first than the second.

Data and AI

AI in a week

Laura Caccia – Oxford Insights

This post is just a teaser, but a teaser for something interesting in both style and substance. Starting next Monday, we are promised a five-day intensive course on artificial intelligence. AI is for many people at the stage where there is lots of fragmented insight and understanding, but little which brings the fragments together to form a coherent whole. So there is a gap here well worth filling – and this looks to be a neat low key way of filling it. Watch that space.

Data and AI

The black box is a state of mind

Kathrin Passig – Eurozine

The idea of the black box pervades a lot of thinking and writing about AI. Mysterious algorithms do inscrutable things which impinge on people’s lives in inexplicable ways. That is alarming in its own right, but doubly so because this is new and uncharted territory. Except that, as this post painstakingly points out, it’s not actually new at all. People have been writing software whose outputs they could not predict from its inputs for pretty much as long as they have been writing software at all – in a sense, that’s precisely the point of it. And if you want to look at it that way, the ultimate black box is the human brain, where the evidence that we don’t understand the reasons for our own decisions, never mind anybody else’s, is pretty overwhelming.

The need for precision at one level – software doesn’t cope well with typos and syntax errors – doesn’t translate into precision at a higher level, of understanding what that precisely written software will actually do. That thought came from Marvin Minsky in 1967, but people had been writing about black boxes for years before that, when the complexity of software was a tiny fraction of what is normal now.
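Minsky’s point can be illustrated in a few lines. The logistic map below is specified with complete syntactic precision – an example of mine, not one from the article – yet at r = 4 it is chaotic, and the only practical way to know where it ends up is to run it.

```python
# A precisely specified program whose behaviour resists prediction:
# the logistic map x -> r * x * (1 - x) is chaotic at r = 4.

def logistic_trajectory(x, r=4.0, steps=100):
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

# Two starting points that agree to nine decimal places...
a = logistic_trajectory(0.200000000)
b = logistic_trajectory(0.200000001)

# ...end up nowhere near each other: reading the source gives you the
# rule exactly, but not what the program will do.
print(a, b, abs(a - b))
```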

The fact that this is neither new nor newly recognised doesn’t in itself change the nature of the challenge. What it does perhaps suggest, though, is that strategies developed for coping with these uncertainties in the past may well still be relevant for the future.

Data and AI

The Latest Data Privacy Debacle

Zeynep Tufekci – The New York Times

Issues of data aggregation and de-anonymisation are hardly new, but there’s nothing like a good example to make an issue more visible – and secret US bases revealed through aggregated data from fitness trackers are about as good as it gets.
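The underlying mechanism is simple enough to sketch. This toy example (invented coordinates, nothing to do with Strava’s actual pipeline) shows how location pings that are individually innocuous combine, once aggregated, into a hotspot that no single user ever consented to disclose.

```python
from collections import Counter

# Invented GPS pings from many users' fitness trackers (lat, lon).
# Each ping on its own reveals little; the aggregate tells a story.
pings = [
    (34.52, 69.18), (34.52, 69.18), (34.52, 69.18),  # many runs at one spot
    (34.52, 69.18), (34.52, 69.18),
    (51.50, -0.12), (48.85, 2.35), (40.71, -74.00),  # scattered city runs
]

# Round to a coarse grid and count activity per cell.
grid = Counter((round(lat, 1), round(lon, 1)) for lat, lon in pings)

# Any cell far busier than the background stands out -- the aggregate
# discloses a location pattern that no individual ping did.
hotspots = [cell for cell, n in grid.items() if n >= 5]
print(hotspots)  # -> [(34.5, 69.2)]
```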

The real issue though is less such revelations and more the implications for data and privacy more generally. This article argues powerfully that to see this as an issue of individuals and clickthrough privacy policies is to miss a very important point. People can’t consent to the ways their personal data will be used and the risks that carries, because service providers don’t and can’t understand those things themselves, and so can’t explain them in a way which makes consent meaningful. That has some important data policy implications, including much stronger liability for data breaches, and keeping the amount of data captured and held to a minimum in the first place. Those are not new suggestions, of course, so as ever the real question is not how the risks could be managed better, but how incentives can be aligned to ensure that the risks are in fact managed. And that is a political and social problem, not a technical one.

Data and AI

Artificial Intelligence is a Horseless Carriage

Terence Eden

A whimsical twitter thread of etymological onion peeling, now crystallised into a blog post, results in a splendid definition of AI. Starting with ‘complicated algorithms running on very fast computers’, we end up with AI helpfully described as

The method by which an old Persian magician uses counting stones, to move other stones, by way of amber resin, such that a casual observer thinks the stones are moving themselves.

Data and AI, Democracy

Personal Data Representatives: An Idea

Tom Steinberg – Medium

In dealing with digital services – indeed in dealing with organisations generally – power is very asymmetric. Amazon does not invite you to negotiate the terms on which it is prepared to sell you things (though of course you retain the power not to buy). Digital services and apps give the illusion of control (let’s think about whether to accept these cookies…) but have developed a habit and a reputation for helping themselves to data and making their own judgement about what to do with it. That’s not necessarily because individual consumers can’t control permissions; it’s also because the cost and complexity of doing so make it burdensome. Tom Steinberg brings a potential solution to that problem: what if we had somebody negotiating all that on our behalf – could that asymmetry be addressed? Typically, he recognises the difficulties as well as the potential, but even if the answers are hard, the question is important.

Data and AI

How to Build Self-Conscious Artificial Intelligence

Hugh Howey – Wired

The title of this article is a bit of a red herring, since it could easily have continued ‘… and why that would be a really bad idea’. It is, though, an interesting – and considerably longer – complement to the argument that the idea of general artificial intelligence is based on a false analogy between human brains and computers. This article takes the related but distinct approach that self-consciousness exists as compensation for structural faults in human brains, driven in particular by the fact that having a sophisticated theory of mind is a useful evolutionary trait. It would be pointless (rather than impossible) to replicate that in a machine, because perhaps the most notable thing about human introspection about consciousness is how riddled it is with error and self-contradiction. That being so, AI will continue to get more powerful and sophisticated. But it won’t become more human, because it makes no sense to make it so.

Curation, Data and AI

204 examples of artificial intelligence in action

Chris Yiu

This is a page of links which provides over two hundred examples of artificial intelligence in action – ranging from mowing the lawn, through managing hedge funds and sorting cucumbers all the way to writing AI software. Without clicking a single one of the links, it provides a powerful visual indicator of how pervasive AI has already become. There is inevitably a bit of a sense of never mind the quality, feel the width – but the width is itself impressive, and the quality is often racing up as well.

There is a linked twitter account which retweets AI-related material – though in a pleasing inversion, it shows every sign of being human-curated.

Data and AI

Ethics and ethicists

Ellen Broad

This is a short tweet thread making the point that ethics in AI – and in technology generally – needs to be informed by ethical thinking developed in other contexts (and over several millennia). That should be so obvious as to be hardly worth saying, but it has often become one of those questions which people doing new things fall into the trap of believing themselves to be solving for the first time.

Data and AI, Future of work

Why the robot boost is yet to arrive

Tim Harford – the Undercover Economist

One of the problems with predicting the future is working out when it’s going to happen. That’s not quite as silly as it sounds: there is an easy assumption that the impact of change follows closely on the change itself, but that assumption is often wrong. That in turn can lead to the equally wrong assumption that because there has been limited impact in the short term, the impact will be equally limited in the long term. As Robert Solow famously put it in 1987, ‘You can see the computer age everywhere but in the productivity statistics.’ In this post, Tim Harford updates the thought from computers to robots. The robot takeover isn’t obviously consistent with high employment and low productivity growth, but that is what we can currently observe. The conclusion – and the resolution of the paradox – is disarmingly simple, if rather frustrating: wait and see.

Data and AI, Future of work

Don’t believe the hype: work, robots, history

Michael Weatherburn – Resolution Foundation

This post introduces a longer paper which takes the idea of understanding the future by reflecting on the past to a new level. The central argument is that digital technologies have been influencing and shaping the industry sectors it examines for a long time, and that that experience strongly suggests that the more dramatic current forecasts about the impact of technology on work are overblown.

The paper’s strengths come from its historical perspective – and, unusually for this topic, from being written by a historian. It is very good on the underlying trends driving changing patterns of work and service delivery, and on distinguishing those trends from their visible emanations in web services. It does though sweep a lot of things together under the general heading of ‘the internet’ in a way which doesn’t always add to understanding – the transformation of global logistics driven by ERP systems is very different from the creation of the gig economy in both cause and effect.

The paper is less good at providing strong enough support for its main conclusion to justify making it the report’s title. It is true that the impacts of previous technology-driven disruptions have been slower to manifest themselves and less dramatic than contemporary hype expected. But the fact that hype is premature does not indicate that the underlying change is insubstantial – the railway mania of the 1840s was not a sign that the impact of railways had peaked. It is also worth considering seriously whether this time it’s different – not because it necessarily is, but because the fact that it hasn’t been in the past is a reason to be cautious, not a reason to be dismissive.

Data and AI

What we talk about when we talk about fair AI

Fionntán O’Donnell – BBC News Labs


This is an exceptionally good non-technical overview of fairness, accountability and transparency in AI. Each issue in turn is systematically disassembled and examined. It is particularly strong on accountability, bringing out clearly that it can only rest on human agency and social and legal context. ‘My algorithm made me do it’ has roughly the same moral and intellectual depth as ‘a big boy made me do it’.

I have one minor, but not unimportant, quibble about the section on fairness. The first item on the suggested checklist is ‘Does the system fit within the company’s ethics?’ That is altogether too narrow a formulation, both in principle and in practice. It’s wrong in practice because there is no particular reason to suppose that a company’s (or any other organisation’s) ethics can be relied on to impose any meaningful standards. But it’s also wrong in principle: the relevant scope of ethical standards is not the producers of an algorithm, but the much larger set of people who use it or have it applied to them.

But that’s a detail. Overall, the combination of clear thinking and practical application makes this well worth reading.

Data and AI

Machine learning, defined

Sarah Jamie Lewis

There’s a whole emerging literature summarised in those words. But it underlines how much of the current debate is still as much about what machine learning is as what it does.

Data and AI

The impossibility of intelligence explosion

François Chollet – Medium

Last week, there was another flurry of media coverage for AI, as Google’s AlphaZero went from knowing nothing of chess beyond the rules to beating the current (computer) world champion in less than a day. And that inevitably prompts assumptions that very specific domain expertise will somehow translate into ever accelerating general intelligence, until humans become pets of the AI, if they are suffered to live at all.

This timely article systematically debunks that line of thought, demonstrating that intelligence is a social construct and arguing that it is in many ways a property of our civilization, not of each of us as individuals within it. Human IQ (however flawed a measure that is) does not correlate with achievement, let alone with world domination, beyond a fairly narrow range – raw cognition, it seems, is far from being the only relevant component of intelligence.

Or in a splendid tweet-length dig at those waiting expectantly for the singularity:

Data and AI, Social and economic change, Strategy

Thinking about the future

Ben Hammersley

This is a video of Ben Hammersley talking about the future for 20 minutes, contrasting the rate of growth of digital technologies with the much slower growth in effectiveness of all previous technologies – and the implications that has for social and economic change. It’s easy to do techno gee-whizzery, but Ben goes well beyond that in reflecting on the wider implications of technology change, and how that links to thinking about organisational strategies. He is clear that predicting the future for more than the very short term is impossible, suggesting a useful outer limit of two years. But even being in the present is pretty challenging for most organisations, prompting the question: when you go to work, what year are you living in?

His recipe for then getting to and staying in the future is disarmingly simple. For every task and activity, ask what problem you are solving, and then ask yourself: if I were to solve this problem today, for the first time, using today’s modern technologies, how would I do it? And that question scales: how can new technologies make entire organisations, sectors and countries work better?

It’s worth hanging on for the ten minutes of conversation which follows the talk, in which Ben makes the arresting assertion that the problem is not that organisations which can change have to make an effort to change; it is that organisations which can’t or won’t change must be making a concerted effort to prevent the change.

It’s also well worth watching Ben Evans’s different approach to thinking about some very similar questions – the two are interestingly different and complementary.