Q & A with Ellen Broad – Author of Made by Humans

Ellen Broad – Melbourne University Publishing

Ellen Broad’s new book is high on this summer’s reading list. Both provenance and subject matter mean that confidence in its quality can be high. But for those still waiting to read it, this short interview gives a sense of the themes and approach. Among many other virtues, Ellen recognises the power of language to illuminate the issues, but also to obscure them. As she says, what is meant by AI is constantly shifting, a reminder of one of the great definitions of technology, ‘everything which doesn’t work yet’ – because as soon as it does it gets called something else.

The book itself is available in the UK, though Amazon only has it in Kindle form (but perhaps a container load of hard copies is even now traversing the globe).

7 Steps to Data Transformation

Edwina Dunn – Starcount

Edwina Dunn is one of the pioneers of data science and this short paper is the distillation of more than twenty years’ experience of using meticulous data analysis to understand and respond to customers – most famously in the form of the Tesco Clubcard. It is worth reading both for some pithy insights – data is art as well as science – and, more unexpectedly, for what feels like a slightly dated approach. “Data is the new oil” may be true in the sense that it is a transformational opportunity, with Zuckerberg as the new Rockefeller, but data is not finite, it is not destroyed by use and it is not fungible. More tellingly, she makes the point that ‘Owning the customer is not a junior or technical role; it’s one of the most important differentiators of future winners and losers.’ You can see what she means, but shopping at a supermarket is not supposed to be a form of slavery – and still less (if that were possible) is ‘owning the customer’ a good way of thinking about the users of public services.

It doesn’t sound as though the Cluetrain Manifesto has been a major influence on this school of thought. Perhaps it should be.

Basic instincts

Matthew Hutson – Science

This article is an interesting complement to one from last week which argued that AI is harder than you think. It builds a related argument from a slightly different starting point: that big-data-driven approaches to artificial intelligence have been demonstrably powerful in the short term, but may never break through to produce general problem-solving skills. That’s because there is no solution in sight to the problem of creating common sense – which turns out not to be very common at all. Humans possess some basic instincts which are hard-coded into us and might need to be hard-coded into AI as well – but to do so would be to cut across the self-learning approach to AI which now dominates. If there is reason to think that babies can make judgements and distinctions which elude current AI, perhaps AI has more to learn from babies than babies from AI.

Robot Future

XKCD

A pithy but important reminder that the autonomy of AI is not what we should most worry about. Computers are ultimately controlled by humans and do what humans want them to do. Understanding the motivation of the humans will be more important than attempting to infer the motivation of the robots for a good while to come.

A.I. Is Harder Than You Think

Gary Marcus and Ernest Davis – New York Times

Coverage of Google’s recent announcement of a conversational AI which can sort out your restaurant bookings for you has largely taken one of two lines. The first is about the mimicry of human speech patterns: is it ethical for computers to um and er in a way which can only be intended to deceive their interlocutors into thinking that they are dealing with a real human being, or should it always be made clear, by specific announcement or by robotic tones, that a computer is a computer? The second – which is where this article comes in – positions this as being on the verge of artificial general intelligence: today a conversation about organising a haircut, tomorrow one about the meaning of life. That is almost completely fanciful, and this article is really good at explaining why.

It does so in part by returning to a much older argument about computer intelligence. For a long time, the problem of AI was treated as a problem of finding the right set of rules which would generate a level of behaviour we would recognise as intelligent. More recently that has been overtaken by approaches based on extracting and replicating patterns from big data sets. That approach has been more visibly successful – but those successes don’t in themselves tell us whether they are steps towards a universal solution or a brief flourishing within what turns out to be a dead end. Most of us can only be observers of that debate – but we can guard against getting distracted by potential not yet realised.

We can’t have nice things (yet)

Alex Blandford – Medium

Data is a word which conjures up images of objectivity and clarity. It lives in computers and supports precise binary decisions.

Except, of course, none of that is true, or at least none of it is reliably true, especially the bit about supporting decisions. Decisions are framed by humans, and the data which supports them is as much social construct as it is an emergent property of reality. That means that the role of people in curating data and the decision making it supports is vital, not just in constructing the technology, but in managing the psychology, sociology and anthropology which frame them. Perhaps that’s not a surprising conclusion in a post written by an anthropologist, but that doesn’t make it any less right.

Understanding Algorithms

Tim Harford

Tim Harford recommends some books about algorithms. There’s not much more to be said than that – except perhaps to follow up on one of the implications of Prediction Machines, the book which is the main focus of the post.

One way of looking at artificial intelligence is as a tool for making predictions. Good predictions reduce uncertainty. Really good predictions may change the nature of a problem altogether. In a different sense, the purpose of strategy can also be seen as a way of reducing uncertainty: by making some choices (or bets), other choices drop out of the problem space. Putting those two thoughts together suggests that better AI may be a tool to support better strategies.

AI in the UK: ready, willing and able?

House of Lords Select Committee on Artificial Intelligence

There is something slightly disconcerting about reading a robust and comprehensive account of public policy issues in relation to artificial intelligence in the stately prose style of a parliamentary report. But the slightly antique structure shouldn’t get in the way of seeing this as a very useful and systematic compendium.

The strength of this approach is that it covers the ground systematically and is very open about the sources of the opinions and evidence it uses. The drawback, oddly, is that the result is a curiously unpolitical document – mostly sensible recommendations are fired off in all directions, but there is little recognition, still less assessment, of the forces in play which might result in the recommendations being acted on. The question of what needs to be done is important, but the question of what it would take to get it done is in some ways even more important – and is one a House of Lords committee might be expected to be well placed to answer.

One of the more interesting chapters is a case study of the use of AI in the NHS. What comes through very clearly is that there is a fundamental misalignment between the current organisational structure of the NHS and any kind of sensible and coherent use – or even understanding – of the data it holds and of the range of uses, from helpful to dangerous, to which it could be put. That’s important not just in its own right, but as an illustration of a much wider issue of institutional design noted by Geoff Mulgan.

The Risk of Machine-Learning Bias (and How to Prevent It)

Chris DeBrusk – MIT Sloan Management Review

This article is a good complement to the previous post, providing some pragmatic rigour on the risk of bias in machine learning and ways of countering it. Perhaps the most important point is one of the simplest:

It is safe to assume that bias exists in all data. The question is how to identify it and remove it from the model.

There is some good practical advice on how to do just that. But there is an obvious corollary: if human bias is endemic in data, it risks being no less endemic in attempts to remove it. That’s not a counsel of despair; this is an area where good intentions really do count for something. But it does underline the importance of being alert to the converse: unless it is clear that bias has been thought about and countered, the probability is high that it still remains. And of course it will be hard to calibrate the residual risk, whatever its level might be, particularly for the individual on the receiving end of the computer saying ‘no’.
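By way of illustration only – this sketch is not from the article, and the data, group labels and outcomes are all invented – one of the simplest starting points for identifying bias is to compare a model’s outcomes across groups:

```python
# Toy bias check: compare a model's positive-prediction rate across two groups.
# A large gap is a prompt to investigate the data and the model, not proof of bias.

def selection_rates(predictions, groups):
    """Return the rate of positive predictions (1s) for each group label."""
    rates = {}
    for group in set(groups):
        group_preds = [p for p, g in zip(predictions, groups) if g == group]
        rates[group] = sum(group_preds) / len(group_preds)
    return rates

# Invented example: 1 = favourable outcome, 0 = unfavourable, one group label per case.
predictions = [1, 1, 0, 1, 1, 0, 1, 0, 0, 0]
groups      = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

rates = selection_rates(predictions, groups)
disparity = max(rates.values()) - min(rates.values())
print(rates)                          # {'a': 0.8, 'b': 0.2} with the invented data above
print(f"disparity: {disparity:.2f}")  # a large disparity is a signal to dig further
```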

Computer Says No: Part 1 Algorithmic Bias and Part 2 Explainability

These two (of a planned three) posts take an interesting approach to the ethical problems of algorithmic decision making, resulting in a much more optimistic view than that of most who write on this subject. They are very much worth reading even though the arguments don’t seem quite as strong as they are made to appear.

Part 1 essentially sidesteps the problem of bias in decision making by asserting that automated decision systems don’t actually make decisions (humans still mostly do that), but should instead be thought of as prediction systems – and the test of a prediction system is in the quality of its predictions, not in the operations of its black box. The human dimension is a bit of a red herring, as it’s not hard to think of examples where in practice the prediction outputs are all the decision maker has to go on, even if in theory the system is advisory. More subtly, there is an assumption that prediction quality can easily be assessed and an assertion that machine predictions can be made independent of the biases of those who create them, both of which are harder problems than the post implies.

The second post goes on to address explainability, with the core argument being that it is a red herring (an argument Ed Felten has developed more systematically): we don’t really care whether a decision can be explained, we care whether it can be justified, and the source of justification is in its predictive power, not in the detail of its generation. There are two very different problems with that. One is that not all individual decisions are testable in that way: if I am turned down for a mortgage, it’s hard to falsify the prediction that I wouldn’t have kept up the payments. The second is that the thing in need of explanation may be different for AI decisions from that for human decisions. The recent killing of a pedestrian by an autonomous Uber car illustrates the point: it is alarming precisely because it is inexplicable (or at least so far unexplained), but whatever went wrong, it seems most unlikely that a generally low propensity to kill people will be thought sufficiently reassuring.

None of that should be taken as a reason for not reading these posts. Quite the opposite: the different perspective is a good challenge to the emerging conventional wisdom on this and is well worth reflecting on.

Data as photography

Ansel Adams, adapted by Wesley Goatley

“A visualisation is usually looked at – seldom looked into.” – Ansel Adams

“The sheer ease with which we can produce a superficial visualisation often leads to creative disaster.” – Ansel Adams

“There’s nothing worse than a sharp visualisation of a fuzzy concept.” – Ansel Adams

“You don’t collect a data set, you make it.” – Ansel Adams

“There are always two people in every data visualisation: the creator and the viewer.” – Ansel Adams

“To make art with data truthfully and effectively is to see beneath the surfaces.” – Ansel Adams

“A great data visualisation is a full expression of what one feels about what is being visualised in the deepest sense, and is, thereby, a true expression of what one feels about life in its entirety.” – Ansel Adams

“Data visualisation is more than a medium for factual communication of ideas. It is a creative art.” – Ansel Adams

“We must remember that a data set can hold just as much as we put into it, and no one has ever approached the full possibilities of the medium.” – Ansel Adams

“Data art, as a powerful medium… offers an infinite variety of perception, interpretation and execution.” – Ansel Adams

“Twelve significant data points in any one year is a good crop.” – Ansel Adams

The idea that the camera does not lie is as old as photography. It has been untrue for just as long.

The exposure of film or sensor to light may be an objective process, but everything which happens before and after that is malleable and uncertain. There are some interesting parallels with data in that: the same appearance – and assertion – of accurately representing the real world, the same issues of both deliberate and unwitting distortion.

This tweet simply takes some of the things Ansel Adams, the great photographer of American landscapes, has written about photography and adapts them to be about data. It’s neatly done and provides good food for thought.

Don’t believe the hype about AI in business

Vivek Wadhwa – VentureBeat

If you want to know why artificial intelligence is like teenage sex, this is the post to read. After opening with that arresting comparison, the article goes on to make a couple of simple but important points. Most real world activities are not games with pre-defined rules and spaces. And for businesses – and arguably still more so for governments – it is critically important to be able to explain and account for decisions and outcomes. More pragmatically, it also argues that competitive advantage in the deployment of AI goes to those who can integrate many sets of disparate data to form a coherent set to which AI can be applied. Most companies – and, again, perhaps even more so most governments – are not very good at that. That might be the biggest challenge of all.

YouTube, the Great Radicalizer

Zeynep Tufekci – New York Times

This article has been getting extensive and well-deserved coverage over the last few days. Essentially, it demonstrates that the YouTube recommendation engine tends to lead to more extreme material, more or less whatever your starting point. In short, “YouTube leads viewers down a rabbit hole of extremism, while Google racks up the ad sales.”

The reason for including it here is not the specific algorithm or the specific behaviour it generates, but that it is a very clear example of a wider phenomenon. It’s a pretty safe assumption that the observed behaviour is not the work of a cabal of fringe conspirators deep in the secret basements of Google, laying a trail to recruit people into extremist groups or attitudes. The much more obvious motivation is that YouTube is simply trying to tempt people into spending as long as possible watching videos, because that is the way to put the most advertising in front of the most eyeballs.

In other words, algorithmic tools can have radically unintended consequences. That’s made worse in this case because the unintended consequences are not a sign of the intended goal not being achieved; on the contrary, they are the very means by which that intended goal is being achieved. So it is not just that YouTube has strong incentives not to fix the problem; the problem may not even be obvious to them in the first place.

This is a clear example. But we need to keep asking the same questions about other systems: what are the second order effects, will we recognise them when we see them, and will we be ready to – and able to – address them?
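As a toy illustration of that watch-time dynamic – the numbers and ‘extremity’ scores below are invented, and this is emphatically not YouTube’s actual system – a recommender that does nothing but maximise predicted watch time can drift steadily towards more extreme content:

```python
# Toy recommender (invented model, not YouTube's): always pick the video with the
# highest predicted watch time, where the prediction assumes viewers linger longest
# on content slightly more extreme than whatever they have just watched.

def predicted_watch_time(extremity, current_position):
    # Assumed attention model: watch time peaks just beyond the viewer's current position.
    return 1.0 - abs(extremity - (current_position + 0.1))

def recommend(current_position, catalogue):
    # The optimisation objective is watch time; drift towards the extreme is a side effect.
    return max(catalogue, key=lambda e: predicted_watch_time(e, current_position))

catalogue = [i / 20 for i in range(21)]   # extremity scores from 0.0 (mild) to 1.0 (extreme)
position = 0.1                            # the viewer starts with fairly mild content

for step in range(8):
    position = recommend(position, catalogue)
    print(f"step {step}: recommended extremity {position:.2f}")
# With these invented numbers, recommendations climb from 0.2 to 0.9 in eight steps,
# without anyone ever asking the system to produce more extreme material.
```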

A roadmap for AI: 10 ways governments will change (and what they risk getting wrong)

Geoff Mulgan – NESTA

This is a great summary of where AI stands in the hype cycle. Its focus is the application to government, but most of it is more generally relevant. It’s really helpful in drawing out what ought to be the obvious point that AI is not one thing and that it therefore doesn’t have a single state of development maturity.

The last of the list of ten is perhaps the most interesting. Using AI to apply more or less current rules in more or less current contexts and systems is one thing (and is a powerful driver of change in its own right). But the longer term opportunity is to change the nature of the game. That could be a black box dystopia, but it could instead be an opportunity to break away from incremental change and find more radical opportunities to change the system. But that depends, as this post rightly concludes, on not getting distracted by the technology as a goal in its own right, but focusing instead on what better government might look like.

UK police are using AI to make custodial decisions – but it could be discriminating against the poor

Matt Burgess – Wired

In abstract, AI is a transformational technology. It may bring perfect and rigorous decision analysis, sweeping away human foibles. Or it may displace human sensitivity and judgement – and indeed the humans themselves – and usher in an era of opaque and arbitrary decision making.

This article, which focuses on the introduction of AI to Durham Constabulary, is a good antidote to those caricature extremes. Reality is, as ever, messier than that. Predictability and accountability are not straightforward. Humans tend to revert, perhaps unwisely, to confidence in their own judgements. It is not clear that some kinds of data are appropriately used in prediction models at all (though the black boxes of human brains are equally problematic). In short, the application of AI to policing decisions isn’t simple and clear-cut; it is instead a confused and uncertain set of policy problems. That shouldn’t be surprising.

How AI will transform the Digital Workplace (and how it already is)

Sharon O’Dea – Intranetizen

AI is often written about in terms of sweeping changes resulting in the wholesale automation of tasks and jobs. But as this post sets out, there is also a lower key version, where forms of AI appear as feature enhancements (and thus may not be apparent at all). Perhaps self-generating to-do lists are the real future – though whether that will be experienced as liberation or enslavement is very much a matter of taste. Either way, AI won’t be experienced as robots breaking into the building to take our jobs; instead, tasks will melt away, enhanced in ways which never quite feel revolutionary.

10 Principles for Public Sector use of Algorithmic Decision Making

Eddie Copeland – NESTA

This is a really interesting attempt to set out a set of regulatory principles for the use of algorithms in the public sector. It brings what can easily be quite an abstract debate down to earth: we can talk about open data and open decision making, but what actually needs to be open (and to whom) to make that real?

The suggested principles mostly look like a sensible starting point for debate. Two of them though seem a little problematic, one trivially, the other much more significantly. The trivial one is principle 9, that public sector organisations should insure against errors, which isn’t really a principle at all, though the provision of compensation might be. The important one is principle 5, “Citizens must be informed when their treatment has been informed wholly or in part by an algorithm”. On the face of it, that’s innocuous and reasonable. Arguably though, it’s the equivalent of having a man with a red flag walking in front of a car. Government decisions are already either based on algorithms (often called “laws” or “regulations”) or they are based on human judgements, likely to be more opaque than any computer algorithm. Citizens should absolutely be entitled to an explanation and a justification for any decision affecting them – but the means by which the decision at issue was made should have no bearing on that right.

Will We Own AIs, or Will They Own Us?

Tim Weber – Techonomy

Neither, seems to be the answer to the question posed by the title. But if one thing AIs do is filter the world for us, the question of who does the filtering and in whose interest they do it becomes very important. As with other free services, free algorithms will be provided in expectation of a benefit to somebody, and that somebody may very well not be the end user. So far, so unexceptional (and putting it under the heading of AI doesn’t change the substance of an issue which has been around a good while). But if this is a problem, what are the pressures and processes which will work to relieve it rather than reinforce it? Here, the argument rather fades away: we are told we need clear laws and well-accepted procedures to regulate AI, but there is little suggestion about what they would say or how we would get to them. It’s slightly unfair to single this piece out for what is quite a common problem: when challenges are technology driven, but solutions need to be socially driven, it’s a lot easier to talk about the first than the second.

AI in a week

Laura Caccia – Oxford Insights

This post is just a teaser, but a teaser for something interesting in both style and substance. Starting next Monday, we are promised a five day intensive course on artificial intelligence. AI is for many people at the stage where there is lots of fragmented insight and understanding, but little which brings the fragments together to form a coherent whole. So there is a gap here well worth filling – and this looks to be a neat low key way of filling it. Watch that space.

The black box is a state of mind

Kathrin Passig – Eurozine

The idea of the black box pervades a lot of thinking and writing about AI. Mysterious algorithms do inscrutable things which impinge on people’s lives in inexplicable ways. That is alarming in its own right, but doubly so because this is new and uncharted territory. Except that, as this post painstakingly points out, it’s not actually new at all. People have been writing software about which they could not predict the outputs from the inputs for pretty much as long as they have been writing software at all – in a sense, that’s precisely the point of it. And if you want to look at it that way, the ultimate black box is the human brain, where the evidence that we don’t understand the reasons for our own decisions, never mind anybody else’s, is pretty overwhelming.

The need for precision at one level – software doesn’t cope well with typos and syntax errors – doesn’t translate into precision at a higher level, of understanding what that precisely written software will actually do. That thought came from Marvin Minsky in 1967, but people had been writing about black boxes for years before that, when the complexity of software was a tiny fraction of what is normal now.

The fact that this is neither new nor newly recognised doesn’t in itself change the nature of the challenge. What it does perhaps suggest, though, is that strategies developed for coping with these uncertainties in the past may well still be relevant for the future.