Karen Hao – MIT Technology Review
What is artificial intelligence? It’s a beguilingly simple question, but one which lacks a beguilingly simple answer. There’s more than one way to approach the question, of course – Chris Yiu provides mass exemplification, for example (his list had 204 entries when first linked from here in January, but has now grown to 501). Terence Eden more whimsically dives down through the etymology, while Fabio Ciucci provides a pragmatic approach based on the underlying technology.
This short post takes a different approach again – diagnose whether what you are looking at is AI by means of a simple flowchart. It’s a nice idea, despite inviting some quibbling about some of the detail (“looking for patterns in massive amounts of data” doesn’t sound like a complete account of “reasoning” to me). And it’s probably going to need a bigger piece of paper soon.
Sandra Wachter – Oxford Internet Institute
The ethical and legal issues around even relatively straightforward objectively factual personal data are complicated enough. But they seem simple beside the further complexity brought in by inferences derived from that data. Inferences are not new, of course: human beings had been drawing inferences about each other long before they had the assistance of machines. But as in other areas, big data makes a big difference.
Inferences are tricky for several reasons. The ownership of an inference is clearly something different from ownership of the information from which the inference is drawn (even supposing that it is meaningful to talk about ownership in this context at all). An inference is often a propensity, which can be wrong without being falsifiable – ‘people who do x tend to like y’ may remain true even if I do x and don’t like y. And all that gets even more tricky over time – ‘people who do x tend to become y in later life’ can’t even be denied or contradicted at the individual level.
This lecture explores those questions and more, examining them at the intersection of law, technology and ethics – and then asks what rights we, as individuals, should have about the inferences which are made about us.
The same arguments are also explored in a blog post written by Wachter with her collaborator Brent Mittelstadt and in very much more detail in an academic paper, also written with Mittelstadt.
Irina Bolychevsky – Medium
As a corollary to the comment here a few weeks back on Tim Berners-Lee’s ideas for shifting the power balance of the web away from data-exploiting conglomerates and back towards individuals, this post is a good clear-headed account of why his goal – however laudable – may be hard to achieve in practice.
What makes it striking and powerful is that it is not written from the perspective of somebody critical of the approach. On the contrary, it is by a long-standing advocate of redecentralising the internet, but one who has a hard-headed appreciation of what would be involved. It is a good critique, for example addressing the need to recognise that data does not perfectly map to individuals (and therefore what data counts as mine is nowhere near as straightforward as might be thought) and that for many purposes the attributes of the data, including the authority with which it is asserted, can be as important as the data itself.
One response to that and other problems could be to give up on the ambition for change in this area, and leave control (and thus power) with the incumbents. Instead, the post takes the more radical approach of challenging current assumptions about data ownership and control at a deeper level, arguing that governments should be providing the common, open infrastructure which would allow very different models of data control to emerge and flourish.
Richard Pope – Platform Land
New writing from Richard Pope is always something to look out for: he has been thinking about and doing the intersection of digital and government more creatively and for longer than most. This post is about the myriad ways in which government is not real time – you can’t track the progress of your benefit claim in anything like the way in which you can track your Amazon delivery. And conversely, at any given moment, Amazon has a very clear picture of who its active customers are and what they are doing, in a way which is rather less true of operators of government services.
He is absolutely right to make the point that many services would be improved if they operated – or at least conveyed information – in real time, and he is just as right that converted (rather than transformed) paper processes and overnight batch updates account for some of that. So it shouldn’t detract from his central point to note that some of his examples are slightly odd ones, which may come from an uncharacteristic confusion between real time and event triggered. There is a notification to potential school leavers of their new national insurance number – but since children’s sixteenth birthdays are highly predictable, that notification doesn’t need to be real time in the sense meant here. It was very useful to be told that my passport was about to expire – but since they were helpfully giving me three months’ notice, the day and the hour of the message was pretty immaterial.
Of course there are government services which should operate on less leisurely cycles than that, and of course those services should be as fast and as transparent as they reasonably can be. But perhaps the real power of real-time government is from the other side, less in shortening the cycle times of service delivery and much more in shortening the cycle times of service improvement.
Cathy O’Neil – RSA
This is a brilliant two and a half minute animation, explaining what algorithms are, what they are not, and why they are inherently not neutral.
Eddie Copeland – NESTA
A few months ago, Eddie Copeland shared 10 Principles for Public Sector use of Algorithmic Decision Making. They later apparently morphed into twenty questions to address, and now the twenty have been slimmed down to ten. They are all good questions, but one very important one seems to be missing – how can decisions based on the algorithm be challenged? (and what, therefore, do people affected by a decision need to understand about how it was reached?)
Yuval Noah Harari – The Atlantic
The really interesting effects of technology are often the second and third order ones. The invention of electricity changed the design of factories. The invention of the internal combustion engine changed the design of cities. The invention of social media shows signs of changing the design of democracy.
This essay is a broader and bolder exploration of the consequences of today’s new technologies. That AI will destroy jobs is a common argument, that it might destroy human judgement and ability to make decisions is a rather bolder one (apparently a really creative human chess move is now seen as an indicator of potential cheating, since creativity in chess is now overwhelmingly the province of computers).
The most intriguing argument is that new technologies destroy the comparative advantage of democracy over dictatorship. The important difference between the two, it asserts, is not between their ethics but between their data processing models. Centralised data and decision making used to be a weakness; increasingly it is a strength.
There is much to debate in all that, of course. But the underlying point, that those later order effects are important to recognise, understand and address, is powerfully made.
This post – which is actually a set of tweets gathered together – is a beautifully short and simple explanation of why some basic stuff really matters in efficiently integrating data and the services it supports (and is actually quite important as well in ensuring that things don’t get joined up which shouldn’t be). Without common identifiers, simple and value-adding connections get difficult, expensive and unreliable – a point powerfully made in a post linked from that one which sets out a bewildering array of unique identifiers for property in the UK – definitely unique in the sense that there is a one-to-one mapping between identifier and place, but ludicrously far from unique in their proliferation.
There is a huge appetite for making more effective use of data. The appetite needs to be as strong for creating the conditions which make that possible.
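The point about common identifiers can be made concrete with a small sketch (the identifier values and addresses below are invented for illustration): where two datasets share an identifier, joining them is a cheap, exact operation; without one, the join has to fall back on fuzzy matching of free text, which is exactly the difficult, expensive and unreliable path described above.

```python
from difflib import SequenceMatcher

# Two invented datasets describing the same properties.
# With a shared identifier (a hypothetical UPRN-style key),
# joining them is trivial and exact.
valuations = {
    "100023336956": 250_000,
    "100023336957": 310_000,
}
energy_ratings = {
    "100023336956": "C",
    "100023336957": "B",
}

# Exact join on the common identifier: cheap and unambiguous.
joined = {
    uprn: (valuations[uprn], energy_ratings[uprn])
    for uprn in valuations.keys() & energy_ratings.keys()
}

# Without a common identifier, the same join must rely on approximate
# matching of free-text addresses - slower, and liable to both miss
# genuine matches and join things which shouldn't be joined.
def addresses_match(a: str, b: str, threshold: float = 0.85) -> bool:
    """Crude string-similarity test standing in for real record linkage."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

# These two strings describe the same place, but only clear the
# similarity threshold by a narrow margin.
print(addresses_match("1 High Street, Leeds", "1 High St, Leeds"))
```

The asymmetry is the point: the first join is a one-liner which cannot be wrong, while the second depends on an arbitrary threshold which trades false matches against missed ones.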
Jeni Tennison – Exponential View
Azeem Azhar’s Exponential View is one of the very few weekly emails which earns regular attention, and it is no disrespect to him to say that the occasional guest authors he invites add further to the attraction. This edition is by Jeni Tennison, bringing her very particular eye to the question of data ownership.
Is owning data just like owning anything else? The simple answer to that is ‘no’. But if it isn’t, what does it mean to talk about data as property? To which the only simple answer is that there is no simple answer. This is not the place to look for detailed exposition and analysis, but it is very much the place to look for a set of links to a huge range of rich content, curated by somebody who is herself a real expert in the field.
Guido Noto La Diega
This is by way of a footnote to the previous post – a bit more detail on one small part of the enormous ecosystem described there.
If you buy an Amazon Echo then, partly depending on what you intend to do with it, you may be required to accept 17 different contracts, amounting to close to 50,000 words, not very far short of the length of a novel. You will also be deemed to be monitoring them all for any changes, and to have accepted any such changes by default.
That may be extreme in length and complexity, but the basic approach has become normal to the point of invisibility. That raises a question about the reasonableness of Amazon’s approach. But it raises a much more important question about our wider approach to merging new technologies into existing social, cultural and legal constructs. This suggests, to put it mildly, that there is room for improvement.
(note that the link is to a conference agenda page rather than directly to the presentation, as that is a 100MB download, but if needed this is the direct link)
Kate Crawford and Vladan Joler
An Amazon Echo is a simple device. You ask it to do things, and it does them. Or at least it does something which quite a lot of the time bears some relation to the thing you ask it to do. But of course in order to be that simple, it has to be massively complicated. This essay, accompanied by an amazing diagram (or perhaps better to say this diagram, accompanied by an explanatory essay), is hard to describe and impossible to summarise. It’s a map of the context and antecedents which make the Echo possible, covering everything from rare earth geology to the ethics of gathering training data.
It’s a story told in a way which underlines how much seemingly inexorable technology in fact depends on social choices and assumptions, where invisibility should not be confused with inevitability. In some important ways, though, invisibility is central to the business model – one aspect of which is illustrated in the next post.
Tristan Greene – The Next Web
One of the many concerns about automated decision making is its lack of transparency. Particularly (but by no means only) for government services, accountability requires not just that decisions are well based, but that they can be challenged and explained. AI black boxes may be efficient and accurate, but they are not accountable or transparent.
This is an interesting early indicator that those issues might be reconciled. It’s in the special – and much researched – area of image recognition, so a long way from a general solution, but it’s encouraging to see systematic thought being addressed to the problem.
Catherine Howe – Curious?
The eight tribes of digital (which were once seven) have become nine.
The real value of the tribes – other than that they are the distillation of four years of observation, reflection and synthesis – is not so much in whether they are definitively right (which pretty self-evidently they aren’t, and can’t be) but as a prompt for understanding why individuals and groups might behave as they do. And of course, the very fact that there can be nine kinds of digital is another way of saying that there is no such thing as digital.
Margaret Boden – Aeon
The phrase ‘artificial intelligence’ is a brilliant piece of marketing. By starting with the artificial, it makes it easy to overlook the fact that there is no actual intelligence involved. And if there is no intelligence, still less are there emotions or psychological states.
The core of this essay is the argument that computers and robots do not, and indeed cannot, have needs or desires which have anything in common with those experienced by humans. In the short to medium term, that has both practical and philosophical implications for the use and usefulness of machines and the way they interact with humans. And in the long term (though this really isn’t what the essay is about), it means that we don’t have to worry unduly about a future in which humanity survives – at best – as pets of our robot overlords.
Ellen Broad – Melbourne University Publishing
Ellen Broad’s new book is high on this summer’s reading list. Both provenance and subject matter mean that confidence in its quality can be high. But while waiting to read it, this short interview gives a sense of the themes and approach. Among many other virtues, Ellen recognises the power of language to illuminate the issues, but also to obscure them. As she says, what is meant by AI is constantly shifting, a reminder of one of the great definitions of technology, ‘everything which doesn’t work yet’ – because as soon as it does it gets called something else.
The book itself is available in the UK, though Amazon only has it in Kindle form (but perhaps a container load of hard copies is even now traversing the globe).
Edwina Dunn – Starcount
Edwina Dunn is one of the pioneers of data science and this short paper is the distillation of more than twenty years’ experience of using meticulous data analysis to understand and respond to customers – most famously in the form of the Tesco Clubcard. It is worth reading both for some pithy insights – data is art as well as science – and, more unexpectedly, for what feels like a slightly dated approach. “Data is the new oil” may be true in the sense that it is a transformational opportunity, with Zuckerberg as the new Rockefeller, but data is not finite, it is not destroyed by use and it is not fungible. More tellingly she makes the point that ‘Owning the customer is not a junior or technical role; it’s one of the most important differentiators of future winners and losers.’ You can see what she means, but shopping at a supermarket is not supposed to be a form of slavery, still less (if that were possible) is it a good way of thinking about the users of public services.
It doesn’t sound as though the Cluetrain Manifesto has been a major influence on this school of thought. Perhaps it should be.
Matthew Hutson – Science
This article is an interesting complement to one from last week which argued that AI is harder than you think. It builds a related argument from a slightly different starting point: that big data driven approaches to artificial intelligence have been demonstrably powerful in the short term, but may never break through to produce general problem solving skills. That’s because there is no solution in sight to the problem of creating common sense – which turns out not to be very common at all. Humans possess some basic instincts which are hard coded into us and might need to be hard coded into AI as well – but to do so would be to cut across the self-learning approach to AI which now dominates. If there is reason to think that babies can make judgements and distinctions which elude current AI, perhaps AI has more to learn from babies than babies from AI.
A pithy but important reminder that the autonomy of AI is not what we should most worry about. Computers are ultimately controlled by humans and do what humans want them to do. Understanding the motivation of the humans will be more important than attempting to infer the motivation of the robots for a good while to come.
Gary Marcus and Ernest Davis – New York Times
Coverage of Google’s recent announcement of a conversational AI which can sort out your restaurant bookings for you has largely taken one of two lines. The first is about the mimicry of human speech patterns: is it ethical for computers to um and er in a way which can only be intended to deceive their interlocutors into thinking that they are dealing with a real human being, or should it always be made clear, by specific announcement or by robotic tones, that a computer is a computer? The second – which is where this article comes in – positions this as being on the verge of artificial general intelligence: today a conversation about organising a haircut, tomorrow one about the meaning of life. That is almost completely fanciful, and this article is really good at explaining why.
It does so in part by returning to a much older argument about computer intelligence. For a long time, the problem of AI was treated as a problem of finding the right set of rules which would generate a level of behaviour we would recognise as intelligent. More recently that has been overtaken by approaches based on extracting and replicating patterns from big data sets. That approach has been more visibly successful – but those successes don’t in themselves tell us whether they are steps towards a universal solution or a brief flourishing within what turns out to be a dead end. Most of us can only be observers of that debate – but we can guard against getting distracted by potential not yet realised.
Alex Blandford – Medium
Data is a word which conjures up images of objectivity and clarity. It lives in computers and supports precise binary decisions.
Except, of course, none of that is true, or at least none of it is reliably true, especially the bit about supporting decisions. Decisions are framed by humans, and the data which supports them is as much social construct as it is an emergent property of reality. That means that the role of people in curating data and the decision making it supports is vital, not just in constructing the technology, but in managing the psychology, sociology and anthropology which frame them. Perhaps that’s not a surprising conclusion in a post written by an anthropologist, but that doesn’t make it any less right.