Personal Data Representatives: An Idea

Tom Steinberg – Medium

In dealing with digital services – indeed in dealing with organisations generally – power is very asymmetric. Amazon does not invite you to negotiate the terms on which it is prepared to sell you things (though of course you retain the power not to buy). Digital services and apps give the illusion of control (let’s think about whether to accept these cookies…) but have developed a habit – and a reputation – of helping themselves to data and making their own judgement about what to do with it. That’s not necessarily because individual consumers can’t control permissions; it’s at least as much because the cost and complexity of doing so make it burdensome. Tom Steinberg brings a potential solution to that problem: if we had somebody negotiating all that on our behalf, could the asymmetry be addressed? Typically, he recognises the difficulties as well as the potential, but even if the answers are hard, the question is important.

How to Build Self-Conscious Artificial Intelligence

Hugh Howey – Wired

The title of this article is a bit of a false flag, since it could easily have continued ‘… and why that would be a really bad idea’. It is, though, an interesting – if considerably longer – complement to the argument that the idea of general artificial intelligence is based on a false analogy between human brains and computers. This article takes the related but distinct approach that self-consciousness exists as compensation for structural faults in human brains – driven particularly by the fact that having a sophisticated theory of mind is a useful evolutionary trait – and that it would be pointless (rather than impossible) to replicate it: perhaps the most notable thing about human introspection about consciousness is how riddled it is with error and self-contradiction. That being so, AI will continue to get more powerful and sophisticated. But it won’t become more human, because it makes no sense to make it so.

204 examples of artificial intelligence in action

Chris Yiu

This is a page of links offering over two hundred examples of artificial intelligence in action – ranging from mowing the lawn, through managing hedge funds and sorting cucumbers, all the way to writing AI software. Without clicking a single one of the links, it gives a powerful visual indication of how pervasive AI has already become. There is inevitably a bit of a sense of never mind the quality, feel the width – but the width is itself impressive, and the quality is often racing up as well.

There is a linked Twitter account which retweets AI-related material – though in a pleasing inversion, it shows every sign of being human-curated.

Ethics and ethicists

Ellen Broad

This is a short tweet thread making the point that ethics in AI – and in technology generally – needs to be informed by ethical thinking developed in other contexts (and over several millennia). That should be so obvious as to be hardly worth saying, but it has become one of those questions which people doing new things fall into the trap of believing they are solving for the first time.

Why the robot boost is yet to arrive

Tim Harford – the Undercover Economist

One of the problems with predicting the future is working out when it’s going to happen. That’s not quite as silly as it sounds: there is an easy assumption that the impact of change follows closely on the change itself, but that assumption is often wrong. That in turn can lead to the equally wrong assumption that because there has been limited impact in the short term, the impact will be equally limited in the long term. As Robert Solow famously put it in 1987, ‘You can see the computer age everywhere but in the productivity statistics.’ In this post, Tim Harford updates the thought from computers to robots. The robot takeover isn’t obviously consistent with high employment and low productivity growth, but that is what we can currently observe. The conclusion – and the resolution of the paradox – is disarmingly simple, if rather frustrating: wait and see.

Don’t believe the hype: work, robots, history

Michael Weatherburn – Resolution Foundation

This post introduces a longer paper which takes the idea of understanding the future by reflecting on the past to a new level. The central argument is that digital technologies have been influencing and shaping the industry sectors the paper examines for a long time, and that this experience strongly suggests the more dramatic current forecasts about the impact of technology on work are overblown.

The paper’s strengths come from its historical perspective – and, unusually for this topic, from being written by a historian. It is very good on the underlying trends driving changing patterns of work and service delivery, and on distinguishing them from their visible emanations in web services. It does, though, sweep a lot of things together under the general heading of ‘the internet’ in a way which doesn’t always add to understanding – the transformation of global logistics driven by ERP systems is very different from the creation of the gig economy, in both cause and effect.

The paper is less good at providing strong enough support for its main conclusion to justify making it the report’s title. It is true that the impacts of previous technology-driven disruptions have been slower to manifest themselves, and less dramatic, than contemporary hype expected. But the fact that hype is premature does not indicate that the underlying change is insubstantial – the railway mania of the 1840s was not a sign that the impact of railways had peaked. It is also worth considering seriously whether this time it’s different – not because it necessarily is, but because the fact that it hasn’t been in the past is a reason to be cautious, not a reason to be dismissive.

What we talk about when we talk about fair AI

Fionntán O’Donnell – BBC News Labs


This is an exceptionally good non-technical overview of fairness, accountability and transparency in AI. Each issue in turn is systematically disassembled and examined. It is particularly strong on accountability, bringing out clearly that it can only rest on human agency and social and legal context. ‘My algorithm made me do it’ has roughly the same moral and intellectual depth as ‘a big boy made me do it’.

I have one minor, but not unimportant, quibble about the section on fairness. The first item on the suggested checklist is ‘Does the system fit within the company’s ethics?’ That is altogether too narrow a formulation, both in principle and in practice. It’s wrong in practice because there is no particular reason to suppose that a company’s (or any other organisation’s) ethics can be relied on to impose any meaningful standards. But it’s also wrong in principle: the relevant scope of ethical standards is not the producers of an algorithm, but the much larger set of people who use it or have it applied to them.

But that’s a detail. Overall, the combination of clear thinking and practical application makes this well worth reading.

Machine learning, defined

Sarah Jamie Lewis

There’s a whole emerging literature summarised in those words. But it underlines how much of the current debate is still as much about what machine learning is as about what it does.

The impossibility of intelligence explosion

François Chollet – Medium

Last week, there was another flurry of media coverage for AI, as Google’s AlphaZero went from nothing but the rules of chess to beating the current (computer) world champion in less than a day. And that inevitably prompts assumptions that very specific domain expertise will somehow translate into ever accelerating general intelligence, until humans become pets of the AI, if they are suffered to live at all.

This timely article systematically debunks that line of thought, demonstrating that intelligence is a social construct and arguing that it is in many ways a property of our civilization, not of each of us as individuals within it. Human IQ (however flawed a measure that is) does not correlate with achievement, let alone with world domination, beyond a fairly narrow range – raw cognition, it seems, is far from being the only relevant component of intelligence.

Or, in a splendid tweet-length dig at those waiting expectantly for the singularity:

Thinking about the future

Ben Hammersley

This is a video of Ben Hammersley talking about the future for 20 minutes, contrasting the rate of growth of digital technologies with the much slower growth in effectiveness of all previous technologies – and the implications that has for social and economic change. It’s easy to do techno gee-whizzery, but Ben goes well beyond that in reflecting on the wider implications of technology change, and how that links to thinking about organisational strategies. He is clear that predicting the future for more than the very short term is impossible, suggesting a useful outer limit of two years. But even being in the present is pretty challenging for most organisations, prompting the question: when you go to work, what year are you living in?

His recipe for then getting to and staying in the future is disarmingly simple. For every task and activity, ask what problem you are solving, and then ask yourself this question: if I were to solve this problem today, for the first time, using today’s technologies, how would I do it? And that question scales: how can new technologies make entire organisations, sectors and countries work better?

It’s worth hanging on for the ten minutes of conversation which follows the talk, in which Ben makes the arresting assertion that the problem is not that organisations which can change have to make an effort to change; it is that organisations which can’t or won’t change must be making a concerted effort to prevent the change.

It’s also well worth watching Ben Evans’s different approach to thinking about some very similar questions – the two are interestingly different and complementary.

Can A.I. Be Taught to Explain Itself?

Cliff Kuang – New York Times

The question which this article tries to answer is a critically important one. Sometimes – often – it matters not just that a decision has been made, but that it has been made correctly and appropriately, taking proper account of the factors which are relevant and no account of factors which are not.

That need is particularly obvious in, but not limited to, government decisions, even more so where a legal entitlement is at stake. But machine learning doesn’t work that way: decisions are emergent properties of systems, and the route to the conclusion may be neither known nor, in any ordinary sense, knowable.

The article introduces a new name for a challenge faced from the earliest days of the discipline, “explainable AI” – with a matching three-letter acronym, XAI. The approach is engagingly recursive. The problem of explaining the decision produced by an AI may itself be a problem of the type susceptible to analysis by AIs. Even if that works, it isn’t of course the end of it. We may have to wonder whether we need a third AI system which assures us that the explanation given by the second AI system of the decision made by the first AI system is accurate. And more prosaically, we would need to understand whether any such explanation is even capable of meeting the new GDPR standards.

But AI isn’t going away. And given that, XAI or something like it is going to be essential.

The morality of artificial intelligence

Moral Maze – BBC

Posts generally appear on Strategic Reading because they make powerful and interesting arguments or bring thought-provoking information to bear. This 45-minute discussion is in a rather different category. Its appearance here is to illustrate the alarmingly low level of thought being applied to some critically important questions. In part, it’s a classic two cultures problem: technologists who don’t seem to see the social and political implications of their work in a hopeless discourse with people who don’t seem to grasp the basics of the technology, in a discussion chaired by somebody capable of introducing the topic by referring to ‘computer algorithms – whatever they are.’ Matthew Taylor stands out among the participants for his ability to comment intelligently on both sides of the divide, while Michael Portillo is at least fluently pessimistic about the intrinsic imperfection of humanity.

Why then mention it at all? Partly to illustrate the scale and complexity of some of the policy questions prompted by artificial intelligence, which are necessarily beyond the scope of the technology itself. Partly also because the current state of maturity of AI makes it hard to get traction on the real problems. Everybody can project their hopes and fears on hypothetical AI developments – it’s not clear that people are agreeing on enough to have meaningful disagreements.

So despite everything, there is some value in listening to this – but with an almost anthropological cast of mind, to get some insight into the lack of sophistication on an important and difficult topic of debate.


We’ll need more than £40m* a year to get free maps – specifically politicians willing to share

Ed Parkes

This is the back story to one of yesterday’s budget announcements – £40 million a year for two years to give UK small businesses access to Ordnance Survey data. If you are interested in that you will find it gripping. But even if you are not, it’s well worth reading as a perceptive – if necessarily speculative – account of how policy gets made.

There are people lobbying for change – some outside government, some within. What they want done has a cost, but more importantly entails changing the way that the problem is thought about, not just in the bit of government which owns the policy, but in the Treasury, which is going to have to pay for it. A decision is made, but not one which is as clear cut or all embracing as the advocates would have liked. They have won, in a sense, but what they have won isn’t really what they wanted.

It’s also a good example of why policy making is hard. What seems at first to be a simple issue about releasing data quickly expands into wider questions of industrial and social strategy – is it a good idea to subsidise mapping data, even if the first-order beneficiaries are large non-UK multinationals whose reputation for paying taxes is not the most positive? Is time-limited pump-priming funding the right stimulus, or does it risk creating a surge of activity which then dies away? And, of course, this is a policy with no service design in sight.

Digital archiving: disrupt or be disrupted?

John Sheridan – The National Archives blog

This post works at two entirely different levels. It is a bold claim of right to the challenges of digital archiving, based on the longevity of the National Archives as an organisation, the trust it has earned and its commitment to its core mission – calling on a splendidly Bayesian historiography.

But it can be read another way, as an extended metaphor for government as a whole. There is the same challenge of managing modernity in long established institutions, the same need to sustain confidence during rapid systemic change. And there is the same need to grow new government services on the foundations of the old ones, drawing on the strength of old capabilities even as new ones are developed.

And that, of course, should be an unsurprising reading. Archival record keeping is changing because government itself is changing, and because archives and government both need to keep pace with the changing world.

Do social media threaten democracy? – Scandal, outrage and politics

The Economist

It’s interesting to read this Economist editorial alongside Zeynep Tufekci’s TED talk. It focuses on the polarisation of political discourse driven by the persuasion architecture Tufekci describes, resulting in the politics of contempt. The argument is interesting, but perhaps doubly so when the Economist, which is not known for histrionic rhetoric, concludes that ‘the stakes for liberal democracy could hardly be higher.’

That has implications well beyond politics and persuasion and supports the wider conclusion that algorithmic decision making needs to be understood, not just assumed to be neutral.

We’re building a dystopia just to make people click on ads

Zeynep Tufekci – TED

This TED talk is a little slow to get going, but increasingly catches fire. The power of algorithmically driven media may start with the crude presentation of adverts for the thing we have just bought, but the same powers of tracking and micro-segmentation create the potential for social and political manipulation. Advertising-based social media platforms are based on persuasion architectures, and those architectures make no distinction between persuasion to buy and persuasion to vote.

That analysis leads – among other things – to a very different perception of the central risk of artificial intelligence: it is not that technology will develop a will of its own, but that it will embody, almost undetectably, the will of those in a position to use it. The technology itself may, in some senses, be neutral; the business models it supports may well not be.

Technology for the Many: A Public Policy Platform for a Better, Fairer Future

Chris Yiu – Institute for Global Change

This wide-ranging and fast-moving report hits the Strategic Reading jackpot. It provides a bravura tour of more of the topics covered here than is plausible in a single document, ticking almost every category box along the way. It moves at considerable speed, but without sacrificing coherence or clarity. That sets the context for a set of radical recommendations to government, based on the premise established at the outset that incremental change is a route to mediocrity and that ‘status quo plus’ is a grave mistake.

Not many people could pull that off with such aplomb. The pace and fluency sweep the reader along through the recommendations, which range from the almost obvious to the distinctly unexpected. There is a debate to be had about whether they are the best (or the right) ways forward, but it’s a debate well worth having, for which this is an excellent provocation.


Five thoughts on design and AI

Richard Pope – IF

Some simple but very powerful thoughts on the intersection of automation and design. The complexity of AI, as with any other kind of complexity, cannot be allowed to get in the way of making the experience of a service simple and comprehensible. Designers have an important role to play in avoiding that risk, reinforced, as the post notes, by the requirement under GDPR for people to be able to understand and challenge decisions which affect them.

There is a particularly important point – often overlooked – about the need to ensure that transparency and comprehension are attributes of wider social and community networks, not just of individuals’ interaction with automated systems.

Your Data is Being Manipulated

danah boyd – Point

This is the transcript of a conference address, less about the weaknesses of big data and machine learning and more about their vulnerability to attack and to the encoding of systematic biases – and how everything is going to get worse. There are some worrying case studies – how easy will it turn out to be to game the software behind self-driving cars into confusing one road sign with another? – but also some hope, from turning the strength of machine learning against itself, using adversarial testing for models to probe each other’s limits. Her conclusion though is stark:

We no longer have the luxury of only thinking about the world we want to build. We must also strategically think about how others want to manipulate our systems to do harm and cause chaos.

(the preamble promises a link to a video of the whole thing, but what’s there is only one section of the piece; the rest is behind a paywall)

Tales from three disruption “sherpas”

Martin Stewart-Weeks – Public Purpose

This is an artful piece – the first impression is of a slightly unstructured stream of consciousness, but underneath the beguilingly casual style, some great insights are pulled out, as if effortlessly. Halfway down, we are promised ‘three big ideas’, and the fulfilment does not disappoint. The one which struck home most strongly is that we design institutions not to change (or, going further still, the purpose of institutions is not to change). There is value in that – stability and persistence bring real benefits – but it’s then less surprising that those same institutions struggle to adapt to rapidly changing environments. A hint of an answer comes with the next idea: if everything is the product of a design choice, albeit sometimes an unspoken and unacknowledged one, then it is within the power of designers to make things differently.