Killing many birds with one stone

Alan Mitchell – Mydex

Debates about personal data have a tendency to be more circular than they are productive. There is – it appears – a tension between individual privacy and control and the power unlocked by mass data collection and analysis. But because the current balance (or imbalance) between the two is largely an emergent property of the system, there is no reason to think that things have to be the way they are just because that is the way they are.

Given, though, that we are where we are, there are two basic approaches to doing something about it. One is essentially to accept the current system but to put controls of various kinds over it to ameliorate the most negative features – GDPR is the most prominent recent example, which also illustrates that different political systems will put the balance point in very different places. The other approach is to look more fundamentally at the underlying model and ask what different pattern of benefits might come from a more radically different approach. That’s what this post does, systematically coming up with what will look to many like a more attractive set of answers.

Mydex has been building practical systems based on these principles for a good while, so the post is based on solid experience. But therein lies the problem. Getting off the current path onto a different one is in part a technical and architectural challenge, but it is even more a social, political and economic one. As ever, the hard bit is not describing a better future, but working out how to get there from here.

Arguments against the autonomous vehicle utopia

Alexis Madrigal – The Atlantic

It should by now be beyond obvious that technology is never just about the technology, but somehow the hype is always with us. This article is a useful counter, listing and briefly explaining seven reasons why autonomous vehicles may not happen and may not be an altogether good thing if they do.

It’s worth reading not so much – perhaps not even mainly – for its specific insights as for its method: thinking about the sociology and economics of technology may give more useful insights than thinking just about the technology itself.

A New Approach to Understanding How Machines Think

Been Kim – Quanta Magazine

The problem of the AI black box has been around for as long as AI itself: if we can’t trace how a decision has been made, how can we be confident that it has been made fairly and appropriately? There are arguments – for example by Ed Felten – that the apparent problem is not real, that such decisions are intrinsically no more or less explicable than decisions reached any other way. But that doesn’t seem to be an altogether satisfactory approach in a world where AI can mirror and even amplify the biases endemic in the data it draws on.

This interview describes a very different approach to the problem: building a tool which retrofits interpretability to a model which may not have been designed to be fully transparent. At one level this looks really promising: ‘is factor x significant in determining the output of the model?’ is a useful question to be able to answer. But of course real world problems throw up more complicated questions than that, and there must be a risk of infinite recursion: our model of how the first model reaches a conclusion itself becomes so complicated that we need a model to explain its conclusions…

But whether or not that is a real risk, there are some useful insights here into identifying materiality in assessing a model’s accuracy and utility.
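
By way of a hedged illustration of the ‘is factor x significant’ question, the sketch below uses a generic post-hoc technique (permutation importance), not the specific tool described in the interview: it asks how much a black-box model’s score degrades when a single factor is scrambled.

```python
# A minimal sketch of post-hoc interpretability (illustrative only, not the
# tool described in the interview): permutation importance answers "is factor x
# significant in determining the output of the model?" by shuffling one feature
# at a time and measuring how much the model's score degrades.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)  # the "black box"

result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda pair: -pair[1])
for name, drop in ranked[:5]:
    print(f"{name}: {drop:.3f}")  # bigger drop in score => more significant factor
```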

Your robot needs a passport

David Birch – Wired

David Birch is one of a pretty small group of people who write sense about money and identity – and he is pretty much unique in doing so with wit and lightness of touch. This short article draws out the connection between identity and attribution. We will increasingly need to know and trust the attributes of robots and systems, we will increasingly be interested in what attributes people assert about themselves – and at the intersection of those needs there will be a particularly precious attribute:

In time, IS_A_PERSON will be the most valuable credential of all.

Help us start a data revolution for government

Kit Collingwood and Robin Linacre – Data in government

There is lots being written – a small subset of it captured on Strategic Reading – about data and its implications as a driver of new ways of doing things and new things which can be done. There’s a lot written about the strategic (and ethical and legal…) issues and of course there is a vast technical literature. What there seems to be less of is more practical approaches to making data useful and used. That’s a gap which this post starts to fill. It’s not only full of good sense in its own right, it’s also a pointer to an approach which it would be good to see more of: given a strategic opportunity or goal, what are the practical things which need to be done to enhance the probability of success? Strategising is the easy bit of strategy; getting things done to move towards the goal is a great deal harder.

Why Data Is Never Raw

Nick Barrowman – New Atlantis 

There is increasing – if belated – recognition that analysis and inference built on data is vulnerable to bias of many different kinds and levels of significance. But there is a lingering unspoken hope that data itself is somehow still pure: a fact is, after all, a fact. Except that of course it isn’t, and as this post neatly argues, while raw data may sound less underhand than cooked data, its apparent virtue can be illusory:

In the ordinary use of the term “raw data,” “raw” signifies that no processing was performed following data collection, but the term obscures the various forms of processing that necessarily occur before data collection.

Tired of the same old clichés about the future of work? You’re not alone

Benedict Dellot – RSA

There is no shortage of material on the future of work in general, or on its displacement by automation in particular, but much of it has a strong skew to the technocratically simplistic (though posts chosen for sharing here are selected in part with the aim of avoiding that trap).

There has been a steady stream of material from the RSA which takes a more subtle approach, of which this is the latest. It takes the form of a set of short essays from a variety of perspectives, the foreword to which is also the accompanying blog post. The questions they address arise from automation, but go far beyond the first order effects. What are the implications of the emergence of a global market for online casual labour? Does automation drive exploitation or provide the foundations for a leisured society? Given that automation will continue to destroy jobs (as it always has), will they get replaced in new areas of activity (as they always have – so far)?

Buried in the first essay is an arresting description of why imminent exponential change is hard to spot, even if things have been changing exponentially:

because each step in an exponential process is equal to the sum of all the previous steps, it always looks like you are the beginning, no matter how long it has been going on.

And that in many ways is the encapsulation of the uncertainty around this whole set of questions. There is a technological rate of change, driven by Moore’s law and its descendants, and there is a socio-economic rate of change, influenced by but distinct from the technological rate of change. It is in their respective rates and the relationship between them that much controversy lies.
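
A toy calculation (mine, not the essay’s) makes the quoted observation concrete: construct a series in which each new step equals the sum of everything that came before, and from any point the curve behind you looks like a mere beginning.

```python
# Toy illustration of the quoted point: in a doubling process each new step
# equals the sum of all the previous steps, so however long the process has
# been running, everything up to now looks like it is only just starting.
steps = [1]
for _ in range(10):
    steps.append(sum(steps))  # next step = sum of all previous steps

for n in range(1, len(steps)):
    assert steps[n] == sum(steps[:n])  # the quoted property holds at every step
    print(f"step {n:2d}: {steps[n]:5d}  (everything before it: {sum(steps[:n]):5d})")
```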

Is this AI? We drew you a flowchart to work it out

Karen Hao – MIT Technology Review

What is artificial intelligence? It’s a beguilingly simple question, but one which lacks a beguilingly simple answer. There’s more than one way to approach the question, of course – Chris Yiu provides mass exemplification, for example (his list had 204 entries when first linked from here in January, but has now grown to 501). Terence Eden more whimsically dives down through the etymology, while Fabio Ciucci provides a pragmatic approach based on the underlying technology.

This short post takes a different approach again – diagnose whether what you are looking at is AI by means of a simple flowchart. It’s a nice idea, despite inviting some quibbling about some of the detail (“looking for patterns in massive amounts of data” doesn’t sound like a complete account of “reasoning” to me). And it’s probably going to need a bigger piece of paper soon.

Show Me Your Data and I’ll Tell You Who You Are

Sandra Wachter – Oxford Internet Institute

The ethical and legal issues around even relatively straightforward objectively factual personal data are complicated enough. But they seem simple beside the further complexity brought in by inferences derived from that data. Inferences are not new, of course: human beings were drawing inferences about each other long before they had the assistance of machines. But as in other areas, big data makes a big difference.

Inferences are tricky for several reasons. The ownership of an inference is clearly something different from ownership of the information from which the inference is drawn (even supposing that it is meaningful to talk about ownership in this context at all). An inference is often a propensity, which can be wrong without being falsifiable – ‘people who do x tend to like y’ may remain true even if I do x and don’t like y. And all that gets even more tricky over time – ‘people who do x tend to become y in later life’ can’t even be denied or contradicted at the individual level.
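
A toy example (my illustration, not the lecture’s) of why such propensities resist individual contradiction: they are claims about a rate in a group, not about any one member of it.

```python
# Toy illustration: "people who do x tend to like y" is a claim about a rate,
# so it can remain true even though particular individuals are counterexamples.
likes_y = [True] * 80 + [False] * 20               # 100 people who do x; 80 of them like y
rate = sum(likes_y) / len(likes_y)
print(f"P(likes y | does x) = {rate:.0%}")         # the propensity claim: 80%
print("counterexamples exist:", False in likes_y)  # I do x and don't like y; the claim still stands
```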

This lecture explores those questions and more, examining them at the intersection of law, technology and ethics – and then asks what rights we, as individuals, should have about the inferences which are made about us.

The same arguments are also explored in a blog post written by Wachter with her collaborator Brent Mittelstadt and in very much more detail in an academic paper, also written with Mittelstadt.

How solid is Tim’s plan to redecentralize the web?

Irina Bolychevsky – Medium

As a corollary to the comment here a few weeks back on Tim Berners-Lee’s ideas for shifting the power balance of the web away from data-exploiting conglomerates and back towards individuals, this post is a good clear-headed account of why his goal – however laudable – may be hard to achieve in practice.

What makes it striking and powerful is that it is not written from the perspective of somebody critical of the approach. On the contrary, it is by a long-standing advocate of redecentralising the internet, but who has a hard-headed appreciation of what would be involved. It is a good critique, for example addressing the need to recognise that data does not perfectly map to individuals (and therefore what data counts as mine is nowhere near as straightforward as might be thought) and that for many purposes the attributes of the data, including the authority with which it is asserted, can be as important as the data itself.

One response to that and other problems could be to give up on the ambition for change in this area, and leave control (and thus power) with the incumbents. Instead, the post takes the more radical approach of challenging current assumptions about data ownership and control at a deeper level, arguing that governments should be providing the common, open infrastructure which would allow very different models of data control to emerge and flourish.

Real-time government

Richard Pope – Platform Land

New writing from Richard Pope is always something to look out for: he has been thinking about and doing the intersection of digital and government more creatively and for longer than most. This post is about the myriad ways in which government is not real time – you can’t track the progress of your benefit claim in anything like the way in which you can track your Amazon delivery. And conversely, at any given moment, Amazon has a very clear picture of who its active customers are and what they are doing, in a way which is rather less true of operators of government services.

He is absolutely right to make the point that many services would be improved if they operated – or at least conveyed information – in real time, and he is just as right that converted (rather than transformed) paper processes and overnight batch updates account for some of that. So it shouldn’t detract from his central point to note that some of his examples are slightly odd ones, which may come from an uncharacteristic confusion between real time and event triggered. There is a notification to potential school leavers of their new national insurance number – but since children’s sixteenth birthdays are highly predictable, that notification doesn’t need to be real time in the sense meant here. It was very useful to be told that my passport was about to expire – but since they were helpfully giving me three months’ notice, the day and the hour of the message were pretty immaterial.

Of course there are government services which should operate on less leisurely cycles than that, and of course those services should be as fast and as transparent as they reasonably can be. But perhaps the real power of real-time government is from the other side, less in shortening the cycle times of service delivery and much more in shortening the cycle times of service improvement.

10 questions to answer before using AI in public sector algorithmic decision making

Eddie Copeland – NESTA

A few months ago, Eddie Copeland shared 10 Principles for Public Sector use of Algorithmic Decision Making. They later apparently morphed into twenty questions to address, and now the twenty have been slimmed down to ten. They are all good questions, but one very important one seems to be missing – how can decisions based on the algorithm be challenged? (and what, therefore, do people affected by a decision need to understand about how it was reached?)

Why Technology Favors Tyranny

Yuval Noah Harari – The Atlantic

The really interesting effects of technology are often the second and third order ones. The invention of electricity changed the design of factories. The invention of the internal combustion engine changed the design of cities. The invention of social media shows signs of changing the design of democracy.

This essay is a broader and bolder exploration of the consequences of today’s new technologies. That AI will destroy jobs is a common argument; that it might destroy human judgement and the ability to make decisions is a rather bolder one (apparently a really creative human chess move is now seen as an indicator of potential cheating, since creativity in chess is now overwhelmingly the province of computers).

The most intriguing argument is that new technologies destroy the comparative advantage of democracy over dictatorship. The important difference between the two, it asserts, is not between their ethics but between their data processing models. Centralised data and decision making used to be a weakness; increasingly it is a strength.

There is much to debate in all that, of course. But the underlying point, that those later order effects are important to recognise, understand and address, is powerfully made.

Identifiers and data sharing

Leigh Dodds

This post – which is actually a set of tweets gathered together – is a beautifully short and simple explanation of why some basic stuff really matters in efficiently integrating data and the services it supports (and is actually quite important as well in ensuring that things don’t get joined up which shouldn’t be). Without common identifiers, simple and value adding connections get difficult, expensive and unreliable – a point powerfully made in a post linked from that one which sets out a bewildering array of unique identifiers for property in the UK – definitely unique in the sense that there is a one to one mapping between identifier and place, but ludicrously far from unique in their proliferation.
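
As a hypothetical sketch of the underlying point (the identifier value and field names below are invented for illustration): with a shared identifier, joining two datasets is a simple lookup; without one, it degenerates into fuzzy matching on free text.

```python
# Hypothetical sketch: linking records about the same property with and without
# a common identifier. With a shared UPRN-style key the join is a set intersection
# and a lookup; without one, only unreliable string matching on addresses remains.
council_tax = {"100023336956": {"band": "D"}}        # keyed by a shared identifier
energy_certs = {"100023336956": {"rating": "C"}}

# Reliable join on the common identifier:
joined = {key: {**council_tax[key], **energy_certs[key]}
          for key in council_tax.keys() & energy_certs.keys()}
print(joined)   # {'100023336956': {'band': 'D', 'rating': 'C'}}

# Without a common identifier, only error-prone address matching remains:
a = "Flat 1, 10 High St, Leeds"
b = "10 HIGH STREET FLAT ONE LEEDS"
print(a.casefold() == b.casefold())                  # False: same place, no reliable match
```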

There is a huge appetite for making more effective use of data. The appetite needs to be as strong for creating the conditions which make that possible.

Data as property

Jeni Tennison – Exponential View

Azeem Azhar’s Exponential View is one of the very few weekly emails which earns regular attention, and it is no disrespect to him to say that the occasional guest authors he invites add further to the attraction. This edition is by Jeni Tennison, bringing her very particular eye to the question of data ownership.

Is owning data just like owning anything else? The simple answer to that is ‘no’. But if it isn’t, what does it mean to talk about data as property? To which the only simple answer is that there is no simple answer. This is not the place to look for detailed exposition and analysis, but it is very much the place to look for a set of links to a huge range of rich content, curated by somebody who is herself a real expert in the field.

Fading out the Echo of Consumer Protection: An empirical study at the intersection of data protection and trade secrets

Guido Noto La Diega

This is by way of a footnote to the previous post – a bit more detail on one small part of the enormous ecosystem described there.

If you buy an Amazon Echo then, partly depending on what you intend to do with it, you may be required to accept 17 different contracts, amounting to close to 50,000 words, not very far short of the length of a novel. You will also be deemed to be monitoring them all for any changes, and to have accepted any such changes by default.

That may be extreme in length and complexity, but the basic approach has become normal to the point of invisibility. That raises a question about the reasonableness of Amazon’s approach. But it raises a much more important question about our wider approach to merging new technologies into existing social, cultural and legal constructs. This suggests, to put it mildly, that there is room for improvement.

(note that the link is to a conference agenda page rather than directly to the presentation, as that is a 100Mb download, but if needed this is the direct link)

Anatomy of an AI System

Kate Crawford and Vladan Joler

An Amazon Echo is a simple device. You ask it to do things, and it does them. Or at least it does something which quite a lot of the time bears some relation to the thing you ask it to do. But of course in order to be that simple, it has to be massively complicated. This essay, accompanied by an amazing diagram (or perhaps better to say this diagram, accompanied by an explanatory essay), is hard to describe and impossible to summarise. It’s a map of the context and antecedents which make the Echo possible, covering everything from rare earth geology to the ethics of gathering training data.

It’s a story told in a way which underlines how much seemingly inexorable technology in fact depends on social choices and assumptions, where invisibility should not be confused with inevitability. In some important ways, though, invisibility is central to the business model – one aspect of which is illustrated in the next post.

MIT taught a neural network how to show its work

Tristan Greene – The Next Web

One of the many concerns about automated decision making is its lack of transparency. Particularly (but by no means only) for government services, accountability requires not just that decisions are well based, but that they can be challenged and explained. AI black boxes may be efficient and accurate, but they are not accountable or transparent.

This is an interesting early indicator that those issues might be reconciled. It’s in the special – and much researched – area of image recognition, so a long way from a general solution, but it’s encouraging to see systematic thought being addressed to the problem.

Spoiler alert – there are now 9 tribes of digital

Catherine Howe – Curious?

The eight tribes of digital (which were once seven) have become nine.

The real value of the tribes – other than that they are the distillation of four years of observation, reflection and synthesis – is not so much in whether they are definitively right (which pretty self-evidently they aren’t, and can’t be) as in their use as a prompt for understanding why individuals and groups might behave as they do. And of course, the very fact that there can be nine kinds of digital is another way of saying that there is no such thing as digital.