It’s often said that decisions generated by algorithms are inexplicable or lack transparency. This post asks what that challenge really means, and argues that there is nothing distinctive about decisions made by algorithms which makes them intrinsically less explicable than decisions made by human brains. Of the four meanings Felten considers, the argument from complexity seems the most prevalent and relevant – and the one where his counter-argument is weakest. But this is an important and useful way of clarifying and exploring the problem.
‘Digital’ risks becoming ever more shapeless as a word – as it increasingly means everything, it necessarily means nothing. In a post three years ago, Catherine Howe brought some rigour to at least one aspect of the issue, identifying seven tribes of digital. Now she has felt the need to add an eighth – the robot army, reflecting the shift she sees from large-scale automation being an interesting theory to becoming a practical reality.
An interesting take on the problem of algorithmic reliability: treat it as an economic problem and apply economic analysis tools to it. Algorithms are adopted because they are expected to create value; a rational algorithm adopter will choose algorithms which maximise value. One dimension of that is that if the consequential cost of errors is low, the value of an improved algorithm will also be low (setting an inconvenient appointment time matters less than making an incorrect entitlement decision). More generally, decision making value is maximised when the marginal value of an extra decision equals its marginal cost.
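The marginal-value point above can be made concrete with a small sketch (illustrative only, not from the article – the cost numbers and functions are invented): a rational adopter keeps improving an algorithm only while each accuracy gain is worth more than it costs, so low-stakes decisions justify less investment in accuracy than high-stakes ones.

```python
# Illustrative sketch (assumed numbers, not from the article): keep
# raising an algorithm's accuracy while the marginal value of the next
# step exceeds its marginal cost.

def optimal_accuracy(error_cost, improvement_cost, step=0.01):
    """Raise accuracy from an assumed 0.5 baseline while each step pays
    for itself.

    error_cost: consequential cost per unit of error avoided
    improvement_cost: cost of a further step at a given accuracy level
    """
    accuracy = 0.5
    while accuracy + step <= 1.0:
        marginal_value = error_cost * step          # value of errors avoided
        marginal_cost = improvement_cost(accuracy)  # cost of this step
        if marginal_value < marginal_cost:
            break  # stop where marginal value no longer covers marginal cost
        accuracy += step
    return round(accuracy, 2)

# Improvement gets more expensive as accuracy approaches 1 (assumed form).
cost = lambda a: 1 / (1 - a)

# The inconvenient appointment time vs the incorrect entitlement decision:
low_stakes = optimal_accuracy(error_cost=500, improvement_cost=cost)
high_stakes = optimal_accuracy(error_cost=5000, improvement_cost=cost)
```

With these assumed numbers the high-stakes decision justifies pushing accuracy considerably further than the low-stakes one – the article’s point in miniature.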
One consequence of taking an economics-based approach which this article doesn’t cover is the importance of externalities: the decision maker about an algorithm (typically an organisation) may give insufficient weight to the costs and benefits experienced by the subject of a decision (often an individual), so producing a socially sub-optimal outcome.
This article on the biases of algorithmic decision making is notable for two reasons. The first is that it comes from a national newspaper, suggesting that the issue is becoming more visible to less specialised audiences. The second is that it includes a superbly pithy statement of the problem:
Our past dwells within our algorithms.
There is also an eight-minute TEDx talk, mostly covering the same ground, but putting a strong emphasis in the final minute on the need for diversity and inclusion in coding and introducing the Algorithmic Justice League, a collective which aims to highlight and address algorithmic bias.
Jobs, we keep being told, will increasingly be automated. But if, in the modern welfare system, claimants are required to demonstrate that they are, in effect, working full time at looking for work, what happens when looking for work is just another job which gets automated?
Last week Google announced a new Google for Jobs service which isn’t quite that, but which is clearly a step in that direction, and it’s a safe bet that there will be more steps to come. This post reflects on the implications of that for people who are seeking to use their time productively while looking for paid work – and for the welfare systems which support them as they are doing so.
There is no reason to think that governments as organisations are any less vulnerable to the disruptive effects of automation than other kinds of organisations. As process delivery organisations, they are not fundamentally different from other process delivery organisations, and are certainly not immune to the pressures which are reshaping them (though they may be slower to respond to changing expectations). How far, though, might AI take over the policy development functions of government? More than you might think, is the argument here, asserting that governments have a moral obligation to make the best use of AI.
The growing gig economy is often associated with low wages and exploitation, with the flexibility it offers advantaging the employer rather than the worker (and as one of the speakers at the recent RSA event said, flexibility is fine, so long as it works in both directions). Some of that is to do with ambiguities in legal status which haven’t kept pace with the changing labour market, but some of it is about power imbalances – another reflection of the changing relationship between technology and work. This report attempts to answer the question of what a good gig economy would look like, with government given the primary role for creating the conditions for success.
A thirty-minute discussion on job automation. Building on Michael Osborne’s work on the levels of job automation, Ryan Avent paints a dystopian future where, paradoxically, humans are forced into low-skill and low-wage work – and Judy Wajcman points out that the impact of technology is not inexorably deterministic, but is a function of social and political choices. As in previous industrial revolutions, there may be many losers in the transition, even if in the long run, society as a whole is better off, bringing a clear need to avoid technology driving social and political division. The goal seems obvious – that automation should lessen the burdens of work as far as possible – but the means of getting there requires many assumptions to be challenged and reset.
If, as the World Economic Forum has argued, five million jobs are about to be automated out of existence, it becomes important to know which skills will be less in demand and which align with future jobs growth. This article argues that there are two important dimensions – the ‘soft’ skills, such as sharing and negotiation, and mathematical ability, and that it is the combination of the two which will lead to greatest success.
This is a more technical post than most which appear here, on the apparently arcane point of whether blockchains actually are (or even should be) as immutable as is sometimes claimed for them. That matters to those interested in the use of blockchains, rather than their technology, for two reasons. The first is that although the distributed design of a blockchain – particularly a public blockchain such as bitcoin – makes it hard to compromise, hard is not the same as impossible, and understanding where motive and opportunity might overlap is important to decisions about how and when it is sensible to use them. The second is that the pattern of interests and opportunities may be different for large institutions such as government. Public blockchains are kept honest by large, computationally intensive processes which, by design, are expensive and inefficient, but which allow participants in exchanges to be trusted without needing to prove that they are trustworthy. If other forms of trust are available, the overheads of bitcoin-like blockchains can be avoided, bringing a slightly different mix of risks and opportunities.
The article also contains a neat and succinct description of how blockchain actually works (though it might not be the ideal starting point for people completely new to the subject).
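The “computationally intensive processes” which keep public blockchains honest can be illustrated with a minimal proof-of-work sketch (an illustration of the general idea only, not bitcoin’s actual implementation): finding a nonce whose hash meets a difficulty target is deliberately expensive, while checking someone else’s answer is cheap – which is exactly the overhead that other forms of trust would let an institution avoid.

```python
# Minimal proof-of-work sketch (illustrative, not bitcoin's actual
# scheme): search for a nonce whose SHA-256 hash starts with a run of
# zero hex digits. The search is expensive by design; verification is
# a single hash.

import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Return a nonce such that sha256(block_data + nonce) begins with
    `difficulty` zero hex digits. Higher difficulty means more work."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

nonce = mine("block: Alice pays Bob 5", difficulty=4)
# Anyone can verify the work with one cheap hash:
digest = hashlib.sha256(f"block: Alice pays Bob 5{nonce}".encode()).hexdigest()
assert digest.startswith("0000")
```

Each extra zero digit multiplies the expected search effort by sixteen, which is why tampering with an already-extended public chain is hard – though, as the article stresses, hard is not the same as impossible.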
Work is a critically important part of life, and it matters enormously that work should be good not bad, that we should be interested in the quality of work as well as its quantity. That’s the central premise of this thoughtful article by Matthew Taylor which covers similar, but not identical, ground to his lecture, delivered the same day, on good work for all.
A short, sharp lecture – the main part is less than twenty minutes – on the nature of work, and particularly what should count as ‘good work’ in a modern economy, covering similar ground to the article Matthew Taylor published the same day. It is followed by responses from Carolyn Fairbairn, Carol Black and Peter Cheese, which are a slightly more mixed bag, but interesting for what they both do and do not say directly.
If Weapons of Math Destruction shows how data models can be bad, that still leaves us with the question of how to tell the good from the bad, and how to make judgements about what might count as good. This post sets out some potential ethical criteria to help data scientists – and even more importantly, those who commission and use data science – to make good decisions. There are some strong parallels with Jeni Tennison’s talk on countering bias in data, perhaps not surprisingly given a common background in ODI.
As a neat little example of the general proposition that all models are wrong, but some are useful, David Weinberger unearths a mechanical tide prediction engine from 1914, which refined the accuracy of its forecasts by using an increasingly unrealistic model of the solar system, with planets and moons being invented with gay abandon to make the results come out right. That’s a beautiful example of one of the points Weinberger is making in his article about machines, knowledge and artificial intelligence – that there will be complex consequences of systems which increasingly appear to have useful outputs without it being easy – or even possible – to work out how they were derived.
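The technique behind machines like that 1914 engine – harmonic tide prediction – is easy to sketch: model the tide as a sum of cosine terms, one per astronomical “constituent”, with amplitudes and phases fitted to observations rather than derived from physics. The constituent speeds below are standard values; the amplitudes, phases and mean level are invented for illustration.

```python
# Harmonic tide prediction sketch: height = mean level + a sum of
# cosines, one per tidal constituent. The speeds (degrees/hour) are
# standard; amplitudes (metres) and phases (degrees) here are made up.

import math

CONSTITUENTS = [
    # (name, speed deg/hr, amplitude m, phase deg)
    ("M2", 28.984, 1.20, 40.0),   # principal lunar semidiurnal
    ("S2", 30.000, 0.45, 70.0),   # principal solar semidiurnal
    ("K1", 15.041, 0.30, 120.0),  # lunisolar diurnal
]

def tide_height(hours: float, mean_level: float = 2.0) -> float:
    """Predicted tide height at a time offset, as a sum of harmonics."""
    return mean_level + sum(
        amp * math.cos(math.radians(speed * hours - phase))
        for _name, speed, amp, phase in CONSTITUENTS
    )
```

Adding more constituents improves the fit without the model becoming any truer as a picture of the solar system – which is Weinberger’s point: useful outputs, physically unreal machinery.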
Universal basic income can be seen as a way of sustaining society in a world beyond work. Or it can be seen as a way of supporting a society and economy which is still based on work. The latter may open up more interesting possibilities and opportunities than the former, but this post argues that the debate is much less constructive and open minded in the UK – or rather in England – than it is in a number of other countries, with particular criticism for the recent report from the Work and Pensions Select Committee.