“Charbonneau Loops” and government IT contracting
‘Charbonneau loop’ turns out to be one of those terms we didn’t know we needed until it was called into existence: it draws attention to something all too easily overlooked. It describes a form of moral hazard which is simple, obvious, and largely invisible:
Charbonneau Loops ultimately happen when the “pool” of companies (receiving public sector contracts for a given type of work) is small enough that the same companies are sometimes overseeing, and sometimes overseen by, their peers in that same pool. Even if they never actually coordinate with each other – even if they don’t have any conversations whatsoever – they’re all incentivized to be a little bit less critical of each other as a result.
The concept takes its name from an investigation into construction corruption in Quebec, but it can clearly apply to sectors other than construction and far beyond Quebec. It can also be extended beyond the simple two-role form of the loop. The story of Grenfell Tower is a multi-player version where a complicated set of public and private sector organisations carefully positioned themselves not to identify risks and not to be responsible for resolving them.
The question, of course, is what can be done to break the loop and restore – or perhaps create – conditions in which the institutional incentives act differently, supporting effective challenge rather than muddled complacency. The simple answer, set out in the post, is to strengthen in-house capacity and to increase the pool of suppliers. But as so often, the harder question is how to get there from here, and how to avoid letting progress be undermined by regulatory revolving doors which create personal Charbonneau loops embedded in the more institutional ones.
Provocation: Redesigning Artificial Intelligence – From Australia Out
Ellen not only always has interesting things to say, she is also unusually effective in finding interesting ways of saying them. This latest piece defies categorisation. It is an essay about AI. It is a reflection on extreme utilitarianism. It is a call to action on the hidden costs of social harmony. It is about edge cases where the edges are sharp and cause harm to those whose lives place them there. It is a call to bring the messiness of cybernetics and systems to the delusional clarity of dehumanised AI. It is a discussion of issues not discussed. It is a challenge to do better.
We are more aware of the threads that bind us together. We have had a glimpse of the fragility of the foundations on which our lives of easy comfort are built. When the exchange for that comfort is the discomfort of others. And so in this space is room to imagine some place else.
And as well as all those things, it is an audio-visual experience, with a soundscape which drifts beyond music and imagery which is not quite illustration. The tone is neither soothing nor haranguing. But in its matter of factness there is great power.
Hosni Mubarak – My Part In His Downfall
Social media gives voice to aggressive extremists, provides powerful tools for like-minded people to find each other and reinforce the thinking of the group, and allows lies and disinformation to be propagated at speed. Social media companies come under pressure to do something about all that and aren’t widely regarded as being sufficiently focused on their intent or sufficiently successful in their achievement.
This is an insider’s view of why that is harder than it looks and especially hard to scale, setting out clearly and logically how this can work and why it can’t. It’s very much worth reading for the clarity with which it does that. But it also aims to demonstrate support for the assertion that those working on this within the social media platforms are “good people making hard decisions as best they can.” The question for the rest of us is whether their doing the best they can is good enough – and the reassurance that Facebook knows best is perhaps not quite as reassuring as its supporters might hope.
When part of your job is *not* caring
Strategic Reading – and indeed strategy – tends to the lofty, the grand scale and the dispassionate. So at first sight, this personal and emotional reflection by Terence Eden might seem out of place here. But it is precisely because of the lofty perspective that his point is so important. Thinking about and, even more so, making decisions about issues which affect thousands or millions of people can never be about each of them as individuals. And few real world complex problems have a reassuring Pareto-optimal solution where we can sleep easy knowing that we have made things better for some and worse for none.
Abstracting from the individual can be a very necessary thing to do. But that’s not at all the same as forgetting that there are individuals, real people with real lives which can be made better or worse by distant decisions. To lose sight of that is to become less human. More insidiously, stripping people of their individuality is the first step towards the risk of treating them badly. And yet as Terence says,
I simply cannot think about them as individuals. No one’s brain has room to contemplate the pain and joy and heartbreak and elation of so many people. It is unfair of me to care about any one person more than another.
The dilemma is inescapable. Being aware of it is the very least we should expect of those whose work forces that issue upon them.
Things Fall Apart
There is an almost universal belief, no less strong for being almost as universally unspoken, that the UK political system is an exemplar of stability and moderation. There is a related belief, near universal among those most affected by it, that being a non-political civil servant is unproblematic, precisely because of those characteristics of the wider political system.
Those beliefs have been pretty resistant to evidence. Reflections on civil service ethics generate little interest. The remarkable resignation letter of a British diplomat in the USA cracked the facade, but the crack is already healing. This post takes that resignation as its starting point for a deeper examination of the fragility of civil society. It is short and pointed; alarmed but not alarmist. It sets a challenge. It is not clear where an effective response to that challenge will come from.
The Moral Economy of Tech
This is another piece which isn’t new but which provides some good provocative food for thought, on how applying a computer programming perspective to problems which are fundamentally social can – and does – lead to unfortunate results. It’s written by Maciej Cegłowski, who brings elegant erudition to an unlikely range of subjects, in this case how an approach based on controlling closed systems breaks down when confronted with messily indeterminate systems, with a scattering of provocative one liners which combine challenge and simplicity, such as
Machine learning is like money laundering for bias.
But his conclusion is much broader – and an even greater challenge – than that single line suggests:
We have to stop treating computer technology as something unprecedented in human history. Not every year is Year Zero. This is not the first time an enthusiastic group of nerds has decided to treat the rest of the world as a science experiment. Earlier attempts to create a rationalist Utopia failed for interesting reasons, and since we bought those lessons at a great price, it would be a shame not to learn them.
Rethink government with AI
Helen Margetts and Cosmina Dorobantu – Nature
Much of what is written about the use of new and emerging technologies in government fails the faster horse test. It is based on the tacit assumption that technology can be different, but that the structure of the problems, services and organisations it is applied to remains fundamentally the same.
This article avoids that trap and is clear that the opportunities (and risks) from AI in particular look rather different – and of course that they are about policy and organisations, not just about technology. But arguably even this is just scratching the surface of the longer term potential. Individualisation of services, identification of patterns, and modelling of alternative realities all introduce new power and potential to governments and public services. Beyond that, though, it becomes possible to discern the questions that those developments will prompt in turn. The institutions of government and representative democracy are shaped by the information and communications technologies of past centuries and the more those technologies change, the greater the challenge to the institutional structures they support. That’s beyond the scope of this article, but it does start to show why those bigger questions will need to be answered.
Computer says no
Ellen Broad is a technologist who is expert on policy, an ethicist who is an incisive thinker, a writer who has published an important and well argued book. She is also, as it happens, a woman. We all know that that last statement should be irrelevant to the preceding three. We all know that it isn’t.
But we can all too easily persuade ourselves that everything is more or less all right, that blatant discrimination is a thing of the past, that even though there are rotten apples, the barrel is sound. This article calmly and comprehensively demolishes that easy optimism. It is a very deliberate breaking of self-imposed silence, a discarding of the assumption that quiet compliance will somehow make things better:
If there is no “right” way as a woman to speak about gender issues — if there is no “right” way for a woman to take up space, to take credit — then silence won’t serve me or save me either. The only way forward from here is to start speaking.
In this essay, Ellen has spoken. She needs to be heard.
Ethics won’t make software engineering better
Rachel Coldicutt – Doteveryone
The subtitle of this post lays down a challenge:
Why a social scientist could be the most important person on your product team
Leaving aside the point that it might be an even better challenge if ‘philosopher’ were substituted for ‘social scientist’, this is an important issue. There is much talk (and much writing) about the need for ethics in data and software – though curiously rather less so in service design, where it is no less important.
But ethics is not some esoteric form of quality assurance added as a final overlay to activities otherwise devoid of any moral compass. It is perhaps better understood (in this context) as the encapsulation of a deep and pervasive view that technology should work for humanity, not the other way round.
What would computer science look like if it had included the perspectives of the humanities and social sciences from the outset? And what if those perspectives came not from some thinker in residence, but from people who brought a fusion of perspectives and understanding to problem solving?
And whatever the answers to those questions might be, there is a wider one still: where does that fusion not have a place? The Amalgamated Union of Philosophers, Sages, Luminaries, and Other Professional Thinking Persons may be due for a resurgence.