Strategy Systems

Contradictions of government and its impossible standards

Martin Stewart-Weeks – The Mandarin

If it is hard to think and act systemically about the long term, it’s also worth reflecting on patterns of behaviour which get in the way even of the attempt. The rhetoric of innovation, of openness, of fearless honesty runs into a reality which seems designed to punish and constrain precisely those behaviours. And of course ‘design’ is precisely the wrong word here: these characteristics are emergent rather than intended (which does not, of course, mean that it would be impossible to design them to be different). There are many reasons why that is an unfortunate state of affairs; the one rightly given some emphasis here is that it risks crowding out the strategic and the systemic:

The real dilemma is that we’re so busy honing the efficiency of the pieces that we’ve failed to work out how to put the puzzle together or work out what the puzzle is or should be.

Strategy Systems

Exploring change and how to scale it

Pia Waugh – Pipka

This is a characteristically excellent post, examining in some detail both what it takes for change to succeed and, perhaps even more importantly, how to scale it.

The short answer is that if you want to change the system, you have to change the system. And doing that on the fifty-plus year scale which is the level of ambition behind this post requires rigour and discipline. Five questions are set out, including the two which are the most critical: what future do you want? And what are you going to do today?

Scaling from an idea of the future to systematic government and national level change can’t be done by exhortation – and simple observation suggests it can be done at all only with the greatest difficulty. The recommendations here are an intriguing mixture, ranging from the very slow burn (supporting long term varied career development, to reduce aversion to new thinking) to the much more immediate (mandating the use of user research in funding bids).

All that still leaves the question of how best to start this whole process, but this is a manifesto of what should be done, or rather how it should be done; it doesn’t purport to be a set of instructions for making it happen.

Curation Organisational change

Governance as a service: a reading list

Richard McLean – Medium

While we are on the subject, this is a really useful compendium of reading on governance. Some of it is general, but much of it, in one way or another, is about how fast moving projects meet slow moving organisations, and how to dissipate the heat from the friction which results.

Organisational change

We all need governance

Richard McLean – Medium

If headlines are designed to excite the readers and entice them into the words which follow, this one would win no prizes. It is not fashionable to be excited about governance and indeed it often becomes one of those irregular verbs – I do agile; you do project management; they do governance. And who wants to be like them?

This post is a healthy challenge to that point of view, refreshingly written in human speak, which is something not always found in writing on this subject. It sets out some of the value governance provides in terms of setting direction, committing resources and assuring progress, more from the perspective of the user of governance than of its imposer. That’s not of course to say that the current way of doing those things provides the optimal balance – and there is a tantalising promise of a second post on the biggest single thing we can do to improve governance. Presumably that one is still going through the approvals process.

Organisational change Systems

Can government stop losing its mind?

Gavin Starks – NESTA

This is an interesting report which asks almost the right question. Government is at little risk of losing its mind, or its short term memory. The two better questions – which in practice are the ones this report starts to address – are whether government can stop losing its longer term memory, and how the power of the government’s mind can be enhanced by better ways of connecting and amplifying its component parts.

Those are important questions. It’s already all too easy to forget how long we have been worrying about the ease of forgetting things. Aspects of the problem have been recognised, and solutions attempted, since the earliest days of what we no longer call electronic government. None has been more than partially successful.

The two questions are also closely related. People are reluctant to incur even the smallest overhead in storing things for the benefit of posterity, so the benefit needs to come from improving the effectiveness of current work. Conversely, tools which facilitate collaboration, sharing and serendipity will often (though with some very important exceptions) also be tools which facilitate the storage and discovery of what has been created. That was one of the key themes of a series of blog posts I wrote a couple of years ago, which covered some (though by no means all) of the same ground – including the observation, echoed in this report, that the web was invented to be an intranet and knowledge management tool; the world wide bit came rather later.

Where this report adds to the debate is in its more explicit recognition not just that we need to be thinking about texts rather than documents, but that a lot of what we need to be thinking about isn’t conventional text in the first place, making the paginated document an even less useful starting point for thinking about all this.

And there is a delicious irony that this blog – and my blogging generally – exists in large part to serve as my outboard memory, now with well over a decade of material created in part as protection against the weaknesses of institutional knowledge preservation.

Data and AI Government and politics

AI in the UK: ready, willing and able?

House of Lords Select Committee on Artificial Intelligence

There is something slightly disconcerting about reading a robust and comprehensive account of public policy issues in relation to artificial intelligence in the stately prose style of a parliamentary report. But the slightly antique structure shouldn’t get in the way of seeing this as a very useful and systematic compendium.

The strength of this approach is that it covers the ground systematically and is very open about the sources of the opinions and evidence it uses. The drawback, oddly, is that the result is a curiously unpolitical document – mostly sensible recommendations are fired off in all directions, but there is little recognition, still less assessment, of the forces in play which might result in the recommendations being acted on. The question of what needs to be done is important, but the question of what it would take to get it done is in some ways even more important – and is one a House of Lords committee might be expected to be well placed to answer.

One of the more interesting chapters is a case study of the use of AI in the NHS. What comes through very clearly is that there is a fundamental misalignment between the current organisational structure of the NHS and any kind of sensible and coherent use – or even understanding – of the data it holds and of the range of uses, from helpful to dangerous, to which it could be put. That’s important not just in its own right, but as an illustration of a much wider issue of institutional design noted by Geoff Mulgan.

Innovation Systems

Deeply intertwingled laws

John Sheridan

Beyond even the bonus points for talking about laws being ‘intertwingled’, this is an important and interesting post at the intersection of law, policy and automation. It neatly illustrates why the goal of machine-interpretable legislation, such as the recent work by the New Zealand government, is a much harder challenge than it first appears – law can have tacit external interpretation rules, which means that the highly structured interpretation which is normal, and indeed necessary, for software just doesn’t work. Which is why legal systems have judges and programming languages generally don’t – and why the New Zealand project is so interesting.

Innovation Systems

LabPlus: Better Rules for Government Discovery Report

Nadia Webster – NZ Digital Government

The rather dry title of this post belies the importance and interest of its content. Lots of people have spotted that laws are systems of rules, that computer code is a system of rules, and that somehow these two facts should illuminate each other. Quite how that should happen is much less clear. Ideas have ranged from developing systems to turn law into code to adapting software testing tools to check legislative compliance. This post records an experiment with a different approach again, exploring the possibility of creating legislative rules in a way which is designed to make them machine consumable. That’s an approach with some really interesting possibilities, but also some very deep challenges. As John Sheridan has put it, law is deeply intertwingled: the meaning of legislation is only partly conveyed by the words of a specific measure, which means that transcoding the literal letter of the law will never be enough. And beyond that again, the process of delivering and experiencing a service based on a particular set of legal rules will include a whole set of rules and norms which are not themselves captured in law.

That makes it sensible to start, as the work by the New Zealand government reported here has done, with exploratory thinking, rather than jumping too quickly to assumptions about the best approach.  The recommendations for areas to investigate further set out in their full report are an excellent set of questions, which will be of interest to governments round the world.
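To make the idea of ‘machine consumable’ rules slightly more concrete, here is a minimal sketch in Python of what expressing a rule as structured data plus a small evaluator could look like. The eligibility rule, its thresholds and the field names are entirely invented for illustration; they are not drawn from the New Zealand work.

```python
# A minimal, hypothetical sketch of a 'machine consumable' rule: eligibility
# expressed as structured data plus a small evaluator, rather than prose to be
# re-interpreted by each implementing system. The rule and thresholds below
# are invented for illustration only.

from dataclasses import dataclass
from datetime import date


@dataclass
class Applicant:
    date_of_birth: date
    is_resident: bool
    annual_income: float


# The rule as structured data: each condition is named so an outcome
# can be traced back to the provision it came from.
ELIGIBILITY_RULE = {
    "minimum_age": 65,           # hypothetical threshold
    "requires_residency": True,
    "income_ceiling": 30_000.0,  # hypothetical threshold
}


def age_in_years(dob: date, on: date) -> int:
    return on.year - dob.year - ((on.month, on.day) < (dob.month, dob.day))


def assess(applicant: Applicant, on: date) -> dict:
    """Return the outcome of each condition, not just a yes/no answer."""
    results = {
        "minimum_age": age_in_years(applicant.date_of_birth, on) >= ELIGIBILITY_RULE["minimum_age"],
        "residency": applicant.is_resident or not ELIGIBILITY_RULE["requires_residency"],
        "income": applicant.annual_income <= ELIGIBILITY_RULE["income_ceiling"],
    }
    results["eligible"] = all(results.values())
    return results


if __name__ == "__main__":
    print(assess(Applicant(date(1950, 6, 1), True, 25_000.0), on=date(2018, 5, 1)))
```

The point of the sketch is simply that the conditions live in one structured, testable place rather than being re-derived from prose by each implementing system – while the tacit interpretation surrounding them, as both this post and John Sheridan’s argue, is precisely what such an encoding cannot capture.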

Innovation Service design

Usability of Key Distribution in BlockChain Backed Electronic Voting

Terence Eden

This is a good post on the very practical difficulties in establishing secure digital identity, in this case for the purpose of voting in elections. It’s included here mainly as a timely but inadvertent illustration of the point in the previous post that even technology fixes are harder than they look. Implementing some form of online voting wouldn’t be too difficult; implementing a secure and trustworthy electoral system would be very hard indeed.

Innovation Service design

A New Approach to Digital Identity

Chris Yiu and Harvey Redgrave – Institute for Global Change

Digital identity (like digital voting) sounds as though it ought to be a problem with a reasonably straightforward solution, but which looks a lot more complicated when it comes to actually doing it. Like everything with the word ‘digital’ attached to it, that’s partly a problem of technical implementation. But also like everything with the word ‘digital’ attached to it, particularly in the public and political space, it’s a problem with many social aspects too.

This post makes a brave attempt at offering a solution to some of the technical challenges. But the reason why the introduction of identity cards has been highly politically contentious in the UK, but not in other countries, has a lot to do with history and politics and very little to do with technology. So better technology may indeed be better, but that doesn’t in itself constitute a new approach to identity. Even if the better technology is in fact better (and as Paul Clarke spotted, ‘attestation’ is doing a lot more work as a word than it first appears), there are some much wider issues (some flagged by Peter Wells) which would also need to be addressed as part of an overall approach.

Service design

What do we mean when we talk about services?

Stephanie Marsh – GDS

A service is not an interaction on a website; it is not an immediate transaction. A service has a beginning, a middle and an end. The problem is that the service designer is at risk of only seeing the middle, and while a well designed middle is a good thing, it is not the whole thing. From the point of view of the person who has a need they want to resolve, the starting point may come much earlier and the resolution much later.

So it’s very encouraging to see GDS recognising this and making it clear that service design should be seen broadly, not narrowly. There’s room for debate about where the lines are drawn from the supply side perspective (the difference between ‘supporting content’ and ‘things which support’ is lost on me, for example) and perhaps more significantly a definition of a user journey which is too producer focused. But the underlying approach is very much the right one.

Strategy

The perils of bad strategy

Richard Rumelt – McKinsey Quarterly

If we want to create a good strategy, there is some value in understanding what makes a bad one. This paper sets out to do exactly that and ends even more helpfully by reversing that into three key characteristics of a good strategy – understanding the problem; describing a guiding approach to addressing it; and setting out a coherent set of actions to deliver the approach. This is a classic article – which is a way of saying both that it’s a few years old and that it’s pretty timeless. It derives from a book, but as is not uncommon, the book is very much longer without adding value in proportion.

Data and AI

The Risk of Machine-Learning Bias (and How to Prevent It)

Chris DeBrusk – MIT Sloan Management Review

This article is a good complement to the previous post, providing some pragmatic rigour on the risk of bias in machine learning and ways of countering it. Perhaps the most important point is one of the simplest:

It is safe to assume that bias exists in all data. The question is how to identify it and remove it from the model.

There is some good practical advice on how to do just that. But there is an obvious corollary: if human bias is endemic in data, it risks being no less endemic in attempts to remove it. That’s not a counsel of despair; this is an area where good intentions really do count for something. But it does underline the importance of being alert to the opposite risk: unless it is clear that bias has been thought about and countered, the probability is high that it still remains. And of course it will be hard to calibrate the residual risk, whatever its level might be, particularly for the individual on the receiving end of the computer saying ‘no’.
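As a deliberately simplified illustration of what ‘identifying’ bias can mean in practice, the sketch below compares a model’s approval rates across groups and reports a disparate impact ratio. The data, the column names and the choice of metric are assumptions made for the example; the article does not prescribe any particular test.

```python
# A minimal sketch of one simple bias check of the kind the article gestures at:
# comparing a model's positive prediction rate across groups. The data and
# column names are hypothetical, for illustration only.

import pandas as pd


def selection_rates(df: pd.DataFrame, group_col: str, prediction_col: str) -> pd.Series:
    """Positive-prediction rate for each group."""
    return df.groupby(group_col)[prediction_col].mean()


def disparate_impact_ratio(rates: pd.Series) -> float:
    """Ratio of the lowest group rate to the highest; 1.0 means parity."""
    return rates.min() / rates.max()


if __name__ == "__main__":
    # Hypothetical scored applications: 1 = approved by the model, 0 = rejected.
    data = pd.DataFrame({
        "group": ["A", "A", "A", "B", "B", "B", "B", "B"],
        "approved": [1, 1, 0, 1, 0, 0, 0, 1],
    })
    rates = selection_rates(data, "group", "approved")
    print(rates)
    print("disparate impact ratio:", round(disparate_impact_ratio(rates), 2))
```

Passing a check like this addresses only one narrow notion of bias, which is rather the point of the caveat above: a single metric can show that a particular disparity is absent, not that the model is free of bias.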

Data and AI

Computer Says No: Part 1 Algorithmic Bias and Part 2 Explainability

These two (of a planned three) posts take an interesting approach to the ethical problems of algorithmic decision making, arriving at a much more optimistic view than most writers on the subject. They are very much worth reading, even though the arguments don’t seem quite as strong as they are made to appear.

Part 1 essentially sidesteps the problem of bias in decision making by asserting that automated decision systems don’t actually make decisions (humans still mostly do that), but should instead be thought of as prediction systems – and the test of a prediction system is in the quality of its predictions, not in the operations of its black box. The human dimension is a bit of a red herring, as it’s not hard to think of examples where in practice the prediction outputs are all the decision maker has to go on, even if in theory the system is advisory. More subtly, there is an assumption that prediction quality can easily be assessed and an assertion that machine predictions can be made independent of the biases of those who create them, both of which are harder problems than the post implies.

The second post goes on to address explainability, with the core argument being that it is a red herring (an argument Ed Felten has developed more systematically): we don’t really care whether a decision can be explained, we care whether it can be justified, and the source of justification is in its predictive power, not in the detail of its generation. There are two very different problems with that. One is that not all individual decisions are testable in that way: if I am turned down for a mortgage, it’s hard to falsify the prediction that I wouldn’t have kept up the payments. The second is that the thing in need of explanation may be different for AI decisions from that for human decisions. The recent killing of a pedestrian by an autonomous Uber car illustrates the point: it is alarming precisely because it is inexplicable (or at least so far unexplained), but whatever went wrong, it seems most unlikely that a generally low propensity to kill people will be thought sufficiently reassuring.

None of that should be taken as a reason for not reading these posts. Quite the opposite: the different perspective is a good challenge to the emerging conventional wisdom on this and is well worth reflecting on.

One Team Government

Crossing the ‘Valley of Death’ – how we can bridge the gap between policy creation and delivery

Tony Meggs – Civil Service Quarterly

A policy which cannot be – or is not – implemented is a pretty pointless thing. The value in policy and strategy is not in the creation of documents or legislation (essential though that might be), but in making something, somewhere better for someone. Good policy is designed with delivery very much in mind. Good delivery can trace a clear and direct line to the policy intention it is supporting.

That’s easily said, but as we all know, there is no shortage of examples where that’s not what has happened in practice. More positively, there is also no shortage of people and organisations focused on making it work better. Much of that has been catalysed – more or less directly – through digital and service design, with the idea now widely accepted (albeit still sometimes more in principle than in practice) that teams should be formed from the outset by bringing together a range of disciplines and perspectives. But as this post reminds us, there is another way of thinking about how to bring design and delivery together, focusing on implementation and programme management.

But perhaps most importantly, the post stresses the need to recognise and manage the pressures in a political system to express delivery confidence at an earlier stage and with greater precision than can be fully justified. Paradoxically (it might appear), embracing uncertainty is a powerful way of enhancing delivery confidence.

Service design

Why it’s never a good time for service design

Lou Downe

It’s really hard to do things as well when responding to a crisis as when they are properly planned. It’s really hard to do proper planning if all your time and energy is taken up by responding to crises. Service design is one of the leading indicators of that problem: there’s no (perceived) time to do it when it’s urgent; but there’s no urgency to do it when there’s time.

The solution to that conundrum argued here is very simple: slow degradation over time has to be recognised as being as bad as the catastrophic failure which occurs when the degradation hits a tipping point – “we need to make doing nothing as risky as change.”

Simple in concept is, of course, a very long way from being simple to realise, and the lack of attention given to fixing things before they actually break is a problem not limited to service design – slightly more terrifyingly it applies just as much to nuclear weapons (and in another example from that post, to apparently simple services which cross organisational boundaries and which it isn’t quite anybody’s responsibility to fix). Changing that won’t be easy, but that doesn’t make it any less important.