Service design

Eight things I’ve learnt from designing services for colleagues

Steve Borthwick – DWP Digital

Civil servants are users too. Indeed, as Steph Gray more radically claims, civil servants are people too. And as users, and even more so as people, they have needs. Some of those needs are for purely internal systems and processes; others come from being users of systems that support customer services.

In the second category, the needs of the internal user are superficially similar to the needs of the external user – to gather and record the information necessary to move the service forward. For a time, that led to a school of thought that the service for internal and external users should be as close to identical as possible. But as this post recognises, there is a critical difference between somebody who completes a transaction once a year or once in a blue moon and somebody who completes that same transaction many times a day.

That shouldn’t be an excuse for complexity and confusion: just because people on the payroll can learn to fight their way through doesn’t mean it’s a good idea to make them. But it is one good reason for thinking about internal user needs in their own right – and this excellent post provides seven more reasons why that’s a good thing to do.

Meanwhile, the cartoon here remains timeless – it prompted a blog post almost exactly ten years ago arguing that there is a vital difference between supporting expert users (good) and requiring users to be expert (bad). We need to keep that difference clearly in sight.

Data and AI · Government and politics

YouTube, the Great Radicalizer

Zeynep Tufekci – New York Times

This article has been getting extensive and well-deserved coverage over the last few days. Essentially, it demonstrates that the YouTube recommendation engine tends to lead to more extreme material, more or less whatever your starting point. In short, “YouTube leads viewers down a rabbit hole of extremism, while Google racks up the ad sales.”

The reason for including it here is not the specific algorithm or the specific behaviour it generates. It is that this is a very clear example of a wider phenomenon. It’s a pretty safe assumption that the observed behaviour is not the result of a cabal of fringe conspirators deep in the secret basements of Google setting out a trail to recruit people into extremist groups or attitudes. The far more obvious explanation is that Google is trying to tempt people into spending as long as possible watching YouTube videos, because that is how it can put the most advertising in front of the most eyeballs.

In other words, algorithmic tools can have radically unintended consequences. That is made worse in this case because the unintended consequences are not a sign of the intended goal not being achieved; on the contrary, they are the very means by which that goal is being achieved. So it is not just that YouTube has strong incentives not to fix the problem: the problem may not even be obvious to them in the first place.

This is a clear example. But we need to keep asking the same questions about other systems: what are the second-order effects, will we recognise them when we see them, and will we be ready to – and able to – address them?