Data and AI

Data as photography

Ansel Adams, adapted by Wesley Goatley

“A visualisation is usually looked at – seldom looked into.” – Ansel Adams
“The sheer ease with which we can produce a superficial visualisation often leads to creative disaster.” – Ansel Adams
“There's nothing worse than a sharp visualisation of a fuzzy concept.” – Ansel Adams
“You don't collect a data set, you make it.” – Ansel Adams
“There are always two people in every data visualisation: the creator and the viewer.” – Ansel Adams
“To make art with data truthfully and effectively is to see beneath the surfaces.” – Ansel Adams
“A great data visualisation is a full expression of what one feels about what is being visualised in the deepest sense, and is, thereby, a true expression of what one feels about life in its entirety.” – Ansel Adams
“Data visualisation is more than a medium for factual communication of ideas. It is a creative art.” – Ansel Adams
“We must remember that a data set can hold just as much as we put into it, and no one has ever approached the full possibilities of the medium.” – Ansel Adams
“Data art, as a powerful medium...offers an infinite variety of perception, interpretation and execution.” – Ansel Adams
“Twelve significant data points in any one year is a good crop.” – Ansel Adams

The idea that the camera does not lie is as old as photography. It has been untrue for just as long.

The exposure of film or sensor to light may be an objective process, but everything which happens before and after that is malleable and uncertain. There are some interesting parallels with data: the same appearance – and assertion – of accurately representing the real world; the same issues of both deliberate and unwitting distortion.

This tweet simply takes some of the things Ansel Adams, the great photographer of American landscapes, wrote about photography and adapts them to be about data. It’s neatly done and provides good food for thought.

Government and politics Service design

Designing digital services that are accountable, understood, and trusted

Richard Pope

This is a couple of years old, but is not in any way the worse for that. It’s an essay (originally a conference presentation), addressed to software developers, seeking to persuade them that in working in software or design, they are inescapably working in politics.

He’s right about that, but the implications for those on the other end of the connection are just as important. If the design of software is not neutral in political or policy terms, then people concerned with politics and policy need to understand this just as much. Thanks to Tom Loosemore for the enthusiastic reminder of its existence.

Strategy

Are we still talking about digital transformation?

Gavin Beckett – Perform Green

Apparently we still are. Whether we should be is another matter. There is certainly a strong case against ‘digital’, my version of which was made in a blog post a couple of years ago, which stated firmly

Digital transformation is important. But it’s important because digital is a means of doing transformation, not because transformation enables digital.

That leaves us with ‘transformation’. Is that a word with enough problems of its own that we should avoid it as well? The case against is clear, and is well articulated in this post: transformation carries implications of one massive co-ordinated effort, of starting with stability, applying the intended change, and then returning to a new and better stability – and none of that happens in the real world. Instead, it’s better to see change from a more agile perspective, neatly summarised in a line quoted in the post

Approaching change in a more evolutionary way may be the best way of making effective progress.  Small steps towards a bigger picture, with wiggle room to alter the path.

Sometimes, though, that bigger picture is big enough to deserve being called transformational. Sometimes the first step is possible only when there is some sense of direction and of scale of ambition. Sometimes radical change is what’s needed – it’s not hard to look around and see systems and organisations crying out for transformation. We should be cautious about discarding the ambition just because, too often, the means deployed to achieve it have fallen short.

Indeed, perhaps the real problem with ‘transformation’ as a word is that it has been applied far too casually to things which haven’t been nearly transformational enough in their ambition. If digital transformation is to mean anything, it has to be more than technology-supported process improvement.

Government and politics Social and economic change

Looking at historical parallels to inform digital rights policy

Justine Leblanc – IF

Past performance, it is often said, is not a guide to future performance. That may be sound advice in some circumstances, but is more often than not a sign that people are paying too little attention to history, over too short a period, rather than that there is in fact nothing to learn from the past. To take a random but real example, there are powerful insights to be had on contemporary digital policy from looking at the deployment of telephones and carrier pigeons in the trenches of the first world war.

That may be an extreme example, but it’s a reason why the idea of explicitly looking for historical parallels for current digital policy questions is a good one. This post introduces a project to do exactly that, which promises to be well worth keeping an eye on.

The value of understanding history, in part to avoid having to repeat it, is not limited to digital policy, of course. That’s a reason for remembering the value of the History and Policy group, which is based on “the belief that history can and should improve public policy making, helping to avoid reinventing the wheel and repeating past mistakes.”

Data and AI

Don’t believe the hype about AI in business

Vivek Wadhwa – VentureBeat

If you want to know why artificial intelligence is like teenage sex, this is the post to read. After opening with that arresting comparison, the article goes on to make a couple of simple but important points. Most real world activities are not games with pre-defined rules and spaces. And for businesses – and arguably still more so for governments – it is critically important to be able to explain and account for decisions and outcomes. More pragmatically, it also argues that competitive advantage in the deployment of AI goes to those who can integrate many sets of disparate data to form a coherent set to which AI can be applied. Most companies – and, again, perhaps even more so most governments – are not very good at that. That might be the biggest challenge of all.

Innovation Service design

… which way I ought to go from here?

Dave Snowden – Cognitive Edge

This is close to the beginning of what is billed as a series of indefinite length on agility and Agility, which we are promised will at times be polemical and curmudgeonly, and which is tangentially illustrated with references to Alice (the one in Wonderland, not the cryptographic exemplar). The first post in the series set some context; this second one focuses on the question of whether short-cycle software production techniques translate to business strategy. In particular, the argument is that scrum-based approaches to agile work best when the problem space is reasonably well understood, and that this will be the case to different extents at different stages of an overall development cycle.

Dave Snowden is best known as the originator of the Cynefin framework, which is probably enough to guarantee that this series will be thought provoking. He positions scrum approaches within the Cynefin complex domain and as a powerful approach – but not the only or uniquely appropriate one. It will be well worth watching his arguments develop.

Service design

Eight things I’ve learnt from designing services for colleagues

Steve Borthwick – DWP Digital

Civil servants are users too. Indeed, as Steph Gray more radically claims, civil servants are people too. And as users, and even more so as people, they have needs. Some of those needs are for purely internal systems and processes, others are as users of systems supporting customer services.

In the second category, the needs of the internal user are superficially similar to the needs of the external user – to gather and record the information necessary to move the service forward. That for a time led to a school of thought that the service for internal and external users should be identical, to the greatest possible extent. But as this post recognises, there is a critical difference between somebody who completes a transaction once a year or once in a blue moon and somebody who completes that same transaction many times a day.

That shouldn’t be an excuse for complexity and confusion: just because people on the payroll can learn to fight their way through doesn’t mean it’s a good idea to make them. But it is one good reason for thinking about internal user needs in their own right – and this excellent post provides seven more reasons why that’s a good thing to do.

Meanwhile, the cartoon here remains timeless – it prompted a blog post almost exactly ten years ago arguing that there is a vital difference between supporting expert users (good) and requiring users to be expert (bad). We need to keep that difference clearly in sight.

Data and AI Government and politics

YouTube, the Great Radicalizer

Zeynep Tufekci – New York Times

This article has been getting extensive and well-deserved coverage over the last few days. Essentially, it is demonstrating that the YouTube recommendation engine tends to lead to more extreme material, more or less whatever your starting point. In  short, “YouTube leads viewers down a rabbit hole of extremism, while Google racks up the ad sales.”

The reason for including it here is not because of the specific algorithm or the specific behaviour it generates. It is because it’s a very clear example of a wider phenomenon. It’s a pretty safe assumption that the observed behaviour is not the result of a cabal of fringe conspirators deep in the secret basements of Google setting out a trail to recruit people into extremist groups or attitudes. The far more obvious motivation is that they are actually trying to tempt people into spending as long as possible watching YouTube videos, because that is how they can put the most advertising in front of the most eyeballs.

In other words, algorithmic tools can have radically unintended consequences. That’s made worse in this case because the unintended consequences are not a sign of the intended goal not being achieved; on the contrary, they are the very means by which that intended goal is being achieved. So it is not just that YouTube has some strong incentives not to fix the problem; the problem may not be obvious to them in the first place.

This is a clear example. But we need to keep asking the same questions about other systems: what are the second order effects, will we recognise them when we see them, and will we be ready to – and able to – address them?
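
To make the dynamic concrete, here is a deliberately crude sketch of how a single-minded engagement objective can drift towards extremes without anyone intending it. It has nothing to do with YouTube’s actual system: the watch-time model, the ‘extremity’ scores and every number in it are invented purely for illustration.

```python
# Toy illustration only: a greedy recommender optimising one engagement metric.
# Nothing here reflects YouTube's real system; the watch-time model and the
# 'extremity' scores are invented for the example.
import random

def predicted_watch_time(extremity: float) -> float:
    """Hypothetical model: more provocative content holds attention a little longer."""
    return 1.0 + 2.0 * extremity + random.uniform(-0.1, 0.1)

def recommend(candidates: list[float]) -> float:
    """Pick whichever candidate is predicted to be watched longest."""
    return max(candidates, key=predicted_watch_time)

# Simulate a short viewing session, starting from fairly mild content.
current = 0.1
for step in range(5):
    # Candidate videos cluster around what the viewer has just watched.
    candidates = [min(1.0, max(0.0, current + random.uniform(-0.1, 0.3)))
                  for _ in range(10)]
    current = recommend(candidates)
    print(f"step {step}: extremity of recommended video = {current:.2f}")
```

Nobody in this sketch has coded ‘radicalise the viewer’; the only objective is watch time, and the drift towards the extreme end of the scale is a by-product of optimising it.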

Presentation and communication Strategy

Strategic thinking with blog posts and stickers

Giles Turnbull

Strategic thinking is best done by thinking out loud, on your blog, over a long period of time.

For someone clocking in with over a thousand blog posts of various shapes and sizes since 2005, that feels like a box well and truly ticked. Whether all that adds up to something which might be called strategic thinking is a rather different question – but that may be because all those blog posts have not yet generated a single sticker.

There’s an important point being made here. Even in a more traditional approach to strategy development, the final document is never the thing which carries the real value: it’s the process of development, and the engagement and debate that entails, which makes the difference. The test of a good strategy is that it helps solve problems, so as the problems change, so should the strategy. Whether that makes blog posts and stickers a sufficient approach to strategy development is a slightly different question. There might be a blog post in that.

Behavioural science Systems

Evidence-based policymaking: is there room for science in politics?

Jennifer Guay – Apolitical

To describe something as ‘policy-based evidence making’ is to be deliberately rude at two levels: first because it implies the use of evidence to conceal rather than illuminate, and secondly because it implies a failure to recognise that evidence should drive policy (and thus, though often less explicitly, politics).

Evidence-based policy, on the other hand, is a thing of virtue for which we should all be striving. That much is obvious to right-thinking people. In recent times, the generality of that thought has been reinforced by very specific approaches. If you haven’t tested your approach through randomised controlled trials, how can you know that your policy making has reached the necessary level of objective rigour?

This post is a thoughtful critique of that position. At one level, the argument is that RCTs tell you less than they might first appear to. At another level, that fact is a symptom of a wider problem, that human life is messy and multivariate and that optimising a current approach may at best get you to a local maximum. That is of course why the social sciences are so much harder than the so-called hard sciences, but that is probably a battle for another day.
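
The local maximum point is easy to see in miniature. The sketch below is purely illustrative, with an invented objective function: greedy improvement of the current approach climbs to the nearest peak and stops there, even when a much better peak exists somewhere else entirely.

```python
import math

def outcome(x: float) -> float:
    """Invented objective: a modest peak near x = 1 and a much better one near x = 4."""
    return math.exp(-(x - 1) ** 2) + 2 * math.exp(-(x - 4) ** 2)

def hill_climb(x: float, step: float = 0.05, iterations: int = 1000) -> float:
    """Greedy local search: keep nudging x while a small move improves the outcome."""
    for _ in range(iterations):
        best_neighbour = max([x - step, x, x + step], key=outcome)
        if best_neighbour == x:
            break
        x = best_neighbour
    return x

x_local = hill_climb(0.0)         # starts on the slope of the modest peak
print(x_local, outcome(x_local))  # settles around x = 1, outcome roughly 1
print(4.0, outcome(4.0))          # the better peak it never finds, outcome roughly 2
```

Randomised trials, on this analogy, tell you very precisely whether a small step up the current hill is worth taking; they say nothing about whether you are on the right hill.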

Government and politics One Team Government Service design

Digital government: reasons to be cheerful

Janet Hughes

This is an energetic and challenging presentation on the state of digital government – or rather of digital government in the UK. It’s available in various formats; the critical thing is to make sure you read the notes as well as look at the slides.

The first part of the argument is that digital government has got to a critical mass of inexorability. That doesn’t mean that progress hasn’t sometimes been slow and painful and it doesn’t mean that individual programmes and even organisations will survive, or even that today’s forecasts about the future of government will be any more accurate in their detail than those of twenty years ago. It does though mean that the questions then and now were basically the right ones even if it has been – and is – a struggle to work towards good answers.

The second part of the argument introduces a neat taxonomy of the stages of maturity of digital government, with the argument that the UK is now somewhere between the integrate and reboot phases. That’s clearly the direction of travel, but it’s perhaps more debatable how much of government even now is at that point of inflexion. The present, like the future, remains unevenly distributed.

Government and politics One Team Government

Understanding Policy Better

Warren Fauvel – Medium

Sketch note by Laura Sorvala

Another paired post – following the argument that policy should be deprecated, this is a much more positive – and remarkably concise – set of statements about what policy is (with some useful hints about what it might be).

Government and politics One Team Government Service design

Forget policy — start with people

Beatrice Karol Burks – Designing Good Things

This is a short polemic against the idea of policy, and by extension against the (self) importance of those who make it. It clearly and strongly makes an important point – but in doing so misses something important about policy and politics.

It is certainly true that starting with people and their needs is a good way of approaching problems. But it doesn’t follow that anything called policy is necessarily vacuous or redundant. Policy making, and indeed politics, is all about making choices, and those choices would still be there even if the options to be considered were better grounded.

None of that makes the practical suggestions in this post wrong. But if we forget policy, we forget something important.

Data and AI Government and politics

A roadmap for AI: 10 ways governments will change (and what they risk getting wrong)

Geoff Mulgan – NESTA

This is a great summary of where AI stands in the hype cycle. Its focus is the application to government, but most of it is more generally relevant. It’s really helpful in drawing out what ought to be the obvious point that AI is not one thing and that it therefore doesn’t have a single state of development maturity.

The last of the list of ten is perhaps the most interesting. Using AI to apply more or less current rules in more or less current contexts and systems is one thing (and is a powerful driver of change in its own right). But the longer term opportunity is to change the nature of the game. That could be a black box dystopia, but it could instead be an opportunity to break away from incremental change and find more radical opportunities to change the system. But that depends, as this post rightly concludes, on not getting distracted by the technology as a goal in its own right, but focusing instead on what better government might look like.

Organisational change Strategy

Management vs managerialism

Chris Dillow – Stumbling and Mumbling

And along comes another one, on similar lines to the previous post on strategies, this time decrying managerialism. Management is good; managerialism tends towards unjustified and unbounded faith in management as a generic skill, to imposing direction and targets from above – and to abstract concepts of strategy and vision. As ever, Chris Dillow hits his targets with gusto.

Another way of putting that is that there is good management and bad management, and that there is not enough of the former and too much of the latter. That sounds trivial, but it’s actually rather important: is there a Gresham’s law of management where bad displaces good, and if there is, what would it take to break it?

Strategy

Why strategy directors shouldn’t write strategies

Simon Parker – Medium

This post is fighting talk to a blog with the title and background of this one. Having a strategy – or at least having a document called a strategy – is an indication of institutional failure: once you get to the stage of having to pay people to describe the organisation to itself and to work out how the pieces fit together, something is already going badly wrong.

At its worst, strategy becomes about attempts to engineer reality to fit a top down narrative through the medium of graphs. … So don’t write strategies. At best they give institutions the time they need to mobilise against the change you want to create

Instead, strategists should go and do something more useful, more concrete, with a much better chance of making real improvements happen.

And yet. The answer to the co-ordination problem can’t in the short term (and the short term is likely to be pretty long) be to fragment organisations to the point where co-ordination is not needed. Even if that were practically and politically feasible, it might just redraw the boundaries of Coasian space leaving the underlying co-ordination problem unchanged, at the cost of sustained distraction from the real purpose. It’s not obvious how small an organisation has to be (or even whether smallness is the key factor) to avoid needing something you might want to call a strategy.

So perhaps the distinction is not that organisations shouldn’t need a strategy; it is that that need shouldn’t degenerate into the endless production of strategies as a self-perpetuating industry. That takes me back to Sophie Dennis’s approach, and in particular to her definition of strategy:

Strategy is a coherent plan to achieve a goal that will lead to significant positive change

That’s something which should have real value – without there needing to be a graph in sight. I’d be pretty confident that Simon has got one of those.

Data and AI

UK police are using AI to make custodial decisions – but it could be discriminating against the poor

Matt Burgess – Wired

In abstract, AI is a transformational technology. It may bring perfect and rigorous decision analysis, sweeping away human foibles. Or it may displace human sensitivity and judgement – and indeed the humans themselves – and usher in an era of opaque and arbitrary decision making.

This article, which focuses on the introduction of AI to Durham Constabulary, is a good antidote to those caricature extremes. Reality is, as ever, messier than that. Predictability and accountability are not straightforward. Humans tend to revert, perhaps unwisely, to confidence in their own judgements. It is not clear that some kinds of data are appropriately used in prediction models at all (though the black boxes of human brains are equally problematic). In short, the application of AI to policing decisions isn’t simple and clear-cut; it is instead a confused and uncertain set of policy problems. That shouldn’t be surprising.

Social and economic change Strategy

Pivoting ‘the book’ from individuals to systems

Pia Waugh – Pipka

It’s a sound generalisation that people do the best they can within the limits of the systems they find themselves in. That best may include pushing at those limits, but even if it does, that doesn’t make them any less real. Two things follow from that. The first is that it is pointless blaming individuals for operating within the constraints of the system. The second is that if you want to change the system, you have to change the system.

That’s not to say that people are powerless or that we can all resign personal and moral accountability. On the contrary, the systems are themselves human constructs and can only be challenged and changed by the humans who are actors within them. That’s where this post comes in: it is in effect a prospectus for a not yet written book. What different systems do changes in social, economic and technological contexts demand, and where are the contradictions which need to be resolved? The book, when it comes, promises to be fascinating; the post is well worth reading in its own right in the meantime.

Systems

Why can’t we make the trains run on time?

Paul Clarke – honestlyreal

On a morning when transport is disrupted across the UK by snow and cold winds, it’s worth returning to this post from a few years ago, which explains why small amounts of snow here are so much more disruptive than the much larger amounts which are easily managed elsewhere. In short, the marginal cost of being ready for severe weather, when there isn’t very much of it, isn’t justified by the benefits from another day or two a year of smooth operations. That is a very sensible trade-off – the existence of which is immediately forgotten when the bad weather arrives.

It’s a trade-off with much wider application than snow-covered railway tracks. Once you start looking, it can be seen in almost every area of public policy, culminating in the macro view that everybody (it is asserted) wants both lower taxes and better services. Being more efficient is the way of closing the gap which is simultaneously the right thing to do and an excellent way of ducking the question, but at best it shifts the parameters without fundamentally changing the nature of the problem. Hypothecation is a related sleight of hand – let’s have more money, but only for virtuous things. In the end, though, public policy is about making choices. And letting the trains freeze up from time to time is a better choice than it appears in the moment to the people whose trains have failed to come.
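
To make the railway version of the trade-off concrete, here is a back-of-envelope sketch. Every figure in it is invented for the sake of the arithmetic; the point is the shape of the comparison, not the numbers.

```python
# Hypothetical numbers only: the comparison, not the figures, is the point.
cost_of_readiness = 50_000_000        # assumed annual cost of full winter readiness
severe_days_per_year = 2              # assumed frequency of severe snow days
cost_per_disrupted_day = 15_000_000   # assumed economic cost of one day of chaos

expected_benefit = severe_days_per_year * cost_per_disrupted_day

print(f"Expected annual benefit of readiness: £{expected_benefit:,}")
print(f"Annual cost of readiness:             £{cost_of_readiness:,}")
print("Readiness pays its way:", expected_benefit > cost_of_readiness)  # False on these numbers
```

On numbers like these, shrugging and accepting the occasional frozen morning is the rational choice; it just never feels that way on the platform.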

Data and AI Future of work

How AI will transform the Digital Workplace (and how it already is)

Sharon O’Dea – Intranetizen

AI is often written about in terms of sweeping changes resulting in the wholesale automation of tasks and jobs. But as this post sets out, there is also a lower-key version, where forms of AI appear as feature enhancements (and thus may not be apparent at all). Perhaps self-generating to-do lists are the real future – though whether that will be experienced as liberation or enslavement is very much a matter of taste. Either way, AI won’t be experienced as robots breaking into the building to take our jobs; instead tasks will melt away, enhanced in ways which never quite feel revolutionary.