Machine Learning Meets Public Policy: What to Expect and How to Cope

Ed Felten

This is the video of a conference talk by Ed Felten, which is fascinating for a number of reasons. He has been thinking hard about technology and the policy consequences of technology for a very long time, and doing so with deep technical expertise (on the explicability of algorithms, to take just one example).

But he has also been at the heart of the intersection of technology and public policy – a one-man One Team Government – including a couple of years in the Obama White House. This talk is primarily about how machine learning lands in a public policy context, and it is addressed in the first instance to an audience at a big AI conference, whose perspective can be assumed to be technical.

Given that, the starting point is to underline a critical difference in perspective. At least in principle, science and engineering are about a search for truth. Democracy is not just not a search for truth, it is not really a search for anything. And that difference is simultaneously obvious, a strength, and a source of deep confusion and misunderstanding.

Democracy is not a search for truth; it is an algorithm for resolving disagreements

But this talk is interesting not just to an audience of technologists having the world of public policy explained by one of their own who has ventured into a strange and distant land. Given the importance of AI and machine learning – and indeed of technology change more generally – to almost every aspect of policy, it is just as important for policy makers and players in the democratic process to understand how their world is perceived. And from that perspective, this is a fascinating account of a strange world by a participant-observer who has retained his distance and brings a distinct professional perspective.

Toolkit Navigator

OECD Observatory of Public Sector Innovation

This rather mundane title is the gateway to a rich set of resources – a compendium of tools for public sector innovation and transformation, as the site’s subtitle has it. It’s a library organised by topics and actions, as well as supporting connections between people working on public sector innovation round the world. Its very richness has the potential to be a bit overwhelming, so it’s well worth starting with a very clear blog post by Angela Hanson which introduces the approach OECD has taken.

Attempting to teach parliamentary procedure to machines

Michael Smethurst

There’s no getting away from the fact that parliamentary procedure is pretty arcane and that modelling that procedure adds a still more arcane overlay. But this is a beautifully reflective post which wears deep expertise very lightly to share thinking which is relevant well beyond the immediate parliamentary context.

Two points which should resonate far beyond the Palace of Westminster are worth pulling out. One is that parliamentary processes may have some extreme characteristics, but they also have some characteristics which people involved with other kinds of information flows will instantly recognise. It may or may not be possible to express definitively how the system should work; for different reasons it may or may not be possible to capture in detail how it does work, particularly if that is in some circumstances indeterminate. But taking an almost anthropological approach to understanding systems is both an art form and an investment which needs to be made.

The second is that for all the power of starting with user needs, that is necessarily limited if some kinds of needs come into being only as a result of building a system which satisfies them. In a nice nod to George Box, the post ends with a bold claim for the art of system modelling:

The models are only ever maps, but if they’re good enough to be useful they can be useful in ways the map designers never considered. No amount of requirements gathering or user research will ever compensate for omitting the work on modelling, because user needs are emergent from use and emergent from materials.

Too many projects, too much change

Naomi Stanford

Prioritisation is hard. One reason why it’s hard is that starting new things is always more attractive than stopping old ones. There are all sorts of reasons for that – many nicely set out in this post – which include the ease with which we overlook the opportunity cost: if we start this new thing, what do we no longer have the capacity or attention span to do? That of course is a problem for the organisation as a whole, not for the proponents of the new shiny thing, so it all too easily becomes one which is brushed aside, because there isn’t anybody whose job is to address it.

There is a closely related problem, pithily described, it appears, by Kurt Vonnegut:

Another flaw in the human character is that everybody wants to build and nobody wants to do maintenance.

That can have consequences ranging from the irritatingly inefficient to the utterly terrifying, all of them feeding the wider problem: the more change there is going on, the more likely it is that the changes will collide with each other unproductively, and the more important it becomes to understand and manage the dependencies and interactions between projects, as much as to understand and manage each of the contributing initiatives.

Government and digital technologies: the collision of two galaxies

Mark Foden

This is thinking at epic scale.

Consider the Milky Way crashing into Andromeda about four billion years from now, taking another billion years to establish some form of stability.

Now consider the unstoppable force of technological (and social) change colliding with the established cultures and practices of government.

Now reflect on how good a metaphor the first is for the second. There’s probably less than four billion years to wait until we find out.

Ethics won’t make software engineering better

Rachel Coldicutt – Doteveryone

The subtitle of this post lays down a challenge:

Why a social scientist could be the most important person on your product team

Leaving aside the point that it might be an even better challenge if ‘philosopher’ were substituted for ‘social scientist’, this is an important issue. There is much talk (and much writing) about the need for ethics in data and software – though curiously rather less so in service design, where it is no less important.

But ethics is not some esoteric form of quality assurance added as a final overlay to activities otherwise devoid of any moral compass. It is perhaps better understood (in this context) as the encapsulation of a deep and pervasive view that technology should work for humanity, not the other way round.

What would computer science look like if it included the perspective of humanities and social sciences from the outset? And what if that perspective came not from some thinker in residence, but from people who brought a fusion of perspectives and understanding to problem solving?

And whatever the answers to those questions might be, there is a wider one still: where does that fusion not have a place? The Amalgamated Union of Philosophers, Sages, Luminaries, and Other Professional Thinking Persons may be due for a resurgence.

Is this AI? We drew you a flowchart to work it out

Karen Hao – MIT Technology Review

What is artificial intelligence? It’s a beguilingly simple question, but one which lacks a beguilingly simple answer. There’s more than one way to approach the question, of course – Chris Yiu provides mass exemplification, for example (his list had 204 entries when first linked from here in January, but has now grown to 501). Terence Eden more whimsically dives down through the etymology, while Fabio Ciucci provides a pragmatic approach based on the underlying technology.

This short post takes a different approach again – diagnose whether what you are looking at is AI by means of a simple flowchart. It’s a nice idea, despite inviting some quibbling about some of the detail (“looking for patterns in massive amounts of data” doesn’t sound like a complete account of “reasoning” to me). And it’s probably going to need a bigger piece of paper soon.
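
As a purely illustrative aside, the logic of such a flowchart is just a small decision tree, and it can be sketched in a few lines of code. The questions and verdicts below are invented for this note rather than taken from the article, so treat this as a toy rendering of the idea, not the chart itself.

```python
# A toy, hypothetical rendering of the flowchart idea: a handful of yes/no
# questions walked through in order, ending in a rough verdict. The questions
# and wording are invented for illustration, not taken from the original chart.

def looks_like_ai(can_perceive: bool, learns_from_data: bool,
                  reasons_about_goals: bool) -> str:
    """Walk a very small decision tree and return a rough verdict."""
    if not (can_perceive or learns_from_data or reasons_about_goals):
        return "Probably not AI - it's just software"
    if learns_from_data and not reasons_about_goals:
        # This is the branch the quibble above is about: pattern-finding
        # alone is a narrow stand-in for 'reasoning'.
        return "Machine learning, which most people would call AI"
    if reasons_about_goals:
        return "Closer to the textbook definition of AI"
    return "Borderline - it depends on your definition"


if __name__ == "__main__":
    print(looks_like_ai(can_perceive=False, learns_from_data=True,
                        reasons_about_goals=False))
```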

A new machine age beckons and we are not remotely ready

Benedict Dellot and Fabian Wallace-Stephens – RSA

This is a refreshing post about the implications of work being displaced by machines, which isn’t about the work, the displacement, or the machines. Instead it puts forward a range of suggestions about what would need to be in place to make the consequences of that displacement socially and economically beneficial.

The ideas themselves are still fairly undeveloped at this stage – this is more a prospectus of issues to be explored than the substantive exploration – but even in embryonic form, they demonstrate that a wider range of responses is possible than is often assumed. At first sight, some of the ideas look considerably more robust than others, but regardless of their specific merits, being imaginative about ways of dealing with the consequences of technology change must be a better strategy than trying to impede it.

Digital By Default, Lonely By Design

Rich Denyer-Bewick – CitizensOnline

We are better connected than ever before, through a bewildering array of devices and networks. And loneliness is an acute problem, undermining wellbeing and health. This post both explores that paradox and focuses more directly on its implications for the design of public services.

There is an apparently happy alignment between the improvements to quality which come from putting services online and the consequential efficiency savings which accrue to hard-pressed public sector delivery organisations. But the reduction in human interaction which follows is a fundamental and deliberate feature of the new service design. It surely can’t be right that an occasional conversation with a harried bureaucrat is what staves off the adverse effects of loneliness – but it is always worth remembering that making services more impersonal is likely to fall disproportionately on those who are most vulnerable and most in need of support.

Why do so many digital transformations fail?

Michael Graber – Innovation Excellence

This short post asks a question which falls to be answered all too often. The answer it gives is that failure comes from the misperception that the most important thing about digital transformation is that it is digital:

Digital transformations are actually transformations of mindset, business model, culture, and operations. These are people problems, in the main, not technology issues.

AI, work and ‘outcome-thinking’

Richard Susskind – The British Academy Review

The debate about the scale of the impact of automation on employment rumbles on. Opinions vary enormously both on the numbers and types of jobs affected and on the more esoteric question of whether jobs or tasks are the more useful unit of measurement.

This short article neatly sidesteps that debate altogether. Its focus is on outcomes, the things we want to achieve. They will remain unchanged even as the means of achieving them changes radically. So the core question is not whether the way humans achieve the outcome can be replicated by robots and AI, but rather whether there is an alternative – and perhaps very different – way of achieving the same outcome in a way which is optimised for machines, not people.

Framing the question that way does two things. The first is that it brings some much needed clarity to a complex issue. The second is that all of us who have been congratulating ourselves on our irreplaceability need to start worrying much sooner than we might have thought.

Internet-era ways of working

Tom Loosemore – Public Digital

This is a deceptively simple list which describes ways of working in internet-era organisations. The GDS design principles are clearly among its antecedents, but this is a broader and deeper approach, setting out how to make things work – and work well – in organisations. It’s hard to argue with the thrust of the advice given here, and in any case it ends with an admonition to do sensible things in the right way rather than stick rigidly to the rules.

That doesn’t put the list beyond criticism, both in its detail and in its overall approach, though it does have the happy consequence that challenge and consequent improvement are themselves part of the model being advocated. With that starting point, there are a couple of places where a further iteration could improve things.

One is the instruction to treat data as infrastructure. The thought behind that is a good one: data matters, and it matters that it is managed well. Well ordered data is part of infrastructure at every level from the national (and international) downwards. But data is also part of the superstructure. Managing, processing, and creating value out of data are fundamental to the purpose and activities of organisations. Both aspects need to be understood and integrated.

A more subtle issue is that while it might be clear what counts as good internet-era ways of working, much of that work happens in organisations which are barely of the internet era at all. Precisely because it does challenge established approaches, established power structures and established infrastructure of every kind, the path to adoption is far from straightforward. Looked at in that light, this list is oddly impersonal: it is couched in the imperative, but without being clear who the orders are addressed to. There is a dimension of behavioural and organisational change which never quite makes it to the centre of the narrative, but which for organisations which are not native to the internet era is critically important.

None of that is a reason for not following the advice given here. But some of it might be part of the explanation of why it needs to be given in the first place.

Show Me Your Data and I’ll Tell You Who You Are

Sandra Wachter – Oxford Internet Institute

The ethical and legal issues around even relatively straightforward, objectively factual personal data are complicated enough. But they seem simple beside the further complexity brought in by inferences derived from that data. Inferences are not new, of course: human beings were drawing inferences about each other long before they had the assistance of machines. But as in other areas, big data makes a big difference.

Inferences are tricky for several reasons. The ownership of an inference is clearly something different from ownership of the information from which the inference is drawn (even supposing that it is meaningful to talk about ownership in this context at all). An inference is often a propensity, which can be wrong without being falsifiable – ‘people who do x tend to like y’ may remain true even if I do x and don’t like y. And all that gets even more tricky over time – ‘people who do x tend to become y in later life’ can’t even be denied or contradicted at the individual level.
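
A minimal sketch, using wholly invented data, of why that is: the inference is a statement about a group, so an individual counterexample nudges the number slightly without making the claim false – yet the group-level propensity will still be applied to that individual.

```python
# A toy illustration of a propensity-style inference. The data is invented.
people = [
    {"does_x": True,  "likes_y": True},
    {"does_x": True,  "likes_y": True},
    {"does_x": True,  "likes_y": False},  # I do x and don't like y...
    {"does_x": False, "likes_y": False},
]

x_doers = [p for p in people if p["does_x"]]
propensity = sum(p["likes_y"] for p in x_doers) / len(x_doers)

# ...yet 'people who do x tend to like y' still holds at the group level,
# and is the statement that will be inferred about me.
print(f"P(likes y | does x) = {propensity:.2f}")  # 0.67
```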

This lecture explores those questions and more, examining them at the intersection of law, technology and ethics – and then asks what rights we, as individuals, should have about the inferences which are made about us.

The same arguments are also explored in a blog post written by Wachter with her collaborator Brent Mittelstadt and in very much more detail in an academic paper, also written with Mittelstadt.

How solid is Tim’s plan to redecentralize the web?

Irina Bolychevsky – Medium

As a corollary to the comment here a few weeks back on Tim Berners-Lee’s ideas for shifting the power balance of the web away from data-exploiting conglomerates and back towards individuals, this post is a good clear-headed account of why his goal – however laudable – may be hard to achieve in practice.

What makes it striking and powerful is that it is not written from the perspective of somebody critical of the approach. On the contrary, it is by a long-standing advocate of redecentralising the internet, but one who has a hard-headed appreciation of what would be involved. It is a good critique, for example addressing the need to recognise that data does not map perfectly to individuals (and therefore what data counts as mine is nowhere near as straightforward as might be thought) and that for many purposes the attributes of the data, including the authority with which it is asserted, can be as important as the data itself.

One response to that and other problems could be to give up on the ambition for change in this area, and leave control (and thus power) with the incumbents. Instead, the post takes the more radical approach of challenging current assumptions about data ownership and control at a deeper level, arguing that governments should be providing the common, open infrastructure which would allow very different models of data control to emerge and flourish.

Clearing the fog: Using outcomes to focus organisations

Kate Tarling and Matti Keltanen – Medium

This post is a deep and thoughtful essay on why large organisations struggle to find a clear direction and to sustain high quality delivery. At one level the solution is disarmingly simple: define what success looks like, work out how well the organisation is configured to deliver that success, and change the configuration if necessary – but in the meantime, since reconfiguration is slow and hard, be systematic and practical at developing and working through change.

If it were that easy, of course, everybody would have done it by now and all large organisations would be operating in a state of near perfection. Simple observation tells us that that is not the case, and simple experience tells us that it is not at all easy to fix. This post avoids the common trap of suggesting a simple – often simplistic – single answer, but instead acknowledges the need to find ways of moving forward despite the aspects of the organisational environment which hold things back. Even more usefully, it sets out an approach for doing that in practice based on real (and no doubt painful) experience.

If there were a weakness in this approach, it would be in appearing to underestimate some of the behavioural challenges, partly because the post notes, but doesn’t really address, the different powers and perspectives which come from different positions. The options – and frustrations – of a chief executive or board member are very different from those elsewhere in the organisation who may feel some of the problems more viscerally but find it harder to identify points of leverage to change things. The argument that in the absence of structures aligned to outcomes and goals we should fall back to alignment around purpose is a strong one, but the challenge of even achieving the fallback shouldn’t be underestimated.

It’s a pretty safe bet though that anybody struggling to find ways of helping large organisations to become fully effective will find ideas and insights here which are well worth reflecting on.

The art and practice of intelligence design

Martin Stewart-Weeks – Public Purpose

Geoff Mulgan has written a book about the power of collective intelligence. Martin Stewart-Weeks has amplified and added to Geoff’s work by writing a review. And now this note may spread attention and engagement a little further.

That is a ridiculously trite introduction to a deeply serious book. Spreading, amplifying, challenging and engaging with ideas and the application of those ideas are all critically important, and it’s hard to imagine serious disagreement with the proposition that it’s the right thing to do. But the doing of it is hard, to put it mildly. More importantly, that’s only one side of the driving problem: how do unavoidably collective problems get genuinely collective solutions? And in the end, that question is itself just such a problem, demanding just such a solution. Collectively, we need to find it. It’s well worth reading the book, but this review is a pretty good substitute.

Payday loans and the missed opportunity?

Jerry Fishenden – ntouk

Don’t be misled by the title: this isn’t really a post about payday loans. Instead, it explores the fascinating contrast between the approaches HMRC (for tax) and DWP (for benefits) have taken to opening their services to third parties. The basic story is pretty simple: HMRC has a long pre-internet history of working with third party intermediaries which it carried forward into its thinking about online services (at one stage its ambition was not to offer an online tax return service directly at all); DWP’s history is much more about direct delivery, and that tradition has similarly been carried forward into the online world. The post makes no pretence to neutrality on the central question of which was the better choice: HMRC is clearly seen to have won that argument hands down.

The post is good on the advantages of the open method and the opportunities that could create (including the ethical payday loans of the title). But it doesn’t address the fairly central question of whether there is a reason for the difference. After all, HMRC’s administration of tax credits, which are a benefit in everything but name, didn’t get the same open treatment as their revenue-raising lines of business. The question of whether a version of HMRC’s trust model for taxpayers and their agents could be translated to the benefits system is one well worth further reflection.

Why Futurism Has a Cultural Blindspot

Tom Vanderbilt – Nautilus

This post is more a string of examples than a fully constructed argument but is none the worse for that. The thread which holds the examples together is an important one: predicting the future goes wrong because we misunderstand behaviour, not because we misunderstand technology.

A couple of points stand out. One is the mismatch between social change and technology change: the shift of technology into the workplace turned out to be much easier to predict than the movement of women into the workplace. That’s a specific instance of the more general point that we both under- and over-predict the future. A second is that we over-weight the innovative in thinking about the future (and about the past and present); as Charlie Stross describes it, the near future comprises three parts: 90% of it is just like the present, 9% is new but foreseeable developments and innovations, and 1% is utterly bizarre and unexpected.

None of that is a reason for abandoning attempts to think about the future. But the post is a strong – and necessary – reminder of the need to keep in mind the biases and distortions which all too easily skew the attempt.

Real-time government

Richard Pope – Platform Land

New writing from Richard Pope is always something to look out for: he has been thinking about and doing the intersection of digital and government more creatively and for longer than most. This post is about the myriad ways in which government is not real time – you can’t track the progress of your benefit claim in anything like the way in which you can track your Amazon delivery. And conversely, at any given moment, Amazon has a very clear picture of who its active customers are and what they are doing, in a way which is rather less true of operators of government services.

He is absolutely right to make the point that many services would be improved if they operated – or at least conveyed information – in real time, and he is just as right that converted (rather than transformed) paper processes and overnight batch updates account for some of that. So it shouldn’t detract from his central point to note that some of his examples are slightly odd ones, which may come from an uncharacteristic confusion between real time and event triggered. There is a notification to potential school leavers of their new national insurance number – but since children’s sixteenth birthdays are highly predictable, that notification doesn’t need to be real time in the sense meant here. It was very useful to be told that my passport was about to expire – but since they were helpfully giving me three months’ notice, the day and the hour of the message were pretty immaterial.
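
To make that distinction concrete, here is a toy sketch with invented details (not drawn from the post): the national insurance letter is triggered by a date known years in advance, whereas tracking a claim means reporting its latest recorded state whenever it changes.

```python
# A hypothetical contrast between an event-triggered notification and a
# (roughly) real-time status view. All names and details are illustrative.
from datetime import date, timedelta


def ni_notification_due(date_of_birth: date, today: date) -> bool:
    """Event-triggered: fire shortly before a wholly predictable 16th birthday."""
    sixteenth_birthday = date_of_birth.replace(year=date_of_birth.year + 16)
    return today >= sixteenth_birthday - timedelta(days=30)


def claim_status(events: list[str]) -> str:
    """Real-time-ish: report the latest recorded state of a claim as it changes."""
    return events[-1] if events else "no claim on record"


if __name__ == "__main__":
    print(ni_notification_due(date(2009, 3, 1), date.today()))
    print(claim_status(["received", "evidence requested", "in assessment"]))
```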

Of course there are government services which should operate on less leisurely cycles than that, and of course those services should be as fast and as transparent as they reasonably can be. But perhaps the real power of real-time government is from the other side, less in shortening the cycle times of service delivery and much more in shortening the cycle times of service improvement.

The Ultimate Guide to Making Smart Decisions

Shane Parrish – Farnam Street

Who could not want not just any guide to making smart decisions, but the ultimate guide? That’s a big promise, but there is some substance to what is delivered. The post itself briskly covers categories of bad decisions before moving on to extensive sets of links to material on thinking in general and decision making in particular. I can’t imagine anyone wanting to work through all of that systematically, but if you need a way of homing in on an aspect of or approach to the subject, this could be a very good place to start.