This post – which is actually a set of tweets gathered together – is a beautifully short and simple explanation of why some basic stuff really matters in efficiently integrating data and the services it supports (and is actually quite important as well in ensuring that things don’t get joined up which shouldn’t be). Without common identifiers, simple and value-adding connections become difficult, expensive and unreliable – a point powerfully made in a post linked from that one, which sets out a bewildering array of unique identifiers for property in the UK. They are definitely unique in the sense that there is a one-to-one mapping between identifier and place, but ludicrously far from unique in their proliferation.
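The point can be made concrete with a minimal sketch. Everything here is invented for illustration – the identifier values, the field names and the addresses are hypothetical, not drawn from any real property register – but it shows why a shared key makes joining trivial, while its absence forces fragile string matching.

```python
# Two hypothetical datasets describing the same properties: one pair keyed
# on a shared identifier (an imagined UPRN-style key), one pair keyed on
# free-text addresses that have drifted apart in spelling.

council_tax = {
    "100023336956": {"band": "D"},
    "100023336957": {"band": "E"},
}
energy_ratings = {
    "100023336956": {"epc": "C"},
    "100023336957": {"epc": "B"},
}

# With a common identifier, the join is a one-liner and fully reliable.
joined = {
    uprn: {**council_tax[uprn], **energy_ratings[uprn]}
    for uprn in council_tax.keys() & energy_ratings.keys()
}

# Without one, the same join collapses into string matching, which fails
# as soon as the two sources describe the place slightly differently.
by_address_a = {"1 High St, Leeds": "D"}
by_address_b = {"1 High Street, Leeds": "C"}  # same place, different spelling
matches = by_address_a.keys() & by_address_b.keys()

print(len(joined))   # → 2: both records link cleanly on the shared key
print(len(matches))  # → 0: the address-based join finds nothing
```

The expensive and unreliable part of real data integration is, in effect, the work of reconstructing that missing shared key after the fact.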
There is a huge appetite for making more effective use of data. The appetite needs to be as strong for creating the conditions which make that possible.
Jeni Tennison – Exponential View
Azeem Azhar’s Exponential View is one of the very few weekly emails which earns regular attention, and it is no disrespect to him to say that the occasional guest authors he invites add further to the attraction. This edition is by Jeni Tennison, bringing her very particular eye to the question of data ownership.
Is owning data just like owning anything else? The simple answer to that is ‘no’. But if it isn’t, what does it mean to talk about data as property? To which the only simple answer is that there is no simple answer. This is not the place to look for detailed exposition and analysis, but it is very much the place to look for a set of links to a huge range of rich content, curated by somebody who is herself a real expert in the field.
Guido Noto La Diega
This is by way of a footnote to the previous post – a bit more detail on one small part of the enormous ecosystem described there.
If you buy an Amazon Echo then, partly depending on what you intend to do with it, you may be required to accept 17 different contracts, amounting to close to 50,000 words, not very far short of the length of a novel. You will also be deemed to be monitoring them all for any changes, and to have accepted any such changes by default.
That may be extreme in length and complexity, but the basic approach has become normal to the point of invisibility. That raises a question about the reasonableness of Amazon’s approach. But it raises a much more important question about our wider approach to merging new technologies into existing social, cultural and legal constructs. This suggests, to put it mildly, that there is room for improvement.
(note that the link is to a conference agenda page rather than directly to the presentation, as that is a 100MB download, but if needed this is the direct link)
Kate Crawford and Vladan Joler
An Amazon Echo is a simple device. You ask it to do things, and it does them. Or at least it does something which quite a lot of the time bears some relation to the thing you ask it to do. But of course in order to be that simple, it has to be massively complicated. This essay, accompanied by an amazing diagram (or perhaps better to say this diagram, accompanied by an explanatory essay), is hard to describe and impossible to summarise. It’s a map of the context and antecedents which make the Echo possible, covering everything from rare earth geology to the ethics of gathering training data.
It’s a story told in a way which underlines how much seemingly inexorable technology in fact depends on social choices and assumptions, where invisibility should not be confused with inevitability. In some important ways, though, invisibility is central to the business model – one aspect of which is illustrated in the next post.
Tristan Greene – The Next Web
One of the many concerns about automated decision making is its lack of transparency. Particularly (but by no means only) for government services, accountability requires not just that decisions are well based, but that they can be challenged and explained. AI black boxes may be efficient and accurate, but they are not accountable or transparent.
This is an interesting early indicator that those issues might be reconciled. It’s in the special – and much researched – area of image recognition, so a long way from a general solution, but it’s encouraging to see systematic thought being addressed to the problem.
Richard Watson and Anna Cupani – Imperial Tech Foresight
Here are a hundred disruptive technologies, set out in waves of innovation, with time to ubiquity on one axis and potential for disruption on the other. On that basis, smart nappies appear in the bottom left corner, as imminent and not particularly disruptive (though perhaps that depends on just how smart they are and on who is being disrupted), while towards the other end of the diagonal we get to transhuman technologies – and then who knows what beyond.
The authors are firm that this is scientific foresight, not idle futurism, though that’s an assertion which doesn’t always stand up to close scrutiny. Planetary colonisation is further into the future than implantable phones, but will apparently be less disruptive when it comes. Dream recording falls into the distant future category (rather than fringe science, where it might appear more at home), rather oddly on the same time scale but three levels of disruption higher than fusion power.
The table itself demonstrates that dreams are powerful. But perhaps not quite that powerful. And it’s a useful reminder, yet again, that technology change is only ever partly about the technology, and is always about a host of other things as well.
David Eaves and Ben McGuire – Governing
Governments should move slowly and try not to break things. That’s a suggestion slightly contrary to the fashionable wisdom in some quarters, but has some solid reasoning behind it. There are good reasons for governments not to be leading edge adopters – government services should work; innovation is not normally a necessary counter to existential threats; service users are not able to trade stability for excitement.
That’s not an argument against innovation, but it is an argument for setting pace and risk appropriately. As a result, this post argues, the skills government needs are less to do with cutting edge novelty, and much more to do with identifying and adopting innovations from elsewhere.
Matt Novak – Paleofuture
As a small footnote to the previous post, this is precisely what the title describes – predictions of all shapes, sizes and times about what robots should be doing, but aren’t. The inexorable path of technology doesn’t always lead where we like to think it does.
Paris Marx – Medium
If you fall into the trap of thinking that technology-driven change is about the technology, you risk missing something important. No new technology arrives in a pristine environment; there are always complex interactions with the existing social, political, cultural, economic, environmental and no doubt other contexts. This post is a polemic challenging the inevitability – and practicality – of self-driving cars, drawing very much on that perspective.
The result is something which is interesting and entertaining in its own right, but which also makes a wider point. Just as it’s not technology that’s disrupting our jobs, it’s not technology which determines how self-driving cars disrupt our travel patterns and land use. And over and over again, the hard bit of predicting the future is not the technology but the sociology.
Rose Hollister and Michael Watkins – Harvard Business Review
The hardest bit of strategy is not thinking up the goal and direction in the first place. It’s not even identifying the set of activities which will move things in the desired direction. The hardest bit is stopping all the things which aren’t heading in that direction or are a distraction of attention or energy from the most valuable activities. Stopping things is hard. Stopping things which aren’t necessarily failing to do the thing they were set up to do, but are nevertheless not the most important things to be doing, is harder. In principle, it’s easier to stop things before they have started than to rein them in once they have got going, but even that is pretty hard.
In all of that, ‘hard’ doesn’t mean hard in principle: the need, and often the intention, is clear enough. It means instead that observation of organisations, and particularly larger and older organisations, provides strong reason to think that it’s hard in practice. Finding ways of doing it better is important for many organisations.
This article clearly and systematically sets out what underlies the problem, what doesn’t work in trying to solve it – and offers some very practical suggestions for what does. Practical does not, of course, mean easy. But if we don’t start somewhere, project sclerosis will only get worse.
Catherine Howe – Curious?
The eight tribes of digital (which were once seven) have become nine.
The real value of the tribes – other than that they are the distillation of four years of observation, reflection and synthesis – lies not so much in whether they are definitively right (which pretty self-evidently they aren’t, and can’t be) as in their use as a prompt for understanding why individuals and groups might behave as they do. And of course, the very fact that there can be nine kinds of digital is another way of saying that there is no such thing as digital.
Margaret Boden – Aeon
The phrase ‘artificial intelligence’ is a brilliant piece of marketing. By starting with the artificial, it makes it easy to overlook the fact that there is no actual intelligence involved. And if there is no intelligence, still less are there emotions or psychological states.
The core of this essay is the argument that computers and robots do not, and indeed cannot, have needs or desires which have anything in common with those experienced by humans. In the short to medium term, that has both practical and philosophical implications for the use and usefulness of machines and the way they interact with humans. And in the long term (though this really isn’t what the essay is about), it means that we don’t have to worry unduly about a future in which humanity survives – at best – as pets of our robot overlords.
Catherine Howe – Curious?
An odd thing about many large organisations is that change is seen as different from something called business as usual. That might make a kind of sense if change were an anomalous state, quickly reverting to the normality of stasis, but since it isn’t, it doesn’t.
If change is recognised as an essential element of business as usual, then lots of other ideas drop easily into place. One of the more important ones is that it allows and encourages better metaphors. The idea of change as something discrete which starts and stops, which has beginnings and ends, encourages mechanical parallels: like a machine, it can be turned on and off; like a machine, controlling the inputs will control the outputs. But if change permeates, if organisations and their environments are continually flexing, then metaphors naturally become more organic: the pace of change ebbs and flows; organisations adapt as a function of their place in a wider ecosystem; change is just part of what happens, not some special extra thing.
From that perspective, it’s a small step to recognising that there is real power in thinking about organisational change in terms of systems. But it’s a small step with big consequences, and those consequences are what this post is all about.
The world of system change provides a different framing of organisational change and a way of seeing it as part of an organic process, not something that is bolted onto an organisation. The simple but powerful shift from process to purpose can make a profound difference to how you go about engaging the networks that already exist within your organisation. Once we acknowledge and bring to the fore the networks that make up our organisations and the systems they create, can we ever really deny that all change is system change?
Louis Hyman – The New York Times
This is a good reminder that the development and, even more, the application of technology are always driven by their social, economic and political context. There is a tendency to see technological change as somehow natural and unstoppable, which is dangerous not because it is wholly wrong, but because it is partly right and so can easily be confused with being wholly right.
New technologies cannot be uninvented (usually) or ignored, but how they are developed and deployed is always a matter of choice, even if that choice isn’t always self-evident. This article focuses on the implications for employment, where too often the destruction of jobs is assumed to be both inevitable and undesirable (leaving only the numbers up for debate). But the nature of the change, the accrual of the benefits of greater efficiency and of the costs of disruption and transition are all social choices. That’s a very helpful reframing – which creates the space to ask how we might retain the benefits of traditional employment structures, while adding (rather than substituting) the advantages which come from new ways of working.
Billy Street – Transforming Together
This post provides a good introduction to The 7 Lenses of Transformation recently published by the UK government. Its power is in a form of modesty: there is no spurious promise that religiously following a methodology takes the risk and challenge out of transformational change; instead it provides a sensible framing of seven areas which need to be thought about and acted on to increase the chances of success. It is strewn with useful prompts, reminders and challenges. But it also prompts a couple of broader questions.
The first is what counts as transformation, as opposed to mere change. The definition in the guidance isn’t altogether satisfactory, as ‘reducing the costs of delivering services and improving our internal processes’ is sufficient to count. That’s not just a niggle about wording: if there is something distinctive about transformation, there needs to be some clarity about what it is. It’s tempting to fall back to simple scale – but some large scale changes aren’t particularly transformational, while some much smaller changes can have a really radical impact on the relationship between inputs, outputs and, most importantly, outcomes.
The second is an inherent problem with numbered lists, which is that they present themselves as self-contained. It’s worth reflecting on what an eighth item might be. One possible answer is that there is more – quite a lot more – to be said in expansion of the seventh lens, on people. The recognition that people need to be involved and enthused is a good start, but a communication campaign isn’t a sufficient means of achieving that: if change is transformational, it is almost certain that it expects – and depends on – people’s behaviour changing, and it is dangerous to assume that behavioural change is an automatic by-product of change programmes. And of course there will often be many more people affected than those in the programme team itself – a point the ‘red flags’ section seems to overlook.
And there is a small but subtly important issue in the title: the lens metaphor is an odd one, which doesn’t stand up to very much thought. That’s not to say that there is a single self-evidently better one, but moving away from language which implies inspection and distortion to language which hints more at engagement and multiple perspectives might be a stronger foundation for delivering real transformation.
John Naughton – Memex 1.1
A short post making the case for the assertion in its title – strategic changes are hard. It’s based on the example of Intel, taken from an essay by Walter Kiechel III which ends with this timeless warning:
Read over the tale of what it took to get there if, in a delusional moment, you’re ever tempted to think that putting strategy into practice is easy, even a seemingly emergent strategy.
Ellen Broad – Melbourne University Publishing
Ellen Broad’s new book is high on this summer’s reading list. Both provenance and subject matter mean that confidence in its quality can be high. But while waiting to read it, this short interview gives a sense of the themes and approach. Among many other virtues, Ellen recognises the power of language to illuminate the issues, but also to obscure them. As she says, what is meant by AI is constantly shifting, a reminder of one of the great definitions of technology, ‘everything which doesn’t work yet’ – because as soon as it does it gets called something else.
The book itself is available in the UK, though Amazon only has it in Kindle form (but perhaps a container load of hard copies is even now traversing the globe).
Paul Evans – Medium
People who work in the Whitehall tradition of government tend to think of democracy as a mildly intriguing thing which happens somewhere else. That’s an approach which has pretty severe weaknesses even in its own terms, but becomes markedly more significant in a world where the alignment of political divisions with electoral structures is severely weakened. So thinking about democracy not as some distinct feature, but as an integral characteristic of the polity is both counter-cultural and essential.
This post is an important challenge to that complacency, part of an emerging and more widespread view that the western tradition of representative democracy is under threat and needs to be replaced by something better before it is replaced by something worse. Those who work in the non-political parts of government may like to think that that is not really their problem. But it is.
This is a short, perhaps even slightly cryptic, note on the purpose of organisations. Having had the unfair advantage of being part of the conversation which prompted it, my sense is that it captures two related, but distinct, issues.
The first is that not everything has a purpose at all, in any terribly useful or meaningful sense. We can observe and describe what elements of a system do, but that does not mean that each such element has a purpose, still less that any purpose it might have relates to the behaviour of the wider system of which it is part. Not being careful here can lead to spectacular errors of reverse causation – the purpose of noses is not, as Pangloss argued, to support the wearing of spectacles.
The second is that it is easy to look at human-made systems and assume that they have a purpose, and that that purpose can be both discerned and – should we wish it – amended. That’s an understandable hope, but not necessarily a realistic one. Organisations of any size are both complex systems in their own right and components of larger and yet more complex systems. What they do and how they do it cannot be reduced to a single simple proposition. That’s not, I take it, a nihilistic argument against trying to understand or influence; it is a recognition that we need to recognise and respect complexity, not wish it away.
Abbe Marks – NZ Digital government
The idea that it should be possible to capture legislative rules as code and that good things might result from doing so is not a new one. It sounds as though it should be simple: the re-expression of what has already been captured in one structured language in another. It turns out though not to be at all simple, partly because of what John Sheridan calls the intertwingling of law: the idea that law often takes effect through reference and amendment and that the precise effect of its doing so can be hard to discern.
There is interesting work going on in New Zealand experimenting with the idea of law as code in some limited domains, and this post is prompted by that work. What makes it distinctive is that it is written from a policy perspective, asking questions such as whether the discipline of producing machine-consumable rules is a route to better policy development. It’s still unclear how far this approach might take us – but the developments in New Zealand are definitely worth keeping an eye on.
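To make the idea of machine-consumable rules slightly less abstract, here is a minimal sketch of what re-expressing a statutory rule as code might look like. The rule, the thresholds and the field names are all invented for illustration; they are not drawn from any real legislation or from the New Zealand work described above.

```python
# A hypothetical "rules as code" sketch: a made-up eligibility rule
# expressed once as executable logic rather than only as prose.

from dataclasses import dataclass

@dataclass
class Applicant:
    age: int
    annual_income: float
    resident: bool

AGE_THRESHOLD = 65          # invented figure
INCOME_CEILING = 20_000.00  # invented figure

def eligible(a: Applicant) -> bool:
    """Machine-consumable restatement of an imagined prose rule:
    'A resident aged 65 or over whose annual income does not
    exceed $20,000 is eligible.'"""
    return (
        a.resident
        and a.age >= AGE_THRESHOLD
        and a.annual_income <= INCOME_CEILING
    )

print(eligible(Applicant(age=70, annual_income=15_000, resident=True)))   # → True
print(eligible(Applicant(age=70, annual_income=25_000, resident=True)))   # → False
```

Even a toy rule like this shows why the exercise interests policy people: writing the logic down executably forces every ambiguity in the prose (what counts as income? as residence?) to be resolved explicitly, and it also hints at the intertwingling problem, since a later amendment by reference would have to be reconciled into this single definition before the code could be trusted.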