This is a brilliant two and a half minute animation, explaining what algorithms are, what they are not, and why they are inherently not neutral.
The world is probably not crying out for another 2×2 typology of strategy, but nevertheless still they come. This one is interesting less for its cells than for its axes. Degree of uncertainty is fairly standard, but degree of people impact is slightly more surprising. The people in question are those within the organisation being strategised about – is the relevant change marginal to business as usual, are jobs and careers at risk, how much emotional stress can be expected? All those are good questions, of course, and the approach is certainly a good counter to the tendency to see people as machine components in change, and then to be surprised when they turn out not to be. But it risks muddling up two rather different aspects of the people impact of strategy – those who conceive of the strategy and execute its projects on one hand, and those who are affected by it on the other – and raises the bigger question of whether an internal people focus is the best way of understanding strategy in the first place. And the answer to that feels more likely to be situational than universal.
Perhaps though it is the matrix itself which gets slightly in the way of understanding. This is not an argument that organisations choose or discover which cell to be in or by what route to move between them. Instead:
Our impression was that the most successful companies had learned to execute activities in all four quadrants, all the time, and had robust processes for managing the transition of an activity from one quadrant to the other.
Tim Berners-Lee didn’t invent the internet. But he did invent the world wide web, and he does not altogether like what it has become. This post is his manifesto for reversing one of the central power relationships of the web, the giving and taking of data. Instead of giving data to other organisations and having to watch them abuse it, lose it and compromise it, people should keep control of their personal data and allow third parties to see and use it only under their control.
This is not a new idea. Under the names ‘vendor relationship management’ (horrible) and ‘volunteered personal information’ (considerably better but not perfect), the thinking stretches back a decade and more, developing steadily, but without getting much traction. If nothing else, attaching Berners-Lee’s name to it could start to change that, but more substantively it’s clear that there is money and engineering behind this, as well as thoughts and words.
But one of the central problems of this approach from a decade ago also feels just as real today, perhaps more so. As so often with better futures, it’s fairly easy to describe what they should look like, but remarkably difficult to work out how to get there from here. This post briefly acknowledges the problem, but says nothing about how to address it. The web itself is, of course, a brilliant example of how a clear and powerful idea can transform the world without the ghost of an implementation plan, so this may not feel as big a challenge to Berners-Lee as it would to any more normal person. But the web filled what was in many ways a void, while the data driven business models of the modern internet are anything but, and those who have accumulated wealth and power through those models will not go quietly.
It’s nearly ten years since Tim Wu wrote The Master Switch, a meticulous account of how every wave of communications technology has started with dispersed creativity and ended with centralised industrial scale. In 2010, it was possible to treat the question of whether that was also the fate of the internet as still open, though with a tipping point visible ahead. The final sentence of the book sets out the challenge:
If we do not take this moment to secure our sovereignty over the choices our information age has allowed us to enjoy, we cannot reasonably blame its loss on those who are free to enrich themselves by taking it from us in a way history has foretold.
A decade on, the path dependence is massively stronger and will need to be recognised if it is to be addressed. Technological creativity based on simple views of data ownership is unlikely to be enough by itself.
This is a post which earns itself a place here just by its title, though that’s not all that can be said in its favour. It doesn’t start very promisingly, setting up the shakiest of straw men in order to knock them down – does anybody really think that ‘writing long documents’ is a good test of being strategic? – but it improves after the first third, to focus much more usefully on doing three things which actually make for good strategy. As the post acknowledges, the suggestions are very much in the spirit of Richard Rumelt’s good and bad strategy approach. So you can read the book, read Rumelt’s HBR article which is an excellent summary of the book, or read this post. Rumelt’s article is probably the best of the three, but this shorter and simpler post isn’t a bad alternative starting point.
The story of how Estonia became the most e of e-governments is often told, but often pretty superficially and often with an implied – or even explicit – challenge to everybody else to measure themselves and their governments against the standard set by Estonia and despair. This post provides exactly the context which is missing from such accounts: Estonia is certainly the result of visionary leadership, which at least in principle could be found anywhere, but it is also the result of some very particular circumstances which can’t simply be copied or assumed to be universal. There is also a hint of the question behind Solow’s paradox: the real test is not the implementation of technology, but the delivery of better outcomes.
None of that is to knock Estonia’s very real achievements, but yet again to make clear that the test of the effectiveness of technology is not a technological one.
A few months ago, Eddie Copeland shared 10 Principles for Public Sector use of Algorithmic Decision Making. They later apparently morphed into twenty questions to address, and now the twenty have been slimmed down to ten. They are all good questions, but one very important one seems to be missing – how can decisions based on the algorithm be challenged? (and what, therefore, do people affected by a decision need to understand about how it was reached?)
The really interesting effects of technology are often the second and third order ones. The invention of electricity changed the design of factories. The invention of the internal combustion engine changed the design of cities. The invention of social media shows signs of changing the design of democracy.
This essay is a broader and bolder exploration of the consequences of today’s new technologies. That AI will destroy jobs is a common argument, that it might destroy human judgement and ability to make decisions is a rather bolder one (apparently a really creative human chess move is now seen as an indicator of potential cheating, since creativity in chess is now overwhelmingly the province of computers).
The most intriguing argument is that new technologies destroy the comparative advantage of democracy over dictatorship. The important difference between the two, it asserts, is not between their ethics but between their data processing models. Centralised data and decision making used to be a weakness; increasingly it is a strength.
There is much to debate in all that, of course. But the underlying point, that those later order effects are important to recognise, understand and address, is powerfully made.
This post – which is actually a set of tweets gathered together – is a beautifully short and simple explanation of why some basic stuff really matters in efficiently integrating data and the services it supports (and is actually quite important as well in ensuring that things don’t get joined up which shouldn’t be). Without common identifiers, simple and value adding connections get difficult, expensive and unreliable – a point powerfully made in a post linked from that one which sets out a bewildering array of unique identifiers for property in the UK – definitely unique in the sense that there is a one to one mapping between identifier and place, but ludicrously far from unique in their proliferation.
There is a huge appetite for making more effective use of data. The appetite needs to be as strong for creating the conditions which make that possible.
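The cost of missing common identifiers can be made concrete with a small sketch. All the records, field names and identifiers below are invented for illustration; the "uid" field simply stands in for something like a shared national property reference:

```python
# Two hypothetical datasets describing the same two properties, each with
# its own local identifier and its own way of writing the address.
council_tax = [
    {"local_id": "CT-001", "address": "1 High St", "band": "C"},
    {"local_id": "CT-002", "address": "Flat 2, 1 High Street", "band": "B"},
]
energy_certs = [
    {"cert_id": "E-901", "address": "1 High Street", "rating": "D"},
    {"cert_id": "E-902", "address": "1 High St, Flat 2", "rating": "C"},
]

# Naive join on the address string: the same place, written differently,
# simply fails to match, so no links are found at all.
naive_matches = [
    (c["local_id"], e["cert_id"])
    for c in council_tax
    for e in energy_certs
    if c["address"] == e["address"]
]
print(naive_matches)  # -> []

# With a common identifier attached to each record, the join becomes a
# trivial, reliable lookup rather than fuzzy address matching.
council_tax_uid = [dict(r, uid=u) for r, u in zip(council_tax, ["P-1", "P-2"])]
energy_certs_uid = [dict(r, uid=u) for r, u in zip(energy_certs, ["P-1", "P-2"])]

by_uid = {r["uid"]: r for r in energy_certs_uid}
linked = [(c["local_id"], by_uid[c["uid"]]["cert_id"]) for c in council_tax_uid]
print(linked)  # -> [('CT-001', 'E-901'), ('CT-002', 'E-902')]
```

The fuzzy-matching alternative to the naive join is possible, but it is exactly the difficult, expensive and unreliable work the post describes; the shared identifier makes it unnecessary.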
Azeem Azhar’s Exponential View is one of the very few weekly emails which earns regular attention, and it is no disrespect to him to say that the occasional guest authors he invites add further to the attraction. This edition is by Jeni Tennison, bringing her very particular eye to the question of data ownership.
Is owning data just like owning anything else? The simple answer to that is ‘no’. But if it isn’t, what does it mean to talk about data as property? To which the only simple answer is that there is no simple answer. This is not the place to look for detailed exposition and analysis, but it is very much the place to look for a set of links to a huge range of rich content, curated by somebody who is herself a real expert in the field.
This is by way of a footnote to the previous post – a bit more detail on one small part of the enormous ecosystem described there.
If you buy an Amazon Echo then, partly depending on what you intend to do with it, you may be required to accept 17 different contracts, amounting to close to 50,000 words, not very far short of the length of a novel. You will also be deemed to be monitoring them all for any changes, and to have accepted any such changes by default.
That may be extreme in length and complexity, but the basic approach has become normal to the point of invisibility. That raises a question about the reasonableness of Amazon’s approach. But it raises a much more important question about our wider approach to merging new technologies into existing social, cultural and legal constructs. This suggests, to put it mildly, that there is room for improvement.
(note that the link is to a conference agenda page rather than directly to the presentation, as that is a 100MB download, but if needed this is the direct link)
An Amazon Echo is a simple device. You ask it to do things, and it does them. Or at least it does something which quite a lot of the time bears some relation to the thing you ask it to do. But of course in order to be that simple, it has to be massively complicated. This essay, accompanied by an amazing diagram (or perhaps better to say this diagram, accompanied by an explanatory essay), is hard to describe and impossible to summarise. It’s a map of the context and antecedents which make the Echo possible, covering everything from rare earth geology to the ethics of gathering training data.
It’s a story told in a way which underlines how much seemingly inexorable technology in fact depends on social choices and assumptions, where invisibility should not be confused with inevitability. In some important ways, though, invisibility is central to the business model – one aspect of which is illustrated in the next post.
One of the many concerns about automated decision making is its lack of transparency. Particularly (but by no means only) for government services, accountability requires not just that decisions are well based, but that they can be challenged and explained. AI black boxes may be efficient and accurate, but they are not accountable or transparent.
This is an interesting early indicator that those issues might be reconciled. It’s in the special – and much researched – area of image recognition, so a long way from a general solution, but it’s encouraging to see systematic thought being addressed to the problem.
Here are a hundred disruptive technologies, set out in waves of innovation, with time to ubiquity on one axis and potential for disruption on the other. On that basis, smart nappies appear in the bottom left corner, as imminent and not particularly disruptive (though perhaps that depends on just how smart they are and on who is being disrupted), while towards the other end of the diagonal we get to transhuman technologies – and then who knows what beyond.
The authors are firm that this is scientific foresight, not idle futurism, though that’s an assertion which doesn’t always stand up to close scrutiny. Planetary colonisation is further into the future than implantable phones, but will apparently be less disruptive when it comes. Dream recording falls into the distant future category (rather than fringe science, where it might appear more at home), rather oddly on the same time scale but three levels of disruption higher than fusion power.
The table itself demonstrates that dreams are powerful. But perhaps not quite that powerful. And it’s a useful reminder, yet again, that technology change is only ever partly about the technology, and is always about a host of other things as well.
Governments should move slowly and try not to break things. That’s a suggestion slightly contrary to the fashionable wisdom in some quarters, but has some solid reasoning behind it. There are good reasons for governments not to be leading edge adopters – government services should work; innovation is not normally a necessary counter to existential threats; service users are not able to trade stability for excitement.
That’s not an argument against innovation, but it is an argument for setting pace and risk appropriately. As a result, this post argues, the skills government needs are less to do with cutting edge novelty, and much more to do with identifying and adopting innovations from elsewhere.
If you fall into the trap of thinking that technology-driven change is about the technology, you risk missing something important. No new technology arrives in a pristine environment, there are always complex interactions with the existing social, political, cultural, economic, environmental and no doubt other contexts. This post is a polemic challenging the inevitability – and practicality – of self-driving cars, drawing very much on that perspective.
The result is something which is interesting and entertaining in its own right, but which also makes a wider point. Just as it’s not technology that’s disrupting our jobs, it’s not technology which determines how self-driving cars disrupt our travel patterns and land use. And over and over again, the hard bit of predicting the future is not the technology but the sociology.
The hardest bit of strategy is not thinking up the goal and direction in the first place. It’s not even identifying the set of activities which will move things in the desired direction. The hardest bit is stopping all the things which aren’t heading in that direction or are a distraction of attention or energy from the most valuable activities. Stopping things is hard. Stopping things which aren’t necessarily failing to do the thing they were set up to do, but are nevertheless not the most important things to be doing, is harder. In principle, it’s easier to stop things before they have started than to rein them in once they have got going, but even that is pretty hard.
In all of that, ‘hard’ doesn’t mean hard in principle: the need, and often the intention, is clear enough. It means instead that observation of organisations, and particularly larger and older organisations, provides strong reason to think that it’s hard in practice. Finding ways of doing it better is important for many organisations.
This article clearly and systematically sets out what underlies the problem, what doesn’t work in trying to solve it – and offers some very practical suggestions for what does. Practical does not, of course, mean easy. But if we don’t start somewhere, project sclerosis will only get worse.
The eight tribes of digital (which were once seven) have become nine.
The real value of the tribes – other than that they are the distillation of four years of observation, reflection and synthesis – is not so much in whether they are definitively right (which pretty self-evidently they aren’t, and can’t be) but as a prompt for understanding why individuals and groups might behave as they do. And of course, the very fact that there can be nine kinds of digital is another way of saying that there is no such thing as digital.
The phrase ‘artificial intelligence’ is a brilliant piece of marketing. By starting with the artificial, it makes it easy to overlook the fact that there is no actual intelligence involved. And if there is no intelligence, still less are there emotions or psychological states.
The core of this essay is the argument that computers and robots do not, and indeed cannot, have needs or desires which have anything in common with those experienced by humans. In the short to medium term, that has both practical and philosophical implications for the use and usefulness of machines and the way they interact with humans. And in the long term (though this really isn’t what the essay is about), it means that we don’t have to worry unduly about a future in which humanity survives – at best – as pets of our robot overlords.
An odd thing about many large organisations is that change is seen as different from something called business as usual. That might make a kind of sense if change were an anomalous state, quickly reverting to the normality of stasis, but since it isn’t, it doesn’t.
If change is recognised as an essential element of business as usual, then lots of other ideas drop easily into place. One of the more important ones is that it allows and encourages better metaphors. The idea of change as something discrete which starts and stops, which has beginnings and ends, encourages mechanical parallels: like a machine, it can be turned on and off; like a machine, controlling the inputs will control the outputs. But if change permeates, if organisations and their environments are continually flexing, then metaphors naturally become more organic: the pace of change ebbs and flows; organisations adapt as a function of their place in a wider ecosystem; change is just part of what happens, not some special extra thing.
From that perspective, it’s a small step to recognising that there is real power in thinking about organisational change in terms of systems. But it’s a small step with big consequences, and those consequences are what this post is all about.
The world of system change provides a different framing of organisational change and a way of seeing it as part of an organic process and not something that is bolted onto an organisation. The simple but powerful shift from process to purpose is something that can make a profound difference to how you go about engaging the networks that already exist within your organisation. Once we acknowledge and bring to the fore the networks that make up our organisations and the system they create, can we ever really deny that all change is system change?