Yaneer Bar-Yam – Science Friday
An understanding of quantum field theory apparently demonstrates that in large convoluted organisations, hierarchical structures with one person in charge can’t work, because the level of complexity becomes impossible to manage. That’s essentially the long-standing perspective of systems thinking – if you want to change a system, you have to change the system – and while it’s entertaining to see the point made from a different standpoint, the real question is not whether this approach can provide a diagnosis, but whether it can offer a prescription for change.
It’s almost certainly unfair to make a judgement about that on the basis of the transcript of a short radio interview, which is what this is, but what’s striking is how quickly the prescription becomes a platitude. If political decision making were more distributed, as decisions in the brain are distributed between neurons, better decisions would result. That may well be true, but doesn’t get us very far. Part of the suggestion here seems to be a form of subsidiarity, which is a good start, but one big reason politics is hard is that decisions really are interdependent. What we have to do, apparently, is create mechanisms whereby participation translates into actual decision making. Well yes (or at least, well maybe), but asserting a solution falls a very long way short of describing it.
It’s included here despite all that for two reasons. The first is as a reminder that politics is hard and that insights from other disciplines are unlikely to provide magic answers to long-standing and intractable problems. The second is that problems of political decision making are long-standing and intractable, and answers, ideally less magical, are still very much needed.
Noah Smith – Bloomberg
This is a fairly straightforward tour of the basic income landscape. Perhaps it is most useful for something unintended, drawing out the extent to which the debate on the virtues or otherwise of a basic income is conducted at cross purposes. People use the phrase ‘basic income’ to mean two very different things (probably many more, but two will do to start), which we might call ‘adequate’ and ‘supplementary’.
An adequate basic income is one which is enough to live on, not luxuriously but, well, basically. A supplementary basic income is not in itself enough to live on, but is enough to make a real difference to people’s lives and choices, particularly at the lower end of the income scale. This article concludes that a UBI doesn’t distort work incentives, but draws that conclusion from looking at examples of supplementary basic income. Even if that conclusion is robust, it can’t in itself tell us anything about adequate basic incomes. This piece does better than many in not obscuring the distinction, but even here the ringing answer of the headline (almost certainly not written by the journalist) is bolder and broader than the article claims.
A whimsical twitter thread of etymological onion peeling, now crystallised into a blog post, results in a splendid definition of AI. Starting with ‘complicated algorithms running on very fast computers’, we end up with AI helpfully described as
The method by which an old Persian magician uses counting stones, to move other stones, by way of amber resin, such that a casual observer thinks the stones are moving themselves.
Tom Steinberg – Medium
In dealing with digital services – indeed in dealing with organisations generally – power is very asymmetric. Amazon does not invite you to negotiate the terms on which it is prepared to sell you things (though of course you retain the power not to buy). Digital services and apps give the illusion of control (let’s think about whether to accept these cookies…) but have developed a habit and a reputation for helping themselves to data and making their own judgement about what to do with it. That’s not necessarily because individual consumers can’t control permissions; it is also because the cost and complexity of doing so make it burdensome. Tom Steinberg brings a potential solution to that problem: if we had somebody negotiating all that on our behalf, could that asymmetry be addressed? Typically, he recognises the difficulties as well as the potential, but even if the answers are hard, the question is important.
Erin Winick – MIT Technology Review
This devastatingly simple short post brings together estimates of the employment effects of automation, and assesses their consistency and coherence. It turns out there is neither: ‘we have no idea how many jobs will actually be lost to the march of technological progress.’
Jared Spool – UX Immersion Interactions
A couple of weeks ago, the people of Hawaii were told that they were under missile attack. They weren’t, but that didn’t stop the warning being terrifying.
The cause was quickly narrowed down to poor user interface design. But poor user interface design is of course but one step in the chain of whys. This post follows several more links in the chain – giving a level of detail which at one level is more than most people will want or need, but using that to make some important points of much wider application. One is that critical designs need critical testing – and more generally that the value of design is not in the presence (or absence) of veneer. Another is that maintaining things is important and can be particularly difficult for systems funded on the basis that when they have been built, they are finished. The consequences of that approach may be irritating or they may be close to catastrophic, but they can be addressed only when there is recognition that, as David Eaves put it, you can either iterate before you fail, or you can do it after you fail, but you’ll do it either way.
Dave reads and reflects and shares both the reading and the reflections, on topics which are often closely linked to themes covered here. He has just announced a slightly different approach to sharing the material he finds, including a dedicated category on his blog (which comes with a selective RSS feed). Well worth following – though there is no obvious reason to filter out his own posts, which are always worth reading in their own right.
Nina Timmers – Futuregov
The easy mantra of ‘fail fast’ is one of many (mis)translated from agile thought and practice. The positive case is easy to understand, especially in contrast with slower project management approaches which consume all their time and money before discovering they have built the wrong thing. But in failing fast, the cost and the impact of the failure need to be understood too. In many public services that cost can be very high and, even more importantly, may fall on those least able to meet it.
This post is a powerful description of an extreme case of that – but in describing the extreme, there is plenty to reflect on for a much wider range of services. Sometimes failure is really not an option.
Hugh Howey – Wired
The title of this article is a bit misleading, since it could easily have continued ‘… and why that would be a really bad idea’. It is, though, an interesting – if considerably longer – complement to the argument that the idea of general artificial intelligence is based on a false analogy between human brains and computers. This article takes a related but distinct approach: self-consciousness exists as compensation for structural faults in human brains, having evolved because a sophisticated theory of mind is a useful trait. It would be pointless (rather than impossible) to replicate that in machines, because perhaps the most notable thing about human introspection about consciousness is how riddled it is with error and self-contradiction. That being so, AI will continue to get more powerful and sophisticated. But it won’t become more human, because it makes no sense to make it so.
This is a page of links which provides over two hundred examples of artificial intelligence in action – ranging from mowing the lawn, through managing hedge funds and sorting cucumbers all the way to writing AI software. Without clicking a single one of the links, it provides a powerful visual indicator of how pervasive AI has already become. There is inevitably a bit of a sense of never mind the quality, feel the width – but the width is itself impressive, and the quality is often racing up as well.
There is a linked twitter account which retweets AI-related material – though in a pleasing inversion, it shows every sign of being human-curated.
Charlotte Augst – Kaleidoscope Health
One ever-present risk in thinking strategically is to be too strategic. Or rather, to be too abstract, losing sight of the messiness of today in the excitement of the far tomorrows. Convincing strategies address recognisable problems (even if making the problems recognisable is part of the strategic process) and, perhaps most importantly, convincing strategies get to the future by starting in the present. There is no value in the most glorious of futures if you can’t get there from here.
This post is a brilliant example of why that is. How, it asks, with the clear-sighted perspective of very personal experience, can we hope to deliver a future strategy without understanding and addressing the gap between where we are and where we want to be?
This is a short tweet thread making the point that ethics in AI – and in technology generally – needs to be informed by ethical thinking developed in other contexts (and over several millennia). That should be so obvious as to be hardly worth saying, but it has become one of those questions which people doing new things too often fall into the trap of believing they are solving for the first time.