This is the transcript of a conference address, less about the weaknesses of big data and machine learning and more about their vulnerability to attack and to the encoding of systematic biases – and how everything is going to get worse. There are some worrying case studies – how easy will it turn out to be to game the software behind self-driving cars into confusing one road sign with another? – but also some hope, from turning the strength of machine learning against itself: using adversarial testing to let models probe each other's limits. Her conclusion, though, is stark:
We no longer have the luxury of only thinking about the world we want to build. We must also strategically think about how others want to manipulate our systems to do harm and cause chaos.
(the preamble promises a link to a video of the whole thing, but what's there is only one section of the piece; the rest is behind a paywall)
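The road-sign attack she alludes to rests on adversarial examples: tiny, targeted input perturbations that flip a model's prediction. A minimal sketch of the idea on a toy linear classifier – the weights, inputs, and labels here are entirely made up for illustration and bear no relation to any real vision model:

```python
import numpy as np

# Toy linear classifier: positive score -> "stop", negative -> "speed-limit".
# The weights and input are illustrative only.
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.3, -0.2, 0.4])  # a clean input the model classifies as "stop"

def classify(v):
    return "stop" if w @ v > 0 else "speed-limit"

# Fast-gradient-sign-style perturbation: nudge each feature against the
# gradient of the score, bounded by epsilon. For a linear model the gradient
# of the score with respect to x is simply w.
epsilon = 0.5
x_adv = x - epsilon * np.sign(w)  # push the score downward

print(classify(x))      # -> stop
print(classify(x_adv))  # -> speed-limit (the small perturbation flips the label)
```

The point of the sketch is that the attacker needs no access to the training data, only a gradient (or an estimate of one), which is why the talk's worry about deployed systems is hard to dismiss.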