The question which this article tries to answer is a critically important one. Sometimes – often – it matters not just that a decision has been made, but that it has been made correctly and appropriately, taking proper account of the factors which are relevant and no account of factors which are not.
That need is particularly obvious in, but not limited to, government decisions, even more so where a legal entitlement is at stake. But machine learning doesn’t work that way: decisions are emergent properties of systems, and the route to the conclusion may be neither known nor, in any ordinary sense, knowable.
The article introduces a new name for a challenge the discipline has faced from its earliest days, “explainable AI” – with a matching three-letter acronym, XAI. The approach is engagingly recursive. The problem of explaining the decision produced by an AI may itself be a problem of the type susceptible to analysis by AIs. Even if that works, it isn’t of course the end of it. We may have to wonder whether we need a third AI system which assures us that the explanation given by the second AI system of the decision made by the first AI system is accurate. And more prosaically, we would need to understand whether any such explanation is even capable of meeting the new GDPR standards.
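To make the recursion a little more concrete, one common XAI pattern is the surrogate model: a second, simpler model is trained to imitate the first, opaque one, and its rules are read as an approximate account of the original decision. The sketch below is purely illustrative (the dataset, model choices and depth limit are assumptions, not anything proposed in the article), and the “fidelity” score is exactly the point at which the question of a third, verifying system arises.

```python
# Illustrative sketch of a surrogate-model explanation, using scikit-learn.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X, y = data.data, data.target

# The "first AI": an opaque ensemble whose internal reasoning is not readable.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# The "second AI": a shallow tree trained to imitate the black box's outputs,
# not the original labels -- it explains the model, not the world.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# How far can the explanation be trusted? Fidelity = agreement with the black box.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"Surrogate fidelity to black box: {fidelity:.2%}")

# Human-readable decision rules standing in for "the explanation".
print(export_text(surrogate, feature_names=list(data.feature_names)))
```

Whether rules of this kind would count as a sufficient explanation for GDPR purposes is, of course, exactly the open question.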
But AI isn’t going away. And given that, XAI or something like it is going to be essential.