MIT taught a neural network how to show its work

Tristan Greene – The Next Web

One of the many concerns about automated decision making is its lack of transparency. Particularly (but by no means only) for government services, accountability requires not just that decisions are well founded, but that they can be explained and challenged. AI black boxes may be efficient and accurate, but they are not accountable or transparent.

This is an interesting early indicator that those goals might be reconciled. It’s in the special – and much researched – area of image recognition, so a long way from a general solution, but it’s encouraging to see systematic thought being applied to the problem.
