If the complexity of a model makes it hard to explain how its outputs relate to its inputs in terms of sequential processing steps, then perhaps it makes sense to come at the problem the other way round. Neural networks are very crude representations of human minds, and the way we understand human minds is through cognitive psychology – so what happens if we apply approaches developed to understand the cognitive development of children to understanding black-box systems?
That’s both powerful and worrying. Powerful because the approach seems to have some explanatory value, and might be the dawn of a new discipline of artificial cognitive psychology. Worrying because if our most powerful neural networks learn and develop in ways that are usefully comparable with the ways humans learn and develop, then they may mirror elements of human frailties as well as our strengths.