Developing artificial intelligence is often seen as an essentially technical problem. But it also raises difficult ethical issues around accountability, transparency, discrimination and privacy. Does that mean the developers of such systems should be subject to some form of Hippocratic oath? That question seems both important and rather naive. In the month we learned that Uber deployed software to evade regulatory oversight, it's clear this is about organisations and their culture, and about social norms and expectations, as much as it is about the behaviour of individual developers.