From social interaction to ethical AI
- 14:00, 18th January 2019 (Hilary Term 2019), Tony Hoare Room, Robert Hooke Building
AI and robot ethics have recently gained considerable attention because adaptive machines are increasingly involved in ethically sensitive scenarios and have caused incidents of public outcry. Much of the debate has focused on achieving the highest moral standards in handling ethical dilemmas on which not even humans can agree, which suggests that the wrong questions are being asked.
While traditionally engineered artefacts, including AI, require the designer to ensure ethical compliance, learning machines that change through interaction with people after deployment cannot be vetted in the same way. I will argue that to make progress on this issue, we need to look at it strictly through the lens of what behaviour is socially acceptable, rather than idealistically ethical. Machines would then need to determine what behaviour complies with social and moral norms, and therefore be receptive to social feedback from people.
I will discuss a roadmap of computational and experimental questions for the development of socially acceptable machines, and emphasise the need for social reward mechanisms and learning architectures that integrate them while reaching beyond the limitations of traditional reinforcement-learning agents. I suggest using the metaphor of “needs” to bridge rewards and higher-level abstractions such as goals, for both communication and action generation in a social context. I will then suggest a series of experimental questions, and possible platforms and paradigms, to guide future research in the area.
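To make the “needs” metaphor concrete, the following is a minimal, hypothetical sketch of a homeostatic agent whose internal reward is derived from need deficits, and whose social-approval need is replenished by human feedback. All names, dynamics, and parameters (e.g. `NeedsBasedAgent`, the decay rate) are illustrative assumptions, not the speaker's actual architecture.

```python
import random

class NeedsBasedAgent:
    """Toy agent whose reward is derived from homeostatic 'needs'
    rather than from a fixed external reward signal (hypothetical
    illustration; names and dynamics are assumptions)."""

    def __init__(self):
        # Each need has a current level in [0, 1] and a set point.
        self.needs = {"social_approval": 0.5, "task_progress": 0.5}
        self.set_points = {"social_approval": 1.0, "task_progress": 1.0}

    def decay(self, rate=0.05):
        # Needs erode over time, pushing the agent to act.
        for k in self.needs:
            self.needs[k] = max(0.0, self.needs[k] - rate)

    def apply_social_feedback(self, approval):
        # Human feedback in [-1, 1] replenishes (or depletes)
        # the social-approval need: a simple social reward mechanism.
        lvl = self.needs["social_approval"] + 0.2 * approval
        self.needs["social_approval"] = min(1.0, max(0.0, lvl))

    def reward(self):
        # Internal reward is the negative total deficit between
        # need levels and their set points.
        return -sum(self.set_points[k] - v for k, v in self.needs.items())


agent = NeedsBasedAgent()
for step in range(5):
    agent.decay()
    agent.apply_social_feedback(random.choice([-1, 0, 1]))
    print(f"step {step}: needs={agent.needs}, reward={agent.reward():.2f}")
```

The design choice sketched here, deriving reward internally from need deficits rather than from a hand-specified external signal, is one way such an agent could remain receptive to ongoing social feedback after deployment.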