Rocky ethical territory to traverse on the road to an AI code of ethics
How people relate to intelligent machines is one of the most complicated questions to be solved in developing ethical codes for Artificial Intelligence (AI) research. Senior Researcher Paula Boddington describes work currently in progress, potential pitfalls, and opportunities.
There is growing attention to ethical issues in AI, with many calling for the development of codes of ethics, or for ethical watchdogs to oversee research and development. There are debates about precisely why we need such ethics guidance. Some cite the threat of a looming singularity (the point at which intelligent machines achieve unstoppable growth, unpredictably changing the nature of human existence); others are sceptical of this, but urge attention to ethical issues in AI that are already upon us, such as the impact on employment, or unchecked bias in algorithms.
Indeed, several initial sets of broad principles already exist, such as the Future of Life Institute's Asilomar AI Principles, formulated in January 2017, and the eight principles in the 'AI R&D Guidelines' of Japan's Conference toward AI Network Society. By their nature, such sets of principles tend to be aspirational, but they can be a useful starting point for discussion. Some common overlapping themes can be found, such as calls to ensure AI is aligned with human values, and calls for transparency and accountability.
Unsurprisingly, such sets of principles share a great deal with other sets of ethical principles. But what is most useful in such broad statements is to stress those values that AI is likely to challenge the most, and to explain why. For example, the opaque nature of the decision-making of many AI systems provides a very strong reason to emphasise values of transparency and auditability. Codes of professional ethics rest on the broad assumption that professionals have the power to control their products or services, in order to provide benefit and to prevent or mitigate harms.
The control problem in AI therefore presents a significant difficulty, and a strong reason to believe that in AI we are facing particularly rocky ethical territory. Ethical issues in AI tend to arise from its use to replace or supplement human agency, and this means we have to work through questions about the nature of persons and how they relate to intelligent machines, questions that go to the heart of philosophical thinking about ethics.
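The opacity point can be made concrete. The following is a minimal sketch of one common auditing technique, surrogate modelling; it is an illustration, not a method prescribed by any of the codes discussed above, and the dataset, model choices, and feature names are all assumptions made for the example. An opaque ensemble model is approximated by a small decision tree whose rules a human auditor can actually read:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy data standing in for any automated decision task; purely illustrative.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = [f"feature_{i}" for i in range(4)]

# An "opaque" model: 200 trees voting, with no single human-readable rule.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# A transparent surrogate trained to mimic the black box's own decisions.
surrogate = DecisionTreeClassifier(max_depth=2, random_state=0)
surrogate.fit(X, black_box.predict(X))

# The surrogate's if-then rules can be printed and inspected by an auditor.
print(export_text(surrogate, feature_names=feature_names))
```

The surrogate is only a faithful summary to the extent that it agrees with the original model, which is exactly the kind of caveat a transparency standard would need to address.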
Many ethical issues, such as privacy in the use of personal data, may be highlighted by AI, yet are not unique to it. We should think about what can be learned from discussions of ethics elsewhere, paying close attention to similarities and differences. The ethical regulation of social science research suffered greatly from being squeezed into a model of regulation derived from medical research; there is ample reason to think this has hampered much valid research.
Likewise, with AI we need to make sure that the shoe fits the foot. Ethics regulation is not, and never should become, an end in itself; it needs to focus on enabling beneficial research and development, not on 'banning' things. This is particularly so given that ill-judged brakes on the development of AI can in some instances hamper attempts to combat bad actors. For instance, when OpenAI, which aims to create open-source AI, was launched, a stated aim was precisely to mitigate the threat of malevolent superintelligence by making the technology freely available. Others, however, disagree with the premise of such an approach.
Enabling beneficial R&D means that principles must be implemented in practical contexts. The Institute of Electrical and Electronics Engineers (IEEE) is engaged in an ambitious project, the Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems, which includes developing standards for engineers in a wide variety of practical areas, to try to ensure that broad ethical aspirations can be achieved in practice. The standards currently in development cover ethical considerations in system design, bias in algorithms, transparency in autonomous systems, and the control of personal data by AI.
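As a hedged illustration of the kind of check a standard on algorithmic bias might ask engineers to perform (the metric, decisions, and group labels below are hypothetical, not drawn from any IEEE draft), one very simple audit compares a system's positive-decision rates across demographic groups:

```python
def positive_rate(decisions, groups, group):
    """Fraction of positive decisions received by members of `group`."""
    members = [d for d, g in zip(decisions, groups) if g == group]
    return sum(members) / len(members)

# Toy decisions (1 = approved, 0 = rejected) and a group label per applicant.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "B", "B", "B", "A", "B", "A", "B"]

rate_a = positive_rate(decisions, groups, "A")
rate_b = positive_rate(decisions, groups, "B")
print(f"Group A rate: {rate_a:.2f}, Group B rate: {rate_b:.2f}")
print(f"Demographic parity gap: {abs(rate_a - rate_b):.2f}")
```

Real audits would use richer criteria and real outcome data, but even this toy gap measurement shows how an aspirational principle such as 'avoid unjust bias' can be turned into a concrete, testable requirement.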
Paula's book, Towards a Code of Ethics for Artificial Intelligence, was recently published by Springer. It was written while she was working on the project 'Towards a Code of Ethics for Artificial Intelligence Research' with Oxford Professors Mike Wooldridge and Peter Millican. Funding for the project was generously provided by the Future of Life Institute.
This article also appeared in the Winter 2017 issue of Inspired Research.