OPINION PAPER
A choices framework for the responsible use of AI

Richard Benjamins1

Received: 1 September 2020 / Accepted: 3 September 2020
© Springer Nature Switzerland AG 2020

1 Telefonica and OdiseIA, Madrid, Spain

Abstract
The popular press and media often make us believe that artificial intelligence technology is ethical or unethical by itself. In this paper, we argue that organizations that develop or apply AI face certain choices, and that those choices lead to a more or less responsible use of AI. By approaching those choices methodically, organizations can make better decisions toward the ethical use of this powerful technology.

1 Introduction

In spite of what the popular press and media want us to believe about the ethical and societal impact of artificial intelligence, until AI becomes capable of more than recognizing patterns in large amounts of data, humans will remain responsible for preventing or mitigating its potential negative impacts. The use of artificial intelligence in all aspects of society continues to grow, and for the moment there are no signs that this will change in the near future. With this increasing uptake, there are also increasingly more examples of negative consequences of this powerful technology, and this is precisely one of the main motivations for the existence of Springer's AI and Ethics journal. Like many scholars, I believe that AI offers many more positive opportunities than the risks often associated with it. But this does not mean that we should continue to develop and apply AI without thinking critically about its potential negative consequences or harm. The focus of this paper is on avoiding or mitigating the unintended, negative consequences of AI when the intention is a good use of AI, e.g., to solve difficult problems for health, society, business, or climate. To this end, the paper proposes a framework that identifies the relevant choices to be made when developing or using AI. The focus of this article is not on avoiding malicious uses of AI.

We distinguish between two types of choices: choices related to which AI principles organizations adhere to, as discussed in [1], and choices related to how to technically articulate those principles. The contribution of this paper concerns the second type, in the context of the AI of today, that is, data-driven AI mostly based on machine learning and in particular on deep learning.
2 Choices of principles for the responsible use of artificial intelligence

In the past few years, there has been a proliferation of AI ethics principles intended to guide organizations in avoiding the negative consequences of AI. [2] analyses the AI principles of 36 organizations across nine categories: human rights, human values, responsibility, human control, fairness and non-discrimination, transparency and explainability, safety and security, accountability, and privacy. The non-profit organization Algorithm Watch main