Artificial intelligence and the value of transparency
OPEN FORUM
Joel Walmsley¹

Received: 6 January 2020 / Accepted: 25 August 2020
© Springer-Verlag London Ltd., part of Springer Nature 2020

* Joel Walmsley
  [email protected]

¹ Department of Philosophy, University College Cork, Cork, Ireland
Abstract

Some recent developments in Artificial Intelligence—especially the use of machine learning systems, trained on big data sets and deployed in socially significant and ethically weighty contexts—have led to a number of calls for “transparency”. This paper explores the epistemological and ethical dimensions of that concept, as well as surveying and taxonomising the variety of ways in which it has been invoked in recent discussions. Whilst “outward” forms of transparency (concerning the relationship between an AI system, its developers, users and the media) may be straightforwardly achieved, what I call “functional” transparency about the inner workings of a system is, in many cases, much harder to attain. In those situations, I argue that contestability may be a possible, acceptable, and useful alternative, so that even if we cannot understand how a system came up with a particular output, we at least have the means to challenge it.

Keywords Transparency · Explainability · Contestability · Machine learning · Bias
1 Introduction

Alongside, and arguably because of, some of the most recent technical developments in Artificial Intelligence, the last few years have seen a growing number of calls for various forms of transparency¹ within and about the field. For example, the 2019 report from the European Commission’s High-Level Expert Group on AI—entitled Ethics Guidelines for Trustworthy AI—features the notion of transparency prominently, and the European Union’s General Data Protection Regulation (GDPR) includes the stipulation that, when a person is subject to an automated decision based on their personal information, he or she has “the right to obtain human intervention, to express his or her point of view, to obtain an explanation of the decision reached after such assessment and to challenge the decision”.² In part, these calls respond to an epistemic limitation: machine learning techniques, together with the use of “Big Data” for training purposes, mean that many AI systems are both too complex for a complete understanding, and faster and more powerful than human cognition (at least, on the relatively narrow set of tasks for which AI is designed). Of course, in many cases,
“complete understanding” is neither desired nor required; we are perfectly happy to interact with technology by adopting Dennettian³ “intentional” or “design” stances (rather than the more complete but cumbersome “physical stance”) so long as the system functions correctly, and the respects in which it is not transparent are roughly neutral along ethical, political or commercial dimensions. But given that we increasingly and preferentially trust AI systems, and that we do rely on them to make decisions, recommendations and predictions in a var…