Emerging challenges in AI and the need for AI ethics education


OPINION PAPER

Jason Borenstein¹ · Ayanna Howard²

Received: 15 July 2020 / Accepted: 23 July 2020

© Springer Nature Switzerland AG 2020

Abstract

Artificial Intelligence (AI) is reshaping the world in profound ways; some of its impacts are certainly beneficial, but widespread and lasting harms can result from the technology as well. The integration of AI into various aspects of human life is underway, and the complex ethical concerns emerging from the design, deployment, and use of the technology serve as a reminder that it is time to revisit what future developers and designers, along with professionals, are learning when it comes to AI. It is of paramount importance to train future members of the AI community, and other stakeholders as well, to reflect on the ways in which AI might impact people's lives and to embrace their responsibility to enhance its benefits while mitigating its potential harms. This could occur in part through the fuller and more systematic inclusion of AI ethics in the curriculum. In this paper, we briefly describe different approaches to AI ethics and offer a set of recommendations related to AI ethics pedagogy.

Keywords  AI ethics · Artificial intelligence · Design ethics · Ethics education · Professional responsibility

1 Introduction

Artificial Intelligence (AI) is becoming pervasive. The technology is reaching into so many facets of our lives that we have no choice but to confront its impacts. The creation and deployment of AI is changing our lives and communities in countless ways. These changes are often difficult to understand and anticipate, and they are only accelerating due to the ongoing COVID-19 pandemic. Although AI provides observable benefits, the collection, use, and abuse of the data used to train and feed AI systems, as well as the algorithms themselves, may expose people to risks they were not even aware existed. Employers can monitor workplace performance and behavior in covert and unexpected ways. A potential employee might be turned down for a job because of information an automated tool collects while scraping the person's social media profile. A local government might use facial recognition to identify every individual who passes through a public area. It was not that long ago that such scenarios would have seemed far-fetched. But now we see a rise in the use of these tools by industry, government, and even academic institutions as they deploy AI algorithms to make decisions that alter our lives in direct, and potentially detrimental, ways. The frequently voiced justification for the use of such AI tools is that they are "better" than a human decision-maker. Should not an algorithm

* Jason Borenstein [email protected]

1 Center for Ethics and Technology, School of Public Policy and Office of Graduate Studies, Georgia Institute of Technology, Atlanta, GA, USA

2 Linda J. and Mark C. Smith Professor and Chair, School of Interactive Computing, Georgia Institute of Technology, Atlanta, GA, USA