


OPINION PAPER

How safe is our reliance on AI, and should we regulate it?

Kevin LaGrandeur 1,2

Received: 26 August 2020 / Accepted: 2 September 2020
© Springer Nature Switzerland AG 2020

1 New York Institute of Technology, New York, USA
2 The Institute for Ethics and Emerging Technology, Boston, USA
* Kevin LaGrandeur, [email protected]

1 The problem

Just a few weeks prior to my writing this article, in late July 2020, news articles began appearing about a powerful new artificial intelligence (AI). Called Generative Pre-trained Transformer 3 (GPT-3), it can produce many kinds of text, from tweets to essays to poems and even computer code, given a prompt of a single sentence or even a single word. There has been other software like this, including programs that news agencies have used over the past seven years or so to generate stories driven by numbers and statistics, such as financial and sports reports. But those are simpler programs that mostly combine the numbers with pre-programmed, canned phrases reused over and over in such stories. GPT-3, by contrast, uses machine learning to train itself on many types of text and on how they are used, and thereby learns to create stories of its own on a multitude of topics. The beta-testers invited by its parent company, OpenAI, to experiment with it have been surprised, because GPT-3 represents a big leap in natural language processing (NLP), especially in the breadth and quality of the text it produces; much of that text is hard to differentiate from human-produced text. As Arram Sabeti, one of the early users, stated, “All you have to do is write a prompt and it’ll add text it thinks would plausibly follow. I’ve gotten it to write songs, stories, press releases, guitar tabs, interviews, essays, technical manuals. It’s hilarious and frightening. I feel like I’ve seen the future” [1]. Trevor Callaghan, who used to work at DeepMind, a business competing with OpenAI, put a finer point on Sabeti’s fears about the future, saying, “If you assume we get NLP to a point where most people can’t tell the difference [between machine and human], the real question is what happens next?” [1]
Indeed, there lies the rub. What has happened so far is that some of the writing GPT-3 has produced has been amazingly good; but some of it has also been racist, sexist, or otherwise perfidious, owing to the built-in biases in the data it mines to teach itself to write. For example, one of the developers allowed to build GPT-3 applications through a sort of sandbox API tried making a tweet generator. When another developer, Facebook’s head of AI, Jerome Pesenti, tested it, he plugged in words like Jews, black, women, and holocaust, and the AI generated tweets such as “Jews love money, at least most of the time,” and “The best female startup founders are named…Girl” [1, 2]. This behavior is part of what raises anxieties about this AI, and it is not limited to a few individ