Friendly AI
ORIGINAL PAPER
Friendly AI

Barbro Fröding1 · Martin Peterson2
© The Author(s) 2020
Abstract

In this paper we discuss what we believe to be one of the most important features of near-future AIs, namely their capacity to behave in a friendly manner to humans. Our analysis of what it means for an AI to behave in a friendly manner does not presuppose that proper friendships between humans and AI systems could exist. That would require reciprocity, which is beyond the reach of near-future AI systems. Rather, we defend the claim that social AIs should be programmed to behave in a manner that mimics a sufficient number of aspects of proper friendship. We call this "as-if friendship". The main reason why we believe that "as-if friendship" is an improvement on the current, highly submissive behavior displayed by AIs is the negative effects the latter can have on humans. We defend this view partly on virtue ethical grounds, and we argue that the virtue-based approach to AI ethics outlined in this paper, which we call "virtue alignment", is an improvement on the traditional "value alignment" approach.

Keywords AI · Friend · Friendly · Value alignment · Virtue ethics
Introduction

In December 2019 the second generation of the Crew Interactive Mobile Companion robot, known as CIMON-2, arrived at the International Space Station. Designed by the German branch of Airbus, it uses artificial intelligence powered by IBM's Watson technology. One of CIMON-2's tasks is to serve as a conversational companion for lonely astronauts. According to Matthias Biniok, Lead Watson Architect at IBM, "studies show that demanding tasks are less stressful if they're done in cooperation with a colleague".1 CIMON-2 is programmed to behave like an artificial colleague by answering questions and engaging in conversation. This enables astronauts to perform better, thereby making space missions less stressful and more successful. CIMON-2 is a significant improvement on its predecessor, CIMON, which was tested at the International Space Station in 2018. One of the problems with CIMON was that it was perceived as mean and unfriendly by crew members:

    In an early demonstration in 2018, it was CIMON — not Gerst [a German astronaut] — that needed a morale boost. After Gerst asked CIMON to play his favorite song, the 11-lb bot refused to let the music cease, defying Gerst's commands. And, rather than acknowledging it had jumped rank, CIMON accused Gerst of being mean and finished with a guilt-trip flourish by asking Gerst, "Don't you like it here with me?"2

Sophisticated AI technologies currently reserved for space missions are likely to become more widely available in the future. Some of these AIs will be designed to fulfil social functions in our daily lives.

* Barbro Fröding
  [email protected]
  Martin Peterson
  [email protected]

1 Department of Philosophy and History, KTH Royal Institute of Technology, Teknikringen 76, 100 44 Stockholm, Sweden
2 Department of Philosophy, Texas A&M University, 4237 TAMU, College Station, TX 77843-4237, USA