Artificial intelligence assistants and risk: framing a connectivity risk narrative



OPEN FORUM

Martin Cunneen¹ · Martin Mullins¹ · Finbarr Murphy¹

Received: 12 June 2019 / Accepted: 24 September 2019
© Springer-Verlag London Ltd., part of Springer Nature 2019

Abstract

Our social relations are changing: we are no longer just talking to each other, but also to artificial intelligence (AI) assistants. We claim that AI assistants present a new form of digital connectivity risk, and that a key aspect of this risk phenomenon relates to users' awareness (or lack of awareness) of AI assistant functionality. AI assistants present a significant societal risk phenomenon, amplified by the global scale of the products and their increasing use in healthcare, education, business, and the service industry. However, there appears to be little research concerning the need not only to understand the changing risks of AI assistant technologies but also to frame and communicate those risks to users. How can users assess the risks without fully understanding the complexity of the technology? This is a challenging and unwelcome scenario. AI assistant technologies consist of a complex ecosystem and demand explicit and precise communication in contextualising the new digital risk phenomenon. The paper therefore argues for the need to examine how best to explain and support risk awareness regarding AI assistants among both domestic and commercial users. To this end, we propose the method of creating a risk narrative focused on temporal points of changing societal connectivity and contextualised in terms of risk. We claim the connectivity risk narrative provides an effective medium for capturing, communicating, and contextualising the risks of AI assistants, one that can support explainability as a risk mitigation mechanism.

Keywords: Artificial intelligence assistants · Risk · Connectivity · Narratology · Risk communication · Risk perception · Explainability · Informed consent · Data commodification · Data monetisation

1 Introduction

1.1 Artificial intelligence assistants and risk

From smart phones, smart speakers, and smart TVs to vehicle infotainment systems and wearables, the use of artificial intelligence assistants (AIAs) is an increasingly ubiquitous and challenging social phenomenon (Dale 2015; Janeček 2018). The use of artificial intelligence (AI) technologies offers many benefits (Canbek and Mutlu 2016) as well as risks (Alzahrani 2016). The use of AI in our digital online experience presents one of the most significant socio-technological risk scenarios (Dale 2017). This is most evident in the volume of global users and the real-time analytics in use. Moreover, AIAs present a form of AI that is specifically

* Martin Cunneen [email protected]

designed to act as a conduit and outward lens to what users digitally perceive, access, and engage with (McLean and Osei-Frimpong 2019). This presents a powerful technology that uses analytics to determine news feeds, information, products and