In AI we trust? Perceptions about automated decision-making by artificial intelligence
OPEN FORUM
Theo Araujo1 · Natali Helberger2 · Sanne Kruikemeier1 · Claes H. de Vreese1

Received: 25 March 2019 / Accepted: 10 December 2019
© Springer-Verlag London Ltd., part of Springer Nature 2020
Abstract

Fueled by ever-growing amounts of (digital) data and advances in artificial intelligence, decision-making in contemporary societies is increasingly delegated to automated processes. Drawing from social science theories and from the emerging body of research about algorithmic appreciation and algorithmic perceptions, the current study explores the extent to which personal characteristics can be linked to perceptions of automated decision-making by AI, and the boundary conditions of these perceptions, namely the extent to which such perceptions differ across media, (public) health, and judicial contexts. Data from a scenario-based survey experiment with a national sample (N = 958) show that people are by and large concerned about risks and have mixed opinions about the fairness and usefulness of automated decision-making at a societal level, with general attitudes influenced by individual characteristics. Interestingly, decisions taken automatically by AI were often evaluated as being on par with, or even better than, those of human experts for specific decisions. Theoretical and societal implications of these findings are discussed.

Keywords Automated decision-making · Artificial intelligence · Algorithmic fairness · Algorithmic appreciation · User perceptions
* Theo Araujo
  [email protected]
1 Amsterdam School of Communication Research (ASCoR), University of Amsterdam, Nieuwe Achtergracht 166, 1018 WV Amsterdam, The Netherlands
2 Institute for Information Law (IViR), University of Amsterdam, Amsterdam, The Netherlands

1 Introduction

Fueled by ever-growing amounts of digital data and advances in artificial intelligence (AI), decision-making is increasingly delegated to automated processes. These automated decision-making (ADM) processes take place, for example, in communication, with algorithms making (personalized) news recommendations (Thurman and Schifferes 2012; Diakopoulos and Koliska 2017; Carlson 2018), personalizing advertising based on online behavior (Boerman et al. 2017), regulating user activity on social media platforms (van Dijck et al. 2018), automatically identifying suspicious profiles (Chu et al. 2012; Ferrara et al. 2016; Siddiqui et al. 2017), or even automatically generating news stories (Graefe et al. 2018). ADM processes also make their way into (public) health, with virtual health coaches recommending activities to individual users (Grolleman et al. 2006; Hudlicka 2013; Bickmore et al. 2016), or with ongoing discussions on how to integrate AI into the decision-making process within healthcare (e.g., Agarwal et al. 2010; Dilsizian and Siegel 2013; Jha and Topol 2016; Yu and Kohane 2018). Their relevance is also growing in the judicial and law enforcement sector (for an overview of the possibilities, see Nissan 2017). Fo