Ethics of engagement
PREFACE
Karamjit S. Gill, University of Brighton, Brighton, UK
Published online: 10 October 2020
© Springer-Verlag London Ltd., part of Springer Nature 2020
In this volume, AI&Society authors critically reflect on the ethics of engagement. The narratives span societal sustainability, Surveillance Capitalism, Machine theology, Social jurisdiction, Covid-19, EU GDPR consent mechanisms, the Strategic Health Initiative, Watson for Oncology, Recommender Systems, and socio-technological systems. The discussions and arguments cover Artificial wisdom; Artificial moral agents; the crisis of moral passivity; smart phones on wheels; disengagement and re-engagement with roboethics; roboaesthetics; interpersonal interaction and perceived legitimacy; value conflicts, nudging traps and algorithmic bias; digital fake news, social anxiety, and the dysfunctional impacts of automation on social and political stability; regulatory frameworks and EU GDPR consent mechanisms; legal, political, and bureaucratic decision-making; the implications of autonomous decision making for judgment during the COVID-19 pandemic; AI, medicine and ethics; global supply chain dependency and global concordance; the narrative of entanglement; AI and shared human motivations; cognitive architecture for autonomy, intentionality and emotion as prerequisites for creativity; Turing's vision and the cooperative challenge of language use; and theistic AI narratives. Patrick Gamez et al., in 'Artificial Virtue: The Machine Question and Perceptions of Moral Character in Artificial Moral Agents' (this volume), investigate the "machine question" of whether virtue or vice can be attributed to artificial intelligence; that is, whether people are willing to judge machines as possessing moral character. Self-driving cars open up the concrete possibility of encountering familiar moral dilemmas in the real world: for example, whether to save a group of children who have suddenly darted into the road, or to swerve to avoid that collision and instead collide with a single pedestrian properly using a crosswalk. To the authors, this is obviously a moral question; there is no morally
neutral decision procedure here. Virtue ethics seems to be a promising moral theory for understanding and interpreting the development and behaviour of artificial moral agents. The authors explore virtue ethics through the lens of three types of artificial agents: implicit ethical agents, explicit ethical agents, and full ethical agents. Implicit moral agents are constrained by ethical norms even if these norms are not explicitly represented in ethical language; explicit moral agents are capable of explicit reasoning, might explicitly represent moral rules to themselves, and use these moral rules to guide their behaviour "on the go", so to speak; and to be a full moral agent is to be both a moral agent and a moral patient. For the authors, virtue ethics speaks of core features: rather than making actions the central focus of moral evaluation (as with