

OPINION PAPER

AI and moral thinking: how can we live well with machines to enhance our moral agency?

Paula Boddington
New College of the Humanities, 19 Bedford Square, London WC1N 3HH, UK
[email protected]

Received: 16 September 2020 / Accepted: 18 September 2020
© Springer Nature Switzerland AG 2020

Abstract

Humans should never relinquish moral agency to machines, and machines should be 'aligned' with human values; but we also need to consider how broad assumptions about our moral capacities and the capabilities of AI shape how we think about AI and ethics. Certain approaches, such as the idea that we might programme our ethics into machines, may rest upon a tacit assumption of our own moral progress. Here I consider how broad assumptions about morality act to suggest certain approaches to the ethics of AI. Work in the ethics of AI would benefit from closer attention not just to what our moral judgements should be, but also to how we deliberate and act morally: the process of moral decision-making. We must guard against any erosion of our moral agency and responsibilities. Attention to the differences between humans and machines, alongside attention to the ways in which humans fail ethically, could be useful in spotting specific, if limited, ways that AI can assist us to advance our moral agency.

Keywords: Ethics · Moral agency · Autonomy · Responsibility

My starting point is this: humans are agents. This agency is a central feature of our humanity and of what makes each of us an interesting and valuable individual. It is also central to the value of humanity viewed collectively. And this agency includes, importantly, our moral agency. This we must not lose.

Computers, even those with artificial intelligence, are our tools. They should not diminish our agency; ideally, we should use them to enhance it. Hence we can see the validity of the strategy of AI alignment. It is right and proper that attention is paid to ensuring that AI does not control us but that we control AI, and that AI does not produce decisions or results at odds with the moral judgements of those humans who use AI and of those humans who are on the receiving end of an AI's decisions and actions: we must ensure that AI does what we want it to do.

This can be seen as a negative strategy, of trying to ensure that disaster does not occur, or, less dramatically, of fine-tuning AI to keep it on track with our wishes and values. It can also be seen as a more positive strategy, of ensuring that AI works for us, to produce beneficial outcomes. This is naturally to be welcomed.

A further goal might be to develop and use AI that could make moral decisions for us. I would strongly argue against this on grounds of the importance of our moral agency. And there is a sense in which this is simply not possible, because if we decide to outsource our moral decision-making to a machine, it is we who have taken this outsourcing decision, and it is we who are ultimately responsible.