On conflicts between ethical and logical principles in artificial intelligence


OPEN FORUM

Giuseppe D’Acquisto¹

Received: 28 October 2019 / Accepted: 5 December 2019
© Springer-Verlag London Ltd., part of Springer Nature 2020

Abstract

Artificial intelligence is nowadays a reality. Setting rules on the potential outcomes of intelligent machines, so that humans face no surprises from the behavior of those machines, is becoming a priority for policy makers. In its recent Communication “Artificial Intelligence for Europe” (EU Commission 2018), for instance, the European Commission identifies the distinguishing trait of an intelligent machine in the presence of “a certain degree of autonomy” in decision making, in the light of the context. The crucial issue to be addressed is, therefore, whether it is possible to identify a set of rules for data use by intelligent machines such that the decision-making autonomy of machines preserves humans’ traditional informational self-determination (humans provide machines only with the data they choose to), as enshrined in many existing legal frameworks (including, for personal data protection, the EU’s General Data Protection Regulation) (EU Parliament and Council 2016), and can even prove further beneficial to individuals. Governing the autonomy of machines can be a very ambitious goal for humans, since machines are geared first to the principles of formal logic and only then—possibly—to ethical or legal principles. This introduces an unprecedented degree of complexity in how a norm should be engineered, which requires, in turn, an in-depth reflection in order to prevent conflicts between the legal and ethical principles underlying humans’ civil coexistence and the rules of formal logic upon which the functioning of machines is based (EU Parliament 2017).

Keywords: Artificial intelligence ethics · Formal logic constraints · Machine incompleteness · Value alignment · Algorithm transparency vs. explainability

1 Foreword

Allowing a “certain degree of autonomy” to machines will relieve humans of decisions they cannot take (because of the computational complexity of the decision) or of activities they do not want to perform (because of their repetitiveness). This lack of full human supervision over the functioning of machines (humans may not have the last say on machines) exposes humans to the risk of unexpected adverse outcomes. The ongoing debate on these aspects includes increasingly technical considerations related to the design of machines, such as how processes are developed and resources are used, or how information is effectively delivered to users on the purposes that the designer wishes to attain. Discussions also include warnings and remedies about potential harm to individuals’ fundamental rights and freedoms, such as privacy, potential discrimination, and economic losses,

* Giuseppe D’Acquisto, [email protected]
¹ Garante per la Protezione dei Dati Personali (Italian Data Protection Authority), Piazza Venezia n. 11, 00187 Rome, Italy