AI transparency: a matter of reconciling design with critique



STUDENT FORUM

Tomasz Hollanek¹

Received: 28 March 2020 / Accepted: 29 October 2020
© The Author(s) 2020

Abstract

In the late 2010s, various international committees, expert groups, and national strategy boards voiced the demand to ‘open’ the algorithmic black box, to audit, expound, and demystify artificial intelligence. The opening of the algorithmic black box, however, cannot be seen only as an engineering challenge. In this article, I argue that only the sort of transparency that arises from critique—a method of theoretical examination that, by revealing pre-existing power structures, aims to challenge them—can help us produce technological systems that are less deceptive and more just. I relate the question of AI transparency to the broader challenge of responsible making, contending that future action must aim to systematically reconcile design—as a way of concealing—with critique—as a manner of revealing.

Keywords: Critical theory · Critical thinking · Transparency · Responsibility · Self-awareness · Design theory

1 Preliminaries

1.1 AI transparency

In the age of ubiquitous computing, we are surrounded by objects that incorporate artificial intelligence solutions. We interact with different kinds of AI without realizing it—using online banking systems, searching for YouTube clips, or consuming news through social media—not really knowing how and when AI systems operate. Corporate strategies of secrecy and user interfaces that hide traces of AI-driven personalization combine with the inherent opacity of deep learning algorithms (whose inner workings are not directly comprehensible to human interpreters) to create a marked lack of transparency associated with all aspects of emerging technologies. It is in response to the widespread application of AI-based solutions to various products and services in the late 2010s that multiple expert groups—both national and international—voiced the demand to ‘open’ the algorithmic black box, to audit, expound, and demystify AI. They claim that to ensure that the use of AI is ethical, we must design emerging systems to be transparent, explainable, and auditable.¹

The opening of the algorithmic black box, however, cannot be seen only as an engineering challenge. It is critique, as the underside of making, that prioritizes unboxing, debunking the illusion, seeing through—to reveal how an object really works. Critique—grounded in the tradition of Critical Theory and practiced by cultural studies, critical race theory, queer theory, as well as decolonial theory scholars, among others—moves beyond the technical detail to uncover the desires, ideologies, and social relations forged into objects, opening the black boxes of history, culture, and progress. In what follows, I argue that the calls for technological transparency demand that we combine the practice of design with critique. I relate the question of AI transparency to the broader challenge of responsible making, contending that future action must