Prediction paradigm: the human price of instrumentalism



EDITORIAL

Karamjit S. Gill, University of Brighton, Brighton, UK

Published online: 11 August 2020
© Springer-Verlag London Ltd., part of Springer Nature 2020

Reflecting on the rise of instrumentalism, we learn how it has travelled across the academic boundary to the high-tech culture of Silicon Valley. At its core lies the prediction paradigm. Under the cloak of the inevitability of technology, we are offered the prediction paradigm as the technological dream of public safety, national security, fraud detection, and even disease control and diagnosis. For example, there are offers of facial recognition systems for predicting the behaviour of citizens, of surveillance drones for 'biometric readings', and of 'predictive policing' as an effective tool to predict and reduce crime rates.

A recent critical review of prediction technology (Coalition for Critical Technology 2020) brings to our notice the discriminatory consequences of predicting "criminality" using biometric and/or criminal legal data. The review outlines the specific ways in which crime prediction technology reproduces, naturalizes and amplifies discriminatory outcomes, and why exclusively technical criteria are insufficient for evaluating its risks. We learn that neither prediction architectures nor machine learning programs are neutral: they often uncritically inherit, accept and incorporate dominant cultural and belief systems, which are then normalised. For example, "predictions" based on finding correlations between facial features and criminality are accepted as valid, interpreted as the product of intelligent and "objective" technical assessments. Furthermore, the data from predictive outcomes and recommendations are fed back into the system, thereby reproducing and confirming biased correlations. The consequence of this feedback loop, especially in facial recognition architectures, combined with a belief in "evidence-based" diagnosis, is that it leads to 'widespread mischaracterizations of criminal justice data' that 'justifies the exclusion and repression of marginalized populations through the construction of "risky" or "deviant" profiles'.
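To make the mechanics of this loop concrete, consider the following minimal sketch in Python. It is purely illustrative, not a model of any deployed system: it assumes a toy city of two districts with identical true offence rates, a predictor that allocates patrols from recorded crime, and recording that occurs only where patrols are present. The district names, rates, patrol numbers and the hot-spot exponent are all invented for the example.

    # A toy model of the feedback loop: two districts with IDENTICAL true
    # offence rates, but slightly skewed historical records. Patrols are
    # allocated by "prediction" from the records; offences are only recorded
    # where patrols go; the new records feed the next day's prediction.
    # All names and numbers here are invented for illustration.

    TRUE_RATE = {"district_a": 0.10, "district_b": 0.10}
    records = {"district_a": 55.0, "district_b": 45.0}  # biased seed data
    PATROLS_PER_DAY = 100
    GAMMA = 1.5  # >1 models "hot-spot" over-weighting by the predictor

    for day in range(50):
        weights = {d: records[d] ** GAMMA for d in records}
        total_weight = sum(weights.values())
        for district, rate in TRUE_RATE.items():
            # "Prediction": send patrols where the records say crime is.
            patrols = PATROLS_PER_DAY * weights[district] / total_weight
            # Offences are only recorded where patrols are present to see them.
            records[district] += patrols * rate

    share_a = records["district_a"] / sum(records.values())
    print(f"district_a share of all recorded crime: {share_a:.2f}")
    # With GAMMA = 1.0 the seeded 55:45 skew is reproduced unchanged; with
    # GAMMA > 1 it is amplified a little more each day, although the true
    # offence rates of the two districts never differ.

Nothing in the loop can correct the biased seed data; the allocation rule can only reproduce it, or, when predictions are over-weighted, entrench it further as fresh "evidence".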
Prediction apps, such as Australia's COVIDSafe, are now part of a wide variety of high-tech offerings to automate COVID-19 contact tracing. From Morrison (2020), we learn that prediction algorithms can be used to assess patient X-rays and diagnose the COVID-19 virus. For example, Zegami, an Oxford-based data-visualisation company, offers a machine learning model that quickly predicts the outcome of coronavirus patients by studying X-rays of their chests. Such AI algorithms, however, need to be trained on a wider range of X-ray images from infected patients. Machine learning and data analytics are offered to 'accelerate solutions and minimize the impacts of the virus', and further machine learning tools are promoted to help expedite the drug development process, forecast infection rates, and