Explainable Machine Learning in Credit Risk Management

Niklas Bussmann1 · Paolo Giudici1 · Dimitri Marinelli2 · Jochen Papenbrock3

Accepted: 17 August 2020
© The Author(s) 2020

Abstract
The paper proposes an explainable Artificial Intelligence model that can be used in credit risk management and, in particular, in measuring the risks that arise when credit is borrowed through peer-to-peer lending platforms. The model applies correlation networks to Shapley values, so that Artificial Intelligence predictions are grouped according to the similarity of the underlying explanations. The empirical analysis of 15,000 small and medium companies asking for credit reveals that both risky and non-risky borrowers can be grouped according to a set of similar financial characteristics, which can be employed to explain their credit score and, therefore, to predict their future behaviour.

Keywords Credit risk management · Explainable AI · Financial technologies · Similarity networks
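To make the pipeline sketched in the abstract concrete, the snippet below illustrates, under simplifying assumptions, one way to implement it: a credit scoring model is explained with Shapley values, and borrowers are then grouped by the correlation of their explanation vectors. The classifier, the synthetic data, the use of the `shap` library, and the hierarchical clustering step are illustrative choices, not the authors' exact implementation.

```python
# Minimal sketch (not the authors' exact pipeline): score borrowers with a
# tree-based classifier, explain each prediction with Shapley values, then
# group borrowers whose explanations are similar via a correlation-based
# distance and hierarchical clustering. Data and feature count are synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from scipy.cluster.hierarchy import linkage, fcluster
import shap

rng = np.random.default_rng(0)
n_borrowers, n_features = 200, 6                  # stand-in for the 15,000 SMEs
X = rng.normal(size=(n_borrowers, n_features))    # synthetic financial ratios
y = (X[:, 0] - 0.5 * X[:, 1]
     + rng.normal(scale=0.5, size=n_borrowers) > 0).astype(int)

model = GradientBoostingClassifier().fit(X, y)    # credit scoring model

# Shapley values: one explanation vector per borrower
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)            # shape (n_borrowers, n_features)

# Correlation network over explanations: two borrowers are similar when their
# Shapley vectors are highly correlated
corr = np.corrcoef(shap_values)                   # (n_borrowers, n_borrowers)
dist = np.sqrt(np.clip(2.0 * (1.0 - corr), 0.0, None))   # correlation distance

# Hierarchical clustering on the explanation distances (condensed upper triangle)
Z = linkage(dist[np.triu_indices(n_borrowers, k=1)], method="average")
clusters = fcluster(Z, t=4, criterion="maxclust")
print("borrowers per explanation cluster:", np.bincount(clusters)[1:])
```

Each resulting cluster contains borrowers whose predictions are driven by a similar set of financial characteristics, which is the sense in which the grouping can be used to explain credit scores.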

1 Introduction

Black box Artificial Intelligence (AI) is not suitable in regulated financial services. To overcome this problem, Explainable AI models, which provide details or reasons that make the functioning of AI clear or easy to understand, are necessary.

* Paolo Giudici
  [email protected]

  Niklas Bussmann
  [email protected]

  Dimitri Marinelli
  dm@financial-networks.eu

  Jochen Papenbrock
  [email protected]

1 University of Pavia, Pavia, Italy
2 FinNet-Project, Frankfurt, Germany
3 FIRAMIS, Frankfurt, Germany




To develop such models, we first need to understand what “Explainable” means. Recently, some important institutional definitions have been provided. For example, Bracke et al. (2019) state that “explanations can answer different kinds of questions about a model’s operation depending on the stakeholder they are addressed to”, while Croxson et al. (2019) note that “‘interpretability’ will be the focus—generally taken to mean that an interested stakeholder can comprehend the main drivers of a model-driven decision”. Explainability thus means that an interested stakeholder can comprehend the main drivers of a model-driven decision. The FSB (2017) suggests that “lack of interpretability and auditability of AI and Machine Learning (ML) methods could become a macro-level risk”; Croxson et al. (2019) also establish that “in some cases, the law itself may dictate a degree of explainability.”

The European GDPR regulation (EU 2016) states that “the existence of automated decision-making should carry meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject.” Under the GDPR, the data subject is therefore, under certain circumstances, entitled to receive meaningful information about the logic of automated decision-making. Finally, the European Commission High-Level Expert Group on AI presented the Ethics Guidelines for Trustworthy Artificial Intelligence in April 2019. Such guidelines put fo