Contrastive Explanations for a Deep Learning Model on Time-Series Data
Abstract. In the last decade, with the irruption of Deep Learning (DL), artificial intelligence has taken a step forward with respect to previous years. Although Deep Learning models have gained strength in many fields, such as image classification, speech recognition and time-series anomaly detection, these models are often difficult to understand because of their lack of interpretability. In recent years an effort has been made to understand DL models, creating a new research area called Explainable Artificial Intelligence (XAI). Most of the research in XAI has been done for image data, and little research has been done for time-series data. In this paper, a model-agnostic method called the Contrastive Explanation Method (CEM) is used to interpret a DL model for time-series classification. Even though CEM has so far only been validated on tabular and image data, the experimental results obtained show that CEM is also suitable for interpreting deep learning models that work with time-series data.
Keywords: Contrastive explanations · Time-series · Deep learning · Artificial intelligence

1 Introduction
Nowadays, many systems are monitored by multiple sensors, which provide data on how the system is evolving, and consequently research on temporal data has increased in recent years. DL algorithms are becoming very powerful, also for time-series data, in which Long Short-Term Memory (LSTM) networks [9] are a key part of many state-of-the-art architectures. LSTMs are capable of preserving information from long-term dependencies, and have proven to be very effective in processing temporal data [12]. Even though DL has gained strength, DL models are often considered black boxes due to their lack of interpretability. Because of this problem, a new research area has been created, called Explainable Artificial Intelligence (XAI), which is
focused on the interpretation of DL models. Much work has been done on this issue in recent years, and many techniques have been proposed to facilitate the understanding of DL models [1,2], such as LRP [3], SHAP [10], LIME [11], Integrated Gradients [14] and CEM [6]. These methods have been applied mainly to image data [13,15], and more research is needed for time-series data [7,8]. CEM [6] is a perturbation-based method that provides local explanations. Although CEM offers two ways of interpreting a model, either through pertinent negatives (PN) or pertinent positives (PP), in this paper PNs are used. Unlike other methods, the idea behind this paper is that applying CEM to time-series data makes it possible to give explanations such as "this time series is classified as class y because a particular point or group of points has value v (PP) when it should have value w (PN)". The authors apply CEM to a time-series classification problem, concluding that this kind of explanation is viable for time-series data; a minimal sketch of the underlying pertinent-negative search is given below. The rest of this article is structured as follows.
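To make the pertinent-negative idea concrete, the following is a minimal illustrative sketch of how a PN-style perturbation could be searched for with gradient descent. It assumes a trained PyTorch classifier `clf` over time-series tensors; the function name, the hyper-parameters `beta` and `kappa`, and the omission of the autoencoder fidelity term used in the original CEM formulation [6] are simplifications for illustration, not the authors' implementation.

```python
# Sketch (assumed names and hyper-parameters): search for a pertinent negative,
# i.e. a small, sparse perturbation delta such that clf(x0 + delta) is assigned
# a different class than clf(x0).
import torch
import torch.nn.functional as F

def pertinent_negative(clf, x0, beta=0.1, kappa=0.1, steps=500, lr=1e-2):
    """clf: trained model mapping a (batch, time, channels) tensor to class logits.
    x0 : the time series to explain, shape (1, time, channels)."""
    clf.eval()
    with torch.no_grad():
        y0 = clf(x0).argmax(dim=1)                     # originally predicted class
    delta = torch.zeros_like(x0, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        logits = clf(x0 + delta)
        mask = F.one_hot(y0, logits.size(1)).bool()
        orig_score = logits[mask]                      # score of the original class
        best_other = logits.masked_fill(mask, float("-inf")).max(dim=1).values
        # Hinge term: push some other class above the original one by margin kappa
        attack = torch.clamp(orig_score - best_other + kappa, min=0.0).mean()
        # Elastic-net regularisation keeps the perturbation small and sparse
        loss = attack + beta * delta.abs().sum() + (delta ** 2).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return delta.detach()
```

In the returned `delta`, the time steps with large absolute values are the points that would have to change for the model to predict a different class, which corresponds to the contrastive "should have value w" statement described above.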