Challenges in benchmarking stream learning algorithms with real-world data
Vinicius M. A. Souza · Denis M. dos Reis · André G. Maletzke · Gustavo E. A. P. A. Batista

Received: 27 March 2019 / Accepted: 18 June 2020
© The Author(s), under exclusive licence to Springer Science+Business Media LLC, part of Springer Nature 2020
Abstract

Streaming data are increasingly present in real-world applications such as sensor measurements, satellite data feeds, the stock market, and financial data. The main characteristics of these applications are the online arrival of data observations at high speed and the susceptibility to changes in the data distributions due to the dynamic nature of real environments. The data stream mining community still faces primary challenges and difficulties related to the comparison and evaluation of new proposals, mainly due to the lack of publicly available, high-quality, non-stationary real-world datasets. The comparison of stream algorithms proposed in the literature is not an easy task, as authors do not always follow the same recommendations, experimental evaluation procedures, datasets, and assumptions. In this paper, we mitigate problems related to the choice of datasets in the experimental evaluation of stream classifiers and drift detectors. To that end, we propose a new public data repository for benchmarking stream algorithms with real-world data. This repository contains the most popular datasets from the literature and new datasets related to a highly relevant public health problem that involves the recognition of disease-vector insects using optical sensors. The main advantage of these new datasets is the prior knowledge of their characteristics and patterns of change, which allows an adequate evaluation of new adaptive algorithms. We also present an in-depth discussion of the characteristics, reasons, and issues that lead to different types of changes in data distribution, as well as a critical review of common problems concerning the current benchmark datasets available in the literature.

Keywords Data stream · Concept drift · Classification · Drift detection · Benchmark data
Responsible editor: Grigorios Tsoumakas.
Vinicius M. A. Souza [email protected]
Extended author information available on the last page of the article
1 Introduction

In the last 20 years, we have witnessed the emergence of, and a notable increase of interest in, algorithms that learn from streaming data. This new generation of machine learning methods is designed to deal with continuous flows of data. Frequently, such streams comprise changes in the data distribution, governed by the dynamics of evolving real-world problems and application domains. In the context of machine learning, these changes in data distribution are named concept drifts (Widmer and Kubat 1996) and typically occur in data that are observed continuously and at a fast rate, which in turn imposes time and memory constraints on the algorithms that process them. Batch learning is a standard approach for machine learning
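To make the streaming setting concrete, the sketch below illustrates the test-then-train (prequential) loop commonly used in this literature: an incremental linear classifier processes one example at a time from a synthetic two-class stream that contains a single abrupt concept drift, and a sliding window of recent errors makes the drift visible. This is only a minimal illustration, not the benchmark protocol proposed in this paper; the stream generator, the drift point, and the window size are illustrative assumptions.

```python
# Minimal sketch of a prequential (test-then-train) loop with one abrupt drift.
# The synthetic stream, drift point, and window size are illustrative choices.
import numpy as np
from collections import deque
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(42)

def synthetic_stream(n=4000, drift_at=2000):
    """Yield (x, y) pairs; the class-conditional means swap after `drift_at`."""
    for t in range(n):
        y = int(rng.integers(0, 2))
        mean = 1.0 if y == 1 else -1.0
        if t >= drift_at:            # abrupt concept drift: the concept flips
            mean = -mean
        x = rng.normal(loc=mean, scale=1.0, size=2)
        yield x, y

clf = SGDClassifier(random_state=0)  # incremental learner via partial_fit
window = deque(maxlen=200)           # sliding window of recent 0/1 errors
first = True

for t, (x, y) in enumerate(synthetic_stream()):
    X = x.reshape(1, -1)
    if first:                        # first call must declare the label set
        clf.partial_fit(X, [y], classes=[0, 1])
        first = False
        continue
    correct = int(clf.predict(X)[0] == y)   # test on the new example first ...
    clf.partial_fit(X, [y])                 # ... then train on it
    window.append(1 - correct)
    if t % 500 == 0 and len(window) == window.maxlen:
        print(f"t={t:4d}  windowed error={np.mean(window):.3f}")
```

Running the sketch, the windowed error rises sharply right after the drift point and then decreases as the incremental model adapts, which is the behaviour that drift detectors and adaptive classifiers discussed in this paper aim to handle explicitly.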