Random-based networks with dropout for embedded systems
ORIGINAL ARTICLE
Edoardo Ragusa1 · Christian Gianoglio1 · Rodolfo Zunino1 · Paolo Gastaldo1

Received: 27 December 2019 / Accepted: 5 October 2020
© The Author(s) 2020
Abstract
Random-based learning paradigms exhibit efficient training algorithms and remarkable generalization performance. However, the computational cost of the training procedure scales with the cube of the number of hidden neurons. This paper presents a novel training procedure for random-based neural networks that combines ensemble techniques and dropout regularization. The procedure limits the computational complexity of the training phase without significantly affecting classification performance, making the method well suited to Internet of Things (IoT) applications. The training algorithm first generates a pool of random neurons; then, an ensemble of independent sub-networks (each including a fraction of the original pool) is trained; finally, the sub-networks are integrated into a single classifier. The experimental validation compared the proposed approach with state-of-the-art solutions in terms of both generalization performance and computational complexity. To verify its effectiveness in IoT applications, the training procedures were deployed on a pair of commercially available embedded devices. The results show that the proposed approach improves accuracy overall, with a minor degradation in performance in a few cases. In embedded implementations, as compared with conventional architectures, the speedup of the proposed method reached up to 20× on IoT devices.

Keywords: Internet of Things · Random-based neural networks · Embedded systems
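As a rough illustration of the three-step procedure summarized above, the following NumPy sketch generates one pool of random hidden neurons, trains independent sub-networks on disjoint fractions of the pool via closed-form ridge regression, and combines the sub-networks by averaging their outputs at prediction time. The function names, the sigmoid activation, the disjoint split, the averaging rule, and the regularization parameter `lam` are illustrative assumptions and do not reproduce the paper's exact formulation.

```python
import numpy as np

def train_dropout_ensemble(X, Y, pool_size=1024, n_subnets=4, lam=1e-3, rng=None):
    """Sketch of the three-step procedure: (1) draw a pool of random hidden
    neurons, (2) train independent sub-networks on disjoint fractions of the
    pool via closed-form ridge regression, (3) keep all sub-networks and
    combine their outputs at prediction time."""
    rng = np.random.default_rng(rng)
    n_features = X.shape[1]

    # Step 1: one shared pool of random input weights and biases.
    W = rng.standard_normal((n_features, pool_size))
    b = rng.standard_normal(pool_size)

    # Step 2: each sub-network uses a fraction of the pool, so the cubic-cost
    # linear solve involves only pool_size / n_subnets neurons.
    neuron_ids = rng.permutation(pool_size)
    subnets = []
    for ids in np.array_split(neuron_ids, n_subnets):
        H = 1.0 / (1.0 + np.exp(-(X @ W[:, ids] + b[ids])))   # hidden activations
        beta = np.linalg.solve(H.T @ H + lam * np.eye(len(ids)), H.T @ Y)
        subnets.append((ids, beta))

    return W, b, subnets

def predict(X, W, b, subnets):
    # Step 3: integrate the sub-networks by averaging their outputs.
    outputs = []
    for ids, beta in subnets:
        H = 1.0 / (1.0 + np.exp(-(X @ W[:, ids] + b[ids])))
        outputs.append(H @ beta)
    return np.mean(outputs, axis=0)
```

For a classification task, `Y` would typically be a one-hot label matrix, and the class with the largest averaged output would be selected.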
1 Introduction

Edge computing and the Internet of Things (IoT) are crucial areas in modern electronics [26, 42], involving important domains such as healthcare [39, 41], intelligent transportation [40], and multimedia communications [38]. Deep learning paradigms [14] prove effective in those applications, but resource-constrained devices cannot support the training process [19], and even deploying trained models on embedded systems remains a challenging task.
& Edoardo Ragusa [email protected] Christian Gianoglio [email protected] Rodolfo Zunino [email protected] Paolo Gastaldo [email protected] 1
Department of Electrical, Electronic, Telecommunications Engineering and Naval Architecture, DITEN, University of Genoa, Genoa, Italy
Traditional approaches such as single-layer feed-forward neural networks (SLFNNs) and support vector machines (SVMs) can be trained with a relatively small amount of computational resources. Random-based networks (RBNs) such as random radial basis functions [28], random vector functional link networks (RVFLs) [31], extreme learning machines (ELMs) [17, 18], and weighted sums of random kitchen sinks [36] offer interesting opportunities. The major advantage of the latter paradigms is that the training process only requires solving a linear least-squares problem in closed form, whose cost scales with the cube of the number of hidden neurons.
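For reference, the closed-form training of a single random-based network can be sketched as follows. This is a minimal, generic illustration under stated assumptions (the tanh activation, the ridge parameter `lam`, and the Gaussian weight initialization are choices made here, not the specific formulation of any cited method); the linear solve involves an n_hidden × n_hidden matrix, which is the source of the cubic scaling mentioned above.

```python
import numpy as np

def train_rbn(X, Y, n_hidden=1024, lam=1e-3, rng=None):
    """Single-shot training of a random-based network: the hidden-layer
    weights are drawn at random and never updated; only the output weights
    are obtained by solving a regularized least-squares problem."""
    rng = np.random.default_rng(rng)
    W = rng.standard_normal((X.shape[1], n_hidden))   # random input weights
    b = rng.standard_normal(n_hidden)                 # random biases
    H = np.tanh(X @ W + b)                            # random hidden features

    # Closed-form ridge solution: the (n_hidden x n_hidden) system below is
    # what makes the training cost grow with the cube of n_hidden.
    beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ Y)
    return W, b, beta
```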