Deep Learning Based Resources Allocation for Internet-of-Things Deployment Underlaying Cellular Networks

Basem M. ElHalawany 1,2 · Kaishun Wu 3,4 · Ahmed B. Zaky 2

© Springer Science+Business Media, LLC, part of Springer Nature 2020

Abstract
Resources allocation (RA) is a challenging task in many fields and applications, including communications and computer networks. Conventional solutions to such problems usually come with a time and memory cost, especially for massive networks such as Internet-of-Things (IoT) networks. In this paper, two RA deep network models are proposed for enabling a clustered underlay IoT deployment, where a group of IoT nodes upload information to a centralized gateway in their vicinity by reusing the communication channels of conventional cellular users. The RA problem is formulated as a two-dimensional matching problem, which can be expressed as a traditional linear sum assignment problem (LSAP). The two proposed models are based on the recurrent neural network (RNN). Specifically, we investigate the performance of two long short-term memory (LSTM) based architectures. The results show that the proposed techniques could be used as a replacement for the well-known Hungarian algorithm for solving LSAPs, owing to their ability to find solutions for problems of different sizes with high accuracy and very fast execution time. Additionally, the results show that the obtained accuracy outperforms state-of-the-art deep network techniques.

Keywords Resources allocation · Linear sum assignment problems · Recurrent neural network · Long short-term memory
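For concreteness, the classical baseline against which the proposed LSTM models are compared can be reproduced with an off-the-shelf solver. The following is a minimal sketch (not the authors' code) that builds a hypothetical random cost matrix and solves the LSAP with SciPy's linear_sum_assignment, which implements a Hungarian-style optimal assignment:

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    rng = np.random.default_rng(0)
    n = 8                                # problem size (e.g., IoT links vs. cellular channels)
    cost = rng.uniform(size=(n, n))      # hypothetical cost matrix C; C[i, j] = cost of pairing i with j

    # Find the one-to-one assignment minimizing the total cost.
    rows, cols = linear_sum_assignment(cost)
    print("assignment:", list(zip(rows, cols)))
    print("total cost:", cost[rows, cols].sum())

Exact solvers of this kind run in O(n^3) time, which is the per-instance overhead that a trained deep model aims to replace with a fast forward pass.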

Kaishun Wu: [email protected]
Basem M. ElHalawany: [email protected]; [email protected]
Ahmed B. Zaky: [email protected]

1 Guangdong Laboratory of Artificial Intelligence and Digital Economy (SZ), Shenzhen University, Shenzhen, 518060, China
2 Benha University, Benha, Egypt
3 College of Computer Science and Software Engineering, Shenzhen University, Shenzhen, Guangdong, China
4 Guangzhou HKUST Fok Ying Tung Research Institute, Guangzhou, China

1 Introduction

Resources allocation (RA) is gaining a lot of interest in different areas owing to its ability to improve system efficiency. The system operator/designer may optimize RA in order to meet quality-of-service (QoS) requirements. For example, radio resource management (RRM) plays an essential role in various communication systems, including Internet-of-Things (IoT) networks, wireless sensor networks (WSNs), machine-to-machine (M2M) communications [1], antenna selection in multiple-input multiple-output systems [2, 3], sub-channel allocation for orthogonal frequency-division multiple access (OFDMA) based networks [4], relay assignment in cooperative networks [4], and spectrum sharing in underlay device-to-device (D2D) communication [5]. Several RRM methods can be recast as an assignment problem, where a number of jobs are assigned to several workers in an optimal or sub-optimal way to improve the system performance.
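Since the discussion leans on this assignment-problem view, it may help to state the LSAP explicitly. Using generic notation (not the paper's own symbols), with an n × n cost matrix C whose entry C_ij is the cost of assigning job j to worker i, the LSAP seeks a one-to-one assignment minimizing the total cost:

    \min_{X \in \{0,1\}^{n \times n}} \sum_{i=1}^{n} \sum_{j=1}^{n} C_{ij} X_{ij}
    \quad \text{s.t.} \quad \sum_{i=1}^{n} X_{ij} = 1 \;\; \forall j,
    \qquad \sum_{j=1}^{n} X_{ij} = 1 \;\; \forall i,

where X_{ij} = 1 if and only if job j is assigned to worker i. In the underlay IoT setting of this paper, the rows and columns would correspond, for example, to IoT links and the reusable cellular channels, with C_ij derived from the resulting interference or achievable rate.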