FedMEC: Improving Efficiency of Differentially Private Federated Learning via Mobile Edge Computing



Jiale Zhang1 · Yanchao Zhao1 · Junyu Wang1 · Bing Chen1

© Springer Science+Business Media, LLC, part of Springer Nature 2020

Abstract

Federated learning is a recently proposed paradigm that presents significant advantages for privacy-preserving machine learning services. It enables deep learning applications on mobile devices, where a deep neural network (DNN) is trained in a decentralized manner among thousands of edge clients. However, directly applying the federated learning algorithm in the mobile edge computing environment incurs unacceptable computation costs on mobile edge devices. Moreover, during the training process, the frequent exchange of model parameters between participants and the central server increases the risk of leaking users' sensitive training data. Aiming at reducing the heavy computation cost of DNN training on edge devices while providing strong privacy guarantees, we propose a mobile edge computing enabled federated learning framework, called FedMEC, which integrates the model partition technique and differential privacy simultaneously. In FedMEC, the most complex computations can be outsourced to the edge servers by splitting a DNN model into two parts. Furthermore, we apply a differentially private data perturbation method to prevent privacy leakage from the local model parameters, in which the updates from an edge device to the edge server are perturbed by Laplace noise. To validate the proposed FedMEC, we conduct a series of experiments on an image classification task under federated learning settings. The results demonstrate the effectiveness and practicality of our FedMEC scheme.

Keywords Federated learning · Mobile edge computing · Deep neural network · Model partition · Differential privacy
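To make the two mechanisms summarized above concrete, the following minimal Python sketch (not taken from the paper; the layer sizes, clipping bound, and privacy budget epsilon are illustrative assumptions) shows a device running only the shallow, client-side part of a partitioned DNN and perturbing the resulting activations with Laplace noise before uploading them to an edge server.

import numpy as np

rng = np.random.default_rng(0)

# Client-side part of the partitioned DNN: here, a single linear + ReLU layer
# (the deeper, heavier layers are assumed to run on the edge server).
W = rng.normal(scale=0.1, size=(64, 784))

def client_forward(x, epsilon=1.0, clip=1.0):
    """Run the device-side layers and perturb the output with Laplace noise."""
    h = np.maximum(W @ x, 0.0)                               # shallow on-device computation
    h = h * min(1.0, clip / (np.linalg.norm(h, 1) + 1e-12))  # clip L1 norm to bound sensitivity
    noise = rng.laplace(loc=0.0, scale=clip / epsilon, size=h.shape)
    return h + noise                                          # only this noisy vector is uploaded

x = rng.random(784)          # e.g. a flattened 28x28 image from the local dataset
upload = client_forward(x)   # perturbed activations sent to the edge server

Under this split, only the noisy intermediate representation ever leaves the device, while the edge server carries out the remaining, more expensive layers of training.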

The material in this paper was presented partially at “An Efficient Federated Learning Scheme with Differential Privacy in Mobile Edge Computing”, EAI MLICOM 2019 [24].

Bing Chen
[email protected]

Jiale Zhang
[email protected]

Yanchao Zhao
[email protected]

Junyu Wang
[email protected]

1 College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, China

1 Introduction

With the explosive growth of smart Internet of Things (IoT) devices, intelligent mobile networking applications have become ubiquitous, which triggers high demands for on-device big data analytics. Meanwhile, cloud-based deep learning services [1], including recommendation systems, health monitoring, language translation, and many others [2–4], call for efficiency improvements in running deep neural networks (DNNs) on mobile devices. However, such a centralized deep learning framework requires users to outsource their sensitive data to the remote cloud in order to train the corresponding learning models, which raises significant concerns about privacy as well as on-device computation resources [5]. To this end, m