Federated edge learning (FEEL) has emerged as an effective alternative to reduce the large communication latency of cloud-based machine learning solutions while preserving data privacy. Unfortunately, the learning performance of FEEL may be compromised by the limited training data in a single edge cluster. In this paper, we investigate a novel FEEL framework, namely semi-decentralized federated edge learning (SD-FEEL). By allowing model aggregation across different edge clusters, SD-FEEL retains the benefit of FEEL in reducing training latency while improving learning performance through access to richer training data from multiple edge clusters. We present a training algorithm for SD-FEEL with three main procedures in each round, namely local model updates, intra-cluster model aggregation, and inter-cluster model aggregation, and prove that it converges on non-independent and identically distributed (non-IID) data. We also characterize how the network topology of the edge servers and the communication overhead of inter-cluster model aggregation affect training performance. Experimental results corroborate our analysis and demonstrate the effectiveness of SD-FEEL in achieving fast convergence. We also provide guidelines for choosing critical hyper-parameters of the training algorithm.
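For concreteness, the following is a minimal sketch of one SD-FEEL round under stated assumptions: clients' locally updated parameter vectors are already grouped by edge cluster, and the inter-server topology is encoded as a doubly-stochastic mixing matrix. The names `cluster_models`, `mixing`, and `sd_feel_round` are illustrative, not from the paper.

```python
import numpy as np

def sd_feel_round(cluster_models, mixing):
    """One (hypothetical) SD-FEEL aggregation round.

    cluster_models: list of lists of 1-D parameter arrays, one inner
        list per edge cluster (clients' locally updated models).
    mixing: (n_clusters, n_clusters) doubly-stochastic matrix encoding
        the inter-server network topology.
    """
    # Intra-cluster aggregation: each edge server averages its clients.
    server_models = [np.mean(models, axis=0) for models in cluster_models]
    # Inter-cluster aggregation: each server mixes with its neighbors
    # according to the topology-dependent mixing weights.
    stacked = np.stack(server_models)   # shape: (n_clusters, dim)
    mixed = mixing @ stacked            # one gossip/consensus step
    return [mixed[i] for i in range(len(server_models))]
```

Running several mixing steps per round would spread information across the topology faster at the cost of extra inter-server communication, mirroring the interplay between topology and communication overhead discussed above.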
Distributed learning algorithms aim to leverage the distributed and diverse data stored on users' devices to learn global phenomena by performing training among participating devices and periodically aggregating their local model parameters into a global model.
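As a concrete instance of this periodic aggregation step, the widely used FedAvg rule (a standard choice, not necessarily the one in this abstract) forms the global model as a data-weighted average of the $K$ participants' local parameters:

\[
\mathbf{w}_{t+1} \;=\; \sum_{k=1}^{K} \frac{n_k}{n}\, \mathbf{w}_{t+1}^{(k)}, \qquad n = \sum_{k=1}^{K} n_k,
\]

where \(\mathbf{w}_{t+1}^{(k)}\) is device $k$'s locally updated model and \(n_k\) its number of training samples.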
Federated learning (FL) is a prevailing distributed learning paradigm in which a large number of workers jointly learn a model without sharing their training data. However, high communication costs can arise in FL due to large-scale (deep) learning models.
Federated learning is an emerging distributed machine learning framework for privacy preservation. However, models trained with federated learning usually perform worse than those trained in the standard centralized learning mode, especially when the training data are not independent and identically distributed (non-IID).
This study develops a federated learning (FL) framework that overcomes the communication costs that grow steeply with model size in typical frameworks, without compromising model performance. To this end, the framework builds on the idea of leveraging an unlabeled open dataset.
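One common realization of this idea (an assumption here, since the abstract is truncated) is to exchange model outputs on the shared unlabeled data instead of model parameters, and to let each client distill toward the averaged outputs. A minimal sketch, assuming hypothetical per-client `predict` callables and a shared array `open_x`:

```python
import numpy as np

def distillation_targets(client_predict_fns, open_x):
    """Average clients' soft predictions on a shared unlabeled open
    dataset to form global distillation targets; exchanging these
    targets costs far less than exchanging full model parameters."""
    preds = np.stack([predict(open_x) for predict in client_predict_fns])
    return preds.mean(axis=0)  # each client then distills toward this
```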
Owing to the increasing need for massive data analysis and model training at the network edge, as well as rising concerns about data privacy, a new distributed training framework called federated learning (FL) has emerged. In each iteration of FL, participating devices train the model locally on their own data and upload only the resulting model updates to a server for aggregation.
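To make this iteration concrete, a minimal FedAvg-style round might look as follows; the model is treated as a plain parameter vector, and `local_grad` is a hypothetical stand-in for the gradient each device computes on its private data:

```python
import numpy as np

def fl_round(global_w, device_datasets, local_grad, lr=0.1, local_steps=5):
    """One FL iteration: each device refines the global model on its own
    data; only the resulting parameters (never the raw data) are uploaded
    and averaged by the server."""
    uploads = []
    for data in device_datasets:
        w = global_w.copy()
        for _ in range(local_steps):        # local SGD on private data
            w -= lr * local_grad(w, data)
        uploads.append(w)
    return np.mean(uploads, axis=0)         # server-side aggregation
```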