
Multi-Stage Hybrid Federated Learning over Large-Scale D2D-Enabled Fog Networks

Publication date: 2020
Language: English





Federated learning has generated significant interest, with nearly all works focused on a "star" topology in which nodes/devices are each connected to a central server. We migrate away from this architecture and extend it through the network dimension to the case where there are multiple layers of nodes between the end devices and the server. Specifically, we develop multi-stage hybrid federated learning (MH-FL), a hybrid of intra- and inter-layer model learning that considers the network as a multi-layer cluster-based structure, each layer of which consists of multiple device clusters. MH-FL considers the topology structures among the nodes in the clusters, including local networks formed via device-to-device (D2D) communications. It orchestrates the devices at different network layers in a collaborative/cooperative manner (i.e., using D2D interactions) to form local consensus on the model parameters, and combines this with multi-stage parameter relaying between layers of the tree-shaped hierarchy. We derive an upper bound on the convergence of MH-FL with respect to parameters of the network topology (e.g., the spectral radius) and of the learning algorithm (e.g., the number of D2D rounds in different clusters). We obtain a set of policies for the D2D rounds at different clusters that guarantee either a finite optimality gap or convergence to the global optimum. We then develop a distributed control algorithm for MH-FL that tunes the D2D rounds in each cluster over time to meet specific convergence criteria. Our experiments on real-world datasets verify our analytical results and demonstrate the advantages of MH-FL in terms of resource utilization metrics.
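To make the intra-cluster consensus and inter-layer relaying concrete, the sketch below illustrates one upward aggregation pass in Python. The mixing matrices, cluster-head choice, and round counts are illustrative assumptions for exposition, not the exact MH-FL formulation from the paper.

```python
import numpy as np

def d2d_consensus(params, W, rounds):
    """Run a number of D2D consensus rounds inside one cluster.

    params : (n_devices, d) array of local model parameters
    W      : (n_devices, n_devices) doubly stochastic mixing matrix built
             from the cluster's D2D connectivity graph
    The residual consensus error is governed by a spectral-radius term of
    the kind that appears in the MH-FL convergence bound.
    """
    for _ in range(rounds):
        params = W @ params          # each device averages with its D2D neighbors
    return params

def aggregate_one_layer(clusters, mixing, d2d_rounds):
    """One upward relaying pass over the clusters of a single layer.

    clusters   : list of (n_i, d) parameter arrays, one per cluster
    mixing     : list of mixing matrices, one per cluster
    d2d_rounds : list of D2D round counts, one per cluster
    Returns the parameters that each (hypothetical) cluster head relays
    to its parent node in the next layer of the tree.
    """
    relayed = []
    for params, W, rounds in zip(clusters, mixing, d2d_rounds):
        consensus = d2d_consensus(params, W, rounds)
        relayed.append(consensus[0])   # device 0 acts as the cluster head here
    return np.stack(relayed)
```

In the full scheme this pass would be repeated layer by layer up to the server, and the distributed control algorithm described above would tune the per-cluster round counts over time to meet the desired convergence criterion.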



Related research

Machine learning (ML) tasks are becoming ubiquitous in today's network applications. Federated learning has emerged recently as a technique for training ML models at the network edge by leveraging processing capabilities across the nodes that collect the data. There are several challenges with employing conventional federated learning in contemporary networks, due to the significant heterogeneity in compute and communication capabilities across devices. To address this, we advocate a new learning paradigm called fog learning, which intelligently distributes ML model training across the continuum of nodes from edge devices to cloud servers. Fog learning enhances federated learning along three major dimensions: network, heterogeneity, and proximity. It considers a multi-layer hybrid learning framework consisting of heterogeneous devices with various proximities. It accounts for the topology structures of the local networks among the heterogeneous nodes at each network layer, orchestrating them for collaborative/cooperative learning through device-to-device (D2D) communications. This migrates from the star network topologies used for parameter transfers in federated learning to more distributed topologies at scale. We discuss several open research directions toward realizing fog learning.
A mega-constellation of low-altitude earth orbit (LEO) satellites (SATs) and burgeoning unmanned aerial vehicles (UAVs) are promising enablers for high-speed and long-distance communications in beyond-fifth-generation (5G) systems. Integrating SATs and UAVs within a non-terrestrial network (NTN), in this article we investigate the problem of forwarding packets between two faraway ground terminals through SAT and UAV relays using either millimeter-wave (mmWave) radio-frequency (RF) or free-space optical (FSO) links. To maximize communication efficiency, the real-time associations with orbiting SATs and the moving trajectories of the UAVs should be optimized jointly with suitable FSO/RF link selection, which is challenging due to the time-varying network topology and the huge number of possible control actions. To overcome this difficulty, we lift the problem to multi-agent deep reinforcement learning (MARL) with a novel action dimensionality reduction technique. Simulation results corroborate that our proposed SAT-UAV integrated scheme achieves 1.99x higher end-to-end sum throughput compared to a benchmark scheme with fixed ground relays. While improving the throughput, our proposed scheme also aims to reduce the UAV control energy, yielding 2.25x higher energy efficiency than a baseline method that only maximizes the throughput. Lastly, thanks to utilizing hybrid FSO/RF links, the proposed scheme achieves up to 62.56x higher peak throughput and 21.09x higher worst-case throughput than the cases utilizing either RF or FSO links alone, highlighting the importance of co-designing SAT-UAV associations, UAV trajectories, and hybrid FSO/RF links in beyond-5G NTNs.
Unmanned aerial vehicles (UAVs), or drones, are envisioned to support extensive applications in next-generation wireless networks in both civil and military fields. Empowering UAV networks with intelligence through artificial intelligence (AI), especially machine learning (ML) techniques, is inevitable and appealing for enabling the aforementioned applications. To solve the problems of traditional cloud-centric ML for UAV networks, such as privacy concerns, unacceptable latency, and resource burden, a distributed ML technique, i.e., federated learning (FL), has recently been proposed to enable multiple UAVs to collaboratively train an ML model without exposing raw data. However, almost all existing FL paradigms are still centralized, i.e., a central entity is in charge of ML model aggregation and fusion over the whole network, which could result in a single point of failure and is inappropriate for UAV networks with both unreliable nodes and links. Thus motivated, in this article we propose a novel architecture called DFL-UN (Decentralized Federated Learning for UAV Networks), which enables FL within UAV networks without a central entity. We also conduct a preliminary simulation study to validate the feasibility and effectiveness of the DFL-UN architecture. Finally, we discuss the main challenges and potential research directions for DFL-UN.
This paper studies a federated learning (FL) system in which multiple FL services co-exist in a wireless network and share common wireless resources. It fills the void of wireless resource allocation for multiple simultaneous FL services in the existing literature. Our method designs a two-level resource allocation framework comprising intra-service resource allocation and inter-service resource allocation. The intra-service resource allocation problem aims to minimize the length of FL rounds by optimizing the bandwidth allocation among the clients of each FL service. Based on this, an inter-service resource allocation problem is further considered, which distributes bandwidth resources among multiple simultaneous FL services. We consider both cooperative and selfish providers of the FL services. For cooperative FL service providers, we design a distributed bandwidth allocation algorithm to optimize the overall performance of multiple FL services, while catering to fairness among FL services and the privacy of clients. For selfish FL service providers, a new auction scheme is designed with the FL service owners as the bidders and the network provider as the auctioneer. The designed auction scheme strikes a balance between overall FL performance and fairness. Our simulation results show that the proposed algorithms outperform other benchmarks under various network conditions.
Sixth-Generation (6G)-based Internet of Everything applications (e.g., autonomous driving cars) have attracted remarkable interest. Autonomous driving cars using federated learning (FL) can enable various smart services. Although FL implements distributed machine learning model training without requiring device data to be moved to a centralized server, it has its own implementation challenges, such as robustness, centralized server security, communication resource constraints, and privacy leakage due to the capability of a malicious aggregation server to infer sensitive information of end devices. To address the aforementioned limitations, a dispersed federated learning (DFL) framework for autonomous driving cars is proposed to offer robust, communication resource-efficient, and privacy-aware learning. A mixed-integer non-linear programming (MINLP) optimization problem is formulated to jointly minimize the loss in FL model accuracy due to packet errors and the transmission latency. Due to the NP-hard and non-convex nature of the formulated MINLP problem, we propose a Block Successive Upper-bound Minimization (BSUM)-based solution. Furthermore, the performance of the proposed scheme is compared with three baseline schemes. Extensive numerical results are provided to show the validity of the proposed BSUM-based scheme.
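As a rough illustration of the BSUM pattern invoked above (and not the paper's specific MINLP formulation), the sketch below cyclically minimizes each block of variables with the others held fixed; for this smooth toy objective, the exact block subproblem serves as its own upper-bound surrogate.

```python
def bsum(block_minimizers, x0, iters=50):
    """Generic BSUM-style loop (illustrative sketch).

    block_minimizers[i](x) returns the minimizer over block i of a locally
    tight upper-bound surrogate of the objective, with all other blocks of
    x held fixed. Blocks are plain floats here for simplicity.
    """
    x = list(x0)
    for _ in range(iters):
        for i, minimize_block in enumerate(block_minimizers):
            x[i] = minimize_block(x)   # update one block, keep the rest fixed
    return x

# Toy problem: minimize (a - 3)^2 + (a*b - 2)^2 over the blocks a and b.
min_a = lambda x: (3 + 2 * x[1]) / (1 + x[1] ** 2)   # exact minimizer over a
min_b = lambda x: 2 / x[0]                           # exact minimizer over b
a, b = bsum([min_a, min_b], x0=[1.0, 1.0])
print(a, b)   # approaches a = 3, b = 2/3, where the objective reaches 0
```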
