Unlike theoretical distributed learning (DL), DL over wireless edge networks faces the inherent dynamics/uncertainty of wireless connections and edge nodes, making DL less efficient or even inapplicable in highly dynamic wireless edge networks (e.g., those using mmW interfaces). This article addresses these problems by leveraging recent advances in coded computing and the deep dueling neural network architecture. By introducing coded structures/redundancy, a distributed learning task can be completed without waiting for straggling nodes. Unlike conventional coded computing, which only optimizes the code structure, coded distributed learning over the wireless edge also requires optimizing the selection/scheduling of wireless edge nodes with heterogeneous connections, computing capabilities, and straggling effects. However, even neglecting the aforementioned dynamics/uncertainty, the resulting joint optimization of coding and scheduling to minimize the distributed learning time turns out to be NP-hard. To tackle this and to account for the dynamics and uncertainty of wireless connections and edge nodes, we reformulate the problem as a Markov Decision Process and then design a novel deep reinforcement learning algorithm that employs the deep dueling neural network architecture to find the jointly optimal coding scheme and the best set of edge nodes for different learning tasks without explicit information about the wireless environment and the edge nodes' straggling parameters. Simulations show that the proposed framework reduces the average learning delay in wireless edge computing by up to 66% compared with other DL approaches. The jointly optimal framework in this article is also applicable to any distributed learning scheme with heterogeneous and uncertain computing nodes.
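A minimal sketch of the deep dueling architecture referred to above, written in PyTorch. The state and action dimensions, layer sizes, and the mapping of actions to (coding redundancy, edge-node subset) pairs are illustrative assumptions, not the authors' exact design.

```python
import torch
import torch.nn as nn

class DuelingQNet(nn.Module):
    def __init__(self, state_dim: int, num_actions: int, hidden: int = 128):
        super().__init__()
        self.feature = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        # Separate streams estimate the state value V(s) and the advantages A(s, a).
        self.value = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1))
        self.advantage = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, num_actions))

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        h = self.feature(state)
        v = self.value(h)                       # shape (batch, 1)
        a = self.advantage(h)                   # shape (batch, num_actions)
        # Combine the streams; subtracting the mean advantage keeps Q identifiable.
        return v + a - a.mean(dim=1, keepdim=True)

# Each action could index a (coding redundancy, selected edge-node subset) pair;
# the state could encode observed node response times and channel indicators.
net = DuelingQNet(state_dim=16, num_actions=32)
q_values = net(torch.randn(4, 16))              # Q(s, a) for a batch of 4 states
```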
Throughput and per-packet delay can present strong trade-offs that are important for delay-sensitive applications. We investigate such trade-offs using a random linear network coding scheme for one or more receivers in single-hop wireless packet erasure broadcast channels. We capture the delay sensitivities across different types of network applications using a class of delay metrics based on the norms of packet arrival times. With these delay metrics, we establish a unified framework to characterize the rate and delay requirements of applications and optimize system parameters. In the single-receiver case, we demonstrate the trade-off between average packet delay, which we view as the inverse of throughput, and maximum ordered inter-arrival delay for various system parameters. For a single broadcast channel with multiple receivers having different delay constraints and feedback delays, we jointly optimize the coding parameters and time-division scheduling parameters at the transmitters. We formulate the optimization problem as a Generalized Geometric Program (GGP). This approach allows the transmitters to adaptively adjust the coding and scheduling parameters for efficient allocation of network resources under varying delay constraints. In the case where the receivers are served by multiple non-interfering wireless broadcast channels, the same optimization problem is formulated as a Signomial Program, which is NP-hard in general. We provide approximation methods using successive formulations of geometric programs and show the convergence of the approximations.
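A small illustration of norm-based delay metrics of the kind described above, computed from in-order packet delivery times. The specific quantities below, reading the L1-type average as an inverse-throughput proxy and the maximum gap as the ordered inter-arrival delay, are illustrative assumptions rather than the paper's exact definitions.

```python
import numpy as np

def delay_metrics(delivery_times, p: float = 1.0) -> dict:
    """delivery_times[k] = time at which packet k is delivered in order."""
    t = np.sort(np.asarray(delivery_times, dtype=float))
    inter_arrival = np.diff(np.concatenate(([0.0], t)))   # ordered inter-arrival delays
    return {
        "average_delay": t.mean(),                         # ~ inverse-throughput proxy
        "max_inter_arrival": inter_arrival.max(),          # worst gap between in-order deliveries
        "p_norm": np.linalg.norm(inter_arrival, ord=p),    # general norm-based metric
    }

# Example: five packets decoded in order over an erasure channel.
print(delay_metrics([1.2, 2.9, 3.1, 6.0, 6.2], p=2))
```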
Interactive applications with automated feedback will largely influence the design of future networked infrastructures. In such applications, status information about an environment of interest is captured and forwarded to a compute node, which analyzes the information and generates a feedback message. Timely processing and forwarding must ensure that the feedback information is still applicable; thus, the quality-of-service parameter for such applications is the end-to-end latency over the entire loop. By modelling the communication of a feedback loop as a two-hop network, we address the problem of allocating network resources in order to minimize the delay violation probability (DVP), i.e., the probability of the end-to-end latency exceeding a target value. We investigate the influence of the network queue states along the network path on the performance of semi-static and dynamic scheduling policies. The former determine the schedule prior to the transmission of the packet, while the latter benefit from feedback on the queue states as time evolves and reallocate time slots depending on the evolution of the queues. The performance of the proposed policies is evaluated over variations of several system parameters and against comparison baselines. Results show that the proposed semi-static policy achieves close-to-optimal DVP and the dynamic policy outperforms state-of-the-art algorithms.
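A minimal Monte Carlo sketch of estimating the delay violation probability (DVP) for a two-hop loop under a semi-static schedule: the first `split` slots of each frame are given to hop 1 and the rest to hop 2. The per-slot success probabilities, frame structure, and deadline below are illustrative assumptions, not the system model or policies evaluated in the paper.

```python
import random

def estimate_dvp(p1=0.7, p2=0.6, chunks=4, split=5, frame=10,
                 deadline=25, runs=20000, seed=0):
    rng = random.Random(seed)
    violations = 0
    for _ in range(runs):
        done1 = done2 = 0                       # chunks delivered on hop 1 and hop 2
        t = 0
        while done2 < chunks and t < deadline:
            slot_for_hop1 = (t % frame) < split # semi-static: fixed slot split per frame
            if slot_for_hop1 and done1 < chunks:
                done1 += rng.random() < p1
            elif not slot_for_hop1 and done2 < done1:
                done2 += rng.random() < p2      # hop 2 can only forward received chunks
            t += 1
        violations += done2 < chunks            # deadline exceeded
    return violations / runs

print(estimate_dvp())   # estimated P(end-to-end latency > deadline)
```

A dynamic policy would instead reassign each slot to whichever hop currently has backlog, which is exactly the queue-state feedback advantage discussed above.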
Efficient and fair node scheduling is a major challenge in multihop wireless networks. In this work, we propose a distributed node scheduling algorithm, called Local Voting. The idea stems from the finding that the shortest delivery time, or delay, is obtained when the load is equalized throughout the network. Simulation results demonstrate that Local Voting achieves better performance in terms of average delay, maximum delay, and fairness compared to several representative scheduling algorithms from the literature. Despite being distributed, Local Voting performs very close to a centralized algorithm that is considered to achieve optimal performance.
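A hedged sketch of the load-equalisation idea behind Local Voting: each node compares its backlog-per-slot "load" with that of its neighbours and shifts time slots towards the locally heavier node. The update rule, step size, and the fact that slots are not strictly conserved here are simplifications for illustration, not the exact algorithm from the abstract.

```python
def local_voting_step(queues, slots, neighbours, gain=0.5):
    """queues[i]: backlog of node i; slots[i]: slots currently held by node i;
    neighbours[i]: indices of nodes within interference range of i."""
    load = [q / max(s, 1) for q, s in zip(queues, slots)]
    delta = [0.0] * len(slots)
    for i, nbrs in enumerate(neighbours):
        # Consensus-style correction: a node more loaded than its neighbours gains slots.
        delta[i] = gain * sum(load[i] - load[j] for j in nbrs)
    return [max(1, round(s + d)) for s, d in zip(slots, delta)]

# Three nodes on a line: the overloaded middle node gains slots from its neighbours.
print(local_voting_step(queues=[2, 10, 3], slots=[3, 3, 3],
                        neighbours=[[1], [0, 2], [1]]))
```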
In this paper, we propose a transfer learning (TL)-enabled edge-CNN framework for 5G industrial edge networks with privacy-preserving characteristics. In particular, the edge server can use an existing image dataset to train the CNN in advance, which is further fine-tuned based on the limited datasets uploaded from the devices. With the aid of TL, the devices that do not participate in the training only need to fine-tune the trained edge-CNN model rather than training it from scratch. Due to the energy budget of the devices and the limited communication bandwidth, a joint energy and latency problem is formulated, which is solved by decomposing the original problem into an uploading decision subproblem and a wireless bandwidth allocation subproblem. Experiments using ImageNet demonstrate that the proposed TL-enabled edge-CNN framework can achieve almost 85% of the baseline's prediction accuracy by uploading only about 1% of the model parameters, with an autoencoder compression ratio of 32.
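A minimal PyTorch sketch of the transfer-learning step described above: a CNN backbone standing in for the edge-server-trained model is frozen on the device, and only a small classification head is fine-tuned on locally available samples. The network shape, optimiser, and placeholder data are assumptions, not the paper's exact edge-CNN or training pipeline.

```python
import torch
import torch.nn as nn

backbone = nn.Sequential(                      # stands in for the edge-trained CNN
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten()
)
head = nn.Linear(16, 10)                       # device-side layer to fine-tune

for p in backbone.parameters():                # freeze the pre-trained part
    p.requires_grad = False

opt = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(8, 3, 32, 32)                  # placeholder local batch
y = torch.randint(0, 10, (8,))
logits = head(backbone(x))
loss = loss_fn(logits, y)
opt.zero_grad(); loss.backward(); opt.step()   # only the head's parameters are updated
```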
We consider the transmission of packets across a lossy end-to-end network path so as to achieve low in-order delivery delay. This can be formulated as a decision problem, namely deciding whether the next packet to send should be an information packet or a coded packet. Importantly, this decision is made based on delayed feedback from the receiver. While an exact solution to this decision problem is challenging, we exploit ideas from queueing theory to derive scheduling policies based on prediction of a receiver queue length that, while suboptimal, can be efficiently implemented and offer substantially better performance than state-of-the-art approaches. We obtain a number of useful analytic bounds that help characterise design trade-offs, and our analysis highlights that the use of prediction plays a key role in achieving good performance in the presence of significant feedback delay. Our approach readily generalises to networks of paths, and we illustrate this by application to multipath transport scheduler design.
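A hedged sketch of the prediction idea above: from the delayed feedback (the receiver backlog reported roughly one feedback delay ago) plus the packets sent since then, estimate the receiver's current backlog of un-decoded packets and send a coded packet once the estimate crosses a threshold. The predictor and the threshold rule below are illustrative, not the paper's exact policy.

```python
def next_packet_type(sent_since_feedback: int,
                     backlog_at_feedback: int,
                     erasure_prob: float,
                     threshold: float = 1.0) -> str:
    # Expected number of packets lost in flight since the feedback was generated.
    expected_losses = erasure_prob * sent_since_feedback
    # Predicted backlog of information packets the receiver is still missing.
    predicted_backlog = backlog_at_feedback + expected_losses
    return "coded" if predicted_backlog >= threshold else "information"

# With a 10% erasure rate and 8 unacknowledged packets in flight, the predicted
# backlog (0 + 0.8) stays below the threshold, so a new information packet is sent.
print(next_packet_type(sent_since_feedback=8, backlog_at_feedback=0, erasure_prob=0.1))
```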