Machine learning and wireless communication technologies are jointly enabling an intelligent edge, where federated edge learning (FEEL) is a promising training framework. As the wireless devices involved in FEEL are resource-limited in terms of communication bandwidth, computing power, and battery capacity, it is important to carefully schedule them to optimize the training performance. In this work, we consider an over-the-air FEEL system with analog gradient aggregation, and propose an energy-aware dynamic device scheduling algorithm to optimize the training performance under the energy constraints of devices, accounting for both the communication energy for gradient aggregation and the computation energy for local training. The consideration of computation energy makes dynamic scheduling challenging: devices are scheduled before local training, but the communication energy for over-the-air aggregation depends on the ℓ2-norm of the local gradient, which is only known after local training. We thus incorporate estimation methods into scheduling to predict the gradient norm. Taking the estimation error into account, we characterize the performance gap between the proposed algorithm and its offline counterpart. Experimental results show that, under a highly unbalanced local data distribution, the proposed algorithm can increase the accuracy on the CIFAR-10 dataset by 4.9% compared with the myopic benchmark, while satisfying the energy constraints.
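As a rough illustration of the scheduling dilemma described above, the following minimal sketch decides which devices to schedule before local training. It assumes the previous round's gradient norm is the predictor, that analog-aggregation transmit energy scales with the squared norm over the channel gain, and that a Lyapunov-style virtual queue tracks each device's energy debt; none of these names or models are the paper's, they are illustrative stand-ins.

```python
def schedule_devices(prev_norms, channel_gains, comp_energy, queues, V=1.0):
    """Energy-aware scheduling sketch: admit devices whose estimated
    per-round energy keeps their virtual energy-deficit queue in check.
    All variable names and the norm predictor are assumptions."""
    selected = []
    for k, (g_prev, h, e_cmp) in enumerate(zip(prev_norms, channel_gains, comp_energy)):
        g_hat = g_prev                       # predict this round's gradient norm
                                             # from the previous round (assumption)
        e_com = (g_hat ** 2) / max(h, 1e-9)  # analog-aggregation energy grows with
                                             # squared norm over channel gain
        e_hat = e_cmp + e_com                # computation + communication energy
        # Lyapunov-style trade-off: learning utility vs. queued energy debt
        if V - queues[k] * e_hat > 0:
            selected.append(k)
    return selected
```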
In federated learning (FL), devices contribute to the global training by uploading their local model updates via wireless channels. Due to limited computation and communication resources, device scheduling is crucial to the convergence rate of FL. In this paper, we propose a joint device scheduling and resource allocation policy to maximize the model accuracy within a given total training time budget for latency-constrained wireless FL. A lower bound on the reciprocal of the training performance loss, in terms of the number of training rounds and the number of scheduled devices per round, is derived. Based on the bound, the accuracy maximization problem is solved by decoupling it into two sub-problems. First, given the scheduled devices, the optimal bandwidth allocation suggests allocating more bandwidth to the devices with worse channel conditions or weaker computation capabilities. Then, a greedy device scheduling algorithm is introduced, which in each step selects the device consuming the least updating time under the optimal bandwidth allocation, until the lower bound begins to increase, meaning that scheduling more devices would degrade the model accuracy. Experiments show that the proposed policy outperforms state-of-the-art scheduling policies across a wide range of data distributions and cell radii.
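A minimal sketch of the greedy step described above, assuming placeholder callables `round_time_fn` (per-round latency of a device set under the optimal bandwidth allocation) and `bound_fn` (the derived lower-bound expression); both are stand-ins for the paper's formulas, not its actual API.

```python
def greedy_schedule(devices, round_time_fn, bound_fn):
    """Greedy scheduling sketch: repeatedly add the device with the
    smallest per-round update time, and stop once the convergence-bound
    proxy starts to increase (more devices would hurt accuracy)."""
    scheduled, best = [], float("inf")
    remaining = list(devices)
    while remaining:
        # device whose addition yields the shortest round under optimal bandwidth
        cand = min(remaining, key=lambda d: round_time_fn(scheduled + [d]))
        value = bound_fn(scheduled + [cand])
        if value >= best:        # bound begins to increase: stop scheduling
            break
        best = value
        scheduled.append(cand)
        remaining.remove(cand)
    return scheduled
```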
In a vehicular edge computing (VEC) system, vehicles can share their surplus computation resources to provide cloud computing services. The highly dynamic environment of the vehicular network makes it challenging to guarantee the task offloading delay. To this end, we introduce task replication to the VEC system, where the replicas of a task are offloaded to multiple vehicles at the same time, and the task is completed upon the first response among the replicas. First, the impact of the number of task replicas on the offloading delay is characterized, and the optimal number of task replicas is approximated in closed form. Based on this analytical result, we design a learning-based task replication algorithm (LTRA) using combinatorial multi-armed bandit theory, which works in a distributed manner and automatically adapts to the dynamics of the VEC system. A realistic traffic scenario is used to evaluate the delay performance of the proposed algorithm. Results show that, under our simulation settings, LTRA with an optimized number of task replicas can reduce the average offloading delay by over 30% compared to the benchmark without task replication, while improving the task completion ratio from 97% to 99.6%.
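The bandit core of LTRA can be sketched as a combinatorial lower-confidence-bound rule, assuming the delay of every served replica is observed for learning and that the `k` chosen vehicles form the super-arm; the confidence constant and all names are illustrative, not the paper's.

```python
import numpy as np

def ltra_select(means, counts, t, k):
    """CMAB sketch of replica placement: pick the k vehicles (arms) with
    the most optimistic delay estimates; k is the (approximated) optimal
    number of task replicas."""
    lcb = means - np.sqrt(1.5 * np.log(t + 1) / np.maximum(counts, 1))
    return np.argsort(lcb)[:k]              # smaller estimated delay is better

def ltra_update(means, counts, chosen, delays):
    """Update the empirical mean delay of each vehicle that served a replica."""
    for v, d in zip(chosen, delays):
        counts[v] += 1
        means[v] += (d - means[v]) / counts[v]
```

In each round, `ltra_select` places the replicas and the observed delays feed back through `ltra_update`, so the placement tracks the time-varying service capability of nearby vehicles without any central coordinator.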
As 5G and the Internet of Things (IoT) are deeply integrated into vertical industries such as autonomous driving and industrial robotics, timely status updates are crucial for remote monitoring and control. In this regard, the Age of Information (AoI) has been proposed to measure the freshness of status updates. However, AoI grows linearly with time and is agnostic to context. We propose a context-based metric, named Urgency of Information (UoI), to measure the nonlinear, time-varying importance and the non-uniform context-dependence of status information. This paper first establishes a theoretical framework for UoI characterization, and then provides UoI-optimal status updating and user scheduling schemes in both single-terminal and multi-terminal cases. Specifically, an update-index-based scheme is proposed for the single-terminal system, where the terminal updates and transmits whenever its update index exceeds a threshold. For the multi-terminal case, the UoI of the proposed scheduling scheme is proven to be upper-bounded, and its decentralized implementation via Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) is also provided. In simulations, the proposed updating and scheduling schemes notably outperform existing ones, such as round-robin and AoI-optimal schemes, in terms of UoI, error-bound violation, and control system stability.
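To make the single-terminal rule concrete, here is a minimal sketch under illustrative assumptions: UoI is taken as a context weight times the squared age (one possible nonlinear age cost), and the update index as the urgency recoverable by updating now; the paper's exact index and dynamics differ.

```python
def run_update_index(weights, threshold):
    """Update-index sketch for one terminal: transmit whenever the
    urgency that an update would remove exceeds a threshold, which is
    tuned to respect the update-frequency budget (assumption)."""
    age, uoi_trace = 0, []
    for w in weights:                # w: time-varying context weight per slot
        index = w * age ** 2         # urgency recovered by updating now
        if index > threshold:
            age = 0                  # a successful update resets the age
        age += 1
        uoi_trace.append(w * age ** 2)
    return uoi_trace
```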
Timely status updating is crucial for future applications that involve remote monitoring and control, such as autonomous driving and the Industrial Internet of Things (IIoT). The Age of Information (AoI) has been proposed to measure the freshness of status updates. However, it is incapable of capturing critical systematic context information that indicates the time-varying importance of the status, and the dynamic evolution of the status itself. In this paper, we propose a context-based metric, namely the Urgency of Information (UoI), to evaluate the timeliness of status updates. Compared to AoI, the new metric incorporates both time-varying context information and dynamic status evolution, which enables the analysis of context-based adaptive status update schemes, as well as more effective remote monitoring and control. The minimization of the average UoI for a status update terminal with an updating frequency constraint is investigated, and an update-index-based adaptive scheme is proposed. Simulation results show that the proposed scheme achieves near-optimal performance with low computational complexity.
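One plausible formalization consistent with this description (the symbols below are ours for illustration, not the paper's notation) weights a penalty on the status estimation error by a time-varying context coefficient:

```latex
\mathrm{UoI}(t) = \omega(t)\,\delta\big(Q(t)\big)
```

where $\omega(t)$ is the context weight, $Q(t)$ is the status estimation error evolving with the source dynamics, and $\delta(\cdot)$ is a nonnegative penalty function. AoI would be recovered as the special case in which $\omega(t)\equiv 1$ and $\delta(Q(t))$ reduces to the time elapsed since the last update.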
Owing to the increasing need for massive data analysis and model training at the network edge, as well as rising concerns about data privacy, a new distributed training framework called federated learning (FL) has emerged. In each iteration of FL (called a round), the edge devices update local models based on their own data and contribute to the global training by uploading the model updates via wireless channels. Due to limited spectrum resources, only a portion of the devices can be scheduled in each round. While most of the existing work on scheduling focuses on the convergence of FL with respect to rounds, the convergence performance under a total training time budget has not yet been explored. In this paper, a joint bandwidth allocation and scheduling problem is formulated to capture the long-term convergence performance of FL, and is solved by decoupling it into two sub-problems. For the bandwidth allocation sub-problem, the derived optimal solution suggests allocating more bandwidth to the devices with worse channel conditions or weaker computation capabilities. For the device scheduling sub-problem, by revealing the trade-off between the number of rounds required to attain a certain model accuracy and the latency per round, a greedy policy is motivated, which repeatedly selects the device that consumes the least time in model updating, until a good trade-off between learning efficiency and per-round latency is achieved. Experiments show that the proposed policy outperforms other state-of-the-art scheduling policies, achieving the best model accuracy under training time budgets.
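The bandwidth-allocation intuition can be sketched by equalizing completion times: find the smallest round deadline that every scheduled device can meet when both computation and upload time are counted, so devices with worse channels or slower processors automatically receive wider bands. The Shannon-rate model and all names below are assumptions, not the paper's notation.

```python
import math

def allocate_bandwidth(t_cmp, snr, data_bits, B, tol=1e-6):
    """Bisection sketch: smallest deadline T such that the per-device
    bandwidths needed to upload data_bits by T sum to at most B."""
    def need(T):  # total bandwidth required to meet deadline T
        total = 0.0
        for tc, s in zip(t_cmp, snr):
            if T <= tc:
                return float("inf")   # no bandwidth can meet this deadline
            total += data_bits / ((T - tc) * math.log2(1 + s))
        return total
    lo, hi = max(t_cmp), max(t_cmp) + 1.0
    while need(hi) > B:               # grow the upper bound until feasible
        hi *= 2
    while hi - lo > tol:              # bisect on the deadline
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if need(mid) > B else (lo, mid)
    T = hi
    return [data_bits / ((T - tc) * math.log2(1 + s)) for tc, s in zip(t_cmp, snr)]
```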
Future wireless access networks need to support the diversified quality of service (QoS) metrics required by various types of Internet-of-Things (IoT) devices, e.g., the age of information (AoI) for status-generating sources and ultra-low latency for safety information in vehicular networks. In this paper, a novel inner-state driven random access (ISDA) framework is proposed based on distributed policy learning, in particular a cross-entropy method. Conventional random access schemes, e.g., $p$-CSMA, assume stateless terminals and thus assign equal priority to all of them. In ISDA, the inner state of each terminal is described by a time-varying state vector, and the transmission probabilities of terminals in the contention period are determined by their respective inner states. Neural networks are leveraged to approximate the function mappings from inner states to transmission probabilities, and an iterative approach is adopted to improve these mappings in a distributed manner. Experimental results show that ISDA improves the QoS of heterogeneous terminals simultaneously compared to conventional CSMA schemes.
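A minimal sketch of the policy-improvement loop via the cross-entropy method, with `evaluate` a placeholder for the simulated QoS reward of a terminal's network parameter vector (how the vector parameterizes the inner-state-to-probability mapping is an assumption here):

```python
import numpy as np

def cross_entropy_train(evaluate, dim, iters=50, pop=64, elite_frac=0.2, seed=0):
    """Cross-entropy method sketch: sample parameter vectors for the
    inner-state -> transmission-probability network, keep the elite by
    reward, and refit the Gaussian sampling distribution."""
    rng = np.random.default_rng(seed)
    mu, sigma = np.zeros(dim), np.ones(dim)
    n_elite = max(1, int(pop * elite_frac))
    for _ in range(iters):
        thetas = rng.normal(mu, sigma, size=(pop, dim))
        rewards = np.array([evaluate(th) for th in thetas])
        elite = thetas[np.argsort(rewards)[-n_elite:]]   # highest rewards
        mu = elite.mean(axis=0)                          # refit mean
        sigma = elite.std(axis=0) + 1e-3                 # keep exploration alive
    return mu
```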