
Joint Annotator-and-Spectrum Allocation in Wireless Networks for Crowd Labelling

Published by: Xiaoyang Li
Publication date: 2019
Language: English





The massive sensing data generated by the Internet of Things will provide fuel for ubiquitous artificial intelligence (AI), automating the operations of our society in domains ranging from transportation to healthcare. Realistic adoption of this technology, however, requires labelling the enormous data before AI models can be trained via supervised learning. To tackle this challenge, we explore a new paradigm of wireless crowd labelling, in which data are downloaded to many imperfect mobile annotators for repetition labelling by exploiting multicasting in wireless networks. In this cross-disciplinary area, integrating rate-distortion theory with the principle of repetition labelling for accuracy improvement gives rise to a new tradeoff between radio and annotator resources under a labelling-accuracy constraint. Building on this tradeoff and aiming to maximize the labelling throughput, this work focuses on the joint optimization of encoding rate, annotator clustering, and sub-channel allocation, which results in an NP-hard integer programming problem. To devise an efficient solution approach, we establish an optimal sequential annotator-clustering scheme based on the order of decreasing signal-to-noise ratios (SNRs), whereby the optimal solution can be found by an efficient tree search. Next, the solution is simplified by applying truncated channel inversion. Alternatively, the optimization problem can be recognized as a knapsack problem, which can be solved in pseudo-polynomial time by means of dynamic programming. In addition, exact policies are derived for the annotator-constrained and spectrum-constrained cases. Finally, simulation results demonstrate significant throughput gains of the optimal solution over a decoupled allocation of the two types of resources.
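
To make the knapsack connection concrete, the sketch below (our illustration, not the authors' implementation) groups annotators into candidate clusters in decreasing-SNR order and then selects clusters under an annotator budget with a classic 0/1 knapsack dynamic program. The cluster sizes, throughput values, and budget are hypothetical placeholders.

def sequential_clusters(snrs, cluster_size):
    # Group annotators in decreasing-SNR order, mirroring the sequential
    # clustering scheme described in the abstract.
    order = sorted(snrs, reverse=True)
    return [order[i:i + cluster_size] for i in range(0, len(order), cluster_size)]

def knapsack_allocate(clusters, annotator_budget):
    # clusters: list of (size, value) pairs, where `size` is the number of
    # annotators a cluster consumes and `value` its labelling throughput.
    # Classic 0/1 knapsack DP, pseudo-polynomial in the annotator budget.
    best = [0.0] * (annotator_budget + 1)
    for size, value in clusters:
        for b in range(annotator_budget, size - 1, -1):
            best[b] = max(best[b], best[b - size] + value)
    return best[annotator_budget]

# Hypothetical numbers purely for illustration:
print(sequential_clusters([3.1, 9.7, 5.2, 8.4], cluster_size=2))            # [[9.7, 8.4], [5.2, 3.1]]
print(knapsack_allocate([(3, 5.0), (2, 2.5), (4, 6.0)], annotator_budget=6))  # 8.5

The dynamic program runs in O(annotator budget × number of clusters) time, which is the pseudo-polynomial complexity the abstract refers to.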




Read also

In this paper, a novel intelligent reflecting surface (IRS)-assisted wireless powered communication network (WPCN) architecture is proposed for low-power Internet-of-Things (IoT) devices, where the IRS is exploited to improve the performance of the WPCN under imperfect channel state information (CSI). We formulate a hybrid access point (HAP) transmission-energy minimization problem through the joint design of time allocation, HAP energy beamforming, receive beamforming, user transmit power allocation, and the IRS energy- and information-reflection coefficients, under imperfect CSI and a non-linear energy-harvesting model. Owing to the strong coupling among the optimization variables, the problem is non-convex and difficult to solve directly. To tackle this challenge, alternating optimization (AO) is applied to decouple the variables: the problem is divided into three sub-problems that are solved alternately. Difference-of-convex (DC) programming is applied to handle the non-convex rank-one constraint arising in the optimization of the IRS energy- and information-reflection coefficients. Numerical simulations verify the effectiveness of the proposed algorithm in reducing HAP transmission energy compared to other benchmarks.
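
As a toy illustration of the alternating-optimization principle invoked in the preceding abstract (our sketch; it bears no relation to the paper's actual sub-problems), the snippet below minimizes a simple biconvex function by alternating exact minimization over each variable while the other is held fixed:

def alternating_optimization(steps=100):
    # Minimize f(x, y) = (x*y - 1)**2 + 0.1*(x**2 + y**2) by alternating
    # exact minimization over x (with y fixed) and over y (with x fixed).
    x, y = 1.0, 2.0
    for _ in range(steps):
        x = y / (y * y + 0.1)  # argmin over x, from setting df/dx = 0
        y = x / (x * x + 0.1)  # argmin over y, from setting df/dy = 0
    return x, y

print(alternating_optimization())  # converges toward x = y = sqrt(0.9)

Each alternation can only decrease the objective, which is why AO is a natural fit for variables that are intractable jointly but easy to optimize one block at a time.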
We consider a fully-loaded ground wireless network supporting unmanned aerial vehicle (UAV) transmission services. To enable the overload transmissions to a ground user (GU) and a UAV, two transmission schemes are employed, namely non-orthogonal multiple access (NOMA) and relaying, depending on whether or not the GU and UAV are served simultaneously. Assuming the system operates with infinite-blocklength (IBL) codes, the IBL throughputs of both the GU and the UAV are derived under the two schemes. More importantly, we also consider the scenario in which data packets are transmitted via finite-blocklength (FBL) codes, i.e., data transmission to both the UAV and the GU is performed under low-latency and high-reliability constraints. In this setting, the FBL throughputs are characterized, again considering the two schemes of NOMA and relaying. Following the IBL and FBL throughput characterizations, optimal resource allocation designs are proposed to maximize the UAV throughput while guaranteeing the throughput of the cellular user. Moreover, we prove that the relaying scheme is able to serve the UAV while improving the GU's performance, and that relaying potentially offers a higher UAV throughput in the FBL regime than in the IBL regime. On the other hand, the NOMA scheme provides a higher UAV throughput than relaying at the cost of a slight degradation of the GU's performance.
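
For context on the FBL regime discussed above: the standard normal approximation gives the maximal achievable rate over an AWGN channel as R ≈ log2(1 + γ) − sqrt(V/n) Q^{-1}(ε), with channel dispersion V = (1 − (1 + γ)^{-2}) (log2 e)^2. A minimal Python sketch follows; the operating point is our own assumption, not taken from the paper.

import math
from statistics import NormalDist

def fbl_rate(snr, n, eps):
    # Normal approximation for the maximal rate in bits per channel use at
    # blocklength n and block error probability eps.
    cap = math.log2(1 + snr)                               # IBL capacity
    disp = (1 - (1 + snr) ** -2) * math.log2(math.e) ** 2  # channel dispersion
    qinv = NormalDist().inv_cdf(1 - eps)                   # Q^{-1}(eps)
    return cap - math.sqrt(disp / n) * qinv

# Hypothetical operating point: 10 dB SNR, 200 channel uses, eps = 1e-5.
print(fbl_rate(10.0, 200, 1e-5), "vs IBL capacity", math.log2(11.0))

The gap between the two printed values is the FBL penalty that shrinks as the blocklength n grows, which is what drives the IBL-versus-FBL comparisons in the abstract.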
This paper investigates a full-duplex orthogonal-frequency-division multiple access (OFDMA)-based, multi-UAV-enabled wireless-powered Internet-of-Things (IoT) network. A swarm of unmanned aerial vehicles (UAVs) is first deployed in three dimensions (3D) to simultaneously charge all devices, i.e., a downlink (DL) charging period, and then flies to new locations within the area to collect information from scheduled devices over several epochs via OFDMA, i.e., an uplink (UL) communication period, reflecting the potentially limited number of channels available in Narrowband IoT (NB-IoT). To maximize the UL throughput of the IoT devices, we jointly optimize the UL and DL 3D deployment of the UAV swarm, including the device-UAV association, the scheduling order, and the UL-DL time allocation. In particular, the DL energy-harvesting (EH) threshold of the devices and the UL signal-decoding threshold of the UAVs are taken into account, and both line-of-sight (LoS) and non-line-of-sight (NLoS) channel models are adopted depending on the positions of the sensors and the UAVs. The impact of the limited-channel issue in NB-IoT is captured through the IoT scheduling policy: two policies, near-first (NF) and far-first (FF), are studied. It is shown that the NF policy outperforms the FF policy in terms of sum-throughput maximization, whereas FF outperforms NF in terms of system fairness.
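
At their core, the NF and FF policies compared above reduce to ordering devices by their distance to the serving UAV. A minimal sketch under assumed placeholder coordinates (function and variable names are ours):

import math

def schedule(devices, uav_pos, policy="NF"):
    # devices: list of (device_id, position) pairs; positions and uav_pos are
    # 3D coordinates. NF serves the nearest devices first, FF the farthest.
    return sorted(devices, key=lambda d: math.dist(d[1], uav_pos),
                  reverse=(policy == "FF"))

# Hypothetical coordinates purely for illustration:
devices = [("a", (0, 1, 0)), ("b", (3, 4, 0)), ("c", (1, 1, 0))]
print(schedule(devices, (0, 0, 10), "NF"))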
In multicell massive multiple-input multiple-output (MIMO) non-orthogonal multiple access (NOMA) networks, base stations (BSs) with multiple antennas deliver radio-frequency energy in the downlink, and Internet-of-Things (IoT) devices use the harvested energy to support uplink data transmission. This paper investigates the energy-efficiency (EE) problem for multicell massive MIMO NOMA networks with wireless power transfer (WPT). To maximize the EE of the network, we propose a novel joint power, time, antenna-selection, and subcarrier resource allocation scheme, which properly splits the time between energy harvesting and data transmission. Both perfect and imperfect channel state information (CSI) are considered, and the corresponding EE performance is analyzed. Under quality-of-service (QoS) requirements, an EE maximization problem is formulated, which is non-trivial due to its non-convexity. We first adopt nonlinear fractional programming to convert the problem into a convex one, and then develop a distributed alternating direction method of multipliers (ADMM)-based approach to solve it. Simulation results demonstrate that, compared with alternative methods, the proposed algorithm converges within fewer iterations and achieves better EE performance.
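
Nonlinear fractional programming of the kind mentioned above is commonly handled with Dinkelbach's method, which replaces the ratio maximization (here, bits per Joule) by a sequence of parameterized subtractive problems. Below is a toy single-link sketch with an assumed channel gain, power budget, and circuit power; the paper's multicell problem is far richer and is additionally solved distributively via ADMM.

import math

def inner(q, gain=2.0, p_max=1.0, p_circuit=0.1):
    # Maximize log2(1 + gain*p) - q*(p + p_circuit) over p in [0, p_max];
    # the unconstrained stationary point is p = 1/(q ln 2) - 1/gain.
    if q <= 0:
        p = p_max
    else:
        p = min(p_max, max(0.0, 1.0 / (q * math.log(2)) - 1.0 / gain))
    return math.log2(1 + gain * p), p + p_circuit

def dinkelbach(tol=1e-9, max_iter=100):
    q = 0.0                        # current energy-efficiency estimate (bits/Joule)
    for _ in range(max_iter):
        rate, power = inner(q)
        if rate - q * power < tol:  # subtractive problem's optimum hits zero
            break
        q = rate / power
    return q

print(dinkelbach())

At convergence, the parameter q equals the optimal rate-to-power ratio, i.e., the maximal energy efficiency of the toy link.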
Motivated by the increasing computational capacity of wireless user equipments (UEs), e.g., smartphones, tablets, or vehicles, as well as increasing concerns about sharing private data, a new machine learning model has emerged, namely federated learning (FL), which decouples data acquisition from computation at the central unit. Unlike centralized learning in a data center, FL usually operates in a wireless edge network where the communication medium is resource-constrained and unreliable: due to limited bandwidth, only a portion of UEs can be scheduled for updates at each iteration, and because the wireless medium is shared, transmissions are subject to interference and are not guaranteed. The performance of an FL system in such a setting is not well understood. In this paper, an analytical model is developed to characterize the performance of FL in wireless networks. In particular, tractable expressions are derived for the convergence rate of FL in a wireless setting, accounting for the effects of both the scheduling scheme and inter-cell interference. Using the developed analysis, the effectiveness of three scheduling policies, i.e., random scheduling (RS), round robin (RR), and proportional fair (PF), is compared in terms of the FL convergence rate. It is shown that running FL with PF outperforms RS and RR if the network operates under a high signal-to-interference-plus-noise ratio (SINR) threshold, while RR is preferable when the SINR threshold is low. Moreover, the FL convergence rate decreases rapidly as the SINR threshold increases, confirming the importance of compressing and quantizing the update parameters. The analysis also reveals a trade-off between the number of scheduled UEs and the subchannel bandwidth under a fixed amount of available spectrum.
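
To make the three scheduling policies concrete, the following sketch (our simplification; the random rates stand in for SINR-dependent link rates, and all names are ours) selects k of n UEs per round under RS, RR, and PF:

import random

def select_ues(policy, n, k, round_idx, inst_rates, avg_rates):
    # inst_rates[i]: instantaneous achievable rate of UE i this round;
    # avg_rates[i]: its smoothed historical average (used only by PF).
    if policy == "RS":                               # random scheduling
        return random.sample(range(n), k)
    if policy == "RR":                               # round robin
        return [(round_idx * k + j) % n for j in range(k)]
    # PF: the k UEs with the largest instantaneous-to-average rate ratio.
    return sorted(range(n), key=lambda i: inst_rates[i] / avg_rates[i])[-k:]

n, k = 8, 2
inst = [random.uniform(0.1, 1.0) for _ in range(n)]
print(select_ues("PF", n, k, 0, inst, [0.5] * n))

PF's ratio criterion favors UEs whose current channel is good relative to their own history, which is the channel-awareness that the abstract credits for its advantage in the high-SINR-threshold regime.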