
Wireless Power Transfer for Future Networks: Signal Processing, Machine Learning, Computing, and Sensing

Published by Bruno Clerckx
Publication date: 2021
Research language: English





Wireless power transfer (WPT) is an emerging paradigm that will enable using wireless to its full potential in future networks, not only to convey information but also to deliver energy. Such networks will enable trillions of future low-power devices to sense, compute, connect, and energize anywhere, anytime, and on the move. The design of such future networks brings new challenges and opportunities for signal processing, machine learning, sensing, and computing so as to make the best use of RF radiation, spectrum, and network infrastructure in providing cost-effective and real-time power supplies to wireless devices and enabling wireless-powered applications. In this paper, we first review recent signal processing techniques to make WPT and wireless information and power transfer as efficient as possible. Topics include power amplifier and energy harvester nonlinearities, active and passive beamforming, intelligent reflecting surfaces, receive combining with multi-antenna harvesters, modulation, coding, waveform, massive MIMO, channel acquisition, transmit diversity, multi-user power region characterization, coordinated multipoint, and distributed antenna systems. Then, we overview two different design methodologies: the model-and-optimize approach, relying on analytical system models, modern convex optimization, and communication theory, and the learning approach, based on data-driven end-to-end learning and physics-based learning. We discuss the pros and cons of each approach, especially when accounting for various nonlinearities in wireless-powered networks, and identify interesting emerging opportunities for the approaches to complement each other. Finally, we identify new emerging wireless technologies where WPT may play a key role -- wireless-powered mobile edge computing and wireless-powered sensing -- arguing that WPT, communication, computation, and sensing must be jointly designed.
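
To make the harvester-nonlinearity issue concrete, here is a minimal sketch, not taken from the paper, that contrasts a linear energy-harvesting model with an illustrative logistic saturation model, with the RF input delivered by maximum ratio transmission (MRT) beamforming. The antenna count, power budget, and every curve parameter are assumptions for illustration.

```python
# Minimal sketch (assumed parameters): MRT beamforming feeding a harvester,
# compared under a linear and an illustrative logistic-saturation EH model.
import numpy as np

rng = np.random.default_rng(0)

M = 8                                   # transmit antennas (assumed)
h = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)

P_tx = 1.0                              # transmit power budget in W (assumed)
w = np.sqrt(P_tx) * h.conj() / np.linalg.norm(h)   # MRT beamformer

P_rf = np.abs(h @ w) ** 2               # RF power at the harvester input

def eh_linear(p_rf, eta=0.5):
    """Linear model: harvested power proportional to input RF power."""
    return eta * p_rf

def eh_nonlinear(p_rf, p_sat=0.02, a=150.0, b=0.014):
    """Illustrative logistic saturation model of rectifier nonlinearity."""
    logistic = 1.0 / (1.0 + np.exp(-a * (p_rf - b)))
    floor = 1.0 / (1.0 + np.exp(a * b))
    return p_sat * (logistic - floor) / (1.0 - floor)

print(f"RF input power:     {P_rf:.4f} W")
print(f"linear EH model:    {eh_linear(P_rf):.4f} W")
print(f"nonlinear EH model: {eh_nonlinear(P_rf):.4f} W")
```

Under the linear model the harvested power keeps growing with the beamforming gain, while the saturation model caps it; this gap is one reason transmit signal design for WPT changes once nonlinearity is modeled.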




Read also

Yifei Shen, Jun Zhang, S.H. Song (2021)
Resource management plays a pivotal role in wireless networks, which, unfortunately, leads to challenging NP-hard problems. Artificial Intelligence (AI), especially deep learning techniques, has recently emerged as a disruptive technology to solve such challenging problems in a real-time manner. However, although promising results have been reported, practical design guidelines and performance guarantees of AI-based approaches are still missing. In this paper, we endeavor to address two fundamental questions: 1) What are the main advantages of AI-based methods compared with classical techniques; and 2) Which neural network should we choose for a given resource management task. For the first question, four advantages are identified and discussed. For the second question, the \emph{optimality gap}, i.e., the gap to the optimal performance, is proposed as a measure for selecting model architectures, as well as for enabling a theoretical comparison between different AI-based approaches. Specifically, for the $K$-user interference management problem, we theoretically show that graph neural networks (GNNs) are superior to multi-layer perceptrons (MLPs), and that the performance gap between these two methods grows with $\sqrt{K}$.
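
As a concrete illustration of the optimality-gap measure, the following sketch, built on assumed channel statistics rather than anything from the paper, brute-forces the optimal on/off power control for a small $K$-user interference channel and reports the gap of a trivial all-on policy, which stands in for the output of any trained MLP or GNN.

```python
# Minimal sketch (assumed channels): the "optimality gap" of a candidate
# power-control policy against the exhaustive-search optimum.
import itertools
import numpy as np

rng = np.random.default_rng(1)
K = 4                                    # users (kept small for brute force)
G = rng.exponential(1.0, (K, K))         # |h_{kj}|^2 channel gains (assumed)
noise = 0.1

def sum_rate(p):
    sig = np.diag(G) * p                 # desired-signal power per user
    interf = G @ p - sig                 # interference from other users
    return np.sum(np.log2(1.0 + sig / (interf + noise)))

# Exhaustive search over all 2^K binary power vectors gives the optimum.
best = max(sum_rate(np.array(p, float))
           for p in itertools.product([0, 1], repeat=K))

# Candidate policy (stand-in for a trained MLP/GNN): all users transmit.
candidate = sum_rate(np.ones(K))

print(f"optimal sum rate:  {best:.3f} bit/s/Hz")
print(f"policy sum rate:   {candidate:.3f} bit/s/Hz")
print(f"optimality gap:    {best - candidate:.3f} bit/s/Hz")
```
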
One of the key challenges of the Internet of Things (IoT) is to sustainably power the large number of IoT devices in real time. In this paper, we consider a wireless power transfer (WPT) scenario between an energy transmitter (ET) capable of retrodirective WPT and an energy receiver (ER) capable of ambient backscatter in the presence of an ambient source (AS). The ER requests WPT by backscattering signals from the AS towards the ET, which then retrodirectively beamforms an energy signal towards the ER. To remove the inherent direct-link ambient interference, we propose a scheme of ambient backscatter training. Specifically, the ER varies the reflection coefficient multiple times while backscattering each ambient symbol according to a certain pattern called the training sequence, whose design criterion we also present. To evaluate the system performance, we derive an analytical expression for the average harvested power at the ER. Our numerical results show that with the proposed scheme, the ER harvests tens of $\mu$W of power, without any CSI estimation or active transmission from the ER, which is a significant improvement for low-power and low-cost ambient backscatter devices.
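
A toy sketch of the training idea, under assumed channel values and sequence length rather than the paper's design: if the ER switches its reflection coefficient over a zero-mean chip sequence within each ambient symbol, correlating the received chips at the ET with that sequence nulls the constant direct-link term while preserving the backscattered one.

```python
# Minimal sketch (assumed channels): ambient backscatter training cancels
# the direct-link ambient term via correlation with a zero-mean sequence.
import numpy as np

rng = np.random.default_rng(2)
N = 8                                     # chips per ambient symbol (assumed)
c = np.array([+1, -1] * (N // 2), float)  # zero-mean training sequence

s = rng.standard_normal() + 1j * rng.standard_normal()  # ambient symbol
h_d = 1.0 + 0.5j                          # direct AS->ET link (assumed)
h_b = 0.2 - 0.1j                          # AS->ER->ET backscatter link (assumed)
noise = 0.01 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

# Received chips at the ET: the direct term is constant over the symbol,
# while the backscattered term is modulated by the training chips.
y = h_d * s + h_b * s * c + noise

# Correlating with the zero-mean sequence nulls the direct-link term.
z = (c @ y) / N
print(f"true backscatter term:    {h_b * s:.4f}")
print(f"recovered after training: {z:.4f}")
```
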
Xian Li, Suzhi Bi, Zhi Quan (2021)
Mobile edge computing (MEC) has recently become a prevailing technique to alleviate the intensive computation burden in Internet of Things (IoT) networks. However, the limited device battery capacity and stringent spectrum resources significantly restrict the data processing performance of MEC-enabled IoT networks. To address these two performance limitations, we consider in this paper an MEC-enabled IoT system with an energy harvesting (EH) wireless device (WD) that opportunistically accesses the licensed spectrum of an overlaid primary communication link for task offloading. We aim to maximize the long-term average sensing rate of the WD subject to the quality of service (QoS) requirement of the primary link, the average power constraint of the MEC server (MS), and the data queue stability of both the MS and the WD. We formulate the problem as a multi-stage stochastic optimization and propose an online algorithm named PLySE that applies the perturbed Lyapunov optimization technique to decompose the original problem into per-slot deterministic optimization problems. For each per-slot problem, we derive the closed-form optimal solution of the data sensing and processing control to facilitate low-complexity real-time implementation. Interestingly, our analysis finds that the optimal solution exhibits a threshold-based structure. Simulation results corroborate our analysis and demonstrate more than 46.7% data sensing rate improvement of the proposed PLySE over representative benchmark methods.
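
The following sketch illustrates the Lyapunov drift-plus-penalty idea behind such per-slot decompositions; it is not PLySE, and the linear reward, queue dynamics, and parameter values are simplifying assumptions. Even in this toy form, the per-slot optimizer collapses to a threshold rule on the backlog, echoing the structure the paper derives.

```python
# Minimal sketch (assumed dynamics): Lyapunov drift-plus-penalty control.
# Per slot, minimizing Q*(a - mu) - V*a over a in {0, 1} reduces to the
# threshold rule "sense at full rate iff Q < V".
import numpy as np

rng = np.random.default_rng(3)
V = 10.0                                # drift-penalty trade-off (assumed)
Q = 0.0                                 # data queue backlog
for t in range(20):
    mu = rng.uniform(0.5, 1.5)          # data offloaded/processed this slot
    a = 1.0 if Q < V else 0.0           # threshold-based sensing decision
    Q = max(Q + a - mu, 0.0)            # queue update
    print(f"slot {t:2d}: sensed={a:.0f}, backlog Q={Q:5.2f}")
```
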
Motivated by the increasing computational capacity of wireless user equipments (UEs), e.g., smart phones, tablets, or vehicles, as well as the increasing concerns about sharing private data, a new machine learning model has emerged, namely federated learning (FL), which allows a decoupling of data acquisition and computation at the central unit. Unlike centralized learning taking place in a data center, FL usually operates in a wireless edge network where the communication medium is resource-constrained and unreliable. Due to limited bandwidth, only a portion of UEs can be scheduled for updates at each iteration. Due to the shared nature of the wireless medium, transmissions are subject to interference and are not guaranteed. The performance of an FL system in such a setting is not well understood. In this paper, an analytical model is developed to characterize the performance of FL in wireless networks. Particularly, tractable expressions are derived for the convergence rate of FL in a wireless setting, accounting for effects from both scheduling schemes and inter-cell interference. Using the developed analysis, the effectiveness of three different scheduling policies, i.e., random scheduling (RS), round robin (RR), and proportional fair (PF), is compared in terms of the FL convergence rate. It is shown that running FL with PF outperforms RS and RR if the network is operating under a high signal-to-interference-plus-noise ratio (SINR) threshold, while RR is preferable when the SINR threshold is low. Moreover, the FL convergence rate decreases rapidly as the SINR threshold increases, thus confirming the importance of compression and quantization of the update parameters. The analysis also reveals a trade-off between the number of scheduled UEs and the subchannel bandwidth under a fixed amount of available spectrum.
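
As a rough illustration of the scheduling comparison, the sketch below simulates RS, RR, and PF over i.i.d. exponential fading, an assumption standing in for the paper's channel model, and counts the fraction of scheduled updates whose SINR clears a threshold.

```python
# Minimal sketch (assumed fading and parameters): comparing the RS, RR,
# and PF scheduling policies by the fraction of successful FL updates.
import numpy as np

rng = np.random.default_rng(4)
N_UE, N_SCHED, ROUNDS = 20, 5, 200
SINR_TH = 1.0                           # success threshold (assumed)

def run(policy):
    avg = np.ones(N_UE)                 # PF average-rate memory
    ok = 0
    for r in range(ROUNDS):
        sinr = rng.exponential(2.0, N_UE)   # i.i.d. fading (assumed)
        if policy == "RS":
            sel = rng.choice(N_UE, N_SCHED, replace=False)
        elif policy == "RR":
            sel = np.arange(r * N_SCHED, (r + 1) * N_SCHED) % N_UE
        else:                           # PF: largest instantaneous/average
            sel = np.argsort(sinr / avg)[-N_SCHED:]
        avg = 0.9 * avg                 # exponentially forget old rates
        avg[sel] += 0.1 * sinr[sel]
        ok += np.sum(sinr[sel] > SINR_TH)
    return ok / (ROUNDS * N_SCHED)

for p in ("RS", "RR", "PF"):
    print(f"{p}: fraction of successful updates = {run(p):.2f}")
```
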
In multicell massive multiple-input multiple-output (MIMO) non-orthogonal multiple access (NOMA) networks, base stations (BSs) with multiple antennas deliver their radio frequency energy in the downlink, and Internet-of-Things (IoT) devices use their harvested energy to support uplink data transmission. This paper investigates the energy efficiency (EE) problem for multicell massive MIMO NOMA networks with wireless power transfer (WPT). To maximize the EE of the network, we propose a novel joint power, time, antenna selection, and subcarrier resource allocation scheme, which can properly allocate the time for energy harvesting and data transmission. Both perfect and imperfect channel state information (CSI) are considered, and their corresponding EE performance is analyzed. Under quality-of-service (QoS) requirements, an EE maximization problem is formulated, which is non-trivial due to its non-convexity. We first adopt nonlinear fractional programming methods to convert the problem into a convex one, and then develop a distributed alternating direction method of multipliers (ADMM)-based approach to solve it. Simulation results demonstrate that, compared to alternative methods, the proposed algorithm converges within fewer iterations and achieves better EE performance.
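
To see the fractional-programming step in isolation, here is a sketch of the Dinkelbach method on a toy single-link energy-efficiency problem with assumed values: the ratio objective $R(p)/P(p)$ is replaced by a sequence of subtractive subproblems $\max_p R(p) - q P(p)$, each solved here by a 1-D grid search.

```python
# Minimal sketch (toy values): Dinkelbach's method for max R(p)/P(p).
import numpy as np

g, N0, Pc = 4.0, 1.0, 0.5      # channel gain, noise, circuit power (assumed)
P_MAX = 2.0                    # transmit power budget in W (assumed)

def rate(p):
    return np.log2(1.0 + g * p / N0)

def power(p):
    return p + Pc

grid = np.linspace(0.0, P_MAX, 2001)    # grid solve of each 1-D subproblem
q = 0.0                                 # current energy-efficiency estimate
for _ in range(20):
    p_star = grid[np.argmax(rate(grid) - q * power(grid))]
    F = rate(p_star) - q * power(p_star)
    q = rate(p_star) / power(p_star)    # Dinkelbach update
    if abs(F) < 1e-6:                   # F -> 0 at the optimum
        break

print(f"optimal power {p_star:.3f} W, energy efficiency {q:.3f} bit/J/Hz")
```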