
Learning to Compensate: A Deep Neural Network Framework for 5G Power Amplifier Compensation

Posted by Po-Yu Chen
Publication date: 2021
Research language: English





Owing to the complicated characteristics of 5G communication systems, designing RF components through mathematical modeling has become a challenging task. Moreover, such mathematical models need numerous manual adjustments to meet various specification requirements. In this paper, we present a learning-based framework to model and compensate Power Amplifiers (PAs) in 5G communication. In the proposed framework, Deep Neural Networks (DNNs) are used to learn the characteristics of the PAs, while corresponding Digital Pre-Distortions (DPDs) are learned to compensate for the nonlinear and memory effects of the PAs. On top of this framework, we further propose two frequency-domain losses that guide the learning process to better optimize the target than a naive time-domain Mean Square Error (MSE). The proposed framework serves as a drop-in replacement for the conventional approach. It achieves an average 56.7% reduction of nonlinear and memory effects, which translates to an average 16.3% improvement over a carefully designed mathematical model, and reaches a 34% enhancement in severe distortion scenarios.
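The abstract does not spell out the two frequency-domain losses, so the sketch below is only a rough illustration of the general idea in PyTorch: an MSE computed on FFT magnitude spectra rather than on raw time samples, which penalizes out-of-band distortion that a pure time-domain MSE can under-weight. The function name, signal layout (I/Q stored as a trailing dimension of size 2), and the combination with a time-domain term are assumptions, not the paper's definitions.

```python
# Minimal sketch of a frequency-domain loss for DPD/PA learning (assumption:
# complex baseband signals are stored as real tensors of shape (batch, samples, 2)).
import torch


def freq_domain_mse(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """MSE computed on FFT magnitude spectra instead of raw time samples."""
    # Interpret the trailing I/Q dimension as a complex signal.
    pred_c = torch.view_as_complex(pred.contiguous())
    target_c = torch.view_as_complex(target.contiguous())
    # Compare spectra so spectral regrowth is penalized explicitly.
    pred_spec = torch.fft.fft(pred_c, dim=-1).abs()
    target_spec = torch.fft.fft(target_c, dim=-1).abs()
    return torch.mean((pred_spec - target_spec) ** 2)


# Hypothetical usage: blend with a time-domain MSE when training the DPD network.
# loss = time_mse + lambda_freq * freq_domain_mse(dpd_pa_output, ideal_output)
```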


Read also

Realization of deep learning with coherent optical fields has attracted remarkable attention recently, benefiting from the fact that optical matrix manipulation can be executed at the speed of light with inherent parallel computation and low latency. Photonic neural networks have significant potential for prediction-oriented tasks. Yet, real-valued backpropagation is somewhat intractable for coherent photonic training. We develop a compatible learning protocol in complex space, in which the nonlinear activation can be selected efficiently according to the unveiled compatibility condition. Compatibility indicates that the matrix representation in complex space covers its real counterpart, which enables single-channel mingled training in real and complex space as a unified model. The phase-logic XOR gate with Mach-Zehnder interferometers and the diffractive neural network with an optical modulation mechanism, implementing intelligent weights learned through compatible learning, are presented to demonstrate the feasibility. Compatible learning opens a promising window for deep photonic neural networks.
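As a loose illustration of training in complex space, the PyTorch sketch below shows a complex-valued linear layer whose nonlinearity acts on the modulus while preserving the phase (a modReLU-style choice). The class name, initialization, and activation are assumptions for illustration; they are not the paper's compatible activation or training protocol.

```python
# Sketch of a complex-valued fully connected layer (assumed design, not the paper's).
import torch
import torch.nn as nn


class ComplexLinear(nn.Module):
    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        # Real and imaginary parts of the complex weight matrix.
        self.w_re = nn.Parameter(torch.randn(out_features, in_features) * 0.1)
        self.w_im = nn.Parameter(torch.randn(out_features, in_features) * 0.1)
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # z: complex tensor of shape (batch, in_features).
        w = torch.complex(self.w_re, self.w_im)
        y = z @ w.t()
        # Nonlinearity gates the modulus; the phase passes through unchanged.
        return torch.polar(torch.relu(y.abs() + self.bias), y.angle())


# Example: layer = ComplexLinear(4, 8); out = layer(torch.randn(2, 4, dtype=torch.cfloat))
```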
The Doherty power amplifier (DPA) has been extensively explored in the past and has become one of the most widely used power amplifier (PA) architectures in cellular base stations. The classical DPA suffers from intrinsic bandwidth constraints that limit its application in future 5G wireless transmitters. In this paper, we present a comprehensive review of the DPA bandwidth enhancement techniques proposed in the literature in order to provide a thorough understanding of broadband DPA design for high-efficiency 5G wireless transmitters. We elaborate on the main bandwidth limitation sources and provide circuit design insights. We then give an overview of bandwidth enhancement techniques developed for the DPA, including modified load-modulation networks, frequency response optimization, parasitic compensation, post-matching, as well as distributed DPA, dual-input digital DPA, transformer-based power-combining PA, and transformer-less load-modulated PA architectures. Furthermore, challenges and design techniques for integrated circuit (IC) implementation of broadband DPAs are discussed, including a review of circuits developed in CMOS, SiGe, and GaN processes and operating at RF and mm-Wave frequencies.
Jianxi Yang, 2020
Structural damage detection has become an interdisciplinary area of interest for various engineering fields, while available damage detection methods are in the process of adopting machine learning concepts. Most machine learning based methods depend heavily on extracted "hand-crafted" features that are manually selected in advance by domain experts and then fixed. Recently, deep learning has demonstrated remarkable performance on traditionally challenging tasks, such as image classification and object detection, due to its powerful feature learning capabilities. This breakthrough has inspired researchers to explore deep learning techniques for structural damage detection problems. However, existing methods have considered either spatial relations only (e.g., using a convolutional neural network (CNN)) or temporal relations only (e.g., using a long short-term memory network (LSTM)). In this work, we propose a novel hierarchical CNN and Gated Recurrent Unit (GRU) framework, termed HCG, to model both spatial and temporal relations for structural damage detection. Specifically, the CNN is utilized to model the spatial relations and short-term temporal dependencies among sensors, while its output features are fed into the GRU to jointly learn long-term temporal dependencies. Extensive experiments on the IASC-ASCE structural health monitoring benchmark and a scale-model dataset of a three-span continuous rigid-frame bridge show that the proposed HCG significantly outperforms existing methods for structural damage detection.
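The PyTorch sketch below illustrates the general CNN-then-GRU pattern described above: a 1-D CNN extracts per-window features across sensors and a GRU models long-term temporal dependencies over the window sequence. Layer sizes, the input layout, and the class name are assumptions for illustration and are not the authors' HCG architecture.

```python
# Sketch of a CNN + GRU hybrid for sensor sequences (assumed sizes and layout).
import torch
import torch.nn as nn


class CNNGRU(nn.Module):
    def __init__(self, num_sensors: int, num_classes: int, hidden: int = 64):
        super().__init__()
        # 1-D convolutions over each short time window across sensor channels.
        self.cnn = nn.Sequential(
            nn.Conv1d(num_sensors, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.gru = nn.GRU(64, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, windows, sensors, samples_per_window)
        b, w, s, t = x.shape
        feats = self.cnn(x.reshape(b * w, s, t)).squeeze(-1)  # (b*w, 64)
        feats = feats.reshape(b, w, 64)
        _, h = self.gru(feats)      # final hidden state summarizes the sequence
        return self.head(h[-1])     # damage-class logits
```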
In this paper, we are interested in building a domain-knowledge-based deep learning framework to solve chiller plant energy optimization problems. Compared to the hotspot applications of deep learning (e.g., image classification and NLP), it is difficult to collect enormous amounts of data for deep network training in real-world physical systems. Most existing methods reduce the complex system to a linear model to facilitate training on small samples. To tackle the small-sample-size problem, this paper incorporates domain knowledge into the structure and loss design of the deep network to build a nonlinear model with a lower-redundancy function space. Specifically, the energy consumption estimation of most chillers can be physically viewed as an input-output monotonic problem. Thus, we can design a neural network with monotonic constraints to mimic the physical behavior of the system. We verify the proposed method on the cooling system of a data center; experimental results show the superiority of our framework in energy optimization compared to existing approaches.
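One common way to impose input-output monotonicity, sketched below in PyTorch, is to pass unconstrained parameters through softplus so that every effective weight is non-negative; stacking such layers with monotone activations yields a network that is non-decreasing in its inputs. The class name and sizes are illustrative, and the paper's exact constraint and loss design may differ.

```python
# Sketch of a monotonically constrained layer via non-negative weights (assumed design).
import torch
import torch.nn as nn
import torch.nn.functional as F


class MonotonicLinear(nn.Module):
    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.raw_weight = nn.Parameter(torch.randn(out_features, in_features) * 0.1)
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # softplus keeps every weight positive, so the layer is non-decreasing
        # in each input whenever the surrounding activations are also monotone.
        return F.linear(x, F.softplus(self.raw_weight), self.bias)


# Example: a small monotone regressor from 4 plant inputs to an energy estimate.
model = nn.Sequential(MonotonicLinear(4, 32), nn.Tanh(), MonotonicLinear(32, 1))
```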
This paper presents DeepIA, a deep learning solution for faster and more accurate initial access (IA) in 5G millimeter wave (mmWave) networks compared to conventional IA. By utilizing a subset of beams in the IA process, DeepIA removes the need for an exhaustive beam search, thereby reducing the beam sweep time in IA. A deep neural network (DNN) is trained to learn the complex mapping from the received signal strengths (RSSs), collected with a reduced number of beams, to the optimal spatial beam of the receiver (among a larger set of beams). At test time, DeepIA measures RSSs from only a small number of beams and runs the DNN to predict the best beam for IA. We show that DeepIA reduces the IA time by sweeping fewer beams and significantly outperforms the conventional IA's beam prediction accuracy in both line-of-sight (LoS) and non-line-of-sight (NLoS) mmWave channel conditions.
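A minimal sketch of the beam-prediction step is shown below in PyTorch: an MLP maps RSS values measured on a reduced beam set to a logit per candidate beam in the full codebook, and the argmax selects the beam for IA. The beam counts and layer widths are placeholders; the abstract does not specify the DNN architecture.

```python
# Sketch of RSS-to-best-beam prediction (beam counts and architecture are assumptions).
import torch
import torch.nn as nn

NUM_SWEPT_BEAMS = 8     # beams actually measured during IA (placeholder)
NUM_TOTAL_BEAMS = 64    # size of the full beam codebook (placeholder)

beam_predictor = nn.Sequential(
    nn.Linear(NUM_SWEPT_BEAMS, 128),
    nn.ReLU(),
    nn.Linear(128, 128),
    nn.ReLU(),
    nn.Linear(128, NUM_TOTAL_BEAMS),   # logits over all candidate beams
)

# At test time: measure RSS on the reduced beam set and pick the highest-scoring beam.
rss = torch.randn(1, NUM_SWEPT_BEAMS)        # placeholder measurement
best_beam = beam_predictor(rss).argmax(dim=-1)
```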
