
UAV-Aided Interference Assessment for Private 5G NR Deployments: Challenges and Solutions

Added by Olga Galinina
Publication date: 2020
Language: English





Industrial automation has created a high demand for private 5G networks, the deployment of which calls for an efficient and reliable solution to ensure strict compliance with the regulatory emission limits. While traditional methods for measuring outdoor interference include collecting real-world data by walking or driving, the use of unmanned aerial vehicles (UAVs) offers an attractive alternative due to their flexible mobility and adaptive altitude. As UAVs perform measurements quickly and semi-automatically, they can potentially assist in near-real-time adjustment of the network configuration and fine-tuning of its parameters, such as antenna settings and transmit power, as well as help improve indoor connectivity while respecting outdoor emission constraints. This article offers a firsthand tutorial on using aerial 5G emission assessment for interference management in non-public networks (NPNs) by reviewing the key challenges of UAV-mounted radio-scanner measurements. In particular, we (i) outline the challenges of practical assessment of the outdoor interference originating from a local indoor 5G network while discussing regulatory and other related constraints, and (ii) address practical methods and tools while summarizing the recent results of our measurement campaign. The reported proof of concept confirms that UAV-based systems represent a promising tool for capturing outdoor interference from private 5G systems.
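To make the adjustment step concrete, the following is a minimal sketch of how UAV-collected emission samples could drive a transmit-power correction. The measurement values, the regulatory limit, and the safety margin are all assumed for illustration; this is not the measurement campaign's actual tooling.

```python
import numpy as np

# Hypothetical UAV measurement samples (dBm) captured by the onboard radio
# scanner at outdoor waypoints around the facility; values are illustrative.
measurements_dbm = np.array([-92.5, -88.1, -95.0, -90.3, -85.7])

REGULATORY_LIMIT_DBM = -90.0  # assumed outdoor emission limit at the boundary
SAFETY_MARGIN_DB = 3.0        # headroom for measurement uncertainty

# The strongest outdoor leakage observed along the flight path.
worst_case_dbm = measurements_dbm.max()

# If the worst-case sample exceeds the limit minus the margin, back off the
# gNB transmit power by the overshoot; otherwise leave the setting unchanged.
overshoot_db = worst_case_dbm - (REGULATORY_LIMIT_DBM - SAFETY_MARGIN_DB)
tx_power_adjustment_db = -max(overshoot_db, 0.0)

print(f"Worst-case outdoor level: {worst_case_dbm:.1f} dBm")
print(f"Suggested TX power change: {tx_power_adjustment_db:.1f} dB")
```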



Related research

Due to their high mobility and flexible deployment, unmanned aerial vehicles (UAVs) are drawing unprecedented interest in both military and civil applications to enable agile wireless communications and provide ubiquitous connectivity. Mainly operating in an open environment, UAV communications can benefit from dominant line-of-sight links; however, this also renders UAVs more vulnerable to malicious eavesdropping or jamming attacks. Recently, physical layer security (PLS), which exploits the inherent randomness of wireless channels for secure communications, has been introduced to UAV systems as an important complement to conventional cryptography-based approaches. In this paper, a comprehensive survey on the current achievements of UAV-aided wireless communications is conducted from the PLS perspective. We first introduce the basic concepts of UAV communications, including the typical static/mobile deployment scenarios, the unique characteristics of air-to-ground channels, as well as the various roles that a UAV may play where PLS is concerned. Then, we introduce the widely used secrecy performance metrics, review the secrecy performance analysis and enhancement techniques for statically deployed UAV systems, and extend the discussion to a more general scenario where the UAV's mobility is further exploited. For both cases, we summarize the commonly adopted methodologies in the corresponding analysis and design, and then describe important works in the literature in detail. Finally, potential research directions and challenges are discussed to provide an outlook for future works in the area of UAV-PLS in 5G and beyond networks.
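As an illustration of the secrecy metrics mentioned above, the sketch below computes the classical secrecy capacity, the positive part of the rate gap between the legitimate link and the eavesdropper's link; the SNR values are assumed for the example.

```python
import numpy as np

def secrecy_capacity(snr_legit_db: float, snr_eve_db: float) -> float:
    """Secrecy capacity in bit/s/Hz: the positive part of the rate gap
    between the legitimate link and the eavesdropper's link."""
    snr_legit = 10 ** (snr_legit_db / 10)
    snr_eve = 10 ** (snr_eve_db / 10)
    return max(np.log2(1 + snr_legit) - np.log2(1 + snr_eve), 0.0)

# Assumed example: the legitimate ground user enjoys a 15 dB LoS link while
# a more distant eavesdropper sees only 5 dB, yielding a positive secrecy rate.
print(f"{secrecy_capacity(15.0, 5.0):.2f} bit/s/Hz")
```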
The capability of smarter networked devices to dynamically select appropriate radio connectivity options is especially important in the emerging millimeter-wave (mmWave) systems to mitigate abrupt link blockage in complex environments. To enrich the levels of diversity, mobile mmWave relays can be employed for improved connection reliability. These are considered by 3GPP for on-demand densification on top of the static mmWave infrastructure. However, the performance dynamics of mobile mmWave relaying are not yet well explored, especially in realistic conditions such as urban vehicular scenarios. In this paper, we develop a mathematical framework for the performance evaluation of mmWave vehicular relaying in a typical street deployment. We analyze and compare alternative connectivity strategies by quantifying the performance gains made available to smart devices in the presence of mmWave relays. We identify situations where the use of mmWave vehicular relaying is particularly beneficial. Our methodology and results can support further standardization and deployment of mmWave relaying in more intelligent 5G+ all-mmWave cellular networks.
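The gain from relay fallback can be illustrated with a toy Monte Carlo outage estimate; the blockage probabilities below are assumed values for illustration, not the paper's analytical model.

```python
import random

# A device picks whichever mmWave path, direct or via a vehicular relay, is
# currently unblocked; we estimate the outage probability of each strategy.
P_BLOCK_DIRECT = 0.3   # assumed blockage probability of the direct link
P_BLOCK_RELAY = 0.15   # assumed blockage probability of each relay hop
TRIALS = 100_000

def outage(use_relay_fallback: bool) -> float:
    failures = 0
    for _ in range(TRIALS):
        direct_ok = random.random() > P_BLOCK_DIRECT
        # A two-hop relay path works only if both hops are unblocked.
        relay_ok = (random.random() > P_BLOCK_RELAY and
                    random.random() > P_BLOCK_RELAY)
        failures += not (direct_ok or (use_relay_fallback and relay_ok))
    return failures / TRIALS

print(f"Direct only: {outage(False):.3f}")
print(f"With relay : {outage(True):.3f}")
```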
This paper studies the processing principles, implementation challenges, and performance of OFDM-based radars, with particular focus on fourth-generation Long-Term Evolution (LTE) and fifth-generation (5G) New Radio (NR) mobile network base stations and their utilization for radar/sensing purposes. First, we address the problem stemming from the unused subcarriers within the LTE and NR transmit signal passbands, and their impact on frequency-domain radar processing. In particular, we formulate and adopt a computationally efficient interpolation approach to mitigate the effects of such empty subcarriers in the radar processing. We evaluate the target detection and the corresponding range and velocity estimation performance through computer simulations, and show that high-quality target detection as well as high-precision range and velocity estimation can be achieved. 5G NR waveforms in particular, through their impressive channel bandwidths and configurable subcarrier spacing, are shown to provide very good radar/sensing performance. Then, a fundamental implementation challenge of transmitter-receiver (TX-RX) isolation in OFDM radars is addressed, with specific emphasis on shared-antenna cases, where the TX-RX isolation challenges are the greatest. It is confirmed that from the OFDM radar processing perspective, limited TX-RX isolation is primarily a concern in the detection of static targets, while moving targets are inherently more robust to transmitter self-interference. Properly tailored analog/RF and digital self-interference cancellation solutions for OFDM radars are also described and implemented, and shown through RF measurements to be key technical ingredients for practical deployments, particularly from the point of view of static and slowly moving targets.
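The core processing chain, symbol-wise channel estimation, interpolation over empty subcarriers, and a 2D transform into the range-Doppler domain, can be sketched as follows; the grid dimensions, empty-subcarrier pattern, and single-target channel are toy assumptions rather than the paper's simulation setup.

```python
import numpy as np

# Toy OFDM radar chain: divide RX by TX symbols to get the channel,
# interpolate over empty subcarriers, then form the range-Doppler map.
rng = np.random.default_rng(0)
n_sc, n_sym = 64, 32                                 # subcarriers x symbols
tx = rng.choice(np.array([1, -1, 1j, -1j]), size=(n_sc, n_sym))  # QPSK grid

# Single simulated target: delay maps to a linear phase across subcarriers,
# Doppler to a linear phase across symbols; add a little noise.
k = np.arange(n_sc)[:, None]
m = np.arange(n_sym)[None, :]
rx = tx * np.exp(-2j * np.pi * (0.2 * k + 0.1 * m))
rx += 0.05 * (rng.standard_normal(rx.shape) + 1j * rng.standard_normal(rx.shape))

h = rx / tx                                          # per-element channel
empty = np.zeros(n_sc, dtype=bool)
empty[:4] = empty[-4:] = True                        # assumed unused edge bins
used = np.flatnonzero(~empty)
for s in range(n_sym):                               # fill empty bins per symbol
    # np.interp accepts complex fp; edge bins clamp to the nearest used value.
    h[empty, s] = np.interp(np.flatnonzero(empty), used, h[used, s])

# IFFT across subcarriers resolves range; FFT across symbols resolves Doppler.
rd_map = np.fft.fft(np.fft.ifft(h, axis=0), axis=1)
r_bin, d_bin = np.unravel_index(np.abs(rd_map).argmax(), rd_map.shape)
print(f"Strongest peak at range bin {r_bin}, Doppler bin {d_bin}")
```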
In this paper, we aim at interference mitigation in 5G millimeter-wave (mm-Wave) communications by employing beamforming and Non-Orthogonal Multiple Access (NOMA) techniques with the aim of improving the network's aggregate rate. Despite the potential capacity gains of mm-Wave and NOMA, many technical challenges might hinder that performance gain. In particular, the performance of Successive Interference Cancellation (SIC) diminishes rapidly as the number of users per beam increases, which leads to higher intra-beam interference. Furthermore, intersection regions between adjacent cells give rise to inter-beam inter-cell interference. To mitigate both interference levels, optimal selection of the number of beams, together with the best allocation of users to those beams, is essential. In this paper, we address the problem of joint user-cell association and selection of the number of beams for the purpose of maximizing the aggregate network capacity. We propose three machine learning-based algorithms: transfer Q-learning (TQL), Q-learning, and Best SINR association with Density-based Spatial Clustering of Applications with Noise (BSDC), and compare their performance under different scenarios. Under mobility, TQL and Q-learning demonstrate a 12% rate improvement over BSDC at the highest offered traffic load. For stationary scenarios, Q-learning and BSDC outperform TQL; however, TQL achieves a convergence speedup of about 29% compared to Q-learning.
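A minimal tabular Q-learning loop for user-to-beam association conveys the flavor of the learning-based approach; the reward model below is an assumed stand-in for the SINR-based rate, not the paper's TQL or BSDC formulation.

```python
import numpy as np

# Toy Q-learning for user-to-beam association: state = user, action = beam.
rng = np.random.default_rng(1)
N_USERS, N_BEAMS = 8, 4
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

Q = np.zeros((N_USERS, N_BEAMS))

def reward(user: int, beam: int) -> float:
    # Assumed stand-in for the per-user rate; in practice this would come
    # from the measured SINR after SIC within the chosen beam.
    best_beam = user % N_BEAMS
    return 1.0 if beam == best_beam else 0.1 + 0.05 * rng.standard_normal()

for episode in range(5_000):
    user = int(rng.integers(N_USERS))
    # Epsilon-greedy action selection.
    beam = (int(rng.integers(N_BEAMS)) if rng.random() < EPSILON
            else int(Q[user].argmax()))
    r = reward(user, beam)
    # One-step Q-learning update toward the reward plus discounted maximum.
    Q[user, beam] += ALPHA * (r + GAMMA * Q[user].max() - Q[user, beam])

print("Learned user-to-beam association:", Q.argmax(axis=1))
```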
The unprecedented requirements of the Internet of Things (IoT) have made fine-grained optimization of spectrum resources an urgent necessity. Thus, designing techniques able to extract knowledge from the spectrum in real time and select the optimal spectrum access strategy accordingly has become more important than ever. Moreover, 5G and beyond (5GB) networks will require complex management schemes to deal with problems such as adaptive beam management and rate selection. Although deep learning (DL) has been successful in modeling complex phenomena, commercially available wireless devices are still very far from actually adopting learning-based techniques to optimize their spectrum usage. In this paper, we first discuss the need for real-time DL at the physical layer, and then summarize the current state of the art and existing limitations. We conclude the paper by discussing an agenda of research challenges and how DL can be applied to address crucial problems in 5GB networks.
