
Learning-based WiFi Traffic Load Estimation in NR-U Systems

Added by Zhiqun Zou
Publication date: 2021
Research language: English





The unlicensed spectrum has been utilized to make up for the shortage of frequency spectrum in new radio (NR) systems. To fully exploit the advantages brought by the unlicensed bands, one of the key issues is to guarantee fair coexistence with WiFi systems. To reach this goal, timely and accurate estimation of the WiFi traffic load is an important prerequisite. In this paper, a machine learning (ML) based method is proposed to detect the number of WiFi users on the unlicensed bands. An unsupervised Neural Network (NN) structure is applied to filter the detected transmission collision probability on the unlicensed spectrum, which enables the NR users to precisely rectify the measurement error and estimate the number of active WiFi users. Moreover, the NN is trained online, and its parameters and learning rate are jointly optimized to estimate the number of WiFi users adaptively with high accuracy. Simulation results demonstrate that, compared with the conventional Kalman Filter based detection mechanism, the proposed approach has lower complexity and achieves a more stable and accurate estimation.
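To give a concrete picture of the estimation idea, the sketch below (Python, illustrative only) inverts the classical saturation-model relation between the number of active WiFi stations and the conditional collision probability, p = 1 - (1 - tau)^(n - 1), using a simple online gradient update. The transmission probability TAU, the learning rate, and the gradient filter are assumptions standing in for the paper's unsupervised neural network; this is not the authors' implementation.

import numpy as np

TAU = 0.05  # assumed per-slot transmission probability of an active WiFi station

def collision_prob(n):
    # Classical saturation-model relation: p = 1 - (1 - tau)^(n - 1)
    return 1.0 - (1.0 - TAU) ** (n - 1.0)

def estimate_users(measurements, n0=5.0, lr=100.0):
    # Online estimate of the number of active WiFi users from noisy
    # collision-probability measurements, via stochastic gradient descent
    # on the squared error between measured and modelled probability.
    n = n0
    history = []
    for p_meas in measurements:
        p_model = collision_prob(n)
        dp_dn = -((1.0 - TAU) ** (n - 1.0)) * np.log(1.0 - TAU)  # dp/dn > 0
        n = max(1.0, n - lr * (p_model - p_meas) * dp_dn)        # keep n physical
        history.append(n)
    return history

# Toy check: 200 noisy measurements generated for 10 active users
rng = np.random.default_rng(0)
obs = collision_prob(10) + 0.02 * rng.standard_normal(200)
print(estimate_users(obs)[-1])  # settles near 10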



Related research

Using commodity WiFi data for applications such as indoor localization, object identification and tracking, and channel sounding has recently gained considerable attention. We study the problem of channel impulse response (CIR) estimation from commodity WiFi channel state information (CSI). The accuracy of a CIR estimation method in this setup is limited both by the available channel bandwidth and by various CSI distortions induced by the underlying hardware. We propose a multi-band splicing method that increases channel bandwidth by combining CSI data across multiple frequency bands. To compensate for the CSI distortions, we develop a per-band processing algorithm that estimates the distortion parameters and removes them to yield clean CSI. This algorithm incorporates the atomic norm denoising sparse recovery method to exploit channel sparsity. Splicing clean CSI over M frequency bands, we use orthogonal matching pursuit (OMP) to recover the sparse CIR with high (M-fold) resolution. Unlike previous works in the literature, our method does not rely on any limiting assumption on the CIR (other than the widely accepted sparsity assumption) or any ad hoc processing for distortion removal. We show, empirically, that the proposed method outperforms the state of the art in terms of localization accuracy.
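As a rough illustration of the final recovery step described above, the following sketch runs plain orthogonal matching pursuit on a toy delay-domain dictionary; the dictionary, dimensions, and noise level are invented for the example and are not taken from the paper.

import numpy as np

def omp(A, y, sparsity):
    # Orthogonal matching pursuit: recover a sparse x from y ≈ A @ x.
    residual = y.copy()
    support = []
    x_hat = np.zeros(A.shape[1], dtype=complex)
    for _ in range(sparsity):
        correlations = np.abs(A.conj().T @ residual)
        correlations[support] = 0.0               # do not re-select an atom
        support.append(int(np.argmax(correlations)))
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coeffs     # least-squares refit on the support
    x_hat[support] = coeffs
    return x_hat

# Toy example: a Fourier-type delay dictionary and a 3-tap sparse CIR
rng = np.random.default_rng(1)
n_sub, n_taps, k = 256, 64, 3
A = np.exp(-2j * np.pi * np.outer(np.arange(n_sub), np.arange(n_taps)) / n_sub) / np.sqrt(n_sub)
x_true = np.zeros(n_taps, dtype=complex)
x_true[rng.choice(n_taps, k, replace=False)] = rng.standard_normal(k) + 1j * rng.standard_normal(k)
y = A @ x_true + 0.01 * (rng.standard_normal(n_sub) + 1j * rng.standard_normal(n_sub))
print(np.linalg.norm(omp(A, y, k) - x_true))      # small reconstruction error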
Channel estimation is very challenging when the receiver is equipped with a limited number of radio-frequency (RF) chains in beamspace millimeter-wave (mmWave) massive multiple-input and multiple-output systems. To solve this problem, we exploit a learned denoising-based approximate message passing (LDAMP) network. This neural network can learn the channel structure and estimate the channel from a large amount of training data. Furthermore, we provide an analytical framework on the asymptotic performance of the channel estimator. Based on our analysis and simulation results, the LDAMP neural network significantly outperforms state-of-the-art compressed sensing-based algorithms even when the receiver is equipped with a small number of RF chains. Therefore, deep learning is a powerful tool for channel estimation in mmWave communications.
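The learned network itself is beyond a short snippet, but the overall estimator structure can be sketched: each iteration takes a gradient step on the measurement residual and then applies a denoiser that captures the beamspace channel structure. In the heavily simplified sketch below, soft-thresholding stands in for the learned DnCNN denoiser and the Onsager correction of AMP is omitted; it is not the LDAMP network from the paper.

import numpy as np

def soft_threshold(v, lam):
    # Complex soft-thresholding; stands in here for the learned denoiser.
    mag = np.maximum(np.abs(v), 1e-12)
    return np.where(mag > lam, (1.0 - lam / mag) * v, 0.0)

def iterative_denoising_estimate(A, y, n_iter=200, lam=0.05):
    # Skeleton of a denoising-based estimator: gradient step on the residual,
    # then a denoising step. LDAMP would use a trained network and an Onsager term.
    x = np.zeros(A.shape[1], dtype=complex)
    mu = 1.0 / np.linalg.norm(A, 2) ** 2      # step size from the Lipschitz constant
    for _ in range(n_iter):
        r = x + mu * A.conj().T @ (y - A @ x)
        x = soft_threshold(r, mu * lam)
    return x

# Toy example: 64 RF-chain measurements of a 3-tap sparse beamspace channel
rng = np.random.default_rng(2)
A = (rng.standard_normal((64, 256)) + 1j * rng.standard_normal((64, 256))) / np.sqrt(128)
h = np.zeros(256, dtype=complex)
h[[10, 80, 200]] = [1.0, 0.7j, -0.5]
print(np.linalg.norm(iterative_denoising_estimate(A, A @ h) - h))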
This paper proposes an off-grid channel estimation scheme for orthogonal time-frequency space (OTFS) systems adopting the sparse Bayesian learning (SBL) framework. To avoid channel spreading caused by the fractional delay and Doppler shifts and to fully exploit the channel sparsity in the delay-Doppler (DD) domain, we estimate the original DD domain channel response rather than the effective DD domain channel response as commonly adopted in the literature. OTFS channel estimation is first formulated as a one-dimensional (1D) off-grid sparse signal recovery (SSR) problem based on a virtual sampling grid defined in the DD space, where the on-grid and off-grid components of the delay and Doppler shifts are separated for estimation. In particular, the on-grid components of the delay and Doppler shifts are jointly determined by the entry indices with significant values in the recovered sparse vector. Then, the corresponding off-grid components are modeled as hyper-parameters in the proposed SBL framework, which can be estimated via the expectation-maximization method. To strike a balance between channel estimation performance and computational complexity, we further propose a two-dimensional (2D) off-grid SSR problem via decoupling the delay and Doppler shift estimations. In our developed 1D and 2D off-grid SBL-based channel estimation algorithms, the hyper-parameters are updated alternately to compute the conditional posterior distribution of the channels, which can be exploited to reconstruct the effective DD domain channel. Compared with the 1D method, the proposed 2D method enjoys a much lower computational complexity while suffering only slight performance degradation. Simulation results verify the superior performance of the proposed channel estimation schemes over state-of-the-art schemes.
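To make the SBL ingredient more tangible, here is a minimal on-grid SBL/EM iteration. The off-grid delay and Doppler hyper-parameters and the 1D/2D decoupling proposed in the paper are not reproduced, and the dictionary and grid size in the toy example are arbitrary.

import numpy as np

def sbl_estimate(Phi, y, n_iter=50, noise_var=1e-2):
    # Basic on-grid SBL/EM iteration for y ≈ Phi @ x with a sparse x.
    # Only the per-atom prior variances gamma are learned here.
    m, n = Phi.shape
    gamma = np.ones(n)
    for _ in range(n_iter):
        # Posterior covariance and mean of x given the current hyper-parameters
        Sigma = np.linalg.inv(Phi.conj().T @ Phi / noise_var + np.diag(1.0 / gamma))
        mu = Sigma @ Phi.conj().T @ y / noise_var
        # EM update of the prior variances; small gammas prune grid points
        gamma = np.abs(mu) ** 2 + np.real(np.diag(Sigma))
    return mu, gamma

# Toy example on a small virtual delay-Doppler grid (dictionary is illustrative)
rng = np.random.default_rng(3)
Phi = (rng.standard_normal((40, 100)) + 1j * rng.standard_normal((40, 100))) / np.sqrt(80)
x = np.zeros(100, dtype=complex)
x[[7, 42]] = [1.0, -0.8j]
mu, gamma = sbl_estimate(Phi, Phi @ x)
print(np.argsort(gamma)[-2:])   # indices of the dominant grid points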
Future 5G systems are getting closer to becoming a reality. It is envisioned that the roll-out of the first 5G networks will happen around the end of 2018 and the beginning of 2019. However, there are still a number of issues and problems that have to be faced, and new solutions and methods are needed to solve them. Along these lines, the effects that beamforming and antenna configurations may have on mobility in 5G New Radio (NR) are still unclear. In fact, with the use of directive antennas and high frequencies (e.g., above 10 GHz), in order to meet the stringent requirements of 5G (e.g., support of speeds up to 500 km/h), it is crucial to understand how the envisioned 5G NR antenna configurations may impact mobility (and thus handovers). In this article, we first briefly survey the mobility enhancements and solutions currently under discussion in 3GPP Release 15. In particular, we focus our analysis on the physical layer signals involved in measurement reporting and on the new radio measurement model used in 5G NR to filter the multiple beams typical of directive antennas with a large number of antenna elements. Finally, the critical aspects of mobility identified in the previous sections are analyzed in more detail through the results of an extensive system-level evaluation.
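The layer-3 filtering step of the NR measurement model mentioned above is a simple first-order recursion, F_n = (1 - a) * F_{n-1} + a * M_n with a = 1 / 2^(k/4), where k is the configured filter coefficient. The short sketch below only illustrates this recursion; the input values and the choice of k are illustrative and not taken from the article.

def l3_filter(measurements_db, k=4):
    # Layer-3 filtering of consolidated beam/cell measurements (in dB):
    # F_n = (1 - a) * F_{n-1} + a * M_n, with a = 1 / 2**(k/4).
    a = 1.0 / 2 ** (k / 4.0)
    f = measurements_db[0]
    out = [f]
    for m in measurements_db[1:]:
        f = (1.0 - a) * f + a * m
        out.append(f)
    return out

# A larger k smooths more and reacts more slowly, which directly affects
# how quickly beam/cell quality changes are seen by the handover decision.
print(l3_filter([-80, -80, -95, -95, -95, -95], k=8)[-1])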
This paper investigates downlink channel estimation in frequency-division duplex (FDD)-based massive multiple-input multiple-output (MIMO) systems. To reduce the overhead of downlink channel estimation and uplink feedback in FDD systems, cascaded precoding has been used in massive MIMO so that only a low-dimensional effective channel needs to be estimated and fed back. On the other hand, traditional channel estimators can hardly achieve the minimum mean-square-error (MMSE) performance due to the lack of a priori knowledge of the channels. In this paper, we design and analyze a strategy for downlink channel estimation based on a parametric model in massive MIMO with cascaded precoding. In the parametric model, channel frequency responses are expressed using the path delays and the associated complex amplitudes. The path delays of the uplink channels are first estimated and quantized at the base station and then fed forward to the user equipment (UE) through a dedicated feedforward link. In this manner, the UE can obtain a priori knowledge of the downlink channel in advance, since it has been demonstrated that the downlink and uplink channels can have identical path delays. Our analysis and simulation results show that the proposed approach can achieve near-MMSE performance.
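The core of the parametric model can be illustrated compactly: once the quantized path delays are fed forward, the downlink frequency response h(f) = sum_l alpha_l * exp(-j*2*pi*f*tau_l) is linear in the complex path gains, so the low-dimensional gains follow from a least-squares fit to a few pilots. The sketch below uses made-up delays, gains, and subcarrier spacing purely for illustration.

import numpy as np

def estimate_gains(pilot_freqs_hz, h_obs, path_delays_s):
    # With the path delays known at the UE (fed forward by the base station),
    # the frequency response is linear in the complex path gains alpha,
    # so they can be recovered by least squares from the observed pilots.
    B = np.exp(-2j * np.pi * np.outer(pilot_freqs_hz, path_delays_s))
    alpha, *_ = np.linalg.lstsq(B, h_obs, rcond=None)
    return alpha, B @ alpha   # gains and the reconstructed response at the pilots

# Toy example: 3 paths observed on 64 pilot subcarriers with 15 kHz spacing
freqs = np.arange(64) * 15e3
delays = np.array([0.0, 0.5e-6, 1.2e-6])
alpha_true = np.array([1.0, 0.6 * np.exp(1j * 0.7), 0.3j])
h = np.exp(-2j * np.pi * np.outer(freqs, delays)) @ alpha_true
alpha_hat, _ = estimate_gains(freqs, h, delays)
print(np.round(alpha_hat, 3))   # matches alpha_true in this noiseless toy case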
