Beam alignment, the process of finding an optimal directional beam pair, is a challenging procedure crucial to millimeter wave (mmWave) communication systems. We propose a novel beam alignment method that learns a site-specific probing codebook and uses the probing codebook measurements to predict the optimal narrow beam. An end-to-end neural network (NN) architecture is designed to jointly learn the probing codebook and the beam predictor. The learned codebook consists of site-specific probing beams that can capture particular characteristics of the propagation environment. The proposed method relies on beam sweeping of the learned probing codebook, does not require additional context information, and is compatible with the beam sweeping-based beam alignment framework in 5G. Using realistic ray-tracing datasets, we demonstrate that the proposed method can achieve high beam alignment accuracy and signal-to-noise ratio (SNR) while reducing the beam sweeping complexity and latency, by roughly a factor of 3 in our setting.
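A minimal sketch of the kind of end-to-end architecture described above, written in PyTorch and not taken from the paper: a layer of learnable constant-modulus probing beams produces power measurements, and a small fully connected predictor maps those measurements to a narrow-beam index. The antenna, codebook, and network sizes (N_ANT, N_PROBE, N_NARROW), the class name ProbingBeamAligner, and the placeholder channels and labels are illustrative assumptions.

```python
# Hedged sketch (not the authors' code): jointly learning a site-specific
# probing codebook and a narrow-beam predictor end to end.
import torch
import torch.nn as nn

N_ANT, N_PROBE, N_NARROW = 64, 8, 64

class ProbingBeamAligner(nn.Module):
    def __init__(self):
        super().__init__()
        # Learnable phases -> constant-modulus (analog) probing codebook.
        self.theta = nn.Parameter(torch.rand(N_ANT, N_PROBE) * 2 * torch.pi)
        self.predictor = nn.Sequential(
            nn.Linear(N_PROBE, 128), nn.ReLU(),
            nn.Linear(128, N_NARROW),          # logits over candidate narrow beams
        )

    def forward(self, h):                      # h: complex channels, (batch, N_ANT)
        W = torch.exp(1j * self.theta) / N_ANT ** 0.5   # probing codebook (N_ANT, N_PROBE)
        rss = (h @ W).abs() ** 2               # probing-beam power measurements
        return self.predictor(rss)             # predict index of the best narrow beam

model = ProbingBeamAligner()
h = torch.randn(16, N_ANT, dtype=torch.cfloat)          # placeholder channels
best_beam_labels = torch.randint(0, N_NARROW, (16,))    # labels from exhaustive sweep
loss = nn.CrossEntropyLoss()(model(h), best_beam_labels)
loss.backward()                                          # trains codebook and predictor jointly
```

In practice the narrow-beam labels would come from exhaustively sweeping a conventional narrow-beam codebook over the ray-tracing training channels.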
Communication at high carrier frequencies such as millimeter wave (mmWave) and terahertz (THz) requires channel estimation over very large bandwidths at low SNR. Hence, allocating an orthogonal pilot tone for each coherence bandwidth leads to an excessive number of pilots. We leverage generative adversarial networks (GANs) to accurately estimate frequency-selective channels with few pilots at low SNR. The proposed estimator first learns to produce channel samples from the true but unknown channel distribution by training the generative network, and then uses this trained network as a prior to estimate the current channel by optimizing the network's input vector in light of the current received signal. Our results show that at an SNR of -5 dB, even if a transceiver with one-bit phase shifters is employed, our design achieves the same channel estimation error as an LS estimator at an SNR of 20 dB or an LMMSE estimator at 2.5 dB, both with fully digital architectures. Additionally, the GAN-based estimator reduces the required number of pilots by about 70% without significantly increasing the estimation error or the required SNR. We also show that the generative network does not appear to require retraining even if the number of clusters and rays changes considerably.
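The second stage of the estimator described above, using the trained generator as a prior, can be sketched as follows. This is an illustrative PyTorch snippet, not the authors' code: G stands in for a pre-trained generator, A is a placeholder pilot measurement matrix, and the real-valued dimensions are chosen only for brevity; quantization effects and the complex-valued channel are omitted.

```python
# Hedged sketch of the "generator as a prior" estimation step (illustrative only).
import torch

N_LATENT, N_CH, N_PILOT = 32, 256, 64
G = torch.nn.Sequential(                           # stand-in for the GAN-trained generator
    torch.nn.Linear(N_LATENT, 512), torch.nn.ReLU(),
    torch.nn.Linear(512, N_CH),
)
A = torch.randn(N_PILOT, N_CH) / N_PILOT ** 0.5    # placeholder pilot measurement matrix
h_true = G(torch.randn(N_LATENT)).detach()         # pretend "true" channel for the demo
y = A @ h_true + 0.3 * torch.randn(N_PILOT)        # noisy pilot observation

z = torch.zeros(N_LATENT, requires_grad=True)      # latent input vector to optimize
opt = torch.optim.Adam([z], lr=1e-2)
for _ in range(500):                               # fit G(z) to the received pilots
    opt.zero_grad()
    loss = torch.sum((y - A @ G(z)) ** 2)
    loss.backward()
    opt.step()

h_hat = G(z).detach()                              # GAN-prior channel estimate
```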
Full-duplex millimeter wave (mmWave) communication has shown increasing promise, enabled by self-interference cancellation via hybrid precoding and combining. This paper proposes a novel mmWave multiple-input multiple-output (MIMO) design for configuring the analog and digital beamformers of a full-duplex transceiver. Our design is the first to holistically consider the key practical constraints of analog beamforming codebooks, a minimal number of radio frequency (RF) chains, limited channel knowledge, beam alignment, and a limited receive dynamic range. To prevent self-interference from saturating the receiver of a full-duplex device having limited dynamic range, our design addresses saturation on a per-antenna and per-RF chain basis. Numerical results evaluate our design in a variety of settings and validate the need to prevent receiver-side saturation. These results and the corresponding insights serve as useful design references for practical full-duplex mmWave transceivers.
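As a rough illustration (not the paper's algorithm) of what addressing saturation on a per-antenna and per-RF chain basis means, the following NumPy sketch measures the self-interference power arriving at each receive antenna (before the LNA) and at each RF chain (after analog combining) and compares it against hypothetical saturation thresholds; the self-interference channel H_SI, the beamformers, and the thresholds are placeholder assumptions.

```python
# Illustrative per-antenna / per-RF-chain saturation check for a full-duplex receiver.
import numpy as np

rng = np.random.default_rng(0)
N_TX, N_RX, N_RF = 64, 64, 4
H_SI = rng.normal(size=(N_RX, N_TX)) + 1j * rng.normal(size=(N_RX, N_TX))      # SI channel
F = rng.normal(size=(N_TX, N_RF)) + 1j * rng.normal(size=(N_TX, N_RF))         # TX beamformer
F /= np.linalg.norm(F)
W = np.exp(1j * rng.uniform(0, 2 * np.pi, size=(N_RX, N_RF))) / np.sqrt(N_RX)  # analog combiner

per_antenna_si = np.sum(np.abs(H_SI @ F) ** 2, axis=1)                # SI power at each antenna/LNA
per_rf_chain_si = np.sum(np.abs(W.conj().T @ H_SI @ F) ** 2, axis=1)  # SI power at each RF chain/ADC

LNA_SAT, ADC_SAT = 10.0, 5.0      # hypothetical saturation thresholds (linear scale)
print("antenna-level saturation:", np.any(per_antenna_si > LNA_SAT))
print("RF-chain-level saturation:", np.any(per_rf_chain_si > ADC_SAT))
```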
This paper proposes a novel deep learning-based error correction coding scheme for AWGN channels under the constraint of one-bit quantization at the receiver. Specifically, it is first shown that the optimum error correction code that minimizes the probability of bit error can be obtained by perfectly training a special autoencoder, where "perfectly" refers to convergence to the global minimum. However, perfect training is not possible in most cases. To approach the performance of a perfectly trained autoencoder with suboptimum training, we propose utilizing turbo codes as an implicit regularization, i.e., using a concatenation of a turbo code and an autoencoder. It is empirically shown that this design gives nearly the same performance as the hypothetically perfectly trained autoencoder, and we also provide a theoretical proof of why that is so. The proposed coding method is as bandwidth efficient as the integrated (outer) turbo code, since the autoencoder exploits the excess bandwidth from pulse shaping and packs signals more intelligently thanks to sparsity in neural networks. Our results show that the proposed coding scheme at finite block lengths outperforms conventional turbo codes even for QPSK modulation. Furthermore, the proposed coding method can make one-bit quantization operational even for 16-QAM.
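A minimal, hedged sketch of the inner autoencoder operating under one-bit receive quantization is given below (PyTorch, not the authors' implementation). The outer turbo code is omitted: its coded bits would simply form the input bits b. A straight-through estimator is used here to backpropagate through the non-differentiable sign(); the paper's exact training procedure may differ, and all dimensions are illustrative.

```python
# Hedged sketch: inner autoencoder over an AWGN channel with a one-bit receiver.
import torch
import torch.nn as nn

K, N = 8, 16                                   # bits in, channel uses out

class OneBitSign(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return torch.sign(x)                   # one-bit quantization at the receiver
    @staticmethod
    def backward(ctx, g):
        return g                               # straight-through gradient

encoder = nn.Sequential(nn.Linear(K, 64), nn.ReLU(), nn.Linear(64, N))
decoder = nn.Sequential(nn.Linear(N, 64), nn.ReLU(), nn.Linear(64, K))
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), 1e-3)

for _ in range(200):                           # toy training loop
    b = torch.randint(0, 2, (256, K)).float()  # (turbo-coded) input bits
    x = encoder(2 * b - 1)
    x = x / x.norm(dim=1, keepdim=True) * N ** 0.5        # per-block power constraint
    y = OneBitSign.apply(x + 0.5 * torch.randn_like(x))   # AWGN, then one-bit ADC
    loss = nn.BCEWithLogitsLoss()(decoder(y), b)
    opt.zero_grad()
    loss.backward()
    opt.step()
```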
This paper proposes a deep learning-based channel estimation method for multi-cell interference-limited massive MIMO systems, in which base stations equipped with a large number of antennas serve multiple single-antenna users. The proposed estimator employs a specially designed deep neural network (DNN) to first denoise the received signal, followed by conventional least-squares (LS) estimation. We analytically prove that our LS-type deep channel estimator can approach minimum mean square error (MMSE) estimator performance for high-dimensional signals, while avoiding the MMSE estimator's requirement for complex channel
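The denoise-then-LS structure described above can be sketched as follows (illustrative PyTorch, not the paper's code): a small DNN denoises the received pilot block, after which a standard LS step recovers the channel. The real-valued toy dimensions, the denoiser architecture, and the identity pilot matrix X are placeholder assumptions.

```python
# Hedged sketch of a two-stage estimator: learned denoising followed by LS.
import torch
import torch.nn as nn

N_ANT, N_USER = 64, 8                          # BS antennas, single-antenna users
X = torch.eye(N_USER)                          # orthogonal pilot matrix (toy choice)

denoiser = nn.Sequential(                      # assumed trained offline on (noisy, clean) pairs
    nn.Linear(N_USER, 256), nn.ReLU(),
    nn.Linear(256, N_USER),
)

H = torch.randn(N_ANT, N_USER)                 # unknown channel (real-valued toy example)
Y = H @ X + 0.5 * torch.randn(N_ANT, N_USER)   # received pilots incl. interference/noise

Y_clean = denoiser(Y)                          # stage 1: learned denoising
# stage 2: conventional LS estimate  H_hat = Y_clean X^T (X X^T)^{-1}
H_hat = Y_clean @ X.T @ torch.linalg.inv(X @ X.T)
```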
We aim to jointly optimize the antenna tilt angle and the vertical and horizontal half-power beamwidths of the macrocells in a heterogeneous cellular network (HetNet). The interactions between the cells, most notably due to their coupled interference, render this optimization prohibitively complex. Using a single-agent reinforcement learning (RL) algorithm for this optimization is quite suboptimal despite its scalability, whereas multi-agent RL algorithms yield better solutions at the expense of scalability. Hence, we propose a compromise between these two. Specifically, a multi-agent mean-field RL algorithm is first utilized in an offline phase so as to transfer information, in the form of features, to the single-agent RL algorithm used in the second (online) phase, which employs a deep neural network to learn user locations. This two-step approach is a practical solution for real deployments, which should automatically adapt to environmental changes in the network. Our results illustrate that the proposed algorithm approaches the performance of multi-agent RL, which requires millions of trials, with only hundreds of online trials, assuming relatively low environmental dynamics, and performs much better than single-agent RL. Furthermore, the proposed algorithm is compact and implementable, and empirically appears to provide a performance guarantee regardless of the amount of environmental dynamics.
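The two-phase idea can be caricatured in a few lines of NumPy (this is a highly simplified illustration, not the paper's algorithm): an offline mean-field update in which each cell's action-value table sees only the average action of the other cells, followed by an online single-agent learner that is warm-started from the offline values instead of learning from scratch. The environment, reward, and dimensions are placeholders.

```python
# Highly simplified sketch of "offline multi-agent mean-field RL, then online single-agent RL".
import numpy as np

rng = np.random.default_rng(0)
N_CELLS, N_ACTIONS = 7, 5                  # e.g. discretized (tilt, beamwidth) choices per cell

def reward(actions):                       # placeholder network-utility feedback
    return -np.var(actions) + rng.normal(scale=0.1, size=N_CELLS)

# Phase 1 (offline): mean-field Q-learning, Q[cell, own_action, mean_neighbour_action].
Q = np.zeros((N_CELLS, N_ACTIONS, N_ACTIONS))
for _ in range(5000):
    a = rng.integers(N_ACTIONS, size=N_CELLS)
    mean_a = int(round(a.mean()))
    r = reward(a)
    for c in range(N_CELLS):
        Q[c, a[c], mean_a] += 0.1 * (r[c] - Q[c, a[c], mean_a])

# Phase 2 (online): a single agent reuses the offline Q-values as transferred features,
# warm-starting its own action-value estimates instead of learning from scratch.
q_online = Q.mean(axis=(0, 2))             # per-action prior value
for _ in range(200):                       # only a few hundred online trials
    a = int(np.argmax(q_online + rng.normal(scale=0.05, size=N_ACTIONS)))
    r = reward(np.full(N_CELLS, a)).mean()
    q_online[a] += 0.1 * (r - q_online[a])
```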
This paper develops novel deep learning-based architectures and design methodologies for an orthogonal frequency division multiplexing (OFDM) receiver under the constraint of one-bit complex quantization. Single-bit quantization greatly reduces complexity and power consumption, but makes accurate channel estimation and data detection difficult. This is particularly true for multicarrier waveforms, which have a high peak-to-average ratio in the time domain and fragile subcarrier orthogonality in the frequency domain. The severe distortion caused by one-bit quantization typically results in an error floor even at moderately low signal-to-noise ratio (SNR), such as 5 dB. For channel estimation (using pilots), we design a novel generative supervised deep neural network (DNN) that can be trained with a reasonable number of pilots. After channel estimation, a neural network-based receiver, specifically an autoencoder, jointly learns a precoder and decoder for data symbol detection. Since quantization prevents end-to-end training, we propose a two-step sequential training policy for this model. With synthetic data, our deep learning-based channel estimation can outperform least-squares (LS) channel estimation for unquantized (full-resolution) OFDM at average SNRs up to 14 dB. For data detection, our proposed design achieves a lower bit error rate (BER) in fading than unquantized OFDM at average SNRs up to 10 dB.
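One plausible reading of the two-step sequential training policy, sketched here in PyTorch purely for illustration (the authors' exact procedure may differ): the first step trains the encoder and decoder end to end with a smooth surrogate (tanh) in place of the one-bit quantizer so that gradients can reach the precoder, and the second step restores the true sign() quantizer and fine-tunes only the decoder. Dimensions and architectures are placeholder assumptions, and the OFDM modulation and fading chain are omitted for brevity.

```python
# Hedged sketch of a two-step sequential training policy around a one-bit quantizer.
import torch
import torch.nn as nn

K, N = 16, 32
encoder = nn.Sequential(nn.Linear(K, 64), nn.ReLU(), nn.Linear(64, N))
decoder = nn.Sequential(nn.Linear(N, 64), nn.ReLU(), nn.Linear(64, K))

def run(quantizer, params, steps):
    opt = torch.optim.Adam(params, 1e-3)
    for _ in range(steps):
        b = torch.randint(0, 2, (256, K)).float()
        x = encoder(2 * b - 1)
        y = quantizer(x + 0.3 * torch.randn_like(x))      # noise, then receiver ADC model
        loss = nn.BCEWithLogitsLoss()(decoder(y), b)
        opt.zero_grad()
        loss.backward()
        opt.step()

# Step 1: differentiable surrogate lets gradients reach the encoder (precoder).
run(torch.tanh, list(encoder.parameters()) + list(decoder.parameters()), 300)
# Step 2: true one-bit quantizer; only the decoder is updated.
run(torch.sign, list(decoder.parameters()), 300)
```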
We characterize the rate coverage distribution for a spectrum-shared millimeter wave downlink cellular network. Each of multiple cellular operators owns separate mmWave bandwidth but shares this spectrum with the other operators, while using dynamic inter-operator base station (BS) coordination to suppress the resulting cross-operator interference. We model the BS locations of each operator as mutually independent Poisson point processes, and derive the probability density function (PDF) of the K-th strongest link power, incorporating both line-of-sight and non-line-of-sight states. Leveraging the obtained PDF, we derive the rate coverage expression as a function of system parameters such as the BS density, transmit power, bandwidth, and coordination set size. We verify the analysis with extensive simulation results. A major finding is that inter-operator BS coordination is useful in spectrum sharing (i) with dense and high-power operators and (ii) with fairly wide beams, e.g., 30° or wider.
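The quantity at the heart of the analysis, the K-th strongest link power across the shared operators, is easy to sample by Monte Carlo, which is one way to verify a derived PDF. The NumPy sketch below is illustrative only: the densities, pathloss exponents, LOS probability model, and simulation radius are placeholder assumptions, and fading and beamforming gains are omitted.

```python
# Monte Carlo sketch of the K-th strongest link power seen from the origin
# when each operator's BSs form an independent PPP with LOS/NLOS states.
import numpy as np

rng = np.random.default_rng(1)
LAMBDA = 50e-6          # BS density per m^2 (per operator)
RADIUS = 2000.0         # simulation disc radius in m
ALPHA_LOS, ALPHA_NLOS = 2.1, 3.4
K = 3                   # rank of the link of interest

def kth_strongest_power(n_ops=2):
    powers = []
    for _ in range(n_ops):                              # independent PPP per operator
        n = rng.poisson(LAMBDA * np.pi * RADIUS ** 2)
        r = RADIUS * np.sqrt(rng.uniform(size=n))       # uniform distances in a disc
        p_los = np.exp(-r / 200.0)                      # toy LOS probability vs distance
        alpha = np.where(rng.uniform(size=n) < p_los, ALPHA_LOS, ALPHA_NLOS)
        powers.append(r ** -alpha)                      # unit Tx power, no fading
    powers = np.sort(np.concatenate(powers))[::-1]
    return powers[K - 1] if len(powers) >= K else 0.0

samples = np.array([kth_strongest_power() for _ in range(10000)])
# The empirical CDF of `samples` can then be compared against the derived PDF.
```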
We consider a downlink cellular network where multi-antenna base stations (BSs) transmit data to single-antenna users by using one of two linear precoding methods with limited feedback: (i) maximum ratio transmission (MRT) for serving a single user or (ii) zero forcing (ZF) for serving multiple users. The BS and user locations are drawn from a Poisson point process, allowing expressions for the signal-to-interference coverage probability and the ergodic spectral efficiency to be derived as functions of system parameters such as the number of BS antennas, the number of feedback bits, and the pathloss exponent. We find a tight lower bound on the optimum number of feedback bits that maximizes the net spectral efficiency, which captures the overall system gain by considering both the downlink and the uplink spectral efficiency under limited feedback. Our main finding is that, when using MRT, the optimum number of feedback bits scales linearly with the number of antennas and logarithmically with the channel coherence time. When using ZF, the feedback scales in the same way as for MRT, but also linearly with the pathloss exponent. The derived results provide system-level insights into the preferred channel codebook size by averaging the effects of short-term fading and long-term pathloss.
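The feedback-bit tradeoff behind the net spectral efficiency metric can be illustrated numerically. The toy model below is not the paper's derivation: it combines the standard random vector quantization distortion bound 2^(-B/(M-1)) for B feedback bits and M antennas with a simple per-coherence-block feedback overhead charge, and then searches for the bit count that maximizes the difference; the SNR, M, and the coherence time T are placeholder assumptions.

```python
# Toy illustration of the downlink-rate vs feedback-overhead tradeoff.
import numpy as np

M, T, SNR = 8, 1000, 10.0            # antennas, coherence time (symbols), linear SNR

def net_se(B):
    gain_keep = 1.0 - 2.0 ** (-B / (M - 1))          # RVQ quantization-loss bound
    dl_se = np.log2(1.0 + SNR * M * gain_keep)       # MRT-style downlink rate proxy
    overhead = B / T                                 # uplink feedback cost per symbol
    return dl_se - overhead

bits = np.arange(1, 200)
best_B = bits[np.argmax([net_se(b) for b in bits])]
print("SE-maximizing feedback bits (toy model):", best_B)
```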
Load balancing by proactively offloading users onto small and otherwise lightly loaded cells is critical for tapping the potential of dense heterogeneous cellular networks (HCNs). Offloading has mostly been studied for the downlink, where it is generally assumed that a user offloaded to a small cell will communicate with it on the uplink as well. The impact of coupled downlink-uplink offloading is not well understood. Uplink power control and spatial interference correlation further complicate the mathematical analysis compared to the downlink. We propose an accurate and tractable model to characterize the uplink SINR and rate distribution in a multi-tier HCN as a function of the association rules and power control parameters. The joint uplink-downlink rate coverage is also characterized. Using the developed analysis, it is shown that the optimal degree of channel inversion (for uplink power control) increases with the load imbalance in the network. In sharp contrast to the downlink, minimum path loss association is shown to be optimal for uplink rate. Moreover, with minimum path loss association and full channel inversion, the uplink SIR is shown to be invariant to infrastructure density. It is further shown that a decoupled association, in which differing association strategies are employed for the uplink and downlink, leads to a significant improvement in joint uplink-downlink rate coverage over the standard coupled association in HCNs.
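A toy Monte Carlo version of the uplink model discussed above may help make the moving parts concrete (this is illustrative, not the paper's analysis): the typical user associates by minimum pathloss across two tiers and transmits with fractional channel inversion P = P0 * L^epsilon, and one active interfering uplink user per other BS contributes interference. The densities, the pathloss exponent, and the interferer placement model are placeholder assumptions.

```python
# Toy uplink SIR simulation with minimum-pathloss association and fractional power control.
import numpy as np

rng = np.random.default_rng(2)
DENSITY = {"macro": 1e-6, "small": 5e-6}     # BSs per m^2 for each tier
ALPHA, EPS, P0, R = 3.5, 1.0, 1.0, 3000.0    # pathloss exp., PC fraction, baseline power, radius (m)
LAM_TOT = sum(DENSITY.values())

def drop_distances(lam):
    n = rng.poisson(lam * np.pi * R ** 2)
    return R * np.sqrt(rng.uniform(size=n))  # PPP distances seen from the origin

def uplink_sir_sample():
    bs_dists = np.concatenate([drop_distances(l) for l in DENSITY.values()])
    serving = bs_dists.min()                                       # minimum-pathloss association
    signal = P0 * serving ** (ALPHA * EPS) * serving ** -ALPHA     # fractional channel inversion
    # One active uplink interferer per other BS (toy model): its own nearest-BS distance
    # sets its transmit power; its distance to the tagged BS sets the interference.
    n_int = len(bs_dists) - 1
    d_own = np.sqrt(rng.exponential(scale=1.0 / (np.pi * LAM_TOT), size=n_int))
    d_to_tagged = R * np.sqrt(rng.uniform(size=n_int))
    interference = np.sum(P0 * d_own ** (ALPHA * EPS) * d_to_tagged ** -ALPHA)
    return signal / interference

sir = np.array([uplink_sir_sample() for _ in range(5000)])
coverage = np.mean(10 * np.log10(sir) > 0.0)                       # empirical P[SIR > 0 dB]
print("toy uplink SIR coverage at 0 dB:", coverage)
```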