Human motion recognition (HMR) based on wireless sensing is a low-cost technique for scene understanding. Current HMR systems adopt support vector machines (SVMs) and convolutional neural networks (CNNs) to classify radar signals. However, whether a deeper learning model could further improve system performance remains unknown. On the other hand, training a machine learning model requires a large dataset, but gathering data from experiments is costly and time-consuming. Although wireless channel models can be adopted for dataset generation, current channel models are mostly designed for communication rather than sensing. To address these problems, this paper proposes a deep spectrogram network (DSN) that leverages the residual mapping technique to enhance HMR performance. Furthermore, a primitive-based autoregressive hybrid (PBAH) channel model is developed, which facilitates efficient training and testing dataset generation for HMR in a virtual environment. Experimental results demonstrate that the proposed PBAH channel model matches the actual experimental data very well and that the proposed DSN achieves a significantly smaller recognition error than the CNN.
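To illustrate the residual-mapping idea behind a deep spectrogram network, the following PyTorch sketch stacks identity-shortcut convolutional blocks on top of a spectrogram input. The layer widths, depth, and number of motion classes are assumptions for illustration, not the authors' exact DSN architecture.

```python
# Minimal sketch of residual (identity-mapping) blocks applied to radar
# spectrograms; widths, depth, and class count are illustrative assumptions.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        # Residual mapping: the block learns F(x) and outputs F(x) + x,
        # which eases optimization of deeper networks.
        return torch.relu(self.body(x) + x)

class SpectrogramNet(nn.Module):
    def __init__(self, num_classes=6, width=32, depth=4):
        super().__init__()
        self.stem = nn.Conv2d(1, width, kernel_size=3, padding=1)
        self.blocks = nn.Sequential(*[ResidualBlock(width) for _ in range(depth)])
        self.head = nn.Linear(width, num_classes)

    def forward(self, spec):            # spec: (batch, 1, freq, time)
        h = self.blocks(torch.relu(self.stem(spec)))
        h = h.mean(dim=(2, 3))          # global average pooling
        return self.head(h)             # class logits

logits = SpectrogramNet()(torch.randn(8, 1, 64, 128))  # dummy spectrograms
```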
In a time-varying massive multiple-input multiple-output (MIMO) system, acquiring the downlink channel state information at the base station (BS) is a very challenging task due to the prohibitively high overheads associated with downlink training and uplink feedback. In this paper, we consider the hybrid precoding structure at the BS and examine channel extrapolation in the antenna-time domain. We design a latent ordinary differential equation (ODE)-based network under the variational auto-encoder (VAE) framework to learn the mapping function from the partial uplink channels to the full downlink ones at the BS side. Specifically, a gated recurrent unit is adopted for the encoder and a fully-connected neural network is used for the decoder. End-to-end learning is utilized to optimize the network parameters. Simulation results show that the designed network can efficiently infer the full downlink channels from the partial uplink ones, which can significantly reduce the channel training overhead.
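A minimal sketch of the latent-ODE extrapolation idea is given below: a GRU encoder summarizes the observed partial uplink channels into a latent state, a simple fixed-step Euler solver evolves that state in continuous time, and a fully-connected decoder maps each latent state to a downlink channel estimate. The dimensions, the Euler integrator, and the omission of the variational (VAE) sampling step are simplifying assumptions, not the authors' exact network.

```python
# Sketch of a latent-ODE style channel extrapolator; all sizes and the
# explicit Euler solver are illustrative assumptions.
import torch
import torch.nn as nn

class LatentODEExtrapolator(nn.Module):
    def __init__(self, chan_dim=64, latent_dim=32):
        super().__init__()
        self.encoder = nn.GRU(chan_dim, latent_dim, batch_first=True)
        self.ode_func = nn.Sequential(               # d z / d t = f(z)
            nn.Linear(latent_dim, latent_dim), nn.Tanh(),
            nn.Linear(latent_dim, latent_dim),
        )
        self.decoder = nn.Sequential(                 # z_t -> downlink channel
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, chan_dim),
        )

    def forward(self, uplink_seq, extrap_times, steps_per_unit=10):
        # uplink_seq: (batch, T_obs, chan_dim) observed partial uplink channels
        _, h = self.encoder(uplink_seq)
        z = h[-1]                                     # initial latent state z(0)
        outputs, t_prev = [], 0.0
        for t in extrap_times:                        # evolve z to each query time
            n = max(1, int((t - t_prev) * steps_per_unit))
            dt = (t - t_prev) / n
            for _ in range(n):                        # explicit Euler integration
                z = z + dt * self.ode_func(z)
            t_prev = t
            outputs.append(self.decoder(z))
        return torch.stack(outputs, dim=1)            # (batch, T_query, chan_dim)

pred = LatentODEExtrapolator()(torch.randn(4, 8, 64), extrap_times=[0.5, 1.0, 1.5])
```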
Integrating large intelligent reflecting surfaces (IRS) into millimeter-wave (mmWave) massive multi-input multi-output (MIMO) systems is a promising approach for improving coverage and throughput. Most existing works assume ideal channel estimation, which is challenging due to the high-dimensional cascaded MIMO channels and passive reflecting elements. Therefore, this paper proposes a deep denoising neural network assisted compressive channel estimation scheme for mmWave IRS systems to reduce the training overhead. Specifically, we first introduce a hybrid passive/active IRS architecture, where very few receive chains are employed to estimate the uplink user-to-IRS channels. At the channel training stage, only a small proportion of elements are successively activated to sound the partial channels. The complete channel matrix is then reconstructed from the limited measurements based on compressive sensing, whereby the common sparsity of angular-domain mmWave MIMO channels among different subcarriers is leveraged for improved accuracy. Furthermore, a complex-valued denoising convolutional neural network (CV-DnCNN) is proposed for enhanced performance. Simulation results demonstrate the superiority of the proposed solution over state-of-the-art solutions.
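The denoising step can be sketched with a DnCNN-style residual-learning network that treats the real and imaginary parts of the coarse channel estimate as two input planes and predicts the noise to be subtracted. The depth, width, and real/imaginary split are assumptions for illustration and do not reproduce the exact CV-DnCNN design.

```python
# Sketch of a DnCNN-style denoiser for complex-valued channel matrices;
# depth, width, and the real/imaginary split are illustrative assumptions.
import torch
import torch.nn as nn

class ChannelDenoiser(nn.Module):
    def __init__(self, width=64, depth=6):
        super().__init__()
        layers = [nn.Conv2d(2, width, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(width, width, 3, padding=1),
                       nn.BatchNorm2d(width), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(width, 2, 3, padding=1)]   # predicted noise map
        self.net = nn.Sequential(*layers)

    def forward(self, h_noisy):
        # h_noisy: (batch, rows, cols) complex coarse channel estimate,
        # e.g. the output of a compressive-sensing reconstruction.
        x = torch.stack((h_noisy.real, h_noisy.imag), dim=1)  # (batch, 2, R, C)
        clean = x - self.net(x)                               # residual learning
        return torch.complex(clean[:, 0], clean[:, 1])

h_hat = torch.randn(4, 32, 16, dtype=torch.cfloat)     # dummy coarse estimates
h_refined = ChannelDenoiser()(h_hat)
```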
Designing codes that combat the noise in a communication medium has remained a significant area of research in information theory as well as wireless communications. Over more than 60 years of research, asymptotically optimal channel codes have been developed by mathematicians for communication under canonical channel models. On the other hand, in many non-canonical channel settings, optimal codes do not exist, and the codes designed for canonical models are adapted to these channels via heuristics and are thus not guaranteed to be optimal. In this work, we make significant progress on this problem by designing a fully end-to-end jointly trained neural encoder and decoder, namely, Turbo Autoencoder (TurboAE), with the following contributions: ($a$) under moderate block lengths, TurboAE approaches state-of-the-art performance under canonical channels; ($b$) moreover, TurboAE outperforms the state-of-the-art codes under non-canonical settings in terms of reliability. TurboAE shows that channel code design can be automated via deep learning, with near-optimal performance.
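The end-to-end training principle behind TurboAE can be sketched as follows: a neural encoder maps information bits to a power-normalized codeword, an AWGN channel corrupts it, and a neural decoder is trained jointly with the encoder using a bit-wise cross-entropy loss. The fully-connected layers, block length, rate, and noise level are illustrative assumptions; the interleaver and iterative structure of TurboAE are omitted.

```python
# Sketch of jointly training a neural encoder/decoder over an AWGN channel;
# architecture and noise level are illustrative assumptions, not TurboAE itself.
import torch
import torch.nn as nn

K, N = 64, 128                       # information bits, coded symbols (rate 1/2)
encoder = nn.Sequential(nn.Linear(K, 256), nn.ELU(), nn.Linear(256, N))
decoder = nn.Sequential(nn.Linear(N, 256), nn.ELU(), nn.Linear(256, K))
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

def power_normalize(x):
    # Enforce an average power constraint on the transmitted codeword.
    return (x - x.mean(dim=1, keepdim=True)) / (x.std(dim=1, keepdim=True) + 1e-8)

for step in range(200):                       # short demo training loop
    bits = torch.randint(0, 2, (512, K)).float()
    tx = power_normalize(encoder(bits))
    rx = tx + 0.5 * torch.randn_like(tx)      # AWGN channel, assumed sigma = 0.5
    logits = decoder(rx)
    loss = nn.functional.binary_cross_entropy_with_logits(logits, bits)
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():                          # rough bit-error-rate check
    rx = power_normalize(encoder(bits)) + 0.5 * torch.randn_like(tx)
    ber = ((decoder(rx) > 0).float() != bits).float().mean()
print(f"bit error rate after short training ~ {ber.item():.3f}")
```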
A status updating communication system is examined, in which a transmitter communicates with a receiver over a noisy channel. The goal is to realize timely delivery of fresh data over time, which is assessed by an age-of-information (AoI) metric. Channel coding is used to combat channel errors, and feedback is sent to acknowledge the reception of updates. If decoding is unsuccessful, a hybrid ARQ protocol is employed, in which incremental redundancy (IR) bits are transmitted to enhance the decoding ability. This continues for a certain amount of time if decoding remains unsuccessful, after which a new (fresh) status update is transmitted instead. If decoding is successful, the transmitter has the option to idly wait for a certain amount of time before sending a new update. A general problem is formulated that optimizes the codeword and IR lengths for each update, together with the waiting times, such that the long-term average AoI is minimized. Stationary deterministic policies are investigated, in which the codeword and IR lengths are fixed for each update and the waiting time is a deterministic function of the AoI. The optimal waiting policy is then derived and shown to have a threshold structure: the transmitter sends a new update only if the AoI grows above a certain threshold that is a function of the codeword and IR lengths. The choice of the codeword and IR lengths is discussed in the context of binary symmetric channels.
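A Monte Carlo sketch of the threshold waiting policy is shown below: after each successful decoding, the transmitter idles until the AoI reaches a threshold, then transmits a fresh update whose service time is the codeword length plus any IR retransmissions. The codeword and IR lengths, decoding probabilities, and threshold are illustrative assumptions, and IR rounds are assumed to continue until success rather than restarting with a fresh update.

```python
# Monte Carlo estimate of the long-term average AoI under a threshold
# waiting policy; all parameter values are illustrative assumptions.
import random

def average_aoi(n_codeword=100, n_ir=20, p_first=0.6, p_ir=0.8,
                threshold=150, n_updates=200_000, seed=0):
    rng = random.Random(seed)
    t, age_at_rx, area = 0.0, 0.0, 0.0   # elapsed time, AoI after last success, AoI area
    for _ in range(n_updates):
        # Idle until the AoI reaches the threshold, then transmit a fresh update.
        wait = max(0.0, threshold - age_at_rx)
        service = n_codeword                       # initial codeword transmission
        while rng.random() > (p_first if service == n_codeword else p_ir):
            service += n_ir                        # HARQ: send incremental redundancy
        # AoI grows linearly from age_at_rx over (wait + service), then resets
        # to the delivered update's age, which equals its service time.
        peak = age_at_rx + wait + service
        area += (age_at_rx + peak) / 2 * (wait + service)
        t += wait + service
        age_at_rx = service
    return area / t

print(f"long-term average AoI ~ {average_aoi():.1f} channel uses")
```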
Massive access is a critical design challenge for Internet of Things (IoT) networks. In this paper, we consider the grant-free uplink transmission of an IoT network with a multiple-antenna base station (BS) and a large number of single-antenna IoT devices. Taking into account the sporadic nature of IoT devices, we formulate the joint activity detection and channel estimation (JADCE) problem as a group-sparse matrix estimation problem. This problem can be solved by existing compressed sensing techniques, which, however, either suffer from high computational complexity or lack robustness. To this end, we propose a novel algorithm unrolling framework based on deep neural networks to simultaneously achieve low computational complexity and high robustness for solving the JADCE problem. Specifically, we map the original iterative shrinkage thresholding algorithm (ISTA) into an unrolled recurrent neural network (RNN), thereby improving the convergence rate and computational efficiency through end-to-end training. Moreover, the proposed algorithm unrolling approach inherits the structure and domain knowledge of ISTA, thereby maintaining robustness and handling non-Gaussian preamble sequence matrices in massive access. With rigorous theoretical analysis, we further simplify the unrolled network structure by reducing the redundant training parameters, and we prove that the simplified unrolled deep neural network structures enjoy a linear convergence rate. Extensive simulations based on various preamble signatures show that the proposed unrolled networks outperform existing methods in terms of convergence rate, robustness, and estimation accuracy.
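The ISTA-unrolling idea can be sketched as a LISTA-style recurrent network in which each layer performs one ISTA iteration with learnable weights and a learnable soft-threshold. This real-valued, element-wise sparsity version is an illustrative assumption that omits the group-sparse structure and complex-valued preambles of the JADCE problem.

```python
# Sketch of unrolling ISTA into a trainable network (LISTA-style);
# dimensions and the element-wise sparsity model are illustrative assumptions.
import torch
import torch.nn as nn

def soft_threshold(x, theta):
    return torch.sign(x) * torch.clamp(x.abs() - theta, min=0.0)

class UnrolledISTA(nn.Module):
    def __init__(self, A, num_layers=8):
        super().__init__()
        m, n = A.shape
        L = torch.linalg.matrix_norm(A, ord=2) ** 2      # Lipschitz constant of A^T A
        # Initialize the learnable weights from the classical ISTA update
        # x_{k+1} = soft( x_k + (1/L) A^T (y - A x_k), lambda/L ).
        self.W_y = nn.Parameter(A.t() / L)               # maps measurements y
        self.W_x = nn.Parameter(torch.eye(n) - A.t() @ A / L)
        self.theta = nn.Parameter(0.1 / L * torch.ones(num_layers))
        self.num_layers = num_layers

    def forward(self, y):                                # y: (batch, m)
        x = torch.zeros(y.shape[0], self.W_x.shape[0], device=y.device)
        for k in range(self.num_layers):                 # unrolled iterations
            x = soft_threshold(y @ self.W_y.t() + x @ self.W_x.t(), self.theta[k])
        return x

A = torch.randn(64, 256) / 8                             # dummy preamble matrix
x_hat = UnrolledISTA(A)(torch.randn(16, 64))             # recover sparse activity vector
```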