Intelligent signal processing for wireless communications is a vital task in modern wireless systems, but it faces new challenges arising from network heterogeneity, diverse service requirements, massive numbers of connections, and varied radio characteristics. Owing to recent advances in big data and computing technologies, artificial intelligence (AI) has become a useful tool for radio signal processing and has enabled the realization of intelligent radio signal processing. This survey covers four intelligent signal processing topics for the wireless physical layer: modulation classification, signal detection, beamforming, and channel estimation. Each topic is presented in a dedicated section, starting from the most fundamental principles and followed by a review of up-to-date studies and a summary. To provide the necessary background, we first present a brief overview of AI techniques such as machine learning, deep learning, and federated learning. Finally, we highlight a number of research challenges and future directions in the area of intelligent radio signal processing. We expect this survey to be a good source of information for anyone interested in intelligent radio signal processing, and that the perspectives provided herein will stimulate many more novel ideas and contributions in the future.
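To make one of the surveyed tasks concrete, the following is a minimal, hypothetical sketch of a deep-learning modulation classifier operating on raw I/Q samples; the network layout, class count, and frame length are illustrative assumptions and do not come from the survey itself.

    import torch
    import torch.nn as nn

    class IQModulationClassifier(nn.Module):
        """Toy 1-D CNN that maps raw I/Q frames to modulation classes."""
        def __init__(self, n_classes=4):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv1d(2, 16, kernel_size=7, padding=3), nn.ReLU(),
                nn.MaxPool1d(2),
                nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),
            )
            self.classifier = nn.Linear(32, n_classes)

        def forward(self, x):            # x: (batch, 2, n_samples) I/Q tensor
            z = self.features(x).squeeze(-1)
            return self.classifier(z)    # unnormalized class scores

    # Example: classify a batch of 8 random I/Q frames of 128 samples each.
    model = IQModulationClassifier(n_classes=4)
    logits = model(torch.randn(8, 2, 128))
    print(logits.argmax(dim=1))

In practice such a model would be trained on labeled I/Q recordings; the sketch only shows the input/output structure typical of this line of work.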
Matrix completion (MC) is a promising technique that can recover an intact low-rank matrix from sub-sampled/incomplete data. Its applications range from computer vision and signal processing to wireless networks, and it has therefore received much attention in the past several years. Many works have addressed the behavior and applications of MC methodologies. This work provides a comprehensive review of MC approaches from the perspective of signal processing. In particular, the MC problem is first grouped into six optimization problems to help readers understand MC algorithms. Next, four representative types of optimization algorithms for solving the MC problem are reviewed. Finally, three different application fields of MC are described and evaluated.
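To illustrate the low-rank recovery idea, below is a minimal NumPy sketch of a soft-impute-style singular-value-thresholding iteration, one classical way of attacking the MC problem; it is an illustration only, not one of the specific algorithms reviewed above, and the threshold and iteration count are arbitrary assumptions.

    import numpy as np

    def soft_impute(M_obs, mask, tau=1.0, n_iters=200):
        """Recover a low-rank matrix from its observed entries.

        M_obs: matrix holding the observed values (anything elsewhere).
        mask : boolean array, True where an entry is observed.
        """
        X = np.zeros_like(M_obs, dtype=float)
        for _ in range(n_iters):
            # Keep the observed data, fill missing entries with the estimate.
            Y = np.where(mask, M_obs, X)
            # Soft-threshold the singular values to promote low rank.
            U, s, Vt = np.linalg.svd(Y, full_matrices=False)
            X = (U * np.maximum(s - tau, 0.0)) @ Vt
        return X

    # Example: recover a rank-2 20x20 matrix from roughly half of its entries.
    rng = np.random.default_rng(0)
    M = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 20))
    mask = rng.random(M.shape) < 0.5
    M_hat = soft_impute(M * mask, mask, tau=0.5)
    print(np.linalg.norm((M_hat - M)[~mask]))  # error on the unseen entries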
Partially Detected Intelligent Traffic Signal Control (PD-ITSC) systems, which optimize traffic signals based on limited detected information, could be a cost-efficient solution for mitigating traffic congestion in the future. In this paper, we focus on a particular problem in PD-ITSC: adaptation to changing environments. To this end, we investigate different reinforcement learning (RL) algorithms, including Q-learning, Proximal Policy Optimization (PPO), Advantage Actor-Critic (A2C), and Actor-Critic with Kronecker-Factored Trust Region (ACKTR). Our findings suggest that RL algorithms can find optimal strategies under partial vehicle detection; however, policy-based algorithms adapt to changing environments more efficiently than value-based algorithms. We use these findings to draw conclusions about the value of different models for PD-ITSC systems.
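As a minimal illustration of how a value-based method such as Q-learning (one of the algorithms named above) can drive a signal controller, here is a hedged sketch; the environment interface (reset/step), the two-action "keep/switch phase" encoding, and the hyperparameters are hypothetical placeholders rather than the simulator or settings used in the paper.

    import random
    from collections import defaultdict

    def q_learning(env, n_episodes=500, n_actions=2,
                   alpha=0.1, gamma=0.95, eps=0.1):
        """Tabular Q-learning; `env` is a hypothetical traffic-signal simulator
        with reset() -> state and step(action) -> (next_state, reward, done),
        where the state encodes only the *detected* vehicles at the junction."""
        Q = defaultdict(lambda: [0.0] * n_actions)
        for _ in range(n_episodes):
            state, done = env.reset(), False
            while not done:
                # Epsilon-greedy choice between "keep phase" and "switch phase".
                if random.random() < eps:
                    action = random.randrange(n_actions)
                else:
                    action = max(range(n_actions), key=lambda a: Q[state][a])
                next_state, reward, done = env.step(action)
                # One-step temporal-difference update toward the greedy target.
                target = reward + gamma * max(Q[next_state])
                Q[state][action] += alpha * (target - Q[state][action])
                state = next_state
        return Q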
We design and experimentally demonstrate a radio-frequency interference management system based on free-space optical communication and photonic signal processing. The system provides real-time interference cancellation over a 6-GHz-wide bandwidth.
Orthogonal frequency-division multiplexing (OFDM) has been selected as the basis for the fifth-generation new radio (5G-NR) waveform developments. However, effective signal processing tools are needed for enhancing the OFDM spectrum in various advanced transmission scenarios. In earlier work, we have shown that fast-convolution (FC) processing is a very flexible and efficient tool for filtered-OFDM signal generation and receiver-side subband filtering, e.g., in the mixed-numerology scenarios of 5G-NR. FC filtering approximates linear convolution through efficient fast Fourier transform (FFT)-based circular convolutions applied to partly overlapping processing blocks. However, with the continuous overlap-and-save and overlap-and-add processing models with fixed block size and fixed overlap, the FC processing blocks cannot be aligned with all OFDM symbols of a transmission frame. Furthermore, the 5G-NR numerology does not allow the use of transform lengths shorter than 128, because this would lead to non-integer cyclic prefix (CP) lengths. In this article, we present new FC processing schemes that overcome these limitations. The schemes are based on dynamically adjusting the overlap periods and extrapolating the CP samples, which makes it possible to align the FC blocks with each OFDM symbol, even in the case of variable CP lengths. This reduces complexity and latency, e.g., in mini-slot transmissions, and, for example, allows the use of 16-point transforms for a 12-subcarrier-wide subband allocation, greatly reducing the implementation complexity. On the receiver side, the proposed scheme makes it possible to effectively combine cascaded inverse and forward FFT units in FC-filtered OFDM processing, and transform decomposition is used to simplify these computations. A very extensive set of numerical results is also provided in terms of radio-link performance and associated processing complexity.
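For readers unfamiliar with the underlying FFT-domain filtering idea, the NumPy sketch below shows plain overlap-save processing, i.e., approximating linear convolution with block-wise circular convolutions on partly overlapping blocks. It illustrates only this basic principle, not the dynamically adjusted overlaps or CP extrapolation proposed in the article, and the block length of 128 and the moving-average filter are arbitrary choices.

    import numpy as np

    def overlap_save_filter(x, h, block_len=128):
        """Filter x with the FIR response h using FFT-based circular
        convolutions on partly overlapping blocks (overlap-save)."""
        M = len(h)
        L = block_len - M + 1                      # new samples per block
        H = np.fft.fft(h, block_len)               # filter frequency response
        x_pad = np.concatenate([np.zeros(M - 1), x, np.zeros(L)])
        out = []
        for start in range(0, len(x), L):
            block = x_pad[start:start + block_len]
            y_circ = np.fft.ifft(np.fft.fft(block) * H)
            # The first M-1 outputs are corrupted by circular wrap-around.
            out.append(y_circ[M - 1:])
        return np.real(np.concatenate(out))[:len(x)]

    # Example: compare against direct convolution for a random signal.
    rng = np.random.default_rng(1)
    x = rng.standard_normal(1000)
    h = np.ones(16) / 16.0                         # simple moving-average FIR
    print(np.allclose(overlap_save_filter(x, h), np.convolve(x, h)[:len(x)]))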
The recent advancements in cloud services, the Internet of Things (IoT), and cellular networks have made cloud computing an attractive option for intelligent traffic signal control (ITSC). Such an approach significantly reduces the cost of cabling, installation, the number of devices used, and maintenance. Cloud-based ITSC systems lower the overall cost and make it possible to scale the system by utilizing existing powerful cloud platforms. While such systems have significant potential, one of the critical problems that must be addressed is network delay. Delay in message propagation is hard to prevent and can degrade the performance of the system or even create safety issues for vehicles at intersections. In this paper, we introduce a new traffic signal control algorithm based on reinforcement learning that performs well even under severe network delay. The framework introduced in this paper can be helpful for all agent-based systems using remote computing resources where network delay could be a critical concern. Extensive simulation results obtained for different scenarios show the viability of the designed algorithm in coping with network delay.
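The paper's specific algorithm is not reproduced here; the sketch below only illustrates one common, generic way of modelling actuation delay in reinforcement learning, namely buffering in-flight commands and exposing them to the agent as part of the state. The wrapper, the `delay` and `noop_action` parameters, and the `env` interface (reset/step) are all hypothetical assumptions.

    from collections import deque

    class DelayedActionWrapper:
        """Wrap a traffic-signal simulator so that each command takes effect
        only after `delay` control steps, mimicking network delay between a
        cloud agent and the intersection. Pending commands are appended to the
        state so the agent can learn a delay-aware policy."""
        def __init__(self, env, delay=2, noop_action=0):
            self.env = env
            self.delay = delay
            self.noop_action = noop_action
            self.pending = deque()

        def reset(self):
            state = self.env.reset()
            self.pending = deque([self.noop_action] * self.delay)
            return (state, tuple(self.pending))

        def step(self, action):
            # Queue the new command; apply the one issued `delay` steps ago.
            self.pending.append(action)
            applied = self.pending.popleft()
            next_state, reward, done = self.env.step(applied)
            return (next_state, tuple(self.pending)), reward, done

A standard agent (e.g., the Q-learning sketch above) can then be trained on the wrapped environment without further modification.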