Downlink beamforming is a key technology for cellular networks. However, computing the transmit beamformer that maximizes the weighted sum rate subject to a power constraint is an NP-hard problem. As a result, iterative algorithms that converge to a local optimum are used in practice. Among them, the weighted minimum mean square error (WMMSE) algorithm has gained popularity, but its computational complexity and the resulting latency have motivated lower-complexity approximations at the expense of performance. Motivated by the recent success of deep unfolding in trading off complexity against performance, we propose a novel application of deep unfolding to the WMMSE algorithm for a MISO downlink channel. The main idea is to map a fixed number of WMMSE iterations into trainable neural network layers whose architecture reflects the structure of the original algorithm. Compared with traditional end-to-end learning, deep unfolding naturally incorporates expert knowledge, with the benefits of immediate and well-grounded architecture selection, fewer trainable parameters, and better explainability. However, the formulation of the WMMSE algorithm described in Shi et al. is not amenable to unfolding because of the matrix inversion, eigendecomposition, and bisection search performed at each iteration. We therefore present an alternative formulation that circumvents these operations by resorting to projected gradient descent. Simulations show that, in most settings, the unfolded WMMSE outperforms or matches the WMMSE for a fixed number of iterations, with the advantage of a lower computational load.
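For concreteness, below is a minimal NumPy sketch of one unfolded layer under the setting of the abstract: a MISO downlink with K single-antenna users, channel rows h_k^H stacked in H, user weights alpha, noise power sigma2, and total power budget P. The u- and w-updates keep their closed forms, while the v-update becomes a gradient step followed by projection onto the power ball; the function name, the exact gradient scaling, and the per-layer scalar `step` (the quantity that would be trained) are illustrative assumptions, not the authors' code.

```python
import numpy as np

def unfolded_wmmse_layer(H, V, alpha, sigma2, P, step):
    """One unfolded layer: closed-form u/w updates, then a projected
    gradient step on the beamformer V in place of the matrix inversion,
    eigendecomposition, and bisection search of the original v-update."""
    G = H @ V                                      # G[k, j] = h_k^H v_j
    rx = np.sum(np.abs(G) ** 2, axis=1) + sigma2   # per-user signal + interference + noise
    u = np.diag(G) / rx                            # MMSE receive gains
    w = 1.0 / (1.0 - np.real(np.conj(u) * np.diag(G)))  # MSE weights
    # Gradient of the weighted sum-MSE w.r.t. V (constant factors are
    # absorbed into the trainable per-layer step size).
    c = alpha * w * np.abs(u) ** 2
    grad = (H.conj().T * c) @ H @ V - H.conj().T * (alpha * w * u)
    V = V - step * grad
    nrm = np.linalg.norm(V)                        # project onto ||V||_F^2 <= P
    return V if nrm ** 2 <= P else V * np.sqrt(P) / nrm
```

In a full unfolded network, a fixed number of such layers is stacked and the step sizes are learned by backpropagating a sum-rate loss through the stack.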
We study the problem of optimal power allocation in a single-hop ad hoc wireless network. To solve this problem, we propose a hybrid neural architecture inspired by the algorithmic unfolding of the iterative weighted minimum mean squared error (WMMSE) method, which we denote unfolded WMMSE (UWMMSE). The learnable weights within UWMMSE are parameterized by graph neural networks (GNNs), where the time-varying underlying graphs are given by the fading interference coefficients in the wireless network. These GNNs are trained through gradient descent on multiple instances of the power allocation problem. Once trained, UWMMSE achieves performance comparable to that of WMMSE while significantly reducing the computational complexity. Numerical experiments illustrate this behavior, along with the method's robustness and generalization to wireless networks of different densities and sizes.
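A minimal PyTorch sketch of one such layer follows, assuming the standard scalar WMMSE updates for single-antenna transceivers and a simple polynomial graph filter standing in for the paper's GNNs. The way the learned outputs a and b modulate the MSE weight w follows the unfolding pattern described above, but all module names, filter orders, and sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SimpleGraphFilter(nn.Module):
    """Order-2 polynomial graph filter y = t0*x + t1*Hx + t2*H^2x, a
    stand-in for the GNNs that parameterize UWMMSE's learnable weights."""
    def __init__(self):
        super().__init__()
        self.theta = nn.Parameter(0.1 * torch.randn(3))

    def forward(self, H, x):
        h1 = H @ x
        return self.theta[0] * x + self.theta[1] * h1 + self.theta[2] * (H @ h1)

class UWMMSELayer(nn.Module):
    """One unfolded scalar-WMMSE layer; GNN outputs a, b reweight the
    MSE weight as a*w + b before the power update."""
    def __init__(self):
        super().__init__()
        self.gnn_a, self.gnn_b = SimpleGraphFilter(), SimpleGraphFilter()

    def forward(self, H, v, sigma2=1.0, pmax=1.0):
        hd = torch.diagonal(H)                      # direct-link gains h_ii
        rx = sigma2 + (H ** 2) @ (v ** 2)           # signal + interference + noise
        u = hd * v / rx                             # MMSE receive gains
        w = 1.0 / (1.0 - u * hd * v + 1e-8)         # MSE weights
        x = v.unsqueeze(-1)
        a = self.gnn_a(H, x).squeeze(-1)
        b = self.gnn_b(H, x).squeeze(-1)
        w = a * w + b                               # learned reweighting
        v = (u * w * hd) / ((H.t() ** 2) @ (u ** 2 * w) + 1e-8)
        return v.clamp(0.0, pmax ** 0.5)            # respect the power budget
```

Stacking a few such layers and training the filter coefficients on sampled channel realizations mirrors the gradient-descent training over problem instances described in the abstract.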
Graph signal processing is a ubiquitous task in many applications such as sensor, social, transportation, and brain networks, point cloud processing, and graph neural networks. Graph signals are often corrupted through sensing processes and need to be restored for the above applications. In this paper, we propose two graph signal restoration methods based on deep algorithm unrolling (DAU). First, we present a graph signal denoiser obtained by unrolling iterations of the alternating direction method of multipliers (ADMM). We then propose a general restoration method for linear degradation by unrolling iterations of Plug-and-Play ADMM (PnP-ADMM). In the second method, the unrolled ADMM-based denoiser is incorporated as a submodule, so our restoration method has a nested DAU structure. Thanks to DAU, the parameters of the proposed denoising/restoration methods are trainable in an end-to-end manner. Since the proposed restoration methods are based on iterations of a (convex) optimization algorithm, they are interpretable and keep the number of parameters small, because we only need to tune graph-independent regularization parameters. We thereby address two main problems of existing graph signal restoration methods: 1) the limited performance of convex optimization algorithms due to fixed parameters, which are often determined manually, and 2) the large number of parameters in graph neural networks, which makes training difficult. Experiments on graph signal denoising and interpolation are performed on synthetic and real-world data, and the proposed methods show improvements over several existing methods in terms of root mean squared error in both tasks.
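As an illustration of the first method, here is a minimal PyTorch sketch of an unrolled ADMM denoiser for a graph total-variation style objective, min_x 0.5||x - y||^2 + gamma||Dx||_1 with D a graph incidence matrix. Only the per-iteration pair (gamma_t, rho_t) is trainable, matching the abstract's point about a small, graph-independent parameter count; the exact splitting and regularizer in the paper may differ.

```python
import torch
import torch.nn as nn

def soft(x, t):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    return torch.sign(x) * torch.clamp(x.abs() - t, min=0.0)

class UnrolledADMMDenoiser(nn.Module):
    """Unrolled ADMM for min_x 0.5||x - y||^2 + gamma ||D x||_1.
    The only trainable parameters are the per-layer (gamma_t, rho_t),
    stored as logs to keep them positive."""
    def __init__(self, num_layers=10):
        super().__init__()
        self.log_gamma = nn.Parameter(torch.zeros(num_layers))
        self.log_rho = nn.Parameter(torch.zeros(num_layers))

    def forward(self, y, D):
        x = y.clone()
        z = D @ x
        u = torch.zeros_like(z)
        I = torch.eye(y.shape[0], dtype=y.dtype)
        for t in range(self.log_gamma.shape[0]):
            gamma, rho = self.log_gamma[t].exp(), self.log_rho[t].exp()
            # x-update: quadratic subproblem solved exactly
            x = torch.linalg.solve(I + rho * (D.t() @ D),
                                   y + rho * (D.t() @ (z - u)))
            z = soft(D @ x + u, gamma / rho)   # z-update: prox of l1
            u = u + D @ x - z                  # dual ascent
        return x
```

Training this module end-to-end on noisy/clean signal pairs replaces the manual tuning of the regularization parameters that limits the fixed-parameter convex solvers.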
In this paper, we consider hybrid beamforming designs for multiuser massive multiple-input multiple-output (MIMO)-orthogonal frequency division multiplexing (OFDM) systems. Aiming at maximizing the weighted spectral efficiency, we propose an alternating maximization framework in which the analog precoding is optimized by Riemannian manifold optimization. If the digital precoding is optimized by a locally optimal algorithm, we obtain a locally optimal alternating maximization algorithm. If, instead, we use a weighted minimum mean square error (MMSE)-based iterative algorithm for the digital precoding, we obtain a suboptimal alternating maximization algorithm with reduced per-iteration complexity. By characterizing the upper bound of the weighted arithmetic and geometric means of the mean square errors (MSEs), we show that the two alternating maximization algorithms perform similarly when the user-specific weights do not differ greatly. Numerical results verify that the performance gap between the two algorithms becomes large when the ratio of the maximal to the minimal weight among users is very large. Moreover, we propose a low-complexity closed-form method without iterations, which employs matrix decomposition for the analog beamforming and weighted MMSE for the digital beamforming. Although it does not directly maximize the weighted spectral efficiency, it exhibits only a small performance loss compared to the two iterative alternating maximization algorithms, and it serves as a good initialization for them, thereby saving iterations.
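To make the manifold step concrete, the NumPy sketch below shows one Riemannian gradient-ascent step on the unit-modulus (complex circle) constraint set of the analog precoder, as it would appear inside the alternating loop: tangent-space projection of the Euclidean gradient, an ascent step, and an entrywise retraction. The stand-in objective ||HF||_F^2 in the usage snippet is only a placeholder for the weighted spectral efficiency, whose gradient (and the interleaved digital-precoder update) the paper computes.

```python
import numpy as np

def riemannian_step(F, egrad, step):
    """One Riemannian gradient-ascent step for the analog precoder on the
    unit-modulus (complex circle) manifold: project the Euclidean gradient
    onto the tangent space, step, then retract each phase-shifter entry
    back to unit modulus."""
    rgrad = egrad - np.real(egrad * F.conj()) * F  # tangent-space projection
    F = F + step * rgrad                           # ascent step
    return F / np.abs(F)                           # retraction: |F_ij| = 1

# Toy usage with the placeholder objective f(F) = ||H F||_F^2, whose
# Euclidean (Wirtinger) gradient is H^H H F.
rng = np.random.default_rng(0)
H = rng.standard_normal((16, 64)) + 1j * rng.standard_normal((16, 64))
F = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, size=(64, 8)))
for _ in range(50):
    F = riemannian_step(F, H.conj().T @ (H @ F), step=1e-3)
```

The entrywise renormalization is what keeps every phase-shifter coefficient feasible after each step, which is the role the manifold machinery plays in the alternating framework.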
Photoacoustic imaging (PAI) is a promising medical imaging technique that combines the high contrast of optical imaging with the resolution of ultrasound (US) imaging. Among PAI variants, three-dimensional (3D) PAI provides high resolution and accuracy. One of the most common algorithms for 3D PA image reconstruction is delay-and-sum (DAS). However, the quality of the images reconstructed by this algorithm is unsatisfactory, with high sidelobe levels and a wide mainlobe. In this paper, the delay-multiply-and-sum (DMAS) algorithm is proposed to overcome these limitations in 3D PAI. It is shown that DMAS is an appropriate reconstruction technique for 3D PAI and that images reconstructed with it are improved in terms of mainlobe width and sidelobe levels compared to DAS. Quantitative results also show that DMAS improves the signal-to-noise ratio (SNR) and the full-width-half-maximum (FWHM) by about 25 dB and 0.2 mm, respectively, compared to DAS.
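The two beamformers are simple to state in code. The sketch below, following the standard DMAS formulation (sign-preserving square roots of all pairwise products of the delay-compensated channel signals), uses the identity sum_{i<j} a_i a_j = ((sum_i a_i)^2 - sum_i a_i^2)/2 to avoid an explicit O(N^2) pair loop. Delay computation and any band-pass filtering typically applied after the multiplications are omitted here.

```python
import numpy as np

def das(delayed):
    """Delay-and-sum: sum the time-aligned sensor signals.
    `delayed` has shape (n_sensors, n_samples), already delay-compensated."""
    return delayed.sum(axis=0)

def dmas(delayed):
    """Delay-multiply-and-sum: sum of sign-preserving square roots of all
    pairwise products of the delayed signals. With a_i = sign(x_i)*sqrt(|x_i|),
    each pair term sign(x_i x_j)*sqrt(|x_i x_j|) equals a_i*a_j, so the
    double sum collapses to ((sum a_i)^2 - sum a_i^2) / 2."""
    a = np.sign(delayed) * np.sqrt(np.abs(delayed))
    return 0.5 * (a.sum(axis=0) ** 2 - (a ** 2).sum(axis=0))
```

The pairwise multiplications act as a correlation across channels, which is what suppresses the uncorrelated sidelobe energy that plain summation in DAS leaves behind.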
In this paper, we consider a massive multiple-input multiple-output (MIMO) downlink system that improves hardware efficiency by dynamically selecting an antenna subarray and using 1-bit phase shifters for hybrid beamforming. To maximize the spectral efficiency, we propose a novel deep unsupervised learning-based approach that avoids the computationally prohibitive process of acquiring training labels. The proposed design takes the channel matrix as input and consists of two convolutional neural networks (CNNs). To enable unsupervised training, the problem constraints are embedded in the networks: the first CNN adopts deep probabilistic sampling, while the second features a quantization layer designed for 1-bit phase shifters. The two networks are trained jointly, without labels, by sharing an unsupervised loss function. We further propose a phased training approach to promote the convergence of the two networks. Simulation results demonstrate the advantage of the proposed approach over conventional optimization-based algorithms in terms of both achieved rate and computational complexity.
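As an example of embedding a hardware constraint in the network, the following PyTorch sketch shows a 1-bit phase-shifter quantization layer trained with a straight-through estimator: the forward pass snaps each analog weight to {+1, -1} (phases 0 or pi), while the backward pass treats the layer as the identity so gradients reach earlier layers. This is a common construction for training through quantization; the paper's exact layer may differ.

```python
import torch

class OneBitPhaseQuantizer(torch.nn.Module):
    """1-bit phase-shifter layer: forward pass outputs sign(x) in {+1, -1};
    the straight-through estimator makes the backward pass an identity,
    since sign() has zero gradient almost everywhere."""
    def forward(self, x):
        q = torch.sign(x)
        q = torch.where(q == 0, torch.ones_like(q), q)  # avoid sign(0) = 0
        return x + (q - x).detach()  # forward: q, backward: d/dx = 1
```

Placing such a layer at the output of the second CNN guarantees that every emitted analog beamformer is feasible for 1-bit phase shifters while keeping the whole pipeline trainable with the shared unsupervised loss.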