
This paper integrates non-orthogonal multiple access (NOMA) and over-the-air federated learning (AirFL) into a unified framework using one simultaneous transmitting and reflecting reconfigurable intelligent surface (STAR-RIS). The STAR-RIS plays an important role in adjusting the decoding order of hybrid users for efficient interference mitigation and omni-directional coverage extension. To capture the impact of non-ideal wireless channels on AirFL, a closed-form expression for the optimality gap (a.k.a. convergence upper bound) between the actual loss and the optimal loss is derived. This analysis reveals that the learning performance is significantly affected by active and passive beamforming schemes as well as wireless noise. Furthermore, when the learning rate diminishes as the training proceeds, the optimality gap is explicitly characterized to converge at a linear rate. To accelerate convergence while satisfying QoS requirements, a mixed-integer non-linear programming (MINLP) problem is formulated by jointly designing the transmit power at users and the configuration mode of STAR-RIS. Next, a trust region-based successive convex approximation method and a penalty-based semidefinite relaxation approach are proposed to handle the decoupled non-convex subproblems iteratively. An alternating optimization algorithm is then developed to find a suboptimal solution for the original MINLP problem. Extensive simulation results show that i) the proposed framework can efficiently support NOMA and AirFL users via concurrent uplink communications, ii) our algorithms can achieve a faster convergence rate in both IID and non-IID settings compared to existing baselines, and iii) both the spectrum efficiency and learning performance can be significantly improved with the aid of the well-tuned STAR-RIS.
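
As a rough illustration of the over-the-air aggregation principle underlying AirFL, the following Python sketch superimposes pre-scaled user updates over a noisy channel and compares the result with the ideal average; the real-valued fading model, perfect channel inversion, and all parameter values are simplifying assumptions and do not reproduce the paper's STAR-RIS system model or its optimization.

import numpy as np

# Minimal sketch of over-the-air model aggregation (AirFL). The real-valued
# fading model and perfect channel inversion are simplifying assumptions.
rng = np.random.default_rng(0)
num_users, model_dim, noise_std = 4, 8, 0.05

# Local model updates held by each user (e.g., gradients after a local step).
local_updates = rng.normal(size=(num_users, model_dim))

# Per-user channel gains; each user pre-scales its signal so that all
# contributions arrive aligned at the server (ideal power control).
h = rng.uniform(0.5, 1.5, size=num_users)
tx_signals = local_updates / h[:, None]

# The transmitted signals superimpose over the air; the server observes
# their sum plus receiver noise.
rx = (h[:, None] * tx_signals).sum(axis=0) + noise_std * rng.normal(size=model_dim)

# Over-the-air estimate of the average update vs. the noise-free average.
air_avg = rx / num_users
ideal_avg = local_updates.mean(axis=0)
print("aggregation error:", np.linalg.norm(air_avg - ideal_avg))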
Multi-modal distributions are commonly used to model clustered data in statistical learning tasks. In this paper, we consider the Mixed Linear Regression (MLR) problem. We propose an optimal transport-based framework for MLR problems, Wasserstein Mixed Linear Regression (WMLR), which minimizes the Wasserstein distance between the learned and target mixture regression models. Through a model-based duality analysis, WMLR reduces the underlying MLR task to a nonconvex-concave minimax optimization problem, which can be provably solved to find a minimax stationary point by the Gradient Descent Ascent (GDA) algorithm. In the special case of mixtures of two linear regression models, we show that WMLR enjoys global convergence and generalization guarantees. We prove that WMLR's sample complexity grows linearly with the dimension of the data. Finally, we discuss the application of WMLR to the federated learning task where the training samples are collected by multiple agents in a network. Unlike the Expectation Maximization algorithm, WMLR directly extends to the distributed, federated learning setting. We support our theoretical results through several numerical experiments, which highlight our framework's ability to handle the federated learning setting with mixture models.
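
To make the role of Gradient Descent Ascent concrete, here is a small Python sketch of the generic GDA update on a toy convex-concave saddle objective; the objective, step sizes, and iteration count are illustrative assumptions and are not the WMLR minimax problem derived in the paper.

import numpy as np

def f_grad(x, y):
    # Toy convex-concave saddle objective f(x, y) = 0.5*x^2 + x*y - 0.5*y^2.
    grad_x = x + y      # gradient seen by the minimizing player
    grad_y = x - y      # gradient seen by the maximizing player
    return grad_x, grad_y

x, y = 2.0, -1.0
eta_x, eta_y = 0.05, 0.05
for _ in range(2000):
    gx, gy = f_grad(x, y)
    x -= eta_x * gx     # minimizing player descends
    y += eta_y * gy     # maximizing player ascends

print(f"approximate saddle point: x={x:.4f}, y={y:.4f}")  # expected near (0, 0)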
We present an introduction to model-based machine learning for communication systems. We begin by reviewing existing strategies for combining model-based algorithms and machine learning from a high-level perspective, and compare them to the conventional deep learning approach which utilizes established deep neural network (DNN) architectures trained in an end-to-end manner. Then, we focus on symbol detection, which is one of the fundamental tasks of communication receivers. We show how the different strategies of conventional deep architectures, deep unfolding, and DNN-aided hybrid algorithms can be applied to this problem. The last two approaches constitute a middle ground between purely model-based and solely DNN-based receivers. By focusing on this specific task, we highlight the advantages and drawbacks of each strategy, and present guidelines to facilitate the design of future model-based deep learning systems for communications.
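
As a sketch of the deep-unfolding strategy mentioned above, the following Python snippet treats a fixed number of projected-gradient iterations for linear-model symbol detection as layers with per-layer step sizes; in a real unfolded detector those step sizes (and possibly other parameters) would be trained from data, whereas here they are fixed illustrative values and no training loop is included.

import numpy as np

rng = np.random.default_rng(4)
n_rx, n_tx, layers = 6, 4, 20
step_sizes = np.full(layers, 0.05)        # per-layer parameters to be learned from data

H = rng.normal(size=(n_rx, n_tx))         # known linear channel
x_true = rng.choice([-1.0, 1.0], size=n_tx)
y = H @ x_true + 0.1 * rng.normal(size=n_rx)

x_hat = np.zeros(n_tx)
for eta in step_sizes:
    # Gradient step on ||y - Hx||^2 followed by a soft projection toward +-1.
    x_hat = x_hat - eta * (H.T @ (H @ x_hat - y))
    x_hat = np.tanh(3.0 * x_hat)

print("detected:", np.sign(x_hat), " true:", x_true)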
Signal processing, communications, and control have traditionally relied on classical statistical modeling techniques. Such model-based methods utilize mathematical formulations that represent the underlying physics, prior information and additional domain knowledge. Simple classical models are useful but sensitive to inaccuracies and may lead to poor performance when real systems display complex or dynamic behavior. On the other hand, purely data-driven approaches that are model-agnostic are becoming increasingly popular as datasets become abundant and the power of modern deep learning pipelines increases. Deep neural networks (DNNs) use generic architectures which learn to operate from data, and demonstrate excellent performance, especially for supervised problems. However, DNNs typically require massive amounts of data and immense computational resources, limiting their applicability for some signal processing scenarios. We are interested in hybrid techniques that combine principled mathematical models with data-driven systems to benefit from the advantages of both approaches. Such model-based deep learning methods exploit both partial domain knowledge, via mathematical structures designed for specific problems, as well as learning from limited data. In this article we survey the leading approaches for studying and designing model-based deep learning systems. We divide hybrid model-based/data-driven systems into categories based on their inference mechanism. We provide a comprehensive review of the leading approaches for combining model-based algorithms with deep learning in a systematic manner, along with concrete guidelines and detailed signal processing oriented examples from recent literature. Our aim is to facilitate the design and study of future systems at the intersection of signal processing and machine learning that incorporate the advantages of both domains.
Wireless communications is often subject to channel fading. Various statistical models have been proposed to capture the inherent randomness in fading, and conventional model-based receiver designs rely on accurate knowledge of this underlying distribution, which, in practice, may be complex and intractable. In this work, we propose a neural network-based symbol detection technique for downlink fading channels, which is based on the maximum a-posteriori probability (MAP) detector. To enable training on a diverse ensemble of fading realizations, we propose a federated training scheme, in which multiple users collaborate to jointly learn a universal data-driven detector, hence the name FedRec. The performance of the resulting receiver is shown to approach the MAP performance in diverse channel conditions without requiring knowledge of the fading statistics, while inducing a substantially reduced communication overhead in its training procedure compared to centralized training.
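
The following Python sketch illustrates the federated-averaging style of training alluded to here: each user runs a few local gradient steps on its own data and a server averages the resulting models. The linear least-squares "detector", the data model, and all hyper-parameters are placeholder assumptions rather than the FedRec architecture or its actual training protocol.

import numpy as np

rng = np.random.default_rng(1)
num_users, dim, samples_per_user = 5, 4, 64
w_global = np.zeros(dim)

def local_sgd(w, X, y, lr=0.1, epochs=5):
    # Plain least-squares gradient steps standing in for a user's local update.
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

# Each user observes its own channel realization (heterogeneous local data).
w_true = rng.normal(size=dim)
user_data = []
for _ in range(num_users):
    X = rng.normal(size=(samples_per_user, dim))
    y = X @ w_true + 0.1 * rng.normal(size=samples_per_user)
    user_data.append((X, y))

for _ in range(20):
    # Users refine the global model locally, then the server averages the results.
    local_models = [local_sgd(w_global.copy(), X, y) for X, y in user_data]
    w_global = np.mean(local_models, axis=0)

print("distance to target model:", np.linalg.norm(w_global - w_true))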
The design of methods for inference from time sequences has traditionally relied on statistical models that describe the relation between a latent desired sequence and the observed one. A broad family of model-based algorithms has been derived to carry out inference at controllable complexity using recursive computations over the factor graph representing the underlying distribution. An alternative model-agnostic approach utilizes machine learning (ML) methods. Here we propose a framework that combines model-based algorithms and data-driven ML tools for stationary time sequences. In the proposed approach, neural networks are developed to separately learn specific components of a factor graph describing the distribution of the time sequence, rather than the complete inference task. By exploiting stationary properties of this distribution, the resulting approach can be applied to sequences of varying temporal duration. Learned factor graphs can be realized using compact neural networks that are trainable from small training sets or, alternatively, be used to improve upon existing deep inference systems. We present an inference algorithm based on learned stationary factor graphs, which learns to implement the sum-product scheme from labeled data, and can be applied to sequences of different lengths. Our experimental results demonstrate the ability of the proposed learned factor graphs to learn to carry out accurate inference from small training sets for sleep stage detection using the Sleep-EDF dataset, as well as for symbol detection in digital communications with unknown channels.
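
For intuition, the following Python sketch runs the sum-product (forward-backward) recursion on a simple Markov factor graph; in the learned-factor-graph approach the transition and emission factors would be parameterized by trained neural networks, while here they are fixed hand-picked tables and the observation sequence is arbitrary.

import numpy as np

transition = np.array([[0.9, 0.1],
                       [0.2, 0.8]])          # factor between consecutive latent states
emission = np.array([[0.8, 0.2],
                     [0.3, 0.7]])            # factor linking latent state to observation
obs = [0, 0, 1, 1, 0]                        # observed sequence
T, S = len(obs), transition.shape[0]

# Forward messages alpha_t(s) and backward messages beta_t(s).
alpha = np.zeros((T, S))
beta = np.ones((T, S))
alpha[0] = emission[:, obs[0]] / S
for t in range(1, T):
    alpha[t] = emission[:, obs[t]] * (alpha[t - 1] @ transition)
for t in range(T - 2, -1, -1):
    beta[t] = transition @ (emission[:, obs[t + 1]] * beta[t + 1])

# Per-time-step posterior marginals over the latent state.
posterior = alpha * beta
posterior /= posterior.sum(axis=1, keepdims=True)
print(np.round(posterior, 3))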
Multiple-input multiple-output (MIMO) systems are required to communicate reliably at high spectral bands using a large number of antennas, while operating under strict power and cost constraints. In order to meet these constraints, future MIMO receivers are expected to operate with low resolution quantizers, namely, utilize a limited number of bits for representing their observed measurements, inherently distorting the digital representation of the acquired signals. The fact that MIMO receivers use their measurements for some task, such as symbol detection and channel estimation, other than recovering the underlying analog signal, indicates that the distortion induced by bit-constrained quantization can be reduced by designing the acquisition scheme in light of the system task, i.e., by task-based quantization. In this work we survey the theory and design approaches to task-based quantization, presenting model-aware designs as well as data-driven implementations. Then, we show how one can implement a task-based bit-constrained MIMO receiver, presenting approaches ranging from conventional hybrid receiver architectures to structures exploiting the dynamic nature of metasurface antennas. This survey narrows the gap between theoretical task-based quantization and its implementation in practice, providing concrete algorithmic and hardware design principles for realizing task-based MIMO receivers.
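
As a toy illustration of the task-based quantization principle, the Python sketch below compares, under a common bit budget, quantizing the raw antenna measurements against first projecting onto the task (a linear symbol estimate) and quantizing the lower-dimensional result. The linear Gaussian model, the mid-rise quantizer, and the specific bit allocation are all simplifying assumptions and do not correspond to the hardware architectures surveyed in the paper.

import numpy as np

rng = np.random.default_rng(2)
n_ant, n_sym, trials = 8, 2, 2000

def uniform_quantize(v, bits, dyn_range):
    # Mid-rise uniform quantizer with 2**bits levels over [-dyn_range, dyn_range].
    step = 2 * dyn_range / (2 ** bits)
    q = step * (np.floor(v / step) + 0.5)
    return np.clip(q, -dyn_range + step / 2, dyn_range - step / 2)

A = rng.normal(size=(n_ant, n_sym))          # channel matrix
W = np.linalg.pinv(A)                        # task: linear estimate of the symbols

err_ignorant = err_task = 0.0
for _ in range(trials):
    x = rng.choice([-1.0, 1.0], size=n_sym)  # BPSK symbols
    y = A @ x + 0.1 * rng.normal(size=n_ant)
    # Task-ignorant: spend 1 bit on each of the 8 antennas, then estimate.
    x_hat_ign = W @ uniform_quantize(y, bits=1, dyn_range=4.0)
    # Task-based: reduce to the 2 task dimensions first, then spend 4 bits on each.
    x_hat_task = uniform_quantize(W @ y, bits=4, dyn_range=2.0)
    err_ignorant += np.sum((x_hat_ign - x) ** 2)
    err_task += np.sum((x_hat_task - x) ** 2)

print("task-ignorant MSE:", err_ignorant / trials)
print("task-based MSE:   ", err_task / trials)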
Traditional multiple-input multiple-output radars, which transmit orthogonal coded waveforms, suffer from a range-azimuth resolution trade-off. In this work, we adopt a frequency division multiple access (FDMA) approach that breaks this conflict. We combine narrow individual bandwidths for high azimuth resolution with a large total bandwidth for high range resolution. We process all channels jointly to overcome the FDMA limitation of range resolution to a single band's bandwidth, and address range-azimuth coupling using a random array configuration.
The proliferation of wireless communications has recently created a bottleneck in terms of spectrum availability. Motivated by the observation that the root of the spectrum scarcity is not a lack of resources but inefficient management that can be remedied, dynamic opportunistic exploitation of spectral bands has been considered, under the name of Cognitive Radio (CR). This technology allows secondary users to access currently idle spectral bands by detecting and tracking the spectrum occupancy. The CR application revisits this traditional task with specific and severe requirements in terms of spectrum sensing and detection performance, real-time processing, robustness to noise and more. Unfortunately, conventional methods do not satisfy these demands for typical signals, which often have very high Nyquist rates. Recently, several sampling methods have been proposed that exploit signals' a priori known structure to sample them below the Nyquist rate. Here, we review some of these techniques and tie them to the task of spectrum sensing in the context of CR. We then show how issues related to spectrum sensing can be tackled in the sub-Nyquist regime. First, to cope with low signal-to-noise ratios, we propose to recover second-order statistics from the low-rate samples, rather than the signal itself. In particular, we consider cyclostationary-based detection, and investigate CR networks that perform collaborative spectrum sensing to overcome channel effects. To enhance the detection efficiency of the available spectral bands, we present joint spectrum sensing and direction of arrival estimation methods. Throughout this work, we highlight the relation between theoretical algorithms and their practical implementation. We show hardware simulations performed on a prototype we built, demonstrating the feasibility of sub-Nyquist spectrum sensing in the context of CR.
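
To illustrate why second-order (cyclostationary) statistics help at low SNR, the Python sketch below evaluates the lag-zero cyclic autocorrelation of a BPSK-modulated carrier at twice the carrier frequency, where the signal exhibits a cyclic feature while stationary noise does not. Sampling is performed at the Nyquist rate for simplicity and the waveform parameters are illustrative, so the paper's sub-Nyquist recovery of these statistics is not reproduced here.

import numpy as np

rng = np.random.default_rng(3)
N, f_c, sps = 65536, 0.123, 16     # samples, normalized carrier frequency, samples/symbol
snr_db = -5.0

def cyclic_stat(x, alpha):
    # Magnitude of the lag-zero cyclic autocorrelation R_x^alpha(0).
    n = np.arange(len(x))
    return np.abs(np.mean(x ** 2 * np.exp(-2j * np.pi * alpha * n)))

symbols = rng.choice([-1.0, 1.0], size=N // sps)
s = np.repeat(symbols, sps) * np.cos(2 * np.pi * f_c * np.arange(N))
noise_std = 10 ** (-snr_db / 20) * np.sqrt(np.mean(s ** 2))
noise = noise_std * rng.normal(size=N)

# The statistic is near zero for stationary noise but large when the
# cyclostationary signal is present, even below 0 dB SNR.
print("noise only    :", cyclic_stat(noise, 2 * f_c))
print("signal + noise:", cyclic_stat(s + noise, 2 * f_c))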
This paper presents a spectrum sharing technology enabling interference-free operation of a surveillance radar and communication transmissions over a common spectrum. A cognitive radio receiver senses the spectrum using low sampling and processing rates. The radar is a cognitive system that employs a Xampling-based receiver and transmits in several narrow bands. Our main contribution is the alliance of two previous ideas, cognitive radio (CRo) and cognitive radar (CRr), and their adaptation to solve the spectrum sharing problem.