
Data-Driven Symbol Detection via Model-Based Machine Learning

 Added by Nir Shlezinger
 Publication date 2020
Research language: English





The design of symbol detectors in digital communication systems has traditionally relied on statistical channel models that describe the relation between the transmitted symbols and the observed signal at the receiver. Here we review a data-driven framework for symbol detection design which combines machine learning (ML) and model-based algorithms. In this hybrid approach, well-known channel-model-based algorithms such as the Viterbi method, BCJR detection, and multiple-input multiple-output (MIMO) soft interference cancellation (SIC) are augmented with ML-based algorithms to remove their channel-model dependence, allowing the receiver to learn to implement these algorithms solely from data. The resulting data-driven receivers are most suitable for systems where the underlying channel models are poorly understood, highly complex, or do not capture the underlying physics well. Our approach is unique in that it only replaces the channel-model-based computations with dedicated neural networks that can be trained from a small amount of data, while keeping the general algorithm intact. Our results demonstrate that these techniques can yield near-optimal performance of model-based algorithms without knowing the exact channel input-output statistical relationship and in the presence of channel state information uncertainty.
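
As a concrete illustration of this hybrid approach, the sketch below replaces the channel-model-based branch metrics of the Viterbi algorithm with a small neural network while leaving the trellis recursion itself untouched (the idea behind ViterbiNet-style detection). The BPSK alphabet, channel memory of two symbols, network size, and uniform-prior assumption are illustrative choices, not the exact architecture from the paper, and the network is shown untrained; in practice it would be fitted from a small labeled data set.

```python
import numpy as np
import torch
import torch.nn as nn

# Illustrative assumptions: BPSK symbols and a channel memory of L = 2,
# so the trellis has 2**L = 4 states (one per pattern of recent symbols).
SYMS = np.array([-1.0, 1.0])
MEMORY = 2
N_STATES = len(SYMS) ** MEMORY

# Small network mapping a received sample y_t to likelihood scores over the
# N_STATES symbol patterns. In the hybrid approach this network replaces the
# channel-model-based computation of p(y_t | state); the Viterbi recursion
# below is left completely intact. (Uniform symbol priors are assumed.)
metric_net = nn.Sequential(
    nn.Linear(1, 32), nn.ReLU(),
    nn.Linear(32, N_STATES),
)

def viterbi_detect(y):
    """Standard Viterbi recursion; only the branch metrics are learned."""
    with torch.no_grad():
        y_t = torch.tensor(y, dtype=torch.float32).view(-1, 1)
        log_like = torch.log_softmax(metric_net(y_t), dim=1).numpy()
    T = len(y)
    cost = np.full((T, N_STATES), -np.inf)
    prev = np.zeros((T, N_STATES), dtype=int)
    cost[0] = log_like[0]
    for t in range(1, T):
        for s in range(N_STATES):
            # predecessors: states whose newest symbol is the older symbol of s
            preds = [b * len(SYMS) + s // len(SYMS) for b in range(len(SYMS))]
            best = max(preds, key=lambda p: cost[t - 1, p])
            cost[t, s] = cost[t - 1, best] + log_like[t, s]
            prev[t, s] = best
    # backtrack the most likely state sequence and read off the newest symbols
    path = [int(np.argmax(cost[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(prev[t, path[-1]])
    path.reverse()
    return np.array([SYMS[s % len(SYMS)] for s in path])

print(viterbi_detect(np.random.randn(10)))
```

Because only `metric_net` depends on the channel, retraining it on new data adapts the detector to a different channel while the surrounding algorithm stays fixed.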



Related research

Growing demands in industrial networks lead to increasingly large sensor networks. However, the complexity of these networks and the demand for accurate data require better stability and communication quality. Conventional clustering methods for ad-hoc networks are based on topology and connectivity, leading to unstable clustering results and low communication quality. In this paper, we focus on two situations: time-evolving networks, and multi-channel ad-hoc networks. We model ad-hoc networks as graphs and introduce community detection methods to both situations. Particularly, in time-evolving networks, our method utilizes the results of community detection to ensure stability. By using similarity or human-in-the-loop measures, we construct a new weighted graph for final clustering. In multi-channel networks, we perform allocations from the results of multiplex community detection. Experiments on real-world datasets show that our method outperforms baselines in both stability and quality.
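
A minimal sketch of the clustering idea described above: the ad-hoc network is modeled as a graph whose edge weights encode a similarity measure, and clusters are obtained by community detection on that weighted graph. The toy topology, the similarity weights, and the use of greedy modularity maximization via networkx are stand-in assumptions; the paper's own community-detection procedure and human-in-the-loop weighting are not reproduced here.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Toy ad-hoc network: nodes are sensors, edge weights encode a similarity
# measure (e.g. link quality averaged over time) rather than raw topology
# alone. The weights below are illustrative assumptions.
G = nx.Graph()
G.add_weighted_edges_from([
    ("a", "b", 0.9), ("b", "c", 0.8), ("a", "c", 0.7),   # tightly connected group 1
    ("d", "e", 0.9), ("e", "f", 0.85), ("d", "f", 0.8),  # tightly connected group 2
    ("c", "d", 0.1),                                     # weak inter-cluster link
])

# Community detection on the weighted graph yields the clusters; greedy
# modularity maximization is used here as a generic stand-in method.
clusters = greedy_modularity_communities(G, weight="weight")
for i, community in enumerate(clusters):
    print(f"cluster {i}: {sorted(community)}")
```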
In this paper, we propose a model-based machine-learning approach for dual-polarization systems by parameterizing the split-step Fourier method for the Manakov-PMD equation. The resulting method combines hardware-friendly time-domain nonlinearity mitigation via the recently proposed learned digital backpropagation (LDBP) with distributed compensation of polarization-mode dispersion (PMD). We refer to the resulting approach as LDBP-PMD. We train LDBP-PMD on multiple PMD realizations and show that it converges within 1% of its peak dB performance after 428 training iterations on average, yielding a peak effective signal-to-noise ratio only 0.30 dB below the PMD-free case. Similar to state-of-the-art lumped PMD compensation algorithms in practical systems, our approach does not assume any knowledge about the particular PMD realization along the link, nor any knowledge about the total accumulated PMD. This is a significant improvement compared to prior work on distributed PMD compensation, where knowledge about the accumulated PMD is typically assumed. We also compare different parameterization choices in terms of performance, complexity, and convergence behavior. Lastly, we demonstrate that the learned models can be successfully retrained after an abrupt change of the PMD realization along the fiber.
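
The sketch below illustrates, in a simplified single-polarization form, the parameterization idea behind learned digital backpropagation: the split-step structure is unrolled into alternating linear filtering steps with trainable FIR taps and static nonlinear phase-rotation steps. The step count, tap initialization, and nonlinearity coefficient are illustrative assumptions, the taps are shown fixed rather than optimized by gradient descent, and the distributed PMD-compensation sections of LDBP-PMD are omitted.

```python
import numpy as np

N_STEPS = 4          # number of unrolled split-steps (illustrative)
TAPS_PER_STEP = 7    # FIR taps per linear step (illustrative)
GAMMA_EFF = 0.02     # per-step nonlinearity coefficient (illustrative)
rng = np.random.default_rng(0)

def init_filter():
    # "Learnable" complex FIR taps: identity (center tap) plus a small perturbation.
    h = np.zeros(TAPS_PER_STEP, dtype=complex)
    h[TAPS_PER_STEP // 2] = 1.0
    h += 0.01 * (rng.standard_normal(TAPS_PER_STEP)
                 + 1j * rng.standard_normal(TAPS_PER_STEP))
    return h

filters = [init_filter() for _ in range(N_STEPS)]

def ldbp_forward(x):
    """One unrolled pass: linear (dispersion-like) step, then nonlinear phase rotation."""
    for h in filters:
        x = np.convolve(x, h, mode="same")                  # linear step (trainable taps)
        x = x * np.exp(-1j * GAMMA_EFF * np.abs(x) ** 2)    # static nonlinear step
    return x

# Toy received waveform
y = rng.standard_normal(64) + 1j * rng.standard_normal(64)
print(ldbp_forward(y)[:4])
```

In the learned variant, the per-step taps (and, in LDBP-PMD, the distributed polarization rotations) are the parameters updated during training.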
Forward channel state information (CSI) often plays a vital role in scheduling and capacity-approaching transmission optimization for massive multiple-input multiple-output (MIMO) communication systems. In frequency division duplex (FDD) massive MIMO systems, forward-link CSI reconstruction at the transmitter relies critically on CSI feedback from receiving nodes and must carefully weigh the tradeoff between reconstruction accuracy and feedback bandwidth. Recent studies on the use of recurrent neural networks (RNNs) have demonstrated strong promise for massive MIMO deployment, though the computation and memory costs remain high. In this work, we exploit channel coherence in time to substantially improve the feedback efficiency. Using a Markovian model, we develop a deep convolutional neural network (CNN)-based framework MarkovNet to differentially encode forward CSI in time to effectively improve reconstruction accuracy. Furthermore, we explore important physical insights, including spherical normalization of input data and convolutional layers for feedback compression. We demonstrate substantial performance improvement and complexity reduction over the RNN-based work by our proposed MarkovNet to recover forward CSI estimates accurately. We explore additional practical considerations in feedback quantization, and show that MarkovNet outperforms RNN-based CSI estimation networks at a fraction of the computational cost.
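
A rough sketch of the differential, Markovian feedback idea: each feedback round encodes only the innovation of the (spherically normalized) CSI relative to the previous reconstruction, so that under channel coherence the feedback carries a small residual. The linear encoder/decoder pair below is a stand-in for MarkovNet's CNN encoder and decoder; the dimensions and function names are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
CSI_DIM, CODE_DIM = 256, 32  # flattened CSI size and feedback codeword size (illustrative)

# Stand-in encoder/decoder: a random linear projection and its pseudo-inverse.
# In MarkovNet these are CNN encoder/decoder pairs; this sketch only shows the
# differential (Markovian) structure of the feedback, not the architecture.
W_enc = rng.standard_normal((CODE_DIM, CSI_DIM)) / np.sqrt(CSI_DIM)
W_dec = np.linalg.pinv(W_enc)

def spherical_normalize(h):
    """Scale the CSI vector to unit norm (the power can be fed back separately)."""
    return h / (np.linalg.norm(h) + 1e-12)

def feedback_step(h_t, h_prev_rec):
    """Encode only the innovation relative to the previous reconstruction."""
    delta = spherical_normalize(h_t) - h_prev_rec   # temporal difference
    code = W_enc @ delta                            # compressed feedback word
    h_rec = h_prev_rec + W_dec @ code               # transmitter-side update
    return code, h_rec

# Slowly varying channel: small innovations on top of the previous CSI
h = spherical_normalize(rng.standard_normal(CSI_DIM))
h_rec = np.zeros(CSI_DIM)
for t in range(5):
    h = spherical_normalize(h + 0.05 * rng.standard_normal(CSI_DIM))
    _, h_rec = feedback_step(h, h_rec)
    print(f"t={t}: reconstruction error {np.linalg.norm(h - h_rec)**2:.4f}")
```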
In this paper, a joint task, spectrum, and transmit power allocation problem is investigated for a wireless network in which the base stations (BSs) are equipped with mobile edge computing (MEC) servers to jointly provide computational and communication services to users. Each user can request one of three types of computational tasks. Since the data size of each computational task is different, as the requested computational task varies, the BSs must adjust their resource (subcarrier and transmit power) and task allocation schemes to effectively serve the users. This problem is formulated as an optimization problem whose goal is to minimize the maximal computational and transmission delay among all users. A multi-stack reinforcement learning (RL) algorithm is developed to solve this problem. Using the proposed algorithm, each BS can record the historical resource allocation schemes and user information in its multiple stacks to avoid learning the same resource allocation scheme and user states, thus improving the convergence speed and learning efficiency. Simulation results illustrate that the proposed algorithm can reduce the number of iterations needed for convergence and the maximal delay among all users by up to 18% and 11.1% compared to the standard Q-learning algorithm.
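
A toy sketch of the multi-stack bookkeeping on top of tabular Q-learning: for each state, recently tried allocation schemes are pushed onto a bounded stack, and exploration avoids repeating schemes already recorded there, which is the mechanism credited for faster convergence. The environment, reward, state/action spaces, and stack policy below are illustrative assumptions and not the paper's actual system model.

```python
import random
from collections import defaultdict, deque

random.seed(0)

N_STATES, N_ACTIONS = 6, 4          # toy state/action spaces (illustrative)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2   # learning rate, discount, exploration rate

Q = defaultdict(float)
# "Multi-stack" memory: one bounded stack per state recording allocation
# schemes (actions) already tried there, so exploration does not keep
# revisiting the same scheme.
stacks = defaultdict(lambda: deque(maxlen=N_ACTIONS - 1))

def reward(s, a):
    # Stand-in for the negative of the maximal user delay under scheme a.
    return -abs((s + a) % N_ACTIONS - 1)

def choose_action(s):
    if random.random() < EPS:
        fresh = [a for a in range(N_ACTIONS) if a not in stacks[s]]
        return random.choice(fresh or list(range(N_ACTIONS)))
    return max(range(N_ACTIONS), key=lambda a: Q[(s, a)])

state = 0
for _ in range(2000):
    action = choose_action(state)
    stacks[state].append(action)                 # record the tried scheme
    r = reward(state, action)
    nxt = (state + action) % N_STATES
    best_next = max(Q[(nxt, b)] for b in range(N_ACTIONS))
    Q[(state, action)] += ALPHA * (r + GAMMA * best_next - Q[(state, action)])
    state = nxt

# learned greedy scheme per state
print({s: max(range(N_ACTIONS), key=lambda a: Q[(s, a)]) for s in range(N_STATES)})
```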
We present an introduction to model-based machine learning for communication systems. We begin by reviewing existing strategies for combining model-based algorithms and machine learning from a high-level perspective, and compare them to the conventional deep learning approach which utilizes established deep neural network (DNN) architectures trained in an end-to-end manner. Then, we focus on symbol detection, which is one of the fundamental tasks of communication receivers. We show how the different strategies of conventional deep architectures, deep unfolding, and DNN-aided hybrid algorithms can be applied to this problem. The last two approaches constitute a middle ground between purely model-based and solely DNN-based receivers. By focusing on this specific task, we highlight the advantages and drawbacks of each strategy, and present guidelines to facilitate the design of future model-based deep learning systems for communications.
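
To complement the DNN-aided Viterbi sketch given earlier, the example below illustrates the deep-unfolding strategy: a fixed number of projected-gradient-descent iterations for linear MIMO detection is unrolled into layers whose step sizes are trainable parameters. The layer count, dimensions, and tanh projection are illustrative choices rather than a specific architecture from the paper.

```python
import torch
import torch.nn as nn

class UnfoldedPGD(nn.Module):
    """K unrolled projected-gradient iterations for y = Hx + n with learnable step sizes."""

    def __init__(self, n_layers=5):
        super().__init__()
        # one learnable step size per unfolded iteration
        self.steps = nn.Parameter(0.1 * torch.ones(n_layers))

    def forward(self, H, y):
        x = torch.zeros(H.shape[1])
        for mu in self.steps:
            grad = H.t() @ (H @ x - y)        # gradient of 0.5 * ||y - Hx||^2
            x = torch.tanh(x - mu * grad)     # soft projection toward [-1, 1] (BPSK-like)
        return x

torch.manual_seed(0)
H = torch.randn(8, 4)                 # toy 8x4 MIMO channel
x_true = torch.sign(torch.randn(4))   # BPSK transmit vector
y = H @ x_true + 0.1 * torch.randn(8)

model = UnfoldedPGD()
print(torch.sign(model(H, y)), x_true)
```

Training the step sizes (and any additional per-layer parameters) over many (H, y, x) examples is what turns this fixed iterative algorithm into a learned detector.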

