
We present an introduction to model-based machine learning for communication systems. We begin by reviewing existing strategies for combining model-based algorithms and machine learning from a high-level perspective, and compare them to the conventional deep learning approach, which utilizes established deep neural network (DNN) architectures trained in an end-to-end manner. Then, we focus on symbol detection, which is one of the fundamental tasks of communication receivers. We show how the different strategies of conventional deep architectures, deep unfolding, and DNN-aided hybrid algorithms can be applied to this problem. The last two approaches constitute a middle ground between purely model-based and solely DNN-based receivers. By focusing on this specific task, we highlight the advantages and drawbacks of each strategy, and present guidelines to facilitate the design of future model-based deep learning systems for communications.
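To make the deep unfolding strategy concrete, the sketch below (our own illustration, not code from the paper) unrolls a fixed number of projected gradient iterations for recovering BPSK symbols from a linear model y = Hx + n into layers with trainable step sizes; the model, the alphabet, and all names are illustrative assumptions.

```python
# A minimal sketch of deep unfolding: K iterations of projected gradient
# descent for detecting symbols x from y = Hx + noise are unrolled into
# K layers, each with its own learnable step size.
import torch
import torch.nn as nn

class UnfoldedDetector(nn.Module):
    def __init__(self, num_layers=10):
        super().__init__()
        # One trainable step size per unfolded iteration.
        self.step_sizes = nn.Parameter(0.01 * torch.ones(num_layers))

    def forward(self, y, H):
        x = torch.zeros(H.shape[1])
        for delta in self.step_sizes:
            # Gradient step on the least-squares objective ||y - Hx||^2 ...
            x = x + delta * H.T @ (y - H @ x)
            # ... followed by a soft projection onto the BPSK alphabet {-1, +1}.
            x = torch.tanh(x)
        return x

# The layers are trained end-to-end, e.g. with an MSE loss against known symbols.
H = torch.randn(4, 4)
x_true = torch.sign(torch.randn(4))
y = H @ x_true + 0.1 * torch.randn(4)
print(UnfoldedDetector()(y, H))
```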
Reed-Muller (RM) codes are one of the oldest families of codes. Recently, a recursive projection aggregation (RPA) decoder was proposed that achieves performance close to that of the maximum-likelihood decoder for short-length RM codes. One of its main drawbacks, however, is the large number of computations it requires. In this paper, we devise a new algorithm that lowers the computational budget while maintaining performance close to that of the RPA decoder. The proposed approach consists of multiple sparse RPA decoders, each generated by performing only a selection of the projections. Finally, a cyclic redundancy check (CRC) is used to decide among the output codewords. Simulation results show that our proposed approach reduces the RPA decoder's computations by up to $80\%$ with negligible performance loss.
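The CRC arbitration step is straightforward to illustrate. The following sketch, with a hypothetical CRC-4 polynomial and helper names of our own choosing, shows how candidate codewords produced by the sparsified decoders could be screened by a CRC check; the RPA decoding itself is omitted.

```python
# A minimal sketch (with hypothetical helper names) of CRC-based arbitration:
# each sparsified RPA decoder produces a candidate codeword, and the first
# candidate whose embedded CRC checks out is returned.

CRC_POLY = 0b10011  # illustrative 4-bit CRC polynomial (CRC-4)

def crc_remainder(bits, poly=CRC_POLY):
    """Polynomial division of the bit sequence by poly over GF(2)."""
    reg = 0
    width = poly.bit_length() - 1
    for b in bits:
        reg = (reg << 1) | b
        if reg >> width:
            reg ^= poly
    return reg  # zero iff the CRC check passes

def select_codeword(candidates):
    """Pick the first candidate (info bits + CRC bits) that passes the CRC."""
    for cw in candidates:
        if crc_remainder(cw) == 0:
            return cw
    return candidates[0]  # fallback when no candidate passes

# Each sparse decoder would be run on the channel output; here we fake two
# candidates, one valid and one corrupted.
info = [1, 0, 1, 1]
crc = crc_remainder(info + [0, 0, 0, 0])
valid = info + [(crc >> i) & 1 for i in reversed(range(4))]
corrupted = valid[:]; corrupted[0] ^= 1
print(select_codeword([corrupted, valid]))
```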
The design of methods for inference from time sequences has traditionally relied on statistical models that describe the relation between a latent desired sequence and the observed one. A broad family of model-based algorithms has been derived to carry out inference at controllable complexity using recursive computations over the factor graph representing the underlying distribution. An alternative, model-agnostic approach utilizes machine learning (ML) methods. Here we propose a framework that combines model-based algorithms and data-driven ML tools for stationary time sequences. In the proposed approach, neural networks are developed to separately learn specific components of a factor graph describing the distribution of the time sequence, rather than the complete inference task. By exploiting the stationarity of this distribution, the resulting approach can be applied to sequences of varying temporal duration. Learned factor graphs can be realized using compact neural networks that are trainable from small training sets, or, alternatively, can be used to improve upon existing deep inference systems. We present an inference algorithm based on learned stationary factor graphs, which learns to implement the sum-product scheme from labeled data and can be applied to sequences of different lengths. Our experimental results demonstrate the ability of the proposed learned factor graphs to carry out accurate inference from small training sets, both for sleep stage detection using the Sleep-EDF dataset and for symbol detection in digital communications with unknown channels.
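The following NumPy sketch, ours rather than the paper's code, shows the structure such a learned factor graph could take on a chain: the sum-product (forward-backward) recursion is fixed, while the per-step factor, which the paper learns with a neural network, is supplied here by a hypothetical stand-in function. Stationarity means the same factor mapping is reused at every step, so the routine accepts sequences of any length.

```python
# Sum-product message passing on a chain-structured factor graph. The
# per-step factor f(s_prev, s_next, y_t) would be produced by a trained
# neural network; a stand-in `factor` function plays that role here.
import numpy as np

def sum_product_chain(factors):
    """factors: array of shape (T, S, S); entry [t, i, j] represents
    f(s_{t-1}=i, s_t=j, y_t). Returns per-step state marginals."""
    T, S, _ = factors.shape
    fwd = np.ones((T + 1, S)) / S           # forward messages
    bwd = np.ones((T + 1, S))               # backward messages
    for t in range(T):
        fwd[t + 1] = fwd[t] @ factors[t]
        fwd[t + 1] /= fwd[t + 1].sum()      # normalize for numerical stability
    for t in range(T - 1, -1, -1):
        bwd[t] = factors[t] @ bwd[t + 1]
        bwd[t] /= bwd[t].sum()
    marg = fwd[1:] * bwd[1:]
    return marg / marg.sum(axis=1, keepdims=True)

# Stand-in for a learned factor: the mapping from the observation y_t to an
# S x S table is what the neural network would learn from data.
def factor(y_t, S=2):
    lik = np.exp(-(y_t - np.array([-1.0, 1.0])) ** 2)  # assumed Gaussian-like
    trans = np.array([[0.9, 0.1], [0.1, 0.9]])          # assumed transitions
    return trans * lik[None, :]

obs = np.array([-0.8, -1.1, 0.9, 1.2])
print(sum_product_chain(np.stack([factor(y) for y in obs])))
```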
This work introduces the particle-intensity channel (PIC) as a model for molecular communication systems, and characterizes the capacity limits as well as the properties of the optimal (capacity-achieving) input distributions for such channels. In the PIC, the transmitter encodes information, in symbols of a given duration, through the probability of particle release, and the receiver detects and decodes the message based on the number of particles detected during the symbol interval. In this channel, the transmitter may be unable to precisely control the probability of particle release, and the receiver may not detect all the particles that arrive. We model this channel as a generalization of the binomial channel and show that the capacity-achieving input distribution always has mass points at release probabilities of zero and one. To find the capacity-achieving input distributions, we develop an efficient algorithm that we call dynamic assignment Blahut-Arimoto (DAB). For diffusive particle transport, we also derive the conditions under which an input with two mass points is capacity-achieving.
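DAB builds on the classic Blahut-Arimoto iteration; as background, here is a textbook NumPy implementation of that baseline for a discrete memoryless channel (this is not the paper's DAB variant, and the binary symmetric channel example is our own).

```python
# Classic Blahut-Arimoto: given a discrete channel W[y|x], alternate between
# the output distribution induced by the current input and a multiplicative
# update of the input distribution until the capacity bounds meet.
import numpy as np

def blahut_arimoto(W, tol=1e-9, max_iter=10_000):
    """W: channel matrix of shape (X, Y) with rows summing to 1.
    Returns (capacity in bits, capacity-achieving input distribution)."""
    X, _ = W.shape
    p = np.full(X, 1.0 / X)                       # start from a uniform input
    for _ in range(max_iter):
        q = p @ W                                 # induced output distribution
        # D(W(.|x) || q) for every input letter x (0 log 0 treated as 0).
        with np.errstate(divide="ignore", invalid="ignore"):
            kl = np.where(W > 0, W * np.log(W / q), 0.0).sum(axis=1)
        c = np.exp(kl)
        lower, upper = np.log(p @ c), np.log(c.max())
        if upper - lower < tol:                   # capacity bracketed tightly
            break
        p = p * c / (p @ c)                       # multiplicative update
    return lower / np.log(2), p

# Example: a binary symmetric channel with crossover 0.1; the capacity should
# be 1 - H(0.1), about 0.531 bits, achieved by the uniform input.
W = np.array([[0.9, 0.1], [0.1, 0.9]])
print(blahut_arimoto(W))
```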
The design of symbol detectors in digital communication systems has traditionally relied on statistical channel models that describe the relation between the transmitted symbols and the observed signal at the receiver. Here we review a data-driven framework for symbol detector design which combines machine learning (ML) and model-based algorithms. In this hybrid approach, well-known channel-model-based algorithms such as the Viterbi method, BCJR detection, and multiple-input multiple-output (MIMO) soft interference cancellation (SIC) are augmented with ML-based modules to remove their dependence on a channel model, allowing the receiver to learn to implement these algorithms solely from data. The resulting data-driven receivers are best suited to systems where the underlying channel models are poorly understood, highly complex, or do not capture the underlying physics well. Our approach is unique in that it replaces only the channel-model-based computations with dedicated neural networks that can be trained from a small amount of data, while keeping the general algorithm intact. Our results demonstrate that these techniques can approach the optimal performance of model-based algorithms without knowledge of the exact channel input-output statistical relationship, and in the presence of channel state information uncertainty.
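As a concrete illustration of this hybrid design for the Viterbi case, the sketch below keeps the dynamic-programming recursion intact and treats the branch metric as a plug-in; in the data-driven receiver that metric would be produced by a small DNN trained from data, while the stand-in function and toy two-state setup here are our own assumptions.

```python
# Viterbi with a pluggable branch metric: the recursion is the standard
# channel-model-based algorithm; only the metric -log p(y_t | transition)
# is swapped for a learned module.
import numpy as np

def viterbi(costs):
    """costs: shape (T, S, S); costs[t, i, j] = branch metric for moving from
    state i to state j at step t. Returns the minimum-cost state sequence."""
    T, S, _ = costs.shape
    path_cost = np.zeros(S)
    backptr = np.zeros((T, S), dtype=int)
    for t in range(T):
        total = path_cost[:, None] + costs[t]     # cost of every (i, j) branch
        backptr[t] = total.argmin(axis=0)
        path_cost = total.min(axis=0)
    states = [int(path_cost.argmin())]
    for t in range(T - 1, 0, -1):                 # trace the survivor path back
        states.append(int(backptr[t, states[-1]]))
    return states[::-1]

# Stand-in for the learned metric: maps an observation to an S x S table of
# negative log-likelihoods; a trained DNN would replace this formula.
def learned_metric(y_t, levels=np.array([[-1.0, 1.0], [-1.0, 1.0]])):
    return 0.5 * (y_t - levels) ** 2

obs = np.array([-0.9, 1.1, 1.0, -1.2])
print(viterbi(np.stack([learned_metric(y) for y in obs])))
```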
Nano-scale communications require cooperation and simultaneous communication between nano devices. To this end, in this paper we investigate two-way (a.k.a. bi-directional) molecular communication between nano devices. If different types of molecules are used for the two communication links, the two-way system eliminates the need to consider self-interference. In many systems, however, it is not feasible to use a different type of molecule for each link. Thus, we propose a two-way molecular communication system that uses a single type of molecule. We develop a channel model for this system and use it to analyze the proposed system's bit error rate, throughput, and self-interference. Moreover, we propose analog and digital self-interference cancellation techniques. The link-level performance gains of these techniques are confirmed with both numerical and analytical results.
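As a rough illustration of the digital cancellation idea (a generic sketch of ours, not the paper's specific technique), the receiver can use knowledge of its own transmitted waveform to estimate the self-interference channel by least squares and subtract the reconstructed echo before detection.

```python
# Digital self-interference cancellation sketch: estimate the SI channel from
# the known transmit signal, reconstruct the echo, subtract it from rx.
import numpy as np

rng = np.random.default_rng(0)
L = 3                                        # assumed self-interference memory
own_tx = rng.choice([-1.0, 1.0], size=200)   # node's own (known) transmission
desired = rng.choice([-1.0, 1.0], size=200)  # partner's signal, to be recovered
h_si = np.array([0.8, 0.3, 0.1])             # SI channel, unknown to the receiver
rx = desired + np.convolve(own_tx, h_si)[:200] + 0.05 * rng.standard_normal(200)

# Build the convolution matrix of the known transmit signal (delayed copies).
A = np.column_stack(
    [np.concatenate([np.zeros(k), own_tx[: 200 - k]]) for k in range(L)]
)
h_hat, *_ = np.linalg.lstsq(A, rx, rcond=None)  # least-squares SI estimate
cleaned = rx - A @ h_hat                         # subtract reconstructed echo

# Residual error with respect to the desired signal, before vs. after SIC.
print(np.mean((rx - desired) ** 2), np.mean((cleaned - desired) ** 2))
```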