
Measure Concentration on the OFDM-based Random Access Channel

Published by Gerhard Wunder
Publication date: 2021
Research language: English





It is well known that compressed sensing (CS) can boost massive random access protocols. Usually, such protocols operate in some overloaded regime where the sparsity can be exploited. In this paper, we consider a different approach: we take an orthogonal FFT basis, subdivide its image into appropriate sub-channels, and let each sub-channel carry only a fraction of the load. To show that this approach can actually achieve the full capacity, we i) provide new concentration inequalities and ii) devise a sparsity capture effect, i.e., the sub-division can be driven such that the activity in each sub-channel is sparse by design. We show by simulations that the system is scalable, resulting in a roughly 30-fold capacity increase.
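As a rough illustration of the sub-channelization idea, the following NumPy sketch (all parameter values are illustrative assumptions, not taken from the paper) partitions an orthonormal FFT basis into disjoint sub-channels and shows that randomly assigned user activity remains sparse within each sub-channel, which is the intuition behind the sparsity capture effect.

```python
import numpy as np

# Illustrative parameters (assumed): FFT size, number of sub-channels,
# number of simultaneously active users.
N, G, K = 1024, 16, 128

# Orthonormal N-point DFT basis; its columns are split into G disjoint
# sub-channels of N // G tones each.
F = np.fft.fft(np.eye(N)) / np.sqrt(N)
subchannels = np.split(np.arange(N), G)

# Each active user is assigned one sub-channel uniformly at random, so the
# per-sub-channel activity stays sparse even under heavy total load.
rng = np.random.default_rng(0)
assignment = rng.integers(0, G, size=K)
load = np.bincount(assignment, minlength=G)
print("tones per sub-channel:", N // G)
print("mean / max active users per sub-channel:", load.mean(), load.max())

# The sensing block of sub-channel 0 keeps only that sub-channel's columns
# and stays orthonormal.
A0 = F[:, subchannels[0]]
print("orthonormal block:", np.allclose(A0.conj().T @ A0, np.eye(N // G)))
```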




Read also

Motivated by applications in unsourced random access, this paper develops a novel scheme for the problem of compressed sensing of binary signals. In this problem, the goal is to design a sensing matrix $A$ and a recovery algorithm such that the sparse binary vector $\mathbf{x}$ can be recovered reliably from the measurements $\mathbf{y}=A\mathbf{x}+\sigma\mathbf{z}$, where $\mathbf{z}$ is additive white Gaussian noise. We propose to design $A$ as a parity check matrix of a low-density parity-check (LDPC) code, and to recover $\mathbf{x}$ from the measurements $\mathbf{y}$ using a Markov chain Monte Carlo algorithm, which runs relatively fast due to the sparse structure of $A$. The performance of our scheme is comparable to state-of-the-art schemes, which use dense sensing matrices, while enjoying the advantages of using a sparse sensing matrix.
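A toy sketch of the measurement model described above (illustrative sizes; a fixed-column-weight binary matrix stands in for an actual LDPC parity-check matrix, and the Markov chain Monte Carlo recovery step is omitted):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, col_weight, k, sigma = 128, 512, 3, 10, 0.1   # illustrative values

# Sparse binary sensing matrix: each column has `col_weight` ones.
A = np.zeros((m, n))
for j in range(n):
    A[rng.choice(m, size=col_weight, replace=False), j] = 1.0

# k-sparse binary signal and AWGN measurements y = A x + sigma * z.
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = 1.0
y = A @ x + sigma * rng.standard_normal(m)
print("nonzeros per column:", int(A.sum(axis=0).mean()), "| measurements:", y.shape)
```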
In this paper, we propose a frequency-time division network (FreqTimeNet) to improve the performance of deep learning (DL) based OFDM channel estimation. FreqTimeNet is designed based on the orthogonality between the frequency domain and the time domain. In FreqTimeNet, the input is processed by parallel frequency blocks and parallel time blocks in sequence. Introducing an attention mechanism to use the SNR information, an attention-based FreqTimeNet (AttenFreqTimeNet) is proposed. Using 3rd Generation Partnership Project (3GPP) channel models, the mean square error (MSE) performance of FreqTimeNet and AttenFreqTimeNet under different scenarios is evaluated. A method for constructing mixed training data is proposed, which could address the generalization problem in DL. It is observed that AttenFreqTimeNet outperforms FreqTimeNet, and FreqTimeNet outperforms other DL networks, with acceptable complexity.
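A minimal stand-in for the frequency-then-time processing order (random linear maps replace the paper's learned blocks; grid shapes are assumed): frequency blocks act on each OFDM symbol independently, then time blocks act on each subcarrier independently.

```python
import numpy as np

rng = np.random.default_rng(3)

# Assumed resource-grid size: 72 subcarriers x 14 OFDM symbols (real-valued toy).
n_sc, n_sym = 72, 14
H_ls = rng.standard_normal((n_sc, n_sym))      # noisy least-squares channel estimate

# Stand-ins for the learned blocks: one map along the frequency axis,
# one along the time axis.
W_freq = rng.standard_normal((n_sc, n_sc)) * 0.1
W_time = rng.standard_normal((n_sym, n_sym)) * 0.1

# Frequency blocks process every OFDM symbol (column) in parallel ...
H_freq = W_freq @ H_ls
# ... then time blocks process every subcarrier (row) in parallel.
H_hat = H_freq @ W_time
print(H_hat.shape)                             # (72, 14): estimate on the same grid
```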
Grant-free random access is a promising protocol to support massive access in beyond fifth-generation (B5G) cellular Internet-of-Things (IoT) with sporadic traffic. Specifically, in each coherence interval, the base station (BS) performs joint activity detection and channel estimation (JADCE) before data transmission. Due to the deployment of a large-scale antenna array and the existence of a huge number of IoT devices, JADCE usually has high computational complexity and needs long pilot sequences. To solve these challenges, this paper proposes a dimension reduction method, which projects the original device state matrix to a low-dimensional space by exploiting its sparse and low-rank structure. Then, we develop an optimized design framework with a coupled full column rank constraint for JADCE to reduce the size of the search space. However, the resulting problem is non-convex and highly intractable, for which the conventional convex relaxation approaches are inapplicable. To this end, we propose a logarithmic smoothing method for the non-smooth objective function and transform the matrix of interest into a positive semidefinite matrix, followed by a Riemannian trust-region algorithm to solve the problem in the complex field. Simulation results show that the proposed algorithm is efficient for large-scale JADCE problems and requires shorter pilot sequences than state-of-the-art algorithms that only exploit the sparsity of the device state matrix.
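The dimension-reduction argument can be illustrated with a toy device state matrix (sizes are assumed): with only K active devices out of N, the matrix is row-sparse and its rank is at most K, so a factored representation has far fewer unknowns than the full matrix.

```python
import numpy as np

rng = np.random.default_rng(4)
N, M, K = 200, 32, 8                     # devices, BS antennas, active devices (assumed)

# Device state matrix: zero rows for inactive devices, channel rows for active ones.
X = np.zeros((N, M))
active = rng.choice(N, size=K, replace=False)
X[active] = rng.standard_normal((K, M))

print("rank(X) =", np.linalg.matrix_rank(X), "<= K =", K)
print("unknowns, full matrix:", N * M)
print("unknowns, rank-K factorization X = P @ Q:", N * K + K * M)
```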
Large communication networks, e.g. Internet of Things (IoT), are known to be vulnerable to co-channel interference. One possibility to address this issue is the use of orthogonal multiple access (OMA) techniques. However, due to a potentially very long duty cycle, OMA is not well suited for such schemes. Instead, random medium access (RMA) appears more promising. An RMA scheme is based on transmission of short data packets with random scheduling, which is typically unknown to the receiver. The received signal, which consists of the overlapping packets, can be used for energy harvesting and powering of a relay device. Such an energy harvesting relay may utilize the energy for further information processing and uplink transmission. In this paper, we address the design of a simultaneous information and power transfer scheme based on randomly scheduled packet transmissions and reliable symbol detection. We formulate a prediction problem with the goal to maximize the harvested power for an RMA scenario. In order to solve this problem, we propose a new prediction method, which shows a significant performance improvement compared to the straightforward baseline scheme. Furthermore, we investigate the complexity of the proposed method and its vulnerability to imperfect channel state information.
This paper analyzes the impact of non-Gaussian multipath component (MPC) amplitude distributions on the performance of Compressed Sensing (CS) channel estimators for OFDM systems. The number of dominant MPCs that any CS algorithm needs to estimate in order to accurately represent the channel is characterized. This number relates to a Compressibility Index (CI) of the channel that depends on the fourth moment of the MPC amplitude distribution. A connection between the Mean Squared Error (MSE) of any CS estimation algorithm and the fourth moment of the MPC amplitude distribution is revealed, showing that fewer MPCs are needed to accurately estimate the channel when these components have amplitude gains with a large fourth moment. The analytical results are validated via simulations for channels with lognormal MPCs such as the NYU mmWave channel model. These simulations show that when the MPC amplitude distribution has a high fourth moment, the well-known CS algorithm of Orthogonal Matching Pursuit performs almost identically to the Basis Pursuit De-Noising algorithm at a much lower computational cost.
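The role of the fourth moment can be checked numerically (illustrative sketch, not the paper's exact definitions): a heavier-tailed amplitude distribution yields a larger normalized fourth moment, and a larger fraction of the channel energy is then captured by a few dominant MPCs.

```python
import numpy as np

rng = np.random.default_rng(2)
L = 64                                          # number of MPCs (assumed)
amplitudes = {
    "lognormal": rng.lognormal(mean=0.0, sigma=1.0, size=L),
    "rayleigh": rng.rayleigh(scale=1.0, size=L),
}

def normalized_fourth_moment(a):
    """E[a^4] / (E[a^2])^2 -- larger values indicate a more compressible channel."""
    return np.mean(a**4) / np.mean(a**2) ** 2

def top_k_energy(a, k):
    """Fraction of total energy carried by the k strongest MPCs."""
    p = np.sort(a**2)[::-1]
    return p[:k].sum() / p.sum()

for name, a in amplitudes.items():
    print(f"{name}: fourth-moment ratio = {normalized_fourth_moment(a):.2f}, "
          f"top-8 energy fraction = {top_k_energy(a, 8):.2f}")
```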