
Quantum Learning Based Nonrandom Superimposed Coding for Secure Wireless Access in 5G URLLC

Added by Dongyang Xu
Publication date: 2021
Language: English





Secure wireless access in ultra-reliable low-latency communications (URLLC), a critical aspect of 5G security, has become increasingly important due to its potential support of grant-free configuration. In grant-free URLLC, precise allocation of different pilot resources to different users sharing the same time-frequency resource is essential for the next generation NodeB (gNB) to exactly identify those users under access collision and to maintain the precise channel estimation required for reliable data transmission. However, this process easily suffers from attacks on pilots. In this paper, we propose a quantum learning based nonrandom superimposed coding method to encode and decode pilots on multidimensional resources, such that the uncertainty of attacks can be learned quickly and eliminated precisely. In particular, multiuser pilots for uplink access are encoded as distinguishable subcarrier activation patterns (SAPs), and the gNB decodes pilots of interest from observed SAPs, a superposition of SAPs from access users, by joint design of attack mode detection and user activity detection through a quantum learning network (QLN). We find that the uncertainty lies in the attacker's identification of codeword digits, a process that can always be modelled as a black box and resolved by a quantum learning algorithm and quantum circuit. Novel analytical closed-form expressions of failure probability are derived to characterize the reliability of this URLLC system with short packet transmission. Simulations show that our method can achieve ultra-high reliability and low latency despite attacks on pilots.
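The superimposed-coding idea can be sketched without the quantum-learning component: each user's pilot is a binary subcarrier activation pattern (SAP), the gNB observes the bitwise OR of the SAPs of all active users, and a user is declared active if its SAP is covered by the observation. The codebook below is a toy assumption, and the paper's quantum learning based attack detection is not modelled.

```python
# Sketch of nonrandom superimposed coding over subcarrier activation
# patterns (SAPs). The codebook and cover-decoding rule are illustrative
# assumptions; attack detection via the QLN is not modelled here.

def encode_superposition(active_users, codebook):
    """gNB's observation: bitwise OR of the SAPs of all active users."""
    n = len(next(iter(codebook.values())))
    observed = [0] * n
    for u in active_users:
        observed = [o | b for o, b in zip(observed, codebook[u])]
    return observed

def decode_active_users(observed, codebook):
    """Cover decoding: a user is declared active iff every activated
    subcarrier in its SAP also appears in the observed superposition."""
    return sorted(u for u, cw in codebook.items()
                  if all(o >= b for o, b in zip(observed, cw)))

# Toy codebook: 4 users, SAPs over 6 subcarriers (assumed values)
codebook = {
    "u1": [1, 1, 0, 0, 0, 0],
    "u2": [0, 0, 1, 1, 0, 0],
    "u3": [0, 0, 0, 0, 1, 1],
    "u4": [1, 0, 1, 0, 1, 0],
}

observed = encode_superposition(["u1", "u2"], codebook)
print(decode_active_users(observed, codebook))  # ['u1', 'u2']
```

For such cover decoding to be unambiguous, the codebook must be cover-free: no user's SAP may be contained in the union of the others', which is why nonrandom (structured) code construction matters.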




In a frequency division duplex (FDD) massive multiple input multiple output (MIMO) system, channel state information (CSI) feedback occupies significant bandwidth. To save uplink bandwidth resources, a 1-bit compressed sensing (CS)-based CSI feedback method assisted by superimposed coding (SC) is proposed. Using 1-bit CS and SC techniques, the compressed support-set information and downlink CSI (DL-CSI) are superimposed on the uplink user data sequence (UL-US) and fed back to the base station (BS). Compared with SC-based feedback, analysis and simulation results show that the UL-US's bit error ratio (BER) and the DL-CSI's accuracy can both be improved by the proposed method, without using dedicated uplink bandwidth resources to feed the DL-CSI back to the BS.
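The core mechanism can be sketched in a few lines: take 1-bit CS measurements of a sparse channel, then superimpose them on the uplink data with a power split. The matrices, dimensions, and power split `rho` below are toy assumptions, and the noiseless recovery step only illustrates the superposition, not the paper's full 1-bit CS reconstruction.

```python
# Illustrative sketch: 1-bit CS measurements of sparse DL-CSI superimposed
# on BPSK uplink data. All dimensions and the power split are assumed.
import numpy as np

rng = np.random.default_rng(0)

N, M = 32, 16                       # CSI length, number of 1-bit measurements
h = np.zeros(N)
h[[3, 11, 20]] = rng.standard_normal(3)        # sparse downlink CSI
Phi = rng.standard_normal((M, N)) / np.sqrt(M)  # CS measurement matrix

b = np.sign(Phi @ h)                # 1-bit CS measurements of the CSI
d = rng.choice([-1.0, 1.0], size=M)  # uplink user data (BPSK)

rho = 0.2                           # power proportional coefficient (assumed)
x = np.sqrt(rho) * b + np.sqrt(1 - rho) * d    # superimposed uplink signal

# At the BS, once d is known (e.g. after detection), the CSI part follows:
b_hat = np.sign((x - np.sqrt(1 - rho) * d) / np.sqrt(rho))
print(np.array_equal(b_hat, b))     # True in this noiseless sketch
```

The power split `rho` trades UL-US detection quality against CSI feedback quality, which is why the BER/accuracy comparison in the abstract is the key figure of merit.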
Massive multiple-input multiple-output (MIMO) with frequency division duplex (FDD) mode is a promising approach to increasing system capacity and link robustness for fifth generation (5G) wireless cellular systems. The premise of these advantages is accurate downlink channel state information (CSI) fed back from the user equipment. However, conventional feedback methods have difficulty reducing feedback overhead due to the large number of base station (BS) antennas in massive MIMO systems. Recently, deep learning (DL)-based CSI feedback has overcome many of these difficulties, yet it still falls short of reducing the occupation of uplink bandwidth resources. In this paper, to solve this issue, we combine DL and superimposed coding (SC) for CSI feedback, in which the downlink CSI is spread and then superimposed on uplink user data sequences (UL-US) toward the BS. Then, a multi-task neural network (NN) architecture is proposed at the BS to recover the downlink CSI and UL-US by unfolding two iterations of minimum mean-squared error (MMSE) criterion-based interference reduction. In addition, for network training, a subnet-by-subnet approach is exploited to facilitate parameter tuning and expedite the convergence rate. Compared with the standalone SC-based CSI scheme, our multi-task NN, trained at a specific signal-to-noise ratio (SNR) and power proportional coefficient (PPC), consistently improves the estimation of downlink CSI with similar or better UL-US detection as the SNR and PPC vary.
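A single unfolded round of the MMSE-based interference reduction can be sketched on a scalar superposition model: first MMSE-detect the data while treating the CSI part as interference, then cancel the detected data and MMSE-estimate the CSI part. The gains, noise level, and BPSK signals below are toy assumptions, and the learned NN refinement subnets are omitted.

```python
# Minimal sketch of one unfolded MMSE interference-reduction round for
# separating superimposed CSI and uplink data. Powers and noise are assumed;
# the paper's multi-task NN subnets are not modelled.
import numpy as np

rng = np.random.default_rng(1)
M = 64
rho, sigma2 = 0.3, 0.001
c = rng.choice([-1.0, 1.0], size=M)   # spread DL-CSI chips
d = rng.choice([-1.0, 1.0], size=M)   # uplink user data (BPSK)
x = (np.sqrt(rho) * c + np.sqrt(1 - rho) * d
     + np.sqrt(sigma2) * rng.standard_normal(M))

a, b = np.sqrt(rho), np.sqrt(1 - rho)

# Step 1: MMSE estimate of the data, treating the CSI part as interference
d_hat = np.sign(b / (b**2 + a**2 + sigma2) * x)

# Step 2: cancel the detected data, then MMSE-estimate the CSI part
r = x - b * d_hat
c_hat = np.sign(a / (a**2 + sigma2) * r)

print(np.mean(d_hat == d), np.mean(c_hat == c))
```

In the paper this cancel-and-estimate structure is unfolded twice and each stage is followed by a trainable subnet, which is what the subnet-by-subnet training procedure tunes.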
5G New Radio (NR) is expected to support a new ultra-reliable low-latency communication (URLLC) service targeting small-packet transmissions with very stringent latency and reliability requirements. The current Long Term Evolution (LTE) system has been designed around grant-based (GB) (i.e., dynamic grant) random access, which can hardly support the URLLC requirements. Grant-free (GF) (i.e., configured grant) access is proposed as a feasible and promising technology to meet such requirements, especially for uplink transmissions, since it saves the time spent requesting and waiting for a grant. While some basic GF access features have been proposed and standardized in NR Release-15, there is still much room for improvement. Three GF access schemes with Hybrid Automatic Repeat reQuest (HARQ) retransmissions (Reactive, K-repetition, and Proactive), proposed as 3GPP study items, are analyzed in this paper. Specifically, we present a spatiotemporal analytical framework for contention-based GF access analysis. Based on this framework, we define the latent access failure probability to characterize URLLC reliability and latency performance. We propose a tractable approach to derive and analyze the latent access failure probability of the typical UE under the three GF HARQ schemes. Our results show that under shorter latency constraints, the Proactive scheme provides the lowest latent access failure probability, whereas under longer latency constraints, the K-repetition scheme achieves the lowest latent access failure probability, which depends on K. If K is overestimated, the Proactive scheme provides a lower latent access failure probability than the K-repetition scheme.
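The latency/reliability trade-off between HARQ schemes can be illustrated with a deliberately simplified model: assume each transmission occasion succeeds independently with probability p, feedback takes f slots, and the latency budget is L slots. This omits the Proactive scheme and the paper's spatiotemporal interference coupling, so it only reproduces the qualitative trend, not the reported crossover results.

```python
# Simplified, assumed model of latent access failure probability for the
# Reactive and K-repetition grant-free HARQ schemes. Each occasion succeeds
# i.i.d. with probability p; feedback takes f slots; budget is L slots.

def fail_reactive(p, L, f=4):
    attempts = L // (1 + f)        # one transmission per feedback round
    return (1 - p) ** attempts

def fail_k_repetition(p, L, K, f=4):
    rounds = L // (K + f)          # K back-to-back repetitions per round
    return ((1 - p) ** K) ** rounds

p = 0.5
for L in (10, 40):
    print(L, fail_reactive(p, L), fail_k_repetition(p, L, K=4))
```

Even this crude model shows why K matters: K repetitions boost per-round reliability but consume slots that would otherwise allow more feedback-driven retransmission rounds within the budget.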
In fifth-generation (5G) networks and beyond, communication latency and network bandwidth will no longer be bottlenecks for mobile users. Thus, almost every mobile device can participate in distributed learning, eliminating its availability issue. However, model safety becomes a challenge, because distributed learning systems are vulnerable to Byzantine attacks during the stages of updating model parameters and aggregating gradients amongst multiple learning participants. Therefore, to provide Byzantine resilience for distributed learning in the 5G era, this article proposes a secure computing framework based on the sharding technique of blockchain, namely PIRATE. A case study shows how the proposed PIRATE contributes to distributed learning. Finally, we also envision some open issues and challenges for the proposed Byzantine-resilient learning framework.
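The abstract does not detail PIRATE's aggregation rule, but the attack it defends against can be illustrated with a standard Byzantine-resilient aggregator: a coordinate-wise median tolerates a minority of arbitrarily corrupted workers, whereas a plain mean is destroyed by a single one. This generic example is not PIRATE itself.

```python
# Generic illustration of Byzantine-resilient gradient aggregation
# (coordinate-wise median). This is a standard technique, not PIRATE's
# actual sharding-based protocol.
from statistics import median

def aggregate_median(gradients):
    """Coordinate-wise median over per-worker gradient vectors."""
    return [median(coord) for coord in zip(*gradients)]

honest = [[0.9, -1.1], [1.0, -1.0], [1.1, -0.9]]
byzantine = [[1e6, -1e6]]          # one attacker sends arbitrary garbage
agg = aggregate_median(honest + byzantine)
print(agg)                          # stays close to the honest gradients
```

A mean over the same inputs would be dominated by the attacker's 1e6 entries, which is exactly the parameter-update-stage vulnerability the abstract describes.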
We introduce the concept of using unmanned aerial vehicles (UAVs) as drone base stations for in-band Integrated Access and Backhaul (IB-IAB) scenarios in 5G networks. We first present a system model for forward link transmissions in an IB-IAB multi-tier drone cellular network. We then investigate the key challenges of this scenario and propose a framework that uses the flying capability of the UAVs as the main degree of freedom to find the optimal precoder design for the backhaul links, user-base station association, UAV 3D hovering locations, and power allocations. We discuss how the proposed algorithm can be used to optimize network performance at both large and small scales. Finally, we use an exhaustive search-based solution to demonstrate the performance gains achievable by the presented algorithm in terms of the received signal-to-interference-plus-noise ratio (SINR) and overall network sum-rate.
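The exhaustive-search baseline can be sketched as a grid search over candidate 3D hover positions scoring each by user sum-rate. The path-loss model, grid, transmit power, and user layout below are all toy assumptions standing in for the paper's joint precoder/association/power optimization.

```python
# Toy exhaustive search over candidate UAV hover positions maximizing the
# users' sum-rate. Path-loss model, grid, and user layout are assumptions.
import itertools
import math

users = [(0.0, 0.0), (100.0, 0.0), (50.0, 80.0)]   # user ground positions (m)

def sum_rate(uav, users, p_tx=1.0, noise=1e-9):
    x, y, z = uav
    rate = 0.0
    for ux, uy in users:
        d2 = (x - ux) ** 2 + (y - uy) ** 2 + z ** 2
        snr = p_tx / (d2 * noise)      # free-space-like 1/d^2 path loss
        rate += math.log2(1 + snr)
    return rate

grid = itertools.product(range(0, 101, 25), range(0, 101, 25), (50, 100, 150))
best = max(grid, key=lambda uav: sum_rate(uav, users))
print(best, round(sum_rate(best, users), 2))
```

With this simple model the search settles near the users' centroid at the lowest allowed altitude; adding backhaul SINR constraints, as the paper does, pulls the optimum toward the donor base station.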
