
Time Synchronization in 5G Wireless Edge: Requirements and Solutions for Critical-MTC

Added by Aamir Mahmood
Publication date: 2019
Language: English





Wireless edge is about distributing intelligence to wireless devices, wherein distributing an accurate time reference is essential for time-critical machine-type communication (cMTC). In 5G-based cMTC, enabling time synchronization at the wireless edge means moving beyond the current synchronization needs and solutions in 5G radio access. In this article, we analyze the device-level synchronization needs of potential cMTC applications: industrial automation, power distribution, vehicular communication, and live audio/video production. We present an over-the-air (OTA) synchronization scheme built on 5G air-interface parameters and discuss their associated timing errors. We evaluate the error in estimating the device-to-base-station propagation delay from the timing advance (TA) under random errors and show how to reduce it. Finally, we identify the random errors specific to dense multipath fading environments and discuss countermeasures.
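As a rough numerical companion to the TA discussion, the sketch below quantizes noisy round-trip-time readings to the NR TA command granularity of 16·64·Tc/2^mu and shows how averaging several TA samples shrinks the random estimation error. The numerology, error statistics, and sample counts are illustrative assumptions, not values from the article.

```python
# Sketch: estimating device-to-base-station propagation delay from 5G NR
# timing advance (TA) readings, and reducing the random error by averaging.
# Assumptions (not from the article): numerology mu, the Gaussian model
# for the per-sample detection error, and the sample counts below.

import numpy as np

TC = 1.0 / (480e3 * 4096)          # NR basic time unit Tc in seconds

def ta_step(mu: int) -> float:
    """TA command granularity for numerology mu: 16 * 64 * Tc / 2^mu."""
    return 16 * 64 * TC / 2 ** mu

def estimate_delay(true_delay_s, mu=1, n_samples=1, err_std_s=100e-9, rng=None):
    """Average n_samples quantized TA readings; return a delay estimate (s).

    Each reading carries the round-trip time 2*true_delay plus a zero-mean
    random error, quantized to the TA step; the noise dithers the
    quantizer, so averaging reduces both error sources."""
    if rng is None:
        rng = np.random.default_rng()
    step = ta_step(mu)
    rtt = 2 * true_delay_s + rng.normal(0.0, err_std_s, n_samples)
    ta = np.round(rtt / step) * step          # TA quantization
    return ta.mean() / 2                      # one-way propagation delay

rng = np.random.default_rng(0)
true_delay = 1e-6                             # 1 us, i.e. ~300 m range
for n in (1, 16, 64):
    est = np.array([estimate_delay(true_delay, n_samples=n, rng=rng)
                    for _ in range(500)])
    rmse = np.sqrt(np.mean((est - true_delay) ** 2))
    print(f"n={n:3d}  RMSE = {rmse * 1e9:6.1f} ns")
```

Increasing the number of averaged TA samples visibly lowers the RMSE, which is the averaging countermeasure the article analyzes for random TA errors.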

Related research


Mobile-edge computing (MEC) and wireless power transfer are technologies that can assist the implementation of next-generation wireless networks, which will deploy a large number of computation- and energy-limited devices. In this letter, we consider a point-to-point MEC system, where the device harvests energy from the access point's (AP's) transmitted signal to power the offloading and/or the local computation of a task. By taking into account the non-linearities of energy harvesting, we provide analytical expressions for the probability of successful computation and for the average number of successfully computed bits. Our results show that a hybrid scheme of partial offloading and local computation is not always efficient. In particular, the decision to offload and/or compute locally depends on the system's parameters, such as the distance to the AP and the number of bits that need to be computed.
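As a toy illustration of that offload-or-compute-locally trade-off (not the letter's analytical model), the sketch below feeds a logistic non-linear energy-harvesting curve into a simple energy budget; all parameter values (a, b, p_sat, kappa, CPU frequency, transmit rate) are assumptions.

```python
# Illustrative sketch: harvested energy under a logistic non-linear EH
# model, and a simple offload-vs-local feasibility check. Every numeric
# parameter here is an assumption for illustration only.

import math

def harvested_power(p_in, p_sat=0.02, a=150.0, b=0.014):
    """Logistic (saturating) non-linear energy-harvesting model."""
    omega = 1.0 / (1.0 + math.exp(a * b))
    logistic = p_sat / (1.0 + math.exp(-a * (p_in - b)))
    return max((logistic - p_sat * omega) / (1.0 - omega), 0.0)

def local_energy(bits, cycles_per_bit=1e3, f_cpu=1e9, kappa=1e-28):
    """Energy for local computation: kappa * f^2 joules per CPU cycle."""
    return kappa * f_cpu ** 2 * cycles_per_bit * bits

def offload_energy(bits, p_tx=0.1, rate_bps=1e6):
    """Energy to transmit the task uplink at a fixed rate."""
    return p_tx * bits / rate_bps

bits = 2e4
budget = harvested_power(p_in=0.05) * 1.0     # harvest for 1 s
for name, cost in (("local", local_energy(bits)),
                   ("offload", offload_energy(bits))):
    verdict = "feasible" if cost <= budget else "infeasible"
    print(f"{name:7s}: {cost*1e3:.3f} mJ vs {budget*1e3:.3f} mJ -> {verdict}")
```

Varying the distance-dependent input power or the task size tips the balance between the two options, which is the qualitative behaviour the letter reports.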
In this paper, a novel intelligent reflecting surface (IRS)-assisted wireless powered communication network (WPCN) architecture is proposed for low-power Internet-of-Things (IoT) devices, where the IRS is exploited to improve the performance of the WPCN under imperfect channel state information (CSI). We formulate a hybrid access point (HAP) transmission-energy minimization problem through a joint design of time allocation, HAP energy beamforming, receive beamforming, user transmit power allocation, and the IRS energy and information reflection coefficients, under imperfect CSI and a non-linear energy-harvesting model. Due to the strong coupling of the optimization variables, the problem is non-convex and difficult to solve directly. To tackle it, alternating optimization (AO) is applied to decouple the variables: the time allocation, HAP energy beamforming, receive beamforming, user transmit power allocation, and the IRS energy and information reflection coefficients are divided into three sub-problems that are solved alternately. Difference-of-convex (DC) programming handles the non-convex rank-one constraints that arise when solving for the IRS reflection coefficients. Numerical simulations verify the effectiveness of the proposed algorithm in reducing HAP transmission energy compared with other benchmarks.
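The alternating-optimization step can be illustrated on a toy bi-convex problem. The sketch below is an assumption for illustration only (it is not the paper's WPCN formulation): it cycles through blockwise closed-form minimizers until the objective stops decreasing, mirroring how the three sub-problems above are solved alternately.

```python
# Minimal sketch of the alternating-optimization (AO) idea: fix all but
# one block of variables, solve the resulting convex sub-problem, and
# cycle until the objective stops improving. The toy objective below is
# bi-convex and symmetric in (x, y); it is purely illustrative.

def objective(x, y):
    return (x * y - 1.0) ** 2 + 0.1 * (x ** 2 + y ** 2)

def block_min(other):
    # Closed-form minimizer of the objective in one variable with the
    # other fixed: d/dx [(x*y-1)^2 + 0.1 x^2] = 0  =>  x = y / (y^2 + 0.1).
    return other / (other ** 2 + 0.1)

x, y = 2.0, 0.5
prev = objective(x, y)
for it in range(100):
    x = block_min(y)          # sub-problem 1: optimize x with y fixed
    y = block_min(x)          # sub-problem 2: same form by symmetry
    cur = objective(x, y)
    if prev - cur < 1e-9:     # stop once the decrease is negligible
        break
    prev = cur
print(f"converged after {it + 1} AO iterations: x={x:.4f}, y={y:.4f}, f={cur:.6f}")
```

Each block update can only decrease the objective, so the sequence converges monotonically, the same property that makes AO attractive for the coupled beamforming and reflection-coefficient design.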
Massive machine-type communications (mMTC) is a crucial scenario for supporting booming Internet of Things (IoT) applications. In mMTC, although a large number of devices are registered to an access point (AP), very few of them are active with uplink short-packet transmission at the same time, which calls for novel protocol and receiver designs to enable efficient data transmission and accurate multi-user detection (MUD). To address this problem, a grant-free non-orthogonal multiple access (GF-NOMA) protocol is proposed. In GF-NOMA, active devices can directly transmit their preambles and data symbols together within one time frame, without a grant from the AP. Compressive sensing (CS)-based receivers are adopted for non-orthogonal preamble (NOP)-based MUD, and successive interference cancellation is exploited to decode the superimposed data signals. In this paper, we model, analyze, and optimize the CS-based GF-NOMA mMTC system via stochastic geometry (SG), from the perspective of network deployment. Based on the SG network model, we first analyze the success probability as well as the channel estimation error of the CS-based MUD in the preamble phase, and then analyze the average aggregate data rate in the data phase. As IoT applications demand low energy consumption, low infrastructure cost, and flexible deployment, we optimize the energy efficiency and AP coverage efficiency of GF-NOMA via numerical methods. The validity of our analysis is verified via Monte Carlo simulations. Simulation results also show that CS-based GF-NOMA with NOP yields better MUD and data-rate performance than contention-based GF-NOMA with orthogonal preambles and than CS-based grant-free orthogonal multiple access.
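A minimal sketch of the compressive-sensing detection idea: with non-orthogonal preambles and only a few active devices, the received signal is a sparse superposition, and a greedy solver can recover the active set. Orthogonal matching pursuit is used here as one standard CS recovery method; the dimensions, channel model, and noise level are assumptions, not the paper's receiver.

```python
# Sketch of CS-based activity detection for grant-free access: N devices
# share M-length non-orthogonal preambles (M < N), K are active, and the
# AP recovers the sparse activity vector greedily via orthogonal
# matching pursuit (OMP). All numbers are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)
N, M, K = 100, 40, 5                      # devices, preamble length, active

A = rng.normal(size=(M, N)) / np.sqrt(M)  # non-orthogonal preamble matrix
active = rng.choice(N, K, replace=False)
x = np.zeros(N)
x[active] = rng.normal(1.0, 0.1, K)       # channel gains of active devices
y = A @ x + 0.01 * rng.normal(size=M)     # noisy superposition at the AP

# OMP: greedily pick the preamble most correlated with the residual,
# then least-squares re-fit on the support chosen so far.
support, residual = [], y.copy()
for _ in range(K):
    support.append(int(np.argmax(np.abs(A.T @ residual))))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    residual = y - A[:, support] @ coef

print("true active :", sorted(active))
print("detected    :", sorted(support))
```

With enough measurements relative to the sparsity level, the detected support matches the true active set, which is the success event whose probability the paper characterizes via stochastic geometry.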
The massive sensing data generated by the Internet of Things will provide fuel for ubiquitous artificial intelligence (AI), automating the operations of our society, ranging from transportation to healthcare. The realistic adoption of this technique, however, entails labelling the enormous data prior to training AI models via supervised learning. To tackle this challenge, we explore a new perspective of wireless crowd labelling that downloads data to many imperfect mobile annotators for repetition labelling by exploiting multicasting in wireless networks. In this cross-disciplinary area, the integration of rate-distortion theory and the principle of repetition labelling for accuracy improvement gives rise to a new tradeoff between radio and annotator resources under a constraint on labelling accuracy. Building on this tradeoff and aiming to maximize the labelling throughput, this work focuses on the joint optimization of encoding rate, annotator clustering, and sub-channel allocation, which results in an NP-hard integer programming problem. To devise an efficient solution approach, we establish an optimal sequential annotator-clustering scheme based on the order of decreasing signal-to-noise ratios; the optimal solution can then be found by an efficient tree search. Next, the solution is simplified by applying truncated channel inversion. Alternatively, the optimization problem can be recognized as a knapsack problem, which can be solved in pseudo-polynomial time by means of dynamic programming. In addition, exact policies are derived for the annotator-constrained and spectrum-constrained cases. Last, simulation results demonstrate significant throughput gains of the optimal solution over decoupled allocation of the two types of resources.
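Since the abstract notes the allocation can be recognized as a knapsack problem solvable by dynamic programming in pseudo-polynomial time, here is a generic 0/1 knapsack DP sketch; the weights and values are hypothetical stand-ins for per-cluster sub-channel demand and labelling throughput, not figures from the paper.

```python
# Standard 0/1 knapsack dynamic program: maximize total value subject to
# a capacity budget, in O(n * capacity) time (pseudo-polynomial). Used
# here as a generic illustration of the paper's knapsack reformulation.

def knapsack(weights, values, capacity):
    """Best achievable value with total weight <= capacity."""
    best = [0] * (capacity + 1)
    for w, v in zip(weights, values):
        # Iterate capacity downwards so each item is taken at most once.
        for c in range(capacity, w - 1, -1):
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

# Hypothetical annotator clusters: sub-channel demand vs throughput gain.
subchannels_needed = [3, 4, 2, 5]
throughput_gain = [10, 14, 6, 18]
print(knapsack(subchannels_needed, throughput_gain, capacity=9))  # -> 32
```

The downward capacity loop is what distinguishes the 0/1 variant from the unbounded one; iterating upwards would allow a cluster to be "allocated" multiple times.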
Motivated by the increasing computational capacity of wireless user equipments (UEs), e.g., smartphones, tablets, or vehicles, as well as increasing concerns about sharing private data, a new machine learning model has emerged, namely federated learning (FL), which allows decoupling data acquisition from computation at the central unit. Unlike centralized learning in a data center, FL usually operates in a wireless edge network where the communication medium is resource-constrained and unreliable. Due to limited bandwidth, only a portion of the UEs can be scheduled for updates at each iteration, and because the wireless medium is shared, transmissions are subject to interference and are not guaranteed. The performance of an FL system in such a setting is not well understood. In this paper, an analytical model is developed to characterize the performance of FL in wireless networks. In particular, tractable expressions are derived for the convergence rate of FL in a wireless setting, accounting for the effects of both scheduling schemes and inter-cell interference. Using the developed analysis, the effectiveness of three different scheduling policies, i.e., random scheduling (RS), round robin (RR), and proportional fair (PF), is compared in terms of FL convergence rate. It is shown that running FL with PF outperforms RS and RR if the network operates under a high signal-to-interference-plus-noise ratio (SINR) threshold, while RR is preferable when the SINR threshold is low. Moreover, the FL convergence rate decreases rapidly as the SINR threshold increases, confirming the importance of compressing and quantizing the update parameters. The analysis also reveals a trade-off between the number of scheduled UEs and the subchannel bandwidth under a fixed amount of available spectrum.
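The scheduling comparison can be mimicked with a toy Monte-Carlo experiment. The sketch below schedules G of K UEs per round under RS, RR, and PF and counts the updates whose SINR clears a threshold; the channel statistics, threshold, and success criterion are assumptions, not the paper's analytical model.

```python
# Toy Monte-Carlo sketch of FL update scheduling: an update survives a
# round only if its instantaneous SINR exceeds a threshold. RS schedules
# at random, RR cycles through UEs, PF favours UEs whose instantaneous
# channel is large relative to their own average. All numbers assumed.

import numpy as np

rng = np.random.default_rng(2)
K, G, rounds, thresh = 20, 5, 2000, 1.0
mean_sinr = rng.uniform(0.5, 4.0, K)        # heterogeneous average SINRs

def success_fraction(policy):
    ok, rr_ptr = 0, 0
    for _ in range(rounds):
        inst = rng.exponential(mean_sinr)   # Rayleigh-fading SINR draws
        if policy == "RS":
            sched = rng.choice(K, G, replace=False)
        elif policy == "RR":
            sched = [(rr_ptr + i) % K for i in range(G)]
            rr_ptr = (rr_ptr + G) % K
        else:                               # PF: top instantaneous/average
            sched = np.argsort(inst / mean_sinr)[-G:]
        ok += int(np.sum(inst[sched] > thresh))
    return ok / (rounds * G)

for p in ("RS", "RR", "PF"):
    print(f"{p}: fraction of successful updates = {success_fraction(p):.3f}")
```

Raising the threshold in this toy model penalizes RS and RR faster than PF, qualitatively consistent with the paper's finding that PF wins in the high-threshold regime.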
