
Ultra-Low Latency (ULL) Networks: The IEEE TSN and IETF DetNet Standards and Related 5G ULL Research

Posted by: Martin Reisslein
Publication date: 2018
Research field: Informatics Engineering
Paper language: English





Many network applications, e.g., industrial control, demand Ultra-Low Latency (ULL). However, traditional packet networks can only reduce the end-to-end latencies to the order of tens of milliseconds. The IEEE 802.1 Time Sensitive Networking (TSN) standard and related research studies have sought to provide link layer support for ULL networking, while the emerging IETF Deterministic Networking (DetNet) standards seek to provide the complementary network layer ULL support. This article provides an up-to-date comprehensive survey of the IEEE TSN and IETF DetNet standards and the related research studies. The survey of these standards and research studies is organized according to the main categories of flow concept, flow synchronization, flow management, flow control, and flow integrity. ULL networking mechanisms play a critical role in the emerging fifth generation (5G) network access chain from wireless devices via access, backhaul, and core networks. We survey the studies that specifically target the support of ULL in 5G networks, with the main categories of fronthaul, backhaul, and network management. Throughout, we identify the pitfalls and limitations of the existing standards and research studies. This survey can thus serve as a basis for the development of standards enhancements and future ULL research studies that address the identified pitfalls and limitations.
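As a concrete illustration of the flow control category, the sketch below mimics the gate-based transmission windows of the IEEE 802.1Qbv time-aware shaper, one of the TSN shaping mechanisms the survey covers. The link rate, cycle length, and window boundaries are illustrative assumptions, not values from the standard.

```python
# Minimal sketch of an IEEE 802.1Qbv-style time-aware shaper gate check.
# All numeric values are illustrative, not taken from the survey.

LINK_RATE_BPS = 1_000_000_000  # 1 Gb/s Ethernet link (assumption)

def transmit_time_us(frame_bytes: int) -> float:
    """Wire time of one frame, including preamble (8 B) and IFG (12 B)."""
    return (frame_bytes + 8 + 12) * 8 / LINK_RATE_BPS * 1e6

# Gate control list: (queue, gate_open_us, gate_close_us) within one cycle.
GCL_CYCLE_US = 100.0
GCL = [
    ("scheduled", 0.0, 30.0),      # window reserved for ULL traffic
    ("best_effort", 30.0, 100.0),  # remainder for other traffic
]

def fits_in_window(queue: str, frame_bytes: int, now_us: float) -> bool:
    """Guard-band check: a frame may start only if it finishes before gate close."""
    t = now_us % GCL_CYCLE_US
    for q, open_us, close_us in GCL:
        if q == queue and open_us <= t and t + transmit_time_us(frame_bytes) <= close_us:
            return True
    return False

print(fits_in_window("scheduled", 1500, 5.0))   # True: 12.16 us frame fits in window
print(fits_in_window("scheduled", 1500, 25.0))  # False: would overrun the gate close
```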




Read also

The IEEE 802.1 time-sensitive networking (TSN) standards aim at improving the real-time capabilities of standard Ethernet. TSN is widely recognized as the long-term replacement of proprietary technologies for industrial control systems. However, wired connectivity alone is not sufficient to meet the requirements of future industrial systems. The fifth-generation (5G) mobile/cellular technology has been designed with native support for ultra-reliable low-latency communication (uRLLC). 5G promises to meet the stringent requirements of industrial systems in the wireless domain. Converged operation of 5G and TSN systems is crucial for achieving end-to-end deterministic connectivity in industrial networks, and accurate time synchronization is key to their integrated operation. To this end, this paper evaluates the performance of the over-the-air time synchronization mechanism proposed in 3GPP Release 16. We analyze the accuracy of time synchronization through the boundary clock approach in the presence of clock drift and different air-interface timing errors related to reference time indication. We also investigate frequency and scalability aspects of over-the-air time synchronization. Our performance evaluation reveals the conditions under which the 1 µs or below requirement for TSN time synchronization can be achieved.
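To make the error budget concrete, the following back-of-envelope sketch (our illustration, not the paper's model) adds up a reference-time indication granularity, an air-interface timing error, and the drift accumulated between sync updates, then checks the total against the 1 µs target. All numeric values are assumptions.

```python
# Back-of-envelope sketch of a 5G-TSN over-the-air sync error budget.
# The error components and values are illustrative assumptions, not
# figures from the paper or from 3GPP Release 16.

def worst_case_sync_error_ns(ref_time_granularity_ns: float,
                             air_interface_error_ns: float,
                             drift_ppm: float,
                             sync_interval_ms: float) -> float:
    """Worst-case device clock error just before the next reference time update."""
    drift_ns = drift_ppm * 1e-6 * sync_interval_ms * 1e6  # ppm drift over interval
    return ref_time_granularity_ns + air_interface_error_ns + drift_ns

# Example: 10 ns granularity, 200 ns air-interface error, 5 ppm clock drift.
for interval_ms in (10, 100, 500):
    err = worst_case_sync_error_ns(10, 200, 5, interval_ms)
    print(f"sync every {interval_ms:>3} ms -> worst case {err:7.0f} ns "
          f"({'meets' if err <= 1000 else 'misses'} the 1 us target)")
```

The toy numbers show the qualitative trade-off the paper studies: with a fixed per-indication error, it is the sync interval versus clock drift that decides whether the 1 µs budget holds.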
In this paper, we propose a transfer learning (TL)-enabled edge-CNN framework for 5G industrial edge networks with privacy-preserving characteristics. In particular, the edge server can use an existing image dataset to train the CNN in advance, which is then fine-tuned based on the limited datasets uploaded from the devices. With the aid of TL, devices that do not participate in the training only need to fine-tune the trained edge-CNN model rather than training from scratch. Due to the energy budget of the devices and the limited communication bandwidth, a joint energy and latency problem is formulated, which is solved by decomposing the original problem into an uploading decision subproblem and a wireless bandwidth allocation subproblem. Experiments using ImageNet demonstrate that the proposed TL-enabled edge-CNN framework can achieve almost 85% of the baseline's prediction accuracy by uploading only about 1% of the model parameters, at an autoencoder compression ratio of 32.
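The sketch below illustrates the device-side fine-tuning step from this abstract: freezing a pretrained backbone and updating only a small head, so that only a small fraction of the parameters is trained (and would need to be uploaded). The architecture and sizes are hypothetical; PyTorch is assumed.

```python
# Minimal transfer-learning sketch: fine-tune only the head of an
# edge-trained CNN instead of training from scratch. Model and layer
# names are hypothetical, not the paper's architecture.
import torch
import torch.nn as nn

class EdgeCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(           # pretrained on the edge server
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(16, num_classes)   # fine-tuned on-device

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = EdgeCNN()
for p in model.features.parameters():
    p.requires_grad = False                      # freeze the pretrained backbone

# Only the head's parameters are updated -- a small fraction of the model.
trainable = [p for p in model.parameters() if p.requires_grad]
opt = torch.optim.SGD(trainable, lr=1e-2)
x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
opt.step()
print(sum(p.numel() for p in trainable), "of",
      sum(p.numel() for p in model.parameters()), "parameters fine-tuned")
```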
Considering a Manhattan mobility model in vehicle-to-vehicle networks, this work studies a power minimization problem subject to second-order statistical constraints on latency and reliability, captured by a network-wide maximal data queue length. We invoke results in extreme value theory to characterize statistics of extreme events in terms of the maximal queue length. Subsequently, leveraging Lyapunov stochastic optimization to deal with network dynamics, we propose two queue-aware power allocation solutions. In contrast with the baseline, our approaches achieve lower mean and variance of the maximal queue length.
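The following toy sketch shows the drift-plus-penalty flavor of queue-aware power allocation that Lyapunov stochastic optimization yields: in each slot, transmit power is chosen to balance a power penalty against queue service. The channel model, power levels, and weight V are illustrative assumptions, not the paper's exact policy.

```python
# Toy Lyapunov drift-plus-penalty power control rule, in the spirit of the
# queue-aware solutions above. All models and values are assumptions.
import math, random

random.seed(1)
V = 50.0                       # power/queue-length trade-off weight
POWERS = [0.0, 0.1, 0.5, 1.0]  # candidate transmit powers (W)
BW, N0 = 1.0, 0.05             # normalized bandwidth and noise

def rate(p: float, h: float) -> float:
    return BW * math.log2(1 + p * h / N0)   # service rate (bits/slot, normalized)

Q = 0.0                                     # data queue length
for slot in range(5):
    h = random.uniform(0.2, 1.0)            # fading channel gain
    arrival = random.uniform(0.5, 1.5)      # bits arriving this slot
    # Drift-plus-penalty: trade power (penalty) against queue service (drift).
    p = min(POWERS, key=lambda p: V * p - Q * rate(p, h))
    Q = max(Q + arrival - rate(p, h), 0.0)
    print(f"slot {slot}: h={h:.2f} p={p:.1f} W  Q={Q:.2f} bits")
```

The rule stays silent while the queue is short and spends power once the backlog (and hence the latency risk) grows, which is the basic mechanism behind keeping the maximal queue length small.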
Common Public Radio Interface (CPRI) is a successful industry cooperation defining the publicly available specification for the key internal interface of radio base stations between the radio equipment control (REC) and the radio equipment (RE) in the fronthaul of mobile networks. However, CPRI is expensive to deploy, consumes large bandwidth, and is currently statically configured. On the other hand, an Ethernet-based mobile fronthaul would be cost-efficient and more easily reconfigurable. Encapsulating CPRI over Ethernet (CoE) is an attractive solution, but stringent CPRI requirements, such as delay and jitter, are major challenges that must be met to make CoE a reality. This study investigates whether CoE can meet delay and jitter requirements through FPGA-based Verilog experiments and simulations. The Verilog experiments show that CoE encapsulation with a fixed Ethernet frame size takes on the order of tens of microseconds. Numerical experiments show that the proposed scheduling policy for CoE flows on Ethernet can reduce jitter when redundant Ethernet capacity is provided. The reduction in jitter can be as large as 1 µs, making Ethernet-based mobile fronthaul a credible technology.
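For a feel of one delay component, the sketch below computes the packetization delay of filling a fixed-size Ethernet payload from a continuous CPRI stream at the standard CPRI line-rate options. The payload size is an assumption, and this deliberately ignores the encapsulation processing time measured in the Verilog experiments.

```python
# Rough sketch of CPRI-over-Ethernet packetization delay: the time to fill
# one fixed-size Ethernet payload from a continuous CPRI stream. The line
# rates are standard CPRI options; the payload size is an assumption.

CPRI_RATES_GBPS = {1: 0.6144, 2: 1.2288, 3: 2.4576, 5: 4.9152, 7: 9.8304}

def packetization_delay_us(cpri_option: int, payload_bytes: int = 1458) -> float:
    """Delay to accumulate one Ethernet payload's worth of CPRI bits."""
    rate_bps = CPRI_RATES_GBPS[cpri_option] * 1e9
    return payload_bytes * 8 / rate_bps * 1e6

for opt in (1, 3, 7):
    print(f"CPRI option {opt}: {packetization_delay_us(opt):6.2f} us per frame")
```

At the lowest option the fill time alone approaches 19 µs, which is consistent with the tens-of-microseconds order reported above; higher line rates shrink it to a few microseconds.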
To overcome device limitations in performing computation-intensive applications, mobile edge computing (MEC) enables users to offload tasks to proximal MEC servers for faster task computation. However, current MEC system designs are based on average-based metrics, which fail to account for the ultra-reliable low-latency requirements of mission-critical applications. To tackle this, this paper proposes a new system design in which probabilistic and statistical constraints are imposed on task queue lengths by applying extreme value theory. The aim is to minimize users' power consumption while trading off the allocated resources for local computation and task offloading. Due to wireless channel dynamics, users are re-associated to MEC servers in order to offload tasks at higher rates or to access proximal servers. In this regard, a user-server association policy is proposed that takes into account the channel quality as well as the servers' computation capabilities and workloads. By marrying tools from Lyapunov optimization and matching theory, a two-timescale mechanism is proposed, where the user-server association is solved on the long timescale while a dynamic task offloading and resource allocation policy is executed on the short timescale. Simulation results corroborate the effectiveness of the proposed approach, which guarantees highly reliable task computation and lower delay compared to several baselines.
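As a loose illustration of the long-timescale association step, the toy sketch below greedily matches users to capacity-limited servers using a utility that rewards good channels and penalizes loaded servers. This greedy rule is a simplified stand-in for the paper's matching-theoretic policy; all names and values are made up.

```python
# Toy sketch of a long-timescale user-server association: greedy
# many-to-one matching weighing channel quality against server load.
# All utilities, capacities, and values are illustrative assumptions.

users = {"u1": {"s1": 0.9, "s2": 0.4},
         "u2": {"s1": 0.8, "s2": 0.7},
         "u3": {"s1": 0.3, "s2": 0.6}}   # channel quality, user -> server
capacity = {"s1": 1, "s2": 2}            # max users per MEC server
load = {"s1": 0.5, "s2": 0.1}            # current normalized server workload

def utility(u: str, s: str) -> float:
    return users[u][s] - load[s]         # prefer good channels, light servers

# Greedy matching: best (user, server) pairs first, respecting capacities.
pairs = sorted(((utility(u, s), u, s) for u in users for s in users[u]),
               reverse=True)
match, slots = {}, dict(capacity)
for _, u, s in pairs:
    if u not in match and slots[s] > 0:
        match[u] = s
        slots[s] -= 1
print(match)  # e.g. {'u2': 's2', 'u3': 's2', 'u1': 's1'}
```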