
A Just-In-Time Networking Framework for Minimizing Request-Response Latency of Wireless Time-Sensitive Applications

Published by: Lihao Zhang
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





This paper puts forth a networking paradigm, referred to as just-in-time (JIT) communication, to support client-server applications with stringent request-response latency requirements. Of interest is not just the round-trip delay of the network, but the actual request-response latency experienced by the application. The JIT framework contains two salient features. At the client side, the communication layer pulls a request from the client just when there is an upcoming transmission opportunity from the network. This ensures that the request contains information that is as fresh as possible (e.g., a sensor reading obtained just before the transmission opportunity). At the server side, the network ascertains that the server, after receiving and processing the request to generate a response (e.g., a control command to be sent to the client), will have a transmission opportunity at just that time. We realize the JIT system, including the protocol stack, over a Time-Division-Multiple-Access (TDMA) network implemented on a System-on-Chip (SoC) platform. We prove that a TDMA network with a power-of-2 number of time slots per superframe is optimal for realizing the server-side JIT function. Our experimental results validate that JIT networks can yield significantly lower request-response latency than networks without JIT support.
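To make the client-side JIT behaviour concrete, here is a minimal sketch of a communication layer that defers sampling the application until just before its next TDMA transmission opportunity, so the transmitted request carries the freshest possible data. All names and timing parameters (TDMA slot length, slot assignment, pull lead time) are illustrative assumptions, not the paper's implementation.

import time

SLOT_DURATION = 0.001      # assumed slot length: 1 ms
SLOTS_PER_SUPERFRAME = 8   # power-of-2 superframe, per the paper's optimality result
MY_SLOT = 3                # slot index assigned to this client (assumption)
PULL_LEAD_TIME = 0.0002    # pull the request 0.2 ms before the slot opens (assumption)

def next_tx_opportunity(now: float) -> float:
    """Start time of this client's next assigned slot in the TDMA superframe."""
    superframe = SLOT_DURATION * SLOTS_PER_SUPERFRAME
    frame_start = (now // superframe) * superframe
    slot_start = frame_start + MY_SLOT * SLOT_DURATION
    return slot_start if slot_start > now else slot_start + superframe

def jit_send(pull_request, transmit):
    """Pull the request from the application just before the slot, then transmit."""
    slot_start = next_tx_opportunity(time.time())
    time.sleep(max(0.0, slot_start - PULL_LEAD_TIME - time.time()))
    request = pull_request()          # freshest sensor reading, sampled late
    time.sleep(max(0.0, slot_start - time.time()))
    transmit(request)                 # goes out in the assigned slot

# Example: send the latest (simulated) sensor reading in the next slot.
jit_send(pull_request=lambda: {"reading": 42, "t": time.time()},
         transmit=lambda req: print("tx at", time.time(), req))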


Read also

Wireless communication is a basis of the vision of connected and automated vehicles (CAVs). Given the heterogeneity of both wireless communication technologies and CAV applications, one question critical to technology road-mapping and policy making is which communication technology is more suitable for a specific CAV application. Focusing on the technical aspect of this question, we present a multi-scale spatiotemporal perspective of wireless communication technologies as well as canonical CAV applications in active safety, fuel economy and emission control, vehicle automation, and vehicular infotainment. Our analysis shows that CAV applications with small spatiotemporal-scale communication requirements are best supported by V2V communications; applications with large spatiotemporal-scale requirements are better supported by cellular communications; and applications with small spatial but medium-to-large temporal scale can be supported by both V2V and cellular communications, offering the opportunity to leverage heterogeneous communication resources.
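A hedged illustration of this regime mapping follows. The numeric thresholds (100 m, 1 s) and category names are invented for illustration; the article gives qualitative regimes, not these values.

from enum import Enum

class Tech(Enum):
    V2V = "V2V"
    CELLULAR = "cellular"
    EITHER = "V2V or cellular"

def recommend(spatial_scale_m: float, temporal_scale_s: float) -> Tech:
    """Map a CAV application's spatiotemporal communication scale to a technology.

    Thresholds (100 m, 1 s) are illustrative assumptions, not from the article.
    """
    small_space = spatial_scale_m <= 100.0
    small_time = temporal_scale_s <= 1.0
    if small_space and small_time:
        return Tech.V2V       # small spatiotemporal scale: best served by V2V
    if small_space:
        return Tech.EITHER    # small spatial, medium-to-large temporal scale
    return Tech.CELLULAR      # large spatiotemporal scale (other mixes are not
                              # distinguished in the article; assumed cellular)

print(recommend(50.0, 0.1))     # e.g., cooperative collision avoidance -> Tech.V2V
print(recommend(5000.0, 60.0))  # e.g., traffic-aware routing -> Tech.CELLULAR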
Jiaxin Liang, He Chen, 2020
Time-sensitive wireless networks are an important enabling building block for many emerging industrial Internet of Things (IIoT) applications. Quick prototyping and evaluation of time-sensitive wireless technologies are desirable for R&D efforts. Software-defined radio (SDR), by allowing wireless signal processing on a personal computer (PC), has been widely used for such quick prototyping efforts. Unfortunately, because of the uncontrollable delay between the PC and the radio board, SDR is generally deemed unsuitable for time-sensitive wireless applications that demand communication with low and deterministic latency. For a rigorous evaluation of its suitability for industrial IoT applications, this paper conducts a quantitative investigation of the synchronization accuracy and end-to-end latency achievable by an SDR wireless system. To this end, we designed and implemented a time-slotted wireless system on the Universal Software Radio Peripheral (USRP) SDR platform. We developed a time synchronization mechanism to maintain synchrony among nodes in the system. To reduce the delays and delay jitters between the USRP board and its PC, we devised a just-in-time algorithm to ensure that packets sent by the PC to the USRP reach the USRP just before the time slots in which they are to be transmitted. Our experiments demonstrate that 90% (100%) of the time slots of different nodes can be synchronized and aligned to within ±0.5 samples or ±0.05 µs (±1.5 samples or ±0.15 µs), and that the end-to-end packet delivery latency can be as low as 3.75 ms. This means that SDR-based solutions can be applied in a range of IIoT applications that require tight synchrony and moderately low latency, e.g., sensor data collection, automated guided vehicle (AGV) control, and human-machine interaction (HMI).
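A minimal sketch of the just-in-time idea on the host side, assuming the host can time the hand-off of packets to the radio. The delay estimate and guard interval below are illustrative assumptions, and USRP-specific APIs are deliberately omitted; in a real system the delay would be measured, e.g., by round-trip probing.

def jit_release_time(slot_start_s: float,
                     est_host_to_radio_delay_s: float = 0.0015,
                     guard_s: float = 0.0002) -> float:
    """Time at which the host should hand a packet to the radio so that it
    arrives just before its assigned transmission slot.

    est_host_to_radio_delay_s and guard_s are illustrative assumptions, not
    measured values from the paper's USRP testbed.
    """
    return slot_start_s - est_host_to_radio_delay_s - guard_s

# Example: a packet destined for the slot starting at t = 10.000 s should be
# released by the host at ~9.9983 s, so it waits only briefly in the radio
# buffer instead of queueing there with unpredictable jitter.
print(jit_release_time(10.0))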
The 5G Phase-2 and beyond wireless systems will focus more on vertical applications such as autonomous driving and the industrial Internet of Things, many of which are categorized as ultra-Reliable Low-Latency Communications (uRLLC). In this article, an alternative view on uRLLC is presented: information latency, which measures the distortion of information resulting from the time lag of its acquisition process, is more relevant than the conventional communication latency of uRLLC in wireless networked control systems. An AI-assisted Situationally-aware Multi-Agent Reinforcement learning framework for wireless neTworks (SMART) is presented to address the information latency optimization challenge. Case studies of typical Autonomous Driving (AD) applications, namely dense platooning and intersection management, show that SMART can effectively optimize information latency and, more importantly, that information latency-optimized systems significantly outperform conventional uRLLC-oriented systems in terms of AD performance such as traffic efficiency, pointing to a new research and system design paradigm.
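One way to read "information latency" (a hedged interpretation, not the article's formal definition): the distortion grows with the age of the acquired information. For a platooning example, a position report of age Δt from a vehicle moving at speed v is stale by roughly v·Δt, as sketched below; the linear model is an illustrative assumption.

def position_distortion(speed_mps: float, info_age_s: float) -> float:
    """Approximate position error (m) induced by acting on information of the
    given age; a simple illustrative model, not the article's definition."""
    return speed_mps * info_age_s

# At 30 m/s (108 km/h), a 20 ms-old position report is already ~0.6 m stale,
# so cutting acquisition lag can matter more than shaving radio latency alone.
print(position_distortion(30.0, 0.020))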
We consider the problem of minimizing age in a multihop wireless network. There are multiple source-destination pairs, transmitting data through multiple wireless channels, over multiple hops. We propose a network control policy which consists of a distributed scheduling algorithm, utilizing channel state information and queue lengths at each link, in combination with a packet dropping rule. Dropping of older packets locally at queues is seen to reduce the average age of flows, even below what can be achieved by Last Come First Served (LCFS) scheduling. Dropping of older packets also allows us to use the network without congestion, irrespective of the rate at which updates are generated. Furthermore, exploiting system state information substantially improves performance. The proposed scheduling policy obtains average age values close to a theoretical lower bound as well.
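A hedged sketch of such a dropping rule, assuming per-flow update queues where only freshness matters. The single-slot "keep only the newest" queue below is one common variant; the paper's exact rule operates at each link's queue and may retain more than one packet.

from dataclasses import dataclass

@dataclass
class Update:
    flow_id: int
    gen_time: float   # generation timestamp; smaller = older
    payload: bytes

class FreshestQueue:
    """Per-flow queue that drops an older update when a newer one arrives.

    An illustrative 'keep only the newest' rule, not the paper's exact policy.
    """
    def __init__(self):
        self._latest: dict[int, Update] = {}

    def push(self, u: Update) -> None:
        cur = self._latest.get(u.flow_id)
        if cur is None or u.gen_time > cur.gen_time:
            self._latest[u.flow_id] = u   # newer update replaces the older one

    def pop(self, flow_id: int) -> Update | None:
        return self._latest.pop(flow_id, None)

q = FreshestQueue()
q.push(Update(1, gen_time=0.10, payload=b"old"))
q.push(Update(1, gen_time=0.25, payload=b"new"))   # the older packet is dropped
print(q.pop(1).payload)                             # b'new'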
The explosion of 5G networks and the Internet of Things will result in an exceptionally crowded RF environment, where techniques such as spectrum sharing and dynamic spectrum access will become essential components of the wireless communication process. In this vision, wireless devices must be able to (i) learn to autonomously extract knowledge from the spectrum on-the-fly; and (ii) react in real time to the inferred spectrum knowledge by appropriately changing communication parameters, including frequency band, symbol modulation, and coding rate, among others. Traditional CPU-based machine learning suffers from high latency and requires application-specific, computationally intensive feature extraction/selection algorithms. In this paper, we present RFLearn, the first system enabling spectrum knowledge extraction from unprocessed I/Q samples by deep learning directly in the RF loop. RFLearn provides (i) a complete hardware/software architecture where the CPU, radio transceiver, and learning/actuation circuits are tightly connected for maximum performance; and (ii) a learning circuit design framework where the latency vs. hardware resource consumption trade-off can be explored. We implement and evaluate the performance of RFLearn on a custom software-defined radio built on a system-on-chip (SoC) ZYNQ-7000 device mounting AD9361 radio transceivers and VERT2450 antennas. We showcase the capabilities of RFLearn by applying it to the fundamental problems of modulation and OFDM parameter recognition. Experimental results reveal that RFLearn decreases latency and power by about 17x and 15x, respectively, with respect to a software-based solution, with a comparatively low hardware resource consumption.
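To illustrate the kind of model such a system might run, here is a hedged software sketch of a small 1-D CNN classifying raw I/Q windows. The architecture, window size, and class count are invented for illustration; RFLearn synthesizes its learning circuits on the FPGA rather than running PyTorch.

import torch
import torch.nn as nn

class IQModulationClassifier(nn.Module):
    """Tiny 1-D CNN over raw I/Q samples; illustrative, not RFLearn's design."""
    def __init__(self, num_classes: int = 4, window: int = 128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 16, kernel_size=7, padding=3),  # 2 channels: I and Q
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 2, window) unprocessed I/Q samples, no feature extraction
        return self.classifier(self.features(x).squeeze(-1))

model = IQModulationClassifier()
iq = torch.randn(1, 2, 128)        # one window of raw I/Q samples
print(model(iq).softmax(-1))       # scores over the (assumed) 4 modulation classes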