
Ultra-Reliable and Low-Latency Vehicular Transmission: An Extreme Value Theory Approach

Added by Chen-Feng Liu
Publication date: 2018
Language: English





Considering a Manhattan mobility model in vehicle-to-vehicle networks, this work studies a power minimization problem subject to second-order statistical constraints on latency and reliability, captured by a network-wide maximal data queue length. We invoke results in extreme value theory to characterize statistics of extreme events in terms of the maximal queue length. Subsequently, leveraging Lyapunov stochastic optimization to deal with network dynamics, we propose two queue-aware power allocation solutions. In contrast with the baseline, our approaches achieve lower mean and variance of the maximal queue length.
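The EVT characterization above can be illustrated with a minimal sketch. By the Fisher-Tippett-Gnedenko theorem, block maxima of a queue-length process are approximately GEV-distributed; for light-tailed queue dynamics the Gumbel member of the family applies and can be fitted by moment matching. The synthetic queue trace, window size, and threshold below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic per-slot queue lengths; each row is one observation window.
samples = rng.exponential(scale=2.0, size=(1000, 50))
# Network-wide maximal queue length per window (block maxima).
block_maxima = samples.max(axis=1)

# Fisher-Tippett-Gnedenko: block maxima are approximately GEV-distributed.
# For light-tailed inputs the limit is Gumbel; fit by moment matching:
#   scale = std * sqrt(6) / pi,  loc = mean - Euler_gamma * scale.
euler_gamma = 0.5772156649
scale = block_maxima.std() * np.sqrt(6.0) / np.pi
loc = block_maxima.mean() - euler_gamma * scale

# Gumbel tail: P(maximal queue length > q0).
q0 = 15.0
exceed_prob = 1.0 - np.exp(-np.exp(-(q0 - loc) / scale))
```

The fitted tail probability is exactly the kind of second-order statistical constraint on the maximal queue length that the power-allocation problem imposes.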



Related research


Model-based reliable communication with low latency is of paramount importance for time-critical wireless control systems. In this work, we study the downlink (DL) controller-to-actuator scheduling problem in a wireless industrial network so as to minimize the outage probability. In contrast to the existing literature based on well-known stationary fading channel models, we assume an arbitrary and unknown channel fading model, which is available only via samples. To overcome the issue of limited data samples, we invoke the generative adversarial network framework and propose an online data-driven approach to jointly schedule the DL transmissions and learn the channel distributions in an online manner. Numerical results show that the proposed approach can effectively learn any arbitrary channel distribution and further achieve the optimal performance by using the predicted outage probability.
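The scheduling idea above, picking the downlink with the lowest predicted outage, can be sketched without the GAN machinery: here the empirical distribution of the channel samples stands in for the learned generative model, and the link count, fading scales, and SNR threshold are made-up illustrative values.

```python
import numpy as np

def predicted_outage(channel_samples, snr_threshold):
    """Empirical outage probability from channel-gain samples (sketch).
    The paper learns the unknown fading distribution with a GAN; here the
    empirical distribution of the samples is a simple stand-in."""
    return float(np.mean(channel_samples < snr_threshold))

def schedule_downlink(samples_per_link, snr_threshold):
    """Schedule the actuator link with the lowest predicted outage."""
    outages = [predicted_outage(s, snr_threshold) for s in samples_per_link]
    return int(np.argmin(outages)), outages

rng = np.random.default_rng(1)
# Three actuator links with unknown (here: Rayleigh) fading, samples only.
links = [rng.rayleigh(scale=sc, size=500) for sc in (0.5, 1.0, 2.0)]
best, outs = schedule_downlink(links, snr_threshold=0.8)
```

As more samples arrive each slot, the outage estimates tighten, which mirrors the online learn-while-scheduling loop of the paper.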
To overcome device limitations in performing computation-intense applications, mobile edge computing (MEC) enables users to offload tasks to proximal MEC servers for faster task computation. However, current MEC system design is based on average metrics, which fail to account for the ultra-reliable low-latency requirements in mission-critical applications. To address this, this paper proposes a new system design in which probabilistic and statistical constraints are imposed on task queue lengths by applying extreme value theory. The aim is to minimize users' power consumption while trading off the allocated resources for local computation and task offloading. Due to wireless channel dynamics, users are re-associated to MEC servers in order to offload tasks at higher rates or to access proximal servers. In this regard, a user-server association policy is proposed, taking into account the channel quality as well as the servers' computation capabilities and workloads. By marrying tools from Lyapunov optimization and matching theory, a two-timescale mechanism is proposed, where the user-server association is solved on the long timescale while a dynamic task offloading and resource allocation policy is executed on the short timescale. Simulation results corroborate the effectiveness of the proposed approach by guaranteeing highly reliable task computation and lower delay, compared to several baselines.
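The short-timescale step in such Lyapunov-based designs is typically a drift-plus-penalty minimization: each slot, choose the control action minimizing V times the penalty (power) minus the backlog-weighted service. The rate model, grid search, and all numeric values below are illustrative assumptions, not the paper's exact policy.

```python
import numpy as np

def drift_plus_penalty_power(Q, h, V, p_grid, N0=1.0, W=1.0):
    """One slot of a Lyapunov drift-plus-penalty power choice (sketch).

    Q: current task-queue backlog, h: channel gain, V: power/delay tradeoff.
    Picks transmit power p minimizing V*p - Q*rate(p) over a grid; the
    Shannon-style rate model and parameters are illustrative assumptions.
    """
    rate = W * np.log2(1.0 + p_grid * h / N0)  # offloading rate per slot
    objective = V * p_grid - Q * rate          # penalty minus weighted service
    return p_grid[np.argmin(objective)]

p_grid = np.linspace(0.0, 2.0, 201)
# A larger backlog Q pushes the minimizer toward higher transmit power.
p_low = drift_plus_penalty_power(Q=1.0, h=1.0, V=10.0, p_grid=p_grid)
p_high = drift_plus_penalty_power(Q=100.0, h=1.0, V=10.0, p_grid=p_grid)
```

The queue-awareness is visible directly: with a nearly empty queue the slot-wise optimum is to stay silent, while a long queue drives the policy to spend power on service.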
Effective capacity defines the maximum communication rate subject to a specific delay constraint, while effective energy efficiency (EEE) is the ratio between effective capacity and power consumption. We analyze the EEE of ultra-reliable networks operating in the finite blocklength regime. We obtain a closed-form approximation for the EEE in quasi-static Nakagami-$m$ (and Rayleigh as a sub-case) fading channels as a function of power, error probability, and latency. Furthermore, we characterize the QoS-constrained EEE maximization problem for different power consumption models, which shows a significant difference between finite and infinite blocklength coding with respect to EEE and the optimal power allocation strategy. As asserted in the literature, achieving ultra-reliability using one transmission consumes a huge amount of power, which is not applicable for energy-limited IoT devices. In this context, accounting for the empty buffer probability in machine-type communication (MTC) and extending the maximum delay tolerance jointly enhances the EEE and allows for adaptive retransmission of faulty packets. Our analysis reveals that obtaining the optimum error probability for each transmission by minimizing the non-empty buffer probability approaches EEE optimality, while remaining analytically tractable via Dinkelbach's algorithm. Furthermore, the results illustrate the power saving and the significant EEE gain attained by applying adaptive retransmission protocols, while sacrificing a limited increase in latency.
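Dinkelbach's algorithm, mentioned above, turns a ratio maximization max f(x)/g(x) into a sequence of parametric subproblems max f(x) - lam*g(x), updating lam to the current ratio until the parametric optimum reaches zero. A minimal generic sketch, with a toy rate-like numerator and an assumed affine power model standing in for the paper's EEE objective:

```python
import numpy as np

def dinkelbach(f, g, x_grid, tol=1e-9, max_iter=100):
    """Dinkelbach's algorithm for max_x f(x)/g(x), g > 0 (generic sketch)."""
    lam = 0.0
    x_star = x_grid[0]
    for _ in range(max_iter):
        vals = f(x_grid) - lam * g(x_grid)   # parametric subproblem
        x_star = x_grid[np.argmax(vals)]
        if abs(f(x_star) - lam * g(x_star)) < tol:
            break                            # F(lam) = 0 => lam is optimal
        lam = f(x_star) / g(x_star)          # ratio update
    return x_star, lam

# Toy stand-in for EEE: rate-like numerator over circuit + transmit power.
x = np.linspace(0.01, 10.0, 2000)
f = lambda p: np.log2(1.0 + p)   # rate-like numerator (assumed)
g = lambda p: p + 1.0            # affine power model (assumed)
p_opt, eee_opt = dinkelbach(f, g, x)
```

For this toy ratio log2(1+p)/(1+p) the optimum is at p = e - 1, which the iteration recovers on the grid; the paper applies the same scheme to the finite-blocklength EEE expression.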
Jeonghun Park, 2019
In this letter, we analyze the achievable rate of ultra-reliable low-latency communications (URLLC) in a randomly modeled wireless network. We use two mathematical tools to properly characterize the considered system: i) stochastic geometry to model the spatial locations of the transmitters in a network, and ii) finite blocklength analysis to reflect the features of short packets. Exploiting these tools, we derive an integral-form expression for the decoding error probability as a function of the target rate, the path-loss exponent, the communication range, the density, and the channel coding length. We also obtain a tight closed-form approximation. The main finding from the analytical results is that, in URLLC, increasing the signal-to-interference ratio (SIR) improves the rate performance significantly more than increasing the channel coding length. Via simulations, we show that fractional frequency reuse improves the area spectral efficiency by reducing the amount of mutual interference.
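The finite-blocklength ingredient above is commonly captured by the normal approximation of Polyanskiy, Poor, and Verdu. A minimal sketch at a fixed SNR (the letter itself averages over stochastic-geometry SIR statistics, and the numeric values are illustrative):

```python
import math

def fbl_error_prob(snr, rate, n):
    """Normal approximation to the decoding error probability at
    blocklength n (Polyanskiy-Poor-Verdu) for a fixed-SNR channel.
    Illustrative sketch, not the letter's integral-form expression."""
    C = math.log2(1.0 + snr)                                      # capacity
    V = (1.0 - 1.0 / (1.0 + snr) ** 2) * math.log2(math.e) ** 2  # dispersion
    arg = math.sqrt(n / V) * (C - rate)
    return 0.5 * math.erfc(arg / math.sqrt(2.0))  # Gaussian Q-function

# The letter's finding: raising SIR/SNR helps more than lengthening the code.
e_base = fbl_error_prob(snr=4.0, rate=2.0, n=200)
e_more_snr = fbl_error_prob(snr=8.0, rate=2.0, n=200)  # double the SNR
e_more_n = fbl_error_prob(snr=4.0, rate=2.0, n=400)    # double the blocklength
```

With these (assumed) numbers, doubling the SNR cuts the error probability by far more than doubling the blocklength, consistent with the letter's conclusion.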
This paper proposes and demonstrates a PHY-layer design of a real-time prototype that supports Ultra-Reliable Communication (URC) in wireless infrastructure networks. The design makes use of Orthogonal Frequency Division Multiple Access (OFDMA) as a means to achieve URC. Compared with Time-Division Multiple Access (TDMA), OFDMA concentrates the transmit power into a narrower bandwidth, resulting in a higher effective SNR. Compared with Frequency-Division Multiple Access (FDMA), OFDMA has higher spectrum efficiency thanks to the smaller subcarrier spacing. Although OFDMA has been introduced in 802.11ax, the purpose there was to add flexibility in spectrum usage. Our Reliable OFDMA design, referred to as ROFA, is a clean-slate design with the single goal of ultra-reliable packet delivery. ROFA solves a number of key challenges to ensure ultra-reliability: (1) a downlink-coordinated time-synchronization mechanism to synchronize the uplink transmissions of users, with at most $0.1 \mu s$ timing offset; (2) an STF-free packet reception synchronization method that exploits the property of synchronous systems to avoid packet misdetection; and (3) an uplink precoding mechanism to reduce the CFOs between users and the AP to a negligible level. We implemented ROFA on the Universal Software Radio Peripheral (USRP) SDR platform with real-time signal processing. Extensive experimental results show that ROFA can achieve ultra-reliable packet delivery ($PER < 10^{-5}$) with $11.5 dB$ less transmit power compared with OFDM-TDMA when they use $3$ and $52$ subcarriers respectively.
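The power-concentration argument above is simple arithmetic: squeezing the same transmit power from 52 subcarriers onto 3 raises the per-subcarrier SNR by 10*log10(52/3) dB, which is the right order of magnitude for the reported 11.5 dB gain (the measured figure also reflects other implementation effects).

```python
import math

# Same total transmit power concentrated from 52 subcarriers (OFDM-TDMA)
# onto 3 subcarriers (ROFA) boosts per-subcarrier SNR by 10*log10(52/3) dB.
snr_gain_db = 10.0 * math.log10(52 / 3)
```
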
