
RAN Slicing for Massive IoT and Bursty URLLC Service Multiplexing: Analysis and Optimization

Posted by: Peng Yang
Publication date: 2020
Research language: English





Future wireless networks are envisioned to serve massive Internet of Things (mIoT) traffic via radio access technologies in which the random access channel (RACH) procedure is used by IoT devices to access the network. However, the theoretical analysis of the RACH procedure for massive numbers of IoT devices is challenging. To address this challenge, we first relate the RACH request of an IoT device to the status of its maintained queue and analyze the evolution of that queue. Based on this analysis, we derive a closed-form expression for the random access (RA) success probability, a key indicator characterizing the RACH procedure of the device. Moreover, given the trend of converging different services onto a shared infrastructure, we investigate RAN slicing for multiplexing mIoT and bursty ultra-reliable low-latency communication (URLLC) services. Specifically, we formulate RAN slicing as an optimization problem that maximizes the total RA success probability of all IoT devices while serving URLLC devices in an energy-efficient way. A slice resource optimization (SRO) algorithm exploiting relaxation and approximation, with provable tightness and error bound, is then proposed to solve the problem. Simulation results demonstrate that the proposed SRO algorithm can effectively multiplex mIoT and bursty URLLC traffic.
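The abstract does not reproduce the closed-form RA success probability or the SRO formulation, so the following is only a toy sketch under assumed parameters: each backlogged IoT device picks one of the mIoT slice's RACH preambles uniformly at random, an attempt succeeds when no other device picks the same preamble, and the URLLC slice is modeled as a hypothetical hard reservation rather than through the paper's energy-efficiency objective.

```python
# Toy sketch only: not the paper's closed-form result or SRO algorithm.

def ra_success_prob(n_devices: int, n_preambles: int) -> float:
    """P(no other device picks my preamble) under uniform random selection."""
    if n_preambles <= 0:
        return 0.0
    return (1.0 - 1.0 / n_preambles) ** (n_devices - 1)

TOTAL_RESOURCES = 120       # hypothetical pool shared by both slices
URLLC_RESERVE = 40          # hypothetical hard reservation for URLLC devices
N_IOT = 200                 # backlogged IoT devices

best_split, best_obj = None, -1.0
for preambles in range(1, TOTAL_RESOURCES - URLLC_RESERVE + 1):
    obj = N_IOT * ra_success_prob(N_IOT, preambles)   # total RA success probability
    if obj > best_obj:
        best_split, best_obj = preambles, obj

print(f"mIoT preambles: {best_split}, expected successes per slot: {best_obj:.1f}")
```

Because this toy objective is monotone in the number of preambles, the search trivially returns the largest feasible split; in the paper the trade-off is richer, since URLLC energy efficiency enters the objective itself rather than acting as a hard reserve.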


Read also

A critical task in 5G networks with heterogeneous services is spectrum slicing of the shared radio resources, through which each service gets performance guarantees. In this paper, we consider a setup in which a Base Station (BS) should serve two types of traffic in the downlink, enhanced mobile broadband (eMBB) and ultra-reliable low-latency communication (URLLC), respectively. Two resource allocation strategies are considered, non-orthogonal multiple access (NOMA) and orthogonal multiple access (OMA). A framework for power minimization is presented, in which the BS knows the channel state information (CSI) of the eMBB users only. Nevertheless, due to the resource sharing, it is shown that this knowledge can be used also to the benefit of the URLLC users. The numerical results show that NOMA leads to a lower power consumption compared to OMA for every simulation parameter under test.
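To make the NOMA-versus-OMA comparison concrete, here is a minimal two-user downlink power-minimization sketch with illustrative channel gains and rate targets (not the paper's model or CSI assumptions): under NOMA the stronger eMBB user cancels the URLLC signal via SIC, while under OMA the bandwidth split is searched numerically.

```python
import numpy as np

# Illustrative numbers only: user 1 = eMBB (strong channel), user 2 = URLLC (weak channel).
g1, g2 = 1.0, 0.2          # channel power gains
N0B = 1.0                  # noise power over the full band (normalized)
R1, R2 = 2.0, 1.0          # target spectral efficiencies [bit/s/Hz]

# NOMA: superposition coding; user 1 removes user 2's signal via SIC,
# user 2 decodes its own signal treating user 1's signal as interference.
P1_noma = (2**R1 - 1) * N0B / g1
P2_noma = (2**R2 - 1) * (P1_noma * g2 + N0B) / g2
noma_total = P1_noma + P2_noma

# OMA: give bandwidth fraction a to user 1 and (1 - a) to user 2, search over a.
def oma_power(a: float) -> float:
    P1 = (2**(R1 / a) - 1) * a * N0B / g1
    P2 = (2**(R2 / (1 - a)) - 1) * (1 - a) * N0B / g2
    return P1 + P2

oma_total = min(oma_power(a) for a in np.linspace(0.05, 0.95, 181))

print(f"NOMA total power: {noma_total:.2f}, OMA total power: {oma_total:.2f}")
```

With these example numbers NOMA needs noticeably less total power than OMA, which is consistent with the qualitative finding reported in the abstract.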
An important modulation technique for the Internet of Things (IoT) is the one proposed by the LoRa Alliance™. In this paper we analyze the M-ary LoRa modulation in the time and frequency domains. First, we provide the signal description in the time domain, and show that LoRa is a memoryless continuous phase modulation. The cross-correlation between the transmitted waveforms is determined, proving that LoRa can be considered approximately an orthogonal modulation only for large M. Then, we investigate the spectral characteristics of the signal modulated by random data, obtaining a closed-form expression of the spectrum in terms of Fresnel functions. Quite surprisingly, we found that LoRa has both continuous and discrete spectra, with the discrete spectrum containing exactly a fraction 1/M of the total signal power.
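As a rough illustration of the near-orthogonality claim, the sketch below generates oversampled LoRa chirps under the common frequency-shift chirp model (symbol m starts at normalized frequency m/M and the instantaneous frequency wraps modulo the bandwidth) and measures their cross-correlation; the phase is obtained by numerical integration, so this only approximates the continuous-time waveforms analyzed in the paper.

```python
import numpy as np

def lora_symbol(m: int, M: int = 128, L: int = 8) -> np.ndarray:
    """Oversampled (factor L) baseband LoRa chirp for symbol m out of M."""
    N = M * L                                   # samples per symbol
    k = np.arange(N)
    f = np.mod(m / M + k / N, 1.0)              # instantaneous frequency / bandwidth
    phase = 2 * np.pi * np.cumsum(f) * (M / N)  # rectangle-rule phase integration
    return np.exp(1j * phase) / np.sqrt(N)      # unit-energy waveform

M = 128
waveforms = [lora_symbol(m, M) for m in range(M)]
ref = waveforms[0]
xcorr = [abs(np.vdot(ref, s)) for s in waveforms[1:]]
print(f"max |cross-correlation| with symbol 0: {max(xcorr):.3f}")  # small, but not exactly 0
```

The residual correlation shrinks as M grows, which matches the statement that LoRa is only approximately orthogonal and becomes closer to orthogonal for large M.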
Wen Wu, Nan Chen, Conghao Zhou (2020)
In this paper, we investigate a radio access network (RAN) slicing problem for Internet of vehicles (IoV) services with different quality of service (QoS) requirements, in which multiple logically-isolated slices are constructed on a common roadside network infrastructure. A dynamic RAN slicing framework is presented to dynamically allocate radio spectrum and computing resource, and distribute computation workloads for the slices. To obtain an optimal RAN slicing policy for accommodating the spatial-temporal dynamics of vehicle traffic density, we first formulate a constrained RAN slicing problem with the objective to minimize long-term system cost. This problem cannot be directly solved by traditional reinforcement learning (RL) algorithms due to complicated coupled constraints among decisions. Therefore, we decouple the problem into a resource allocation subproblem and a workload distribution subproblem, and propose a two-layer constrained RL algorithm, named Resource Allocation and Workload diStribution (RAWS) to solve them. Specifically, an outer layer first makes the resource allocation decision via an RL algorithm, and then an inner layer makes the workload distribution decision via an optimization subroutine. Extensive trace-driven simulations show that the RAWS effectively reduces the system cost while satisfying QoS requirements with a high probability, as compared with benchmarks.
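The following skeleton mirrors the two-layer structure described above, with an outer learner choosing a per-slice resource allocation and an inner routine distributing the workload under that allocation; the epsilon-greedy bandit, the cost model, and the traffic statistics are illustrative assumptions, not the constrained RL design of RAWS.

```python
import random

# Outer layer: pick the computing share of slice 1 vs slice 2.
ALLOCATIONS = [(0.3, 0.7), (0.5, 0.5), (0.7, 0.3)]

def inner_workload_distribution(alloc, demand1, demand2):
    """Toy inner layer: serve each slice's demand with its computing share and
    return the total 'delay' as the per-step cost (lower is better)."""
    return demand1 / alloc[0] + demand2 / alloc[1]

q_cost = {a: 0.0 for a in ALLOCATIONS}   # running average cost per allocation
n_sel = {a: 0 for a in ALLOCATIONS}
eps = 0.1

for step in range(2000):
    # stand-in for spatial-temporal traffic dynamics: slice 1 is busier on average
    demand1, demand2 = random.uniform(0.8, 1.2), random.uniform(0.2, 0.6)
    alloc = random.choice(ALLOCATIONS) if random.random() < eps \
        else min(q_cost, key=q_cost.get)              # explore vs exploit
    cost = inner_workload_distribution(alloc, demand1, demand2)
    n_sel[alloc] += 1
    q_cost[alloc] += (cost - q_cost[alloc]) / n_sel[alloc]

print({a: round(c, 2) for a, c in q_cost.items()})
```

The outer learner gradually prefers the allocation that gives the busier slice more resources, while the inner routine stays a cheap deterministic step, which is the general division of labor between the two layers.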
Ultra-reliable low-latency communications (URLLC) arose to serve industrial IoT (IIoT) use cases within 5G. Currently, it has inherent limitations in supporting future services. Based on state-of-the-art research and practical deployment experience, in this article we introduce and advocate for three variants: broadband, scalable and extreme URLLC. We discuss use cases and key performance indicators and identify technology enablers for the new service modes. We bring practical considerations from an IIoT testbed and provide an outlook toward some new research directions.
The combination of cloud computing capabilities at the network edge and artificial intelligence promises to turn future mobile networks into service- and radio-aware entities, able to address the requirements of upcoming latency-sensitive applications. In this context, a challenging research goal is to exploit edge intelligence to dynamically and optimally manage Radio Access Network Slicing (a less mature and more complex technology than fifth-generation Network Slicing) and Radio Resource Management, which is a very complex task due to the largely unpredictable nature of the wireless channel. This paper presents a novel architecture that leverages Deep Reinforcement Learning at the edge of the network to address Radio Access Network Slicing and Radio Resource Management optimization in support of latency-sensitive applications. The effectiveness of our proposal against baseline methodologies is investigated through computer simulation, considering an autonomous-driving use case.
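A minimal environment interface that an edge-hosted DRL agent could train against is sketched below; the state, action and reward definitions are assumptions for illustration and are not taken from the paper.

```python
import random

class SlicingEnv:
    """Toy joint slicing/RRM environment: two slices share n_rbs resource blocks."""

    def __init__(self, n_slices=2, n_rbs=20):
        self.n_slices, self.n_rbs = n_slices, n_rbs

    def reset(self):
        # state: per-slice queued load (normalized) plus a coarse channel quality
        self.load = [random.random() for _ in range(self.n_slices)]
        self.cqi = [random.uniform(0.5, 1.0) for _ in range(self.n_slices)]
        return self.load + self.cqi

    def step(self, action):
        """action: resource blocks granted to each slice (should sum to n_rbs)."""
        served = [min(self.load[i], action[i] * self.cqi[i] / self.n_rbs)
                  for i in range(self.n_slices)]
        # penalize unserved traffic, weighting the latency-sensitive slice 0 more heavily
        reward = -(2.0 * (self.load[0] - served[0]) + (self.load[1] - served[1]))
        return self.reset(), reward

env = SlicingEnv()
state = env.reset()
state, reward = env.step([14, 6])   # example action favoring the latency-sensitive slice
print(round(reward, 3))
```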