
Spectrum Sensing and Resource Allocation for 5G Heterogeneous Cloud Radio Access Networks

Added by Hossein Safi
Publication date: 2019
Language: English





In this paper, the problem of opportunistic spectrum sharing for the next generation of wireless systems empowered by the cloud radio access network (C-RAN) is studied. More precisely, low-priority users employ cooperative spectrum sensing to detect a vacant portion of the spectrum that is not currently used by high-priority users. The scheme is designed to maximize the overall throughput of the low-priority users while guaranteeing the quality of service of the high-priority users. This objective is attained by optimally adjusting the spectrum sensing time with respect to imposed target probabilities of detection and false alarm, as well as by dynamically allocating and assigning C-RAN resources, i.e., transmit powers, sub-carriers, remote radio heads (RRHs), and base-band units. The resulting optimization problem is non-convex and NP-hard, and is therefore extremely difficult to tackle directly. To solve it, a low-complexity iterative approach is proposed in which the sensing time, user association parameters, and transmit powers of the RRHs are alternately assigned and optimized at every step. Numerical results are then provided to demonstrate the necessity of adjusting the sensing time in such systems, as well as of balancing the sensing-throughput tradeoff.
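To make the sensing-throughput tradeoff concrete, the minimal Python sketch below performs the one-dimensional sensing-time search for a classical energy detector under a target detection probability, following the standard Liang et al. formulation rather than the paper's full joint problem; the frame length, sampling rate, primary SNR, idle probability, and rate constant are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.stats import norm

def false_alarm(tau, fs, gamma, pd_bar):
    """False-alarm probability of an energy detector that must meet a
    target detection probability pd_bar over sensing time tau (s)."""
    q_inv = norm.isf(pd_bar)                      # Q^{-1}(pd_bar)
    arg = np.sqrt(2 * gamma + 1) * q_inv + np.sqrt(tau * fs) * gamma
    return norm.sf(arg)                           # Q(arg)

def throughput(tau, T, fs, gamma, pd_bar, c0, p_idle):
    """Average low-priority throughput for sensing time tau in a frame T:
    data is sent in the remaining T - tau only when the band is idle and
    no false alarm is raised."""
    pf = false_alarm(tau, fs, gamma, pd_bar)
    return (T - tau) / T * p_idle * (1 - pf) * c0

# Assumed illustrative parameters: 100 ms frame, 6 MHz sampling,
# -15 dB primary SNR, target Pd = 0.9, idle probability 0.8.
T, fs, gamma, pd_bar = 100e-3, 6e6, 10 ** (-15 / 10), 0.9
c0, p_idle = 6.66, 0.8                            # bits/s/Hz, P(idle)
taus = np.linspace(1e-4, 20e-3, 2000)
rates = [throughput(t, T, fs, gamma, pd_bar, c0, p_idle) for t in taus]
best = int(np.argmax(rates))
print(f"optimal sensing time ~ {taus[best] * 1e3:.2f} ms, "
      f"throughput ~ {rates[best]:.3f} bits/s/Hz")
```

Longer sensing lowers the false-alarm probability but shrinks the data portion of the frame, so the throughput curve is unimodal in the sensing time; this is the same tradeoff the full scheme balances jointly with power and user association.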



Related research

In this paper, we propose a joint radio and core resource allocation framework for NFV-enabled networks. In the proposed system model, the goal is to maximize energy efficiency (EE) while guaranteeing end-to-end (E2E) quality of service (QoS) for different service types. To this end, we formulate an optimization problem in which power and spectrum resources are allocated in the radio part. In the core part, the chaining, placement, and scheduling of functions are performed to ensure the QoS of all users. This joint optimization problem is modeled as a Markov decision process (MDP), considering the time-varying characteristics of the available resources and wireless channels. A soft actor-critic deep reinforcement learning (SAC-DRL) algorithm based on the maximum entropy framework is subsequently utilized to solve the above MDP. Numerical results reveal that the proposed joint approach based on the SAC-DRL algorithm can significantly reduce energy consumption compared to the case in which the radio (R-RA) and NFV (NFV-RA) resource allocation problems are optimized separately.
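As a rough illustration of how such an MDP could be encoded, the sketch below defines a toy environment whose reward is a QoS-penalized energy-efficiency proxy; the state layout, action encoding, rate model, and penalty weight are hypothetical assumptions, not the paper's formulation.

```python
import numpy as np

class JointAllocationEnv:
    """Toy MDP for joint radio/core allocation (hypothetical encoding)."""

    def __init__(self, n_users=4, n_subcarriers=8, n_servers=3):
        self.n_users, self.n_sc, self.n_srv = n_users, n_subcarriers, n_servers
        self.rng = np.random.default_rng(0)
        self.reset()

    def reset(self):
        # State: time-varying channel gains plus residual core capacities.
        self.h = self.rng.rayleigh(1.0, (self.n_users, self.n_sc))
        self.cpu = np.full(self.n_srv, 1.0)
        return self._state()

    def _state(self):
        return np.concatenate([self.h.ravel(), self.cpu])

    def step(self, action):
        # Action: per-user transmit powers, then per-user VNF placement.
        power = np.clip(action[:self.n_users], 0.0, 1.0)
        placement = action[self.n_users:].astype(int) % self.n_srv
        rate = np.log2(1.0 + power * self.h.max(axis=1))   # crude radio rate
        load = np.bincount(placement, minlength=self.n_srv) * 0.2
        qos_violation = np.maximum(load - self.cpu, 0.0).sum()
        ee = rate.sum() / (power.sum() + 1e-2)             # EE proxy
        reward = ee - 10.0 * qos_violation                 # penalized EE
        self.h = self.rng.rayleigh(1.0, self.h.shape)      # channel evolves
        return self._state(), reward, False, {}
```

A soft actor-critic agent from any standard DRL library could then be trained against this reset/step interface; the 10.0 penalty weight is an arbitrary choice trading EE against QoS violations.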
Next generation wireless networks are expected to be extremely complex due to their massive heterogeneity in terms of the types of network architectures they incorporate, the types and numbers of smart IoT devices they serve, and the types of emerging applications they support. In such large-scale and heterogeneous networks (HetNets), radio resource allocation and management (RRAM) becomes one of the major challenges encountered during system design and deployment. In this context, emerging Deep Reinforcement Learning (DRL) techniques are expected to be one of the main enabling technologies to address RRAM in future wireless HetNets. In this paper, we conduct a systematic, in-depth, and comprehensive survey of the applications of DRL techniques in RRAM for next generation wireless networks. Towards this, we first overview the existing traditional RRAM methods and identify their limitations that motivate the use of DRL techniques in RRAM. Then, we provide a comprehensive review of the most widely used DRL algorithms to address RRAM problems, including the value- and policy-based algorithms. The advantages, limitations, and use cases for each algorithm are provided. We then conduct a comprehensive and in-depth literature review and classify existing related works based on both the radio resources they are addressing and the type of wireless networks they are investigating. To this end, we carefully identify the types of DRL algorithms utilized in each related work, the elements of these algorithms, and the main findings of each related work. Finally, we highlight important open challenges and provide insights into several future research directions in the context of DRL-based RRAM. This survey is intentionally designed to guide and stimulate more research endeavors towards building efficient and fine-grained DRL-based RRAM schemes for future wireless networks.
In this paper, we explore perpetual, scalable, low-power wide-area (LPWA) networks. Specifically, we focus on the uplink transmissions of non-orthogonal multiple access (NOMA)-based LPWA networks consisting of multiple self-powered nodes and a single NOMA-based gateway. The self-powered LPWA nodes use the harvest-then-transmit protocol, where they harvest energy from ambient sources (solar and radio frequency signals) and then transmit their signals. The main features of the studied LPWA network are different transmission times-on-air, multiple uplink transmission attempts, and duty cycle restrictions. The aim of this work is to maximize the time-averaged sum of the uplink transmission rates by optimizing the transmission time-on-air allocation, the energy harvesting time allocation, and the power allocation, subject to a maximum transmit power and to the availability of the harvested energy. We propose a low-complexity solution which decouples the optimization problem into three sub-problems: we assign the LPWA node transmission times (using either a fair or an unfair approach), optimize the energy harvesting (EH) times using a one-dimensional search method, and optimize the transmit powers using a concave-convex procedure (CCCP). In the simulation results, we focus on Long Range (LoRa) networks as a practical example of an LPWA network. We validate our proposed solution and observe a 15% performance improvement when using NOMA.
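The one-dimensional EH-time search can be pictured with the following toy sketch: a grid search over the energy-harvesting time fraction, with the transmit power capped by both the power budget and the energy harvested during the EH phase (single-cluster uplink NOMA with ideal SIC; all parameter values are assumed for illustration and do not come from the paper).

```python
import numpy as np

def noma_sum_rate(alpha, T=1.0, p_harvest=0.5, p_max=0.2,
                  gains=(1.0, 0.6, 0.3), noise=0.05):
    """Time-averaged uplink NOMA sum rate for EH time fraction alpha
    (harvest-then-transmit; all parameter values are assumed)."""
    tx_time = (1.0 - alpha) * T
    if tx_time <= 0:
        return 0.0
    energy = alpha * T * p_harvest            # energy harvested in alpha*T
    p = min(p_max, energy / tx_time)          # budget- and EH-limited power
    # Uplink NOMA with ideal SIC: sum rate set by total received power.
    snr = p * sum(gains) / noise
    return tx_time / T * np.log2(1.0 + snr)

alphas = np.linspace(0.01, 0.99, 99)          # one-dimensional grid search
best = max(alphas, key=noma_sum_rate)
print(f"best EH fraction ~ {best:.2f}, "
      f"sum rate ~ {noma_sum_rate(best):.3f} bits/s/Hz")
```

Harvesting longer raises the feasible transmit power but shortens the transmission phase, which is why a one-dimensional search over the EH time suffices once the other two sub-problems are fixed.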
In cloud radio access networks (C-RANs), the baseband units and radio units of base stations are separated, which requires high-capacity fronthaul links connecting both parts. In this paper, we consider the delay-aware fronthaul allocation problem for C-RANs. The stochastic optimization problem is formulated as an infinite horizon average cost Markov decision process. To deal with the curse of dimensionality, we derive a closed-form approximate priority function and the associated error bound using perturbation analysis. Based on the closed-form approximate priority function, we propose a low-complexity delay-aware fronthaul allocation algorithm solving the per-stage optimization problem. The proposed solution is further shown to be asymptotically optimal for sufficiently small cross link path gains. Finally, the proposed fronthaul allocation algorithm is compared with various baselines through simulations, and it is shown that significant performance gains can be achieved.
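Purely to illustrate the shape of the per-stage step, the sketch below splits fronthaul capacity greedily by a simple weighted queue-length priority; the paper's actual priority function is a closed-form approximation derived via perturbation analysis, which this generic stand-in does not reproduce.

```python
import numpy as np

def per_stage_allocation(queues, rates_per_unit, capacity):
    """Greedily split fronthaul capacity by a delay priority.

    queues         -- current queue lengths (bits) at each RRH link
    rates_per_unit -- bits drained per unit of fronthaul capacity
    capacity       -- total fronthaul capacity units this stage
    """
    priority = queues * rates_per_unit        # assumed priority proxy
    order = np.argsort(-priority)             # serve urgent links first
    alloc = np.zeros_like(queues, dtype=float)
    remaining = capacity
    for i in order:
        need = queues[i] / rates_per_unit[i]  # units to empty queue i
        alloc[i] = min(need, remaining)
        remaining -= alloc[i]
        if remaining <= 0:
            break
    return alloc

alloc = per_stage_allocation(np.array([8.0, 3.0, 5.0]),
                             np.array([1.0, 2.0, 1.5]),
                             capacity=6.0)
print("per-link fronthaul allocation:", alloc)
```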
Yu Zhang, Xuelu Wu, Hong Peng (2021)
This letter studies a cloud radio access network (C-RAN) with multiple intelligent reflecting surfaces (IRSs) deployed between users and remote radio heads (RRHs). Specifically, we consider the uplink transmission where each RRH quantizes the received signals from the users by either point-to-point compression or Wyner-Ziv compression and then transmits the quantization bits to the baseband unit (BBU) pool through capacity-limited fronthaul links. To maximize the uplink sum rate, we jointly optimize the passive beamformers of the IRSs and the quantization noise covariance matrices of the fronthaul compression. A joint fronthaul compression and passive beamforming design is proposed by exploiting the Arimoto-Blahut algorithm and semidefinite relaxation (SDR). Numerical results show the performance gain achieved by the proposed algorithm.
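The SDR step for the passive beamformers can be sketched as follows: the unit-modulus vector v is lifted to V = v v^H, the rank-one constraint is dropped, and a feasible solution is recovered by Gaussian randomization. The effective-channel matrix R below is a random Hermitian surrogate, not the letter's construction (which couples R to the compression noise covariances through the Arimoto-Blahut iterations).

```python
import numpy as np
import cvxpy as cp   # needs an SDP-capable solver; SCS ships with cvxpy

n = 8                                             # IRS elements
rng = np.random.default_rng(1)
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
R = A @ A.conj().T                                # Hermitian PSD surrogate

# Lift v to V = v v^H, drop rank(V) = 1, and solve the resulting SDP.
V = cp.Variable((n, n), hermitian=True)
prob = cp.Problem(cp.Maximize(cp.real(cp.trace(R @ V))),
                  [V >> 0, cp.diag(V) == 1])
prob.solve()

# Gaussian randomization: draw candidates from CN(0, V) and project
# each onto the unit-modulus set, keeping the best objective value.
L = np.linalg.cholesky(V.value + 1e-9 * np.eye(n))
best_val = -np.inf
for _ in range(50):
    z = L @ (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    v = np.exp(1j * np.angle(z))                  # enforce |v_i| = 1
    best_val = max(best_val, np.real(v.conj() @ R @ v))
print(f"SDR upper bound {prob.value:.2f}, randomized value {best_val:.2f}")
```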