
A Lightweight Cell Switching and Traffic Offloading Scheme for Energy Optimization in Ultra-Dense Heterogeneous Networks

 Added by Attai Abubakar
 Publication date 2021
Research language: English


One of the major capacity boosters for 5G networks is the deployment of ultra-dense heterogeneous networks (UDHNs). However, this deployment results in a tremendous increase in the energy consumption of the network due to the large number of base stations (BSs) involved. In addition to enhanced capacity, 5G networks must also be energy efficient to be economically viable and environmentally friendly. Dynamic cell switching is a very common way of reducing the total energy consumption of the network, but most of the proposed methods are computationally demanding, which makes them unsuitable for ultra-dense network deployments with a massive number of BSs. To tackle this problem, we propose a lightweight cell switching scheme, the Threshold-based Hybrid cEll swItching Scheme (THESIS), for energy optimization in UDHNs. The developed approach combines the benefits of clustering and the exhaustive search (ES) algorithm to produce a solution whose optimality is close to that of ES (which is guaranteed to be optimal) but is computationally more efficient, and as such can be applied for cell switching in real networks even when their dimension is large. The performance evaluation shows that THESIS produces a significant reduction in the energy consumption of the UDHN and reduces the complexity of finding a near-optimal solution from exponential to polynomial.
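The clustering-plus-ES idea can be sketched in a toy model (an illustrative assumption, not the paper's exact formulation): switching a cell off saves power, but its traffic must fit within the macro cell's spare capacity. The function names, power values, and the fixed-size clustering below are all hypothetical.

```python
import itertools

def best_switch_pattern(loads, spare_capacity, p_active=10.0, p_sleep=1.0):
    """Exhaustive search (ES) over the on/off states of a small group of
    cells. Turning a cell off saves power, but its load must fit in the
    macro cell's spare capacity (a stand-in QoS constraint)."""
    best = None
    for pattern in itertools.product((0, 1), repeat=len(loads)):
        offloaded = sum(l for l, on in zip(loads, pattern) if not on)
        if offloaded > spare_capacity:
            continue  # offloading this much would overload the macro cell
        power = sum(p_active if on else p_sleep for on in pattern)
        if best is None or power < best[0]:
            best = (power, list(pattern))
    return best

def thesis_like(loads, cluster_size, spare_per_cluster):
    """Hybrid scheme in the spirit of THESIS: partition the cells into
    small clusters, then run ES independently inside each cluster.
    For N cells and cluster size c, the search shrinks from 2^N states
    to (N/c) * 2^c, i.e. from exponential to roughly linear in N."""
    total_power, on_flags = 0.0, []
    for i in range(0, len(loads), cluster_size):
        power, pattern = best_switch_pattern(loads[i:i + cluster_size],
                                             spare_per_cluster)
        total_power += power
        on_flags.extend(pattern)
    return total_power, on_flags
```

With four cells and clusters of two, each cluster's ES visits only 4 patterns instead of the 16 a global ES would need, which is the complexity reduction the abstract claims.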



Related research

Ultra-dense deployments in 5G, the next generation of cellular networks, are an alternative to provide ultra-high throughput by bringing the users closer to the base stations. On the other hand, 5G deployments must not incur a large increase in energy consumption in order to keep them cost-effective and most importantly to reduce the carbon footprint of cellular networks. We propose a reinforcement learning cell switching algorithm, to minimize the energy consumption in ultra-dense deployments without compromising the quality of service (QoS) experienced by the users. In this regard, the proposed algorithm can intelligently learn which small cells (SCs) to turn off at any given time based on the traffic load of the SCs and the macro cell. To validate the idea, we used the open call detail record (CDR) data set from the city of Milan, Italy, and tested our algorithm against typical operational benchmark solutions. With the obtained results, we demonstrate exactly when and how the proposed algorithm can provide energy savings, and moreover how this happens without reducing QoS of users. Most importantly, we show that our solution has a very similar performance to the exhaustive search, with the advantage of being scalable and less complex.
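The learning step behind such an approach can be illustrated with a tabular Q-learning update, where a state could be a discretized (SC load, macro load) pair and an action the choice of which small cell to switch off. The state encoding and reward design here are assumptions for illustration, not the paper's exact setup.

```python
def q_update(Q, state, action, reward, next_state, lr=0.1, gamma=0.9):
    """One tabular Q-learning step: nudge Q[state][action] toward the
    observed reward (e.g. energy saved minus a QoS penalty) plus the
    discounted best value achievable from the next state."""
    target = reward + gamma * max(Q[next_state])
    Q[state][action] += lr * (target - Q[state][action])
```

Repeating this update over traffic traces such as the Milan CDR data would gradually teach the agent which switch-off actions are safe at each load level.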
A cell association scheme determines which base station (BS) a mobile user (MU) should be associated with, and plays a significant role in determining the average data rate an MU can achieve in heterogeneous networks. However, the explosion of digital devices and the scarcity of spectrum collectively force us to carefully re-design the cell association scheme, which was previously taken for granted. To address this, we develop a new cell association scheme for heterogeneous networks based on joint consideration of the signal-to-interference-plus-noise ratio (SINR) that an MU experiences and the traffic load of candidate BSs. MUs and BSs in each tier are modeled as independent Poisson point processes (PPPs), and all channels experience independent and identically distributed (i.i.d.) Rayleigh fading. The data rate ratio and traffic load ratio distributions are derived to obtain the tier association probability and the average ergodic MU data rate. Through numerical results, we find that our proposed cell association scheme outperforms the cell range expansion (CRE) association scheme. Moreover, the results indicate that deploying small-sized, high-density BSs improves spectral efficiency under the proposed cell association scheme.
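The joint SINR-and-load idea can be sketched with a simple scoring rule. The linear combination and weight below are illustrative assumptions; the paper derives its rule from the PPP model rather than a fixed weighting.

```python
def associate(sinr_db, loads, w=1.0):
    """Joint cell association: score each candidate BS by its SINR (dB)
    minus a load penalty, and serve the MU from the best-scoring BS."""
    scores = [s - w * l for s, l in zip(sinr_db, loads)]
    return scores.index(max(scores))
```

For example, with SINRs [10, 8] dB and loads [9, 1], max-SINR association picks BS 0, while the joint score steers the user to the lightly loaded BS 1.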
Traffic load balancing and resource allocation are set to play a crucial role in leveraging the dense and increasingly heterogeneous deployment of multi-radio wireless networks. Traffic aggregation across different access points (APs)/radio access technologies (RATs) has become an important feature of recently introduced cellular standards on LTE dual connectivity and LTE-WLAN aggregation (LWA). Low-complexity traffic splitting solutions for scenarios where the APs are not necessarily collocated are of great interest to operators. In this paper, we consider a scenario where traffic for each user may be split across a macrocell and LTE or WiFi small cells connected by non-ideal backhaul links, and develop a closed-form solution for optimal aggregation accounting for the backhaul delay. The optimal solution lends itself to a water-filling-based interpretation, where the fraction of a user's traffic sent over the macrocell is proportional to the ratio of the user's peak capacity on that macrocell to its throughput on the small cell. Using comprehensive system-level simulations, the developed optimal solution is shown to provide substantial edge and median throughput gains over algorithms representative of current 3GPP-WLAN interworking solutions. The achievable performance benefits hold promise for operators expecting to introduce aggregation solutions with their existing WLAN deployments.
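The water-filling interpretation stated above can be sketched directly. The proportionality constant `k` and the clipping are illustrative assumptions; the paper's closed form also accounts for backhaul delay, which is omitted here.

```python
def macro_fraction(peak_macro_rate, small_cell_rate, k=1.0):
    """Share of a user's traffic routed over the macrocell: proportional
    to the ratio of the user's peak macro capacity to its small-cell
    throughput, clipped to the valid range [0, 1]."""
    return max(0.0, min(1.0, k * peak_macro_rate / small_cell_rate))
```

A user whose macro peak rate is half its small-cell throughput would thus send half its traffic over the macrocell, and a user whose macro link dominates would use the macrocell exclusively.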
Traffic load balancing and radio resource management are key to harnessing the dense and increasingly heterogeneous deployment of next-generation 5G wireless infrastructure. Strategies for aggregating user traffic across multiple radio access technologies (RATs) and/or access points (APs) would be crucial in such heterogeneous networks (HetNets), but are not well investigated. In this paper, we develop a low-complexity solution for maximizing an $\alpha$-optimal network utility leveraging the multi-link aggregation (simultaneous connectivity to multiple RATs/APs) capability of users in the network. The network utility maximization formulation has maximization of sum rate ($\alpha=0$), maximization of minimum rate ($\alpha \to \infty$), and proportional fairness ($\alpha=1$) as its special cases. A closed form is also developed for the special case where a user aggregates traffic from at most two APs/RATs, and hence can be applied to practical scenarios like LTE-WLAN aggregation (LWA) and LTE dual-connectivity solutions. It is shown that the required objective may also be realized through a decentralized implementation requiring a series of message exchanges between the users and the network. Using comprehensive system-level simulations, it is shown that optimal leveraging of multi-link aggregation leads to substantial throughput gains over single-RAT/AP selection techniques.
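The $\alpha$-fair utility family referenced above has a standard closed form, sketched below; only the function name is an invention here, the formula itself is the textbook definition.

```python
import math

def alpha_utility(rate, alpha):
    """Alpha-fair utility of a user rate: alpha = 0 recovers sum-rate
    maximization, alpha = 1 is proportional fairness (the log utility),
    and alpha -> infinity approaches max-min fairness."""
    if alpha == 1.0:
        return math.log(rate)  # limiting case of (r^(1-a) - 1)/(1-a)
    return rate ** (1.0 - alpha) / (1.0 - alpha)
```

Maximizing the sum of these utilities over user rates yields each special case in the abstract by choosing $\alpha$ accordingly.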
Quantum communication networks are emerging as a promising technology that could constitute a key building block in future communication networks in the 6G era and beyond. These networks have an inherent feature of parallelism that allows them to boost the capacity and enhance the security of communication systems. Recent advances led to the deployment of small- and large-scale quantum communication networks with real quantum hardware. In quantum networks, entanglement is a key resource that allows for data transmission between different nodes. However, to reap the benefits of entanglement and enable efficient quantum communication, the number of generated entangled pairs must be optimized. Indeed, if the entanglement generation rates are not optimized, then some of these valuable resources will be discarded and lost. In this paper, the problem of optimizing the entanglement generation rates and their distribution over a quantum memory is studied. In particular, a quantum network in which users have heterogeneous distances and applications is considered. This problem is posed as a mixed integer nonlinear programming optimization problem whose goal is to efficiently utilize the available quantum memory by distributing the quantum entangled pairs in a way that maximizes the user satisfaction. An interior point optimization method is used to solve the optimization problem and extensive simulations are conducted to evaluate the effectiveness of the proposed system. Simulation results show the key design considerations for efficient quantum networks, and the effect of different network parameters on the network performance.
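As a rough intuition for the memory-distribution problem (not the paper's method, which poses a mixed integer nonlinear program solved by an interior-point method), a greedy allocator can distribute entangled-pair slots so as to equalize user satisfaction; all names and the satisfaction metric here are illustrative assumptions.

```python
def allocate_pairs(demands, memory_slots):
    """Greedy stand-in for the MINLP: hand out entangled-pair slots from
    a fixed quantum-memory budget, each round to the user with the
    lowest fractional satisfaction (pairs served / pairs demanded)."""
    served = [0] * len(demands)
    for _ in range(memory_slots):
        i = min(range(len(demands)), key=lambda j: served[j] / demands[j])
        served[i] += 1
    return served
```

With heterogeneous demands, the budget ends up spread in proportion to demand, which mirrors the satisfaction-maximizing intent of the paper's formulation.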