
6G: from Densification to Diversification

Added by Hyunsoo Kim
Publication date: 2021
Language: English





The 5G system has finally entered commercialization, and now is the time to start discussing the roadmap for the 6G system. While the 5G system was designed with a focus on discovering new service types for high-speed, low-latency, and massive-connectivity services, the evolution of the network interface for 6G should be considered with an eye toward supporting increasingly complicated communication environments. As machine-driven data traffic continues to increase exponentially, 6G must be able to support connection methods that did not previously exist. In a departure from base-station-oriented cell densification, network diversification is necessary if we are to satisfy the comprehensive requirements that end terminals impose for diverse applications. In this article, we predict what will drive 6G and examine the key requirements that should be considered in its design. We then present four types of network architectures, diversified according to link characteristics, communication ranges, and target services. The four types of networks play complementary roles while collaborating across the entire 6G network. Lastly, we call attention to key technologies and challenges in the air interface, the network, and assistive technologies that will have to be addressed when designing the 6G system.



Related research

In this paper, we investigate the impact of network densification on performance in terms of downlink signal-to-interference ratio (SIR) coverage probability and network area spectral efficiency (ASE). A bounded dual-slope path loss model and practical user equipment (UE) densities are incorporated into the analysis; to our knowledge, the two have never been jointly considered before. Using stochastic geometry, we derive an integral expression, along with closed-form bounds, for the coverage probability and the ASE, validated by simulation results. From these, we characterize the asymptotic behavior of ultra-densification: the coverage probability and ASE converge to non-zero values in the asymptotic regime unless the UE density goes to infinity (full load). We also analyze the effect of UE density on the coverage probability: it exhibits a U-shape for large UE densities, because interference falls into the near-field, but keeps increasing for low UE densities. Furthermore, our results indicate that performance is overestimated when the bounded dual-slope path loss model is not applied. The derived expressions and results in this work pave the way for future network provisioning.
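As a rough illustration of this kind of analysis, the following Python sketch estimates the downlink SIR coverage probability by Monte Carlo simulation under a bounded dual-slope path loss model. Every parameter here (path loss exponents, corner distance, bounding distance, base station density, window size, SIR threshold) is an illustrative assumption rather than a value from the paper, and the full-load case, in which every base station transmits, is assumed.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative parameters (not the paper's settings).
    ALPHA_NEAR, ALPHA_FAR = 2.0, 4.0   # near- and far-field path loss exponents
    R_CORNER = 50.0                    # slope-change (corner) distance, meters
    R_MIN = 1.0                        # distance below which path loss is bounded
    BS_DENSITY = 1e-4                  # base stations per square meter
    AREA_HALF = 1000.0                 # half-width of the simulation window, meters
    SIR_THRESHOLD = 1.0                # 0 dB

    def path_gain(d):
        """Bounded dual-slope path loss: exponent ALPHA_NEAR up to R_CORNER,
        ALPHA_FAR beyond it, and bounded below distance R_MIN."""
        d = np.maximum(d, R_MIN)
        near = d ** (-ALPHA_NEAR)
        far = (R_CORNER ** (ALPHA_FAR - ALPHA_NEAR)) * d ** (-ALPHA_FAR)
        return np.where(d <= R_CORNER, near, far)

    def coverage_probability(n_trials=5000):
        covered = 0
        for _ in range(n_trials):
            # Poisson point process of base stations around a typical user at the origin.
            n_bs = rng.poisson(BS_DENSITY * (2 * AREA_HALF) ** 2)
            if n_bs == 0:
                continue
            xy = rng.uniform(-AREA_HALF, AREA_HALF, size=(n_bs, 2))
            g = path_gain(np.linalg.norm(xy, axis=1)) * rng.exponential(size=n_bs)
            # Serve the instantaneously strongest BS; all others interfere (full load).
            sir = g.max() / (g.sum() - g.max() + 1e-30)
            covered += sir >= SIR_THRESHOLD
        return covered / n_trials

    print(f"estimated SIR coverage probability: {coverage_probability():.3f}")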
Next-generation wireless networks are expected to support diverse vertical industries and offer countless emerging use cases. To satisfy the stringent requirements of diversified services, network slicing has been developed; it enables service-oriented resource allocation by tailoring the infrastructure network into multiple logical networks. However, challenges remain in cross-domain, multi-dimensional resource management for end-to-end (E2E) slices under dynamic and uncertain environments. Trading off the revenue and cost of resource allocation while guaranteeing service quality is of great importance to tenants. This article therefore introduces a hierarchical resource management framework that applies deep reinforcement learning both to the admission control of resource requests from different tenants and to resource adjustment within each tenant's admitted slices. Specifically, we first discuss the challenges in customized resource management for 6G. Second, we present the motivation and background to explain why artificial intelligence (AI) is applied to resource customization in multi-tenant slicing. Third, E2E resource management is decomposed into two problems: multi-dimensional resource allocation decisions based on slice-level feedback, and real-time slice adaptation aimed at avoiding service-quality degradation. Simulation results demonstrate the effectiveness of AI-based customized slicing. Finally, we investigate several significant challenges that need to be addressed in practical implementations.
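To make the admission-control idea concrete, here is a minimal sketch that uses tabular Q-learning as a simple stand-in for the article's deep-reinforcement-learning agent. The state, the actions, the reward shaping (revenue for served slices, a penalty for over-admission), the arrival and departure dynamics, and all numeric values are illustrative assumptions, not the article's formulation.

    import random

    random.seed(0)

    CAPACITY = 10              # resource units in the infrastructure (illustrative)
    ACTIONS = (0, 1)           # 0 = reject the request, 1 = admit it
    ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

    Q = {}  # Q-table indexed by ((free_units, request_size), action)

    def q(state, a):
        return Q.get((state, a), 0.0)

    def choose(state):
        """Epsilon-greedy action selection."""
        if random.random() < EPS:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: q(state, a))

    def step(free, req, action):
        """Admit or reject one slice request; returns (new free units, reward)."""
        if action == 1 and req <= free:
            return free - req, float(req)      # revenue proportional to slice size
        if action == 1:
            return free, -2.0 * req            # over-admission: SLA penalty
        return free, 0.0                       # rejection forgoes revenue

    free, req = CAPACITY, random.randint(1, 4)
    for _ in range(50_000):
        state = (free, req)
        a = choose(state)
        free, reward = step(free, req, a)
        if random.random() < 0.3:              # a previously admitted slice departs
            free = min(CAPACITY, free + 1)
        req = random.randint(1, 4)             # next arriving request
        best_next = max(q((free, req), b) for b in ACTIONS)
        Q[(state, a)] = q(state, a) + ALPHA * (reward + GAMMA * best_next - q(state, a))

    # With a full pool of free units, the learned policy should admit any request size.
    print({r: max(ACTIONS, key=lambda a: q((CAPACITY, r), a)) for r in range(1, 5)})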
Non-terrestrial networks (NTNs) traditionally had limited applications. However, recent technological advancements have opened up myriad applications of NTNs for 5G and beyond networks, especially when they are integrated into terrestrial networks (TNs). This article comprehensively surveys the evolution of NTNs, highlighting their relevance to 5G networks and, essentially, how they will play a pivotal role in the development of 6G and beyond wireless networks. The survey discusses important features of NTN integration into TNs, delving into the new range of services and use cases, the various architectures, and the new approaches being adopted to develop a new wireless ecosystem. It covers the major progress and outcomes from academic research as well as industrial efforts. We first introduce the relevant 5G use cases and general integration challenges, such as handover and deployment difficulties. We then review NTN operations in the mmWave bands and their potential for the Internet of Things (IoT). Further, we discuss the significance of mobile edge computing (MEC) and machine learning (ML) in NTNs by reviewing the relevant research. We also discuss the corresponding higher-layer advancements and relevant field trials and prototyping efforts at both academic and industrial levels. Finally, we identify and review 6G and beyond application scenarios, novel architectures, technological enablers, and higher-layer aspects pertinent to NTN integration.
We study and compare three coded schemes for single-server wireless broadcast of multiple-description-coded content to heterogeneous users. The users (sink nodes) demand different numbers of descriptions over links with different packet loss rates. The three schemes are based on LT codes, growth codes, and randomized chunked codes, and are compared on the basis of the total number of transmissions required to deliver the demands of all users, which we refer to as the server (source) delivery time. We design the degree distributions of the LT codes by solving suitably defined linear optimization problems and numerically characterize the achievable delivery time of the different coding schemes. We find that including a systematic phase (uncoded transmission) is significantly beneficial for scenarios with low demands, and that coding is necessary for efficiently delivering high demands. Different demand and error-rate scenarios may require very different coding schemes, and growth codes and chunked codes do not perform as well as optimized LT codes in this heterogeneous communication scenario.
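For context, the sketch below shows the robust soliton degree distribution that is standard for LT codes, together with a single LT encoding step. The paper instead designs its degree distributions by solving linear optimization problems, so this well-known distribution is offered only as a concrete baseline; the parameters c and delta are illustrative.

    import math
    import random

    random.seed(1)

    def robust_soliton(k, c=0.1, delta=0.5):
        """Robust soliton degree distribution over degrees 1..k."""
        s = c * math.log(k / delta) * math.sqrt(k)
        rho = [1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]
        tau = [0.0] * k
        pivot = max(1, min(int(round(k / s)), k))
        for d in range(1, pivot):
            tau[d - 1] = s / (d * k)
        tau[pivot - 1] = s * math.log(s / delta) / k
        weights = [r + t for r, t in zip(rho, tau)]
        z = sum(weights)
        return [w / z for w in weights]

    def lt_encode_symbol(source, dist):
        """Draw a degree from dist, then XOR that many randomly chosen source symbols."""
        k = len(source)
        degree = random.choices(range(1, k + 1), weights=dist)[0]
        chosen = random.sample(range(k), degree)
        payload = 0
        for i in chosen:
            payload ^= source[i]
        return chosen, payload  # the receiver needs the neighbor set to decode

    source = [random.getrandbits(8) for _ in range(100)]
    dist = robust_soliton(len(source))
    neighbors, payload = lt_encode_symbol(source, dist)
    print(f"encoded symbol of degree {len(neighbors)}, payload {payload}")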
We consider the problem of efficient packet dissemination in wireless networks with point-to-multipoint wireless broadcast channels. We propose a dynamic policy that achieves the broadcast capacity of the network. This policy is obtained by first transforming the original multi-hop network into a precedence-relaxed virtual single-hop network and then finding an optimal broadcast policy for the relaxed network. The resulting policy is shown to be throughput-optimal for the original wireless network using a sample-path argument. We also prove the NP-completeness of the finite-horizon broadcast problem, in contrast to the polynomial-time solvability of the problem with point-to-point channels. Illustrative simulation results demonstrate the efficacy of the proposed broadcast policy in achieving the full broadcast capacity with low delay.
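The paper's capacity-achieving policy is built from the precedence-relaxed virtual network, which an abstract cannot reproduce; the toy Python simulation below only conveys the flavor of dynamic broadcast scheduling with a simple greedy rule in which each node transmits the packet wanted by the most neighbors. The topology, the interference-free reception model, and the greedy rule itself are illustrative assumptions, not the paper's throughput-optimal policy.

    from collections import defaultdict

    # Toy multi-hop topology and workload (illustrative).
    EDGES = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4)]
    N_NODES, N_PACKETS, SOURCE = 5, 20, 0

    adj = defaultdict(list)
    for u, v in EDGES:
        adj[u].append(v)
        adj[v].append(u)

    has = [set() for _ in range(N_NODES)]
    has[SOURCE] = set(range(N_PACKETS))  # the source starts with every packet

    slots = 0
    while any(len(has[n]) < N_PACKETS for n in range(N_NODES)):
        received = []  # (node, packet) pairs, applied at the end of the slot
        for u in range(N_NODES):
            # Weight of a packet = number of neighbors still missing it.
            useful = [(sum(p not in has[v] for v in adj[u]), p)
                      for p in has[u] if any(p not in has[v] for v in adj[u])]
            if not useful:
                continue
            _, pkt = max(useful)  # broadcast the most-wanted packet
            received += [(v, pkt) for v in adj[u] if pkt not in has[v]]
        for v, pkt in received:
            has[v].add(pkt)
        slots += 1

    print(f"all {N_PACKETS} packets delivered to all {N_NODES} nodes in {slots} slots")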
