
AI-Native Network Slicing for 6G Networks

Published by: Conghao Zhou
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





With the global roll-out of the fifth generation (5G) networks, it is necessary to look beyond 5G and envision the sixth generation (6G) networks. The 6G networks are expected to have space-air-ground integrated networking, advanced network virtualization, and ubiquitous intelligence. This article proposes an artificial intelligence (AI)-native network slicing architecture for 6G networks to facilitate intelligent network management and support emerging AI services. AI is built into the proposed network slicing architecture to enable the synergy of AI and network slicing. AI solutions are investigated for the entire lifecycle of network slicing to facilitate intelligent network management, i.e., AI for slicing. Furthermore, network slicing approaches are discussed to support emerging AI services by constructing slice instances and performing efficient resource management, i.e., slicing for AI. Finally, a case study is presented, followed by a discussion of open research issues that are essential for AI-native network slicing in 6G.
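To make the "AI for slicing" idea concrete, below is a minimal Python sketch (not from the article) in which a learned demand predictor drives resource rescaling for a slice instance over its lifecycle. The class names, the moving-average stand-in for the predictor, and the resource-block granularity are illustrative assumptions.

# Hypothetical sketch: an AI-for-slicing loop in which a learned predictor
# forecasts per-slice demand and a controller rescales the resources reserved
# for a slice instance over its lifecycle.
from dataclasses import dataclass, field
from typing import List

@dataclass
class SliceInstance:
    slice_id: str
    reserved_rbs: int                      # reserved radio resource blocks
    demand_history: List[float] = field(default_factory=list)

def forecast_demand(history: List[float]) -> float:
    """Placeholder for a learned traffic predictor (e.g., an RNN or transformer)."""
    if not history:
        return 0.0
    return sum(history[-5:]) / min(len(history), 5)   # naive moving-average stand-in

def rescale_slice(s: SliceInstance, capacity_per_rb: float = 1.0) -> None:
    """Adjust reserved resources toward the forecast demand (AI for slicing)."""
    predicted = forecast_demand(s.demand_history)
    s.reserved_rbs = max(1, round(predicted / capacity_per_rb))

slice_ = SliceInstance("embb-01", reserved_rbs=10, demand_history=[8, 9, 12, 11, 13])
rescale_slice(slice_)
print(slice_.reserved_rbs)   # reserved resources now follow the predicted demand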


Read also

Network softwarization has revolutionized the architecture of cellular wireless networks. State-of-the-art container-based virtual radio access networks (vRAN) provide enormous flexibility and reduced lifecycle management costs, but they also come with prohibitive energy consumption. We argue that for future AI-native wireless networks to be flexible and energy efficient, there is a need for a new abstraction in network softwarization that caters to neural-network workloads and allows a large degree of service composability. In this paper we present the NeuroRAN architecture, which leverages stateful functions as a user-facing execution model, complemented with virtualized resources and decentralized resource management. We show that neural-network-based implementations of common transceiver functional blocks fit the proposed architecture, and we discuss key research challenges related to compilation and code generation, resource management, reliability, and security.
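As a rough illustration of expressing a transceiver functional block as a stateful, neural-network-based function, here is a hedged Python/NumPy sketch. The NeuralDemapper class, its layer sizes, and the tiny tanh network are assumptions for illustration only; they are not NeuroRAN code.

# Illustrative sketch: a transceiver functional block (a soft demapper) written
# as a stateful function in the spirit of the proposed execution model.
import numpy as np

class NeuralDemapper:
    """Stateful block: keeps its weights (and could keep channel state) between calls."""
    def __init__(self, in_dim: int = 2, hidden: int = 16, out_bits: int = 2, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.standard_normal((in_dim, hidden)) * 0.1
        self.w2 = rng.standard_normal((hidden, out_bits)) * 0.1

    def __call__(self, iq_samples: np.ndarray) -> np.ndarray:
        """Map received I/Q samples to per-bit log-likelihood ratios."""
        h = np.tanh(iq_samples @ self.w1)
        return h @ self.w2            # LLRs, one column per bit

demap = NeuralDemapper()
llrs = demap(np.array([[0.9, -1.1], [-0.7, 0.8]]))   # two received QPSK symbols
print(llrs.shape)                                    # (2, 2)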
We present DeepIA, a deep neural network (DNN) framework for enabling fast and reliable initial access (IA) for AI-driven beyond-5G and 6G millimeter wave (mmWave) networks. DeepIA reduces the beam sweep time compared to a conventional exhaustive-search-based IA process by utilizing only a subset of the available beams. DeepIA maps received signal strengths (RSSs) obtained from a subset of beams to the beam that is best oriented to the receiver. In both line-of-sight (LoS) and non-line-of-sight (NLoS) conditions, DeepIA reduces the IA time and outperforms the conventional IA in beam prediction accuracy. We show that the beam prediction accuracy of DeepIA saturates with the number of beams used for IA and depends on the particular selection of the beams. In LoS conditions, the selection of the beams is consequential and improves the accuracy by up to 70%; in NLoS conditions, it improves the accuracy by up to 35%. We find that averaging multiple RSS snapshots further reduces the number of beams needed and achieves more than 95% accuracy in both LoS and NLoS conditions. Finally, we evaluate the beam prediction time of DeepIA through an embedded hardware implementation and show the improvement over conventional beam sweeping.
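The core mapping described above, from RSS measurements on a probed subset of beams to the index of the best beam, can be sketched as a small classifier. The following PyTorch snippet is an illustration only: the layer sizes, beam counts, and synthetic training data are assumptions, not the DeepIA model or dataset.

# Hedged sketch of the DeepIA idea: a small DNN that maps RSS readings from a
# subset of beams to the index of the best-oriented beam.
import torch
import torch.nn as nn

NUM_PROBED_BEAMS, NUM_TOTAL_BEAMS = 8, 24    # probe a subset, predict over all beams

model = nn.Sequential(
    nn.Linear(NUM_PROBED_BEAMS, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, NUM_TOTAL_BEAMS),          # logits over candidate beams
)

# Synthetic stand-in for (RSS vector, best-beam label) training pairs.
rss = torch.randn(256, NUM_PROBED_BEAMS)
best_beam = torch.randint(0, NUM_TOTAL_BEAMS, (256,))

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for _ in range(100):                         # a few passes over the toy data
    opt.zero_grad()
    loss = loss_fn(model(rss), best_beam)
    loss.backward()
    opt.step()

predicted = model(rss[:1]).argmax(dim=-1)    # beam index chosen without a full sweep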
The mobile communication system has become the fundamental infrastructure supporting digital demands from all industry sectors, and 6G is envisioned to go far beyond the communication-only purpose. A consensus is emerging that 6G will treat Artificial Intelligence (AI) as the cornerstone and has the potential to provide intelligence inclusion, i.e., to enable access to AI services anytime and anywhere by anyone. Clearly, the intelligence inclusion vision has far-reaching influence on the corresponding network architecture design in 6G and deserves a clean-slate rethink. In this article, we propose an end-to-end system architecture design scope for 6G and discuss the necessity of incorporating an independent data plane and a novel intelligent plane, with particular emphasis on end-to-end AI workflow orchestration, management, and operation. We also highlight the advantages of provisioning converged connectivity and computing services at the network function plane. Benefiting from these approaches, we believe that 6G will evolve into an everything-as-a-service (XaaS) platform with significantly enhanced business merits.
Network slicing has emerged as a new business for operators, allowing them to sell customized slices to various tenants at different prices. In order to provide better-performing and cost-efficient services, network slicing involves challenging technical issues and urgently calls for intelligent innovations that keep resource management consistent with users' activities per slice. In that regard, deep reinforcement learning (DRL), which learns how to interact with the environment by trying alternative actions and reinforcing those that produce more rewarding consequences, is considered a promising solution. In this paper, after briefly reviewing the fundamental concepts of DRL, we investigate its application to typical resource management problems in network slicing, including radio resource slicing and priority-based core network slicing, and demonstrate the advantage of DRL over several competing schemes through extensive simulations. Finally, we discuss the possible challenges of applying DRL to network slicing from a general perspective.
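As a hedged sketch of how DRL can be applied to radio resource slicing, the toy example below trains a small Q-network that re-partitions resource blocks between two slices and is rewarded for matching their fluctuating demands. The environment, reward, and network sizes are invented for illustration and are not the simulations used in the paper.

# Toy DQN-style agent for splitting radio resource blocks between two slices.
import random
import torch
import torch.nn as nn

TOTAL_RBS = 10
ACTIONS = [(k, TOTAL_RBS - k) for k in range(TOTAL_RBS + 1)]   # RB split (slice A, slice B)

qnet = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, len(ACTIONS)))
opt = torch.optim.Adam(qnet.parameters(), lr=1e-3)

def step(action: int):
    """Toy environment: reward is higher when the split matches the current demands."""
    demand = torch.tensor([random.randint(2, 8), random.randint(2, 8)], dtype=torch.float32)
    alloc = torch.tensor(ACTIONS[action], dtype=torch.float32)
    reward = -torch.abs(alloc - demand).sum().item()
    return demand, reward                     # the new demand serves as the next state

state = torch.tensor([5.0, 5.0])              # last observed per-slice demand
for t in range(500):
    # epsilon-greedy action selection
    a = random.randrange(len(ACTIONS)) if random.random() < 0.1 else int(qnet(state).argmax())
    next_state, r = step(a)
    # one-step Q-learning target (no replay buffer, for brevity)
    with torch.no_grad():
        target = r + 0.9 * qnet(next_state).max()
    loss = (qnet(state)[a] - target) ** 2
    opt.zero_grad(); loss.backward(); opt.step()
    state = next_state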
Next-generation wireless networks are expected to support diverse vertical industries and offer countless emerging use cases. To satisfy the stringent requirements of diversified services, network slicing has been developed, which enables service-oriented resource allocation by tailoring the infrastructure network into multiple logical networks. However, challenges remain in cross-domain, multi-dimensional resource management for end-to-end (E2E) slices under dynamic and uncertain environments. Trading off the revenue and cost of resource allocation while guaranteeing service quality is significant to tenants. Therefore, this article introduces a hierarchical resource management framework that utilizes deep reinforcement learning for admission control of resource requests from different tenants and for resource adjustment within the admitted slices of each tenant. In particular, we first discuss the challenges in customized resource management for 6G. Second, the motivation and background are presented to explain why artificial intelligence (AI) is applied to resource customization in multi-tenant slicing. Third, E2E resource management is decomposed into two problems: multi-dimensional resource allocation decisions based on slice-level feedback, and real-time slice adaptation aimed at avoiding service quality degradation. Simulation results demonstrate the effectiveness of AI-based customized slicing. Finally, several significant challenges that need to be addressed in practical implementation are investigated.
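Below is a minimal structural sketch of the two-level decomposition described above: an upper-level admission controller for tenant requests and a lower-level adjuster that adapts resources inside an admitted slice. The class names and simple threshold rules are illustrative assumptions standing in for the DRL policies discussed in the article.

# Structural sketch (not the paper's code) of hierarchical slice resource management.
from dataclasses import dataclass

@dataclass
class SliceRequest:
    tenant: str
    requested_rbs: int
    expected_revenue: float

class AdmissionController:
    """Upper level: decide whether admitting a request is worth its resource cost."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.used = 0
    def admit(self, req: SliceRequest) -> bool:
        # In the article this decision is learned with DRL; a revenue-per-RB
        # threshold stands in for the learned policy here.
        ok = (self.used + req.requested_rbs <= self.capacity
              and req.expected_revenue / req.requested_rbs > 0.5)
        if ok:
            self.used += req.requested_rbs
        return ok

class IntraSliceAdjuster:
    """Lower level: real-time adaptation to avoid service-quality degradation."""
    def adjust(self, allocated_rbs: int, observed_load: float) -> int:
        # Scale up when the slice runs hot, release resources when it idles.
        if observed_load > 0.9:
            return allocated_rbs + 1
        if observed_load < 0.3:
            return max(1, allocated_rbs - 1)
        return allocated_rbs

ac = AdmissionController(capacity=50)
print(ac.admit(SliceRequest("tenant-a", requested_rbs=10, expected_revenue=8.0)))  # True
print(IntraSliceAdjuster().adjust(allocated_rbs=10, observed_load=0.95))           # 11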
