
Development of A Fully Data-Driven Artificial Intelligence and Deep Learning for URLLC Application in 6G Wireless Systems: A Survey

Posted by Qazwan Abdullah
Publication date: 2021
Paper language: English





The sixth generation (6G) of wireless systems is expected to be fully data-driven, delivering terabit-per-second rates and supporting an average of 1000+ connections per person by 2030, virtually instantaneously. Data-driven ultra-reliable and low-latency communication (URLLC) is a new service paradigm of the future 6G wireless communication and network architecture, targeting data rates above 100 Gbps with one-millisecond latency. The key constraint is the amount of computing power available to process massive data with well-designed artificial neural networks. Artificial intelligence (AI) offers a new way to design wireless networks: by learning, predicting, and making decisions over large streams of training data, it can transfer expert knowledge to improve network performance. We study the enabling technologies, driven by AI, that will allow communication systems to guarantee low latency. This paper discusses the efficiency of the developing network, addresses the major challenges of its application scenarios, and examines holographic radio, enhanced wireless channel coding, large-scale Internet of Things integration, and haptic communication for virtual and augmented reality as new services on the 6G network. Furthermore, a multi-level deep learning architecture for URLLC enables data-driven AI and 6G networks with device intelligence, as well as innovations based on effective learning capabilities. These difficulties must be solved in order to meet the needs of future smart networks. Finally, this survey categorizes various unexplored research gaps between machine learning and the sixth generation.


Read also

While fifth-generation (5G) communications are being rolled out worldwide, sixth-generation (6G) communications have attracted much attention from both industry and academia. Compared with 5G, 6G will have a wider frequency band, higher transmission rate, better spectrum efficiency, greater connection capacity, shorter delay, broader coverage, and more robust anti-interference capability to satisfy various network requirements. This survey presents an insightful understanding of 6G wireless communications by introducing requirements, features, critical technologies, challenges, and applications. First, we give an overview of 6G from the perspectives of technologies, security and privacy, and applications. Subsequently, we introduce various 6G technologies and their existing challenges in detail, e.g., artificial intelligence (AI), intelligent surfaces, THz, space-air-ground-sea integrated networks, cell-free massive MIMO, etc. Because of these technologies, 6G is expected to outperform existing wireless communication systems regarding the transmission rate, latency, global coverage, etc. Next, we discuss security and privacy techniques that can be applied to protect data in 6G. Since edge devices are expected to gain popularity soon, the vast amount of generated data and frequent data exchange make data leakage easy. Finally, we predict real-world applications built on the technologies and features of 6G; for example, smart healthcare, smart cities, and smart manufacturing will be implemented by taking advantage of AI.
Driven by the unprecedented high throughput and low latency requirements in next-generation wireless networks, this paper introduces an artificial intelligence (AI) enabled framework in which unmanned aerial vehicles (UAVs) use non-orthogonal multiple access (NOMA) and mobile edge computing (MEC) techniques to serve terrestrial mobile users (MUs). The proposed framework enables the terrestrial MUs to offload their computational tasks simultaneously, intelligently, and flexibly, thus enhancing their connectivity as well as reducing their transmission latency and their energy consumption. To this end, the fundamentals of this framework are first introduced. Then, a number of communication and AI techniques are proposed to improve the quality of experience of terrestrial MUs; in particular, federated learning and reinforcement learning are introduced for intelligent task offloading and computing resource allocation. For each learning technique, motivations, challenges, and representative results are presented. Finally, several key technical challenges and open research issues of the proposed framework are summarized.
Flash floods in urban areas occur with increasing frequency. Detecting these floods would greatly help alleviate human and economic losses. However, current flood prediction methods are either too slow or too simplified to capture the flood development in detail. Using deep neural networks, this work aims at boosting the computational speed of a physics-based 2-D urban flood prediction method governed by the Shallow Water Equation (SWE). Convolutional Neural Networks (CNNs) and conditional Generative Adversarial Networks (cGANs) are applied to extract the dynamics of flood from the data simulated by a Partial Differential Equation (PDE) solver. The performance of the data-driven model is evaluated in terms of Mean Squared Error (MSE) and Peak Signal to Noise Ratio (PSNR). The deep learning-based, data-driven flood prediction model is shown to be able to provide precise real-time predictions of flood development.
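The MSE and PSNR metrics named above are standard and easy to reproduce. A minimal sketch (the array values are illustrative, not from the paper's dataset), where PSNR is computed as 10·log10(MAX²/MSE) with MAX the data range of the reference field:

```python
import numpy as np

def mse(a: np.ndarray, b: np.ndarray) -> float:
    """Mean squared error between two fields (e.g., flood depth maps)."""
    return float(np.mean((a - b) ** 2))

def psnr(a: np.ndarray, b: np.ndarray, data_range: float) -> float:
    """Peak signal-to-noise ratio in dB; infinite for identical fields."""
    m = mse(a, b)
    if m == 0.0:
        return float("inf")
    return 10.0 * np.log10(data_range ** 2 / m)

# Toy example: reference vs. predicted water depths (meters).
truth = np.array([[0.0, 1.0], [2.0, 3.0]])
pred  = np.array([[0.5, 1.0], [2.0, 3.0]])
# MSE = 0.25 / 4 = 0.0625; PSNR = 10 * log10(9 / 0.0625) ≈ 21.58 dB
```

Higher PSNR means the data-driven surrogate tracks the PDE solver's output more closely.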
This paper reviews the current development of artificial intelligence (AI) techniques for the application area of robot communication. The study of the control and operation of multiple robots collaborating toward a common goal is fast growing. Communication among members of a robot team, and even with humans, is becoming essential in many real-world applications. The survey focuses on AI techniques for robot communication that enhance the communication capability of the multi-robot team, enabling more complex activities, better decision-making, coordinated action, and efficient task execution.
With the advent of 5G and the research into beyond 5G (B5G) networks, a novel and very relevant research issue is how to manage the coexistence of different types of traffic, each with very stringent but completely different requirements. In this paper we propose a deep reinforcement learning (DRL) algorithm to slice the available physical layer resources between ultra-reliable low-latency communications (URLLC) and enhanced Mobile BroadBand (eMBB) traffic. Specifically, in our setting the time-frequency resource grid is fully occupied by eMBB traffic and we train the DRL agent to employ proximal policy optimization (PPO), a state-of-the-art DRL algorithm, to dynamically allocate the incoming URLLC traffic by puncturing eMBB codewords. Assuming that each eMBB codeword can tolerate a certain limited amount of puncturing, beyond which it is in outage, we show that the policy devised by the DRL agent never violates the latency requirement of URLLC traffic and, at the same time, manages to keep the number of eMBB codewords in outage at minimum levels, when compared to other state-of-the-art schemes.
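The puncturing trade-off described above can be illustrated with a toy greedy allocator: serve every incoming URLLC demand immediately (so latency is never violated) and spread the punctures onto the eMBB codewords with the most remaining tolerance to limit outages. This is a hypothetical baseline for intuition only, not the paper's PPO policy:

```python
def allocate_punctures(urllc_demand: int, remaining_tolerance: list):
    """Place `urllc_demand` punctures on eMBB codewords.

    `remaining_tolerance[i]` is how many more punctures codeword i can
    absorb before going into outage. URLLC traffic is always served
    immediately (zero queuing delay); each puncture is placed on the
    codeword with the largest remaining budget. Mutates the input list.
    """
    placements = []
    for _ in range(urllc_demand):
        # Greedy choice: codeword with the most headroom left.
        i = max(range(len(remaining_tolerance)),
                key=lambda k: remaining_tolerance[k])
        remaining_tolerance[i] -= 1
        placements.append(i)
    # A codeword driven below zero tolerance is in outage.
    outages = sum(1 for t in remaining_tolerance if t < 0)
    return placements, outages

# 4 URLLC arrivals, three codewords with budgets 2, 1, 3: no outage needed.
placements, outages = allocate_punctures(4, [2, 1, 3])
```

A learned PPO policy replaces the greedy rule with a state-dependent one, which matters when future URLLC arrivals are bursty and the per-codeword budgets are not known exactly.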
