
True-data Testbed for 5G/B5G Intelligent Network

Added by Shengheng Liu
Publication date: 2020
Language: English





Future beyond fifth-generation (B5G) and sixth-generation (6G) mobile communications will shift from facilitating interpersonal communications to supporting the Internet of Everything (IoE), where intelligent communications with full integration of big data and artificial intelligence (AI) will play an important role in improving network efficiency and providing high-quality service. As a rapidly evolving paradigm, AI-empowered mobile communications demand large amounts of data acquired from real network environments for systematic test and verification. Hence, we build the world's first true-data testbed for 5G/B5G intelligent network (TTIN), which comprises 5G/B5G on-site experimental networks, data acquisition & data warehouse, and AI engine & network optimization. In the TTIN, true network data acquisition, storage, standardization, and analysis are available, which enable system-level online verification of B5G/6G-oriented key technologies and support data-driven network optimization through a closed-loop control mechanism. This paper elaborates on the system architecture and module design of TTIN. Detailed technical specifications and some of the established use cases are also showcased.
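
As a rough illustration of the closed-loop control mechanism described above, the sketch below wires together three hypothetical interfaces; collect_kpis, ai_engine, and apply_config are placeholder names for this sketch, not part of TTIN's published API:

```python
import time

def closed_loop_optimization(collect_kpis, ai_engine, apply_config,
                             interval_s=60.0):
    """Data-driven closed loop: measure -> analyze -> act -> repeat.

    collect_kpis : callable returning a dict of true network measurements
    ai_engine    : callable mapping KPIs to a recommended configuration
    apply_config : callable pushing the configuration to the live network
    """
    while True:
        kpis = collect_kpis()          # true-data acquisition
        config = ai_engine(kpis)       # AI-driven analysis/inference
        apply_config(config)           # actuation on the 5G/B5G network
        time.sleep(interval_s)         # next control cycle
```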

Related research


The combination of cloud computing capabilities at the network edge and artificial intelligence promises to turn future mobile networks into service- and radio-aware entities, able to address the requirements of upcoming latency-sensitive applications. In this context, a challenging research goal is to exploit edge intelligence to dynamically and optimally manage Radio Access Network Slicing (a less mature and more complex technology than fifth-generation Network Slicing) and Radio Resource Management, a very complex task due to the largely unpredictable nature of the wireless channel. This paper presents a novel architecture that leverages Deep Reinforcement Learning at the edge of the network to address Radio Access Network Slicing and Radio Resource Management optimization in support of latency-sensitive applications. The effectiveness of our proposal against baseline methodologies is investigated through computer simulation, considering an autonomous-driving use case.
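
As a toy illustration of learning-based slice resource management, the sketch below uses tabular Q-learning (a deliberate simplification; the paper itself employs Deep Reinforcement Learning) with an invented state, action, and reward design:

```python
import numpy as np

rng = np.random.default_rng(0)

N_LOAD_LEVELS = 5   # discretized slice load (state)
N_ALLOCS = 4        # fraction of radio resources given to one slice (action)

Q = np.zeros((N_LOAD_LEVELS, N_ALLOCS))
alpha, gamma, eps = 0.1, 0.9, 0.1

def reward(load, alloc):
    # Toy reward: penalize latency violations (under-allocation)
    # and wasted spectrum (over-allocation).
    needed = load / (N_LOAD_LEVELS - 1)
    given = (alloc + 1) / N_ALLOCS
    return -abs(given - needed)

state = rng.integers(N_LOAD_LEVELS)
for _ in range(10_000):
    action = (rng.integers(N_ALLOCS) if rng.random() < eps
              else int(Q[state].argmax()))          # epsilon-greedy policy
    r = reward(state, action)
    next_state = rng.integers(N_LOAD_LEVELS)        # i.i.d. load, for simplicity
    Q[state, action] += alpha * (r + gamma * Q[next_state].max()
                                 - Q[state, action])
    state = next_state

print("Learned allocation per load level:", Q.argmax(axis=1))
```
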
Ultra-Reliable Low-Latency Communication (URLLC) is a key feature of 5G systems. The quality-of-service (QoS) requirements imposed by URLLC are a delay below 10 ms and a packet loss rate (PLR) below $10^{-5}$. To satisfy such strict requirements with minimal channel resource consumption, devices need to accurately predict the channel quality and select the Modulation and Coding Scheme (MCS) for URLLC appropriately. This paper presents a novel real-time channel prediction system based on Software-Defined Radio that uses a neural network. The paper also describes and shares an open channel measurement dataset that can be used to compare various channel prediction approaches in different mobility scenarios in future research on URLLC.
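
A minimal sketch of how a predicted SNR could drive MCS selection under a URLLC reliability target; the thresholds and back-off margin below are illustrative assumptions, not values from the paper:

```python
# Illustrative SNR thresholds (dB) per MCS; real tables come from
# link-level simulations targeting the 1e-5 PLR of URLLC.
MCS_TABLE = [  # (mcs_index, min_snr_db, spectral_efficiency)
    (0, -2.0, 0.15), (5, 4.0, 0.88), (10, 10.0, 1.91),
    (15, 16.0, 3.32), (20, 22.0, 5.12),
]

def select_mcs(predicted_snr_db, margin_db=3.0):
    """Pick the most efficient MCS whose SNR threshold clears the
    predicted SNR minus a back-off margin for prediction error."""
    usable = predicted_snr_db - margin_db
    best = MCS_TABLE[0]
    for entry in MCS_TABLE:
        if entry[1] <= usable:
            best = entry
    return best

print(select_mcs(predicted_snr_db=14.2))  # -> (10, 10.0, 1.91)
```

The back-off margin is where the channel predictor earns its keep: the more accurate the prediction, the smaller the margin, and the higher the spectral efficiency that can be sustained at the same PLR.
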
The capability of smarter networked devices to dynamically select appropriate radio connectivity options is especially important in the emerging millimeter-wave (mmWave) systems to mitigate abrupt link blockage in complex environments. To enrich the levels of diversity, mobile mmWave relays can be employed for improved connection reliability. These are considered by 3GPP for on-demand densification on top of the static mmWave infrastructure. However, the performance dynamics of mobile mmWave relaying are not yet well explored, especially under realistic conditions, such as urban vehicular scenarios. In this paper, we develop a mathematical framework for the performance evaluation of mmWave vehicular relaying in a typical street deployment. We analyze and compare alternative connectivity strategies by quantifying the performance gains made available to smart devices in the presence of mmWave relays. We identify situations where the use of mmWave vehicular relaying is particularly beneficial. Our methodology and results can support further standardization and deployment of mmWave relaying in more intelligent 5G+ all-mmWave cellular networks.
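
A back-of-the-envelope sketch of the trade-off the paper quantifies rigorously: a two-hop relay path can beat a direct mmWave link once direct blockage becomes frequent. All probabilities and rates below are assumed numbers, for illustration only:

```python
def expected_rate(p_blockage, rate_los, rate_blocked=0.0):
    """Mean rate of a single mmWave link under random blockage."""
    return (1 - p_blockage) * rate_los + p_blockage * rate_blocked

# Direct link vs. a two-hop vehicular relay path whose hops are
# blocked independently (assumed numbers, for illustration only).
p_direct, p_hop = 0.4, 0.1
rate_direct, rate_relay = 2.0, 1.6   # Gbit/s; relaying costs extra resources

direct = expected_rate(p_direct, rate_direct)
relayed = (1 - p_hop) ** 2 * rate_relay   # both hops must be unblocked

print(f"direct: {direct:.2f} Gbit/s, relayed: {relayed:.2f} Gbit/s")
# With these numbers relaying wins (1.30 vs 1.20), matching the
# intuition that relays pay off when direct blockage is frequent.
```
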
Emerging applications -- cloud computing, the Internet of Things, and augmented/virtual reality -- need responsive, available, secure, ubiquitous, and scalable datacenter networks. Network management currently uses simple, per-packet, data-plane heuristics (e.g., ECMP and sketches) under an intelligent, millisecond-latency control plane that runs data-driven performance and security policies. However, to meet users' quality-of-service expectations in a modern data center, networks must operate intelligently at line rate. In this paper, we present Taurus, an intelligent data plane capable of machine-learning inference at line rate. Taurus adds custom hardware based on a map-reduce abstraction to programmable network devices, such as switches and NICs; this new hardware uses pipelined and SIMD parallelism for fast inference. Our evaluation of a Taurus-enabled switch ASIC -- supporting several real-world benchmarks -- shows that Taurus operates three orders of magnitude faster than a server-based control plane, while increasing area by 24% and latency, on average, by 178 ns. On the long road to self-driving networks, Taurus is the equivalent of adaptive cruise control: deterministic rules steer flows, while machine learning tunes performance and heightens security.
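
As a software analogy of the map-reduce abstraction described above (the actual Taurus design is custom switch/NIC hardware, not Python), the sketch below expresses one dense inference layer as a parallel map of products followed by a per-neuron reduce; the weights are random stand-ins, not a trained model:

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(8, 4))      # 8 packet features -> 4 neurons
b = rng.normal(size=4)

def dense_map_reduce(features):
    mapped = features[:, None] * W          # map: elementwise products (SIMD-friendly)
    reduced = mapped.sum(axis=0) + b        # reduce: per-neuron sums
    return np.maximum(reduced, 0.0)         # ReLU activation

packet_features = rng.normal(size=8)        # e.g. header/flow statistics
print(dense_map_reduce(packet_features))
```
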
Conventional wireless techniques are becoming inadequate for beyond fifth-generation (5G) networks due to latency and bandwidth considerations. To improve the error performance and throughput of wireless communication systems, we propose physical layer network coding (PNC) in an intelligent reflecting surface (IRS)-assisted environment. We consider an IRS-aided butterfly network, where we propose an algorithm for obtaining the optimal IRS phases. Also, analytic expressions for the bit error rate (BER) are derived. The numerical results demonstrate that the proposed scheme significantly improves the BER performance. For instance, the BER at the relay in the presence of a 32-element IRS is three orders of magnitude lower than that without an IRS.
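
A minimal sketch of IRS phase selection for a single reflected link, assuming the standard co-phasing rule (each element cancels the phase of its cascaded channel); the paper's butterfly-network optimization is more involved than this single-link case:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 32                                   # IRS elements, as in the abstract

# Random Rayleigh channels: source -> IRS (h) and IRS -> relay (g).
h = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)
g = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)

# Co-phasing: each element cancels the phase of its cascaded channel,
# so all N reflected paths add coherently at the receiver.
theta = -np.angle(h * g)
gain_opt = np.abs(np.sum(h * np.exp(1j * theta) * g))
gain_rand = np.abs(np.sum(h * np.exp(1j * rng.uniform(0, 2 * np.pi, N)) * g))

print(f"coherent |channel|: {gain_opt:.2f}, random phases: {gain_rand:.2f}")
```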
