
The Next Decade of Telecommunications Artificial Intelligence

Added by: Aidong Yang
Publication date: 2021
Language: English
Authors: Ye Ouyang





It has been an exciting journey since mobile communications and artificial intelligence were conceived 37 years and 64 years ago, respectively. While both fields evolved independently and profoundly changed the communications and computing industries, the rapid convergence of 5G and deep learning is beginning to significantly transform the core communication infrastructure, network management and vertical applications. The paper first outlines the individual roadmaps of mobile communications and artificial intelligence in their early stages, with a focus on reviewing the era from 3G to 5G, when AI and mobile communications started to converge. With regard to telecommunications artificial intelligence, the paper further introduces in detail the progress of artificial intelligence in the mobile communications ecosystem. The paper then summarizes the classifications of AI in telecom ecosystems along with its evolution paths specified by various international telecommunications standardization bodies. Looking towards the next decade, the paper forecasts the prospective roadmap of telecommunications artificial intelligence. In line with the 3GPP and ITU-R timelines for 5G and 6G, the paper further explores network intelligence following the 3GPP and O-RAN routes respectively, experience- and intention-driven network management and operation, the network AI signalling system, intelligent middle-office based BSS, intelligent customer experience management and policy control driven by BSS and OSS convergence, the evolution from SLA to ELA, and intelligent private networks for verticals. The paper concludes with the vision that AI will reshape the future B5G or 6G landscape, and that we need to pivot our R&D, standardization, and ecosystem efforts to fully seize these unprecedented opportunities.
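As a concrete, hedged illustration of the experience- and intention-driven operation mentioned above, the minimal sketch below (not from the paper; the Intent class, the propose_action helper and the thresholds are all hypothetical) shows a single closed-loop step in which a declared KPI intent is checked against observed measurements and a corrective action is suggested:

```python
# Minimal sketch of one intent-driven operation step (illustrative only).
from dataclasses import dataclass
from statistics import mean

@dataclass
class Intent:
    kpi: str          # e.g. "latency_ms"
    target: float     # desired value declared by the operator or tenant
    tolerance: float  # acceptable deviation before the loop reacts

def propose_action(intent: Intent, samples: list) -> str:
    """Compare recent KPI samples against the intent and suggest an action."""
    observed = mean(samples)
    if observed > intent.target + intent.tolerance:
        return f"{intent.kpi}={observed:.1f} violates intent: scale up or re-route"
    return f"{intent.kpi}={observed:.1f} within intent: no action"

if __name__ == "__main__":
    latency = Intent(kpi="latency_ms", target=20.0, tolerance=5.0)
    print(propose_action(latency, [31.2, 29.8, 33.5]))  # degraded -> act
    print(propose_action(latency, [19.0, 21.4, 18.7]))  # healthy -> no action
```

In a real OSS/BSS setting this decision step would sit inside a closed loop together with telemetry collection and policy enforcement.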

Related research

The rise of Artificial Intelligence (AI) will bring with it an ever-increasing willingness to cede decision-making to machines. But rather than just giving machines the power to make decisions that affect us, we need ways to work cooperatively with AI systems. There is a vital need for research in AI and cooperation that seeks to understand the ways in which systems of AIs, and systems of AIs with people, can engender cooperative behavior. Trust in AI is also key: trust that is intrinsic and trust that can only be earned over time. Here we use the term AI in its broadest sense, as employed by the recent 20-Year Community Roadmap for AI Research (Gil and Selman, 2019), including but certainly not limited to recent advances in deep learning. With success, cooperation between humans and AIs can build society just as human-human cooperation has. Whether coming from an intrinsic willingness to be helpful or driven by self-interest, human societies have grown strong and the human species has found success through cooperation. We cooperate in the small -- as family units, with neighbors, with co-workers, with strangers -- and in the large, as a global community that seeks cooperative outcomes around questions of commerce, climate change, and disarmament. Cooperation has also evolved in nature, in cells and among animals. While many cases involving cooperation between humans and AIs will be asymmetric, with the human ultimately in control, AI systems are growing so complex that, even today, it is impossible for humans acting simply as passive observers to fully comprehend their reasoning, recommendations, and actions.
The Internet of Things (IoT) and edge computing applications aim to support a variety of societal needs, including the global pandemic situation that the entire world is currently experiencing and responses to natural disasters. The need for real-time interactive applications such as immersive video conferencing, augmented/virtual reality, and autonomous vehicles, in education, healthcare, disaster recovery and other domains, has never been higher. At the same time, there have been recent technological breakthroughs in highly relevant fields such as artificial intelligence (AI)/machine learning (ML), advanced communication systems (5G and beyond), privacy-preserving computations, and hardware accelerators. 5G mobile communication networks increase communication capacity, reduce transmission latency and error, and save energy -- capabilities that are essential for new applications. The envisioned future 6G technology will integrate many more technologies, including for example visible light communication, to support groundbreaking applications such as holographic communications and high-precision manufacturing. Many of these applications require computations and analytics close to application end-points: that is, at the edge of the network, rather than in a centralized cloud. AI techniques applied at the edge have tremendous potential both to power new applications and to enable more efficient operation of edge infrastructure. However, it is critical to understand where to deploy AI systems within these complex ecosystems, given the advanced applications involved and their specific real-time requirements on AI systems.
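To make the placement question concrete, here is a small hedged sketch (not from the article; the latency figures are made-up placeholders) that decides whether an inference task should run at the edge or in a central cloud based on its end-to-end latency budget:

```python
# Illustrative edge-vs-cloud placement decision driven by a latency budget.
EDGE_RTT_MS, CLOUD_RTT_MS = 5, 60          # assumed network round-trip times
EDGE_INFER_MS, CLOUD_INFER_MS = 30, 8      # edge hardware is slower per inference

def place(latency_budget_ms: float) -> str:
    """Prefer the cheaper cloud placement; fall back to the edge only when
    the cloud cannot meet the real-time budget."""
    cloud_total = CLOUD_RTT_MS + CLOUD_INFER_MS
    edge_total = EDGE_RTT_MS + EDGE_INFER_MS
    if cloud_total <= latency_budget_ms:
        return "cloud"
    if edge_total <= latency_budget_ms:
        return "edge"
    return "infeasible: relax the budget or add accelerators"

print(place(120))   # relaxed budget -> cloud meets it
print(place(40))    # AR/VR-style budget -> only the edge meets it
print(place(20))    # tighter than either path allows
```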
Next-generation wireless networks are expected to support diverse vertical industries and offer countless emerging use cases. To satisfy the stringent requirements of diversified services, network slicing has been developed, which enables service-oriented resource allocation by tailoring the infrastructure network into multiple logical networks. However, there are still challenges in cross-domain, multi-dimensional resource management for end-to-end (E2E) slices in a dynamic and uncertain environment. Trading off the revenue and cost of resource allocation while guaranteeing service quality is of great importance to tenants. Therefore, this article introduces a hierarchical resource management framework that applies deep reinforcement learning both to admission control of resource requests from different tenants and to resource adjustment within the admitted slices of each tenant. In particular, we first discuss the challenges in customized resource management for 6G. Second, the motivation and background are presented to explain why artificial intelligence (AI) is applied to resource customization in multi-tenant slicing. Third, E2E resource management is decomposed into two problems: multi-dimensional resource allocation decisions based on slice-level feedback, and real-time slice adaptation aimed at avoiding service quality degradation. Simulation results demonstrate the effectiveness of AI-based customized slicing. Finally, several significant challenges that need to be addressed in practical implementation are investigated.
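A minimal sketch of the admission-control idea follows; it swaps in tabular Q-learning and a toy capacity model in place of the deep reinforcement learning used in the article, and the states, rewards and capacity figures are purely illustrative:

```python
# Toy Q-learning admission control for slice requests under a capacity budget.
import random

CAPACITY = 10                     # total resource units in the toy infrastructure
ACTIONS = (0, 1)                  # 0 = reject the slice request, 1 = admit it
q = {}                            # state (free_capacity, demand) -> [Q_reject, Q_admit]

def q_values(state):
    return q.setdefault(state, [0.0, 0.0])

def reward_and_next_free(free, demand, action):
    """Admission earns revenue if capacity suffices, otherwise a penalty."""
    if action == 1 and demand <= free:
        return demand * 1.0, free - demand
    if action == 1:
        return -5.0, free          # over-admission: SLA-violation penalty
    return 0.0, free               # rejection: no revenue, no cost

alpha, gamma, epsilon = 0.1, 0.9, 0.2
for _ in range(3000):                              # training episodes
    free, demand = CAPACITY, random.randint(1, 4)
    for _ in range(20):                            # 20 slice requests per episode
        state = (free, demand)
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q_values(state)[a])
        reward, free = reward_and_next_free(free, demand, action)
        demand = random.randint(1, 4)              # next request arrives
        target = reward + gamma * max(q_values((free, demand)))
        q_values(state)[action] += alpha * (target - q_values(state)[action])

state = (3, 2)                                     # 3 units free, request needs 2
print("Admit?", "yes" if q_values(state)[1] > q_values(state)[0] else "no")
```

The article's hierarchical framework additionally adjusts resources inside admitted slices; this sketch covers only the admission decision.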
Liping Yang (2021)
In recent years, Bitcoin price prediction has attracted the interest of researchers and investors. However, the accuracy of previous studies has been limited. Machine learning and deep learning methods have been shown to have strong predictive ability in this area. This paper proposes a method that combines Ensemble Empirical Mode Decomposition (EEMD) with a deep learning method called long short-term memory (LSTM) to address the problem of next-day Bitcoin price forecasting.
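A hedged sketch of that decompose-then-forecast pipeline is shown below. It assumes the PyEMD (EMD-signal) and Keras packages are available, uses a synthetic price series instead of real Bitcoin data, and the window length and network size are illustrative rather than the paper's settings:

```python
# Illustrative EEMD + LSTM pipeline (synthetic data; hyperparameters are placeholders).
import numpy as np
from PyEMD import EEMD
from tensorflow import keras

# Synthetic daily "closing price" series standing in for Bitcoin data.
t = np.arange(500)
price = 100 + 0.05 * t + 5 * np.sin(t / 10) + np.random.normal(0, 1, t.size)

# 1) Decompose the series into intrinsic mode functions (IMFs) with EEMD.
imfs = EEMD().eemd(price)

WINDOW = 10  # number of past days fed to the LSTM to predict the next day

def to_windows(series):
    X = np.array([series[i:i + WINDOW] for i in range(len(series) - WINDOW)])
    return X[..., None], series[WINDOW:].reshape(-1, 1)   # (samples, steps, 1), targets

# 2) Train one small LSTM per IMF and 3) sum the per-IMF next-day forecasts.
next_day = 0.0
for imf in imfs:
    X, y = to_windows(imf)
    model = keras.Sequential([
        keras.Input(shape=(WINDOW, 1)),
        keras.layers.LSTM(16),
        keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    model.fit(X, y, epochs=5, verbose=0)
    last = imf[-WINDOW:].reshape(1, WINDOW, 1)
    next_day += float(model.predict(last, verbose=0)[0, 0])

print("Next-day forecast:", round(next_day, 2))
```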
In this work, we develop a framework that jointly decides on the optimal location of wireless extenders and the channel configuration of extenders and access points (APs) in a Wireless Mesh Network (WMN). Typically, rule-based approaches in the literature result in limited exploration, while reinforcement learning based approaches suffer from slow convergence. Therefore, Artificial Intelligence (AI) is adopted to support network autonomy and to capture insights on system and environment evolution. We propose a Self-X (self-optimizing and self-learning) framework that encapsulates both the environment and an intelligent agent to reach optimal operation through sensing, perception, reasoning and learning in a truly autonomous fashion. The agent derives adequate knowledge from previous actions, improving the quality of future decisions. Domain experience is provided to guide the agent while exploring and exploiting the set of possible actions in the environment. Thus, it guarantees low-cost learning and achieves a near-optimal network configuration, addressing the NP-hard (non-deterministic polynomial-time hard) problem of joint channel assignment and location optimization in WMNs. Extensive simulations are run to validate its fast convergence, high throughput and resilience to dynamic interference conditions. We deploy the framework on off-the-shelf wireless devices to enable autonomous self-optimization and self-deployment, using APs and wireless extenders.
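As a toy illustration of the joint decision such an agent faces (not the authors' Self-X framework; the grid, interference model and greedy hill-climbing below are deliberate simplifications), this sketch searches jointly over extender positions and channels against a hand-made objective:

```python
# Toy joint placement + channel-assignment search over a small grid.
import itertools
import random

AP = (0, 0)                            # fixed access point position on the grid
CLIENTS = [(4, 4), (5, 1), (1, 5)]     # client positions served by the extender
CHANNELS = (1, 6, 11)
AP_CHANNEL = 6

def score(ext_pos, ext_channel):
    """Toy objective: reward short backhaul and client links,
    penalise co-channel operation with the AP."""
    def dist(a, b):
        return abs(a[0] - b[0]) + abs(a[1] - b[1])
    backhaul = dist(AP, ext_pos)
    access = sum(dist(ext_pos, c) for c in CLIENTS)
    interference = 5 if ext_channel == AP_CHANNEL else 0
    return -(backhaul + access + interference)

# Greedy hill climbing with random restarts over the joint configuration space.
best = None
for _ in range(20):
    pos = (random.randint(0, 6), random.randint(0, 6))
    ch = random.choice(CHANNELS)
    improved = True
    while improved:
        improved = False
        for dx, dy, c in itertools.product((-1, 0, 1), (-1, 0, 1), CHANNELS):
            cand = ((pos[0] + dx, pos[1] + dy), c)
            if score(*cand) > score(pos, ch):
                (pos, ch), improved = cand, True
    if best is None or score(pos, ch) > score(*best):
        best = (pos, ch)

print("Chosen extender position and channel:", best)
```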


