
Edge Intelligence: Architectures, Challenges, and Applications

Published by Dianlei Xu
Publication date: 2020
Research field: Informatics Engineering
Language: English





Edge intelligence refers to a set of connected systems and devices for data collection, caching, processing, and analysis in locations close to where the data is captured, based on artificial intelligence. The aim of edge intelligence is to enhance the quality and speed of data processing and to protect the privacy and security of the data. Although it emerged only recently, spanning the period from 2011 to the present, this field of research has shown explosive growth over the past five years. In this paper, we present a thorough and comprehensive survey of the literature on edge intelligence. We first identify four fundamental components of edge intelligence, namely edge caching, edge training, edge inference, and edge offloading, based on theoretical and practical results pertaining to proposed and deployed systems. We then provide a systematic classification of the state of the solutions by examining research results and observations for each of the four components, and present a taxonomy that includes practical problems, adopted techniques, and application goals. For each category, we elaborate on, compare, and analyse the literature from the perspectives of adopted techniques, objectives, performance, advantages, and drawbacks. This survey article provides a comprehensive introduction to edge intelligence and its application areas. In addition, we summarise the development of this emerging research field and the current state of the art, and discuss the important open issues and possible theoretical and technical solutions.




Read also

Bo Yang, Xuelin Cao, Kai Xiong (2020)
In a level-5 autonomous driving system, the autonomous driving vehicles (AVs) are expected to sense the surroundings via analyzing a large amount of data captured by a variety of onboard sensors in near-real-time. As a result, enormous computing costs will be introduced to the AVs for processing the tasks with the deployed machine learning (ML) model, while the inference accuracy may not be guaranteed. In this context, the advent of edge intelligence (EI) and sixth-generation (6G) wireless networking are expected to pave the way to more reliable and safer autonomous driving by providing multi-access edge computing (MEC) together with ML to AVs in close proximity. To realize this goal, we propose a two-tier EI-empowered autonomous driving framework. In the autonomous-vehicles tier, the autonomous vehicles are deployed with the shallow layers by splitting the trained deep neural network model. In the edge-intelligence tier, an edge server is implemented with the remaining layers (also deep layers) and an appropriately trained multi-task learning (MTL) model. In particular, obtaining the optimal offloading strategy (including the binary offloading decision and the computational resources allocation) can be formulated as a mixed-integer nonlinear programming (MINLP) problem, which is solved via MTL in near-real-time with high accuracy. On another note, an edge-vehicle joint inference is proposed through neural network segmentation to achieve efficient online inference with data privacy-preserving and less communication delay. Experiments demonstrate the effectiveness of the proposed framework, and open research topics are finally listed.
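The edge-vehicle joint inference described above can be sketched as splitting one trained network at a chosen layer: the vehicle runs the shallow layers and transmits only the intermediate feature, and the edge server runs the remaining deep layers. The toy layer sizes, weights, and split point below are illustrative assumptions, not values from the paper:

```python
# Minimal sketch of edge-vehicle joint inference via neural network splitting.
# The 3-layer toy network, its weights, and the split point are assumptions
# for illustration only.

def relu(v):
    return [max(0.0, x) for x in v]

def dense(v, weights):
    # weights: one row of coefficients per output unit
    return [sum(w * x for w, x in zip(row, v)) for row in weights]

W1 = [[0.5, -0.2], [0.1, 0.8]]   # shallow layer: runs on the vehicle
W2 = [[0.3, 0.4], [-0.6, 0.9]]   # deep layers: run on the edge server
W3 = [[1.0, -1.0]]

def vehicle_forward(x):
    """On-board part: compute the intermediate feature to transmit."""
    return relu(dense(x, W1))

def edge_forward(feature):
    """Edge part: finish inference from the transmitted feature."""
    return dense(relu(dense(feature, W2)), W3)

sensor_input = [1.0, 2.0]
feature = vehicle_forward(sensor_input)  # only this vector crosses the network
result = edge_forward(feature)
```

Because only the intermediate feature leaves the vehicle, raw sensor data stays on board, which is the privacy and communication-delay argument the abstract makes.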
The mobile communication system has transformed to be the fundamental infrastructure to support digital demands from all industry sectors, and 6G is envisioned to go far beyond the communication-only purpose. There is coming to a consensus that 6G will treat Artificial Intelligence (AI) as the cornerstone and has a potential capability to provide intelligence inclusion, which implies to enable the access of AI services at anytime and anywhere by anyone. Apparently, the intelligent inclusion vision produces far-reaching influence on the corresponding network architecture design in 6G and deserves a clean-slate rethink. In this article, we propose an end-to-end system architecture design scope for 6G, and talk about the necessity to incorporate an independent data plane and a novel intelligent plane with particular emphasis on end-to-end AI workflow orchestration, management and operation. We also highlight the advantages to provision converged connectivity and computing services at the network function plane. Benefiting from these approaches, we believe that 6G will turn to an everything as a service (XaaS) platform with significantly enhanced business merits.
Future machine learning (ML) powered applications, such as autonomous driving and augmented reality, involve training and inference tasks with timeliness requirements and are communication and computation intensive, which calls for the edge learning framework. The real-time requirements drive us to go beyond accuracy for ML. In this article, we introduce the concept of timely edge learning, aiming to achieve accurate training and inference while minimizing the communication and computation delay. We discuss key challenges and propose corresponding solutions from data, model and resource management perspectives to meet the timeliness requirements. Particularly, for edge training, we argue that the total training delay rather than rounds should be considered, and propose data or model compression, and joint device scheduling and resource management schemes for both centralized training and federated learning systems. For edge inference, we explore the dependency between accuracy and delay for communication and computation, and propose dynamic data compression and flexible pruning schemes. Two case studies show that the timeliness performances, including the training accuracy under a given delay budget and the completion ratio of inference tasks within deadline, are highly improved with the proposed solutions.
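The flexible pruning idea above, trading model size against a delay budget, can be sketched with simple magnitude pruning: keep only as many of the largest-magnitude weights as the budget allows. The linear cost model and the specific weights are assumptions for illustration, not the article's exact scheme:

```python
# Hedged sketch of delay-aware magnitude pruning: given a per-task delay
# budget, keep the largest-magnitude weights whose total (assumed linear)
# compute cost fits the budget, and zero out the rest.

def prune_to_budget(weights, cost_per_weight, delay_budget):
    """Zero out all but the k largest-magnitude weights that fit the budget."""
    keep = int(delay_budget // cost_per_weight)
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]), reverse=True)
    mask = [False] * len(weights)
    for i in order[:keep]:
        mask[i] = True
    return [w if m else 0.0 for w, m in zip(weights, mask)]

weights = [0.05, -1.2, 0.3, 0.9, -0.01, 0.4]
# Tight budget: time for only 3 weights -> keep the 3 largest magnitudes.
pruned = prune_to_budget(weights, cost_per_weight=1.0, delay_budget=3.0)
```

A looser budget would simply raise `keep`, letting the model run at higher accuracy when the deadline permits, which mirrors the accuracy-delay trade-off the abstract describes.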
The increased adoption of Artificial Intelligence (AI) presents an opportunity to solve many socio-economic and environmental challenges; however, this cannot happen without securing AI-enabled technologies. In recent years, most AI models have proven vulnerable to advanced and sophisticated hacking techniques. This challenge has motivated concerted research efforts into adversarial AI, with the aim of developing robust machine and deep learning models that are resilient to different types of adversarial scenarios. In this paper, we present a holistic cyber security review that demonstrates adversarial attacks against AI applications, including aspects such as adversarial knowledge and capabilities, as well as existing methods for generating adversarial examples and existing cyber defence models. We explain mathematical AI models, especially new variants of reinforcement and federated learning, to demonstrate how attack vectors would exploit vulnerabilities of AI models. We also propose a systematic framework for demonstrating attack techniques against AI applications and review several cyber defences that would protect AI applications against those attacks. We also highlight the importance of understanding the adversarial goals and their capabilities, especially the recent attacks against industry applications, to develop adaptive defences that help secure AI applications. Finally, we describe the main challenges and future research directions in the domain of security and privacy of AI technologies.
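One of the standard adversarial-example generators such a review covers is the fast gradient sign method (FGSM), which nudges each input feature by a small step in the direction that increases the loss. The toy linear scorer, the input, and the epsilon value below are illustrative assumptions:

```python
# Hedged sketch of the fast gradient sign method (FGSM) on a toy linear model.
# The weights, input, and epsilon are assumptions for illustration only.

def fgsm_perturb(x, grad, epsilon):
    """Shift each feature by epsilon in the sign of the loss gradient."""
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + epsilon * sign(gi) for xi, gi in zip(x, grad)]

# Toy linear scorer: score = w . x ; for the true class, the gradient of the
# loss w.r.t. x is -w (raising the loss means lowering the score).
w = [0.7, -0.3, 0.5]
x = [1.0, 2.0, -1.0]
grad_loss_x = [-wi for wi in w]

x_adv = fgsm_perturb(x, grad_loss_x, epsilon=0.1)
score = sum(wi * xi for wi, xi in zip(w, x))
score_adv = sum(wi * xi for wi, xi in zip(w, x_adv))
```

Even this tiny, bounded perturbation strictly lowers the true-class score, which is the core intuition behind the evasion attacks the review catalogues.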
In this paper, we provide a comprehensive review and updated solutions related to 5G network slicing using SDN and NFV. Firstly, we present 5G service quality and business requirements followed by a description of 5G network softwarization and slicing paradigms including essential concepts, history and different use cases. Secondly, we provide a tutorial of 5G network slicing technology enablers including SDN, NFV, MEC, cloud/Fog computing, network hypervisors, virtual machines & containers. Thirdly, we comprehensively survey different industrial initiatives and projects that are pushing forward the adoption of SDN and NFV in accelerating 5G network slicing. A comparison of various 5G architectural approaches in terms of practical implementations, technology adoptions and deployment strategies is presented. Moreover, we provide a discussion on various open source orchestrators and proof of concepts representing industrial contribution. The work also investigates the standardization efforts in 5G networks regarding network slicing and softwarization. Additionally, the article presents the management and orchestration of network slices in a single domain followed by a comprehensive survey of management and orchestration approaches in 5G network slicing across multiple domains while supporting multiple tenants. Furthermore, we highlight the future challenges and research directions regarding network softwarization and slicing using SDN and NFV in 5G networks.


