
Artificial Intelligence Driven UAV-NOMA-MEC in Next Generation Wireless Networks

Published by: Zhong Yang
Publication date: 2021
Research field: Electronic Engineering
Paper language: English





Driven by the unprecedented high throughput and low latency requirements of next-generation wireless networks, this paper introduces an artificial intelligence (AI) enabled framework in which unmanned aerial vehicles (UAVs) use non-orthogonal multiple access (NOMA) and mobile edge computing (MEC) techniques to serve terrestrial mobile users (MUs). The proposed framework enables the terrestrial MUs to offload their computational tasks simultaneously, intelligently, and flexibly, thus enhancing their connectivity while reducing their transmission latency and energy consumption. The fundamentals of this framework are first introduced. Then, a number of communication and AI techniques are proposed to improve the quality of experience of terrestrial MUs; in particular, federated learning and reinforcement learning are introduced for intelligent task offloading and computing resource allocation. For each learning technique, the motivations, challenges, and representative results are presented. Finally, several key technical challenges and open research issues of the proposed framework are summarized.
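As an illustration of the reinforcement-learning component, the sketch below trains a tabular Q-learning agent that decides, per time slot, whether a terrestrial MU computes a task locally or offloads it to a UAV over NOMA. The state quantisation, the toy latency model, and the reward are placeholders of our own; the paper does not specify these details.

```python
# Minimal Q-learning sketch for binary offloading decisions.
# State/action/reward definitions below are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)

N_CHANNEL_STATES = 4      # quantised UAV-MU channel quality levels (assumed)
N_QUEUE_STATES = 4        # quantised local task-queue lengths (assumed)
ACTIONS = [0, 1]          # 0: compute locally, 1: offload to the UAV via NOMA

q_table = np.zeros((N_CHANNEL_STATES, N_QUEUE_STATES, len(ACTIONS)))
alpha, gamma, eps = 0.1, 0.9, 0.1

def step(channel, queue, action):
    """Toy environment: reward is negative latency, next state is random."""
    if action == 1:                      # offloading latency falls with channel quality
        latency = 1.0 / (channel + 1) + 0.1 * queue
    else:                                # local latency grows with queue length
        latency = 0.5 + 0.3 * queue
    return -latency, rng.integers(N_CHANNEL_STATES), rng.integers(N_QUEUE_STATES)

channel, queue = rng.integers(N_CHANNEL_STATES), rng.integers(N_QUEUE_STATES)
for _ in range(20000):
    if rng.random() < eps:               # epsilon-greedy exploration
        action = int(rng.integers(len(ACTIONS)))
    else:
        action = int(np.argmax(q_table[channel, queue]))
    reward, nc, nq = step(channel, queue, action)
    q_table[channel, queue, action] += alpha * (
        reward + gamma * np.max(q_table[nc, nq]) - q_table[channel, queue, action])
    channel, queue = nc, nq

print(np.argmax(q_table, axis=-1))       # learned offload/local policy per state
```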




Read also

Multi-access edge computing (MEC) can enhance the computing capability of mobile devices, while non-orthogonal multiple access (NOMA) can provide high data rates. Combining these two strategies can effectively benefit the network in terms of spectrum and energy efficiency. In this paper, we investigate task delay minimization in multi-user NOMA-MEC networks, where multiple users can offload their tasks simultaneously over the same frequency band. We adopt a partial offloading policy, in which each user can partition its computation task into offloading and locally computed parts. We aim to minimize the task delay among users by optimizing their task partition ratios and offloading transmit power. The delay minimization problem is first formulated and shown to be nonconvex. By carefully investigating its structure, we transform the original problem into an equivalent quasi-convex one. In this way, a bisection search iterative algorithm is proposed to achieve the minimum task delay. To reduce the complexity of the proposed algorithm and evaluate its optimality, we further derive closed-form expressions for the optimal task partition ratio and offloading power in the case of two-user NOMA-MEC networks. Simulations demonstrate the convergence and optimality of the proposed algorithm and the effectiveness of the closed-form analysis.
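The bisection idea behind such a quasi-convex delay problem can be sketched as follows: for a candidate delay T, check whether every user can split its task between offloading and local computing so that both parts finish within T, then bisect on T. The feasibility model below (a fixed offloading rate and local computing rate per user, with the two parts running in parallel) is a simplification of our own; in the paper the check also couples the NOMA offloading powers across users.

```python
# Bisection search sketch for the smallest common delay target T (toy model).
def feasible(T, users):
    """Can every user finish within delay T by splitting its task?
    Hypothetical model: offloaded bits served at rate r_off, local bits at r_loc."""
    for bits, r_off, r_loc in users:
        offloadable = r_off * T                 # most that can be offloaded within T
        local_needed = max(bits - offloadable, 0.0)
        if local_needed / r_loc > T:            # remaining local part must also fit in T
            return False
    return True

def min_delay(users, lo=0.0, hi=10.0, tol=1e-6):
    """Bisection over the delay target: shrink toward the smallest feasible T."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if feasible(mid, users):
            hi = mid
        else:
            lo = mid
    return hi

# (task bits, offloading rate, local computing rate) per user -- toy numbers
users = [(5.0, 2.0, 1.0), (3.0, 1.5, 2.0)]
print(f"approximate minimum common delay: {min_delay(users):.4f}")
```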
Location information claimed by devices will play an ever-increasing role in future wireless networks such as 5G and the Internet of Things (IoT). Against this background, the verification of such claimed location information will be an issue of growing importance. A formal information-theoretic Location Verification System (LVS) can address this issue to some extent, but such a system usually operates within the limits of idealistic assumptions on a priori information about the proportion of genuine users in the field. In this work we address this critical limitation by using a Neural Network (NN), showing how an NN-based LVS is capable of functioning efficiently even when the proportion of genuine users is completely unknown a priori. We demonstrate the improved performance of this new form of LVS based on Time of Arrival measurements from multiple verifying base stations within the context of vehicular networks, quantifying how our NN-LVS outperforms the stand-alone information-theoretic LVS over a range of anticipated real-world conditions. We also show the efficient performance of the NN-LVS when the users' signals contain an added Non-Line-of-Sight (NLoS) bias. This new LVS can be applied to a range of location-centric applications within the domain of the IoT.
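A toy version of such an NN-based verifier is sketched below: Time-of-Arrival residuals between a device's claimed position and the arrival times measured at several base stations are fed to a small feed-forward network that outputs a genuine/spoofed decision. The base-station layout, noise model, network size, and training loop are all assumptions for illustration, not the paper's configuration.

```python
# Toy NN-style location verifier trained on Time-of-Arrival residuals (assumed setup).
import numpy as np

rng = np.random.default_rng(1)
C = 3e8                               # speed of light, m/s

def toa_residuals(claimed_pos, true_pos, bs_positions, noise_std=30.0):
    """Residual (in metres) between ToA implied by the claim and the measured ToA."""
    claimed = np.linalg.norm(bs_positions - claimed_pos, axis=1) / C
    measured = np.linalg.norm(bs_positions - true_pos, axis=1) / C
    measured += rng.normal(0, noise_std / C, size=len(bs_positions))
    return (claimed - measured) * C

bs = np.array([[0, 0], [1000, 0], [0, 1000], [1000, 1000]], dtype=float)

X, y = [], []
for _ in range(2000):
    claimed = rng.uniform(0, 1000, 2)
    if rng.random() < 0.5:            # genuine user: true position equals the claim
        X.append(toa_residuals(claimed, claimed, bs)); y.append(1)
    else:                             # spoofer: true position differs from the claim
        X.append(toa_residuals(claimed, rng.uniform(0, 1000, 2), bs)); y.append(0)
X, y = np.array(X), np.array(y)
X = (X - X.mean(0)) / X.std(0)        # normalise residual features

# One hidden layer trained with plain full-batch gradient descent (sigmoid output).
W1 = rng.normal(0, 0.1, (4, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.1, (8, 1)); b2 = np.zeros(1)
for _ in range(500):
    h = np.tanh(X @ W1 + b1)
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))
    grad = (p - y[:, None]) / len(X)          # d(cross-entropy)/d(logit)
    gh = (grad @ W2.T) * (1 - h ** 2)         # backprop through tanh layer
    W2 -= 0.5 * h.T @ grad; b2 -= 0.5 * grad.sum(0)
    W1 -= 0.5 * X.T @ gh;   b1 -= 0.5 * gh.sum(0)

print("training accuracy:", ((p[:, 0] > 0.5) == y).mean())
```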
Xiaobo Zhou, Shihao Yan, Min Li (2020)
This work, for the first time, considers confidential data collection in the context of unmanned aerial vehicle (UAV) wireless networks, where the scheduled ground sensor node (SN) intends to transmit confidential information to the UAV without being intercepted by other unscheduled ground SNs. Specifically, a full-duplex (FD) UAV collects data from each scheduled SN on the ground and generates artificial noise (AN) to prevent the scheduled SN's confidential information from being wiretapped by the unscheduled SNs. We first derive the reliability outage probability (ROP) and secrecy outage probability (SOP) of the considered fixed-rate transmission, based on which we formulate an optimization problem that maximizes the minimum average secrecy rate (ASR) subject to specific constraints. We then transform the formulated optimization problem into a convex problem with the aid of a first-order restrictive approximation technique and a penalty method. The resulting problem is a generalized nonlinear convex program (GNCP), and solving it directly still incurs high complexity, which motivates us to further approximate it as a second-order cone program (SOCP) in order to reduce the computational complexity. Finally, we develop an iterative procedure based on the penalty successive convex approximation (P-SCA) algorithm to solve the formulated optimization problem. Our examination shows that the developed joint design achieves a significant performance gain compared to a benchmark scheme.
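The iteration structure of a successive convex approximation procedure of this kind can be illustrated on a toy scalar problem: the concave term being subtracted from the objective is replaced by its first-order (restrictive) upper bound around the current iterate, the resulting concave surrogate is maximised, and the iterate is updated. The objective below is a placeholder of our own; the paper's subproblems are SOCPs over the UAV's transmit and artificial-noise powers, and the penalty handling is omitted here.

```python
# SCA loop skeleton on a toy problem: maximize f(x) = log(1+x) - g(x) over [0, P_MAX],
# where g(x) = 0.5*log(1+0.3x) is concave, so its linearisation is a restrictive
# (conservative) upper bound and the surrogate is a concave lower bound of f.
import numpy as np
from scipy.optimize import minimize_scalar

P_MAX = 10.0

def g(x):                 # concave term being subtracted (interference-like, assumed)
    return 0.5 * np.log(1 + 0.3 * x)

def g_lin(x, x_k):        # first-order upper bound of g around the current iterate x_k
    return g(x_k) + 0.5 * 0.3 / (1 + 0.3 * x_k) * (x - x_k)

def sca(x0=1.0, iters=20):
    x_k = x0
    for _ in range(iters):
        # maximize the concave surrogate log(1+x) - g_lin(x, x_k) over the feasible set
        neg_surrogate = lambda x: -(np.log(1 + x) - g_lin(x, x_k))
        res = minimize_scalar(neg_surrogate, bounds=(0.0, P_MAX), method="bounded")
        x_k = res.x
    return x_k

x_star = sca()
print(f"SCA iterate: x = {x_star:.3f}, objective = {np.log(1 + x_star) - g(x_star):.3f}")
```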
The combination of non-orthogonal multiple access (NOMA) and mobile edge computing (MEC) can significantly improve spectrum efficiency beyond the fifth-generation network. In this paper, we focus on energy-efficient resource allocation for a multi-user, multi-BS NOMA-assisted MEC network with imperfect channel state information (CSI), in which each user can upload its tasks to multiple base stations (BSs) for remote execution. To minimize the energy consumption, we jointly optimize the task assignment, power allocation, and user association. As the main contribution, optimal closed-form expressions for the task assignment and power allocation under imperfect CSI are analytically derived for the two-BS case. Specifically, the originally formulated problem is nonconvex. We first transform the probabilistic problem into a non-probabilistic one. Subsequently, a bilevel programming method is proposed to derive the optimal solution. In addition, by incorporating a matching algorithm with the optimal task and power allocation, we propose a low-complexity algorithm to efficiently optimize the user association for the multi-user, multi-BS case. Simulations demonstrate that the proposed algorithm not only yields much better performance than the conventional OMA scheme but also achieves, with lower complexity, results identical to exhaustive search when the number of BSs is small.
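A minimal sketch of a matching step for user association is given below, assuming the per-pair energy cost of serving a user from a BS has already been obtained from the optimal task and power allocation. Users propose to BSs in order of increasing cost and each BS keeps its cheapest proposers up to a quota; the quota and the random cost matrix are placeholders, not the paper's model.

```python
# Deferred-acceptance style user-BS matching on assumed per-pair energy costs.
import numpy as np

rng = np.random.default_rng(2)
N_USERS, N_BS, QUOTA = 6, 3, 2                     # each BS admits at most QUOTA users (assumed)

energy = rng.uniform(1.0, 5.0, (N_USERS, N_BS))    # energy[u, b]: cost if user u is served by BS b

prefs = np.argsort(energy, axis=1)                 # each user's BS preference list (cheapest first)
next_choice = np.zeros(N_USERS, dtype=int)
assigned = {b: [] for b in range(N_BS)}
free = list(range(N_USERS))

while free:
    u = free.pop(0)
    if next_choice[u] >= N_BS:
        continue                                   # user exhausted all BSs (left unmatched)
    b = prefs[u, next_choice[u]]                   # user proposes to its next-cheapest BS
    next_choice[u] += 1
    assigned[b].append(u)
    if len(assigned[b]) > QUOTA:                   # BS rejects its most expensive user
        worst = max(assigned[b], key=lambda v: energy[v, b])
        assigned[b].remove(worst)
        free.append(worst)

for b, users in assigned.items():
    print(f"BS {b}: users {users}")
```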
Some new findings for chaos-based wireless communication systems have been identified recently. First, chaos has proven to be the optimal communication waveform because chaotic signals can achieve the maximum signal-to-noise ratio at the receiver with the simplest matched filter. Second, the information transmitted in chaotic signals is not modified by the multipath wireless channel. Third, chaos properties can be used to relieve inter-symbol interference (ISI) caused by multipath propagation. Although recent work has reported a method for obtaining the optimal threshold to eliminate the ISI in chaos-based wireless communication, its practical implementation remains a challenge: it requires knowing the channel parameters and all symbols, especially the future symbols to be transmitted, in advance, which is almost impossible in practical communication systems. Owing to recent developments in artificial intelligence (AI), a Convolutional Neural Network (CNN) with a deep learning structure is proposed to predict future symbols based on the received signal, so as to further reduce ISI and obtain better bit error rate (BER) performance compared to that achieved with the existing sub-optimal threshold. The key feature of the method is predicting the future symbol and obtaining a better threshold suited to time-variant channels. Numerical simulations and experimental results validate our theory and the superiority of the proposed method.
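A minimal sketch of such a CNN-based symbol predictor is given below (PyTorch, with an assumed window length, architecture, and a toy multipath channel that includes a pre-cursor tap so the received window actually carries information about the upcoming symbol). It is only meant to show the prediction setup, not the chaos-based waveform or the threshold computation from the paper.

```python
# Minimal 1-D CNN that predicts the next symbol from a window of received samples.
# Architecture, window length, and synthetic data below are assumptions.
import torch
import torch.nn as nn

WINDOW = 32                                  # received samples per decision (assumed)

class SymbolPredictor(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(8, 8, kernel_size=5, padding=2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(8 * WINDOW, 1),        # logit for the next symbol (+1 vs -1)
        )

    def forward(self, x):                    # x: (batch, 1, WINDOW)
        return self.net(x)

# Synthetic stand-in data: random +/-1 symbols through a toy channel with one
# post-cursor and one pre-cursor tap plus noise; the label is the symbol that
# follows each window (the pre-cursor tap makes it predictable, as a stand-in
# for the pre-cursor ISI of the matched-filter output in chaos-based systems).
torch.manual_seed(0)
symbols = torch.randint(0, 2, (4096,)).float() * 2 - 1
rx = (symbols + 0.4 * torch.roll(symbols, 1)
      + 0.3 * torch.roll(symbols, -1) + 0.1 * torch.randn_like(symbols))
X = rx.unfold(0, WINDOW, 1)[:-1].unsqueeze(1)        # sliding windows of received samples
y = (symbols[WINDOW:] > 0).float().unsqueeze(1)      # next symbol after each window

model = SymbolPredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()
for epoch in range(50):                               # short full-batch training loop
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
    if epoch % 10 == 0:
        print(f"epoch {epoch}: loss {loss.item():.4f}")
```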