
Secure Computation Offloading in Blockchain based IoT Networks with Deep Reinforcement Learning

Posted by: Dinh Nguyen
Publication date: 2019
Research field: Electronic engineering
Language: English





For current and future Internet of Things (IoT) networks, mobile edge-cloud computation offloading (MECCO) has been regarded as a promising means to support delay-sensitive IoT applications. However, offloading mobile tasks to the cloud is vulnerable to security issues due to malicious mobile devices (MDs). How to implement offloading that alleviates the computation burden on MDs while guaranteeing high security in the mobile edge cloud is a challenging problem. In this paper, we jointly investigate the security and computation offloading problems in a multi-user MECCO system with blockchain. First, to improve offloading security, we propose a trustworthy access control mechanism using blockchain, which protects cloud resources against illegal offloading behaviours. Then, to manage the computation of authorized MDs, we formulate a computation offloading problem by jointly optimizing the offloading decisions, the allocation of computing resources and radio bandwidth, and smart contract usage. This optimization problem aims to minimize the long-term system cost of latency, energy consumption and smart contract fees across all MDs. To solve the proposed offloading problem, we develop an advanced deep reinforcement learning algorithm using a double-dueling Q-network. Evaluation results from real experiments and numerical simulations demonstrate the significant advantages of our scheme over existing approaches.
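The core of the scheme above is a double-dueling Q-network that maps the system state (e.g., channel conditions, task queues, smart contract prices) to discrete offloading decisions. The following is a minimal sketch of that idea in PyTorch, assuming a flat state vector and a small discrete action space; the layer sizes, state features and hyperparameters are illustrative assumptions, not the authors' implementation.

# Minimal sketch of a dueling Q-network with a double-DQN target, assuming a
# discrete offloading action space (e.g., local vs. edge vs. cloud per task)
# and a flat state vector of channel/queue/price features.
import torch
import torch.nn as nn

class DuelingQNet(nn.Module):
    def __init__(self, state_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.feature = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)              # state-value stream V(s)
        self.advantage = nn.Linear(hidden, n_actions)  # advantage stream A(s, a)

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        h = self.feature(state)
        v = self.value(h)
        a = self.advantage(h)
        # Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)
        return v + a - a.mean(dim=1, keepdim=True)

def double_dqn_target(online: DuelingQNet, target: DuelingQNet,
                      reward, next_state, done, gamma: float = 0.99):
    """Double-DQN target: action picked by the online net, evaluated by the target net."""
    with torch.no_grad():
        next_action = online(next_state).argmax(dim=1, keepdim=True)
        next_q = target(next_state).gather(1, next_action).squeeze(1)
        return reward + gamma * (1.0 - done) * next_q

In a full training loop, transitions would be sampled from a replay buffer and the online network updated by minimizing the mean-squared error between its Q-values and the target above, with the target network refreshed periodically.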




Read also

Dali Zhu, Haitao Liu, Ting Li (2021)
In remote regions (e.g., mountains and deserts), cellular networks are usually sparsely deployed or unavailable. With the appearance of new applications (e.g., industrial automation and environment monitoring) in remote regions, resource-constrained terminals become unable to meet the latency requirements. Meanwhile, offloading tasks to the urban terrestrial cloud (TC) via a satellite link leads to high delay. To tackle these issues, the Satellite Edge Computing architecture has been proposed, in which users can offload computing tasks to visible satellites for execution. However, existing works are usually limited to offloading tasks in pure satellite networks and make offloading decisions based on predefined user models; moreover, the runtime consumption of existing algorithms is rather high. In this paper, we study the task offloading problem in satellite-terrestrial edge computing networks, where tasks can be executed by a satellite or by the urban TC. The proposed Deep Reinforcement learning-based Task Offloading (DRTO) algorithm accelerates the learning process by adjusting the number of candidate locations, and the offloading location and bandwidth allocation depend only on the current channel states. Simulation results show that DRTO achieves near-optimal offloading cost with much less runtime consumption, making it more suitable for satellite-terrestrial networks with fast-fading channels.
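DRTO's action space is the set of candidate offloading locations, which the algorithm shrinks or grows to speed up learning. The snippet below only illustrates the masking step of restricting a greedy choice to the currently visible candidates (visible satellites plus the terrestrial cloud); the identifiers and Q-values are hypothetical and not taken from the paper.

# Illustrative sketch: restrict action selection to the currently visible
# candidate locations; infeasible locations are masked out with -inf.
import numpy as np

def select_offloading_location(q_values: np.ndarray, candidate_ids: list) -> int:
    """Greedy choice among a reduced candidate set."""
    masked = np.full_like(q_values, -np.inf)
    masked[candidate_ids] = q_values[candidate_ids]
    return int(np.argmax(masked))

# Example: 5 satellites (ids 0-4) plus the terrestrial cloud (id 5),
# but only satellites 1 and 3 are currently visible.
q = np.array([0.2, 0.9, 0.1, 0.7, 0.4, 0.6])
print(select_offloading_location(q, candidate_ids=[1, 3, 5]))  # -> 1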
Zhuo Li, Xu Zhou, Taixin Li (2021)
With the mass deployment of computation-intensive and delay-sensitive applications on end devices, only adequate computing resources can meet differentiated service delay requirements. By offloading tasks to cloud servers or edge servers, computation offloading can alleviate computing and storage limitations and reduce delay and energy consumption. However, few existing offloading schemes take into consideration cloud-edge collaboration together with constraints on energy consumption and task dependency. This paper builds a collaborative computation offloading model for cloud and edge computing and formulates a multi-objective optimization problem. We propose an Optimal-Transport-Based RL approach, constructed by fusing optimal transport with policy-based RL, to resolve the offloading problem and make the optimal offloading decision that minimizes the overall cost of delay and energy consumption. Simulation results show that the proposed approach can effectively reduce the cost and significantly outperforms existing optimization solutions.
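The abstract does not spell out how optimal transport is fused with the policy-based RL component, but the transport side can be illustrated in isolation: an entropy-regularized (Sinkhorn) plan that matches a task-load distribution to an edge/cloud capacity distribution under a delay-and-energy cost matrix. The cost values and marginals below are illustrative assumptions, not the paper's model.

# Minimal Sinkhorn sketch: an entropy-regularized transport plan between a
# task-load distribution (rows) and an edge/cloud capacity distribution (columns).
import numpy as np

def sinkhorn(cost: np.ndarray, a: np.ndarray, b: np.ndarray,
             reg: float = 0.1, n_iters: int = 200) -> np.ndarray:
    K = np.exp(-cost / reg)                 # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)                   # column scaling
        u = a / (K @ v)                     # row scaling
    return u[:, None] * K * v[None, :]      # transport plan

tasks = np.array([0.5, 0.3, 0.2])           # fraction of load per device
servers = np.array([0.6, 0.4])              # capacity share: edge, cloud
cost = np.array([[1.0, 2.0], [0.5, 1.5], [2.0, 0.3]])  # delay/energy cost
plan = sinkhorn(cost, tasks, servers)
print(plan.round(3))                        # rows sum ~ tasks, columns sum ~ servers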
Last year, the IEEE 802.11 Extremely High Throughput Study Group (EHT Study Group) was established to initiate discussions on new IEEE 802.11 features. Coordinated control of the access points (APs) in wireless local area networks (WLANs) is discussed in the EHT Study Group. The present study proposes a deep reinforcement learning-based channel allocation scheme using graph convolutional networks (GCNs). As the deep reinforcement learning method, we use the well-known double deep Q-network. In densely deployed WLANs the number of possible AP topologies is extremely high, so we extract features of the topological structures with GCNs: we apply GCNs to a contention graph, in which APs within each other's carrier-sensing range are connected, to extract features of the carrier-sensing relationships. Additionally, to improve the learning speed, especially in the early stage of learning, we employ a game-theory-based method to collect training data independently of the neural network model. The simulation results indicate that the proposed method controls the channels appropriately compared to existing methods.
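The feature extraction step applies standard GCN propagation to the AP contention graph. A minimal sketch of one symmetrically normalized propagation step is shown below; the adjacency matrix, feature dimensions and weights are illustrative, not the paper's configuration.

# Sketch of one GCN step H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W) over an AP
# contention graph (APs within carrier-sensing range of each other are adjacent).
import numpy as np

def gcn_layer(adjacency: np.ndarray, features: np.ndarray, weights: np.ndarray) -> np.ndarray:
    a_hat = adjacency + np.eye(adjacency.shape[0])        # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    propagated = d_inv_sqrt @ a_hat @ d_inv_sqrt @ features @ weights
    return np.maximum(propagated, 0.0)                    # ReLU

# 4 APs with contention edges 0-1, 1-2, 2-3; 3 input features, 2 output features.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = np.random.rand(4, 3)
W = np.random.rand(3, 2)
print(gcn_layer(A, H, W).shape)  # (4, 2)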
This paper investigates the application of the deep deterministic policy gradient (DDPG) to intelligent reflecting surface (IRS)-equipped unmanned aerial vehicle (UAV)-assisted non-orthogonal multiple access (NOMA) downlink networks. Deploying a UAV equipped with an IRS is important because the UAV significantly increases the flexibility of the IRS, especially for users who have no line-of-sight (LoS) path to the base station (BS). The aim of this letter is therefore to maximize the sum rate by jointly optimizing the power allocation of the BS, the phase shifts of the IRS and the horizontal position of the UAV. Because the formulated problem is not convex, the DDPG algorithm is utilized to solve it. Computer simulation results show the superior performance of the proposed DDPG-based algorithm.
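DDPG handles the continuous joint action (BS power-allocation coefficients, IRS phase shifts, UAV position) with a deterministic actor and a Q-value critic. The sketch below shows the two networks and the soft target update used in standard DDPG; all dimensions are assumptions rather than the letter's configuration.

# Compact DDPG sketch: a deterministic actor outputs a continuous action vector,
# and a critic scores (state, action) pairs; targets are updated by Polyak averaging.
import torch
import torch.nn as nn

class Actor(nn.Module):
    def __init__(self, state_dim: int, action_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim), nn.Tanh())   # actions scaled to [-1, 1]

    def forward(self, state):
        return self.net(state)

class Critic(nn.Module):
    def __init__(self, state_dim: int, action_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=1))

def soft_update(target: nn.Module, online: nn.Module, tau: float = 0.005):
    """Polyak averaging of target-network parameters, as in standard DDPG."""
    for t_param, o_param in zip(target.parameters(), online.parameters()):
        t_param.data.mul_(1.0 - tau).add_(tau * o_param.data)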
In delay-sensitive industrial internet of things (IIoT) applications, the age of information (AoI) is employed to characterize the freshness of information. Meanwhile, emerging network function virtualization provides flexibility and agility for service providers to deliver a given network service as a sequence of virtual network functions (VNFs). However, suitable VNF placement and scheduling in these schemes is NP-hard, and finding a globally optimal solution with traditional approaches is complex. Recently, deep reinforcement learning (DRL) has appeared as a viable way to solve such problems. In this paper, we first utilize a single-agent, low-complexity compound-action actor-critic RL method that covers both discrete and continuous actions and jointly minimizes the VNF cost and AoI in terms of network resources under end-to-end Quality of Service constraints. To surmount the capacity limitation of single-agent learning, we then extend our solution to a multi-agent DRL scheme in which agents collaborate with each other. Simulation results demonstrate that the single-agent schemes significantly outperform the greedy algorithm in terms of average network cost and AoI. Moreover, the multi-agent solution decreases the average cost by dividing the tasks between the agents, although it needs more iterations to learn because the agents must collaborate.
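A compound-action policy of the kind described above needs one head for the discrete VNF placement and one for the continuous resource amount. The sketch below shows such a policy with a categorical and a Gaussian head and the joint log-probability an actor-critic loss would use; the state encoding and layer sizes are illustrative assumptions, not the paper's architecture.

# Sketch of a compound-action policy head: a categorical distribution over
# discrete VNF placements plus a Gaussian over a continuous resource share.
import torch
import torch.nn as nn
from torch.distributions import Categorical, Normal

class CompoundPolicy(nn.Module):
    def __init__(self, state_dim: int, n_placements: int, hidden: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.placement_logits = nn.Linear(hidden, n_placements)  # discrete head
        self.resource_mean = nn.Linear(hidden, 1)                 # continuous head
        self.resource_log_std = nn.Parameter(torch.zeros(1))

    def forward(self, state):
        h = self.encoder(state)
        placement_dist = Categorical(logits=self.placement_logits(h))
        resource_dist = Normal(self.resource_mean(h), self.resource_log_std.exp())
        return placement_dist, resource_dist

# Joint log-probability of a sampled compound action (used in the actor-critic loss).
policy = CompoundPolicy(state_dim=16, n_placements=4)
state = torch.randn(1, 16)
placement_dist, resource_dist = policy(state)
placement = placement_dist.sample()
resource = resource_dist.sample()
log_prob = placement_dist.log_prob(placement) + resource_dist.log_prob(resource).squeeze(-1)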