
Energy-Efficient Proactive Caching for Fog Computing with Correlated Task Arrivals

Published by Hong Xing
Publication date: 2019
Research language: English





With the proliferation of latency-critical applications, the fog-radio access network (F-RAN) has been envisioned as a paradigm shift enabling distributed deployment of cloud-clone facilities at the network edge. In this paper, we consider proactive caching for a one-user, one-access-point (AP) fog computing system over a finite time horizon, in which consecutive tasks of the same type of application are temporally correlated. Under the assumption of predictable length of the task-input bits, we formulate a long-term weighted-sum energy minimization problem with three-slot correlation to jointly optimize computation offloading policies and caching decisions subject to stringent per-slot deadline constraints. The formulated problem is hard to solve due to its mixed-integer non-convexity. To tackle this challenge, we first assume that task-related information is perfectly known a priori and provide an offline solution leveraging the technique of semi-definite relaxation (SDR), which serves as a theoretical upper bound. Next, based on the offline solution, we propose a sliding-window based online algorithm under arbitrarily distributed prediction errors. Finally, the advantage of computation caching as well as of the proposed algorithm is verified by numerical examples in comparison with several benchmarks.
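
To make the sliding-window idea concrete, the following is a minimal Python sketch of an online controller that, in each slot, compares simplified local-computing, offloading, and caching energies using noisy predictions over a three-slot window. All energy models, parameters, and the recurrence test are illustrative assumptions, not the paper's SDR-based formulation.

```python
# Hypothetical sliding-window online caching/offloading sketch (toy energy model).
import numpy as np

rng = np.random.default_rng(0)

T = 20                       # finite time horizon (slots)
W = 3                        # sliding-window length (three-slot correlation)
kappa = 1e-27                # effective switched capacitance (local computing)
cycles_per_bit = 1e3         # CPU cycles needed per task-input bit
tx_energy_per_bit = 5e-7     # J per offloaded bit (assumed static channel)
cache_energy_per_bit = 1e-8  # J per cached bit per slot at the AP
deadline = 0.1               # per-slot computation deadline (s)

true_bits = rng.integers(100_000, 500_000, size=T).astype(float)

def predict(t):
    """Predicted task-input bits for slots [t, t+W) with arbitrary noise."""
    window = true_bits[t:t + W]
    return np.maximum(window + rng.normal(0, 0.1 * window), 0.0)

total_energy = 0.0
for t in range(T):
    pred = predict(t)
    cycles = cycles_per_bit * true_bits[t]
    # Candidate energies (simplified stand-ins for the paper's per-slot model).
    e_local = kappa * (cycles / deadline) ** 2 * cycles      # run just fast enough
    e_offload = tx_energy_per_bit * true_bits[t]             # ship all input bits
    # Cache the computed result if the predicted window suggests the task recurs.
    recurring = pred.std() < 0.2 * pred.mean()
    e_cache = cache_energy_per_bit * true_bits[t] * W if recurring else np.inf
    decision, energy = min(("local", e_local), ("offload", e_offload),
                           ("cache", e_cache), key=lambda c: c[1])
    total_energy += energy

print(f"total energy over the horizon: {total_energy:.3e} J")
```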




Read also

Zhifei Lin, Feng Wang, 2021
This paper considers an energy harvesting (EH) based multiuser mobile edge computing (MEC) system, where each user utilizes the energy harvested from renewable energy sources to execute its computation tasks via computation offloading and local computing. Towards maximizing the system's weighted computation rate (i.e., the weighted sum of the users' computed bits within a finite time horizon) subject to the users' energy causality constraints due to dynamic energy arrivals, the decision for joint computation offloading and local computing is optimized over time. Assuming that the profile of channel state information and dynamic task arrivals at the users is known in advance, the weighted computation rate maximization problem becomes a convex optimization problem. Building on the Lagrange duality method, the well-structured optimal solution is analytically obtained. Both the users' local computing and offloading rates are shown to have a monotonically increasing structure. Numerical results show that the proposed design scheme can achieve a significant performance gain over the alternative benchmark schemes.
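
As a rough illustration of the offline convex program described in this abstract, the sketch below formulates a weighted computation-rate maximization with energy-causality constraints in cvxpy (an exponential-cone-capable solver is needed). The offloading and local-computing rate models and all numbers are generic assumptions; the paper itself derives the solution analytically via Lagrange duality rather than through a numerical solver.

```python
# Hedged cvxpy sketch: weighted computation-rate maximization under energy causality.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(1)
K, T = 3, 10                          # users, time slots
tau, B, N0 = 1.0, 1e6, 1e-12          # slot length (s), bandwidth (Hz), noise PSD
kappa, C = 1e-28, 1e3                 # switched capacitance, CPU cycles per bit
w = np.array([1.0, 2.0, 1.5])         # user weights
E = rng.uniform(0.05, 0.5, (K, T))    # harvested energy per user per slot (J)
h = rng.uniform(1e-7, 1e-6, (K, T))   # channel power gains

e_off = cp.Variable((K, T), nonneg=True)   # energy spent on offloading
e_loc = cp.Variable((K, T), nonneg=True)   # energy spent on local computing

# Offloaded bits: tau*B*log2(1 + h*e_off/(tau*B*N0)), concave in e_off.
bits_off = tau * B * cp.log1p(cp.multiply(h / (tau * B * N0), e_off)) / np.log(2)
# Locally computed bits: (tau/C) * (e_loc/(kappa*tau))^(1/3), concave in e_loc.
bits_loc = (tau / C) * (1.0 / (kappa * tau)) ** (1 / 3) * cp.power(e_loc, 1 / 3)

# Energy causality: cumulative consumption never exceeds cumulative arrivals.
constraints = [cp.cumsum(e_off + e_loc, axis=1) <= np.cumsum(E, axis=1)]
objective = cp.Maximize(w @ cp.sum(bits_off + bits_loc, axis=1))

prob = cp.Problem(objective, constraints)
prob.solve()
print("optimal weighted computation rate (bits):", prob.value)
```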
Edge machine learning involves the deployment of learning algorithms at the network edge to leverage massive distributed data and computation resources to train artificial intelligence (AI) models. Among others, the framework of federated edge learning (FEEL) is popular for its data-privacy preservation. FEEL coordinates global model training at an edge server and local model training at edge devices that are connected by wireless links. This work contributes to the energy-efficient implementation of FEEL in wireless networks by designing joint computation-and-communication resource management ($\text{C}^2$RM). The design targets the state-of-the-art heterogeneous mobile architecture, where parallel computing using both a CPU and a GPU, called heterogeneous computing, can significantly improve both performance and energy efficiency. To minimize the sum energy consumption of devices, we propose a novel $\text{C}^2$RM framework featuring multi-dimensional control, including bandwidth allocation, CPU-GPU workload partitioning and speed scaling at each device, and $\text{C}^2$ time division for each link. The key component of the framework is a set of equilibria in energy rates with respect to different control variables, which are proved to exist among devices or between the processing units at each device. The results are applied to designing efficient algorithms for computing the optimal $\text{C}^2$RM policies faster than the standard optimization tools. Based on the equilibria, we further design energy-efficient schemes for device scheduling and greedy spectrum sharing that scavenges spectrum holes resulting from heterogeneous $\text{C}^2$ time divisions among devices. Using a real dataset, experiments are conducted to demonstrate the effectiveness of $\text{C}^2$RM in improving the energy efficiency of a FEEL system.
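
The energy-rate-equilibrium idea behind the CPU-GPU workload split can be illustrated with a short numerical sketch: under a cubic energy model (an assumption, not the paper's exact $\text{C}^2$RM formulation), equalizing the marginal energies of the two processing units gives a closed-form partition.

```python
# Toy CPU-GPU workload partition: equalize marginal energy rates of the two units.
import numpy as np

L = 5e9            # total local training workload for this round (operations)
T_cmp = 0.5        # computation deadline within the round (s)
kappa_cpu = 1e-27  # effective switched capacitance of the CPU
kappa_gpu = 2e-28  # effective switched capacitance of the GPU (more efficient)

def energy(workload, kappa, deadline):
    """Energy to finish `workload` ops in `deadline` s at the minimal speed."""
    f = workload / deadline                 # required processing speed
    return kappa * f**2 * workload          # = kappa * workload^3 / deadline^2

# Equal marginal energies: kappa_cpu * L_cpu^2 = kappa_gpu * (L - L_cpu)^2.
L_cpu = L * np.sqrt(kappa_gpu) / (np.sqrt(kappa_cpu) + np.sqrt(kappa_gpu))
L_gpu = L - L_cpu

E_split = energy(L_cpu, kappa_cpu, T_cmp) + energy(L_gpu, kappa_gpu, T_cmp)
E_gpu_only = energy(L, kappa_gpu, T_cmp)

print(f"CPU share: {L_cpu/L:.2%}, GPU share: {L_gpu/L:.2%}")
print(f"energy with split: {E_split:.3e} J vs GPU-only: {E_gpu_only:.3e} J")
```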
The ever-continuing explosive growth of on-demand content requests has imposed great pressure on mobile/wireless network infrastructures. To ease congestion in the network and improve the perceived user experience, caching popular content closer to the end-users can play a significant role, and as such this issue has received significant attention over the last few years. Additionally, energy efficiency is treated as a fundamental requirement in the design of next-generation mobile networks. However, little attention has been paid to the overlap between energy efficiency and network caching, especially when multipath routing is considered. To this end, this paper proposes an energy-efficient caching scheme with multipath routing support. The proposed scheme jointly anchors popular content into a set of potential caching nodes with optimized multipath support while ensuring a balance between transmission and caching energy costs. The proposed model also considers different content delivery modes, such as multicast and unicast. Two separate integer linear programming (ILP) models are formulated, one for each delivery mode. To tackle the curse of dimensionality, we then provide a greedy simulated annealing algorithm, which not only reduces the time complexity but also achieves performance competitive with the ILP models. A wide set of numerical investigations shows that the proposed scheme is more energy-efficient than other widely used caching approaches under the premise of limited network resources.
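
Below is a hedged sketch of a simulated-annealing heuristic for energy-aware content placement, in the spirit of the greedy simulated-annealing algorithm mentioned above; the hop-count cost model, capacity handling, and all parameters are illustrative assumptions rather than the paper's ILP formulations.

```python
# Simulated annealing over a binary content-placement matrix (toy cost model).
import numpy as np

rng = np.random.default_rng(2)
N, F = 6, 10                            # candidate caching nodes, content items
cache_cap = 3                           # items each node may cache
hops = rng.integers(1, 5, (N, F))       # hop distance to each item's origin
demand = rng.uniform(0.1, 1.0, (N, F))  # request rate for item f at node n
E_tx_hop, E_cache = 1.0, 0.3            # energy per hop per request, per cached item

def cost(x):
    """x[n, f] = 1 if node n caches item f; uncached requests travel to the origin."""
    tx = np.where(x == 1, 0.0, demand * hops * E_tx_hop).sum()
    return tx + E_cache * x.sum()

def random_neighbor(x):
    y = x.copy()
    n, f = rng.integers(N), rng.integers(F)
    y[n, f] ^= 1                        # flip one placement decision
    if y[n].sum() > cache_cap:          # respect the per-node cache capacity
        y[n, rng.choice(np.flatnonzero(y[n]))] = 0
    return y

x = np.zeros((N, F), dtype=int)
best_cost = cost(x)
temp = 5.0
for _ in range(2000):
    y = random_neighbor(x)
    delta = cost(y) - cost(x)
    if delta < 0 or rng.random() < np.exp(-delta / temp):
        x = y
        best_cost = min(best_cost, cost(x))
    temp *= 0.995                       # geometric cooling schedule

print("best total energy cost found:", round(best_cost, 2))
```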
Reconfigurable intelligent surface (RIS) has emerged as a promising technology for achieving high spectrum and energy efficiency in future wireless communication networks. In this paper, we investigate an RIS-aided single-cell multi-user mobile edge computing (MEC) system, where an RIS is deployed to support the communication between a base station (BS) equipped with MEC servers and multiple single-antenna users. To utilize the scarce frequency resource efficiently, we assume that users communicate with the BS based on a non-orthogonal multiple access (NOMA) protocol. Each user has a computation task that can be computed locally or partially/fully offloaded to the BS. We aim to minimize the sum energy consumption of all users by jointly optimizing the passive phase shifters, the size of the transmitted data, the transmission rate, power control, transmission time, and the decoding order. Since the resulting problem is non-convex, we use the block coordinate descent method to alternately optimize two separate subproblems. More specifically, we use the dual method to tackle one subproblem with given phase shifts and obtain the closed-form solution; we then utilize a penalty method to solve the other subproblem for given power control. Moreover, in order to demonstrate the effectiveness of our proposed algorithm, we propose three benchmark schemes: the time-division multiple access (TDMA)-MEC scheme, the full local computing scheme, and the full offloading scheme. We use an alternating 1-D search method together with the dual method to solve the TDMA-based transmission problem. Numerical results demonstrate that the proposed scheme can increase the energy efficiency and achieve significant performance gains over the three benchmark schemes.
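
A toy block-coordinate-descent skeleton in the spirit of this alternating optimization is sketched below: one block updates the RIS phase shifts for fixed offloading decisions, the other updates each user's offload/local split for fixed phases. The channel and energy models, the grid search, and the random phase perturbation are simplifications standing in for the paper's dual- and penalty-based subproblem solutions.

```python
# Illustrative BCD loop for an RIS-aided MEC toy model (not the paper's algorithm).
import numpy as np

rng = np.random.default_rng(3)
K, M = 2, 16                         # users, RIS elements
B, T, sigma2 = 1e6, 0.1, 1e-9        # bandwidth (Hz), slot length (s), noise power
kappa, C = 1e-28, 1e3                # local-computing energy model
L_bits = np.array([2e5, 3e5])        # task-input bits per user
h_d = (rng.normal(size=K) + 1j * rng.normal(size=K)) * 1e-3          # BS-user direct
G = (rng.normal(size=(K, M)) + 1j * rng.normal(size=(K, M))) * 1e-2  # RIS-user
f_bs = (rng.normal(size=M) + 1j * rng.normal(size=M)) * 1e-2         # BS-RIS

def eff_gain(theta):
    return np.abs(h_d + G @ (np.exp(1j * theta) * f_bs)) ** 2

def total_energy(theta, alpha):
    """alpha[k] = fraction of user k's bits offloaded (equal-time toy schedule)."""
    g = eff_gain(theta)
    t_k = T / K
    p = (2 ** (alpha * L_bits / t_k / B) - 1) * sigma2 / g   # power to hit the rate
    e_off = p * t_k
    cycles = (1 - alpha) * L_bits * C
    e_loc = kappa * (cycles / T) ** 2 * cycles               # finish local part in T
    return (e_off + e_loc).sum()

theta = rng.uniform(0, 2 * np.pi, M)
alpha = np.full(K, 0.5)
grid = np.linspace(0, 1, 51)
for _ in range(20):
    # Block 1: offloading split per user with phases fixed (1-D grid search).
    for k in range(K):
        cand = [total_energy(theta, np.where(np.arange(K) == k, a, alpha))
                for a in grid]
        alpha[k] = grid[int(np.argmin(cand))]
    # Block 2: phase shifts with offloading fixed (random perturbation search).
    for _ in range(200):
        trial = theta + rng.normal(0, 0.1, M)
        if total_energy(trial, alpha) < total_energy(theta, alpha):
            theta = trial

print("sum energy after BCD:", total_energy(theta, alpha))
```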
This paper studies edge caching in fog computing networks, where a capacity-aware edge caching framework is proposed by considering both the limited fog cache capacity and the connectivity capacity of base stations (BSs). By allowing cooperation between fog nodes and the cloud data center, the average-download-time (ADT) minimization problem is formulated as a multi-class processor queuing process. We prove the convexity of the formulated problem and propose an Alternating Direction Method of Multipliers (ADMM)-based algorithm that can achieve the minimum ADT and converge much faster than existing algorithms. Simulation results demonstrate that the allocation of fog cache capacity and BS connectivity capacity needs to be balanced according to the network status. While maximizing the edge-cache-hit-ratio (ECHR) by utilizing all available fog cache capacity is helpful when the BS connectivity capacity is sufficient, it is preferable to keep a lower ECHR and allocate more traffic to the cloud when the BS connectivity capacity is deficient.
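
To illustrate the cache-capacity versus connectivity-capacity balance discussed above, the cvxpy sketch below minimizes an M/M/1-style average download time over the edge-cache-hit-ratio; the queuing model and all numbers are assumptions and do not reproduce the paper's multi-class processor formulation or its ADMM algorithm.

```python
# Toy convex trade-off: edge-cache-hit-ratio vs. routing traffic to the cloud.
import cvxpy as cp

lam = 80.0        # total request arrival rate (requests/s)
mu_bs = 100.0     # BS connectivity (service) capacity toward the fog cache
mu_cloud = 150.0  # service capacity of the cloud path
hit_max = 0.7     # largest hit ratio the fog cache capacity allows

hit = cp.Variable(nonneg=True)   # edge-cache-hit-ratio (ECHR)
edge_load = lam * hit
cloud_load = lam * (1 - hit)

# Mean number in each M/M/1 queue: load/(mu - load) = mu/(mu - load) - 1,
# convex via inv_pos; the average download time follows from Little's law.
n_edge = mu_bs * cp.inv_pos(mu_bs - edge_load) - 1
n_cloud = mu_cloud * cp.inv_pos(mu_cloud - cloud_load) - 1
adt = (n_edge + n_cloud) / lam

prob = cp.Problem(cp.Minimize(adt), [hit <= hit_max])
prob.solve()
print(f"optimal ECHR: {hit.value:.2f}, minimum ADT: {prob.value * 1e3:.1f} ms")
```

With the assumed numbers (limited BS connectivity), the minimizer keeps the hit ratio well below its cap and pushes part of the traffic to the cloud, matching the qualitative conclusion of the abstract.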