
Pricing-Driven Service Caching and Task Offloading in Mobile Edge Computing

Added by Jia Yan
Publication date: 2020
Language: English





Provided with mobile edge computing (MEC) services, wireless devices (WDs) no longer have to experience long latency when running their desired programs locally, but can instead pay to offload computation tasks to the edge server. Given its limited storage space, it is important for the edge server at the base station (BS) to determine which service programs to cache while meeting and guiding the WDs' offloading decisions. In this paper, we propose an MEC service pricing scheme that coordinates with the service caching decisions and controls the WDs' task offloading behavior. We formulate a two-stage dynamic game of incomplete information to model and analyze the interaction between the BS and multiple associated WDs. Specifically, in Stage I, the BS determines the MEC service caching and announces the service program prices to the WDs, with the objective of maximizing its expected profit under both storage and computation resource constraints. In Stage II, given the prices of different service programs, each WD selfishly makes its offloading decision to minimize its individual service delay and cost, without knowing the other WDs' desired program types or local execution delays. Despite the incomplete information about the WDs and the coupling of all the WDs' offloading decisions, we derive the optimal threshold-based offloading policy that can be easily adopted by the WDs in Stage II at the Bayesian equilibrium. Then, by predicting the WDs' offloading equilibrium, we jointly optimize the BS's pricing and service caching in Stage I via a low-complexity algorithm. In particular, we study both uniform and differentiated pricing schemes. For differentiated pricing, we prove that the same price should be charged to cached programs of the same workload.
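To make the threshold structure of the Stage II policy concrete, here is a minimal sketch (not the paper's derived Bayesian-equilibrium expression): a WD offloads only when its local execution delay exceeds a threshold that grows with the announced price and the expected edge-side delay. All names and the specific threshold formula below are illustrative assumptions.

```python
# Minimal sketch of a threshold-based offloading rule, assuming a WD
# offloads only if its local execution delay exceeds a price-dependent
# threshold. The threshold formula is illustrative, not the paper's
# derived Bayesian-equilibrium expression.
from dataclasses import dataclass

@dataclass
class WirelessDevice:
    local_delay: float       # delay of running the task locally (s)
    program_cached: bool     # whether the BS cached this WD's program
    price: float             # price announced by the BS for the program
    edge_delay: float        # expected offloading + edge execution delay (s)
    delay_cost_weight: float = 1.0  # converts seconds into monetary cost

def should_offload(wd: WirelessDevice) -> bool:
    """Offload iff the priced edge option beats local execution."""
    if not wd.program_cached:
        return False  # uncached programs cannot be served at the edge
    # Threshold: expected edge delay plus the price expressed in delay units.
    threshold = wd.edge_delay + wd.price / wd.delay_cost_weight
    return wd.local_delay > threshold

# Example: local run takes 2.0 s, cached program priced at 0.5 with an
# expected edge delay of 0.8 s, so the WD chooses to offload.
wd = WirelessDevice(local_delay=2.0, program_cached=True,
                    price=0.5, edge_delay=0.8)
print(should_offload(wd))  # True
```

At the Bayesian equilibrium described above, the actual threshold would also reflect the congestion created by the other WDs' offloading decisions, which each WD can only anticipate in distribution.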



Related Research

While mobile edge computing (MEC) alleviates the computation and power limitations of mobile devices, additional latency is incurred when offloading tasks to remote MEC servers. In this work, the power-delay tradeoff in the context of task offloading is studied in a multi-user MEC scenario. In contrast with current system designs relying on average metrics (e.g., the average queue length and average latency), a novel network design is proposed in which latency and reliability constraints are taken into account. This is done by imposing a probabilistic constraint on users' task queue lengths and invoking results from extreme value theory to characterize the occurrence of low-probability events in terms of queue length (or queuing delay) violation. The problem is formulated as a computation and transmit power minimization subject to latency and reliability constraints, and solved using tools from Lyapunov stochastic optimization. Simulation results demonstrate the effectiveness of the proposed approach, while examining the power-delay tradeoff and the computational resources required for various computation intensities.
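As a hedged illustration of the extreme-value-theory step mentioned above (not the paper's exact formulation), excesses of the queue length over a high threshold can be fit to a generalized Pareto distribution, whose tail then estimates the probability of rare queue-length (or queuing-delay) violations. The sample trace and threshold choice below are made up for illustration.

```python
# Sketch: characterize rare queue-length violations with extreme value
# theory. Excesses over a high threshold are fit to a generalized Pareto
# distribution (Pickands-Balkema-de Haan); the fitted tail then yields
# an estimate of P(queue length > q) for q beyond the threshold.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)
queue_samples = rng.exponential(scale=10.0, size=100_000)  # stand-in queue trace

threshold = np.quantile(queue_samples, 0.95)                # high threshold u
excesses = queue_samples[queue_samples > threshold] - threshold
shape, _, scale = genpareto.fit(excesses, floc=0.0)

def violation_prob(q: float) -> float:
    """Estimated P(queue length > q) for q above the threshold."""
    p_exceed_u = np.mean(queue_samples > threshold)
    return p_exceed_u * genpareto.sf(q - threshold, shape, loc=0.0, scale=scale)

print(violation_prob(threshold + 30.0))
```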
This letter studies an ultra-reliable low-latency communication problem focusing on a vehicular edge computing network in which vehicles either fetch and synthesize images recorded by surveillance cameras or acquire the synthesized image from an edge computing server. The notion of risk sensitivity in financial mathematics is leveraged to define a reliability measure, and the studied problem is formulated as a risk minimization problem for each vehicle's end-to-end (E2E) task fetching and offloading delays. Specifically, by resorting to a joint utility and policy estimation-based learning algorithm, a distributed risk-sensitive solution for task fetching and offloading is proposed. Simulation results show that the proposed solution achieves performance improvements of up to 40% variance reduction and a steeper distribution tail of the E2E delay over an average-based baseline.
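As one concrete reading of the risk-sensitive notion borrowed from financial mathematics (the letter's exact reliability measure may differ), the entropic risk of the E2E delay penalizes the distribution tail more heavily than the mean, so minimizing it trims delay variance and tail length. The parameter rho and the sample delays below are illustrative.

```python
# Sketch of a risk-sensitive delay objective: the entropic risk measure
# (1/rho) * log E[exp(rho * delay)] weights large delays more heavily
# than the plain average, so minimizing it shortens the delay tail.
import numpy as np

def entropic_risk(delays: np.ndarray, rho: float) -> float:
    """Risk-sensitive summary of a delay sample; approaches the mean as rho -> 0."""
    return np.log(np.mean(np.exp(rho * delays))) / rho

rng = np.random.default_rng(1)
e2e_delays = rng.gamma(shape=2.0, scale=0.05, size=10_000)  # stand-in E2E delays (s)
print(np.mean(e2e_delays), entropic_risk(e2e_delays, rho=5.0))
```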
Recently, along with the rapid development of mobile communication technology, edge computing theory and techniques have been attracting more and more attention from researchers and engineers worldwide, since they can bridge the gap between cloud capacity and device requirements at the network edge, thereby accelerating content delivery and improving the quality of mobile services. In order to bring more intelligence to edge systems than traditional optimization methods allow, and building on current deep learning techniques, we propose to integrate Deep Reinforcement Learning and a Federated Learning framework with mobile edge systems to optimize mobile edge computing, caching, and communication. To this end, we design the In-Edge AI framework, which exploits the collaboration among devices and edge nodes to exchange learning parameters for better training and inference of the models, carrying out dynamic system-level optimization and application-level enhancement while reducing unnecessary system communication load. In-Edge AI is evaluated and shown to achieve near-optimal performance with relatively low learning overhead, while remaining cognitive and adaptive to the mobile communication system. Finally, we discuss several related challenges and opportunities that point toward a promising future for In-Edge AI.
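To make the parameter-exchange step concrete, here is a generic federated-averaging sketch (not the In-Edge AI implementation itself): the edge node aggregates locally trained model weights, weighted by each device's data volume.

```python
# Sketch of the parameter-exchange step in federated learning at the
# edge: each device trains locally, uploads its model weights, and the
# edge node aggregates them weighted by local data size (FedAvg-style).
import numpy as np

def federated_average(device_weights: list[np.ndarray],
                      sample_counts: list[int]) -> np.ndarray:
    """Weighted average of per-device model parameters."""
    total = sum(sample_counts)
    return sum(w * (n / total) for w, n in zip(device_weights, sample_counts))

# Example: three devices holding different amounts of local data.
weights = [np.array([0.1, 0.2]), np.array([0.3, 0.1]), np.array([0.2, 0.4])]
counts = [100, 300, 600]
print(federated_average(weights, counts))
```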
Mobile edge computing (MEC) is proposed to boost high-efficiency, time-sensitive 5G applications. However, microbursts may occur even in lightly loaded scenarios, leading to non-deterministic service latency (i.e., unpredictable delay or delay variation) and hence hindering the deployment of MEC. Deterministic IP networking (DIP) has been proposed to provide bounded latency and high reliability in large-scale networks. Nevertheless, the direct migration of DIP into the MEC network is non-trivial owing to its original design for Ethernet with homogeneous devices. Meanwhile, DIP also faces challenges in network throughput and scheduling flexibility. In this paper, we delve into the adoption of DIP for MEC networks and the relevant design aspects. A deterministic MEC (D-MEC) network is proposed to deliver deterministic service (i.e., MEC service with bounded service latency). In the D-MEC network, two mechanisms, cycle mapping and cycle shifting, are designed to enable: (i) seamless and deterministic transmission over heterogeneous underlying resources; and (ii) traffic shaping at the edges to improve resource utilization. We also formulate a joint configuration problem to maximize the network throughput with deterministic QoS guarantees. Extensive simulations verify that the proposed D-MEC network can achieve deterministic MEC service, even in highly loaded scenarios.
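As a rough, assumption-laden illustration of the cycle-mapping idea (the actual D-MEC mechanism is more involved and also covers cycle shifting), a packet sent in an upstream node's cycle can be mapped to the first downstream cycle that starts after its arrival, even when the two nodes use heterogeneous cycle lengths; this is what bounds the per-hop latency. All parameters below are illustrative.

```python
# Rough sketch of cycle mapping between two nodes with heterogeneous
# cycle lengths: a packet sent in upstream cycle k arrives after a link
# delay and is scheduled into the next full downstream cycle, which
# bounds its per-hop latency.
import math

def map_cycle(upstream_cycle: int,
              upstream_cycle_len: float,    # s, e.g. 10e-6
              downstream_cycle_len: float,  # s, may differ (heterogeneous)
              link_delay: float) -> int:
    """Index of the downstream cycle in which the packet is forwarded."""
    # Latest possible arrival time of a packet sent in this upstream cycle.
    arrival = (upstream_cycle + 1) * upstream_cycle_len + link_delay
    # Forward in the first downstream cycle that starts after arrival.
    return math.ceil(arrival / downstream_cycle_len)

# Example: 10 us upstream cycles, 20 us downstream cycles, 5 us link delay.
print(map_cycle(upstream_cycle=3, upstream_cycle_len=10e-6,
                downstream_cycle_len=20e-6, link_delay=5e-6))
```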
Recently, Mobile-Edge Computing (MEC) has arisen as an emerging paradigm that extends cloud-computing capabilities to the edge of the Radio Access Network (RAN) by deploying MEC servers right at the Base Stations (BSs). In this paper, we envision a collaborative joint caching and processing strategy for on-demand video streaming in MEC networks. Our design aims at enhancing the widely used Adaptive BitRate (ABR) streaming technology, where multiple bitra
