
GreenDelivery: Proactive Content Caching and Push with Energy-Harvesting-based Small Cells

Added by Sheng Zhou
Publication date: 2015
Research language: English





The explosive growth of mobile multimedia traffic calls for scalable wireless access with high quality of service and low energy cost. Motivated by emerging energy harvesting communications and the trend of caching multimedia contents at the access edge and user terminals, we propose a paradigm-shift framework, namely GreenDelivery, enabling efficient content delivery with energy-harvesting-based small cells. To resolve the two-dimensional randomness of energy harvesting and content request arrivals, proactive caching and push are jointly optimized with respect to the content popularity distribution and battery states. We thus develop a novel way of understanding the interplay between content and energy over time and space. Case studies show a substantial reduction in macro BS activity and, consequently, in the energy drawn from the power grid. Research issues of the proposed GreenDelivery framework are also discussed.
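The interplay the abstract describes can be pictured with a small decision heuristic: keep harvested energy for on-demand service, and spend only the surplus above a battery threshold on pushing the most popular uncached contents. The sketch below is only an illustration of that idea, not the paper's algorithm; the Zipf exponent, battery capacity, thresholds, and energy costs are assumed values.

```python
# Minimal sketch of the caching/push idea (illustrative assumptions throughout).
ZIPF_EXP = 0.8        # assumed skew of content popularity
N_CONTENTS = 100      # assumed library size
BATTERY_CAP = 20      # assumed battery capacity (energy units)
PUSH_COST = 2         # assumed energy to push one content proactively
SERVE_COST = 1        # assumed energy to serve one request on demand
PUSH_THRESHOLD = 10   # push only when stored energy exceeds this level

# Zipf popularity distribution over the content library.
weights = [1.0 / (r ** ZIPF_EXP) for r in range(1, N_CONTENTS + 1)]
total = sum(weights)
popularity = [w / total for w in weights]

def step(battery, cached, harvested, request):
    """One decision slot: harvest energy, optionally push, then serve the request."""
    battery = min(BATTERY_CAP, battery + harvested)

    # Proactive push: convert surplus harvested energy into cached contents,
    # most popular first.
    while battery > PUSH_THRESHOLD + PUSH_COST:
        candidates = [c for c in range(N_CONTENTS) if c not in cached]
        if not candidates:
            break
        cached.add(max(candidates, key=lambda c: popularity[c]))
        battery -= PUSH_COST

    # Reactive service: a cached request costs nothing; otherwise serve from
    # the small cell if energy allows, else fall back to the macro BS, which
    # draws grid power -- exactly what GreenDelivery tries to avoid.
    if request in cached:
        return battery, "local cache"
    if battery >= SERVE_COST:
        return battery - SERVE_COST, "small cell"
    return battery, "macro BS"

# Example slot: empty user cache, 5 energy units harvested, request for content 3.
print(step(battery=8, cached=set(), harvested=5, request=3))
```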



Related research

Motivated by the recent development of energy harvesting communications and the trend of caching and pushing multimedia contents at the access edge and user terminals, this paper considers how to design an effective push mechanism for energy-harvesting-powered small-cell base stations (SBSs) in heterogeneous networks. The problem is formulated as a Markov decision process that optimizes the push policy based on the battery energy, user request, and content popularity states to maximize the service capability of the SBS. We analyze the problem extensively and propose an effective policy iteration algorithm to find the optimal policy. The numerical results show that the optimal policy exhibits a state-dependent, threshold-based structure. Moreover, the optimal push policy achieves more than a 50% performance gain over the non-push policy.
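Policy iteration itself is a standard routine; the toy below illustrates its mechanics on a stripped-down battery-only state space (the paper's state also tracks user requests and content popularity, and its reward models service capability). All transition probabilities, costs, and the discount factor are assumptions.

```python
# Toy policy iteration on a battery-only MDP (illustrative numbers only).
import numpy as np

B = 5             # battery levels 0..B
P_ARRIVAL = 0.6   # assumed probability of harvesting one energy unit per slot
PUSH_COST = 2     # assumed energy cost of one push
GAMMA = 0.95      # discount factor
ACTIONS = (0, 1)  # 0 = stay idle, 1 = push

def transitions(b, a):
    """List of (probability, next battery level, reward) for state b and action a."""
    can_push = (a == 1 and b >= PUSH_COST)
    reward = 1.0 if can_push else 0.0
    b_after = b - PUSH_COST if can_push else b
    return [(P_ARRIVAL, min(B, b_after + 1), reward),
            (1.0 - P_ARRIVAL, b_after, reward)]

def policy_iteration():
    policy = np.zeros(B + 1, dtype=int)
    while True:
        # Policy evaluation: solve (I - gamma * P_pi) V = R_pi exactly.
        P, R = np.zeros((B + 1, B + 1)), np.zeros(B + 1)
        for b in range(B + 1):
            for prob, nb, r in transitions(b, policy[b]):
                P[b, nb] += prob
                R[b] += prob * r
        V = np.linalg.solve(np.eye(B + 1) - GAMMA * P, R)
        # Policy improvement: greedy one-step lookahead on V.
        new_policy = np.array([
            max(ACTIONS, key=lambda a: sum(p * (r + GAMMA * V[nb])
                                           for p, nb, r in transitions(b, a)))
            for b in range(B + 1)])
        if np.array_equal(new_policy, policy):
            return policy, V
        policy = new_policy

policy, V = policy_iteration()
print("action per battery level:", policy)  # in this toy, push whenever b >= PUSH_COST
```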
Motivated by the rapid development of energy harvesting technology and content-aware communication in access networks, this paper considers the push mechanism design in small-cell base stations (SBSs) powered by renewable energy. A user request can be satisfied by either push or unicast from the SBS. If the SBS cannot handle the request, the user is blocked by the SBS and is served by the macro-cell BS (MBS) instead, which typically consumes more energy. We aim to minimize the ratio of user requests blocked by the SBS. With finite battery capacity, a Markov decision process based problem is formulated, and the optimal policy is found by dynamic programming (DP). Two threshold-based policies are proposed, the push-only threshold-based (POTB) policy and the energy-efficient threshold-based (EETB) policy, and their closed-form blocking probabilities with infinite battery capacity are derived. Numerical results show that the proposed policies outperform the conventional non-push policy when the content popularity changes slowly or the content request rate is high, and that they can achieve the performance of the greedy optimal threshold-based (GOTB) policy. In addition, the performance gap between the threshold-based policies and the DP optimal policy is small when the energy arrival rate is low or the request rate is high.
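The blocked-request ratio and the threshold idea can be made concrete with a short Monte Carlo sketch. This is not the paper's model, DP solution, or closed-form analysis; the arrival probabilities, energy costs, cache-hit probability, and threshold value below are assumed purely for illustration.

```python
# Monte Carlo sketch of the blocked-request ratio (illustrative model only).
import random

SLOTS = 100_000
P_ENERGY = 0.4          # assumed per-slot energy arrival probability
P_REQUEST = 0.5         # assumed per-slot content request probability
P_HIT_AFTER_PUSH = 0.3  # assumed chance a request is covered by pushed content
BATTERY_CAP = 10
UNICAST_COST = 1
PUSH_COST = 1

def blocked_ratio(push_threshold):
    """Fraction of requests the SBS cannot serve and hands over to the MBS."""
    battery, blocked, requests, pushed = 0, 0, 0, False
    for _ in range(SLOTS):
        battery = min(BATTERY_CAP, battery + (random.random() < P_ENERGY))
        # Push opportunistically when stored energy is above the threshold.
        if push_threshold is not None and battery > push_threshold:
            battery -= PUSH_COST
            pushed = True
        if random.random() < P_REQUEST:
            requests += 1
            if pushed and random.random() < P_HIT_AFTER_PUSH:
                pushed = False           # request already satisfied by pushed content
            elif battery >= UNICAST_COST:
                battery -= UNICAST_COST  # unicast from the SBS
            else:
                blocked += 1             # SBS blocks; the MBS serves the user
    return blocked / requests

# Whether pushing helps depends on the rates and the hit probability,
# which is the regime analysis the paper carries out.
print("non-push policy :", blocked_ratio(None))
print("threshold policy:", blocked_ratio(9))
```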
Wireless communication enabled by unmanned aerial vehicles (UAVs) has emerged as an appealing technology for many application scenarios in future wireless systems. However, the limited endurance of UAVs greatly hinders the practical implementation of UAV-enabled communications. To overcome this issue, this paper proposes a novel scheme for UAV-enabled communications by utilizing the promising technique of proactive caching at the users. Specifically, we focus on content-centric communication systems, where a UAV is dispatched to serve a group of ground nodes (GNs) with random and asynchronous requests for files drawn from a given set. With the proposed scheme, at the beginning of each operation period, the UAV proactively transmits the files to a subset of selected GNs that cooperatively cache all the files in the set. As a result, when requested, a file can be retrieved by each GN either directly from its local cache or from its nearest neighbor that has cached the file via device-to-device (D2D) communications. It is revealed that there exists a fundamental trade-off between the file caching cost, which is the total time required for the UAV to transmit the files to their designated caching GNs, and the file retrieval cost, which is the average time required for serving one file request. To characterize this trade-off, we formulate an optimization problem to minimize the weighted sum of the two costs, via jointly designing the file caching policy, the UAV trajectory, and the communication scheduling. As the formulated problem is NP-hard in general, we propose efficient algorithms to find high-quality approximate solutions for it. Numerical results are provided to corroborate our study and show the great potential of proactive caching for overcoming the limited endurance issue in UAV-enabled communications.
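The caching-versus-retrieval trade-off can be illustrated with a toy one-dimensional instance: pushing files to more ground nodes raises the caching cost but shortens the average D2D retrieval distance. The sketch below evaluates that weighted-sum objective by brute force over a handful of nodes; the positions, times, and weight are assumed values, and it ignores the UAV trajectory and scheduling that the paper jointly optimizes.

```python
# Toy 1-D illustration of the weighted caching-vs-retrieval cost (assumed values).
import itertools

GN_POS = [0.0, 1.0, 2.0, 3.0, 4.0]  # assumed ground-node positions (km)
N_FILES = 2
UAV_TIME_PER_FILE = 1.0             # assumed UAV transmission time per file
D2D_TIME_PER_KM = 0.5               # assumed D2D retrieval time per km of distance
WEIGHT = 0.3                        # weight on caching cost vs. retrieval cost

def weighted_cost(caching_set):
    """Weighted sum of file caching cost and average file retrieval cost."""
    # Caching cost: the UAV pushes every file to each selected caching GN.
    caching_cost = UAV_TIME_PER_FILE * N_FILES * len(caching_set)
    # Retrieval cost: a GN fetches from its own cache (free) or the nearest caching GN.
    distances = [min(abs(g - GN_POS[c]) for c in caching_set) for g in GN_POS]
    retrieval_cost = D2D_TIME_PER_KM * sum(distances) / len(GN_POS)
    return WEIGHT * caching_cost + (1 - WEIGHT) * retrieval_cost

# Exhaustive search over caching subsets; the paper needs approximate
# algorithms because the general joint problem is NP-hard.
subsets = (s for r in range(1, len(GN_POS) + 1)
           for s in itertools.combinations(range(len(GN_POS)), r))
best = min(subsets, key=weighted_cost)
print("caching GNs:", best, "weighted cost:", round(weighted_cost(best), 2))
```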
With the proliferation of latency-critical applications, the fog-radio network (FRAN) has been envisioned as a paradigm shift enabling distributed deployment of cloud-clone facilities at the network edge. In this paper, we consider proactive caching for a one-user, one-access-point (AP) fog computing system over a finite time horizon, in which consecutive tasks of the same type of application are temporally correlated. Under the assumption of predictable task-input lengths, we formulate a long-term weighted-sum energy minimization problem with three-slot correlation to jointly optimize computation offloading policies and caching decisions subject to stringent per-slot deadline constraints. The formulated problem is hard to solve due to its mixed-integer non-convexity. To tackle this challenge, we first assume that task-related information is perfectly known a priori and provide an offline solution leveraging the technique of semi-definite relaxation (SDR), which serves as a theoretical upper bound. Next, based on the offline solution, we propose a sliding-window based online algorithm that works under arbitrarily distributed prediction errors. Finally, the advantages of computation caching and of the proposed algorithm are verified by numerical examples in comparison with several benchmarks.
Caching at wireless edge nodes is a promising way to boost spatial and spectral efficiency by relieving networks of content-related traffic. Coded caching, originally introduced by Maddah-Ali and Niesen, significantly speeds up communication by transmitting multicast messages that are simultaneously useful to multiple users. Most prior works on coded caching assume that each user may request any content in the library. However, in many applications the users are interested only in a limited set of content items that depends on their location. For example, visitors in a museum may stream audio and video related to the artworks in the room they are visiting, or assisted self-driving vehicles may access super-high-definition maps of the area through which they are travelling. Motivated by these considerations, this paper formulates the coded caching problem for location-based content with edge cache nodes. The considered problem includes a content server with access to N location-based files, K edge cache nodes located in different regions, and K users, each of which is in the serving region of one cache node and can retrieve the cached content of this cache node with negligible cost. Depending on its location, each user requests a file only from a location-dependent subset of the library. The objective is to minimize the worst-case load transmitted from the content server over all possible demands. We propose a highly non-trivial converse bound under uncoded cache placement, which shows that a simple achievable scheme is optimal. In addition, this achievable scheme is order optimal within a factor of 3 in general. Finally, we extend the coded caching problem for location-based content to the multiaccess coded caching topology, where each user is connected to the L nearest cache nodes. When $L \geq 2$, we characterize the exact optimality of the worst-case load.
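The coded-multicast gain the abstract refers to is easiest to see in the original Maddah-Ali and Niesen setting. The toy below is that baseline scheme, not the paper's location-based variant: three users with caches of one file each are served with half the uncoded load.

```python
# Worked toy instance of Maddah-Ali--Niesen coded multicast (baseline scheme).
from itertools import combinations

N, K = 3, 3
# Placement: split each file n into K subfiles (n, 0), (n, 1), (n, 2);
# user k caches subfile (n, k) of every file n (cache size = 1 file).
cache = {k: {(n, k) for n in range(N)} for k in range(K)}

# Demands: user k requests file k.
demand = {0: 0, 1: 1, 2: 2}

# Delivery: for every pair of users {i, j}, multicast the XOR of
# subfile (demand[i], j) and subfile (demand[j], i). User i already holds
# (demand[j], i) in its cache, so it recovers its missing subfile, and vice versa.
transmissions = []
for i, j in combinations(range(K), 2):
    part_for_i = (demand[i], j)   # needed by user i, cached at user j
    part_for_j = (demand[j], i)   # needed by user j, cached at user i
    assert part_for_i in cache[j] and part_for_j in cache[i]
    transmissions.append((part_for_i, part_for_j))  # sent as a single XOR

coded_load = len(transmissions) / K   # 3 transmissions of 1/3 file each
uncoded_load = K * (K - 1) / K        # each user still misses 2/3 of its file
print(f"coded load: {coded_load:.2f} files vs uncoded load: {uncoded_load:.2f} files")
```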