
Competitive Caching of Contents in 5G Edge Cloud Networks

Publication date: 2016
Language: English





The surge of mobile data traffic forces network operators to cope with capacity shortage. The deployment of small cells in 5G networks is meant to reduce latency and backhaul traffic and to increase radio access capacity. In this context, mobile edge computing technology will be used to manage dedicated cache space in the radio access network. Mobile network operators will thus be able to provision over-the-top (OTT) content providers with new caching services that enhance the quality of experience of their customers on the move. In turn, the cache memory in the mobile edge network becomes a shared resource. Hence, we study a competitive caching scheme in which contents are stored at a price set by the mobile network operator. We first formulate a resource allocation problem for a tagged content provider seeking to minimize its expected missed cache rate. The optimal caching policy is derived by accounting for the popularity and availability of contents, the spatial distribution of small cells, and the caching strategies of competing content providers. It is shown to induce a specific order on the contents to be cached, based on their popularity and availability. Next, we study a game among content providers in the form of a generalized Kelly mechanism with bounded strategy sets and heterogeneous players, and we prove existence and uniqueness of the Nash equilibrium. Finally, extensive numerical results validate the model and characterize its performance.
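To make the popularity-and-availability ordering concrete, the sketch below ranks contents by expected miss reduction and fills the purchased cache slots greedily. This is an illustration only, not the paper's exact derivation: the Content class, the scoring rule, and the slot-based budget are assumptions introduced here.

from dataclasses import dataclass

@dataclass
class Content:
    name: str
    popularity: float    # probability the content is requested
    availability: float  # probability it can be served without the edge cache

def greedy_cache_selection(contents, purchased_slots):
    """Rank by popularity x unavailability and keep the top-ranked slots."""
    # A cached content avoids a miss when it is requested AND not otherwise
    # available, so popularity * (1 - availability) is the expected gain.
    ranked = sorted(contents,
                    key=lambda c: c.popularity * (1.0 - c.availability),
                    reverse=True)
    return ranked[:purchased_slots]

# Example: with two slots, the order favors popular, hard-to-reach contents.
catalog = [Content("a", 0.5, 0.9), Content("b", 0.3, 0.2), Content("c", 0.2, 0.1)]
print([c.name for c in greedy_cache_selection(catalog, 2)])  # ['b', 'c']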



Related research

119 - Lixing Chen, Jie Xu (2017)
Mobile Edge Computing (MEC) pushes computing functionality away from the centralized cloud to the proximity of data sources, thereby reducing service provision latency and saving backhaul network bandwidth. Although computation offloading has been studied extensively in the literature, service caching is an equally, if not more, important design topic of MEC, yet it receives much less attention. Service caching refers to caching application services and their related data (libraries/databases) in an edge server, e.g. an MEC-enabled Base Station (BS), so that the corresponding computation tasks can be executed there. Since only a small number of services can be cached in a resource-limited edge server at the same time, which services to cache must be decided judiciously to maximize system performance. In this paper, we investigate collaborative service caching in MEC-enabled dense small cell (SC) networks. We propose an efficient decentralized algorithm, called CSC (Collaborative Service Caching), in which a network of small cell BSs optimizes service caching collaboratively to address a number of key challenges in MEC systems, including service heterogeneity, spatial demand coupling, and decentralized coordination. The algorithm is based on parallel Gibbs sampling and exploits the special structure of the problem through graph coloring; it significantly improves time efficiency compared to conventional Gibbs sampling while guaranteeing provable convergence and optimality. CSC is further extended to SC networks with selfish BSs, where a coalitional game is formulated to incentivize collaboration, and a coalition formation algorithm based on merge-and-split rules ensures the stability of the SC coalitions.
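The following is a minimal sketch of the parallel-update idea behind CSC, under assumed interfaces: BSs whose decisions couple are adjacent in a conflict graph, and all BSs sharing a color (mutually non-adjacent) can resample their cached service in the same round without invalidating each other's Gibbs update. greedy_coloring, local_cost, and the state layout are illustrations, not the paper's exact construction.

import math
import random

def greedy_coloring(adjacency):
    """adjacency: dict mapping each BS to the set of its neighbors."""
    colors = {}
    for node in adjacency:
        taken = {colors[nb] for nb in adjacency[node] if nb in colors}
        colors[node] = next(c for c in range(len(adjacency) + 1) if c not in taken)
    return colors

def parallel_gibbs_round(cached, adjacency, services, local_cost, temperature):
    """One round: same-colored BSs resample their cached service via Gibbs."""
    colors = greedy_coloring(adjacency)
    for color in sorted(set(colors.values())):
        # BSs of one color are non-adjacent, so their updates are independent
        # and could run in parallel; they are shown sequentially here.
        for bs in (b for b, c in colors.items() if c == color):
            weights = [math.exp(-local_cost(bs, s, cached) / temperature)
                       for s in services]
            r = random.uniform(0.0, sum(weights))
            for service, w in zip(services, weights):
                r -= w
                if r <= 0.0:
                    cached[bs] = service
                    break
    return cached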
Fog Radio Access Network (F-RAN) architectures can leverage both cloud processing and edge caching for content delivery to users. To this end, an F-RAN utilizes caches at the edge nodes (ENs) and fronthaul links connecting a cloud processor to the ENs. Assuming time-invariant content popularity, existing information-theoretic analyses of content delivery in F-RANs rely on offline caching with separate content placement and delivery phases. In contrast, this work focuses on the scenario in which the set of popular contents is time-varying, necessitating online replenishment of the ENs' caches alongside delivery of the requested files. The analysis is centered on the characterization of the long-term Normalized Delivery Time (NDT), which captures the temporal dependence of the coding latencies accrued across multiple time slots in the high signal-to-noise ratio regime. Online edge caching and delivery schemes are investigated for both serial and pipelined transmission modes across the fronthaul and edge segments. Analytical results demonstrate that, in the presence of a time-varying content popularity, the rate of the fronthaul links sets a fundamental limit on the long-term NDT of the F-RAN system. The analytical results are further verified by numerical simulation, yielding important design insights.
In a Fog Radio Access Network (F-RAN) architecture, edge nodes (ENs), such as base stations, are equipped with limited-capacity caches, as well as with fronthaul links that can support given transmission rates from a cloud processor. Existing information-theoretic analyses of content delivery in F-RANs have focused on offline caching with separate content placement and delivery phases. In contrast, this work considers an online caching set-up, in which the set of popular files is time-varying and both cache replenishment and content delivery can take place in each time slot. The analysis is centered on the characterization of the long-term Normalized Delivery Time (NDT), which captures the temporal dependence of the coding latencies accrued across multiple time slots in the high signal-to-noise ratio regime. Online caching and delivery schemes based on reactive and proactive caching are investigated, and their performance is compared to that of optimal offline caching schemes, both analytically and via numerical results.
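Both of the preceding entries center on the long-term NDT. A common formalization, consistent with these abstracts though not quoted from them, averages the per-slot NDTs over time:

\bar{\delta} = \limsup_{T \to \infty} \frac{1}{T} \sum_{t=1}^{T} \mathbb{E}[\delta_t],

where \delta_t is the NDT accrued in slot t in the high signal-to-noise ratio regime; smaller values correspond to lower delivery latency relative to an interference-free baseline.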
133 - Zhiyuan Wang, Lin Gao, Tong Wang (2020)
In the mobile Internet ecosystem, Mobile Users (MUs) purchase wireless data service from an Internet Service Provider (ISP) to access the Internet and acquire content services of interest (e.g., online games) from a Content Provider (CP). The popularity of intelligent functions (e.g., AI and 3D modeling) increases the computational intensity of content services, putting growing computation pressure on MUs' resource-limited devices. Edge computing service is emerging as a promising approach to alleviate this pressure while preserving quality of service, by offloading some of the MUs' computation tasks to edge (computing) servers deployed at the local network edge. Thus, the Edge Service Provider (ESP), who deploys the edge servers and offers the edge computing service, becomes a new stakeholder in the ecosystem. In this work, we study the economic interactions of MUs, ISP, CP, and ESP in this new ecosystem, where MUs can acquire computation-intensive content services (offered by the CP) and offload some computation tasks, together with the necessary raw input data, to edge servers (deployed by the ESP) through the ISP. We first study an MU's Joint Content Acquisition and Task Offloading (J-CATO) problem, which aims to maximize the MU's long-term payoff. We derive the offline solution with crucial insights, based on which we design an online strategy with provable performance. Then, we study the ESP's edge service monetization problem and propose a pricing policy that achieves a constant fraction of the ex-post optimal revenue, minus a constant loss, for the ESP. Numerical results show that edge computing service can stimulate MUs' content acquisition and improve the payoffs of the MUs, the ISP, and the CP.
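The abstract does not reproduce the J-CATO formulation, so the snippet below only captures the basic trade-off it builds on, with all names and the cost model being assumptions introduced here for illustration: an MU offloads a task when the edge price plus the ISP's data cost for shipping the raw input is below the local computation cost.

def should_offload(local_cost, edge_price, input_bits, isp_price_per_bit):
    """Hypothetical per-task rule: offload iff the total edge-side cost
    (edge computing price + ISP data cost for the raw input) is cheaper."""
    offload_cost = edge_price + input_bits * isp_price_per_bit
    return offload_cost < local_cost

# Example: 2.0 + 1e6 * 1e-6 = 3.0 < 5.0, so the task is offloaded.
print(should_offload(local_cost=5.0, edge_price=2.0,
                     input_bits=1e6, isp_price_per_bit=1e-6))  # True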
In this paper, we investigate the performance of caching schemes based on fountain codes in a heterogeneous satellite network. We consider multiple cache-aided hubs connected to a geostationary satellite through backhaul links. With the aim of reducing the average number of transmissions over the satellite backhaul link, we propose a caching scheme based on fountain codes. We derive a simple analytical expression for the average backhaul transmission rate and provide a tight upper bound on it. Furthermore, we show that the performance of the fountain-code-based caching scheme is similar to that of a caching scheme based on maximum distance separable (MDS) codes.
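A back-of-envelope model of why rateless caching helps, which is an illustration rather than the paper's exact expression: with a fountain code, any cached coded symbol is useful for any request of its file, so a request for file f needing k_f symbols costs max(0, k_f - m_f) backhaul symbols when m_f coded symbols are cached at the hub. The function name and tuple layout are assumptions.

def avg_backhaul_symbols(files):
    """files: iterable of (popularity p_f, symbols_needed k_f, symbols_cached m_f).
    Returns the expected number of symbols fetched over the satellite backhaul."""
    return sum(p * max(0, k - m) for p, k, m in files)

# Example: two files, the popular one mostly cached.
print(avg_backhaul_symbols([(0.7, 100, 80), (0.3, 100, 10)]))  # 0.7*20 + 0.3*90 = 41.0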