
Online Edge Caching in Fog-Aided Wireless Network

Posted by Seyyed Mohammadreza Azimi
Publication date: 2017
Research field: Informatics Engineering
Paper language: English





In a Fog Radio Access Network (F-RAN) architecture, edge nodes (ENs), such as base stations, are equipped with limited-capacity caches, as well as with fronthaul links that can support given transmission rates from a cloud processor. Existing information-theoretic analyses of content delivery in F-RANs have focused on offline caching with separate content placement and delivery phases. In contrast, this work considers an online caching set-up, in which the set of popular files is time-varying and both cache replenishment and content delivery can take place in each time slot. The analysis is centered on the characterization of the long-term Normalized Delivery Time (NDT), which captures the temporal dependence of the coding latencies accrued across multiple time slots in the high signal-to-noise ratio regime. Online caching and delivery schemes based on reactive and proactive caching are investigated, and their performance is compared to optimal offline caching schemes both analytically and via numerical results.
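For context, a common formalization of the NDT metric used in this line of work is sketched below; the exact normalization and the definition of the long-term average may differ slightly from the authors'.

```latex
% Per-slot NDT: expected delivery latency for L-bit files, normalized by the
% time L / \log P needed by an interference-free link at high SNR P.
\delta_t \;=\; \lim_{P \to \infty} \lim_{L \to \infty}
               \frac{\mathbb{E}[T_t]}{L / \log P}

% Long-term NDT: time average of the per-slot NDTs, which captures the
% coupling across slots introduced by online cache replenishment.
\bar{\delta} \;=\; \limsup_{T \to \infty} \frac{1}{T} \sum_{t=1}^{T} \delta_t
```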


Read also

Fog Radio Access Network (F-RAN) architectures can leverage both cloud processing and edge caching for content delivery to the users. To this end, F-RAN utilizes caches at the edge nodes (ENs) and fronthaul links connecting a cloud processor to ENs. Assuming time-invariant content popularity, existing information-theoretic analyses of content delivery in F-RANs rely on offline caching with separate content placement and delivery phases. In contrast, this work focuses on the scenario in which the set of popular content is time-varying, hence necessitating the online replenishment of the ENs' caches along with the delivery of the requested files. The analysis is centered on the characterization of the long-term Normalized Delivery Time (NDT), which captures the temporal dependence of the coding latencies accrued across multiple time slots in the high signal-to-noise ratio regime. Online edge caching and delivery schemes are investigated for both serial and pipelined transmission modes across fronthaul and edge segments. Analytical results demonstrate that, in the presence of a time-varying content popularity, the rate of fronthaul links sets a fundamental limit to the long-term NDT of the F-RAN system. Analytical results are further verified by numerical simulation, yielding important design insights.
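As a purely illustrative complement, the Python sketch below simulates a time-varying popular set and compares a reactive policy (fetch over the fronthaul only after a cache miss) with an idealized proactive policy (replenish before requests arrive). The popularity-drift model, the warm-start cache, the perfect popularity knowledge of the proactive policy, and the absence of a cache-size constraint are simplifications made here, not the paper's system model; each fetch stands in for traffic that must be carried over the fronthaul.

```python
import random

def simulate(num_slots=10_000, num_popular=100, change_prob=0.2,
             proactive=False, seed=0):
    """Toy model: each slot, every popular file is independently replaced by
    a fresh file with probability `change_prob`.  A reactive policy fetches a
    file over the fronthaul only after a cache miss; a proactive policy
    (assumed here to know the updated popular set) replenishes before the
    request arrives.  Returns (miss rate, average fronthaul fetches/slot)."""
    rng = random.Random(seed)
    next_id = num_popular
    popular = set(range(num_popular))
    cached = set(popular)                      # warm-start assumption
    misses = fetches = 0
    for _ in range(num_slots):
        for f in list(popular):                # popularity drift
            if rng.random() < change_prob:
                popular.discard(f)
                popular.add(next_id)
                next_id += 1
        if proactive:
            new = popular - cached
            fetches += len(new)                # prefetch over the fronthaul
            cached |= new
        request = rng.choice(sorted(popular))  # uniform request this slot
        if request not in cached:
            misses += 1
            fetches += 1                       # reactive fronthaul fetch
            cached.add(request)
        # (cache capacity and evictions are ignored in this sketch)
    return misses / num_slots, fetches / num_slots

if __name__ == "__main__":
    print("reactive  (miss rate, fetches/slot):", simulate(proactive=False))
    print("proactive (miss rate, fetches/slot):", simulate(proactive=True))
```

In this toy set-up the proactive policy eliminates misses at the cost of a much higher per-slot fronthaul load, which is the trade-off that makes the fronthaul rate the binding resource under time-varying popularity.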
Fog Radio Access Network (F-RAN) exploits cached contents at edge nodes (ENs) and fronthaul connections to the cloud for content delivery. Assuming dedicated fronthaul links between the cloud and each EN, previous works analyzed F-RANs using offline or online caching, depending on whether the content popularity is time-invariant or time-varying. An extension considered a multicast fronthaul link connecting the cloud to only two ENs under time-invariant popularity. In contrast, the scope of this work is the case in which a multicast fronthaul link connects an arbitrary number of ENs to the cloud and the content popularity is time-varying. Using the Normalized Delivery Time (NDT) as the performance measure and investigating proactive online caching, analytical results reveal that the power scaling of the fronthaul transmission sets a limit on the performance of the F-RAN.
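At a purely qualitative level, the toy counter below contrasts the per-slot fronthaul load of dedicated versus multicast fronthaul links when all ENs replenish the same set of updated files; this counting model is an assumption made here for illustration and ignores rates, power scaling, and coding.

```python
def fronthaul_transmissions(num_ens: int, files_to_replenish: int,
                            multicast: bool) -> int:
    """Count fronthaul file transmissions per slot in a toy model where
    every EN caches the same set of newly popular files."""
    if multicast:
        # One multicast transmission delivers a file to all ENs at once.
        return files_to_replenish
    # Dedicated fronthaul links: each EN is replenished separately.
    return files_to_replenish * num_ens

# Example: 10 ENs, 5 newly popular files per slot.
print(fronthaul_transmissions(10, 5, multicast=False))  # 50 transmissions
print(fronthaul_transmissions(10, 5, multicast=True))   # 5 transmissions
```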
In this work, we propose a content caching and delivery strategy to maximize throughput capacity in cache-enabled wireless networks. To this end, efficient betweenness (EB), which indicates the ratio of content delivery paths passing through a node, is first defined to capture the impact of content caching and delivery on network traffic load distribution. Aided by EB, throughput capacity is shown to be upper bounded by the minimal ratio of successful delivery probability (SDP) to EB among all nodes. Through effectively matching nodes' EB with their SDP, the proposed strategy improves throughput capacity with low computation complexity. Simulation results show that the gap between the proposed strategy and the optimal one (obtained through exhaustive search) remains below 6%.
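The following sketch illustrates one plausible reading of the EB definition (fraction of delivery paths that traverse a node, endpoints included) together with the stated bound min_v SDP(v)/EB(v); the toy paths, the per-node SDP values, and the treatment of path endpoints are assumptions of this illustration, not the paper's exact definitions.

```python
from collections import Counter

def efficient_betweenness(delivery_paths, nodes):
    """EB(v): fraction of content-delivery paths that pass through node v.
    `delivery_paths` is a list of node sequences (cache node -> requester)."""
    counts = Counter()
    for path in delivery_paths:
        for v in set(path):
            counts[v] += 1
    total = len(delivery_paths)
    return {v: counts[v] / total for v in nodes}

def capacity_upper_bound(eb, sdp):
    """Throughput-capacity upper bound of the form min_v SDP(v) / EB(v),
    taken over the nodes that actually carry traffic (EB(v) > 0)."""
    return min(sdp[v] / eb[v] for v in eb if eb[v] > 0)

# Toy 4-node example with hypothetical delivery paths and SDP values.
nodes = ["a", "b", "c", "d"]
paths = [["a", "b", "d"], ["c", "b", "d"], ["a", "b", "c"]]
sdp = {"a": 0.9, "b": 0.8, "c": 0.9, "d": 0.95}   # assumed per-node SDP
eb = efficient_betweenness(paths, nodes)
print(eb)                           # node b lies on every path -> EB(b) = 1.0
print(capacity_upper_bound(eb, sdp))  # bottleneck node b gives the bound 0.8
```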
Optimal caching of files in a content distribution network (CDN) is a problem of fundamental and growing commercial interest. Although many different caching algorithms are in use today, the fundamental performance limits of network caching algorithms from an online learning point of view remain poorly understood to date. In this paper, we resolve this question in the following two settings: (1) a single user connected to a single cache, and (2) a set of users and a set of caches interconnected through a bipartite network. Recently, an online gradient-based coded caching policy was shown to enjoy sub-linear regret. However, due to the lack of known regret lower bounds, the question of the optimality of the proposed policy was left open. In this paper, we settle this question by deriving tight non-asymptotic regret lower bounds in both of the above settings. In addition to that, we propose a new Follow-the-Perturbed-Leader-based uncoded caching policy with near-optimal regret. Technically, the lower bounds are obtained by relating the online caching problem to the classic probabilistic paradigm of balls-into-bins. Our proofs make extensive use of a new result on the expected load in the most populated half of the bins, which might also be of independent interest. We evaluate the performance of the caching policies by experimenting with the popular MovieLens dataset and conclude the paper with design recommendations and a list of open problems.
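A minimal sketch of a Follow-the-Perturbed-Leader uncoded caching policy for the single-user, single-cache setting is given below; the Gaussian perturbation, its sqrt(t) scaling, and the full-information feedback are standard choices in this literature, but they should be read as assumptions of the sketch rather than as the paper's exact policy.

```python
import numpy as np

def ftpl_caching(requests, num_files, cache_size, eta=1.0, seed=0):
    """Follow-the-Perturbed-Leader uncoded caching sketch: keep cumulative
    request counts, add a one-shot Gaussian perturbation scaled by ~sqrt(t),
    and cache the top-C files by perturbed count.  Returns the hit rate."""
    rng = np.random.default_rng(seed)
    counts = np.zeros(num_files)
    gamma = rng.standard_normal(num_files)   # sampled once, reused each slot
    hits = 0
    for t, f in enumerate(requests, start=1):
        scores = counts + eta * np.sqrt(t) * gamma
        cache = np.argpartition(scores, -cache_size)[-cache_size:]
        if f in cache:
            hits += 1
        counts[f] += 1                        # full-information feedback
    return hits / len(requests)

# Toy run with Zipf-like requests over 50 files and a cache of 10 files.
rng = np.random.default_rng(1)
popularity = 1.0 / np.arange(1, 51)
popularity /= popularity.sum()
reqs = rng.choice(50, size=5000, p=popularity)
print("hit rate:", ftpl_caching(reqs, num_files=50, cache_size=10))
```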
A fundamental challenge in wireless heterogeneous networks (HetNets) is to effectively utilize the limited transmission and storage resources in the presence of increasing deployment density and backhaul capacity constraints. To alleviate bottlenecks and reduce resource consumption, we design optimal caching and power control algorithms for multi-hop wireless HetNets. We formulate a joint optimization framework to minimize the average transmission delay as a function of the caching variables and the signal-to-interference-plus-noise ratios (SINR), which are determined by the transmission powers, while explicitly accounting for backhaul connection costs and the power constraints. Using convex relaxation and rounding, we obtain a reduced-complexity formulation (RCF) of the joint optimization problem, which can provide a constant factor approximation to the globally optimal solution. We then solve RCF in two ways: 1) alternating optimization of the power and caching variables by leveraging biconvexity, and 2) joint optimization of power control and caching. We characterize the necessary (KKT) conditions for an optimal solution to RCF, and use strict quasi-convexity to show that the KKT points are Pareto optimal for RCF. We then devise a subgradient projection algorithm to jointly update the caching and power variables, and show that under appropriate conditions, the algorithm converges at a linear rate to a local minimum of RCF, under general SINR conditions. We support our analytical findings with results from extensive numerical experiments.
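To illustrate how a biconvex structure can be exploited by alternating optimization, the toy sketch below alternates between a caching step (under a cache-budget constraint) and a power step on a made-up surrogate delay objective; the channel gains, the delay model, and the solver choices are assumptions of this sketch and not the paper's formulation or algorithm.

```python
import numpy as np
from scipy.optimize import minimize

# Toy surrogate of the average-delay objective (not the paper's exact model):
# EN i serves one user; a cache miss (probability 1 - x_i) adds a fixed
# backhaul delay, and the wireless delay is 1 / log2(1 + SINR_i).
G = np.array([[1.0, 0.1, 0.1],
              [0.1, 1.0, 0.1],
              [0.1, 0.1, 1.0]])          # assumed channel gains
NOISE, BACKHAUL_DELAY, CACHE_BUDGET = 0.1, 2.0, 1.5

def avg_delay(x, p):
    signal = p * np.diag(G)
    interference = G @ p - signal        # cross terms sum_{j != i} G_ij p_j
    sinr = signal / (NOISE + interference)
    return np.mean(1.0 / np.log2(1.0 + sinr) + (1.0 - x) * BACKHAUL_DELAY)

def alternate(num_rounds=20):
    n = G.shape[0]
    x, p = np.full(n, 0.4), np.full(n, 1.0)
    budget = {"type": "ineq", "fun": lambda v: CACHE_BUDGET - v.sum()}
    for _ in range(num_rounds):
        # Caching step: powers fixed, optimize x in [0,1]^n under the budget.
        x = minimize(lambda v: avg_delay(v, p), x, bounds=[(0, 1)] * n,
                     constraints=[budget], method="SLSQP").x
        # Power step: caching fixed, optimize transmit powers in [0.1, 1].
        p = minimize(lambda v: avg_delay(x, v), p, bounds=[(0.1, 1.0)] * n,
                     method="SLSQP").x
    return x, p, avg_delay(x, p)

if __name__ == "__main__":
    x_opt, p_opt, d = alternate()
    print("caching:", np.round(x_opt, 2),
          "powers:", np.round(p_opt, 2), "delay:", round(d, 3))
```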