
LeadCache: Regret-Optimal Caching in Networks

Published by: Abhishek Sinha
Publication date: 2020
Research field: Information Engineering
Paper language: English





We consider a set-valued online prediction problem in the context of network caching. Assume that multiple users are connected to several caches via a bipartite network. At any time slot, each user requests an arbitrary file chosen from a large catalog. A user's request at a slot is met if the requested file is cached in at least one of the caches connected to the user. Our objective is to predict, prefetch, and optimally distribute the files on the caches to maximize the total number of cache hits in an online setting. The problem is non-trivial due to the non-convex and non-smooth nature of the objective function. In this paper, we propose $\texttt{LeadCache}$ - an online caching policy based on the Follow-the-Perturbed-Leader paradigm. We show that the policy is regret-optimal up to a factor of $\tilde{O}(n^{3/8})$, where $n$ is the number of users. We design two efficient implementations of the $\texttt{LeadCache}$ policy, one based on Pipage rounding and the other based on Madow's sampling, each of which makes precisely one call to an LP-solver per iteration. With a Strong-Law-type assumption, we show that the total number of file fetches under $\texttt{LeadCache}$ remains almost surely finite over an infinite horizon. Finally, we derive a tight regret lower bound using results from graph coloring. We conclude that the learning-based $\texttt{LeadCache}$ policy decisively outperforms the known caching policies both theoretically and empirically.
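For intuition, here is a minimal Python sketch of the Follow-the-Perturbed-Leader idea specialized to a single cache, where the perturbed-leader step reduces to caching the files with the highest perturbed cumulative request counts. The full LeadCache policy instead solves a linear-programming relaxation over the bipartite network each slot and rounds the solution via Pipage rounding or Madow's sampling, which this sketch does not attempt. The catalog size, cache size, perturbation schedule, and request process below are hypothetical stand-ins.

import numpy as np

rng = np.random.default_rng(0)

N_FILES = 50        # catalog size (hypothetical)
CACHE_SIZE = 5      # number of files the single cache can hold
HORIZON = 10_000
ETA = 30.0          # perturbation scale; LeadCache tunes this to n and the horizon

counts = np.zeros(N_FILES)            # cumulative request counts theta(t)
gamma = rng.standard_normal(N_FILES)  # one-time Gaussian perturbation
hits = 0

for t in range(1, HORIZON + 1):
    # Perturbed-leader step: cache the top-C files of the perturbed count.
    # (The noise scale grows with t; the paper's exact schedule differs.)
    perturbed = counts + ETA * np.sqrt(t) * gamma
    cache = set(np.argpartition(perturbed, -CACHE_SIZE)[-CACHE_SIZE:].tolist())

    # A request arrives (a Zipf-like popularity here, purely for the demo).
    request = int(rng.zipf(1.5)) % N_FILES
    hits += request in cache
    counts[request] += 1

print(f"hit rate: {hits / HORIZON:.3f}")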




Read also

For ultra-dense networks with wireless backhaul, caching strategy at small base stations (SBSs), usually with limited storage, is critical to meet massive high data rate requests. Since the content popularity profile varies with time in an unknown way, we exploit reinforcement learning (RL) to design a cooperative caching strategy with maximum-distance separable (MDS) coding. We model the MDS coding based cooperative caching as a Markov decision process to capture the popularity dynamics and maximize the long-term expected cumulative traffic load served directly by the SBSs without accessing the macro base station. For the formulated problem, we first find the optimal solution for a small-scale system by embedding the cooperative MDS coding into Q-learning. To cope with the large-scale case, we approximate the state-action value function heuristically. The approximated function includes only a small number of learnable parameters and enables us to propose a fast and efficient action-selection approach, which dramatically reduces the complexity. Numerical results verify the optimality/near-optimality of the proposed RL based algorithms and show the superiority compared with the baseline schemes. They also exhibit good robustness to different environments.
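To make the RL mechanism concrete, the following is a tabular Q-learning loop on a drastically simplified caching MDP; it is not the paper's MDS-coded cooperative formulation (which requires the heuristic state-action value approximation described above), just a toy single-file cache facing Markov-modulated requests. All sizes and dynamics are hypothetical.

import numpy as np

rng = np.random.default_rng(1)

N_FILES = 8            # toy catalog size (hypothetical)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1
STEPS = 50_000

# Toy popularity dynamics: requests follow a Markov chain over files,
# standing in for the unknown, time-varying popularity profile.
P = rng.dirichlet(np.ones(N_FILES) * 0.3, size=N_FILES)

Q = np.zeros((N_FILES, N_FILES))  # state = last request, action = file to cache
s, hits = 0, 0.0
for t in range(STEPS):
    # Epsilon-greedy selection of which single file to cache next.
    a = int(rng.integers(N_FILES)) if rng.random() < EPS else int(np.argmax(Q[s]))
    req = int(rng.choice(N_FILES, p=P[s]))  # next request drawn from the chain
    r = 1.0 if req == a else 0.0            # reward: request served from cache
    # Standard Q-learning update toward the bootstrapped target.
    Q[s, a] += ALPHA * (r + GAMMA * Q[req].max() - Q[s, a])
    hits += r
    s = req

print(f"empirical hit rate: {hits / STEPS:.3f}")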
Federated edge learning (FEEL) is a widely adopted framework for training an artificial intelligence (AI) model distributively at edge devices to leverage their data while preserving their data privacy. The execution of a power-hungry learning task at energy-constrained devices is a key challenge confronting the implementation of FEEL. To tackle the challenge, we propose the solution of powering devices using wireless power transfer (WPT). To derive guidelines on deploying the resultant wirelessly powered FEEL (WP-FEEL) system, this work aims at the derivation of the tradeoff between the model convergence and the settings of power sources in two scenarios: 1) the transmission power and density of power-beacons (dedicated charging stations) if they are deployed, or otherwise 2) the transmission power of a server (access-point). The development of the proposed analytical framework relates the accuracy of distributed stochastic gradient estimation to the WPT settings, the randomness in both communication and WPT links, and the devices' computation capacities. Furthermore, the local computation at devices (i.e., mini-batch size and processor clock frequency) is optimized to efficiently use the harvested energy for gradient estimation. The resultant learning-WPT tradeoffs reveal simple scaling laws of the model-convergence rate with respect to the transferred energy as well as the devices' computational energy efficiencies. The results provide useful guidelines on WPT provisioning to guarantee learning performance. They are corroborated by experimental results using a real dataset.
Inter-operator spectrum sharing in millimeter-wave bands has the potential of substantially increasing the spectrum utilization and providing a larger bandwidth to individual user equipment at the expense of increasing inter-operator interference. Unfortunately, traditional model-based spectrum sharing schemes make idealistic assumptions about inter-operator coordination mechanisms in terms of latency and protocol overhead, while being sensitive to missing channel state information. In this paper, we propose hybrid model-based and data-driven multi-operator spectrum sharing mechanisms, which incorporate model-based beamforming and user association complemented by data-driven model refinements. Our solution has the same computational complexity as a model-based approach but has the major advantage of having substantially less signaling overhead. We discuss how limited channel state information and quantized codebook-based beamforming affect the learning and the spectrum sharing performance. We show that the proposed hybrid sharing scheme significantly improves spectrum utilization under realistic assumptions on inter-operator coordination and channel state information acquisition.
In this paper we investigate the performance of caching schemes based on fountain codes in a heterogeneous satellite network. We consider multiple cache-aided hubs which are connected to a geostationary satellite through backhaul links. With the aim of reducing the average number of transmissions over the satellite backhaul link, we propose the use of a caching scheme based on fountain codes. We derive a simple analytical expression of the average backhaul transmission rate and provide a tight upper bound on it. Furthermore, we show how the performance of the fountain code based caching scheme is similar to that of a caching scheme based on maximum distance separable codes.
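As a rough illustration of why coded caching reduces backhaul load: with a fountain (or MDS) code, any K distinct coded symbols reconstruct a file, so a hub only fetches the shortfall over the satellite link. The sketch below estimates the average backhaul load under a heuristic popularity-proportional symbol allocation and an assumed Zipf profile; it is not the paper's scheme or its analytical expression, and all parameters are hypothetical.

import numpy as np

N_FILES, K = 100, 20     # catalog size; coded symbols needed to decode one file
CACHE_SYMBOLS = 400      # total coded symbols one hub can store (hypothetical)
ZIPF_S = 0.8             # assumed Zipf popularity exponent

pop = np.arange(1, N_FILES + 1, dtype=float) ** -ZIPF_S
pop /= pop.sum()

# Heuristic allocation: cache coded symbols of each file in proportion to its
# popularity, capped at K since K symbols already suffice to decode it.
alloc = np.minimum(np.floor(pop * CACHE_SYMBOLS).astype(int), K)

# A request for file f triggers max(0, K - alloc[f]) backhaul transmissions,
# because any K distinct coded symbols reconstruct the file.
avg_backhaul = float(np.dot(pop, np.maximum(K - alloc, 0)))
print(f"average backhaul transmissions per request: {avg_backhaul:.2f} / {K}")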
We study noisy broadcast networks with local cache memories at the receivers, where the transmitter can pre-store information even before learning the receivers' requests. We mostly focus on packet-erasure broadcast networks with two disjoint sets of receivers: a set of weak receivers with all-equal erasure probabilities and equal cache sizes and a set of strong receivers with all-equal erasure probabilities and no cache memories. We present lower and upper bounds on the capacity-memory tradeoff of this network. The lower bound is achieved by a new joint cache-channel coding idea and significantly improves on schemes that are based on separate cache-channel coding. We discuss how this coding idea could be extended to more general discrete memoryless broadcast channels and to unequal cache sizes. Our upper bound holds for all stochastically degraded broadcast channels. For the described packet-erasure broadcast network, our lower and upper bounds are tight when there is a single weak receiver (and any number of strong receivers) and the cache memory size does not exceed a given threshold. When there is a single weak receiver, a single strong receiver, and two files, then we can strengthen our upper and lower bounds so that they coincide over a wide regime of cache sizes. Finally, we completely characterise the rate-memory tradeoff for general discrete-memoryless broadcast channels with arbitrary cache memory sizes and arbitrary (asymmetric) rates when all receivers always demand exactly the same file.
