
Benefits of Coded Placement for Networks with Heterogeneous Cache Sizes

Posted by: Abdelrahman Ibrahim
Publication date: 2018
Research field: Informatics Engineering
Language: English





In this work, we study coded placement in caching systems where the users have unequal cache sizes and demonstrate its performance advantage. In particular, we propose a caching scheme with coded placement for three-user systems that outperforms the best caching scheme with uncoded placement. In our proposed scheme, users cache both uncoded and coded pieces of the files, and the coded pieces at the users with large memories are decoded using the unicast/multicast signals intended to serve users with smaller memories. Furthermore, we extend the proposed scheme to larger systems and show the reduction in delivery load with coded placement compared to uncoded placement.
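As a point of reference for the comparison made above, the sketch below computes the classic equal-cache delivery load of Maddah-Ali and Niesen, R = K(1 - M/N)/(1 + KM/N), with memory sharing between integer values of t = KM/N. It is only the equal-cache, uncoded-placement baseline; it does not reproduce the paper's heterogeneous coded-placement scheme, and the parameter values are illustrative.

```python
# Baseline only: the Maddah-Ali-Niesen worst-case delivery load for K users
# with EQUAL cache size M (in files) and a library of N files. This is the
# uncoded-placement reference point, not the heterogeneous coded-placement
# scheme proposed in the paper. Parameters below are illustrative.

def mn_delivery_load(K: int, N: int, M: float) -> float:
    """R(t) = (K - t) / (1 + t) at integer t = K*M/N, with memory sharing
    (linear interpolation) between consecutive integer values of t."""
    t = K * M / N
    t_lo, t_hi = int(t), min(int(t) + 1, K)

    def load_at(t_int: int) -> float:
        return (K - t_int) / (1 + t_int)

    if t_lo == t_hi:
        return load_at(t_lo)
    alpha = t_hi - t  # weight on the smaller integer point
    return alpha * load_at(t_lo) + (1 - alpha) * load_at(t_hi)


if __name__ == "__main__":
    K, N = 3, 3  # a three-user toy system, matching the paper's small example
    for M in (0.5, 1.0, 1.5, 2.0):
        print(f"M = {M}: equal-cache baseline load = {mn_delivery_load(K, N, M):.3f}")
```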




Read also

This paper considers a cache-aided device-to-device (D2D) system where the users are equipped with cache memories of different sizes. During low-traffic hours, a server places content in the users' cache memories, knowing that the files requested by the users during peak-traffic hours will have to be delivered by D2D transmissions only. The worst-case D2D delivery load is minimized by jointly designing the uncoded cache placement and the linear coded D2D delivery. Next, a novel lower bound on the D2D delivery load with uncoded placement is proposed and used to explicitly characterize the minimum D2D delivery load (MD2DDL) with uncoded placement for several cases of interest. In particular, having characterized the MD2DDL for equal cache sizes, it is shown that the same delivery load can be achieved in the network with users of unequal cache sizes, provided that the smallest cache size is greater than a certain threshold. The MD2DDL is also characterized in the small cache size regime, the large cache size regime, and the three-user case. Comparisons of the server-based delivery load with the D2D delivery load are provided. Finally, connections and mathematical parallels between cache-aided D2D systems and coded distributed computing (CDC) systems are discussed.
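For orientation only, the following sketch contrasts the well-known equal-cache reference loads for the two delivery modes mentioned above: the server-based coded-caching load (K - t)/(1 + t) with t = KM/N, and the D2D load N/M - 1 achievable with uncoded placement when KM >= N. These are standard equal-cache formulas, not the unequal-cache results of this paper, and the numbers used are hypothetical.

```python
# Equal-cache reference loads only (not the unequal-cache results of the
# paper): the server-based coded-caching load and the D2D load N/M - 1,
# the latter achievable with uncoded placement when K*M >= N.
# Parameter values are hypothetical.

def server_load(K: int, N: int, M: int) -> float:
    """Server-based worst-case load (K - t)/(1 + t), assuming t = K*M/N is an integer."""
    t = K * M // N
    return (K - t) / (1 + t)

def d2d_load(N: int, M: int) -> float:
    """Equal-cache D2D worst-case load N/M - 1, valid when K*M >= N."""
    return N / M - 1

if __name__ == "__main__":
    K, N = 4, 4
    for M in (1, 2, 4):
        print(f"M = {M}: server-based load = {server_load(K, N, M):.2f}, "
              f"D2D load = {d2d_load(N, M):.2f}")
```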
Behzad Asadi, Lawrence Ong, 2018
We address a centralized caching problem with unequal cache sizes. We consider a system with a server of files connected through a shared error-free link to a group of cache-enabled users, where one subgroup has a larger cache size than the other. We propose an explicit caching scheme for the considered system aimed at minimizing the worst-case load over the shared link. As suggested by numerical evaluations, our scheme improves upon the best existing explicit scheme by achieving a lower worst-case load; moreover, our scheme performs within a multiplicative factor of 1.11 of the scheme that can be obtained by solving an optimisation problem in which the number of parameters grows exponentially with the number of users.
We consider a cache-aided wireless device-to-device (D2D) network under the constraint of one-shot delivery, where the placement phase is orchestrated by a central server. We assume that the devices' caches are filled with uncoded data and that the whole file database at the server is made available in the collection of caches. Following this phase, the files requested by the users are serviced by inter-device multicast communication. For such a system setting, we provide the exact characterization of the load-memory trade-off by deriving both the minimum average and the minimum peak sum-loads of the links between devices, for a given individual memory size at the disposal of each user.
Bojie Lv, Rui Wang, Ying Cui, 2020
In this paper, the scheduling of downlink file transmission in one cell with the assistance of cache nodes with finite cache space is studied. Specifically, requesting users arrive randomly, and the base station (BS) reactively multicasts files to the requesting users and selected cache nodes; the latter can offload the traffic in their coverage areas from the BS. We consider the joint optimization of the above file placement and delivery within a finite lifetime, subject to the cache space constraint. Within the lifetime, the allocation of multicast power and symbol number for each file transmission at the BS is formulated as a dynamic programming problem with a random number of stages, for which no existing solutions are available. We develop an asymptotically optimal solution framework by transforming the original problem into an equivalent finite-horizon Markov decision process (MDP) with a fixed number of stages. A novel approximation approach is then proposed to address the curse of dimensionality, where analytical expressions of the approximate value functions are provided. We also derive analytical bounds on the exact value function and the approximation error. The approximate value functions depend on some system statistics, e.g., the distribution of requesting users. A reinforcement learning algorithm is proposed for the scenario where these statistics are unknown.
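The delivery problem above is cast as a finite-horizon MDP and solved stage by stage; the generic sketch below illustrates that solution template (backward induction) on a toy model. The state space, actions, rewards, and transition matrices are made-up placeholders, not the paper's formulation or its approximate value functions.

```python
# Generic finite-horizon backward induction, shown only to illustrate the kind
# of dynamic program the abstract refers to; states, actions, rewards and the
# transition model below are toy placeholders, not the paper's formulation.
import numpy as np

def backward_induction(T, P, r):
    """P[a]: |S|x|S| transition matrix for action a; r[a]: |S| reward vector.
    Returns value functions V[t] and greedy policies pi[t] for t = 0..T-1."""
    n_states = r[0].shape[0]
    V = np.zeros((T + 1, n_states))          # terminal value V[T] = 0
    pi = np.zeros((T, n_states), dtype=int)
    for t in range(T - 1, -1, -1):
        # Q[a, s] = immediate reward + expected value at the next stage
        Q = np.stack([r[a] + P[a] @ V[t + 1] for a in range(len(P))])
        pi[t] = Q.argmax(axis=0)
        V[t] = Q.max(axis=0)
    return V, pi

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_s, n_a, T = 5, 2, 4                    # toy sizes
    P = [rng.dirichlet(np.ones(n_s), size=n_s) for _ in range(n_a)]
    r = [rng.random(n_s) for _ in range(n_a)]
    V, pi = backward_induction(T, P, r)
    print("V[0] =", np.round(V[0], 3))
```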
Degraded K-user broadcast channels (BCs) are studied when the receivers are equipped with cache memories. Lower and upper bounds are derived on the capacity-memory tradeoff, i.e., on the largest rate of reliable communication over the BC as a function of the receivers' cache sizes, and the bounds are shown to match in some special cases. The lower bounds are achieved by two new coding schemes that benefit from non-uniform cache assignment. Lower and upper bounds are also established on the global capacity-memory tradeoff, i.e., on the largest capacity-memory tradeoff that can be attained by optimizing the receivers' cache sizes subject to a total cache memory budget. The bounds coincide when the total cache memory budget is sufficiently small or sufficiently large, with the thresholds characterized in terms of the BC statistics. For small cache memories, it is optimal to assign all the cache memory to the weakest receiver. In this regime, the global capacity-memory tradeoff grows as the total cache memory budget divided by the number of files in the system. In other words, a perfect global caching gain is achievable in this regime, and the performance corresponds to a system where all cache contents in the network are available to all receivers. For large cache memories, it is optimal to assign a positive cache memory to every receiver, such that the weaker receivers are assigned larger cache memories than the stronger receivers. In this regime, the growth rate of the global capacity-memory tradeoff is further divided by the number of users, which corresponds to a local caching gain. Numerical results suggest that a uniform assignment of the total cache memory is suboptimal in all regimes unless the BC is completely symmetric. For erasure BCs, this claim is proved analytically in the small cache-size regime.
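To make the two scaling regimes above concrete, the toy arithmetic below evaluates the growth rates stated in the abstract: roughly 1/N per unit of total cache budget in the small-cache regime (global caching gain) and 1/(KN) in the large-cache regime (local caching gain). The values of K and N are hypothetical, and the actual capacity-memory tradeoff also depends on the BC statistics.

```python
# Purely illustrative arithmetic for the two scaling regimes described above;
# K, N are made-up numbers, and the exact tradeoff also depends on the channel.
K, N = 3, 100                   # users and files (hypothetical)
global_gain_slope = 1 / N       # small-budget regime: growth per unit of total cache
local_gain_slope = 1 / (K * N)  # large-budget regime: slope further divided by K
print(f"small-budget slope ~ {global_gain_slope:.4f} per file of total cache")
print(f"large-budget slope ~ {local_gain_slope:.5f} per file of total cache")
```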
