
Coded Caching for Heterogeneous Systems: An Optimization Perspective

Posted by Ahmed A. Zewail
Publication date: 2018
Research field: Informatics Engineering
Paper language: English





In cache-aided networks, the server populates the cache memories at the users during low-traffic periods in order to reduce the delivery load during peak-traffic hours. In turn, there exists a fundamental trade-off between the delivery load on the server and the cache sizes at the users. In this paper, we study this trade-off in a multicast network where the server is connected to users with unequal cache sizes and the number of users is less than or equal to the number of library files. We propose centralized uncoded placement and linear delivery schemes which are optimized by solving a linear program. Additionally, we derive a lower bound on the delivery-memory trade-off with uncoded placement that accounts for the heterogeneity in cache sizes. We explicitly characterize this trade-off for the case of three end-users, as well as for an arbitrary number of end-users when the total memory size at the users is either small or large. Next, we consider a system where the server is connected to the users via rate-limited links of different capacities and assigns cache sizes to the users subject to a total cache budget. We characterize the optimal cache sizes that minimize the delivery completion time with uncoded placement and linear delivery. In particular, the optimal memory allocation strikes a balance between assigning larger cache sizes to users with low-capacity links and allocating memory uniformly.
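
As a minimal sketch only, and not the paper's actual formulation, the memory allocation question can be posed as a small linear program. The toy model below counts only the local caching gain (no coded multicast gain), uses made-up link capacities and a made-up total cache budget, and solves the resulting LP with SciPy; even this simplified version exhibits the qualitative behaviour described above, assigning more memory to users behind slower links.

```python
# Toy LP sketch (not the paper's formulation): allocate normalized cache
# sizes m_k under a total budget to minimize the worst per-user completion
# time, counting only the local caching gain.  Capacities and budget are
# illustrative, made-up numbers.
import numpy as np
from scipy.optimize import linprog

C = np.array([1.0, 2.0, 4.0])   # assumed link capacities (files per unit time)
M_total = 1.5                   # assumed total cache budget (in files, N = 1)

K = len(C)
# Variables: x = [m_1, ..., m_K, T]; objective: minimize T.
c = np.zeros(K + 1)
c[-1] = 1.0

# (1 - m_k) / C_k <= T   <=>   -m_k / C_k - T <= -1 / C_k
A_ub = np.zeros((K + 1, K + 1))
b_ub = np.zeros(K + 1)
for k in range(K):
    A_ub[k, k] = -1.0 / C[k]
    A_ub[k, -1] = -1.0
    b_ub[k] = -1.0 / C[k]
# Total cache budget: sum_k m_k <= M_total
A_ub[K, :K] = 1.0
b_ub[K] = M_total

bounds = [(0.0, 1.0)] * K + [(0.0, None)]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print("cache allocation m:", np.round(res.x[:K], 3))
print("completion time  T:", round(res.x[-1], 3))
```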


Read also

Qi Wang, Ying Cui, Sian Jin, 2019
Existing decentralized coded caching solutions cannot guarantee small loads in the general scenario with arbitrary file sizes and cache sizes. In this paper, we propose an optimization framework for decentralized coded caching in the general scenario to minimize the worst-case load and average load (under an arbitrary file popularity), respectively. Specifically, we first propose a class of decentralized coded caching schemes for the general scenario, which are specified by a general caching parameter and include several known schemes as special cases. Then, we optimize the caching parameter to minimize the worst-case load and average load, respectively. Each of the two optimization problems is a challenging nonconvex problem with a nondifferentiable objective function. For each optimization problem, we develop an iterative algorithm to obtain a stationary point using techniques for solving Complementary Geometric Programming (GP). We also obtain a low-complexity approximate solution by solving an approximate problem with a differentiable objective function which is an upper bound on the original nondifferentiable one, and characterize the performance loss caused by the approximation. Finally, we present two information-theoretic converse bounds on the worst-case load and average load (under an arbitrary file popularity) in the general scenario, respectively. To the best of our knowledge, this is the first work that provides optimization-based decentralized coded caching schemes and information-theoretic converse bounds for the general scenario.
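
For orientation only (this is not the scheme proposed above), the homogeneous baseline that decentralized schemes are typically measured against is the classical Maddah-Ali and Niesen decentralized scheme; its worst-case load for $K$ users, $N$ equal-size files, and identical cache size $M$ has the closed form evaluated below.

```python
# Reference baseline (not this paper's scheme): worst-case delivery load of
# the classical Maddah-Ali and Niesen decentralized scheme with K users,
# N files of equal size, and identical cache size M (0 < M <= N).
def decentralized_load(K: int, N: int, M: float) -> float:
    p = M / N                                   # fraction of each file a user caches
    return (N - M) / M * (1.0 - (1.0 - p) ** K)

for M in (2, 5, 10):
    print(f"K=20, N=20, M={M}: load = {decentralized_load(20, 20, M):.3f} files")
```
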
In a traditional $(H, r)$ combination network, each user is connected to a unique set of $r$ relays. However, few research efforts have considered the $(H, r, u)$ multiaccess combination network problem, where each set of $u$ users is connected to a unique set of $r$ relays. A naive strategy to obtain a coded caching scheme for the $(H, r, u)$ multiaccess combination network is the $u$-fold repeated application of a coded caching scheme for a traditional $(H, r)$ combination network. Obviously, the transmission load for each relay of this trivial scheme is exactly $u$ times that of the original scheme, which implies that as the number of users multiplies, the transmission load for each relay also multiplies. Therefore, it is very meaningful to design a coded caching scheme for the $(H, r, u)$ multiaccess combination network with a lower transmission load for each relay. In this paper, by directly applying the well-known coding method (proposed by Zewail and Yener) for the $(H, r)$ combination network, a coded caching scheme (ZY scheme) for the $(H, r, u)$ multiaccess combination network is obtained. However, the subpacketization of this scheme grows exponentially with the number of users, which leads to a high implementation complexity. In order to reduce the subpacketization, a direct construction of a coded caching scheme for the $(H, r, u)$ multiaccess combination network is proposed by means of combinatorial design theory, where the parameter $u$ must be a combinatorial number. For an arbitrary parameter $u$, a hybrid construction of a coded caching scheme for the $(H, r, u)$ multiaccess combination network is proposed based on our direct construction. Theoretical and numerical analysis shows that our last two schemes have a smaller transmission load for each relay than the trivial scheme, and a much lower subpacketization than the ZY scheme.
The coded caching problem with a secrecy constraint, i.e., where the users should not be able to gain any information about the content of the files that they did not demand, is known as the secretive coded caching problem. It was proposed by Ravindrakumar et al. in the paper titled "Private Coded Caching", which appeared in IEEE Transactions on Information Forensics and Security, 2018, and is characterised by subpacketization levels growing exponentially with the number of users. In the context of coded caching without secrecy, coded caching schemes with subexponential subpacketization levels are feasible by representing the caching system in the form of a Placement Delivery Array (PDA) and designing placement and delivery policies from it. Motivated by this, we propose a secretive coded caching scheme with low subpacketization using PDAs, for users with dedicated caches in the centralized setting. When our scheme is applied to a special class of PDAs known as MN PDAs, the scheme proposed by Ravindrakumar et al. is recovered.
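
To make the PDA representation mentioned above concrete, the sketch below constructs the MN PDA, i.e., the placement delivery array underlying the Maddah-Ali and Niesen scheme, without the secrecy layer added in this paper. Rows index subpackets ($t$-subsets of users), columns index users, a '*' entry marks a subpacket stored in that user's cache, and equal integers mark subpackets served together in one coded multicast.

```python
# Minimal construction of the MN PDA (no secrecy): an F x K array with
# F = C(K, t) rows (subpackets) and K columns (users).
from itertools import combinations

def mn_pda(K: int, t: int):
    users = range(K)
    rows = list(combinations(users, t))                    # subpacket index sets
    mcast = {S: i for i, S in enumerate(combinations(users, t + 1))}
    pda = []
    for T in rows:
        row = []
        for k in users:
            if k in T:
                row.append('*')                            # cached at user k
            else:
                row.append(mcast[tuple(sorted(set(T) | {k}))])  # multicast index
        pda.append(row)
    return pda

for row in mn_pda(K=4, t=2):                               # 6 x 4 array, 4 multicasts
    print(row)
```
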
Improved lower bounds on the average and the worst-case rate-memory tradeoffs for the Maddah-Ali and Niesen coded caching scenario are presented. For any number of users and files and for arbitrary cache sizes, the multiplicative gap between the exact rate-memory tradeoff and the new lower bound is less than 2.315 in the worst-case scenario and less than 2.507 in the average-case scenario.
In this paper, we investigate the transmission delay of cache-aided broadcast networks with user cooperation. Novel coded caching schemes are proposed for both centralized and decentralized caching settings, by efficiently exploiting time and cache resources and creating parallel data delivery at the server and users. We derive a lower bound on the transmission delay and show that the proposed centralized coded caching scheme is order-optimal in the sense that it achieves a constant multiplicative gap within the lower bound. Our decentralized coded caching scheme is also order-optimal when each user's cache size is larger than the threshold $N(1-\sqrt[K-1]{1/(K+1)})$ (approaching 0 as $K \to \infty$), where $K$ is the total number of users and $N$ is the size of the file library. Moreover, for both the centralized and decentralized caching settings, our schemes obtain an additional cooperation gain offered by user cooperation and an additional parallel gain offered by parallel transmission among the server and users. It is shown that in order to reduce the transmission delay, the number of users sending signals in parallel should be chosen appropriately according to the users' cache sizes; always letting more users send information in parallel can cause a high transmission delay.
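
The threshold quoted above has a simple closed form, so its behaviour can be checked numerically; the short sketch below (a plain numerical check, nothing more) shows the normalized threshold shrinking toward zero as the number of users grows, consistent with the statement in the abstract.

```python
# Normalized cache-size threshold (1 - (1/(K+1))**(1/(K-1))) from the
# abstract above (i.e., the threshold divided by N), evaluated for increasing K.
def threshold_over_N(K: int) -> float:
    return 1.0 - (1.0 / (K + 1)) ** (1.0 / (K - 1))

for K in (2, 5, 10, 50, 200):
    print(f"K = {K:4d}: threshold / N = {threshold_over_N(K):.4f}")
```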