
Optimization-based Decentralized Coded Caching for Files and Caches with Arbitrary Size

 Added by Ying Cui
Publication date: 2019
Language: English





Existing decentralized coded caching solutions cannot guarantee small loads in the general scenario with arbitrary file sizes and cache sizes. In this paper, we propose an optimization framework for decentralized coded caching in the general scenario to minimize the worst-case load and average load (under an arbitrary file popularity), respectively. Specifically, we first propose a class of decentralized coded caching schemes for the general scenario, which are specified by a general caching parameter and include several known schemes as special cases. Then, we optimize the caching parameter to minimize the worst-case load and average load, respectively. Each of the two optimization problems is a challenging nonconvex problem with a nondifferentiable objective function. For each optimization problem, we develop an iterative algorithm to obtain a stationary point using techniques for solving Complementary Geometric Programming (CGP). We also obtain a low-complexity approximate solution by solving an approximate problem with a differentiable objective function which is an upper bound on the original nondifferentiable one, and characterize the performance loss caused by the approximation. Finally, we present two information-theoretic converse bounds on the worst-case load and average load (under an arbitrary file popularity) in the general scenario, respectively. To the best of our knowledge, this is the first work that provides optimization-based decentralized coded caching schemes and information-theoretic converse bounds for the general scenario.
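As a rough illustration of the approximation step described in the abstract, the sketch below replaces a nondifferentiable worst-case (max) load with its log-sum-exp upper bound and minimizes that smooth surrogate over a caching parameter q with a generic solver. The load model, the parameterization of q, and all numerical values are placeholders rather than the paper's expressions; log-sum-exp is simply one standard differentiable upper bound on a maximum, assumed here for illustration.

    # Illustrative only: the per-demand load model below is a placeholder,
    # NOT the load expressions of the paper. The point is the smoothing:
    # max_d L_d(q) <= (1/t) * log(sum_d exp(t * L_d(q))), which is
    # differentiable, so a generic NLP solver can attack it.
    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import logsumexp

    K, N = 4, 6      # users and files (toy values)
    M = 2.0          # per-user cache budget in file units (toy value)
    t = 50.0         # smoothing weight; larger t gives a tighter upper bound

    def demand_loads(q):
        # Placeholder loads, one per (file, user) pair, decreasing in the
        # cached fraction q[n] of file n; stands in for the paper's loads.
        return np.array([(1.0 - q[n]) * (K - k) / K
                         for n in range(N) for k in range(K)])

    def smooth_worst_case(q):
        # Differentiable upper bound on max(demand_loads(q))
        return logsumexp(t * demand_loads(q)) / t

    cons = ({'type': 'ineq', 'fun': lambda q: M - q.sum()},)  # cache budget
    res = minimize(smooth_worst_case, x0=np.full(N, M / N),
                   bounds=[(0.0, 1.0)] * N, constraints=cons, method='SLSQP')
    print('caching parameter q* =', np.round(res.x, 3))
    print('smoothed worst-case load =', round(res.fun, 4))

The gap between the surrogate and the true max shrinks as t grows (it is at most log(number of demands)/t), which mirrors the kind of bounded performance loss the abstract says the paper characterizes.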



Related research


We consider the problem of secretive coded caching in a shared-cache setup where the number of users accessing a particular helper cache is more than one, and every user can access exactly one helper cache. In secretive coded caching, the constraint of perfect secrecy must be satisfied: the users should not gain, either from their caches or from the transmissions, any information about the content of the files that they did not request from the server. To accommodate the secrecy constraint, our problem setup requires, in addition to a helper cache, a dedicated user cache of minimum capacity 1 unit for every user. This is where our formulation differs from the original work on shared caches ("Fundamental Limits of Coded Caching With Multiple Antennas, Shared Caches and Uncoded Prefetching" by E. Parrinello, A. Ünsal and P. Elia, IEEE Trans. Inf. Theory, 2020). In this work, we propose a secretively achievable coded caching scheme with shared caches under centralized placement. When our scheme is applied to the dedicated-cache setting, it matches the scheme by Ravindrakumar et al. ("Private Coded Caching", IEEE Trans. Inf. Forensics and Security, 2018).
In the classical coded caching setting, each user has one dedicated cache. This was generalized to a shared-cache setting, and the exact expression for the worst-case rate was derived in [E. Parrinello, A. Ünsal, P. Elia, "Fundamental Limits of Caching in Heterogeneous Networks with Uncoded Prefetching," arXiv:1811.06247 [cs.IT], Nov. 2018]. For this setting, an optimal linear error-correcting delivery scheme is proposed and an expression for the peak rate is established. Furthermore, a new delivery scheme is proposed, which gives an improved rate for the case when the demands are not distinct.
In cache-aided networks, the server populates the cache memories at the users during low-traffic periods in order to reduce the delivery load during peak-traffic hours. In turn, there exists a fundamental trade-off between the delivery load on the server and the cache sizes at the users. In this paper, we study this trade-off in a multicast network where the server is connected to users with unequal cache sizes and the number of users is less than or equal to the number of library files. We propose centralized uncoded placement and linear delivery schemes which are optimized by solving a linear program. Additionally, we derive a lower bound on the delivery-memory trade-off with uncoded placement that accounts for the heterogeneity in cache sizes. We explicitly characterize this trade-off for the case of three end users, as well as for an arbitrary number of end users when the total memory size at the users is either small or large. Next, we consider a system where the server is connected to the users via rate-limited links of different capacities and assigns the users cache sizes subject to a total cache budget. We characterize the optimal cache sizes that minimize the delivery completion time with uncoded placement and linear delivery. In particular, the optimal memory allocation balances assigning larger cache sizes to users with low-capacity links against allocating memory uniformly.
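The linear program mentioned in this abstract can be sketched in miniature. In the toy model below, x_S is the fraction of every file stored exactly at the user subset S, each user's stored fractions are capped by its cache size, and the delivery load is scored with the standard Maddah-Ali/Niesen-style multicast coefficient (K-|S|)/(|S|+1). These variables and coefficients are illustrative assumptions standing in for the paper's actual placement and linear-delivery constraints.

    # Toy LP for uncoded placement with unequal cache sizes; the subset
    # variables x_S and the load coefficients (K-|S|)/(|S|+1) are assumed
    # stand-ins, not the paper's exact formulation.
    from itertools import combinations
    import numpy as np
    from scipy.optimize import linprog

    K = 3
    M = [0.4, 0.7, 1.2]   # unequal cache sizes in file units (toy values)
    subsets = [frozenset(c) for r in range(K + 1)
               for c in combinations(range(K), r)]

    # Objective: minimize sum_S x_S * (K - |S|) / (|S| + 1).
    c = np.array([(K - len(S)) / (len(S) + 1) for S in subsets])

    # Each file is fully partitioned across the subsets: sum_S x_S = 1.
    A_eq = np.ones((1, len(subsets)))
    b_eq = [1.0]

    # User i stores the fractions of all subsets containing i, up to M[i].
    A_ub = np.array([[1.0 if i in S else 0.0 for S in subsets]
                     for i in range(K)])
    b_ub = np.array(M)

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0.0, 1.0)] * len(subsets))
    print('optimized delivery load:', round(res.fun, 4))

Because both the objective and all constraints are linear in x_S, heterogeneous cache sizes change only the right-hand sides b_ub, which is what makes an LP formulation natural for this setting.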
In a traditional $(H, r)$ combination network, each user is connected to a unique set of $r$ relays. However, few research efforts have considered the $(H, r, u)$ multiaccess combination network, in which each set of $u$ users is connected to a unique set of $r$ relays. A naive strategy to obtain a coded caching scheme for the $(H, r, u)$ multiaccess combination network is the $u$-fold repeated application of a coded caching scheme for a traditional $(H, r)$ combination network. Obviously, the transmission load for each relay under this trivial scheme is exactly $u$ times that of the original scheme, which implies that as the number of users multiplies, the transmission load for each relay also multiplies. It is therefore worthwhile to design a coded caching scheme for the $(H, r, u)$ multiaccess combination network with a lower transmission load per relay. In this paper, by directly applying the well-known coding method of Zewail and Yener for the $(H, r)$ combination network, a coded caching scheme (the ZY scheme) for the $(H, r, u)$ multiaccess combination network is obtained. However, the subpacketization of this scheme grows exponentially with the number of users, which leads to high implementation complexity. To reduce the subpacketization, a direct construction of a coded caching scheme for the $(H, r, u)$ multiaccess combination network is proposed by means of combinatorial design theory, where the parameter $u$ must be a combinatorial number. For an arbitrary parameter $u$, a hybrid construction is proposed based on our direct construction. Theoretical and numerical analyses show that our last two schemes have a smaller transmission load per relay than the trivial scheme and much lower subpacketization than the ZY scheme.
In this paper, we investigate the transmission delay of cache-aided broadcast networks with user cooperation. Novel coded caching schemes are proposed for both centralized and decentralized caching settings, by efficiently exploiting time and cache resources and creating parallel data delivery at the server and users. We derive a lower bound on the transmission delay and show that the proposed centralized coded caching scheme is order-optimal in the sense that it achieves a constant multiplicative gap to the lower bound. Our decentralized coded caching scheme is also order-optimal when each user's cache size is larger than the threshold $N(1-\sqrt[K-1]{1/(K+1)})$ (which approaches 0 as $K \to \infty$), where $K$ is the total number of users and $N$ is the size of the file library. Moreover, for both the centralized and decentralized caching settings, our schemes obtain an additional cooperation gain offered by user cooperation and an additional parallel gain offered by the parallel transmission among the server and users. It is shown that, in order to reduce the transmission delay, the number of users sending signals in parallel should be chosen appropriately according to the users' cache sizes; always letting more users send information in parallel can cause high transmission delay.
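The cache-size threshold quoted in this abstract can be checked numerically; the snippet below simply tabulates $N(1-\sqrt[K-1]{1/(K+1)})$ with N normalized to 1, confirming that it decays toward 0 as K grows.

    # Evaluate the threshold N * (1 - (1/(K+1))**(1/(K-1))) from the
    # abstract above; N = 1 shows the normalized threshold.
    N = 1.0
    for K in (2, 5, 10, 50, 100, 1000):
        threshold = N * (1.0 - (1.0 / (K + 1)) ** (1.0 / (K - 1)))
        print(f'K = {K:4d}  threshold = {threshold:.4f}')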
