Recently, Hachem et al. formulated a multiaccess coded caching model consisting of a central server connected to $K$ users via an error-free shared link, and $K$ cache-nodes. Each cache-node is equipped with a local cache, and each user can access $L$ neighbouring cache-nodes in a cyclic wrap-around fashion. In this paper, we take the privacy of the users' demands into consideration, i.e., each user can only obtain its required file and cannot get any information about the demands of other users. By storing some private keys at the cache-nodes, we propose a novel transformation approach to transform a non-private multiaccess coded caching scheme into a private multiaccess coded caching scheme.
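For illustration only (not taken from the paper), the following minimal Python sketch shows the cyclic wrap-around access pattern described above, assuming users and cache-nodes are indexed $0,\dots,K-1$; the function name is hypothetical.

```python
# Minimal sketch: which cache-nodes a user can access in the multiaccess model,
# assuming K cache-nodes and cyclic wrap-around access to L neighbouring nodes.
def accessible_cache_nodes(user, K, L):
    """User `user` reads cache-nodes user, user+1, ..., user+L-1 (mod K)."""
    return [(user + i) % K for i in range(L)]

# Example: K = 4 cache-nodes, each user accesses L = 2 neighbouring nodes.
for k in range(4):
    print(k, accessible_cache_nodes(k, K=4, L=2))
# The last user wraps around: user 3 accesses cache-nodes [3, 0].
```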
In a traditional $(H, r)$ combination network, each user is connected to a unique set of $r$ relays. However, few research efforts have considered the $(H, r, u)$ multiaccess combination network problem, where each unique set of $r$ relays serves $u$ users. A naive strategy to obtain a coded caching scheme for the $(H, r, u)$ multiaccess combination network is to apply a coded caching scheme for the traditional $(H, r)$ combination network $u$ times. Obviously, the transmission load for each relay of this trivial scheme is exactly $u$ times that of the original scheme, which implies that as the number of users multiplies, the transmission load for each relay also multiplies. It is therefore meaningful to design a coded caching scheme for the $(H, r, u)$ multiaccess combination network with a lower transmission load for each relay. In this paper, by directly applying the well-known coding method for the $(H, r)$ combination network proposed by Zewail and Yener, a coded caching scheme (the ZY scheme) for the $(H, r, u)$ multiaccess combination network is obtained. However, the subpacketization of this scheme grows exponentially with the number of users, which leads to a high implementation complexity. In order to reduce the subpacketization, a direct construction of a coded caching scheme for the $(H, r, u)$ multiaccess combination network is proposed by means of combinatorial design theory, where the parameter $u$ must be a combinatorial number. For an arbitrary parameter $u$, a hybrid construction is proposed based on our direct construction. Theoretical and numerical analysis shows that our last two schemes have a smaller transmission load for each relay than the trivial scheme, and a much lower subpacketization than the ZY scheme.
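As an illustration of the network topology and of the $u$-fold load penalty of the trivial scheme (a sketch under the assumptions stated in the comments; the value R_single is hypothetical, not a result from the paper):

```python
# Minimal sketch: in an (H, r, u) multiaccess combination network, every
# r-subset of the H relays serves u users, so there are u * C(H, r) users,
# and u-fold repetition of an (H, r) scheme multiplies the per-relay load by u.
from itertools import combinations
from math import comb

def multiaccess_users(H, r, u):
    """Enumerate users as (relay_subset, copy_index) pairs."""
    return [(S, j) for S in combinations(range(H), r) for j in range(u)]

H, r, u = 4, 2, 3
users = multiaccess_users(H, r, u)
assert len(users) == u * comb(H, r)             # 3 * C(4, 2) = 18 users

R_single = 1.0                                  # hypothetical per-relay load of an (H, r) scheme
print("trivial per-relay load:", u * R_single)  # exactly u times the original load
```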
We consider the coded caching problem with an additional privacy constraint that a user should not get any information about the demands of the other users. We first show that a demand-private scheme for $N$ files and $K$ users can be obtained from a non-private scheme that serves only a subset of the demands for the $N$ files and $NK$ users problem. We further use this fact to construct a demand-private scheme for $N$ files and $K$ users from a particular known non-private scheme for $N$ files and $NK-K+1$ users. It is then demonstrated that the memory-rate pair $(M, \min\{N,K\}(1-M/N))$, which is achievable for non-private schemes with uncoded transmissions, is also achievable under demand privacy. We further propose a scheme that improves on these ideas by removing some redundant transmissions. The memory-rate trade-off achieved using our schemes is shown to be within a multiplicative factor of 3 from the optimal when $K < N$ and of 8 when $N \leq K$. Finally, we give the exact memory-rate trade-off for demand-private coded caching problems with $N \geq K = 2$.
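A minimal sketch of the quoted memory-rate pair $(M, \min\{N,K\}(1-M/N))$, evaluated for illustrative parameter values (the function name and the chosen $N$, $K$, $M$ are assumptions made for the example):

```python
# Minimal sketch: evaluate the rate min{N, K} * (1 - M/N) quoted in the abstract.
def rate_uncoded_transmissions(M, N, K):
    return min(N, K) * (1 - M / N)

N, K = 4, 6
for M in (0, 1, 2, 4):
    print(M, rate_uncoded_transmissions(M, N, K))
# M = 0 gives rate min{N, K} = 4; M = N gives rate 0.
```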
Coded caching is an efficient technique to reduce peak-hour network traffic. One limitation of known coded caching schemes is that the demands of all users are revealed to their peers in the delivery phase. Schemes that assure privacy for user demands have been studied in the recent past. Assuming that the users are equipped with caches of small memory size, the achievable rate under demand privacy constraints is investigated in this work. We present an MDS-code-based demand-private coded caching scheme with $K$ users and $N$ files that achieves the memory-rate pair $\left(\frac{1}{K(N-1)+1}, N\left(1-\frac{1}{K(N-1)+1}\right)\right)$. The presented memory-rate pair meets the lower bound under demand-privacy requirements proposed by Yan \textit{et al.} in the recent work \cite{c13}. By memory sharing, this characterizes the exact rate-memory trade-off of the demand-private coded caching scheme for cache memory $M \in \left[0, \frac{1}{K(N-1)+1}\right]$.
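The corner point stated above can be evaluated exactly for small parameters; the following sketch (an illustration, with hypothetical function name and example values of $N$ and $K$) computes $M=\frac{1}{K(N-1)+1}$ and $R=N(1-M)$:

```python
# Minimal sketch: the memory-rate corner point quoted in the abstract.
from fractions import Fraction

def corner_point(N, K):
    M = Fraction(1, K * (N - 1) + 1)   # M = 1/(K(N-1)+1)
    R = N * (1 - M)                    # R = N*(1 - M)
    return M, R

print(corner_point(N=2, K=2))   # M = 1/3,  R = 4/3
print(corner_point(N=3, K=4))   # M = 1/9,  R = 8/3
```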
This paper considers the multiaccess coded caching system formulated by Hachem et al., comprising a central server containing $N$ files connected to $K$ cache-less users through an error-free shared link, and $K$ cache-nodes, each equipped with a cache memory of size $M$ files. Each user has access to $L$ neighbouring cache-nodes with a cyclic wrap-around topology. The coded caching scheme proposed by Hachem et al. suffers when $L$ does not divide $K$, in which case the needed number of transmissions (a.k.a. load) is at most four times the load expression for the case where $L$ divides $K$. Our main contribution is to propose a novel {\it transformation} approach to smartly extend schemes satisfying certain conditions for the well-known shared-link caching system to the multiaccess caching system. In this way we obtain many coded caching schemes with different subpacketizations for the multiaccess coded caching system. The resulting schemes have the maximum local caching gain (i.e., the cached contents stored at any $L$ neighbouring cache-nodes are different, such that the number of packets retrieved by each user from the connected cache-nodes is maximal) and the same coded caching gain as the original schemes. Applying the transformation approach to the well-known shared-link coded caching scheme proposed by Maddah-Ali and Niesen, we obtain a new multiaccess coded caching scheme that achieves the same load as the scheme of Hachem et al., but for any system parameters. Under the constraint of the cache placement used in this new multiaccess coded caching scheme, our delivery strategy is approximately optimal when $K$ is sufficiently large. Finally, we show that the transmission load of the proposed scheme can be further reduced by compressing the multicast message.
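The "maximum local caching gain" condition can be checked mechanically for a given placement; the sketch below is an illustrative checker only (not the paper's construction), assuming each cache-node's content is given as a set of packet indices:

```python
# Minimal sketch: verify that any L cyclically consecutive cache-nodes store
# pairwise different packets, so a user retrieves the maximal number of packets.
def has_max_local_caching_gain(placement, L):
    K = len(placement)
    for start in range(K):
        window = [placement[(start + i) % K] for i in range(L)]
        union = set().union(*window)
        if len(union) != sum(len(s) for s in window):  # overlap => wasted cache content
            return False
    return True

# Toy placement for K = 4 cache-nodes, L = 2: node h caches packet h of every file.
placement = [{0}, {1}, {2}, {3}]
print(has_max_local_caching_gain(placement, L=2))   # True
```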
In this paper, we consider the coded-caching broadcast network with user cooperation, where a server connects to multiple users and the users can cooperate with each other through a cooperation network. We propose a centralized coded caching scheme based on a new deterministic placement strategy and a parallel delivery strategy. It is shown that the new scheme optimally allocates the communication loads between the server and the users, obtaining a cooperation gain and a parallel gain that greatly reduce the transmission delay. Furthermore, we show that the number of users who send information in parallel should decrease as the users' cache size increases; in other words, letting more users send information in parallel could be harmful. Finally, we derive a constant multiplicative gap between the lower bound and the upper bound on the transmission delay, which proves that our scheme is order optimal.