
Coded Caching for Broadcast Networks with User Cooperation

Added by Youlong Wu
Publication date: 2020
Language: English





In this paper, we investigate the transmission delay of cache-aided broadcast networks with user cooperation. Novel coded caching schemes are proposed for both centralized and decentralized caching settings by efficiently exploiting time and cache resources and creating parallel data delivery at the server and users. We derive a lower bound on the transmission delay and show that the proposed centralized coded caching scheme is \emph{order-optimal} in the sense that it achieves a constant multiplicative gap to the lower bound. Our decentralized coded caching scheme is also order-optimal when each user's cache size is larger than the threshold $N(1-\sqrt[K-1]{1/(K+1)})$ (which approaches 0 as $K\to\infty$), where $K$ is the total number of users and $N$ is the size of the file library. Moreover, for both the centralized and decentralized caching settings, our schemes obtain an additional \emph{cooperation gain} offered by user cooperation and an additional \emph{parallel gain} offered by the parallel transmission among the server and users. We show that, in order to reduce the transmission delay, the number of users sending signals in parallel should be chosen appropriately according to the users' cache size, and that always letting more users send information in parallel can increase the transmission delay.
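The cache-size threshold quoted above can be checked numerically. The sketch below is only illustrative (the library size N, the chosen values of K, and the function name cache_threshold are assumptions, not from the paper); it simply evaluates $N(1-\sqrt[K-1]{1/(K+1)})$ and shows that it shrinks toward 0 as $K$ grows.

```python
# Numerical sketch: evaluate the cache-size threshold N * (1 - (1/(K+1))^(1/(K-1)))
# for a fixed library size N and a growing number of users K.

def cache_threshold(N: int, K: int) -> float:
    """Threshold on the per-user cache size above which the decentralized
    scheme is stated to be order-optimal (as per the abstract)."""
    return N * (1 - (1 / (K + 1)) ** (1 / (K - 1)))

N = 100  # illustrative library size
for K in (2, 5, 10, 50, 100, 1000):
    print(f"K = {K:5d}  threshold = {cache_threshold(N, K):.4f}")
# The printed threshold decreases toward 0 as K grows, matching the
# abstract's claim that the condition becomes mild for large K.
```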



Related research

In this paper, we consider the coded-caching broadcast network with user cooperation, where a server connects with multiple users and the users can cooperate with each other through a cooperation network. We propose a centralized coded caching scheme based on a new deterministic placement strategy and a parallel delivery strategy. It is shown that the new scheme optimally allocates the communication loads on the server and users, obtaining a cooperation gain and a parallel gain that greatly reduce the transmission delay. Furthermore, we show that the number of users who send information in parallel should decrease as the users' cache size increases. In other words, letting more users send information in parallel could be harmful. Finally, we derive a constant multiplicative gap between the lower and upper bounds on the transmission delay, which proves that our scheme is order-optimal.
In a traditional $(H, r)$ combination network, each user is connected to a unique set of $r$ relays. However, few research efforts have considered the $(H, r, u)$ multiaccess combination network problem, where each set of $u$ users is connected to a unique set of $r$ relays. A naive strategy to obtain a coded caching scheme for the $(H, r, u)$ multiaccess combination network is to apply a coded caching scheme for the traditional $(H, r)$ combination network $u$ times. Obviously, the transmission load for each relay of this trivial scheme is exactly $u$ times that of the original scheme, which implies that as the number of users multiplies, the transmission load for each relay multiplies as well. It is therefore very meaningful to design a coded caching scheme for the $(H, r, u)$ multiaccess combination network with a lower transmission load for each relay. In this paper, by directly applying the well-known coding method for the $(H, r)$ combination network proposed by Zewail and Yener, a coded caching scheme (the ZY scheme) for the $(H, r, u)$ multiaccess combination network is obtained. However, the subpacketization of this scheme grows exponentially with the number of users, which leads to a high implementation complexity. In order to reduce the subpacketization, a direct construction of a coded caching scheme for the $(H, r, u)$ multiaccess combination network is proposed by means of combinatorial design theory, where the parameter $u$ must be a combinatorial number. For an arbitrary parameter $u$, a hybrid construction of a coded caching scheme for the $(H, r, u)$ multiaccess combination network is proposed based on our direct construction. Theoretical and numerical analysis shows that our last two schemes have a smaller transmission load for each relay compared with the trivial scheme, and a much lower subpacketization compared with the ZY scheme.
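As a concrete check of the repetition argument above, the following sketch (the parameter values and the baseline load R_single are purely hypothetical) counts the users of an $(H, r, u)$ multiaccess combination network and shows how the per-relay load of the naive $u$-fold repetition scheme scales.

```python
from math import comb

# In an (H, r, u) multiaccess combination network, each of the C(H, r)
# relay subsets serves u users, so there are u * C(H, r) users in total.
H, r, u = 6, 3, 4                    # illustrative network parameters
num_users = u * comb(H, r)
print("users:", num_users)           # 4 * 20 = 80

# If a traditional (H, r) scheme delivers with per-relay load R_single,
# repeating it u times (once per group of users) costs u * R_single per relay.
R_single = 1.5                       # hypothetical per-relay load of the base scheme
print("trivial per-relay load:", u * R_single)   # grows linearly in u
```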
We study noisy broadcast networks with local cache memories at the receivers, where the transmitter can pre-store information even before learning the receivers' requests. We mostly focus on packet-erasure broadcast networks with two disjoint sets of receivers: a set of weak receivers with all-equal erasure probabilities and equal cache sizes, and a set of strong receivers with all-equal erasure probabilities and no cache memories. We present lower and upper bounds on the capacity-memory tradeoff of this network. The lower bound is achieved by a new joint cache-channel coding idea and significantly improves on schemes based on separate cache-channel coding. We discuss how this coding idea could be extended to more general discrete memoryless broadcast channels and to unequal cache sizes. Our upper bound holds for all stochastically degraded broadcast channels. For the described packet-erasure broadcast network, our lower and upper bounds are tight when there is a single weak receiver (and any number of strong receivers) and the cache memory size does not exceed a given threshold. When there are a single weak receiver, a single strong receiver, and two files, we can strengthen our upper and lower bounds so that they coincide over a wide regime of cache sizes. Finally, we completely characterise the rate-memory tradeoff for general discrete memoryless broadcast channels with arbitrary cache memory sizes and arbitrary (asymmetric) rates when all receivers always demand exactly the same file.
In an $(H,r)$ combination network, a single content library is delivered to ${H \choose r}$ users through $H$ deployed relays without cache memories, such that each user, equipped with a local cache memory, is simultaneously served by a different subset of $r$ relays over orthogonal, non-interfering, and error-free channels. A combinatorial placement delivery array (CPDA for short) can be used to realize a coded caching scheme for combination networks. In this paper, a new algorithm for realizing a coded caching scheme for a combination network based on a CPDA is proposed, such that the resulting schemes have smaller subpacketization levels or can be implemented more flexibly than previously known schemes. We then focus on directly constructing CPDAs for any positive integers $H$ and $r$ with $r<H$. This differs from the grouping method in the reference (IEEE ISIT, 17-22, 2018), which requires that $r$ divides $H$. Consequently, two classes of CPDAs are obtained. Finally, compared with the schemes and the method proposed by Yan et al. (IEEE ISIT, 17-22, 2018), the schemes realized by our CPDAs have significant advantages in subpacketization levels and transmission rates.
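To make the topology concrete, here is a minimal sketch (not from the paper; the parameter values are illustrative) that enumerates the ${H \choose r}$ users of an $(H, r)$ combination network together with the unique relay subset serving each one.

```python
from itertools import combinations

# Each user in an (H, r) combination network is identified with a distinct
# r-subset of the H relays; the server reaches a user only through its r relays.
H, r = 4, 2
relays = range(H)
users = list(combinations(relays, r))   # C(4, 2) = 6 users

for user_id, relay_set in enumerate(users):
    print(f"user {user_id} <- relays {relay_set}")
```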
In a coded caching system, we prefer to design a coded caching scheme with low subpacketization and a small transmission rate (i.e., low implementation complexity and efficient transmission during peak traffic times). Placement delivery arrays (PDAs) can be used to design coded caching schemes. In this paper, we propose a framework for constructing PDAs via Hamming distance. As an application, two classes of coded caching schemes with linear subpacketization and small transmission rates are obtained.
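Since these schemes are built from placement delivery arrays, a small checker for the standard PDA conditions may help fix ideas. This is a sketch of the usual PDA definition from the literature (Yan et al.), not the Hamming-distance construction proposed in the paper; the function name is_pda and the example array are illustrative.

```python
def is_pda(P, Z):
    """Check the standard PDA conditions on an F x K array P whose entries
    are '*' or positive integers (the set of integers used plays the role of [S])."""
    F, K = len(P), len(P[0])
    # C1: '*' appears exactly Z times in every column.
    if any(sum(P[j][k] == '*' for j in range(F)) != Z for k in range(K)):
        return False
    # C3: two equal integer entries must lie in distinct rows and columns,
    # and the two "crossing" entries must both be '*'.
    cells = [(j, k) for j in range(F) for k in range(K) if P[j][k] != '*']
    for i, (j1, k1) in enumerate(cells):
        for (j2, k2) in cells[i + 1:]:
            if P[j1][k1] == P[j2][k2]:
                if j1 == j2 or k1 == k2:
                    return False
                if P[j1][k2] != '*' or P[j2][k1] != '*':
                    return False
    return True

# The classic K = 3, F = 3, Z = 2 array (cache ratio 2/3, a single coded
# multicast serving all three users) satisfies the conditions.
P = [['*', '*', 1],
     ['*', 1, '*'],
     [1, '*', '*']]
print(is_pda(P, Z=2))   # True
```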