
Device-to-Device Coded Caching with Distinct Cache Sizes

Published by: Ahmed A. Zewail
Publication date: 2019
Research field: Information Engineering
Language: English





This paper considers a cache-aided device-to-device (D2D) system in which the users are equipped with cache memories of different sizes. During low-traffic hours, a server places content in the users' cache memories, knowing that the files requested by the users during peak-traffic hours will have to be delivered by D2D transmissions only. The worst-case D2D delivery load is minimized by jointly designing the uncoded cache placement and the linear coded D2D delivery. Next, a novel lower bound on the D2D delivery load with uncoded placement is proposed and used to explicitly characterize the minimum D2D delivery load (MD2DDL) with uncoded placement for several cases of interest. In particular, having characterized the MD2DDL for equal cache sizes, it is shown that the same delivery load can be achieved in a network of users with unequal cache sizes, provided that the smallest cache size exceeds a certain threshold. The MD2DDL is also characterized in the small-cache-size regime, the large-cache-size regime, and the three-user case. Comparisons of the server-based delivery load with the D2D delivery load are provided. Finally, connections and mathematical parallels between cache-aided D2D systems and coded distributed computing (CDC) systems are discussed.
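For context, the equal-cache-size baselines that this comparison builds on are standard: with K users, N files, and cache size M such that t = KM/N is an integer, the Maddah-Ali–Niesen server-based scheme has worst-case load (K - t)/(t + 1), while the Ji–Caire–Molisch D2D scheme has worst-case load (K - t)/t. The following minimal Python sketch (illustrative only; it does not implement the paper's distinct-cache-size scheme) tabulates the two loads:

```python
def server_load(K: int, t: int) -> float:
    """Worst-case shared-link load of the Maddah-Ali--Niesen scheme
    at integer cache parameter t = K*M/N, 0 <= t <= K."""
    return (K - t) / (t + 1)

def d2d_load(K: int, t: int) -> float:
    """Worst-case D2D delivery load of the Ji--Caire--Molisch scheme
    at integer t = K*M/N >= 1 (caches must jointly cover the library)."""
    return (K - t) / t

K = 10  # number of users
for t in range(1, K):
    print(f"t={t:2d}  server: {server_load(K, t):6.3f}  D2D: {d2d_load(K, t):6.3f}")
```

At every memory point the D2D load exceeds the server-based load by the factor (t + 1)/t, which approaches 1 as the caches grow; this is the kind of server-versus-D2D comparison the abstract refers to.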




Read also

This paper studies device-to-device (D2D) coded caching with information-theoretic security guarantees. A broadcast network consisting of a server, which has a library of files, and end users equipped with cache memories is considered. Information-theoretic security guarantees for confidentiality are imposed upon the files. The server populates the end-user caches, after which D2D communications enable the delivery of the requested files. Accordingly, we require that a user must not have access to files it did not request, i.e., secure caching. First, a centralized coded caching scheme is provided by jointly optimizing the cache placement and delivery policies. Next, a decentralized coded caching scheme is developed that does not require knowledge of the number of active users during the caching phase. Both schemes utilize non-perfect secret sharing and one-time pad keying to guarantee secure caching. Furthermore, the proposed schemes provide secure delivery as a side benefit, i.e., any external entity that overhears the transmitted signals during the delivery phase cannot obtain any information about the database files. The proposed schemes provide achievable upper bounds on the minimum delivery sum rate. Lower bounds on the required transmission sum rate are also derived using cut-set arguments, indicating the multiplicative gap between the lower and upper bounds. Numerical results indicate that the gap vanishes with increasing memory size. Overall, the work demonstrates the effectiveness of D2D communications in cache-aided systems even when confidentiality constraints are imposed at the participating nodes and against external eavesdroppers.
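The one-time pad keying mentioned above is the classical mechanism behind the secure-delivery guarantee: each transmitted signal is padded with a uniformly random key held only by the intended cache holders. A minimal sketch follows (payloads and variable names are hypothetical; the actual scheme combines this with non-perfect secret sharing across users):

```python
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Placement phase: the server stores a fresh uniform key, as long as
# the file piece, in the intended receiver's cache.
piece = b"requested-subfile"             # hypothetical file piece
key = secrets.token_bytes(len(piece))    # one-time pad cached at the user

# Delivery phase: only the padded signal is sent over the D2D links,
# so an external eavesdropper without the key learns nothing.
signal = xor_bytes(piece, key)

# The intended user removes the pad with its cached key.
assert xor_bytes(signal, key) == piece
```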
Behzad Asadi, Lawrence Ong, 2018
We address a centralized caching problem with unequal cache sizes. We consider a system with a server holding a library of files, connected through a shared error-free link to a group of cache-enabled users, where one subgroup has a larger cache size than the other. We propose an explicit caching scheme for this system aimed at minimizing the worst-case demand load over the shared link. As suggested by numerical evaluations, our scheme improves upon the best existing explicit scheme by achieving a lower worst-case load; moreover, our scheme performs within a multiplicative factor of 1.11 of the scheme obtained by solving an optimisation problem whose number of parameters grows exponentially with the number of users.
In this work, we study coded placement in caching systems where the users have unequal cache sizes and demonstrate its performance advantage. In particular, we propose a caching scheme with coded placement for three-user systems that outperforms the best caching scheme with uncoded placement. In our proposed scheme, users cache both uncoded and coded pieces of the files, and the coded pieces at the users with large memories are decoded using the unicast/multicast signals intended to serve users with smaller memories. Furthermore, we extend the proposed scheme to larger systems and show the reduction in delivery load with coded placement compared to uncoded placement.
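The advantage of coded placement sketched in this abstract comes from a simple XOR-peeling effect: a user with spare memory caches a combination of subfiles and later uses a delivery signal intended for another user to split the combination apart. A toy sketch with hypothetical payloads (the actual three-user scheme coordinates which subfiles are coded against which unicast/multicast signals):

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Two equal-size subfiles (hypothetical payloads).
A1 = b"subfile-of-A"
B1 = b"subfile-of-B"

# Coded placement: a large-memory user caches their XOR, paying one
# subfile of memory for information about two subfiles.
cached = xor_bytes(A1, B1)

# Delivery: B1 is transmitted anyway to serve a small-memory user; the
# large-memory user overhears it and peels it off to recover A1.
assert xor_bytes(cached, B1) == A1
```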
This paper investigates user cooperation in massive multiple-input multiple-output (MIMO) systems with cascaded precoding. The high-dimensional physical channel in massive MIMO systems can be converted into a low-dimensional effective channel through the inner precoder, reducing the overhead of channel estimation and feedback. The inner precoder depends on the spatial covariance matrix of the channels, so the same precoder can be used for different users as long as they have the same spatial covariance matrix. The spatial covariance matrix is determined by the surrounding environment of the user terminals; therefore, users that are close to each other will share the same spatial covariance matrix. In this situation, it is possible to achieve user cooperation by sharing receiver information through a dedicated link, such as device-to-device communications. To reduce the amount of information that needs to be shared, we propose a decoding-codebook-based scheme, which achieves user cooperation without the need for channel state information. Moreover, we investigate the amount of bandwidth required to achieve efficient user cooperation. Simulation results show that user cooperation can improve the capacity compared to the non-cooperative scheme.
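The cascaded-precoding step described above amounts to eigenbeamforming on the channel's spatial covariance: the inner precoder projects the high-dimensional channel onto its dominant covariance subspace, and users sharing that covariance can share the precoder. A minimal NumPy sketch under assumed, illustrative dimensions (not the paper's configuration):

```python
import numpy as np

rng = np.random.default_rng(0)
M_ant, d = 64, 8   # BS antennas / effective-channel dimension (assumed)

# Low-rank spatial covariance, e.g., from a few dominant scattering
# directions (illustrative model, not the paper's channel).
A = rng.standard_normal((M_ant, d)) + 1j * rng.standard_normal((M_ant, d))
R = (A @ A.conj().T) / d

# Inner precoder: the d dominant eigenvectors of the covariance.
# eigh returns eigenvalues in ascending order, so take the last d.
_, eigvecs = np.linalg.eigh(R)
F_inner = eigvecs[:, -d:]                     # M_ant x d

# A channel realization in the covariance subspace collapses to a
# d-dimensional effective channel: less to estimate and feed back.
h = A @ rng.standard_normal(d)                # M_ant x 1 physical channel
h_eff = F_inner.conj().T @ h                  # d x 1 effective channel
print(h.shape, "->", h_eff.shape)             # (64,) -> (8,)
```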
In this paper, we consider a coded-caching broadcast network with user cooperation, where a server connects to multiple users and the users can cooperate with each other through a cooperation network. We propose a centralized coded caching scheme based on a new deterministic placement strategy and a parallel delivery strategy. It is shown that the new scheme optimally allocates the communication loads between the server and the users, obtaining a cooperation gain and a parallel gain that greatly reduce the transmission delay. Furthermore, we show that the number of users that send information in parallel should decrease as the users' cache size increases; in other words, letting more users send information in parallel can be harmful. Finally, we derive a constant multiplicative gap between the lower and upper bounds on the transmission delay, which proves that our scheme is order-optimal.