
Adaptive Prioritized Random Linear Coding and Scheduling for Layered Data Delivery from Multiple Servers

Posted by Nikolaos Thomos
Publication date: 2014
Research field: Informatics Engineering
Paper language: English





In this paper, we deal with the problem of jointly determining the optimal coding strategy and the scheduling decisions when receivers obtain layered data from multiple servers. The layered data is encoded by means of Prioritized Random Linear Coding (PRLC) in order to be resilient to channel loss while respecting the unequal levels of importance in the data, and data blocks are transmitted simultaneously in order to reduce decoding delays and improve the delivery performance. We formulate the optimal coding and scheduling problem in our novel framework with the help of Markov Decision Processes (MDP), which are effective tools for modeling adaptive streaming systems. Reinforcement learning approaches are then proposed to derive solutions of reduced computational complexity to the adaptive coding and scheduling problems. The novel reinforcement learning approaches and the MDP solution are examined in an illustrative example for scalable video transmission. Our methods offer large performance gains over competing methods that deliver the data blocks sequentially. The experimental evaluation also shows that our novel algorithms offer continuous playback and guarantee small quality variations, which is not the case for baseline solutions. Finally, our work highlights the advantages of reinforcement learning algorithms in forecasting the temporal evolution of data demands and in making the optimal coding and scheduling decisions.
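As a rough illustration of the coding side of this setup, the sketch below implements an expanding-window variant of prioritized random linear coding over GF(2): each coded packet mixes source blocks from the base layer up to a randomly chosen layer, so more important layers appear in more combinations and are recovered first. The layer sizes, window probabilities and field choice are illustrative assumptions, not the parameters used in the paper.

```python
# Minimal PRLC-style sketch (expanding-window random linear coding over GF(2)).
# All numbers below are illustrative assumptions.
import random

LAYER_SIZES = [4, 4, 4]          # blocks per importance layer (assumption)
WINDOW_PROB = [0.5, 0.3, 0.2]    # probability of coding over layers 1..w (assumption)

def encode_packet():
    """Return the GF(2) coefficient vector of one coded packet as a bitmask."""
    w = random.choices(range(1, len(LAYER_SIZES) + 1), WINDOW_PROB)[0]
    window = sum(LAYER_SIZES[:w])            # code over source blocks 0..window-1
    coeffs = 0
    while coeffs == 0:                       # avoid the useless all-zero combination
        coeffs = sum(random.randint(0, 1) << i for i in range(window))
    return coeffs

def rank_gf2(rows):
    """Rank of GF(2) coefficient vectors stored as integer bitmasks."""
    pivots = {}
    for row in rows:
        while row:
            p = row.bit_length() - 1
            if p not in pivots:
                pivots[p] = row
                break
            row ^= pivots[p]
    return len(pivots)

def decodable_layers(packets):
    """Conservative check: layers 1..w decode once the packets confined to
    their expanding window reach full rank over that window."""
    decoded = 0
    for w in range(1, len(LAYER_SIZES) + 1):
        window = sum(LAYER_SIZES[:w])
        confined = [c for c in packets if c < (1 << window)]
        if rank_gf2(confined) == window:
            decoded = w
    return decoded

received = [encode_packet() for _ in range(10)]
print("layers decodable from 10 received packets:", decodable_layers(received))
```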




Read also

We consider the setting of a distributed storage system where a single file is subdivided into smaller fragments of the same size, which are then replicated with a common replication factor across servers of identical cache size. An incoming file download request is sent to all the servers, and the download is completed whenever the request gathers all the fragments. At each server, we are interested in determining the set of fragments to be stored, and the sequence in which fragments should be accessed, such that the mean file download time for a request is minimized. We model the fragment download times as exponential random variables, independent and identically distributed for all fragments across all servers, and show that the mean file download time can be lower bounded in terms of the expected number of useful servers summed over all distinct fragment downloads. We present deterministic storage schemes that attempt to maximize the number of useful servers. We show that finding the optimal sequence of accessing the fragments is a Markov decision problem whose complexity grows exponentially with the number of fragments. We propose heuristic algorithms that determine the sequence of access to the fragments and are empirically shown to perform well. A minimal simulation of this storage model is sketched below.
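The following Monte Carlo sketch fixes ideas for the storage model above: each server holds a subset of fragments and downloads them in a fixed, non-adaptive order, each fragment taking an i.i.d. Exp(1) time, and the file completes once every distinct fragment has been delivered by some server. The placement and access orders are illustrative assumptions, not the schemes proposed in the paper.

```python
# Monte Carlo sketch of the fragmented-storage download model (assumptions:
# 4 fragments, 6 servers with cache size 2, replication factor 3, Exp(1)
# fragment download times, non-adaptive per-server access order).
import random

N_FRAGMENTS = 4
PLACEMENT = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2), (1, 3)]  # fragments per server

def download_time(placement):
    """One realisation of the file download time for fixed access orders."""
    delivery = {}                              # fragment -> earliest delivery time
    for order in placement:
        t = 0.0
        for frag in order:
            t += random.expovariate(1.0)       # Exp(1) fragment download time
            delivery[frag] = min(delivery.get(frag, float("inf")), t)
    return max(delivery[f] for f in range(N_FRAGMENTS))

def mean_download_time(placement, runs=20000):
    return sum(download_time(placement) for _ in range(runs)) / runs

# Compare two access orders for the same stored sets: as given vs. reversed.
reversed_placement = [tuple(reversed(p)) for p in PLACEMENT]
print("mean time, given order   :", round(mean_download_time(PLACEMENT), 3))
print("mean time, reversed order:", round(mean_download_time(reversed_placement), 3))
```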
This work investigates a system where each user aims to retrieve a scalar linear function of the files of a library, which are Maximum Distance Separable coded and stored at multiple distributed servers. The system needs to guarantee robust decoding in the sense that each user must decode its demanded function from the signals received from any subset of servers whose cardinality exceeds a threshold. In addition, (a) the content of the library must be kept secure from a wiretapper who obtains all the signals from the servers; (b) any subset of users together cannot obtain any information about the demands of the remaining users; and (c) the users' demands must be kept private against all the servers even if they collude. Achievable schemes are derived by modifying existing Placement Delivery Array (PDA) constructions, originally proposed for single-server single-file retrieval coded caching systems without any privacy, security or robustness constraints. It is shown that the PDAs describing the original Maddah-Ali and Niesen coded caching scheme result in a load-memory tradeoff that is optimal to within a constant multiplicative gap, except for the small memory regime when the number of files is smaller than the number of users. As by-products, improved order optimality results are derived for three less restrictive systems in all parameter regimes. A bare-bones sketch of the baseline scheme is given below.
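Since the result above builds on the PDAs describing the original Maddah-Ali and Niesen coded caching scheme, the sketch below shows that baseline placement and delivery only, without the security, privacy and multi-server robustness extensions; the parameters K = 3, t = 1 and the symbolic file labels are illustrative assumptions.

```python
# Bare-bones sketch of the Maddah-Ali/Niesen coded caching scheme
# (illustrative parameters: K = 3 users, 3 files, cache fraction 1/3).
from itertools import combinations

K, t = 3, 1                      # users, t = K*M/N
FILES = ["A", "B", "C"]          # symbolic files; subfiles are just labels

# Placement: split each file into C(K, t) subfiles indexed by t-subsets of
# users; user k caches every subfile whose index contains k.
subsets = list(combinations(range(K), t))
cache = {k: [f"{f}_{s}" for f in FILES for s in subsets if k in s] for k in range(K)}
print("user 0 caches:", cache[0])

# Delivery for a demand vector: for every (t+1)-subset S of users, broadcast
# the XOR of each requested file's subfile indexed by S without that user.
demand = {0: "A", 1: "B", 2: "C"}
for S in combinations(range(K), t + 1):
    parts = [f"{demand[k]}_{tuple(u for u in S if u != k)}" for k in S]
    print("broadcast:", " XOR ".join(parts))
```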
WenLin Chen, Fang Lu, Yan Dong (2021)
Sparse random linear network coding (SRLNC), used as a class of erasure codes to ensure the reliability of multicast communications, has been widely investigated. However, an exact expression for the decoding success probability of SRLNC is still unknown, and existing expressions are either asymptotic or approximate. In this paper, we derive an exact expression for the decoding success probability of SRLNC. The key to achieving this is to propose a criterion for whether a vector is contained in a subspace. To obtain this criterion, we construct a basis of the subspace from a maximal linearly independent set of the columns of a matrix, with respect to which the coordinates of a vector are known. The exactness and the computation of the derived expression are demonstrated by a simple example.
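For intuition, the sketch below estimates the same quantity by Monte Carlo over GF(2): decoding succeeds exactly when the sparse random coefficient vectors of the received coded packets span all source packets. This is only a simulation-based sanity check, not the exact expression derived in the paper, and the field size, sparsity and packet counts are illustrative assumptions.

```python
# Monte Carlo estimate of the SRLNC decoding success probability over GF(2),
# with each coding coefficient nonzero with probability p (assumptions).
import random

def rank_gf2(rows):
    """Rank of GF(2) rows stored as integer bitmasks."""
    pivots = {}
    for row in rows:
        while row:
            p = row.bit_length() - 1
            if p not in pivots:
                pivots[p] = row
                break
            row ^= pivots[p]
    return len(pivots)

def success_probability(k, n, p, trials=20000):
    """Fraction of trials where n sparse coded packets span all k source packets."""
    wins = 0
    for _ in range(trials):
        rows = [sum((random.random() < p) << i for i in range(k)) for _ in range(n)]
        wins += rank_gf2(rows) == k
    return wins / trials

# e.g. k = 16 source packets, n = 20 coded packets, sparsity p = 0.2
print(success_probability(16, 20, 0.2))
```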
This paper focuses on mitigating the impact of stragglers in distributed learning systems. Unlike existing results designed for a fixed number of stragglers, we develop a new scheme called Adaptive Gradient Coding (AGC) with flexible tolerance of a varying number of stragglers. Our scheme gives an optimal tradeoff between computation load, straggler tolerance and communication cost. In particular, it allows the communication cost to be minimized according to the real-time number of stragglers in practical environments. Implementations on Amazon EC2 clusters using Python with the mpi4py package verify this flexibility in several situations.
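The following sketch shows the non-adaptive baseline that AGC generalises: gradient coding with fractional repetition, where workers are split into groups of r, every worker in a group holds the same r data partitions and returns the sum of their gradients, so the master recovers the full gradient from any n - (r - 1) responses. The data, partitioning and straggler pattern are illustrative assumptions, not the AGC construction itself.

```python
# Fractional-repetition gradient coding sketch (illustrative assumptions:
# n = 6 workers, replication r = 2, random "gradients" of dimension 3).
import numpy as np

n, r, d = 6, 2, 3
rng = np.random.default_rng(0)
partition_grads = rng.normal(size=(n, d))      # one gradient per data partition

def worker_message(worker):
    """Each worker returns the sum of the gradients of its group's r partitions."""
    group = worker // r
    parts = list(range(group * r, (group + 1) * r))
    return partition_grads[parts].sum(axis=0)

def master_recover(responses):
    """responses: dict worker -> message; needs at least one worker per group."""
    total, seen_groups = np.zeros(d), set()
    for w, msg in responses.items():
        g = w // r
        if g not in seen_groups:
            seen_groups.add(g)
            total += msg
    assert len(seen_groups) == n // r, "a whole group straggled"
    return total

stragglers = {3}                                # any r - 1 = 1 straggler is tolerated
responses = {w: worker_message(w) for w in range(n) if w not in stragglers}
print(np.allclose(master_recover(responses), partition_grads.sum(axis=0)))  # True
```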
Motivated by the analogy between successive interference cancellation and iterative belief propagation on erasure channels, irregular repetition slotted ALOHA (IRSA) strategies have received a lot of attention in the design of medium access control protocols. IRSA schemes have mostly been analyzed in theoretical scenarios with homogeneous sources, where they are shown to substantially improve the system performance compared to classical slotted ALOHA protocols. In this work, we consider generic systems where sources in different importance classes compete for a common channel. We propose a new prioritized IRSA algorithm and derive the probability of correctly resolving collisions for data from each source class. We then make use of our theoretical analysis to formulate a new optimization problem for selecting the transmission strategies of heterogeneous sources. We optimize both the replication probability per class and the source rate per class, in such a way that the overall system utility is maximized. We then propose a heuristic-based algorithm for the selection of the transmission strategy, which is built on intrinsic characteristics of the iterative decoding methods adopted for recovering from collisions. Experimental results validate the accuracy of the theoretical study and show the gain of well-chosen prioritized transmission strategies for the transmission of data from heterogeneous classes over shared wireless channels.
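A small simulation of the prioritized IRSA idea is sketched below, assuming two source classes that differ only in how many replicas they transmit per frame: singleton slots are decoded and their replicas cancelled iteratively, and the per-class resolution probability is estimated. The class sizes, replica counts and frame length are illustrative assumptions, not the optimized strategies derived in the paper.

```python
# Prioritized IRSA with successive interference cancellation, simulated over
# one frame at a time (illustrative class sizes, replica counts, frame length).
import random

SLOTS = 100
CLASSES = {"high": {"users": 20, "replicas": 4},   # more replicas -> higher priority
           "low":  {"users": 40, "replicas": 2}}

def simulate_frame():
    users = [(cls, u) for cls, cfg in CLASSES.items() for u in range(cfg["users"])]
    placement = {usr: random.sample(range(SLOTS), CLASSES[usr[0]]["replicas"])
                 for usr in users}
    slots = [set() for _ in range(SLOTS)]
    for usr, reps in placement.items():
        for s in reps:
            slots[s].add(usr)
    resolved, progress = set(), True
    while progress:                       # iterative SIC decoding
        progress = False
        for s in range(SLOTS):
            if len(slots[s]) == 1:        # singleton slot: decode, then cancel replicas
                usr = slots[s].pop()
                resolved.add(usr)
                for rep in placement[usr]:
                    slots[rep].discard(usr)
                progress = True
    return {cls: sum(1 for (c, _) in resolved if c == cls) / cfg["users"]
            for cls, cfg in CLASSES.items()}

rates = [simulate_frame() for _ in range(200)]
for cls in CLASSES:
    print(cls, "resolution probability ~", round(sum(r[cls] for r in rates) / len(rates), 3))
```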

