
Small-Sample Inferred Adaptive Recoding for Batched Network Coding

Posted by: Jie Wang
Published: 2021
Research field: Information engineering
Paper language: English





Batched network coding is a low-complexity network coding solution for feedbackless multi-hop wireless packet networks with packet loss. The data to be transmitted is encoded into batches, each of which consists of a few coded packets. Unlike the traditional forwarding strategy, the intermediate network nodes perform recoding, which generates recoded packets by network coding operations restricted to the same batch. Adaptive recoding is a technique that adapts to fluctuations in packet loss by optimizing the number of recoded packets per batch to enhance the throughput. Applying adaptive recoding requires the input rank distribution, a piece of information about the batches arriving at the node. In practice, however, this distribution is not known in advance, as the channel conditions of the incoming links may change from time to time. On the other hand, to fully utilize the potential of adaptive recoding, we need a good estimate of this distribution. In other words, we need to infer this distribution from a few samples so that we can apply adaptive recoding as soon as possible. In this paper, we propose a distributionally robust optimization for adaptive recoding with a small-sample inferred prediction of the input rank distribution. We develop an algorithm to solve this optimization efficiently, supported by theoretical guarantees that the performance of our optimization constitutes a confidence lower bound on the optimal throughput with high probability.
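
To make the idea concrete, here is a minimal Python sketch of the small-sample, distributionally robust step: the input rank distribution is estimated empirically from a few observed batch ranks, and a candidate scheme is scored by its worst-case expected throughput over a total-variation ball around that estimate. The per-rank throughput values `f`, the radius formula, and all names are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def empirical_rank_dist(samples, max_rank):
    """Empirical input rank distribution from a small sample of batch ranks."""
    p = np.bincount(samples, minlength=max_rank + 1).astype(float)
    return p / p.sum()

def worst_case_throughput(p_hat, f, radius):
    """min over {p in simplex : ||p - p_hat||_1 <= radius} of sum_r p[r] * f[r].

    Solved greedily: move up to radius/2 probability mass from the ranks
    with the largest f to the rank with the smallest f.
    """
    p = p_hat.copy()
    budget = radius / 2.0
    worst = np.argmin(f)                      # receiving coordinate
    for r in np.argsort(f)[::-1]:             # donate from large-f ranks first
        if r == worst or budget <= 0:
            continue
        move = min(p[r], budget)
        p[r] -= move
        p[worst] += move
        budget -= move
    return float(p @ f)

# Toy usage: 10 observed batch ranks, batch size M = 4.
samples = np.array([4, 3, 4, 2, 4, 3, 4, 4, 3, 2])
p_hat = empirical_rank_dist(samples, max_rank=4)
f = np.array([0.0, 0.9, 1.7, 2.4, 3.0])      # hypothetical per-rank throughputs
radius = np.sqrt(2.0 * np.log(10) / len(samples))  # illustrative sample-driven radius
print(worst_case_throughput(p_hat, f, radius))
```

The worst-case value produced this way is, by construction, a lower bound on the expected throughput under any distribution in the ambiguity set, which mirrors the confidence-lower-bound flavor of the guarantee stated above.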


Read also

Multi-hop networks have become popular network topologies in various emerging Internet of Things applications. Batched network coding (BNC) is a solution to reliable communication in such networks with packet loss. By grouping packets into small batches and restricting recoding to packets belonging to the same batch, BNC has much smaller computational and storage requirements at the intermediate nodes compared with a direct application of random linear network coding. In this paper, we propose a practical recoding scheme called blockwise adaptive recoding (BAR), which learns the latest channel knowledge from short observations so that it can adapt to fluctuations in channel conditions. We focus on practical concerns such as the design of efficient BAR algorithms, and we also design and investigate feedback schemes for BAR under imperfect feedback systems. Our numerical evaluations show that BAR achieves a significant throughput gain for small batch sizes compared with the existing baseline recoding scheme. More importantly, this gain is insensitive to inaccurate channel knowledge. This encouraging result suggests that BAR is suitable for practical deployment, as the exact channel model and its parameters may be unknown and subject to change over time.
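
As a rough illustration of learning channel knowledge from short observations, the following Python sketch keeps a short sliding window of per-packet feedback, estimates the loss rate of the outgoing link, and sizes the number of recoded packets for a batch accordingly. It assumes independent packet loss and per-packet feedback; the actual BAR algorithms in the paper are more refined, and all names here are illustrative.

```python
import math
from collections import deque

class BlockwiseLossEstimator:
    """Estimate the outgoing link's loss rate from a short, recent window."""
    def __init__(self, window=64):
        self.history = deque(maxlen=window)

    def observe(self, delivered):
        """Record one feedback bit: True if the packet was delivered."""
        self.history.append(1 if delivered else 0)

    def loss_rate(self, prior=0.1):
        if not self.history:
            return prior                      # fall back before any feedback arrives
        return 1.0 - sum(self.history) / len(self.history)

def recoded_packets_for_batch(rank, est, margin=1.0):
    """Pick t so the expected number of delivered packets covers the batch rank."""
    p_loss = min(est.loss_rate(), 0.99)       # guard against division by zero
    return math.ceil((rank + margin) / (1.0 - p_loss))

# Toy usage: a few feedback bits, then size the transmission for a rank-4 batch.
est = BlockwiseLossEstimator()
for ok in [True, True, False, True]:
    est.observe(ok)
print(recoded_packets_for_batch(rank=4, est=est))
```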
Batched network coding is a variation of random linear network coding with low computational and storage costs. To adapt to random fluctuations in the number of erasures in individual batches, it is not optimal to recode and transmit the same number of packets for all batches. Different distributed optimization models, called adaptive recoding schemes, have been formulated for this purpose. The key component of these optimization problems is the expected value of the rank distribution of a batch at the next network node, also known as the expected rank. In this paper, we put forth a unified adaptive recoding framework with an arbitrary recoding field size. We show that the expected rank functions are concave when the packet loss pattern is a stationary stochastic process, which covers, but is not limited to, independent packet loss and the Gilbert-Elliott packet loss model. Under this concavity assumption, we show that there always exists a solution that can both minimize the randomness in the number of recoded packets and tolerate rank distribution errors due to inaccurate measurements or the limited precision of the machine. We provide an algorithm to obtain such an optimal solution, and propose tuning schemes that can turn any feasible solution into a desired optimal solution.
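
Under the concavity stated above, allocating the transmission budget by marginal expected-rank gain is a natural solution method: repeatedly grant one more recoded packet to the rank whose next packet yields the largest gain, leaving at most one rank with a randomized (fractional) count. The Python sketch below illustrates this; the stand-in expected-rank function `E` and all names are illustrative, not the paper's exact algorithm.

```python
import heapq

def greedy_allocation(p, E, t_avg, t_max=64):
    """Maximize sum_r p[r] * E(r, t[r]) subject to sum_r p[r] * t[r] <= t_avg.

    With E(r, t) concave in t, each rank's marginal gain is decreasing, so a
    max-heap of current marginals yields the optimal continuous solution.
    """
    R = len(p)
    t = [0] * R
    budget = t_avg
    # Max-heap of marginal gains Delta_r(t_r) = E(r, t_r + 1) - E(r, t_r).
    heap = [(-(E(r, 1) - E(r, 0)), r) for r in range(R) if p[r] > 0]
    heapq.heapify(heap)
    while heap and budget > 1e-12:
        gain, r = heapq.heappop(heap)
        if -gain <= 0:
            break                              # no rank benefits from more packets
        if p[r] <= budget:                     # a full increment is affordable
            t[r] += 1
            budget -= p[r]
            if t[r] < t_max:
                heapq.heappush(heap, (-(E(r, t[r] + 1) - E(r, t[r])), r))
        else:                                  # fractional last step: randomize t[r]
            frac = budget / p[r]               # send t[r]+1 w.p. frac, else t[r]
            return t, (r, frac)
    return t, None

# Toy usage with a concave stand-in expected-rank function.
p = [0.1, 0.2, 0.3, 0.4]                       # toy input rank distribution
E = lambda r, t: r * (1.0 - 0.8 ** t)          # illustrative, concave in t
print(greedy_allocation(p, E, t_avg=8.0))
```

Confining the randomization to a single rank is one way to read "minimizing the randomness on the number of recoded packets" in the abstract.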
Batched network coding (BNC) is a low-complexity solution to network transmission in multi-hop packet networks with packet loss. BNC encodes the source data into batches of packets. As a network coding scheme, the intermediate nodes perform recoding on the received packets belonging to the same batch instead of just forwarding them. A recoding scheme that may generate more recoded packets for batches of higher rank is called adaptive recoding. Meanwhile, to combat burst packet loss, the transmission of a block of batches can be interleaved. The stream interleaver studied in the literature achieves the maximum separation between any two consecutive packets of a batch, but it permutes packets across blocks and hence cannot bound the buffer size or the latency. To resolve this issue, we design an intrablock interleaver for adaptive recoding that preserves the advantages of a block interleaver when the number of recoded packets is the same for all batches. We use potential energy from classical mechanics to measure the performance of an interleaver, and propose an algorithm to optimize the interleaver under this performance measure. Our problem formulation and algorithm for intrablock interleaving are also of independent interest.
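
To illustrate the potential-energy measure, the Python sketch below treats packets of the same batch as mutually repelling particles on the transmission timeline, so an interleaver that spreads same-batch packets further apart has lower total energy. The inverse-distance potential is an illustrative choice; the paper's exact energy function may differ.

```python
from collections import defaultdict

def interleaver_energy(sequence):
    """sequence[i] = batch id of the i-th transmitted packet in the block."""
    positions = defaultdict(list)
    for i, batch in enumerate(sequence):
        positions[batch].append(i)
    energy = 0.0
    for pos in positions.values():             # same-batch packets repel each other
        for a in range(len(pos)):
            for b in range(a + 1, len(pos)):
                energy += 1.0 / (pos[b] - pos[a])
    return energy

# Two batches of 3 packets each: interleaving spreads them and lowers the energy.
print(interleaver_energy(['A', 'A', 'A', 'B', 'B', 'B']))  # clustered: 5.0
print(interleaver_energy(['A', 'B', 'A', 'B', 'A', 'B']))  # interleaved: 2.5
```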
This paper focuses on mitigating the impact of stragglers in distributed learning systems. Unlike existing results designed for a fixed number of stragglers, we develop a new scheme called Adaptive Gradient Coding (AGC) with flexible tolerance of a varying number of stragglers. Our scheme gives an optimal tradeoff between computation load, straggler tolerance, and communication cost. In particular, it allows the communication cost to be minimized according to the real-time number of stragglers in practical environments. Implementations on Amazon EC2 clusters using Python with the mpi4py package verify its flexibility in several situations.
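
For contrast with AGC's adaptive tradeoff, here is a minimal Python sketch of the fixed-redundancy baseline it generalizes: a fractional-repetition gradient code that replicates every data partition on s+1 workers, so any s stragglers can be tolerated. numpy stands in for mpi4py to keep the example self-contained, and all names are illustrative.

```python
import numpy as np

def assign_partitions(n_workers, s):
    """Group workers so each of the n/(s+1) partitions lives on s+1 workers."""
    assert n_workers % (s + 1) == 0
    return {w: w // (s + 1) for w in range(n_workers)}   # worker -> partition

def aggregate(parts, replies):
    """Recover the full gradient from any set of non-straggler replies."""
    seen = {}
    for worker, part, g in replies:            # first reply per partition wins
        seen.setdefault(part, g)
    assert len(seen) == len(parts), "too many stragglers"
    return sum(seen.values())

# Toy run: 6 workers, tolerate s = 2 stragglers (each partition replicated 3x).
n, s, d = 6, 2, 4
rng = np.random.default_rng(0)
parts = {j: rng.normal(size=d) for j in range(n // (s + 1))}  # per-partition gradients
owner = assign_partitions(n, s)
stragglers = {1, 5}                             # any two workers may drop out
replies = [(w, owner[w], parts[owner[w]]) for w in range(n) if w not in stragglers]
print(aggregate(parts, replies))                # equals parts[0] + parts[1]
```

This fixed scheme always pays the full replication cost; AGC's point, per the abstract, is to adapt the communication cost to the number of stragglers actually realized.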
In this paper we introduce Neural Network Coding (NNC), a data-driven approach to joint source and network coding. In NNC, the encoders at each source and intermediate node, as well as the decoder at each destination node, are neural networks that are all trained jointly for the task of communicating correlated sources through a network of noisy point-to-point links. The NNC scheme is application-specific and makes use of a training set of data instead of making assumptions about the source statistics. In addition, it can adapt to any arbitrary network topology and power constraint. We show empirically that, for the task of transmitting MNIST images over a network, NNC improves over baseline schemes, especially in the low-SNR regime.
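
A minimal end-to-end sketch of the NNC idea for the simplest topology, one source, one noisy point-to-point link, and one destination, assuming PyTorch is available: the encoder and decoder networks are trained jointly under an average-power constraint, with random tensors standing in for MNIST data. The architecture, dimensions, and noise level are illustrative assumptions.

```python
import torch
import torch.nn as nn

class Coder(nn.Module):
    """Source encoder: maps a source vector to a power-constrained channel code."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_in, 64), nn.ReLU(), nn.Linear(64, d_out))

    def forward(self, x):
        y = self.net(x)
        # Power constraint: scale each code vector to unit average per-symbol power.
        return y * (y.shape[1] ** 0.5) / y.norm(dim=1, keepdim=True)

enc = Coder(20, 8)                          # 20-dim source -> 8 channel uses
dec = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 20))
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)

for step in range(2000):
    x = torch.randn(128, 20)                # stand-in for real training data
    y = enc(x) + 0.5 * torch.randn(128, 8)  # AWGN point-to-point link
    loss = nn.functional.mse_loss(dec(y), x)
    opt.zero_grad()
    loss.backward()
    opt.step()
print(loss.item())                          # reconstruction error after training
```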