
Throughput-Smoothness Trade-offs in Multicasting of an Ordered Packet Stream

Added by Gauri Joshi
Publication date: 2014
Language: English





An increasing number of streaming applications need packets to be delivered strictly in order at the receiver. This paper provides a framework for analyzing in-order packet delivery in such applications. We consider the problem of multicasting an ordered stream of packets to two users over independent erasure channels with instantaneous feedback to the source. Depending upon the channel erasures, a packet that is in-order for one user may be redundant for the other. Thus there is an interdependence between throughput and the smoothness of in-order packet delivery to the two users. We use a Markov chain model of packet decoding to analyze the throughput-smoothness trade-offs of the users, and propose coding schemes that can span different points on each trade-off.
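As a rough illustration of the setting, the sketch below simulates the two-user multicast over independent erasure channels. The scheduling rule (always transmit the packet needed in-order by the lagging user), the erasure probabilities, and all names are illustrative assumptions, not the paper's coding schemes.

```python
import random

def simulate(T=100_000, e1=0.2, e2=0.2, seed=0):
    """Toy simulation of multicasting an ordered packet stream to two
    users over independent erasure channels with instantaneous feedback.

    Assumed baseline rule: in each slot the source transmits the packet
    needed in-order by the *lagging* user.
    """
    rng = random.Random(seed)
    next_pkt = [0, 0]            # next in-order packet index each user needs
    delivered = [0, 0]           # slots with an in-order decode per user
    for _ in range(T):
        pkt = min(next_pkt)      # serve the user that is furthest behind
        for u in (0, 1):
            erased = rng.random() < (e1 if u == 0 else e2)
            if not erased and pkt == next_pkt[u]:
                next_pkt[u] += 1         # in-order decode for user u
                delivered[u] += 1
            # if pkt < next_pkt[u], the packet is redundant for user u
    return [d / T for d in delivered]    # per-user in-order throughput

print(simulate())   # roughly [0.69, 0.69] for e1 = e2 = 0.2
```

Under this naive rule, a packet that advances the lagging user is redundant for the leading one, which is exactly the inter-dependence between throughput and smoothness that the paper's Markov chain analysis captures.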



Related research

Unlike traditional file transfer, where only the total delay matters, streaming applications impose delay constraints on each packet and require packets to be delivered in order. To achieve fast in-order packet decoding, we have to compromise on throughput. We study this trade-off between throughput and smoothness in packet decoding. We first consider point-to-point streaming and analyze how the trade-off is affected by the frequency of block-wise feedback, whereby the source receives full channel state feedback at periodic intervals. We show that frequent feedback can drastically improve the throughput-smoothness trade-off. Then we consider the problem of multicasting a packet stream to two users. For both point-to-point and multicast streaming, we propose a spectrum of coding schemes that span different throughput-smoothness trade-offs. One can choose an appropriate coding scheme from these, depending upon the delay sensitivity and bandwidth limitations of the application. This work introduces a novel style of analysis, using renewal processes and Markov chains, to analyze coding schemes.
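A minimal sketch of the block-wise feedback effect, under an assumed naive retransmission scheme (not one of the paper's): the source sends the next d in-order packets per block, learns the erasures only at the block boundary, and only the prefix received without a gap counts as decoded in order.

```python
import random

def stream(n_blocks=20_000, d=4, e=0.2, seed=1):
    """In-order throughput with block-wise feedback every d slots.

    Assumed toy scheme: each block carries the next d in-order packets;
    erasures are revealed only at the block boundary, and only the
    prefix of consecutively received packets is decoded in order (the
    rest are retransmitted in later blocks).
    """
    rng = random.Random(seed)
    decoded = 0
    for _ in range(n_blocks):
        for _ in range(d):
            if rng.random() < e:   # first erasure ends the in-order prefix
                break
            decoded += 1
    return decoded / (n_blocks * d)

for d in (1, 2, 4, 8):
    print(d, round(stream(d=d), 3))
# the in-order rate degrades as feedback becomes less frequent, matching
# the observation that frequent feedback improves the trade-off
```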
Xiaoming Duan, Zhe Xu, Rui Yan (2021)
We study privacy-utility trade-offs where users share privacy-correlated useful information with a service provider to obtain some utility. The service provider is adversarial in the sense that it can infer the users' private information from the shared useful information. To minimize the privacy leakage while maintaining a desired level of utility, the users carefully perturb the useful information via a probabilistic privacy mapping before sharing it. We focus on the setting in which the adversary attempting an inference attack on the users' privacy has potentially biased information about the statistical correlation between the private and useful variables. This information asymmetry between the users and the limited adversary leads to better privacy guarantees than in the case of an omniscient adversary under the same utility requirement. We first identify assumptions on the adversary's information so that the inference costs are well-defined and finite. Then, we characterize the impact of the information asymmetry and show that it increases the inference costs for the adversary. We further formulate the design of the privacy mapping against a limited adversary as a difference-of-convex-functions program and solve it via the concave-convex procedure. When the adversary's information is not precisely available, we adopt a Bayesian view and represent the adversary's information by a probability distribution. In this case, the expected cost for the adversary does not admit a closed-form expression, and we establish and maximize a lower bound on the expected cost. We provide a numerical example on a census data set to illustrate the theoretical results.
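For intuition about the underlying privacy-utility trade-off (leaving aside the limited-adversary aspect), here is a toy binary example with an assumed randomized-response privacy mapping; the model and all parameters are illustrative, not the paper's formulation.

```python
from math import log2

def H(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

def leakage_utility(q=0.1, p=0.2):
    """One point on a toy privacy-utility trade-off.

    Assumed model: private bit X ~ Bern(1/2); useful bit Y is X passed
    through a BSC(q); the shared bit Z is Y passed through the privacy
    mapping BSC(p).  Leakage is I(X; Z) and utility is I(Y; Z), both
    closed-form for uniform binary inputs.
    """
    r = q * (1 - p) + (1 - q) * p          # effective X -> Z crossover
    return 1 - H(r), 1 - H(p)              # (leakage, utility) in bits

for p in (0.0, 0.1, 0.2, 0.3, 0.5):
    leak, util = leakage_utility(p=p)
    print(f"p={p:.1f}  leakage={leak:.3f}  utility={util:.3f}")
# stronger perturbation lowers the utility but also the leakage; the
# paper studies how a *biased* adversary shifts this trade-off in the
# users' favor
```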
This paper investigates delay-distortion-power trade-offs in the transmission of quasi-stationary sources over block-fading channels by studying encoder and decoder buffering techniques that smooth out the source and channel variations. Four source and channel coding schemes that respect buffer and power constraints are presented to minimize the reconstructed source distortion. The first is a high-performance scheme that benefits from optimized source and channel rate adaptation. In the second scheme, the channel coding rate is fixed and optimized along with the transmission power with respect to the channel and source variations; this scheme therefore enjoys simplicity of implementation. The last two schemes have fixed transmission power with an optimized adaptive or fixed channel coding rate. For all the proposed schemes, closed-form solutions for the mean distortion, optimized rate, and power are provided, and in the high-SNR regime, the mean distortion exponent and the asymptotic mean power gains are derived. The proposed schemes with buffering exploit the diversity due to source and channel variations. Specifically, when the buffer size is limited, a fixed-rate adaptive-power scheme outperforms an adaptive-rate fixed-power scheme. Furthermore, analytical and numerical results demonstrate that with limited buffer size, the system performance in terms of reconstructed signal SNR saturates as the transmission power is increased, suggesting that appropriate buffer size selection is important to achieve a desired reconstruction quality.
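As a hedged illustration of one of the four regimes, the sketch below estimates the average power of a fixed-rate scheme with channel-inversion power control over Rayleigh block fading; the fading model, the truncation threshold, and the rate values are assumptions, not the paper's setup.

```python
import random

def avg_power_fixed_rate(R=1.0, n_blocks=100_000, g_min=0.05, seed=2):
    """Average power of a fixed-rate, adaptive-power (channel-inversion)
    scheme over Rayleigh block fading -- a toy stand-in for the paper's
    fixed-channel-rate scheme.  Truncated inversion (skip fades below
    g_min) is assumed so the average power stays finite; a buffer is
    what would absorb the resulting outage slots.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_blocks):
        g = rng.expovariate(1.0)        # Rayleigh fading -> exponential gain
        if g >= g_min:                  # invert the channel to hold rate R
            total += (2 ** R - 1) / g
    return total / n_blocks

for R in (0.5, 1.0, 2.0):
    print(R, round(avg_power_fixed_rate(R=R), 2))
# average power grows sharply with the fixed rate, tracing one axis of
# the delay-distortion-power trade-off
```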
Multicasting is the general method of conveying the same information to multiple users over a broadcast channel. In this work, the Gaussian MIMO broadcast channel is considered, with multiple users and any number of antennas at each node. A closed-loop scenario is assumed, for which a practical capacity-achieving multicast scheme is constructed. In the proposed scheme, linear modulation is carried out over time and space together, which allows the problem to be transformed into one of transmission over parallel scalar sub-channels whose gains are equal, except for a fraction of sub-channels that vanishes with the number of time slots used. Over these sub-channels, off-the-shelf fixed-rate AWGN codes can be used to approach capacity.
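The standard SVD decomposition below illustrates what "parallel scalar sub-channels" means for a MIMO channel; note that plain SVD leaves the sub-channel gains unequal, whereas the paper's space-time linear modulation equalizes them up to a vanishing fraction. The channel dimensions and values are arbitrary assumptions.

```python
import numpy as np

# Diagonalize a MIMO channel into parallel scalar sub-channels via SVD.
rng = np.random.default_rng(0)
H = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))

U, s, Vh = np.linalg.svd(H)
x = Vh.conj().T @ np.ones(4)     # precode the symbol vector with V
y = H @ x                        # pass through the MIMO channel
r = U.conj().T @ y               # receive filter U^H decouples the streams
print(np.round(np.abs(r), 3))    # per-stream gains...
print(np.round(s, 3))            # ...equal the (unequal) singular values
```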
Dealing with the sheer size and complexity of today's massive data sets requires computational platforms that can analyze data in a parallelized and distributed fashion. A major bottleneck that arises in such modern distributed computing environments is that some of the worker nodes may run slow. These nodes, a.k.a. stragglers, can significantly slow down computation, as the slowest node may dictate the overall computational time. A recent computational framework, called encoded optimization, creates redundancy in the data to mitigate the effect of stragglers. In this paper we develop a novel mathematical understanding of this framework, demonstrating its effectiveness in much broader settings than was previously understood. We also analyze the convergence behavior of iterative encoded optimization algorithms, allowing us to characterize fundamental trade-offs between convergence rate, size of the data set, accuracy, computational load (or data redundancy), and straggler toleration in this framework.
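A minimal sketch of the encoded-optimization idea, with an assumed Gaussian encoding matrix and a random subset of workers standing in for the non-stragglers; none of this is the paper's exact construction.

```python
import numpy as np

def encoded_step(A, b, x, S, n_workers=4, wait_frac=0.75, lr=0.01):
    """One gradient step of min ||Ax - b||^2 on encoded data.

    Toy version of encoded optimization: the data is premultiplied by a
    redundant random matrix S and split across workers; each step uses
    only the workers that respond first (here a random subset, mimicking
    straggler drop-out).
    """
    SA, Sb = S @ A, S @ b
    rows = np.array_split(np.arange(SA.shape[0]), n_workers)
    alive = np.random.choice(n_workers, int(wait_frac * n_workers),
                             replace=False)
    grad = sum(SA[rows[i]].T @ (SA[rows[i]] @ x - Sb[rows[i]]) for i in alive)
    return x - lr * grad

rng = np.random.default_rng(1)
A = rng.normal(size=(40, 5))
b = A @ np.arange(1.0, 6.0) + 0.1 * rng.normal(size=40)
S = rng.normal(size=(80, 40)) / np.sqrt(80)   # 2x redundancy, E[S^T S] = I
x = np.zeros(5)
for _ in range(300):
    x = encoded_step(A, b, x, S)
# the iterates land close to the least-squares solution despite the
# dropped workers, thanks to the redundancy in the encoding
print(np.linalg.norm(x - np.linalg.lstsq(A, b, rcond=None)[0]))
```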
