
We consider the problem of information embedding where the encoder modifies a white Gaussian host signal in a power-constrained manner to encode a message, and the decoder recovers both the embedded message and the modified host signal. This partially extends the recent work of Sumszyk and Steinberg to the continuous-alphabet Gaussian setting. Through a control-theoretic lens, we observe that the problem is a minimalist example of what is called the triple role of control actions. We show that a dirty-paper-coding (DPC) strategy achieves the optimum for perfect recovery of the modified host and the message at any message rate. For imperfect recovery of the modified host, by deriving bounds on the minimum mean-square error (MMSE) in recovering the modified host signal, we show that DPC-based strategies are guaranteed to attain within a uniform constant factor of 16 of the optimal weighted sum of the power spent modifying the host signal and the MMSE of the modified-host reconstruction, for all weights and all message rates. When specialized to the zero-rate case, our results provide the tightest known lower bounds on the asymptotic costs for the vector version of a famous open problem in decentralized control: the Witsenhausen counterexample. Numerically, this tighter bound helps us characterize the asymptotically optimal costs for the vector Witsenhausen problem to within a factor of 1.3 for all problem parameters, improving on the earlier best known bound of 2.
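For readers unfamiliar with the Witsenhausen counterexample, the following is a minimal Monte Carlo sketch of the standard scalar formulation (not the vector version studied above): state x0 ~ N(0, σ²), a first controller applies u1 = f(x0) at cost k²E[u1²], the state becomes x1 = x0 + u1, and a second controller estimates x1 from the noisy observation y = x1 + N(0, 1) at cost E[(x1 − u2)²]. The parameter values below are purely illustrative. Two classical linear baselines are compared; Witsenhausen's insight was that nonlinear (quantization-based) first-stage strategies can beat both.

```python
import numpy as np

rng = np.random.default_rng(0)
k2, sigma2 = 0.04, 25.0                    # k^2 and Var(x0); illustrative values
x0 = rng.normal(0.0, np.sqrt(sigma2), 1_000_000)
noise = rng.normal(0.0, 1.0, x0.size)      # unit-variance observation noise

# Strategy A: do nothing at stage 1; stage 2 uses the linear MMSE estimate of x1.
x1 = x0                                    # u1 = 0, so no stage-1 cost
u2 = (sigma2 / (sigma2 + 1.0)) * (x1 + noise)
cost_a = np.mean((x1 - u2) ** 2)           # ≈ sigma2 / (sigma2 + 1)

# Strategy B: zero-force the state at stage 1 (u1 = -x0), so stage 2 is trivial.
cost_b = k2 * sigma2                       # k^2 E[u1^2], with zero estimation cost
```

For these parameters the do-nothing strategy is cheaper than zero-forcing, but neither is optimal; the lower bounds discussed above quantify how far such strategies can be from the (unknown) optimum in the vector setting.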
We determine the rate region of the vector Gaussian one-helper source-coding problem under a covariance matrix distortion constraint. The rate region is achieved by a simple scheme that separates the lossy vector quantization from the lossless spatial compression. The converse is established by extending and combining three analysis techniques that have been employed in the past to obtain partial results for the problem.
We study a hypothesis testing problem in which data is compressed distributively and sent to a detector that seeks to decide between two possible distributions for the data. The aim is to characterize all achievable encoding rates and exponents of the type-2 error probability when the type-1 error probability is at most a fixed value. For related problems in distributed source coding, schemes based on random binning perform well and are often optimal. For distributed hypothesis testing, however, the use of binning is hindered by the fact that the overall error probability may be dominated by errors in the binning process. We show that despite this complication, binning is optimal for a class of problems in which the goal is to test against conditional independence. We then use this optimality result to give an outer bound for a more general class of instances of the problem.
We consider the problem of estimating the probability of an observed string drawn i.i.d. from an unknown distribution. The key feature of our study is that the length of the observed string is assumed to be of the same order as the size of the underlying alphabet. In this setting, many letters are unseen and the empirical distribution tends to overestimate the probability of the observed letters. To overcome this problem, the traditional approach to probability estimation is to use the classical Good-Turing estimator. We introduce a natural scaling model and use it to show that the Good-Turing sequence probability estimator is not consistent. We then introduce a novel sequence probability estimator that is indeed consistent under the natural scaling model.
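As background for the abstract above, the following is a minimal sketch of the classical Good-Turing letter-probability estimate (the baseline the abstract argues against in the sequence-probability setting, not the novel estimator it proposes). A letter seen r times gets probability proportional to the adjusted count r* = (r+1)·N_{r+1}/N_r, where N_r is the number of distinct letters seen exactly r times, and the total probability of unseen letters is estimated as N_1/n. The fallback when N_{r+1} = 0 is one common convention, not part of the estimator's definition.

```python
from collections import Counter

def good_turing_probs(sample):
    """Good-Turing probability estimates for the letters of a sample.

    Returns (probs, missing_mass): a dict mapping each observed letter
    to its Good-Turing probability, and the estimated total probability
    N_1 / n of all unseen letters.
    """
    n = len(sample)
    counts = Counter(sample)              # r for each observed letter
    freq_of_freq = Counter(counts.values())  # N_r: how many letters occur r times
    probs = {}
    for letter, r in counts.items():
        n_r1 = freq_of_freq.get(r + 1, 0)
        # Adjusted count r* = (r+1) N_{r+1} / N_r; fall back to the
        # empirical count when N_{r+1} = 0 (a common smoothing choice).
        r_star = (r + 1) * n_r1 / freq_of_freq[r] if n_r1 > 0 else r
        probs[letter] = r_star / n
    missing_mass = freq_of_freq.get(1, 0) / n
    return probs, missing_mass
```

For example, on the string "abracadabra" (n = 11, with N_1 = 2 singletons), the estimated unseen mass is 2/11, illustrating how Good-Turing shifts probability away from the observed letters.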