
Coordination Using Individually Shared Randomness

Publication date: 2018
Language: English





Two processors output correlated sequences with the help of a coordinator with whom they individually share independent randomness. For the case of unlimited shared randomness, we characterize the rate of communication required from the coordinator to the processors over a broadcast link. We also give an achievable trade-off between the communication and shared randomness rates.
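As a concrete illustration of the setup, here is a minimal simulation sketch of the classical achievability idea: the coordinator broadcasts a common auxiliary sequence W^n and each processor locally randomizes it, so the outputs satisfy the Markov chain X - W - Y. This is only a baseline that ignores the shared-randomness savings the paper studies; the blocklength n and crossover probability a are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000   # blocklength (illustrative)
a = 0.1       # crossover probability of each processor's local channel (assumed)

# Coordinator broadcasts W^n; each processor passes W through its own
# BSC(a) using private randomness, so X - W - Y forms a Markov chain.
W = rng.integers(0, 2, n)
X = W ^ (rng.random(n) < a).astype(int)
Y = W ^ (rng.random(n) < a).astype(int)

# Empirical joint pmf of (X, Y) versus the joint induced by the scheme:
# p(x, y) = sum_w p(w) p(x | w) p(y | w)
emp = np.array([[np.mean((X == x) & (Y == y)) for y in (0, 1)] for x in (0, 1)])
target = np.array([[0.5 * ((1 - a) ** 2 + a ** 2), a * (1 - a)],
                   [a * (1 - a), 0.5 * ((1 - a) ** 2 + a ** 2)]])
print(np.abs(emp - target).max())   # small for large n
```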

Related research

We study a distributed sampling problem where a set of processors want to output (approximately) independent and identically distributed samples from a joint distribution with the help of a common message from a coordinator. Each processor has access to a subset of sources from a set of independent sources of shared randomness. We consider two cases: in the omniscient coordinator setting, the coordinator has access to all these sources of shared randomness, while in the oblivious coordinator setting, it has access to none. All processors and the coordinator may privately randomize. In the omniscient coordinator setting, when the subsets at the processors are disjoint (the individually shared randomness model), we characterize the rate of communication required from the coordinator to the processors over a multicast link. For the two-processor case, the optimal rate matches a special case of the relaxed Wyner's common information proposed by Gastpar and Sula (2019), thereby providing an operational meaning to the latter. We also give an upper bound on the communication rate for the randomness-on-the-forehead model, where each processor observes all but one source of randomness, and an achievable strategy for the general case where the processors have access to arbitrary subsets of the sources of randomness. We further consider a more general model in which the processors observe components of correlated sources (with the coordinator observing all the components); here we characterize the communication rate when all the processors wish to output the same random sequence. In the oblivious coordinator setting, we completely characterize the trade-off region between the communication and shared randomness rates for the general case where the processors have access to arbitrary subsets of sources of randomness.
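The "same random sequence" special case admits a simple one-time-pad illustration. The sketch below is a standard XOR construction meant only to make the omniscient-coordinator, individually-shared-randomness model concrete; it is not the paper's general characterization.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 16   # blocklength (illustrative)

# Independent uniform pads, individually shared with the coordinator:
# S1 is known to processor 1, S2 to processor 2, and both to the coordinator.
S1 = rng.integers(0, 2, n)
S2 = rng.integers(0, 2, n)

# The omniscient coordinator multicasts one message M = S1 XOR S2,
# i.e., a rate of 1 bit per sample.
M = S1 ^ S2

# Processor 1 outputs its own pad; processor 2 unmasks M with its pad.
out1 = S1
out2 = M ^ S2
assert np.array_equal(out1, out2)   # both emit the same uniform sequence
```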
We characterize the rate coverage distribution for a spectrum-shared millimeter wave downlink cellular network. Each of multiple cellular operators owns separate mmWave bandwidth but shares the spectrum with the others, using dynamic inter-operator base station (BS) coordination to suppress the resulting cross-operator interference. We model the BS locations of each operator as mutually independent Poisson point processes and derive the probability density function (PDF) of the K-th strongest link power, incorporating both line-of-sight and non-line-of-sight states. Leveraging the obtained PDF, we derive the rate coverage expression as a function of system parameters such as the BS density, transmit power, bandwidth, and coordination set size. We verify the analysis with extensive simulation results. A major finding is that inter-operator BS coordination is useful in spectrum sharing (i) with dense and high-power operators and (ii) with fairly wide beams, e.g., 30° or higher.
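A heavily simplified Monte Carlo sketch of the rate-coverage quantity follows. It models a single operator with omnidirectional antennas, a user at the origin, and muting of the K strongest BSs, which is far from the paper's multi-operator beamformed analysis; every parameter value is an assumption.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative parameters (assumed, not the paper's values)
lam = 1e-4                 # BS density per square meter
R = 2000.0                 # simulation disk radius in meters
B = 100e6                  # bandwidth in Hz
noise = 1e-12              # noise power (normalized units; TX power = 1)
K = 3                      # coordination set size: the K strongest BSs cooperate
a_los, a_nlos = 2.0, 4.0   # LOS / NLOS path-loss exponents
beta = 1 / 141.4           # LOS probability model: p_LOS(r) = exp(-beta * r)
rate_thr = 100e6           # target rate in bit/s
trials = 2000

hits = 0
for _ in range(trials):
    # Poisson number of BSs, dropped uniformly on a disk around the user
    num = rng.poisson(lam * np.pi * R ** 2)
    if num <= K:
        continue
    r = R * np.sqrt(rng.random(num))            # distances to the origin
    los = rng.random(num) < np.exp(-beta * r)   # independent LOS states
    alpha = np.where(los, a_los, a_nlos)
    pwr = np.sort(r ** (-alpha))[::-1]          # ordered received powers
    signal = pwr[0]                             # serve on the strongest link
    interf = pwr[K:].sum()                      # coordinated BSs are muted
    rate = B * np.log2(1 + signal / (interf + noise))
    hits += rate > rate_thr
print("rate coverage ~", hits / trials)
```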
Ruida Zhou, Chao Tian, Tie Liu (2020)
We propose a new information-theoretic bound on generalization error based on a combination of the error decomposition technique of Bu et al. and the conditional mutual information (CMI) construction of Steinke and Zakynthinou. In a previous work, Haghifam et al. proposed a different bound combining the two aforementioned techniques, which we refer to as the conditional individual mutual information (CIMI) bound. However, in a simple Gaussian setting, both the CMI and the CIMI bounds are order-wise worse than the bound of Bu et al. This observation motivated us to propose the new bound, which overcomes this issue by reducing the conditioning terms in the conditional mutual information. In the process of establishing this bound, a conditional decoupling lemma is established, which also leads to a meaningful dichotomy and comparison among these information-theoretic bounds.
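To make the order-wise comparison concrete, the sketch below evaluates an individual-sample mutual-information bound in the style of Bu et al. for a toy Gaussian mean-estimation problem (Z_i ~ N(mu, s2) i.i.d., with the algorithm outputting the empirical mean W). The choice of algorithm and the sub-Gaussian parameter R = 1 are assumptions for illustration, not necessarily the paper's exact setting.

```python
import numpy as np

# Toy Gaussian setting: Z_i ~ N(mu, s2) i.i.d. and W = (1/n) * sum_i Z_i.
# Then (W, Z_i) are jointly Gaussian with squared correlation 1/n, giving
# the closed form  I(W; Z_i) = -0.5 * ln(1 - 1/n)  ~  1/(2n)  nats.
def individual_mi(n: int) -> float:
    return -0.5 * np.log(1 - 1 / n)

# Individual-sample bound in the style of Bu et al. for an R-sub-Gaussian
# loss (R = 1 assumed): |gen| <= (1/n) * sum_i sqrt(2 * R**2 * I(W; Z_i)),
# which here collapses to sqrt(2 * I(W; Z_1)) and decays like 1/sqrt(n).
R = 1.0
for n in (10, 100, 1000, 10000):
    bound = (1 / n) * n * np.sqrt(2 * R ** 2 * individual_mi(n))
    print(n, bound)
```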
In this paper, we study the problem of coordinating two nodes which can only exchange information via a relay at limited rates. The nodes are allowed to carry out a two-round interactive two-way communication with the relay, after which they should be able to generate i.i.d. copies of two random variables with a given joint distribution within a vanishing total variation distance. We prove inner and outer bounds on the coordination capacity region for this problem. Our inner bound is proved using the technique of output statistics of random binning recently developed by Yassaee et al.
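The coordination criterion itself is easy to visualize. The sketch below runs an unconstrained-rate baseline protocol (node 1's samples are relayed verbatim and node 2 locally randomizes) and checks a per-letter empirical proxy for the vanishing total-variation requirement; the target pmf is an arbitrary illustrative choice, and the rate-limited two-round interaction studied in the paper is not modeled.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000   # blocklength (illustrative)

# Target joint pmf for (X, Y) on {0, 1}^2 (arbitrary illustrative choice)
p = np.array([[0.4, 0.1],
              [0.1, 0.4]])
px = p.sum(axis=1)
p_y1_given_x = p[:, 1] / px          # P(Y = 1 | X = x)

# Round one: node 1 samples X^n and sends it to the relay; round two: the
# relay forwards it and node 2 draws Y_i ~ p(y | x_i) with private randomness.
X = rng.choice(2, size=n, p=px)
Y = (rng.random(n) < p_y1_given_x[X]).astype(int)

# Per-letter proxy for the coordination criterion: total variation between
# the empirical joint type and the target pmf (vanishes as n grows).
emp = np.array([[np.mean((X == a) & (Y == b)) for b in (0, 1)] for a in (0, 1)])
print("empirical TV distance:", 0.5 * np.abs(emp - p).sum())
```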
We consider the problem of quantifying the information shared by a pair of random variables $X_{1},X_{2}$ about another variable $S$. We propose a new measure of shared information, called extractable shared information, that is left monotonic; that is, the information shared about $S$ is bounded from below by the information shared about $f(S)$ for any function $f$. We show that our measure leads to a new nonnegative decomposition of the mutual information $I(S;X_1X_2)$ into shared, complementary and unique components. We study properties of this decomposition and show that a left monotonic shared information is not compatible with a Blackwell interpretation of unique information. We also discuss whether it is possible to have a decomposition in which both shared and unique information are left monotonic.
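In the standard partial-information-decomposition shorthand (SI for shared, UI for unique, CI for complementary information; this notation is assumed here for readability), the abstract's two claims read:

```latex
% SI = shared, UI = unique, CI = complementary information (standard
% partial-information-decomposition shorthand, assumed for readability).
\begin{align}
  I(S; X_1 X_2) &= SI(S; X_1, X_2) + UI(S; X_1 \setminus X_2)
                   + UI(S; X_2 \setminus X_1) + CI(S; X_1, X_2), \\
  SI(S; X_1, X_2) &\ge SI\bigl(f(S); X_1, X_2\bigr)
    \qquad \text{for every function } f \ \text{(left monotonicity)}.
\end{align}
```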
