
Coordination Through Shared Randomness

Publication date: 2019
Language: English





We study a distributed sampling problem where a set of processors want to output (approximately) independent and identically distributed samples from a joint distribution with the help of a common message from a coordinator. Each processor has access to a subset of sources from a set of independent sources of shared randomness. We consider two cases -- in the omniscient coordinator setting, the coordinator has access to all these sources of shared randomness, while in the oblivious coordinator setting, it has access to none. All processors and the coordinator may privately randomize. In the omniscient coordinator setting, when the subsets at the processors are disjoint (individually shared randomness model), we characterize the rate of communication required from the coordinator to the processors over a multicast link. For the two-processor case, the optimal rate matches a special case of relaxed Wyner's common information proposed by Gastpar and Sula (2019), thereby providing an operational meaning to the latter. We also give an upper bound on the communication rate for the randomness-on-the-forehead model, where each processor observes all but one source of randomness, and we give an achievable strategy for the general case where the processors have access to arbitrary subsets of sources of randomness. We also consider a more general model where the processors observe components of correlated sources (with the coordinator observing all the components) and characterize the communication rate when all the processors wish to output the same random sequence. In the oblivious coordinator setting, we completely characterize the trade-off region between the communication and shared randomness rates for the general case where the processors have access to arbitrary subsets of sources of randomness.
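As a point of reference, the relaxed Wyner's common information of Gastpar and Sula (2019) mentioned above is commonly stated as the following optimization (a standard formulation, with notation that is ours rather than quoted from this abstract): for two sources $X_1, X_2$ and a relaxation parameter $\gamma \ge 0$,

$C_{\gamma}(X_1; X_2) = \min_{p(w \mid x_1, x_2)} I(X_1, X_2; W)$ subject to $I(X_1; X_2 \mid W) \le \gamma$,

which reduces to Wyner's common information at $\gamma = 0$.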

Related research


Two processors output correlated sequences with the help of a coordinator with whom they individually share independent randomness. For the case of unlimited shared randomness, we characterize the rate of communication required from the coordinator to the processors over a broadcast link. We also give an achievable trade-off between the communication and shared randomness rates.
We characterize the rate coverage distribution for a spectrum-shared millimeter wave downlink cellular network. Each of multiple cellular operators owns separate mmWave bandwidth but shares its spectrum with the others, using dynamic inter-operator base station (BS) coordination to suppress the resulting cross-operator interference. We model the BS locations of each operator as mutually independent Poisson point processes, and derive the probability density function (PDF) of the K-th strongest link power, incorporating both line-of-sight and non line-of-sight states. Leveraging the obtained PDF, we derive the rate coverage expression as a function of system parameters such as the BS density, transmit power, bandwidth, and coordination set size. We verify the analysis with extensive simulation results. A major finding is that inter-operator BS coordination is useful in spectrum sharing (i) with dense and high-power operators and (ii) with fairly wide beams, e.g., 30° or higher.
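A minimal simulation sketch of the kind of setup described above: Poisson point process BS locations per operator, a simple power-law path loss, and a coordination set that silences the strongest interfering links. The densities, powers, path-loss exponent, and attachment rule below are illustrative assumptions (LOS/NLOS states and per-operator attachment are omitted), not values or the exact model from the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative parameters (assumptions, not taken from the paper)
    side = 1000.0      # square region side length in metres
    density = 1e-4     # BS density per m^2, per operator
    num_ops = 2        # number of operators sharing the band
    p_tx = 1.0         # transmit power (linear scale)
    alpha = 3.0        # path-loss exponent
    K = 3              # coordination set size: serving BS plus K-1 silenced interferers
    noise = 1e-12      # noise power (linear scale)

    def drop_ppp():
        """Sample one operator's BS locations as a homogeneous PPP on the square."""
        n = rng.poisson(density * side * side)
        return rng.uniform(0.0, side, size=(n, 2))

    user = np.array([side / 2, side / 2])

    # Received power from every BS of every operator at the user
    powers = []
    for _ in range(num_ops):
        bs = drop_ppp()
        d = np.linalg.norm(bs - user, axis=1)
        powers.append(p_tx * d ** (-alpha))
    powers = np.sort(np.concatenate(powers))[::-1]   # strongest link first

    # Serve from the strongest link; coordination silences the next K-1 strongest
    # links, and the remaining BSs contribute to interference.
    signal = powers[0]
    interference = powers[K:].sum()
    sinr = signal / (interference + noise)
    print(f"SINR with coordination set size {K}: {10 * np.log10(sinr):.1f} dB")

Sweeping K (and the per-operator density) in such a toy model gives a rough sense of how rate coverage responds to the coordination set size, which is the trade-off the paper quantifies analytically.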
Deep learning has seen a movement away from representing examples with a monolithic hidden state towards a richly structured state. For example, Transformers segment by position, and object-centric architectures decompose images into entities. In all these architectures, interactions between different elements are modeled via pairwise interactions: Transformers make use of self-attention to incorporate information from other positions, while object-centric architectures make use of graph neural networks to model interactions among entities. However, pairwise interactions may not achieve global coordination or a coherent, integrated representation that can be used for downstream tasks. In cognitive science, a global workspace architecture has been proposed in which functionally specialized components share information through a common, bandwidth-limited communication channel. We explore the use of such a communication channel in the context of deep learning for modeling the structure of complex environments. The proposed method includes a shared workspace through which communication among different specialist modules takes place, but due to limits on the communication bandwidth, specialist modules must compete for access. We show that capacity limitations have a rational basis in that (1) they encourage specialization and compositionality and (2) they facilitate the synchronization of otherwise independent specialists.
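A minimal sketch of the bandwidth-limited write-then-broadcast step described above: specialists compete via attention scores for a fixed number of workspace slots, the winners write, and everyone reads back. The module count, hidden size, scoring rule, and top-k selection below are illustrative assumptions, not the authors' architecture.

    import numpy as np

    rng = np.random.default_rng(0)

    num_specialists = 6   # functionally specialised modules (assumed count)
    d_model = 16          # hidden size of each specialist's state (assumed)
    num_slots = 2         # workspace capacity: only this many specialists may write

    # Current hidden states of the specialists
    specialists = rng.normal(size=(num_specialists, d_model))

    # Write competition: each specialist is scored against a query vector;
    # the query is random here only to keep the sketch self-contained.
    query = rng.normal(size=(d_model,))
    scores = specialists @ query / np.sqrt(d_model)
    winners = np.argsort(scores)[-num_slots:]        # top-k specialists win write access

    # Winners write their states into the limited workspace slots
    workspace = specialists[winners]

    # Broadcast: every specialist reads back from the workspace with soft attention
    attn = specialists @ workspace.T / np.sqrt(d_model)
    attn = np.exp(attn - attn.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)
    specialists = specialists + attn @ workspace     # residual update of each specialist

    print("updated specialist states:", specialists.shape)

The hard top-k selection is what makes the channel bandwidth-limited: only a few specialists can influence the shared state per step, which is the capacity constraint the abstract argues encourages specialization.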
In this paper, we study the problem of coordinating two nodes which can only exchange information via a relay at limited rates. The nodes are allowed to carry out two rounds of interactive two-way communication with the relay, after which they should be able to generate i.i.d. copies of two random variables with a given joint distribution within a vanishing total variation distance. We prove inner and outer bounds on the coordination capacity region for this problem. Our inner bound is proved using the technique of output statistics of random binning that has recently been developed by Yassaee et al.
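In symbols, the coordination requirement described above asks the induced output distribution to approach the i.i.d. target distribution $q_{XY}$ in total variation (notation ours):

$\lVert p_{X^n Y^n} - \prod_{i=1}^{n} q_{XY} \rVert_{TV} \rightarrow 0$ as the block length $n \rightarrow \infty$.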
We consider the problem of quantifying the information shared by a pair of random variables $X_{1},X_{2}$ about another variable $S$. We propose a new measure of shared information, called extractable shared information, that is left monotonic; that is, the information shared about $S$ is bounded from below by the information shared about $f(S)$ for any function $f$. We show that our measure leads to a new nonnegative decomposition of the mutual information $I(S;X_1X_2)$ into shared, complementary and unique components. We study properties of this decomposition and show that a left monotonic shared information is not compatible with a Blackwell interpretation of unique information. We also discuss whether it is possible to have a decomposition in which both shared and unique information are left monotonic.
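Spelled out, the two properties claimed in this abstract are (notation ours): left monotonicity, i.e. $SI(S; X_1, X_2) \ge SI(f(S); X_1, X_2)$ for every function $f$ of $S$, and a nonnegative decomposition

$I(S; X_1 X_2) = SI + CI + UI_1 + UI_2$,

where $SI$, $CI$, $UI_1$ and $UI_2$ denote the shared, complementary, and the two unique information components, respectively, each of which is nonnegative.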