
Implicit Communication as Minimum Entropy Coupling

Publication date: 2021
Language: English





In many common-payoff games, achieving good performance requires players to develop protocols for communicating their private information implicitly -- i.e., using actions that have non-communicative effects on the environment. Multi-agent reinforcement learning practitioners typically approach this problem using independent learning methods in the hope that agents will learn implicit communication as a byproduct of expected return maximization. Unfortunately, independent learning methods are incapable of doing this in many settings. In this work, we isolate the implicit communication problem by identifying a class of partially observable common-payoff games, which we call implicit referential games, whose difficulty can be attributed to implicit communication. Next, we introduce a principled method based on minimum entropy coupling that leverages the structure of implicit referential games, yielding a new perspective on implicit communication. Lastly, we show that this method can discover performant implicit communication protocols in settings with very large spaces of messages.
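
To make the coupling view concrete, here is the standard formulation of minimum entropy coupling in this setting (the symbols are ours, not necessarily the paper's notation): writing \mu for the marginal distribution over private messages m and \nu for the marginal over signaling actions a, one seeks

\gamma^* \in \arg\min_{\gamma \in \Gamma(\mu, \nu)} H(\gamma),
\qquad
\Gamma(\mu, \nu) = \{ \gamma \ge 0 : \sum_a \gamma(m, a) = \mu(m), \ \sum_m \gamma(m, a) = \nu(a) \}.

Since H(M | A) = H(\gamma) - H(\nu) and H(\nu) is fixed by the marginal constraint, minimizing the joint entropy makes the observed action as informative as possible about the hidden message, which is exactly what a listener needs in order to decode it.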




Related research

We study the problem of identifying the causal relationship between two discrete random variables from observational data. We recently proposed a novel framework called entropic causality that works in a very general functional model but makes the assumption that the unobserved exogenous variable has small entropy in the true causal direction. This framework requires the solution of a minimum entropy coupling problem: given marginal distributions of m discrete random variables, each on n states, find the joint distribution with minimum entropy that respects the given marginals. This corresponds to minimizing a concave function of n^m variables over a convex polytope defined by nm linear constraints, called a transportation polytope. Unfortunately, it was recently shown that this minimum entropy coupling problem is NP-hard, even for 2 variables with n states. Even representing points (joint distributions) over this space can require exponential complexity (in n and m) if done naively. In our recent work we introduced an efficient greedy algorithm to find an approximate solution for this problem. In this paper we analyze this algorithm and establish two results: our algorithm always finds a local minimum, and it is within an additive approximation error of the unknown global optimum.
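
A minimal sketch of the greedy heuristic described above, in Python (the variable names and tie-breaking are ours; the authors' implementation may differ): at each step, pair up the largest remaining mass in every marginal and move the smallest of those masses into the joint distribution.

import numpy as np

def greedy_mec(marginals, tol=1e-12):
    """Greedy approximate minimum entropy coupling.

    marginals: list of 1-D probability vectors.
    Returns a dict mapping index tuples to joint probability mass.
    """
    residual = [np.asarray(p, dtype=float).copy() for p in marginals]
    coupling = {}
    while True:
        # Locate the largest remaining mass in each marginal ...
        idx = tuple(int(np.argmax(r)) for r in residual)
        # ... and move the smallest of those masses into the joint.
        mass = min(r[i] for r, i in zip(residual, idx))
        if mass <= tol:
            break
        coupling[idx] = coupling.get(idx, 0.0) + mass
        for r, i in zip(residual, idx):
            r[i] -= mass
    return coupling

def coupling_entropy(coupling):
    p = np.array([v for v in coupling.values() if v > 0])
    return float(-(p * np.log2(p)).sum())

# Two marginals on 3 states.
mu = np.array([0.5, 0.3, 0.2])
nu = np.array([0.6, 0.25, 0.15])
gamma = greedy_mec([mu, nu])
print(gamma)
print(coupling_entropy(gamma))  # compare with max(H(mu), H(nu)), a lower bound

Since any coupling gamma of marginals p_1, ..., p_m satisfies H(gamma) >= max_i H(p_i), the printed entropy can be checked directly against that lower bound.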
In this work, we are interested in structure learning for a set of spatially distributed dynamical systems, where individual subsystems are coupled via latent variables and observed through a filter. We represent this model as a directed acyclic graph (DAG) that characterises the unidirectional coupling between subsystems. Standard approaches to structure learning are not applicable in this framework due to the hidden variables; however, we can exploit the properties of certain dynamical systems to formulate exact methods based on state space reconstruction. We approach the problem by using reconstruction theorems to analytically derive a tractable expression for the KL-divergence of a candidate DAG from the observed dataset. We show that this measure can be decomposed as a function of two information-theoretic measures, transfer entropy and stochastic interaction. We then present two mathematically robust scoring functions based on transfer entropy and statistical independence tests. These results support the previously held conjecture that transfer entropy can be used to infer effective connectivity in complex networks.
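
As a small illustration of the transfer-entropy ingredient, here is a plug-in estimator with history length 1 (the paper's scoring functions, history lengths, and state-space reconstruction are considerably more involved):

import numpy as np
from collections import Counter

def transfer_entropy(x, y):
    """Plug-in transfer entropy T(X -> Y) with history length 1,
    i.e. the conditional mutual information I(Y_t ; X_{t-1} | Y_{t-1})."""
    yt, yp, xp = y[1:], y[:-1], x[:-1]
    n = len(yt)
    c_abc = Counter(zip(yt, yp, xp))   # counts of (y_t, y_{t-1}, x_{t-1})
    c_bc = Counter(zip(yp, xp))        # counts of (y_{t-1}, x_{t-1})
    c_ab = Counter(zip(yt, yp))        # counts of (y_t, y_{t-1})
    c_b = Counter(yp)                  # counts of y_{t-1}
    te = 0.0
    for (a, b, c), k in c_abc.items():
        te += (k / n) * np.log2(k * c_b[b] / (c_ab[(a, b)] * c_bc[(b, c)]))
    return float(te)

# x drives y with a one-step lag, so T(X->Y) ~ 1 bit and T(Y->X) ~ 0.
rng = np.random.default_rng(0)
x = rng.integers(0, 2, 5000)
y = np.roll(x, 1)
print(transfer_entropy(x, y), transfer_entropy(y, x))

With y lagging x by one step, the estimate is close to 1 bit in the driving direction and close to 0 in the reverse direction; this asymmetry is what makes transfer entropy usable as a directional coupling score.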
A central physical question is the extent to which infrared (IR) observations are sufficient to reconstruct a candidate ultraviolet (UV) completion. We recast this question as a problem of communication, with messages encoded in field configurations of the UV being transmitted to the IR degrees of freedom via a noisy channel specified by renormalization group (RG) flow, with noise generated by coarse graining / decimation. We present an explicit formulation of these considerations in terms of lattice field theory, where we show that the misanthropic entropy, the mutual information obtained from decimating neighbors, encodes the extent to which information is lost in marginalizing over / tracing out UV degrees of freedom. Our considerations apply to statistical field theories as well as to density matrix renormalization of quantum systems, where in the quantum case the statistical field theory analysis amounts to a leading-order approximation. As a concrete example, we focus on the 2D Ising model, where we show that the misanthropic entropy detects the onset of the phase transition at the Ising critical point.
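
In plain information-theoretic terms (our gloss on the abstract's wording, not the paper's notation), the misanthropic entropy of a decimation step is the mutual information between the decimated spins \sigma_d and the retained neighboring spins \sigma_r,

I(\sigma_d ; \sigma_r)
  = \sum_{\sigma_d, \sigma_r} p(\sigma_d, \sigma_r)
    \log \frac{p(\sigma_d, \sigma_r)}{p(\sigma_d)\, p(\sigma_r)},

which measures how much of the marginalized UV information remains recoverable from the retained IR configuration; per the abstract, this quantity detects the onset of the 2D Ising phase transition.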
Optical channels, such as fibers or free-space links, are ubiquitous in today's telecommunication networks. They rely on the electromagnetic field associated with photons to carry information from one point to another in space. As a result, a complete physical model of these channels must necessarily take quantum effects into account in order to determine their ultimate performance. Specifically, Gaussian photonic (or bosonic) quantum channels have been extensively studied over the past decades given their importance for practical purposes. In spite of this, a longstanding conjecture on the optimality of Gaussian encodings had until now prevented finding their communication capacity. Here, this conjecture is resolved by proving that the vacuum state achieves the minimum output entropy of a generic Gaussian bosonic channel. This establishes the ultimate achievable bit rate under an energy constraint, as well as the long-awaited proof that the single-letter classical capacity of these channels is additive. Beyond capacities, it also has broad consequences in quantum information science.
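
For orientation, the closed-form consequence is standard (stated here for the thermal-loss channel as a textbook corollary consistent with the abstract, not quoted from the paper): once the vacuum is known to minimize the output entropy, the energy-constrained classical capacity with mean input photon number \bar{N}, transmissivity \eta, and environment photon number N_B is

C = g\big(\eta \bar{N} + (1 - \eta) N_B\big) - g\big((1 - \eta) N_B\big),
\qquad
g(x) = (x + 1)\log_2(x + 1) - x \log_2 x,

and the pure-loss case N_B = 0 reduces to C = g(\eta \bar{N}).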
Individual and group decisions are complex, often involving choosing an apt alternative from a multitude of options. Evaluating pairwise comparisons breaks down such complex decision problems into tractable ones. Pairwise comparison matrices (PCMs) are regularly used to solve multiple-criteria decision-making (MCDM) problems, for example within Saaty's analytic hierarchy process (AHP) framework. However, there are two significant drawbacks of using PCMs. First, humans evaluate PCMs in an inconsistent manner. Second, not all entries of a large PCM can be reliably filled in by human decision makers. We address these two issues by first establishing a novel connection between PCMs and time-irreversible Markov processes. Specifically, we show that every PCM induces a family of dissipative maximum path entropy random walks (MERW) over the set of alternatives. We show that only consistent PCMs correspond to detailed balanced MERWs. We identify the non-equilibrium entropy production in the induced MERWs as a metric of inconsistency of the underlying PCMs. Notably, the entropy production satisfies all of the recently laid out criteria for reasonable consistency indices. We also propose an approach to using incompletely filled PCMs in AHP. Potential future avenues are discussed as well.
Keywords: analytic hierarchy process, Markov chains, maximum entropy
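
A sketch of the detailed-balance diagnostic in Python, under the assumption that the PCM's entries act as edge weights whose Perron eigenvector defines the induced maximal-entropy random walk (the paper's exact construction of the MERW family may differ):

import numpy as np

def merw_transition(A):
    """Maximal-entropy random walk on positive weights A:
    P_ij = A_ij * psi_j / (lam * psi_i), with (lam, psi) the
    Perron (largest) eigenpair of A."""
    evals, evecs = np.linalg.eig(A)
    k = int(np.argmax(evals.real))
    lam, psi = evals[k].real, np.abs(evecs[:, k].real)
    P = A * psi[None, :] / (lam * psi[:, None])
    return P / P.sum(axis=1, keepdims=True)  # absorb numerical drift

def entropy_production(P, n_iter=5000):
    """Schnakenberg entropy production rate (in nats) of the stationary
    chain; it is zero exactly when detailed balance holds."""
    pi = np.full(len(P), 1.0 / len(P))
    for _ in range(n_iter):               # power iteration for pi
        pi = pi @ P
    J = pi[:, None] * P                   # stationary probability fluxes
    m = (J > 0) & (J.T > 0)
    return 0.5 * float(np.sum((J - J.T)[m] * np.log(J[m] / J.T[m])))

# A consistent PCM (a_ij = w_i / w_j) vs. a perturbed, inconsistent one.
w = np.array([4.0, 2.0, 1.0])
A_consistent = w[:, None] / w[None, :]
A_inconsistent = A_consistent.copy()
A_inconsistent[0, 1] *= 3.0
A_inconsistent[1, 0] /= 3.0               # keep reciprocity a_ji = 1/a_ij
print(entropy_production(merw_transition(A_consistent)))    # ~0
print(entropy_production(merw_transition(A_inconsistent)))  # > 0

For a consistent PCM with entries a_ij = w_i / w_j the induced walk is uniform and the entropy production vanishes, while breaking consistency in a single reciprocal pair makes it strictly positive, matching the abstract's use of entropy production as an inconsistency metric.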
