
Value-Based Distance Between Information Structures

Added by: Jerome Renault
Publication date: 2021
Language: English





We define the distance between two information structures as the largest possible difference in value across all zero-sum games. We provide a tractable characterization of this distance and use it to discuss the relation between the value of information in games versus single-agent problems, the value of additional information, and informational substitutes, complements, or joint information. Convergence to a countable information structure under the value-based distance is equivalent to the weak convergence of belief hierarchies, implying, among other things, that for zero-sum games approximate knowledge is equivalent to common knowledge. At the same time, the space of information structures under the value-based distance is large: there exists a sequence of information structures in which players acquire increasingly more information, and an $\epsilon > 0$ such that any two elements of the sequence are at distance at least $\epsilon$. This result answers in the negative the second (and last unsolved) of the three problems posed by J.F. Mertens in his paper Repeated Games, ICM 1986.
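Concretely, the distance admits the following form; this is a sketch in notation of our choosing, assuming (as is standard in this literature) payoffs bounded by 1 and writing $val(u,g)$ for the value of the zero-sum game with payoff function $g$ played under information structure $u$:

$d(u,v) = \sup_{\|g\|_{\infty} \leq 1} \, |val(u,g) - val(v,g)|$

The supremum runs over all such zero-sum games, which is why the tractable characterization mentioned above is useful.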




Related research

We address the problem of how to optimally schedule data packets over an unreliable channel in order to minimize the estimation error of a simple-to-implement remote linear estimator using a constant Kalman gain to track the state of a Gauss-Markov process. The remote estimator receives time-stamped data packets which contain noisy observations of the process. Additionally, they also contain information about the quality of the sensor source, i.e., the variance of the observation noise that was used to generate the packet. In order to minimize the estimation error, the scheduler needs to use both pieces of information when prioritizing packet transmissions. It is shown that a simple index rule that calculates the value of information (VoI) of each packet, and then schedules the packet with the largest current VoI, is optimal. The VoI of a packet decreases with its age and increases with the precision of the source. Thus, we conclude that, for constant filter gains, a policy which minimizes the age of information does not necessarily maximize the estimator performance.
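A minimal sketch of such an index rule in Python; the paper's exact index is not reproduced here, so the geometric age-discounting in voi_index (with decay rate lam standing in for the filter's contraction factor) is an illustrative assumption:

    # Illustrative index-rule scheduler: transmit the packet whose (assumed)
    # VoI index is largest. The index grows with source precision
    # (1 / observation-noise variance) and decays geometrically with age.
    from dataclasses import dataclass

    @dataclass
    class Packet:
        age: int          # time steps since the observation was taken
        noise_var: float  # variance of the observation noise at the source

    def voi_index(p: Packet, lam: float = 0.9) -> float:
        """Assumed VoI index: source precision, discounted by packet age."""
        precision = 1.0 / p.noise_var
        return (lam ** p.age) * precision

    def schedule(queue):
        """Transmit the packet with the largest current VoI index."""
        return max(queue, key=voi_index)

    # Example: a fresh but noisy packet loses to an older, more precise one,
    # illustrating why minimizing age alone is not optimal here.
    queue = [Packet(age=0, noise_var=4.0), Packet(age=3, noise_var=0.5)]
    print(schedule(queue))  # Packet(age=3, noise_var=0.5)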
This paper develops computable metrics to assign priorities for information collection on network systems made up of binary components. Components are worth inspecting because their condition state is uncertain and the functioning of the system depends on it. The Value of Information (VoI) allows assessing the impact of information in decision making under uncertainty, including the precision of the observation, the available actions and the expected economic loss. Some VoI-based metrics for system-level and component-level maintenance actions, defined as global and local metrics, respectively, are introduced, analyzed and applied to series and parallel systems. The computational complexity of applying them to general networks is discussed and, to tame the complexity of the local metric assessment, a heuristic is presented and its performance is evaluated on several case studies.
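As a component-level illustration, the following sketch computes the VoI of a single imperfect inspection of one binary component; the costs, probabilities, and the repair-or-accept action set are hypothetical, chosen only to show the preposterior computation:

    # Illustrative VoI of inspecting one binary component: decide between
    # repairing (fixed cost) and doing nothing (loss if failed), before
    # versus after an imperfect inspection. VoI = drop in expected loss.

    def expected_loss(p_fail, c_repair, c_fail):
        """Best action under the current belief: repair or accept the risk."""
        return min(c_repair, p_fail * c_fail)

    def voi_binary(p_fail, c_repair, c_fail, tp, fp):
        """Preposterior VoI of a test with hit rate tp, false-alarm rate fp."""
        p_alarm = tp * p_fail + fp * (1 - p_fail)       # P(signal = "failed")
        post_alarm = tp * p_fail / p_alarm              # P(failed | alarm)
        post_quiet = (1 - tp) * p_fail / (1 - p_alarm)  # P(failed | no alarm)
        loss_with_info = (p_alarm * expected_loss(post_alarm, c_repair, c_fail)
                          + (1 - p_alarm) * expected_loss(post_quiet, c_repair, c_fail))
        return expected_loss(p_fail, c_repair, c_fail) - loss_with_info

    # Hypothetical numbers: the inspection is worth about 5.4 cost units.
    print(voi_binary(p_fail=0.2, c_repair=10.0, c_fail=100.0, tp=0.9, fp=0.1))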
While Kolmogorov complexity is the accepted absolute measure of information content in an individual finite object, a similarly absolute notion is needed for the information distance between two individual objects, for example, two pictures. We give several natural definitions of a universal information metric, based on the length of shortest programs for either ordinary computations or reversible (dissipationless) computations. It turns out that these definitions are equivalent up to an additive logarithmic term. We show that the information distance is a universal cognitive similarity distance. We investigate the maximal correlation of the shortest programs involved, the maximal uncorrelation of programs (a generalization of the Slepian-Wolf theorem of classical information theory), and the density properties of the discrete metric spaces induced by the information distances. A related distance measures the amount of nonreversibility of a computation. Using the physical theory of reversible computation, we give an appropriate (universal, anti-symmetric, and transitive) measure of the thermodynamic work required to transform one object into another by the most efficient process. Information distance between individual objects is needed in pattern recognition, where one wants to express effective notions of pattern similarity or cognitive similarity between individual objects, and in the thermodynamics of computation, where one wants to analyse the energy dissipation of a computation from a particular input to a particular output.
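The core definition is compact; as a sketch, writing $K(x|y)$ for the length of a shortest program that outputs $x$ when given $y$ as input, the max-distance variant reads

$E(x,y) = \max \{ K(x|y),\, K(y|x) \}$

and, per the equivalence result above, the reversible-computation definitions agree with it up to an additive logarithmic term.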
We develop a full theory for the new class of Optimal Entropy-Transport problems between nonnegative and finite Radon measures in general topological spaces. They arise quite naturally by relaxing the marginal constraints typical of Optimal Transport problems: given a couple of finite measures (with possibly different total mass), one looks for minimizers of the sum of a linear transport functional and two convex entropy functionals, that quantify in some way the deviation of the marginals of the transport plan from the assigned measures. As a powerful application of this theory, we study the particular case of Logarithmic Entropy-Transport problems and introduce the new Hellinger-Kantorovich distance between measures in metric spaces. The striking connection between these two seemingly far topics allows for a deep analysis of the geometric properties of the new geodesic distance, which lies somehow between the well-known Hellinger-Kakutani and Kantorovich-Wasserstein distances.
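Schematically, the relaxed problem has the following shape (a sketch: $c$ is the transport cost, $\gamma$ a nonnegative plan on the product space with marginals $\gamma_1, \gamma_2$, and $F_1, F_2$ the convex entropy functionals penalizing deviation from the given measures $\mu_1, \mu_2$):

$ET(\mu_1,\mu_2) = \inf_{\gamma} \left( \int c \, d\gamma + F_1(\gamma_1 \,|\, \mu_1) + F_2(\gamma_2 \,|\, \mu_2) \right)$

Taking $F_i$ to be the indicator of equality restores the exact marginal constraints and recovers classical Optimal Transport.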
Paul M.B. Vitanyi (2008)
The normalized information distance is a universal distance measure for objects of all kinds. It is based on Kolmogorov complexity and is thus uncomputable, but there are ways to utilize it. First, compression algorithms can be used to approximate the Kolmogorov complexity if the objects have a string representation. Second, for names and abstract concepts, page count statistics from the World Wide Web can be used. These practical realizations of the normalized information distance can then be applied to machine learning tasks, especially clustering, to perform feature-free and parameter-free data mining. This chapter discusses the theoretical foundations of the normalized information distance and both practical realizations. It presents numerous examples of successful real-world applications based on these distance measures, ranging from bioinformatics to music clustering to machine translation.
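A minimal sketch of the compression-based realization: zlib stands in for the compressor, and ncd below implements the standard normalized compression distance formula from this literature.

    # Normalized compression distance: approximate the (uncomputable)
    # Kolmogorov complexity K(x) by the compressed length C(x), then
    #   NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)).
    import zlib

    def clen(data: bytes) -> int:
        """Compressed length as a computable stand-in for K(data)."""
        return len(zlib.compress(data, 9))

    def ncd(x: bytes, y: bytes) -> float:
        cx, cy, cxy = clen(x), clen(y), clen(x + y)
        return (cxy - min(cx, cy)) / max(cx, cy)

    # Similar objects compress well together, so their NCD is small.
    print(ncd(b"the quick brown fox" * 20, b"the quick brown fox" * 19))  # near 0
    print(ncd(b"the quick brown fox" * 20, bytes(range(256)) * 4))        # noticeably larger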
