
A Theory of Updating Ambiguous Information

Added by: Rui Tang
Publication date: 2020
Fields: Economics
Language: English
Authors: Rui Tang





We introduce a new updating rule, the conditional maximum likelihood rule (CML), for updating ambiguous information. The CML formula replaces the likelihood term in Bayes' rule with the maximal likelihood of the given signal conditional on the state. We show that CML satisfies a new axiom, increased sensitivity after updating, while other updating rules do not. With CML, a decision maker's posterior is unaffected by the order in which independent signals arrive. CML also accommodates recent experimental findings on updating signals of unknown accuracy and yields simple predictions about learning with such signals. We show that an information designer can almost achieve her maximal payoff with a suitable ambiguous information structure whenever the agent updates according to CML.
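As a rough formalization of the rule described in the abstract (the notation here is ours, not the paper's): if the decision maker's prior over states is $\mu$ and the ambiguity of the signal is captured by a set $\Pi$ of conditional signal distributions, the CML posterior after observing a signal $s$ replaces each likelihood with its maximum over $\Pi$:

\[
\mu_{\mathrm{CML}}(\theta \mid s) \;=\; \frac{\max_{\pi \in \Pi} \pi(s \mid \theta)\,\mu(\theta)}{\sum_{\theta'} \max_{\pi \in \Pi} \pi(s \mid \theta')\,\mu(\theta')},
\]

which reduces to Bayes' rule when $\Pi$ is a singleton.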



Related research

Xiaoyu Cheng (2021)
This note shows that the value of ambiguous persuasion characterized in Beauchene, Li and Li (2019) can be obtained from a concavification program, as in Bayesian persuasion (Kamenica and Gentzkow, 2011). More specifically, it implies that an ambiguous persuasion game can be equivalently formalized as a Bayesian persuasion game with distorted utility functions. This result is obtained under a novel construction of ambiguous persuasion.
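For reference, the concavification program referred to here is the one from Kamenica and Gentzkow (2011): writing $\hat v(\mu)$ for the sender's expected payoff when the receiver best-responds to posterior $\mu$, the value of Bayesian persuasion at prior $\mu_0$ is

\[
V(\mu_0) \;=\; (\operatorname{cav}\hat v)(\mu_0) \;=\; \sup\big\{\, \mathbb{E}_{\tau}[\hat v(\mu)] \;:\; \tau \in \Delta(\Delta(\Theta)),\ \mathbb{E}_{\tau}[\mu] = \mu_0 \,\big\}.
\]

On the note's reading, the value of ambiguous persuasion takes the same form once the payoff function being concavified is suitably distorted.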
Xiaoyu Cheng (2021)
Cheng (2021) proposes and characterizes the Relative Maximum Likelihood (RML) updating rule for the case in which ambiguous beliefs are represented by a set of priors. Relatedly, this note proposes and characterizes the Extended RML updating rule for the case in which ambiguous beliefs are represented by a convex capacity. Two classical updating rules for convex capacities, the Dempster-Shafer rule (Shafer, 1976) and the Fagin-Halpern rule (Fagin and Halpern, 1990), are included as special cases of Extended RML.
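As a reminder of the two special cases (standard statements, not taken from the note itself): for a convex capacity $\nu$ and a conditioning event $E$, the Dempster-Shafer and Fagin-Halpern updates of an event $A$ are usually written as

\[
\nu_{DS}(A \mid E) \;=\; \frac{\nu(A \cup E^{c}) - \nu(E^{c})}{1 - \nu(E^{c})},
\qquad
\nu_{FH}(A \mid E) \;=\; \frac{\nu(A \cap E)}{\nu(A \cap E) + 1 - \nu(A \cup E^{c})},
\]

whenever the denominators are positive.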
We address the problem of how to optimally schedule data packets over an unreliable channel in order to minimize the estimation error of a simple-to-implement remote linear estimator that uses a constant Kalman gain to track the state of a Gauss-Markov process. The remote estimator receives time-stamped data packets which contain noisy observations of the process. Additionally, they also contain information about the quality of the sensor source, i.e., the variance of the observation noise that was used to generate the packet. In order to minimize the estimation error, the scheduler needs to use both the timestamp and the noise variance when prioritizing packet transmissions. It is shown that a simple index rule that calculates the value of information (VoI) of each packet, and then schedules the packet with the largest current VoI, is optimal. The VoI of a packet decreases with its age and increases with the precision of the source. Thus, we conclude that, for constant filter gains, a policy which minimizes the age of information does not necessarily maximize estimator performance.
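A minimal Python sketch of the index rule described above; the abstract does not give the closed-form VoI expression, so value_of_information below is a hypothetical placeholder that is decreasing in packet age and increasing in source precision, as the abstract describes.

# Hypothetical sketch of the index-rule scheduler; the VoI formula is a placeholder.

def value_of_information(age, noise_var, decay=0.9):
    # Placeholder index: source precision (1 / noise_var) discounted by the packet's age.
    return (decay ** age) / noise_var

def schedule_next(packets, now):
    # Index rule: transmit the packet with the largest current VoI.
    return max(packets, key=lambda p: value_of_information(now - p["timestamp"], p["noise_var"]))

packets = [
    {"id": "A", "timestamp": 9.0, "noise_var": 4.0},   # fresh but noisy observation
    {"id": "B", "timestamp": 5.0, "noise_var": 0.5},   # older but more precise observation
]
print(schedule_next(packets, now=10.0))   # prints the packet chosen by the index rule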
Sensor sources submit updates to a monitor through an unslotted, uncoordinated, unreliable multiple access collision channel: updates can collide, and even a collision-free transmission is received successfully at the monitor only with some transmission success probability. For an infinite-user model in which the sensors collectively transmit updates as a Poisson process and each update has an independent exponential transmission time, a stochastic hybrid system (SHS) approach is used to derive the average age of information (AoI) as a function of the offered load and the transmission success probability. The analysis is then extended to evaluate the individual age of a selected source. When the number of sources and the update transmission rate grow large in fixed proportion, the limiting asymptotic individual age is shown to provide an accurate approximation of the individual age for a small number of sources.
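A small Monte Carlo sketch, in Python, of the collision-channel model described above; it assumes updates are time-stamped with their transmission start times, uses illustrative parameter values, and estimates the average AoI by simulation rather than reproducing the SHS closed form.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not taken from the paper)
lam = 0.5      # aggregate Poisson rate of update transmissions
mu = 1.0       # rate of the exponential transmission times
p_s = 0.9      # success probability of a collision-free transmission
T = 200000.0   # simulation horizon

# Poisson transmission start times and exponential durations
n = rng.poisson(lam * T)
starts = np.sort(rng.uniform(0.0, T, n))
durations = rng.exponential(1.0 / mu, n)
ends = starts + durations

# A transmission is collision-free iff it overlaps no other transmission:
# no earlier transmission is still ongoing at its start, and the next one
# does not start before it ends.
prev_max_end = np.maximum.accumulate(np.concatenate(([0.0], ends[:-1])))
next_start = np.concatenate((starts[1:], [np.inf]))
collision_free = (prev_max_end <= starts) & (next_start >= ends)

# Collision-free transmissions are received with probability p_s
received = collision_free & (rng.random(n) < p_s)

# Average AoI at the monitor: age(t) = t minus the generation (start) time of
# the freshest received update; integrate the sawtooth age process over [0, T].
area, last_gen, last_t = 0.0, 0.0, 0.0
for g, r in zip(starts[received], ends[received]):
    area += ((r - last_gen) ** 2 - (last_t - last_gen) ** 2) / 2.0
    last_gen, last_t = g, r
area += ((T - last_gen) ** 2 - (last_t - last_gen) ** 2) / 2.0
print("simulated average AoI:", area / T)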
We consider the problem of a decision-maker searching for information on multiple alternatives when information is learned about all alternatives simultaneously. The decision-maker incurs a running cost of searching for information and has to decide when to stop searching and choose one alternative. The expected payoff of each alternative evolves as a diffusion process while information is being learned. We present necessary and sufficient conditions for the solution, establishing existence and uniqueness. We show that the optimal boundary at which search is stopped (the free boundary) is star-shaped, and we present an asymptotic characterization of the value function and the free boundary. We characterize how the distance between the free boundary and the diagonal varies with the number of alternatives, and how the free boundary under parallel search relates to the one under sequential search, with and without economies of scale in the search costs.
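Under one natural reading of this setup (the notation here is ours), the problem is a multidimensional optimal stopping problem: with $X(t) = (X_1(t), \dots, X_n(t))$ the vector of expected payoffs evolving as a diffusion and $c > 0$ the flow cost of search, the decision-maker solves

\[
V(x) \;=\; \sup_{\tau}\ \mathbb{E}_{x}\!\Big[\max_{1 \le i \le n} X_i(\tau) \;-\; c\,\tau\Big],
\]

and the free boundary discussed above is the boundary of the stopping region $\{x : V(x) = \max_i x_i\}$.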
