We consider the problem of a decision-maker searching for information on multiple alternatives when information is learned about all alternatives simultaneously. The decision-maker incurs a running cost while searching and must decide when to stop searching and choose one alternative. The expected payoff of each alternative evolves as a diffusion process while information is being learned. We present necessary and sufficient conditions for the solution, establishing existence and uniqueness. We show that the boundary at which search is optimally stopped (the free boundary) is star-shaped, and we give an asymptotic characterization of the value function and the free boundary. We characterize how the distance between the free boundary and the diagonal varies with the number of alternatives, and how the free boundary under parallel search relates to the one under sequential search, with and without economies of scale in search costs.
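A minimal sketch of the stopping problem described above, in notation I introduce here (the paper's exact formulation may differ): with posterior expected payoffs $X_t = (X_t^1, \dots, X_t^n)$ evolving as a diffusion while search continues and a running cost $c > 0$, the decision-maker solves

\[ V(x) \;=\; \sup_{\tau} \mathbb{E}_x\!\left[ \max_{1 \le i \le n} X_\tau^i \;-\; c\,\tau \right], \]

stopping at the first exit from the continuation region $\{x : V(x) > \max_i x^i\}$; the free boundary is the boundary of this region, and star-shapedness says, roughly, that each ray from a suitable reference point crosses it at most once.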
After observing the outcome of a Blackwell experiment, a Bayesian decision-maker can form (a) posterior beliefs over the state, as well as (b) posterior beliefs that she would observe any given signal (assuming an independent draw from the same experiment). I call the latter her contingent hypothetical beliefs. I show geometrically how contingent hypothetical beliefs relate to information structures. Specifically, the information structure can (generically) be derived by regressing contingent hypothetical beliefs on posterior beliefs over the state. Her prior is the unit eigenvector of a matrix determined from her posterior beliefs over the state and her contingent hypothetical beliefs. Thus, all aspects of a decision-maker's information acquisition problem can be determined using ex-post data (i.e., beliefs after signals have been received). I compare my results to similar ones obtained in settings where information is modeled deterministically; the focus on single-agent stochastic information distinguishes my work.
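To make the two central claims concrete, in notation of my own (not taken from the paper): if the posterior beliefs over the state after each signal are stacked as rows of a matrix $P$, and the contingent hypothetical beliefs as rows of a matrix $Q$, then the information structure is (generically) recovered from the regression coefficient of $Q$ on $P$, e.g. an OLS-type matrix

\[ B \;=\; (P^\top P)^{-1} P^\top Q, \]

while the prior $\pi$ solves a fixed-point (eigenvector) equation $M\pi = \pi$ for a matrix $M$ built from $P$ and $Q$, normalized so that the entries of $\pi$ sum to one.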
How can one guarantee that firms perform due diligence before launching potentially dangerous products? We study the design of liability rules when (i) limited liability prevents firms from internalizing the full damage they may cause, (ii) penalties are paid only if damage occurs, regardless of the product's inherent riskiness, and (iii) firms have private information about their products' riskiness before performing due diligence. We show that (i) any liability mechanism can be implemented by a tariff that depends only on the evidence acquired by the firm if damage occurs, not on any initial report by the firm about its private information, (ii) firms that assign a higher prior to product riskiness always perform more due diligence, but less than is socially optimal, and (iii) under a simple and intuitive condition, any type-specific launch thresholds can be implemented by a monotonic tariff.
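A stylized version of the firm's problem, with notation introduced purely for illustration: a firm whose prior that its product is risky is $\theta$ chooses due-diligence intensity $e$ at cost $c(e)$, observes evidence $x$, decides whether to launch for gross profit $\pi$, and, if damage occurs after launch, pays a tariff $t(x)$ that limited liability caps at the firm's assets $A$, which fall short of the harm $D$. Its objective is then roughly

\[ \max_{e \ge 0} \;\; \mathbb{E}\big[\, \mathbf{1}\{\text{launch}\} \big( \pi - \Pr(\text{damage} \mid \theta, x)\, t(x) \big) \,\big|\, \theta, e \,\big] \;-\; c(e), \]

and because $t(x) \le A < D$, the firm internalizes at most $A$ of the harm, which is the source of the under-provision of due diligence noted in result (ii).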
We study, from an interim perspective, the payoffs that can arise under some information structure. There is a set of types distributed according to a prior distribution, and a payoff function that assigns a value to each pair of a type and a belief over the types. Any information structure induces an interim payoff profile, which describes, for each type, the expected payoff under that information structure conditional on the type. We characterize the set of all interim payoff profiles consistent with some information structure and illustrate our results through applications.
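A compact statement of the object being characterized, in notation of my own choosing: with type space $T$, prior $\mu \in \Delta(T)$, and payoff function $u : T \times \Delta(T) \to \mathbb{R}$, an information structure induces a (random) posterior belief $\beta$ over types, and the interim payoff profile assigns to each type $t$

\[ U(t) \;=\; \mathbb{E}\big[\, u(t, \beta) \,\big|\, t \,\big]; \]

the characterization describes the set of profiles $(U(t))_{t \in T}$ attainable as the information structure varies, holding $\mu$ and $u$ fixed.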
One consequence of persistent technological change is that it forces individuals to make decisions under extreme uncertainty, so traditional decision-making frameworks cannot be applied. To address this issue, we introduce a variant of Case-Based Decision Theory in which the solution to a problem is obtained in terms of its distance to previous problems. We formalize this by defining a space based on an orthogonal basis of features of problems. We show how this framework evolves upon the acquisition of new information, namely new features, or new values of existing features, arising in new problems. We discuss how this can be useful for evaluating decisions based on data that does not yet exist.
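For orientation, the classical Case-Based Decision Theory criterion of Gilboa and Schmeidler, on which this variant builds, evaluates an act $a$ faced with problem $p$ by a similarity-weighted sum over the cases $(q, a, r)$ in memory $M$:

\[ U(a) \;=\; \sum_{(q, a, r) \in M} s(p, q)\, u(r), \]

where $s$ is a similarity function and $u$ a utility index; on my reading of the abstract, the variant proposed here takes $s$ to be induced by a distance in the feature space spanned by the orthogonal basis, and updates that space as new features (or new values of features) appear.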
We introduce a new rule for updating on ambiguous information, the conditional maximum likelihood (CML) rule. The CML formula replaces the likelihood term in Bayes' rule with the maximal likelihood of the given signal conditional on the state. We show that CML satisfies a new axiom, increased sensitivity after updating, while other updating rules do not. Under CML, a decision maker's posterior is unaffected by the order in which independent signals arrive. CML also accommodates recent experimental findings on updating from signals of unknown accuracy and yields simple predictions about learning from such signals. We show that an information designer can almost achieve her maximal payoff with a suitable ambiguous information structure whenever the agent updates according to CML.
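A minimal sketch of the rule, under my reading of the abstract: for a prior $p$ over states $\omega$ and an ambiguous signal structure in which the likelihood of signal $s$ in state $\omega$ is only known to lie in a set $L_\omega$, the CML posterior is

\[ p_{\mathrm{CML}}(\omega \mid s) \;\propto\; p(\omega) \cdot \max_{\ell \in L_\omega} \ell(s \mid \omega), \]

i.e., Bayes' rule with each state's likelihood replaced by the maximal likelihood of the observed signal conditional on that state.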