
Regret-Minimizing Bayesian Persuasion

 Added by Konstantin Zabarnyi
Publication date: 2021
Language: English





We study a Bayesian persuasion setting with binary actions (adopt and reject) for Receiver. We examine the following question: how well can Sender perform, in terms of persuading Receiver to adopt, when ignorant of Receiver's utility? We take a robust (adversarial) approach to this problem; that is, our goal is to design signaling schemes for Sender that perform well for all possible Receiver utilities. We measure the performance of a signaling scheme via the notion of (additive) regret: the difference between Sender's hypothetically optimal utility had she known Receiver's utility function and her actual utility induced by the given scheme. On the negative side, we show that if Sender has no knowledge at all about Receiver's utility, then Sender has no signaling scheme that performs robustly well. On the positive side, we show that if Sender only knows Receiver's ordinal preferences over the states of nature - i.e., Receiver's utility upon adoption is monotonic as a function of the state - then Sender can guarantee a surprisingly low regret even when the number of states tends to infinity. In fact, we exactly pin down the minimum regret value that Sender can guarantee in this case, which turns out to be at most 1/e. We further show that such positive results are not possible under the alternative performance measure of a multiplicative approximation ratio, by proving that no constant ratio can be guaranteed even for monotonic Receiver utility; this may serve to demonstrate the merits of regret as a robust performance measure that is not too pessimistic. Finally, we analyze an intermediate setting in between the no-knowledge and ordinal-knowledge settings.
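To make the regret measure above concrete, here is a minimal, self-contained Python sketch (illustrative only, not code from the paper). It assumes the normalization that is standard in this model: rejection gives Receiver utility 0, Receiver adopts after a signal exactly when her conditional expected adoption utility is nonnegative, and Sender's utility is the probability of adoption. The function names and the toy numbers are made up for illustration.

def sender_utility(mu, u, scheme):
    """Expected adoption probability under a two-signal scheme: scheme[w] is
    the probability of sending signal A (recommend adopt) in state w, and
    signal R is sent otherwise; Receiver best-responds to each signal."""
    total = 0.0
    for sig in (scheme, [1.0 - x for x in scheme]):
        mass = sum(m * x for m, x in zip(mu, sig))
        cond = sum(m * x * uw for m, x, uw in zip(mu, sig, u))
        if mass > 0 and cond >= 0:  # Receiver adopts after this signal
            total += mass
    return total

def optimal_utility(mu, u):
    """Best adoption probability for a Sender who knows u: include all states
    with nonnegative u, then pool in the least-harmful negative states
    (fractionally) while the conditional expectation stays nonnegative."""
    budget = sum(m * uw for m, uw in zip(mu, u) if uw >= 0)
    value = sum(m for m, uw in zip(mu, u) if uw >= 0)
    negatives = sorted(((m, uw) for m, uw in zip(mu, u) if uw < 0),
                       key=lambda p: -p[1])  # utilities closest to zero first
    for m, uw in negatives:
        take = min(m, budget / -uw)  # mass of this state that fits the budget
        value += take
        budget += take * uw
        if budget <= 0:
            break
    return value

mu = [0.3, 0.3, 0.4]      # prior over three states of nature
u = [1.0, -0.5, -2.0]     # Receiver's adoption utilities (rejection gives 0)
scheme = [1.0, 1.0, 0.0]  # a candidate scheme committed to without knowing u
regret = optimal_utility(mu, u) - sender_utility(mu, u, scheme)
print(round(regret, 3))   # 0.075 on this toy instance

On this toy instance the candidate scheme induces adoption with probability 0.6, while a Sender who knew u could have pooled states to reach 0.675, so its additive regret is 0.075.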




Read More

Bayesian persuasion studies how an informed sender should partially disclose information to influence the behavior of a self-interested receiver. Classical models make the stringent assumption that the sender knows the receiver's utility. This can be relaxed by considering an online learning framework in which the sender repeatedly faces a receiver of an unknown, adversarially selected type. We study, for the first time, an online Bayesian persuasion setting with multiple receivers. We focus on the case with no externalities and binary actions, as is customary in offline models. Our goal is to design no-regret algorithms for the sender with polynomial per-iteration running time. First, we prove a negative result: for any $0 < \alpha \leq 1$, there is no polynomial-time no-$\alpha$-regret algorithm when the sender's utility function is supermodular or anonymous. Then, we focus on the case of submodular sender's utility functions and show that, in this case, it is possible to design a polynomial-time no-$(1 - \frac{1}{e})$-regret algorithm. To do so, we introduce a general online gradient descent scheme to handle online learning problems with a finite number of possible loss functions. This requires the existence of an approximate projection oracle. We show that, in our setting, there exists such a projection oracle which can be implemented in polynomial time.
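As background for the algorithmic approach described in the abstract above, here is a generic projected online gradient descent loop in Python (the standard template, not the paper's specific algorithm or projection oracle); the simplex decision set, the exact Euclidean projection, and the linear losses are illustrative assumptions.

import numpy as np

def project_simplex(v):
    # Euclidean projection of v onto the probability simplex {x >= 0, sum(x) = 1}.
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css - 1.0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1)
    return np.maximum(v - theta, 0.0)

def projected_ogd(grad_fns, dim, eta=0.1):
    # Play x_t, observe the gradient of the round-t loss at x_t, step, project.
    x = np.full(dim, 1.0 / dim)
    plays = []
    for grad in grad_fns:            # one adversarially chosen loss per round
        plays.append(x.copy())
        x = project_simplex(x - eta * grad(x))
    return plays

# Toy run with linear losses <c_t, x> and adversarial cost vectors c_t.
rng = np.random.default_rng(0)
costs = [rng.uniform(0.0, 1.0, 4) for _ in range(200)]
plays = projected_ogd([lambda x, c=c: c for c in costs], dim=4)
avg_regret = (np.mean([c @ x for c, x in zip(costs, plays)])
              - np.min(np.mean(costs, axis=0)))
print(f"average regret vs. best fixed action: {avg_regret:.3f}")

In the paper's setting the projection oracle is only approximate and problem-specific; the exact simplex projection above merely stands in for it.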
We address Bayesian persuasion between a sender and a receiver with state-dependent quadratic cost measures for general classes of distributions. The receiver seeks to make a mean-square-error estimate of a state based on a signal sent by the sender, while the sender signals strategically in order to control the receiver's estimate in a certain way. Such a scheme could model, e.g., deception and privacy problems in multi-agent systems. Existing solution concepts are not viable since here the receiver has a continuous action space. We show that for finite state spaces, optimal signaling strategies can be computed through an equivalent linear optimization problem over the cone of completely positive matrices. We then establish its strong duality to a copositive program. To exemplify the effectiveness of this equivalence result, we adopt a sequential polyhedral approximation of completely positive cones and analyze its performance numerically. We also quantify the approximation error for a quantized version of a continuous distribution and show that a semi-definite program relaxation of the equivalent problem can serve as a benchmark lower bound on the sender's cost for large state spaces.
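For context, a standard fact about quadratic loss (recalled here only to make the abstract concrete, not a claim from the paper): given a signal realization $s$, the receiver's cost-minimizing estimate of the state $x$ is the posterior mean, $\hat{x}(s) = \arg\min_a \mathbb{E}[(x - a)^2 \mid s] = \mathbb{E}[x \mid s]$; this conditional-mean best response is why the receiver's action space is continuous and why standard finite-action solution concepts do not apply directly.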
Persuasion studies how an informed principal may influence the behavior of agents by the strategic provision of payoff-relevant information. We focus on the fundamental multi-receiver model by Arieli and Babichenko (2019), in which there are no inter-agent externalities. Unlike prior works on this problem, we study the public persuasion problem in the general setting with: (i) arbitrary state spaces; (ii) arbitrary action spaces; (iii) arbitrary sender's utility functions. We fully characterize the computational complexity of computing a bi-criteria approximation of an optimal public signaling scheme. In particular, we show, in a voting setting of independent interest, that solving this problem requires at least a quasi-polynomial number of steps even in settings with a binary action space, assuming the Exponential Time Hypothesis. In doing so, we prove that a relaxed version of the Maximum Feasible Subsystem of Linear Inequalities problem requires at least quasi-polynomial time to be solved. Finally, we close the gap by providing a quasi-polynomial-time bi-criteria approximation algorithm for arbitrary public persuasion problems that, in specific settings, yields a QPTAS.
Bayesian persuasion is the study of information-sharing policies among strategic agents. A prime example is signaling in online ad auctions: what information should a platform signal to an advertiser regarding a user when selling the opportunity to advertise to her? Practical considerations such as preventing discrimination, protecting privacy or acknowledging the limited attention of the information receiver impose constraints on information sharing. In this work, we propose and analyze a simple way to mathematically model such constraints as restrictions on Receiver's admissible posterior beliefs. We consider two families of constraints - ex ante and ex post, where the latter limits each instance of Sender-Receiver communication, while the former, more general family can also pose restrictions in expectation. For the ex ante family, Doval and Skreta establish the existence of an optimal signaling scheme with a small number of signals - at most the number of constraints plus the number of states of nature; we show this result is tight and provide an alternative proof for it. For the ex post family, we tighten a bound of Vølund, showing that the required number of signals is at most the number of states of nature, as in the original Kamenica-Gentzkow setting. As our main algorithmic result, we provide an additive bi-criteria FPTAS for an optimal constrained signaling scheme assuming a constant number of states; we improve the approximation to single-criteria under a Slater-like regularity condition. The FPTAS holds under standard assumptions; relaxed assumptions yield a PTAS. Finally, we bound the ratio between Sender's optimal utility under convex ex ante constraints and the corresponding ex post constraints. This bound applies to finding an approximately welfare-maximizing constrained signaling scheme in ad auctions.
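As standard Kamenica-Gentzkow background for the constraint families above (not specific to this paper): a signaling scheme can equivalently be described as a distribution over posterior beliefs, inducing posteriors $\mu_s$ with probabilities $\Pr[s]$ subject to Bayes plausibility, $\sum_s \Pr[s]\,\mu_s = \mu_0$ where $\mu_0$ is the prior; ex post constraints restrict which individual posteriors $\mu_s$ are admissible, while ex ante constraints need only hold in expectation over the signal realizations.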
We study an information-structure design problem (a.k.a. persuasion) with a single sender and multiple receivers with actions of a priori unknown types, independently drawn from action-specific marginal distributions. As in the standard Bayesian persuasion model, the sender has access to additional information regarding the action types, which she can exploit when committing to a (noisy) signaling scheme through which she sends a private signal to each receiver. The novelty of our model is in considering the case where the receivers interact in a sequential game with imperfect information, with utilities depending on the game outcome and the realized action types. After formalizing the notions of ex ante and ex interim persuasiveness (which differ in the time at which the receivers commit to following the sender's signaling scheme), we investigate the continuous optimization problem of computing a signaling scheme which maximizes the sender's expected revenue. We show that computing an optimal ex ante persuasive signaling scheme is NP-hard when there are three or more receivers. In contrast with previous hardness results for ex interim persuasion, we show that, for games with two receivers, an optimal ex ante persuasive signaling scheme can be computed in polynomial time thanks to a novel algorithm based on the ellipsoid method, which we propose.
