
BONUS! Maximizing Surprise

Added by Zhihuan Huang
Publication date: 2021
Research language: English





Multi-round competitions often double or triple the points awarded in the final round, calling it a bonus, to maximize spectators' excitement. In a two-player competition with $n$ rounds, we aim to derive the optimal bonus size that maximizes the audience's overall expected surprise (as defined in [7]). We model the audience's prior belief over the two players' ability levels as a beta distribution. Using a novel analysis that clarifies and simplifies the computation, we find that the optimal bonus depends greatly on the prior belief, and we obtain solutions of various forms both for a finite number of rounds and in the asymptotic case. In an interesting special case, we show that the optimal bonus is asymptotically approximately equal to the expected lead, the number of points the weaker player is expected to need in order to catch up. Moreover, we observe that priors with higher skewness lead to a larger optimal bonus, and, in the symmetric case, priors with higher uncertainty also lead to a larger optimal bonus. This matches our intuition: a highly asymmetric prior leads to a large expected lead, and a highly uncertain symmetric prior often leads to a lopsided game, which again benefits from a larger bonus.
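
To make the setting concrete, here is a minimal Monte Carlo sketch (Python) of a simplified version of this model: player A beats player B in each round with a fixed probability drawn from a Beta prior, ordinary rounds are worth one point while the final round is worth a bonus number of points, and per-round surprise is approximated by the change in the audience's estimated match-win probability. The function names, the point structure, and this surprise proxy are illustrative assumptions, not the exact definitions used in the paper or in [7].

```python
import numpy as np

rng = np.random.default_rng(0)

def win_prob(p_hat, lead, ordinary_left, final_left, bonus):
    """Audience's probability that player A wins the match, given a point
    estimate p_hat of A's per-round win probability, A's current point lead,
    the number of ordinary (1-point) rounds still to be played, and whether
    the bonus-valued final round remains. Monte Carlo estimate; plugging in
    the posterior mean for p is itself a simplification."""
    n = 4000
    wins = rng.binomial(ordinary_left, p_hat, size=n)
    diff = lead + 2 * wins - ordinary_left
    if final_left:
        diff = diff + np.where(rng.random(n) < p_hat, bonus, -bonus)
    return np.mean(diff > 0) + 0.5 * np.mean(diff == 0)

def expected_surprise(n_rounds, bonus, alpha, beta, n_matches=300):
    """Average total surprise over simulated matches, where per-round surprise
    is taken to be the absolute change in the audience's estimated probability
    that A wins the match (a convenient proxy, not the exact definition in [7])."""
    total = 0.0
    for _ in range(n_matches):
        p = rng.beta(alpha, beta)        # A's true ability, drawn from the prior
        a_post, b_post = alpha, beta     # audience's Beta posterior over p
        lead = 0
        belief = win_prob(alpha / (alpha + beta), 0, n_rounds - 1, True, bonus)
        for r in range(n_rounds):
            a_wins = rng.random() < p
            pts = bonus if r == n_rounds - 1 else 1
            lead += pts if a_wins else -pts
            a_post += a_wins
            b_post += not a_wins
            if r == n_rounds - 1:        # match over: belief collapses to the result
                new_belief = 1.0 if lead > 0 else (0.0 if lead < 0 else 0.5)
            else:
                p_hat = a_post / (a_post + b_post)
                new_belief = win_prob(p_hat, lead, n_rounds - 2 - r, True, bonus)
            total += abs(new_belief - belief)
            belief = new_belief
    return total / n_matches

# Sweep bonus sizes for a 5-round match under a skewed Beta(2, 1) prior.
for b in range(1, 6):
    print("bonus", b, "expected surprise ~", round(expected_surprise(5, b, 2.0, 1.0), 3))
```

Sweeping the bonus in this way gives a rough numerical feel for how the surprise-maximizing bonus moves with the skewness and uncertainty of the prior.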



Related research

Elections involving a very large voter population often lead to outcomes that surprise many. This is particularly important for elections whose results affect the economy of a sizable population. A better prediction of the true outcome helps reduce the surprise and keeps the voters prepared. This paper starts from the basic observation that individuals in the underlying population build estimates of the distribution of preferences of the whole population based on their local neighborhoods. The outcome of the election leads to a surprise if these local estimates contradict the outcome of the election for some fixed voting rule. To gain a quantitative understanding, we propose a simple mathematical model of the setting in which the individuals in the population and their connections (through geographical proximity, social networks, etc.) are described by a random graph with connection probabilities that are biased based on the preferences of the individuals. Each individual also has some estimate of the bias in their connections. We show that the election outcome leads to a surprise if the discrepancy between the estimated bias and the true bias in the local connections exceeds a certain threshold, and confirm the phenomenon that surprising outcomes are associated only with closely contested elections. We compare standard voting rules based on their performance on surprise and show that they behave differently for different parts of the population. This also hints at the impossibility of a single voting rule being less surprising for all parts of a population. Finally, we experiment with the UK-EU referendum (a.k.a. Brexit) dataset, which supports some of our theoretical predictions.
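
The sketch below is a toy instantiation of this kind of model (the parameters, the homophilous edge probabilities, and the bias-correction rule are illustrative assumptions, not the paper's exact construction): voters are connected by a random graph whose edge probability is higher between like-minded voters, each voter corrects their neighborhood counts using an assumed bias, and the fraction of voters whose corrected local majority contradicts the plurality outcome serves as a crude measure of surprise.

```python
import numpy as np

rng = np.random.default_rng(1)

def fraction_surprised(n, frac_a, p_same, p_diff, assumed_bias):
    """n voters, a fraction frac_a preferring alternative A; edge probability
    p_same between like-minded voters and p_diff otherwise. Each voter
    reweights opposite-preference neighbors by assumed_bias (their guess of
    p_same / p_diff) and is 'surprised' if the corrected local majority
    disagrees with the true plurality outcome."""
    prefs = rng.random(n) < frac_a                  # True = prefers A
    same = prefs[:, None] == prefs[None, :]
    adj = rng.random((n, n)) < np.where(same, p_same, p_diff)
    np.fill_diagonal(adj, False)
    adj = np.triu(adj, 1)
    adj = adj | adj.T                               # undirected graph

    outcome_a = prefs.mean() > 0.5                  # plurality outcome
    surprised = 0
    for i in range(n):
        nbrs = adj[i]
        if not nbrs.any():
            continue
        a_nbrs = (nbrs & prefs).sum()
        b_nbrs = nbrs.sum() - a_nbrs
        # Up-weight the opposite-preference side by the assumed bias before
        # forming the local estimate of the population split.
        if prefs[i]:
            est_a, est_b = a_nbrs, b_nbrs * assumed_bias
        else:
            est_a, est_b = a_nbrs * assumed_bias, b_nbrs
        if (est_a > est_b) != outcome_a:
            surprised += 1
    return surprised / n

# A closely contested electorate whose voters under-estimate their homophily
# (true bias p_same / p_diff = 4, assumed bias = 2).
print(fraction_surprised(n=2000, frac_a=0.52, p_same=0.02, p_diff=0.005,
                         assumed_bias=2.0))
```

When the assumed bias matches the true ratio, the surprised fraction drops sharply, which mirrors the threshold phenomenon described in the abstract.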
In markets such as digital advertising auctions, bidders want to maximize value rather than payoff. This differs from the utility functions typically assumed in auction theory and leads to different strategies and outcomes. We refer to bidders who maximize value as value bidders. While simple single-object auction formats are truthful, standard multi-object auction formats allow for manipulation. It is straightforward to show that there cannot be a truthful and revenue-maximizing deterministic auction mechanism with value bidders and general valuations. Approximation has been used as a means to achieve truthfulness, and we study which approximation ratios can be attained by truthful approximation mechanisms. We show that the approximation ratio achievable by a deterministic and truthful approximation mechanism with $n$ bidders and $m$ items cannot be higher than $1/n$ for general valuations. For randomized approximation mechanisms there is a framework with a ratio of $O(\sqrt{m})$. We provide better ratios for environments with restricted valuations.
We study the optimization problem of selecting numerical quantities to clean in order to fact-check claims based on such data. Oftentimes, such claims are technically correct, but they can still mislead, for two reasons. First, data may contain uncertainty and errors. Second, data can be fished to advance particular positions. In practice, fact-checkers cannot afford to clean all data and must choose to clean what matters most to checking a claim. We explore alternative definitions of what matters most: one is to ascertain claim qualities (by minimizing uncertainty in these measures), while an alternative is simply to counter the claim (by maximizing the probability of finding a counterargument). We show whether the two objectives align with each other, with important implications for when fact-checkers should exercise care in selective data cleaning to avoid potential bias introduced by their desire to counter claims. We develop efficient algorithms for solving the various variants of the optimization problem, showing significant improvements over naive solutions. The problem is particularly challenging because the objectives in the fact-checking context are complex, non-linear functions over the data. We obtain results that generalize to a large class of functions, with potential applications beyond fact-checking.
Motivated by applications such as college admission and insurance rate determination, we propose an evaluation problem where the inputs are controlled by strategic individuals who can modify their features at a cost. A learner can only partially observe the features, and aims to classify individuals with respect to a quality score. The goal is to design an evaluation mechanism that maximizes the overall quality score, i.e., welfare, in the population, taking any strategic updating into account. We further study the algorithmic aspect of finding the welfare maximizing evaluation mechanism under two specific settings in our model. When scores are linear and mechanisms use linear scoring rules on the observable features, we show that the optimal evaluation mechanism is an appropriate projection of the quality score. When mechanisms must use linear thresholds, we design a polynomial time algorithm with a (1/4)-approximation guarantee when the underlying feature distribution is sufficiently smooth and admits an oracle for finding dense regions. We extend our results to settings where the prior distribution is unknown and must be learned from samples.
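
As a rough illustration of the linear setting only (not the paper's construction), the sketch below projects a true linear quality score onto observable features by least squares; the paper's welfare-maximizing projection additionally accounts for strategic feature updates at a cost, which this toy version ignores, and all variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

# Quality is w_true . x over d features, but the mechanism can only score the
# k observable features z = M x. Least squares gives the linear rule over
# observables closest to the true score in squared error on this population.
d, k, n = 6, 3, 5000
w_true = rng.normal(size=d)                 # true linear quality score
M = rng.normal(size=(k, d))                 # observation map: z = M x
X = rng.normal(size=(n, d))                 # sampled population of feature vectors
Z = X @ M.T

# Find v minimizing ||X w_true - Z v||^2.
v, *_ = np.linalg.lstsq(Z, X @ w_true, rcond=None)
print("scoring rule on observables:", np.round(v, 3))
print("correlation with true quality:",
      round(np.corrcoef(Z @ v, X @ w_true)[0, 1], 3))
```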
Zheng Wen, Eric Bax, James Li (2015)
In quasi-proportional auctions, each bidder receives a fraction of the allocation equal to the weight of their bid divided by the sum of the weights of all bids, where each bid's weight is determined by a weight function. We study the relationship between the weight function, bidders' private values, the number of bidders, and the seller's revenue in equilibrium. It has been shown that if one bidder has a much higher private value than the others, then a nearly flat weight function maximizes revenue. Essentially, threatening the bidder who has the highest valuation with having to share the allocation maximizes the revenue. We show that as bidder private values approach parity, steeper weight functions maximize revenue by making the quasi-proportional auction more like a winner-take-all auction. We also show that steeper weight functions maximize revenue as the number of bidders increases. For flatter weight functions, there is known to be a unique pure-strategy Nash equilibrium. We show that a pure-strategy Nash equilibrium also exists for steeper weight functions, and we give lower bounds for bids at equilibrium. For a special case that includes the two-bidder auction, we show that the pure-strategy Nash equilibrium is unique, and we show how to compute the revenue at equilibrium. We also show that selecting a weight function based on private-value ratios and the number of bidders is necessary for a quasi-proportional auction to produce more revenue than a second-price auction.
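
The allocation rule is easy to state in code. The sketch below uses the weight family w(b) = b^p with pay-your-bid payments and a numerical best-response loop (the payment rule and the best-response routine are assumptions for illustration, not the paper's analysis) to show how a steeper exponent p shifts bids and revenue when private values are nearly equal.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def payoffs(bids, values, p):
    """Quasi-proportional allocation with weight function w(b) = b**p: each
    bidder gets share w(b_i) / sum_j w(b_j) and, in this sketch, pays their bid."""
    weights = np.power(bids, p)
    shares = weights / weights.sum()
    return shares * values - bids

def best_response_dynamics(values, p, iters=200):
    """Iterate numerical best responses. For flatter (concave enough) weight
    functions this converges to the pure-strategy Nash equilibrium; for steeper
    weights it is only a heuristic check."""
    bids = np.full(len(values), 0.1)
    for _ in range(iters):
        for i, v in enumerate(values):
            others = np.power(np.delete(bids, i), p).sum()
            def neg_utility(b):
                share = b**p / (b**p + others)
                return -(share * v - b)
            res = minimize_scalar(neg_utility, bounds=(1e-6, v), method="bounded")
            bids[i] = res.x
    return bids

values = np.array([10.0, 9.0, 8.0])            # nearly equal private values
for p in [0.5, 1.0, 2.0]:                      # flatter vs. steeper weight functions
    eq_bids = best_response_dynamics(values, p)
    print(f"p={p}: bids={np.round(eq_bids, 2)}, revenue={eq_bids.sum():.2f}, "
          f"payoffs={np.round(payoffs(eq_bids, values, p), 2)}")
```

With nearly equal values, larger p concentrates the allocation on the highest bid and pushes total bids (revenue) up, in line with the abstract's winner-take-all intuition.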
