This paper introduces an optimization problem for proper scoring rule design. Consider a principal who wants to collect an agent's prediction about an unknown state. The agent can either report his prior prediction or access a costly signal and report the posterior prediction. Given a collection of candidate distributions that contains the distribution of the agent's posterior prediction, the principal's objective is to design a bounded scoring rule that maximizes the agent's worst-case payoff increment between reporting his posterior prediction and reporting his prior prediction. We study this optimization in two settings: a static setting and an asymptotic setting. In the static setting, where the agent can access a single signal, we propose an efficient algorithm that computes an optimal scoring rule when the collection of distributions is finite. In the asymptotic setting, the agent can adaptively and indefinitely refine his prediction. We first consider a sequence of collections of posterior distributions with vanishing covariance, which emulates general estimators with large samples, and show the optimality of the quadratic scoring rule. Then, when the agent's posterior distribution is a Beta-Bernoulli process, we find that the log scoring rule is optimal. We also prove the optimality of the log scoring rule over a smaller set of functions for categorical distributions with Dirichlet priors.
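For concreteness, the following minimal sketch (our own illustration; the function names and the prior/posterior values are assumptions, not the paper's) computes the payoff increment for a binary state under the two rules the abstract analyzes, the quadratic and log scoring rules.

```python
import math

# Quadratic (Brier-style) score for reporting probability q of a binary
# state, evaluated on the realized outcome x in {0, 1}.
def quadratic_score(q: float, x: int) -> float:
    return 1.0 - (x - q) ** 2

# Log score for the same setting (unbounded near 0 and 1, which is why the
# paper's boundedness constraint matters; here we only illustrate the gap).
def log_score(q: float, x: int) -> float:
    return math.log(q) if x == 1 else math.log(1.0 - q)

def expected_score(score, report: float, true_p: float) -> float:
    """Expected score of reporting `report` when the state is 1 w.p. `true_p`."""
    return true_p * score(report, 1) + (1 - true_p) * score(report, 0)

def payoff_increment(score, prior: float, posterior: float) -> float:
    """Gain, under the posterior, from reporting the posterior vs. the prior.
    Properness guarantees this quantity is nonnegative."""
    return (expected_score(score, posterior, posterior)
            - expected_score(score, prior, posterior))

prior, posterior = 0.5, 0.8  # hypothetical beliefs before/after the costly signal
print(payoff_increment(quadratic_score, prior, posterior))  # ~0.09
print(payoff_increment(log_score, prior, posterior))        # ~0.193
```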
This paper forges a strong connection between two seemingly unrelated forecasting problems: incentive-compatible forecast elicitation and forecast aggregation. Proper scoring rules are the well-known solution to the former problem. To each such rule s we associate a corresponding method of aggregation, mapping expert forecasts and expert weights to a consensus forecast, which we call *quasi-arithmetic (QA) pooling* with respect to s. We justify this correspondence in several ways:
- QA pooling with respect to the two most well-studied scoring rules (quadratic and logarithmic) corresponds to the two most well-studied forecast aggregation methods (linear and logarithmic).
- Given a scoring rule s used for payment, a forecaster agent who sub-contracts several experts, paying them in proportion to their weights, is best off aggregating the experts' reports using QA pooling with respect to s, meaning this strategy maximizes its worst-case profit (over the possible outcomes).
- The score of an aggregator who uses QA pooling is concave in the experts' weights. As a consequence, online gradient descent can be used to learn appropriate expert weights from repeated experiments with low regret.
- The class of all QA pooling methods is characterized by a natural set of axioms (generalizing classical work by Kolmogorov on quasi-arithmetic means).
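A minimal sketch of the first bullet point for binary forecasts (our own illustration under the standard construction, not code from the paper): QA pooling with respect to a rule pushes each forecast through the derivative of the rule's expected-score function, averages with the experts' weights, and inverts. An affine link yields linear pooling (the quadratic rule); the logit link yields logarithmic pooling (the log rule).

```python
import math

def qa_pool(probs, weights, link, link_inv):
    """Quasi-arithmetic pooling of binary forecasts with respect to a proper
    scoring rule: apply the rule's link function (the derivative of its
    expected-score function) to each forecast, average with the given
    weights (assumed nonnegative, summing to 1), and invert."""
    return link_inv(sum(w * link(p) for p, w in zip(probs, weights)))

# Quadratic rule: the link 2p - 1 is affine, so QA pooling is linear pooling.
def linear_pool(probs, weights):
    return qa_pool(probs, weights, lambda p: 2 * p - 1, lambda y: (y + 1) / 2)

# Log rule: the link is the logit, so QA pooling is logarithmic pooling.
def log_pool(probs, weights):
    logit = lambda p: math.log(p / (1 - p))
    expit = lambda y: 1 / (1 + math.exp(-y))
    return qa_pool(probs, weights, logit, expit)

forecasts, weights = [0.9, 0.6], [0.5, 0.5]
print(linear_pool(forecasts, weights))  # 0.75: the weighted mean
print(log_pool(forecasts, weights))     # ~0.786: the weighted mean of log-odds
```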
We consider the algorithmic question of choosing a subset of candidates of a given size $k$ from a set of $m$ candidates, with knowledge of voters' ordinal rankings over all candidates. We consider the well-known and classic scoring rule for achieving diverse representation: the Chamberlin-Courant (CC) or $1$-Borda rule, where the score of a committee is the average, over the voters, of the rank of the best candidate in the committee for that voter; and its generalization to the average of the top $s$ best candidates, called the $s$-Borda rule. Our first result is an improved analysis of the natural and well-studied greedy heuristic. We show that greedy achieves a $\left(1 - \frac{2}{k+1}\right)$-approximation to the maximization (or satisfaction) version of the CC rule, and a $\left(1 - \frac{2s}{k+1}\right)$-approximation to the $s$-Borda score. Our result improves on the best known approximation algorithm for this problem. We show that these bounds are almost tight. For the dissatisfaction (or minimization) version of the problem, we show that the score of $\frac{m+1}{k+1}$ can be viewed as an optimal benchmark for the CC rule, as it is essentially the best achievable score of any polynomial-time algorithm even when the optimal score is a polynomial factor smaller (under standard computational complexity assumptions). We show that another well-studied algorithm for this problem, called the Banzhaf rule, attains this benchmark. We finally show that for the $s$-Borda rule, when the optimal value is small, these algorithms can be improved by a factor of $\tilde{\Omega}(\sqrt{s})$ via LP rounding. Our upper and lower bounds are a significant improvement over previous results, and taken together, not only enable us to perform a finer comparison of greedy algorithms for these problems, but also provide analytic justification for using such algorithms in practice.
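To make the greedy heuristic concrete, here is a minimal sketch of greedy committee selection for the satisfaction version of the CC rule (our own illustration; the input format and Borda-style satisfaction scores are assumptions): each voter's satisfaction is the score of her favorite committee member, and greedy repeatedly adds the candidate with the largest total marginal gain.

```python
def greedy_cc(rankings, k):
    """Greedy committee selection for the satisfaction version of the
    Chamberlin-Courant rule.

    rankings: list of per-voter rankings; rankings[v][i] is voter v's
              i-th most preferred candidate (candidates are 0..m-1).
    Returns a committee of k candidates.
    """
    m = len(rankings[0])
    # Borda satisfaction: voter v gives the candidate at position `pos`
    # of her ranking a score of m - 1 - pos.
    sat = [{c: m - 1 - pos for pos, c in enumerate(r)} for r in rankings]

    committee = set()
    best = [0] * len(rankings)  # best[v]: voter v's current CC satisfaction
    for _ in range(k):
        # Add the candidate with the largest total marginal gain.
        c_star = max(
            (c for c in range(m) if c not in committee),
            key=lambda c: sum(max(sat[v][c] - best[v], 0)
                              for v in range(len(rankings))),
        )
        committee.add(c_star)
        best = [max(best[v], sat[v][c_star]) for v in range(len(rankings))]
    return committee

# Three voters ranking four candidates (most preferred first):
rankings = [[0, 1, 2, 3], [1, 2, 3, 0], [2, 3, 0, 1]]
print(greedy_cc(rankings, k=2))  # {0, 2} on this toy profile
```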
This letter considers the design of an auction mechanism for selling a seller's object when the buyers quantize their private value estimates of the object before communicating them to the seller. The designed auction mechanism maximizes the utility of the seller (i.e., the auction is optimal), prevents buyers from communicating falsified quantized bids (i.e., the auction is incentive-compatible), and ensures that buyers will participate in the auction (i.e., the auction is individually rational). The letter also investigates the design of the optimal quantization thresholds with which buyers quantize their private value estimates. Numerical results provide insights into the influence of the quantization thresholds on the auction mechanism.
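A small sketch of the quantized-bid setup (our own illustration; the thresholds, values, and allocation rule are assumptions, not the letter's model): each buyer reports only the index of the quantization cell containing his private value, so the seller's mechanism can condition only on these indices.

```python
import bisect

def quantize(value: float, thresholds: list[float]) -> int:
    """Map a private value to the index of its quantization cell.
    `thresholds` holds the sorted interior cell boundaries."""
    return bisect.bisect_right(thresholds, value)

# Hypothetical thresholds splitting the value space [0, 1] into four cells.
thresholds = [0.25, 0.5, 0.75]

# The seller never sees the values, only the cell indices (the quantized bids).
values = [0.62, 0.81]
bids = [quantize(v, thresholds) for v in values]
print(bids)  # [2, 3]

# The coarsest allocation rule the seller can run: award to the highest cell
# (ties broken by buyer index here); payments would be derived from the
# incentive-compatibility constraints over cells, which we omit.
winner = max(range(len(bids)), key=lambda i: bids[i])
print("winner:", winner)  # winner: 1
```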
We consider revenue-optimal mechanism design in the interdimensional setting, where one dimension is the value of the buyer and the other is a type that captures some auxiliary information. One such setting is the FedEx problem, for which FGKK [2016] characterize the optimal mechanism for a single agent. We ask: how far can such characterizations go? In particular, we consider single-minded agents. A seller has heterogeneous items. A buyer has a value v for a specific subset of items S, and obtains value v iff he gets (at least) all the items in S. We show:
1. Deterministic mechanisms are optimal for distributions that satisfy the declining marginal revenue (DMR) property; we give an explicit construction of the optimal mechanism.
2. Without DMR, the result depends on the structure of the directed acyclic graph (DAG) representing the partial order among types. When the DAG has out-degree at most 1, we characterize the optimal mechanism a la FedEx.
3. Without DMR, when the DAG has some node with out-degree at least 2, the menu complexity is unbounded: for any M, there exist distributions over (v, S) pairs such that the menu complexity of the optimal mechanism is at least M.
4. For the case of 3 types, we show that for all distributions there exists an optimal mechanism of finite menu complexity. This is in contrast to the case of two additive heterogeneous items, for which the menu complexity could be uncountable [MV07; DDT15].
In addition, we prove that optimal mechanisms for Multi-Unit Pricing (without DMR) can have unbounded menu complexity. We also propose an extension where the menu complexity of optimal mechanisms can be countably infinite but not uncountable. Together these results establish that optimal mechanisms in interdimensional settings are both much richer than in single-dimensional settings, yet also vastly more structured than in multi-dimensional settings.
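For intuition on the DMR property in item 1, the sketch below (our own; the discrete formalization as concavity of the revenue curve is an assumption) checks declining marginal revenue for a discrete value distribution: posting price v sells with probability q(v) = Pr[value >= v] and yields revenue v * q(v), and DMR asks that marginal revenue be nonincreasing along the revenue curve.

```python
def has_dmr(values, probs):
    """Check a discrete value distribution for declining marginal revenue,
    formalized here as concavity of the revenue curve: the slopes of the
    points (q(v), v * q(v)) must be nonincreasing as q grows.

    values: distinct values, ascending; probs: their probabilities (sum to 1).
    """
    n = len(values)
    tail = [sum(probs[i:]) for i in range(n)]  # q(v_i) = Pr[value >= v_i]
    # Revenue-curve points, including the empty-sale endpoint (q=0, R=0).
    points = sorted([(0.0, 0.0)] + [(tail[i], values[i] * tail[i]) for i in range(n)])
    slopes = [
        (points[i + 1][1] - points[i][1]) / (points[i + 1][0] - points[i][0])
        for i in range(len(points) - 1)
    ]
    return all(s1 >= s2 - 1e-12 for s1, s2 in zip(slopes, slopes[1:]))

print(has_dmr([1, 2, 3], [1/3, 1/3, 1/3]))    # True: uniform values
print(has_dmr([1, 2, 10], [0.5, 0.3, 0.2]))   # False: the spike at 10 makes the curve non-concave
```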
This paper introduces an objective for optimizing proper scoring rules. The objective is to maximize the increase in payoff of a forecaster who exerts a binary level of effort to refine a posterior belief from a prior belief. In this framework, we characterize optimal scoring rules in simple settings, give efficient algorithms for computing optimal scoring rules in complex settings, and identify simple scoring rules that are approximately optimal. In comparison, standard scoring rules in theory and practice -- for example, the quadratic rule, scoring rules for the expectation, and scoring rules for multiple tasks that are averages of single-task scoring rules -- can be very far from optimal.
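As a minimal illustration of this gap (our own example, not the paper's construction): the payoff increment from reporting the posterior rather than the prior equals a Bregman divergence of the rule's expected-score function G, and for posteriors close to the prior a bounded V-shaped rule earns an increment of order eps while the quadratic rule earns only order eps^2.

```python
# Compare the worst-case payoff increment of the quadratic rule
# (G(p) = (2p-1)^2) against a bounded proper rule with a V-shaped
# expected-score function (G(p) = |2p-1|), on posteriors 0.5 +/- eps
# around a uniform prior. Both rules are examples, not the paper's optimum.

def bregman(G, dG, p, q):
    """Expected gain, under belief p, of reporting p instead of q."""
    return G(p) - (G(q) + dG(q) * (p - q))

G_quad, dG_quad = lambda p: (2*p - 1)**2, lambda p: 4*p - 2
# At the kink p = 0.5, any subgradient in [-2, 2] is valid; 0 is the
# worst-case-optimal choice here.
G_vee, dG_vee = lambda p: abs(2*p - 1), lambda p: 0 if p == 0.5 else (2 if p > 0.5 else -2)

prior, eps = 0.5, 0.01
for G, dG, name in [(G_quad, dG_quad, "quadratic"), (G_vee, dG_vee, "V-shaped")]:
    worst = min(bregman(G, dG, prior + e, prior) for e in (+eps, -eps))
    print(name, worst)  # quadratic: 4e-4, V-shaped: 0.02
```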