
Beyond Pigouvian Taxes: A Worst Case Analysis

Posted by Ruty Mundel
Publication date: 2021
Paper language: English





In the early $20^{th}$ century, Pigou observed that imposing a marginal cost tax on the usage of a public good induces a socially efficient level of use as an equilibrium. Unfortunately, such a Pigouvian tax may also induce other, socially inefficient, equilibria. We observe that this social inefficiency may be unbounded, and study whether alternative tax structures may lead to milder losses in the worst case, i.e., to a lower price of anarchy. We show that no tax structure leads to bounded losses in the worst case. However, we do find a tax scheme that has a lower price of anarchy than the Pigouvian tax, obtaining tight lower and upper bounds in terms of a crucial parameter that we identify. We generalize our results to various scenarios, each of which offers an alternative to the use of a public road by private cars, such as ride sharing or taking a bus or a train.
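The abstract's central objects, equilibria of a taxed congestion game and the price of anarchy, can be made concrete with a toy calculation. The Python sketch below is illustrative only and is not the paper's model: it assumes n identical players choosing between a congestible road with per-user cost c(k) when k players drive and an alternative with fixed per-user cost b, adds a tax scheme tau(k) to each driver's cost, enumerates pure equilibria by the number of drivers, and reports the price of anarchy. The cost function, the discrete marginal-externality tax formula, and all parameter values are assumptions chosen for illustration.

```python
# Illustrative sketch only: a toy two-option congestion game, not the paper's model.
# n players choose between a congestible road (per-user cost c(k) when k drive)
# and an alternative with fixed per-user cost b; tau(k) is a tax added to each
# driver's cost.  We enumerate pure Nash equilibria over k, the number of drivers.

def social_cost(k, n, c, b):
    """Total cost when k players drive and n - k use the alternative."""
    return k * c(k) + (n - k) * b

def is_equilibrium(k, n, c, b, tau):
    """No driver prefers the alternative, and no alternative user prefers driving."""
    drivers_stay = (k == 0) or (c(k) + tau(k) <= b)
    others_stay = (k == n) or (b <= c(k + 1) + tau(k + 1))
    return drivers_stay and others_stay

def price_of_anarchy(n, c, b, tau):
    """Worst equilibrium social cost divided by the optimal social cost."""
    opt = min(social_cost(k, n, c, b) for k in range(n + 1))
    eq_costs = [social_cost(k, n, c, b)
                for k in range(n + 1) if is_equilibrium(k, n, c, b, tau)]
    return max(eq_costs) / opt

# Assumed example: linear congestion c(k) = k, alternative cost b = 7, 10 players.
# Discrete marginal-externality ("Pigouvian") tax: tau(k) = (k - 1) * (c(k) - c(k - 1)).
c = lambda k: k
pigou = lambda k: (k - 1) * (c(k) - c(k - 1))
no_tax = lambda k: 0
print("PoA, no tax:       ", price_of_anarchy(10, c, 7, no_tax))
print("PoA, Pigouvian tax:", price_of_anarchy(10, c, 7, pigou))
```

In this particular toy instance the Pigouvian tax happens to make every equilibrium efficient; the paper's point is that for other cost structures the same tax admits additional, arbitrarily inefficient equilibria, which the same brute-force check can be used to explore.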


Read also

Tim Roughgarden (2020)
One of the primary goals of the mathematical analysis of algorithms is to provide guidance about which algorithm is the best for solving a given computational problem. Worst-case analysis summarizes the performance profile of an algorithm by its worst performance on any input of a given size, implicitly advocating for the algorithm with the best-possible worst-case performance. Strong worst-case guarantees are the holy grail of algorithm design, providing an application-agnostic certification of an algorithm's robustly good performance. However, for many fundamental problems and performance measures, such guarantees are impossible and a more nuanced analysis approach is called for. This chapter surveys several alternatives to worst-case analysis that are discussed in detail later in the book.
A rich class of mechanism design problems can be understood as incomplete-information games between a principal who commits to a policy and an agent who responds, with payoffs determined by an unknown state of the world. Traditionally, these models require strong and often impractical assumptions about beliefs (a common prior over the state). In this paper, we dispense with the common prior. Instead, we consider a repeated interaction where both the principal and the agent may learn over time from the state history. We reformulate mechanism design as a reinforcement learning problem and develop mechanisms that attain natural benchmarks without any assumptions on the state-generating process. Our results make use of novel behavioral assumptions for the agent, centered around counterfactual internal regret, that capture the spirit of rationality without relying on beliefs.
In this paper, we consider a network of consumers who are under the combined influence of their neighbors and external influencing entities (the marketers). The consumers' opinions follow hybrid dynamics whose opinion jumps are due to the marketing campaigns. Using the static game model recently proposed in [1], we prove that although the marketers are in competition and therefore create tension in the network, the network reaches a consensus. Exploiting this key result, we propose a coopetition marketing strategy which combines the one-shot Nash equilibrium actions with a policy of no advertising. Under reasonable sufficient conditions, it is proved that the proposed coopetition strategy profile Pareto-dominates the one-shot Nash equilibrium strategy. This is a very encouraging result for tackling the much more challenging problem of designing Pareto-optimal and equilibrium strategies for the considered dynamical marketing game.
Bandits with Knapsacks (BwK) is a general model for multi-armed bandits under supply/budget constraints. While worst-case regret bounds for BwK are well understood, we present three results that go beyond the worst-case perspective. First, we provide upper and lower bounds which amount to a full characterization of logarithmic, instance-dependent regret rates. Second, we consider simple regret in BwK, which tracks an algorithm's performance in a given round, and prove that it is small in all but a few rounds. Third, we provide a general reduction from BwK to bandits which takes advantage of some known helpful structure, and apply this reduction to combinatorial semi-bandits, linear contextual bandits, and multinomial-logit bandits. Our results build on the BwK algorithm from \citet{AgrawalDevanur-ec14}, providing new analyses thereof.
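As a rough illustration of the model this abstract refers to (and not of the algorithm of Agrawal and Devanur that the paper builds on), the sketch below simulates a BwK instance with Bernoulli rewards and Bernoulli resource consumption and runs a naive explore-then-exploit policy that ranks arms by estimated reward per unit of cost. All arm parameters, the budget, and the exploration length are made-up values.

```python
# Minimal sketch of the Bandits-with-Knapsacks setting: each pull yields a
# stochastic reward and consumes a stochastic amount of one resource; play
# stops when the budget runs out.
import random

def run_bwk(means_reward, means_cost, budget, explore_rounds=50, seed=0):
    rng = random.Random(seed)
    k = len(means_reward)
    rew_sum, cost_sum, pulls = [0.0] * k, [0.0] * k, [0] * k
    total_reward, remaining = 0.0, budget

    def pull(arm):
        nonlocal total_reward, remaining
        r = rng.random() < means_reward[arm]   # Bernoulli reward
        c = rng.random() < means_cost[arm]     # Bernoulli resource consumption
        rew_sum[arm] += r
        cost_sum[arm] += c
        pulls[arm] += 1
        total_reward += r
        remaining -= c

    t = 0
    while remaining > 0:
        if t < explore_rounds:
            pull(t % k)                        # round-robin exploration
        else:
            # exploit: highest estimated reward per unit of estimated cost
            ratios = [rew_sum[i] / max(cost_sum[i], 1e-9) for i in range(k)]
            pull(max(range(k), key=lambda i: ratios[i]))
        t += 1
    return total_reward

print(run_bwk(means_reward=[0.9, 0.5], means_cost=[0.8, 0.2], budget=100))
```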
We introduce a framework for statistical estimation that leverages knowledge of how samples are collected but makes no distributional assumptions on the data values. Specifically, we consider a population of elements $[n]=\{1,\ldots,n\}$ with corresponding data values $x_1,\ldots,x_n$. We observe the values for a sample set $A \subset [n]$ and wish to estimate some statistic of the values for a target set $B \subset [n]$, where $B$ could be the entire set. Crucially, we assume that the sets $A$ and $B$ are drawn according to some known distribution $P$ over pairs of subsets of $[n]$. A given estimation algorithm is evaluated based on its worst-case expected error, where the expectation is with respect to the distribution $P$ from which the sample set $A$ and target set $B$ are drawn, and the worst case is with respect to the data values $x_1,\ldots,x_n$. Within this framework, we give an efficient algorithm for estimating the target mean that returns a weighted combination of the sample values, where the weights are functions of the distribution $P$ and the sample and target sets $A$, $B$, and show that the worst-case expected error achieved by this algorithm is at most a multiplicative $\pi/2$ factor worse than that of the optimal such algorithm. The algorithm and proof leverage a surprising connection to the Grothendieck problem. This framework, which makes no distributional assumptions on the data values but rather relies on knowledge of the data collection process, is a significant departure from typical estimation and introduces a uniform algorithmic analysis for the many natural settings where membership in a sample may be correlated with data values, such as when sampling probabilities vary as in importance sampling, when individuals are recruited into a sample via a social network as in snowball sampling, or when samples have chronological structure as in selective prediction.
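To make the sample/target framework concrete, here is a small simulation using a simple inverse-inclusion-probability weighting in the spirit of Horvitz-Thompson, not the near-optimal estimator the paper actually derives. The inclusion probabilities, the data values, and the choice of the target $B$ as the full population are assumptions for illustration; the point is only that when sample membership correlates with the data values, the naive sample mean is biased, while a weighting that uses knowledge of $P$ need not be.

```python
# Toy simulation of estimation under a known sampling distribution P.
# Uses inverse-inclusion-probability weights (Horvitz-Thompson style),
# NOT the paper's near-optimal estimator.
import random

def simulate(n=20, trials=5000, seed=1):
    rng = random.Random(seed)
    # Assumed toy distribution P: element i joins the sample A independently
    # with probability p[i]; the target B is the whole population [n].
    p = [0.1 + 0.8 * i / (n - 1) for i in range(n)]
    # Fixed data values, deliberately correlated with the inclusion probability,
    # so the naive sample mean over-represents the large values.
    x = [10.0 * p_i for p_i in p]
    true_mean = sum(x) / n

    se_naive, se_weighted = [], []
    for _ in range(trials):
        A = [i for i in range(n) if rng.random() < p[i]]
        if not A:
            continue
        naive = sum(x[i] for i in A) / len(A)       # ignores how A was drawn
        weighted = sum(x[i] / p[i] for i in A) / n  # inverse-probability weights
        se_naive.append((naive - true_mean) ** 2)
        se_weighted.append((weighted - true_mean) ** 2)
    return sum(se_naive) / len(se_naive), sum(se_weighted) / len(se_weighted)

print(simulate())  # (mean squared error of the naive mean, of the weighted estimator)
```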