
Better Together? How Externalities of Size Complicate Notions of Solidarity and Actuarial Fairness

Added by Kate Donahue
Publication date: 2021
Language: English





Consider a cost-sharing game with players of differing contributions to the total cost: an example might be an insurance company calculating premiums for a population of mixed-risk individuals. Two natural and competing notions of fairness might be to (a) charge each individual the same price or (b) charge each individual according to the cost that they bring to the pool. In the insurance literature, these general approaches are referred to as solidarity and actuarial fairness and are commonly viewed as opposites. However, in insurance (and many other natural settings), the cost-sharing game also exhibits externalities of size: all else being equal, larger groups have lower average cost. In the insurance case, we analyze a model with externalities of size due to a reduction in the variability of losses. We explore how this complicates traditional understandings of fairness, drawing on literature in cooperative game theory. First, we explore solidarity: we show that it is possible for both groups (high and low risk) to strictly benefit by joining an insurance pool where costs are evenly split, as opposed to being in separate risk pools. We build on this by producing a pricing scheme that maximally subsidizes the high-risk group while maintaining an incentive for lower-risk people to stay in the insurance pool. Next, we demonstrate that with this new model, the price charged to each individual has to depend on the risk of the other participants, making naive actuarial fairness inefficient. Furthermore, we prove that stable pricing schemes must be ones where players have the anti-social incentive of desiring riskier partners, contradicting motivations for using actuarial fairness. Finally, we describe how these results relate to debates about fairness in machine learning and potential avenues for future research.
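To make the externality of size concrete, here is a minimal numerical sketch in Python. It assumes a simplified premium rule (expected loss plus a risk loading proportional to the standard deviation of the pool's per-capita loss) rather than the paper's exact model; the risk probabilities, loading factor, and group sizes are illustrative only. With these numbers, both the low-risk and the high-risk group pay strictly less in the merged, evenly split pool than in their separate pools.

```python
import math

# Toy illustration (not the paper's exact model) of an externality of size:
# the premium is expected loss plus a risk loading proportional to the
# standard deviation of the pool's per-capita loss, so larger pools are
# cheaper per person, all else being equal.
def premium(risks, loading=2.0):
    """Per-person premium for a pool of Bernoulli(p_i) unit losses."""
    n = len(risks)
    mean = sum(risks) / n
    std_per_capita = math.sqrt(sum(p * (1 - p) for p in risks)) / n
    return mean + loading * std_per_capita

low, high = [0.10] * 10, [0.12] * 10      # 10 low-risk and 10 high-risk people

print("low-risk pool alone :", round(premium(low), 4))         # ~0.29
print("high-risk pool alone:", round(premium(high), 4))        # ~0.33
print("merged, even split  :", round(premium(low + high), 4))  # ~0.25, cheaper for both
```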




Related research

Sponsored Search Auctions (SSAs) arguably represent the problem at the intersection of computer science and economics with the deepest applications in real life. Within the realm of SSAs, the study of the effects that showing one ad has on the other ads, known in economics as externalities, is of utmost importance and has so far attracted the attention of much research. However, even the basic question of how to model the problem has so far escaped a definitive answer. The popular cascade model is arguably too idealized to really describe the phenomenon, yet it allows a good comprehension of the problem. Other models, instead, describe the setting more adequately but are too complex to permit a satisfactory theoretical analysis. In this work, we attempt to get the best of both approaches: first, we define a number of general mathematical formulations for the problem in the attempt to richly describe externalities in SSAs and, second, we prove a host of results drawing a nearly complete picture of the computational complexity of the problem. We complement these approximability results with some considerations about mechanism design in our context.
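As a concrete illustration of the externalities discussed above, here is a minimal Python sketch of the cascade model (the click and continuation probabilities are invented for illustration): the expected clicks an ad receives depend on which ads are shown above it, so reordering the allocation changes every ad's value.

```python
# Minimal sketch of the cascade model of externalities in sponsored search.
# The user scans slots top-down, clicks ad i with probability q[i], and
# continues past it with probability c[i].
def expected_clicks(allocation, q, c):
    """Expected clicks per ad for a given top-down allocation of ads to slots."""
    observe = 1.0          # probability the user reaches the current slot
    clicks = {}
    for ad in allocation:
        clicks[ad] = observe * q[ad]
        observe *= c[ad]   # externality: ads shown above reduce attention below
    return clicks

q = {"a": 0.30, "b": 0.20, "c": 0.10}   # click probabilities (illustrative)
c = {"a": 0.50, "b": 0.80, "c": 0.90}   # continuation probabilities (illustrative)

print(expected_clicks(["a", "b", "c"], q, c))
print(expected_clicks(["b", "a", "c"], q, c))   # reordering changes everyone's clicks
```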
Jason Breck, 2020
This paper is the confluence of two streams of ideas in the literature on generating numerical invariants, namely: (1) template-based methods, and (2) recurrence-based methods. A template-based method begins with a template that contains unknown quantities, and finds invariants that match the template by extracting and solving constraints on the unknowns. A disadvantage of template-based methods is that they require fixing the set of terms that may appear in an invariant in advance. This disadvantage is particularly prominent for non-linear invariant generation, because the user must supply maximum degrees on polynomials, bases for exponents, etc. On the other hand, recurrence-based methods are able to find sophisticated non-linear mathematical relations, including polynomials, exponentials, and logarithms, because such relations arise as the solutions to recurrences. However, a disadvantage of past recurrence-based invariant-generation methods is that they are primarily loop-based analyses: they use recurrences to relate the pre-state and post-state of a loop, so it is not obvious how to apply them to a recursive procedure, especially if the procedure is non-linearly recursive (e.g., a tree-traversal algorithm). In this paper, we combine these two approaches and obtain a technique that uses templates in which the unknowns are functions rather than numbers, and the constraints on the unknowns are recurrences. The technique synthesizes invariants involving polynomials, exponentials, and logarithms, even in the presence of arbitrary control-flow, including any combination of loops, branches, and (possibly non-linear) recursion. For instance, it is able to show that (i) the time taken by merge-sort is $O(n \log n)$, and (ii) the time taken by Strassen's algorithm is $O(n^{\log_2 7})$.
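As a small sanity check of the merge-sort bound cited at the end of the abstract (not the paper's own analysis), the Python sketch below evaluates the recurrence T(n) = 2T(n/2) + n exactly on powers of two and compares it with the closed form n*log2(n).

```python
import math
from functools import lru_cache

# The merge-sort cost recurrence T(n) = 2*T(n/2) + n with T(1) = 0 has the
# closed form T(n) = n*log2(n) on powers of two -- the O(n log n) bound above.
@lru_cache(maxsize=None)
def T(n):
    return 0 if n == 1 else 2 * T(n // 2) + n

for k in range(1, 11):
    n = 2 ** k
    print(n, T(n), n * math.log2(n))   # the two columns agree exactly
```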
ML-based predictive systems are increasingly used to support decisions with a critical impact on individuals' lives such as college admission, job hiring, child custody, criminal risk assessment, etc. As a result, fairness has emerged as an important requirement to guarantee that predictive systems do not discriminate against specific individuals or entire sub-populations, in particular, minorities. Given the inherent subjectivity of the concept of fairness, several notions of fairness have been introduced in the literature. This paper is a survey of fairness notions that, unlike other surveys in the literature, addresses the question of which notion of fairness is most suited to a given real-world scenario, and why. Our attempt to answer this question consists of (1) identifying the set of fairness-related characteristics of the real-world scenario at hand, (2) analyzing the behavior of each fairness notion, and then (3) fitting these two elements to recommend the most suitable fairness notion in every specific setup. The results are summarized in a decision diagram that can be used by practitioners and policymakers to navigate the relatively large catalogue of fairness notions.
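For readers unfamiliar with the catalogue of fairness notions the survey navigates, the following Python sketch computes two common group-fairness notions, statistical parity and equal opportunity, on a small hypothetical dataset (labels, predictions, and group memberships are invented for illustration).

```python
import numpy as np

# Toy data: binary labels, binary classifier predictions, and a binary
# sensitive attribute splitting the population into two groups.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0, 0, 0])
group  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

def positive_rate(pred, mask):
    """P(Y_hat = 1) within the masked group."""
    return pred[mask].mean()

def true_positive_rate(true, pred, mask):
    """P(Y_hat = 1 | Y = 1) within the masked group."""
    pos = mask & (true == 1)
    return pred[pos].mean()

# Statistical parity: equal selection rates across groups.
sp_gap = positive_rate(y_pred, group == 0) - positive_rate(y_pred, group == 1)
# Equal opportunity: equal true-positive rates across groups.
eo_gap = true_positive_rate(y_true, y_pred, group == 0) - \
         true_positive_rate(y_true, y_pred, group == 1)

print(f"statistical parity gap: {sp_gap:+.2f}")
print(f"equal opportunity gap:  {eo_gap:+.2f}")
```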
We study the optimal pricing strategies of a monopolist selling a divisible good (service) to consumers that are embedded in a social network. A key feature of our model is that consumers experience a (positive) local network effect. In particular, each consumer's usage level depends directly on the usage of her neighbors in the social network structure. Thus, the monopolist's optimal pricing strategy may involve offering discounts to certain agents who have a central position in the underlying network. First, we consider a setting where the monopolist can offer individualized prices and derive an explicit characterization of the optimal price for each consumer as a function of her network position. In particular, we show that it is optimal for the monopolist to charge each agent a price that is proportional to her Bonacich centrality in the social network. In the second part of the paper, we discuss the optimal strategy of a monopolist that can only choose a single uniform price for the good and derive an algorithm, polynomial in the number of agents, to compute such a price. Third, we assume that the monopolist can offer the good at two prices, full and discounted, and study the problem of determining which set of consumers should be given the discount. We show that the problem is NP-hard; however, we provide an explicit characterization of the set of agents that should be offered the discounted price. Next, we describe an approximation algorithm for finding the optimal set of agents. We show that if the profit is nonnegative under any feasible price allocation, the algorithm guarantees at least 88% of the optimal profit. Finally, we highlight the value of network information by comparing the profits of a monopolist that does not take into account the network effects when choosing her pricing policy to those of a monopolist that uses this information optimally.
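The following Python sketch illustrates the Bonacich centrality computation referenced above, b = (I - alpha*G)^{-1} 1, and sets individualized prices proportional to it on a toy four-agent network; the adjacency matrix, decay parameter, and base price are illustrative only, and the paper's exact pricing constants differ.

```python
import numpy as np

# Toy undirected network of four agents (illustrative adjacency matrix).
G = np.array([
    [0, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
], dtype=float)

alpha = 0.2   # decay parameter; must satisfy alpha < 1 / spectral_radius(G)
assert alpha * max(abs(np.linalg.eigvals(G))) < 1

n = G.shape[0]
# Bonacich centrality: b = (I - alpha*G)^{-1} * 1
bonacich = np.linalg.solve(np.eye(n) - alpha * G, np.ones(n))

base_price = 10.0                                   # hypothetical anchor price
prices = base_price * bonacich / bonacich.mean()    # proportional to centrality

for i, (b, p) in enumerate(zip(bonacich, prices)):
    print(f"agent {i}: centrality {b:.3f}, price {p:.2f}")
```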
Pierre E. Jacob, 2017
In modern applications, statisticians are faced with integrating heterogeneous data modalities relevant for an inference, prediction, or decision problem. In such circumstances, it is convenient to use a graphical model to represent the statistical dependencies, via a set of connected modules, each relating to a specific data modality, and drawing on specific domain expertise in their development. In principle, given data, the conventional statistical update then allows for coherent uncertainty quantification and information propagation through and across the modules. However, misspecification of any module can contaminate the estimate and update of others, often in unpredictable ways. In various settings, particularly when certain modules are trusted more than others, practitioners have preferred to avoid learning with the full model in favor of approaches that restrict the information propagation between modules, for example by restricting propagation to only particular directions along the edges of the graph. In this article, we investigate why these modular approaches might be preferable to the full model in misspecified settings. We propose principled criteria to choose between modular and full-model approaches. The question arises in many applied settings, including large stochastic dynamical systems, meta-analysis, epidemiological models, air pollution models, pharmacokinetics-pharmacodynamics, and causal inference with propensity scores.
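The following Python sketch illustrates the cut-versus-full contrast on a toy two-module Gaussian model (the model, priors, and bias are invented for illustration and are not one of the paper's applications): when module 2's data are biased, feedback in the full model contaminates the estimate of the module-1 parameter, while the modular ("cut") update does not.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-module model (invented for illustration):
#   Module 1:  y1 ~ N(theta1, 1),          prior theta1 ~ N(0, 1)
#   Module 2:  y2 ~ N(theta1 + theta2, 1), prior theta2 ~ N(0, 0.1^2)
# Module 2 is misspecified: its data carry a bias of +3 that the tight
# prior on theta2 cannot absorb, so feedback from y2 can contaminate theta1.
theta1_true = 1.0
y1 = rng.normal(theta1_true, 1.0, size=50)
y2 = rng.normal(theta1_true + 3.0, 1.0, size=50)   # biased module-2 data
n1, n2 = len(y1), len(y2)

# Modular ("cut") approach: theta1 learns from module 1 only; theta2 would
# then be updated conditionally on these draws (second stage omitted here).
v_cut = 1.0 / (n1 + 1)
m_cut = v_cut * y1.sum()
theta1_cut = rng.normal(m_cut, np.sqrt(v_cut), size=5000)

# Full-model approach: Gibbs sampler in which theta1 also receives
# feedback from module 2 through the shared likelihood term.
theta1, theta2 = 0.0, 0.0
theta1_full = []
for _ in range(5000):
    v1 = 1.0 / (1 + n1 + n2)
    m1 = v1 * (y1.sum() + (y2 - theta2).sum())
    theta1 = rng.normal(m1, np.sqrt(v1))
    v2 = 1.0 / (100 + n2)          # prior precision 100 (sd 0.1)
    m2 = v2 * (y2 - theta1).sum()
    theta2 = rng.normal(m2, np.sqrt(v2))
    theta1_full.append(theta1)

print("true theta1        :", theta1_true)
print("cut posterior mean :", round(float(theta1_cut.mean()), 3))
print("full posterior mean:", round(float(np.mean(theta1_full[500:])), 3))
```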
