
Power in Liquid Democracy

Added by Yuzhe Zhang
Publication date: 2020
Language: English





The paper develops a theory of power for delegable proxy voting systems. We define a power index able to measure the influence of both voters and delegators. Using this index, which we characterize axiomatically, we extend an earlier game-theoretic model by incorporating power-seeking behavior by agents. We analytically study the existence of pure strategy Nash equilibria in such a model. Finally, by means of simulations, we study the effect of relevant parameters on the emergence of power inequalities in the model.
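
The index itself is characterized axiomatically in the paper; purely as a hedged illustration of the underlying structure, the sketch below (in Python, with an invented delegation graph and helper names) resolves delegation chains and counts how much voting weight each casting agent accumulates. This accumulated weight is only the crudest measure of influence; the paper's index, by contrast, also attributes power to the delegators along the chain.

```python
# Toy liquid-democracy delegation graph: each agent either casts a vote
# directly ("cast") or delegates to another agent. Invented example,
# not the power index defined in the paper.
delegations = {
    "a": "cast",   # a votes directly
    "b": "a",      # b delegates to a
    "c": "b",      # c delegates to b, so c's vote flows on to a
    "d": "cast",
    "e": "d",
}

def final_voter(agent, delegations):
    """Follow the delegation chain until an agent who casts a vote is reached."""
    seen = set()
    while delegations[agent] != "cast":
        if agent in seen:        # delegation cycle: the vote is lost
            return None
        seen.add(agent)
        agent = delegations[agent]
    return agent

def accumulated_weight(delegations):
    """Count how many votes (own plus received) each casting agent controls."""
    weight = {a: 0 for a, d in delegations.items() if d == "cast"}
    for agent in delegations:
        rep = final_voter(agent, delegations)
        if rep is not None:
            weight[rep] += 1
    return weight

print(accumulated_weight(delegations))   # {'a': 3, 'd': 2}
```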




Related research

We study wisdom of the crowd effects in liquid democracy when agents are allowed to apportion weights to proxies by mixing their delegations. We show that in this setting -- unlike in the standard one where votes are always delegated in full to one proxy -- it becomes possible to identify delegation structures that optimize the truth-tracking accuracy of the group. We contrast this centralized solution with the group accuracy obtained in equilibrium when agents interact by greedily trying to maximize their own individual accuracy through mixed delegations, and study the price of anarchy of these games. While equilibria with mixed delegations may be as bad as in the standard delegations setting, they are never worse and may sometimes be better.
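
As a hedged sketch of what mixed delegations involve computationally (the names, competences, weights, and weighted-majority rule below are invented assumptions, not the paper's model): agents apportion fractions of their voting weight to proxies, and the group's truth-tracking accuracy can then be estimated by simulating a binary ground truth with independent proxy competences.

```python
import random

# Invented example: three casting proxies ("gurus") with independent
# competences, and the total weight each one holds after the other agents
# have split (mixed) their delegations among them.
competences = {"g1": 0.9, "g2": 0.6, "g3": 0.55}
weights     = {"g1": 2.5, "g2": 1.0, "g3": 1.5}

def group_accuracy(competences, weights, trials=100_000, seed=0):
    """Monte Carlo estimate of the probability that the weighted majority
    of the gurus' votes matches a binary ground truth."""
    rng = random.Random(seed)
    total_weight = sum(weights.values())
    correct = 0
    for _ in range(trials):
        weight_for_truth = sum(
            weights[g] for g, p in competences.items() if rng.random() < p
        )
        if weight_for_truth > total_weight / 2:
            correct += 1
    return correct / trials

print(f"estimated group accuracy: {group_accuracy(competences, weights):.3f}")
```

Searching over such weight assignments for the accuracy-maximizing one corresponds to the centralized solution mentioned above, while the equilibrium analysis asks which weights emerge when each agent greedily optimizes its own accuracy.
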
Liquid democracy is a proxy voting method where proxies are delegable. We propose and study a game-theoretic model of liquid democracy to address the following question: when is it rational for a voter to delegate her vote? We study the existence of pure-strategy Nash equilibria in this model, and how group accuracy is affected by them. We complement these theoretical results by means of agent-based simulations to study the effects of delegations on group accuracy on variously structured social networks.
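
A minimal sketch of one ingredient such a model might involve, under the simplifying (and invented) assumption that an agent delegates exactly when some neighbour on the social network is strictly more competent than itself; the paper's utilities and equilibrium analysis are richer than this toy best-response rule.

```python
# Toy best-response step on a social network: each agent votes itself unless
# some neighbour is strictly more competent, in which case it delegates to
# its most competent neighbour. Invented rule and numbers.
competence = {"a": 0.55, "b": 0.70, "c": 0.60, "d": 0.52}
neighbours = {"a": ["b", "c"], "b": ["a"], "c": ["a", "d"], "d": ["c"]}

def best_response(agent):
    """Delegate to the most competent neighbour if that improves accuracy."""
    best = max(neighbours[agent], key=lambda n: competence[n])
    return best if competence[best] > competence[agent] else "cast"

profile = {agent: best_response(agent) for agent in competence}
print(profile)   # {'a': 'b', 'b': 'cast', 'c': 'cast', 'd': 'c'}
```
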
Most modern recommendation systems use the approach of collaborative filtering: users that are believed to behave alike are used to produce recommendations. In this work we describe an application (Liquid FM) taking a completely different approach. Liquid FM is a music recommendation system that makes the user responsible for the recommended items. Suggestions are the result of a voting scheme, employing the idea of viscous democracy. Liquid FM can also be thought of as the first testbed for this voting system. In this paper we outline the design and architecture of the application, both from the theoretical and from the implementation viewpoints.

Network congestion games are a well-understood model of multi-agent strategic interactions. Despite their ubiquitous applications, it is not clear whether it is possible to design information structures to ameliorate the overall experience of the network users. We focus on Bayesian games with atomic players, where network vagaries are modeled via a (random) state of nature which determines the costs incurred by the players. A third-party entity, the sender, can observe the realized state of the network and exploit this additional information to send a signal to each player. A natural question is the following: is it possible for an informed sender to reduce the overall social cost via the strategic provision of information to players who update their beliefs rationally? The paper focuses on the problem of computing optimal ex ante persuasive signaling schemes, showing that symmetry is a crucial property for its solution. Indeed, we show that an optimal ex ante persuasive signaling scheme can be computed in polynomial time when players are symmetric and have affine cost functions. Moreover, the problem becomes NP-hard when players are asymmetric, even in non-Bayesian settings.
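
Purely to fix ideas about the model ingredients (atomic players, a random state of nature, affine edge costs), here is a toy two-player, two-path example with invented coefficients; the signaling and persuasion machinery discussed above is not modelled.

```python
# Toy atomic network congestion game: two players each pick one of two
# parallel edges, and an edge with load x in state s costs a[s] * x + b[s].
# Coefficients and states are invented; persuasion/signaling is not modelled.
edges = {
    "top":    {"good": (1.0, 0.0), "bad": (3.0, 1.0)},
    "bottom": {"good": (2.0, 0.5), "bad": (2.0, 0.5)},
}

def edge_cost(edge, load, state):
    a, b = edges[edge][state]
    return a * load + b

def social_cost(profile, state):
    """Sum of the players' costs for a pure strategy profile."""
    loads = {e: sum(1 for choice in profile.values() if choice == e) for e in edges}
    return sum(edge_cost(choice, loads[choice], state) for choice in profile.values())

profile = {"player1": "top", "player2": "bottom"}
for state in ("good", "bad"):
    print(state, social_cost(profile, state))   # good 3.5, bad 6.5
```
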
Many machine learning systems make extensive use of large amounts of data regarding human behaviors. Several researchers have found various discriminatory practices related to the use of human-related machine learning systems, for example in the field of criminal justice, credit scoring and advertising. Fair machine learning is therefore emerging as a new field of study to mitigate biases that are inadvertently incorporated into algorithms. Data scientists and computer engineers are making various efforts to provide definitions of fairness. In this paper, we provide an overview of the most widespread definitions of fairness in the field of machine learning, arguing that the ideas underlying each formalization are closely related to different ideas of justice and to different interpretations of democracy embedded in our culture. This work intends to analyze the definitions of fairness that have been proposed to date, to interpret the underlying criteria, and to relate them to different ideas of democracy.
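
To make one such formalization concrete: demographic parity (statistical parity), one of the most widespread fairness definitions, requires the positive-prediction rate to be (approximately) equal across protected groups. The toy check below uses invented data and function names and stands in for just one of the many, partly incompatible, definitions the paper surveys.

```python
# Demographic parity check on toy data: the positive-prediction rate should
# be (approximately) equal across protected groups. Data are invented.
predictions = [1, 0, 1, 1, 0, 1, 0, 0]            # 1 = positive decision
groups      = ["A", "A", "A", "A", "B", "B", "B", "B"]

def positive_rate(preds, grps, group):
    members = [p for p, g in zip(preds, grps) if g == group]
    return sum(members) / len(members)

def demographic_parity_gap(preds, grps):
    rates = {g: positive_rate(preds, grps, g) for g in sorted(set(grps))}
    return max(rates.values()) - min(rates.values()), rates

gap, rates = demographic_parity_gap(predictions, groups)
print(rates)               # {'A': 0.75, 'B': 0.25}
print("parity gap:", gap)  # 0.5, i.e. far from demographic parity
```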
