Voting rules may fail to implement the will of the society when only some voters actively participate, and/or in the presence of sybil (fake or duplicate) voters. Here we aim to address social choice in the presence of sybils and voter abstention. To do so we assume the status quo (Reality) as an ever-present distinguished alternative, and study Reality Enforcing voting rules, which add virtual votes in support of the status quo. We measure the tradeoff between safety and liveness (the ability of active honest voters to maintain or change the status quo, respectively) in a variety of domains, and show that the Reality Enforcing voting rule is optimal in this respect.
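The core mechanism of a reality-enforcing rule can be illustrated in a simple two-alternative setting. The sketch below is our own minimal illustration (the function name, the two-alternative restriction, and the parameter `num_virtual` are assumptions, not the paper's formalism): virtual votes for the status quo raise the margin that active voters must clear in order to enact a change.

```python
def reality_enforcing_majority(active_votes, num_virtual):
    """Majority vote between 'change' and 'status_quo', with num_virtual
    virtual votes added in support of the status quo. 'change' wins only
    if active support for it exceeds the padded status-quo count, which
    trades liveness (ease of change) for safety against sybil votes."""
    change = sum(1 for v in active_votes if v == "change")
    status_quo = sum(1 for v in active_votes if v == "status_quo")
    return "change" if change > status_quo + num_virtual else "status_quo"
```

Raising `num_virtual` makes the rule safer against sybils but reduces liveness: with votes of 5 for change and 3 for the status quo, one virtual vote still lets the change pass, while two virtual votes block it.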
Any community in which membership is optional may eventually break apart, or fork. For example, forks may occur in political parties, business partnerships, social groups, cryptocurrencies, and federated governing bodies. Forking is typically the product of informal social processes or the organized action of an aggrieved minority, and it is not always amicable. Forks usually come at a cost, and can be seen as consequences of collective decisions that destabilize the community. Here, we provide a social choice setting in which agents can report preferences not only over a set of alternatives, but also over the possible forks that may occur in the face of disagreement. We study this social choice setting, concentrating on stability issues and concerns of strategic agent behavior.
We discuss the connection between computational social choice (comsoc) and computational complexity. We stress the work so far on, and urge continued focus on, two less-recognized aspects of this connection. Firstly, this is very much a two-way street: everyone knows complexity classification is used in comsoc, but we also highlight benefits to complexity that have arisen from its use in comsoc. Secondly, and more subtly, less-known complexity tools can often be used very productively in comsoc.
How should one combine noisy information from diverse sources to make an inference about an objective ground truth? This frequently recurring, normative question lies at the core of statistics, machine learning, policy-making, and everyday life. It has been called combining forecasts, meta-analysis, ensembling, and the MLE approach to voting, among other names. Past studies typically assume that noisy votes are independently and identically distributed (i.i.d.), but this assumption is often unrealistic. Instead, we assume that votes are independent but not necessarily identically distributed and that our ensembling algorithm has access to certain auxiliary information related to the underlying model governing the noise in each vote. In this work, we: (1) define our problem and argue that it reflects common and socially relevant real-world scenarios, (2) propose a multi-armed bandit noise model and count-based auxiliary information set, (3) derive maximum likelihood aggregation rules for ranked and cardinal votes under our noise model, (4) propose, alternatively, to learn an aggregation rule using an order-invariant neural network, and (5) empirically compare our rules to common voting rules and naive experience-weighted modifications. We find that our rules successfully use auxiliary information to outperform the naive baselines.
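The idea of MLE aggregation over independent but non-identically distributed votes can be sketched in the simplest case of a binary ground truth, where each voter has a known accuracy. This is our own toy illustration, not the paper's model (which covers ranked and cardinal votes under a multi-armed bandit noise model): the log-odds weighting below is the classical MLE-optimal weighted majority for independent binary signals.

```python
import math

def mle_binary_aggregate(votes, accuracies):
    """Aggregate independent +1/-1 votes on a binary ground truth.
    Voter i is correct with probability accuracies[i]; the maximum
    likelihood estimate is a weighted majority where each vote is
    weighted by the log-odds of its voter's accuracy, so reliable
    voters count for more than barely-better-than-random ones."""
    score = sum(v * math.log(p / (1 - p)) for v, p in zip(votes, accuracies))
    return 1 if score > 0 else -1
```

Here a single voter with accuracy 0.9 (weight log 9 ≈ 2.2) can outvote two opposing voters with accuracy 0.6 (weight log 1.5 ≈ 0.4 each), which an unweighted majority cannot capture.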
During the COVID-19 crisis, governments and other decision makers have had to make many difficult decisions. For example, do we go for a total lockdown or keep schools open? How many people, and which people, should be tested? Although there are many good models from, e.g., epidemiologists on the spread of the virus under certain conditions, these models do not directly translate into the interventions a government can take. Nor can these models help in understanding the economic and/or social consequences of the interventions. However, effective and sustainable solutions need to take this combination of factors into account. In this paper, we propose an agent-based social simulation tool, ASSOCC, that supports decision makers in understanding the possible consequences of policy interventions, by exploring the combined social, health, and economic consequences of these interventions.
Detecting and suspending fake accounts (Sybils) in online social networking (OSN) services protects both OSN operators and OSN users from illegal exploitation. Existing social-graph-based defense schemes effectively bound the number of accepted Sybils by the total number of social connections between Sybils and non-Sybil users. However, Sybils may still evade the defenses by soliciting many social connections to real users. We propose SybilFence, a system that improves over social-graph-based Sybil defenses to further thwart Sybils. SybilFence is based on the observation that even well-maintained fake accounts inevitably receive a significant amount of negative user feedback, such as rejections of their friend requests. Our key idea is to discount the social edges of users that have received negative feedback, thereby limiting the impact of Sybils' social edges. Preliminary simulation results show that our proposal is more resilient to attacks in which fake accounts continuously solicit social connections over time.
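The edge-discounting idea can be sketched as follows. This is a toy illustration only: the multiplicative discount `alpha ** num_rejections` and all the names here are our own assumptions, not SybilFence's actual weighting scheme.

```python
def discount_edges(edge_weights, negative_feedback, alpha=0.5):
    """Discount each user's social-edge weight by the negative feedback
    (e.g., rejected friend requests) that user has received. Users with
    no negative feedback keep their full weight; suspected Sybils, who
    tend to accumulate rejections while soliciting connections, see the
    influence of their solicited edges shrink geometrically."""
    return {
        user: weight * (alpha ** negative_feedback.get(user, 0))
        for user, weight in edge_weights.items()
    }
```

A social-graph-based defense that ranks users by (discounted) connectivity then admits fewer Sybils, since solicited edges no longer count at full strength.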