In 1998 a long-lost proposal for an election law by Gottlob Frege (1848--1925) was rediscovered in the Thüringer Universitäts- und Landesbibliothek in Jena, Germany. The method that Frege proposed for the election of representatives of a constituency features a remarkable concern for the representation of minorities. Its core idea is that votes cast for unelected candidates are carried over to the next election, while elected candidates incur a cost of winning. We prove that this sensitivity to past elections guarantees proportional representation of political opinions in the long run. We also find that a slight modification of Frege's original method achieves even stronger proportionality guarantees. This modified version of Frege's method moreover provides a novel solution to the apportionment problem that is distinct from all of the best-known apportionment methods, while still possessing noteworthy proportionality properties.
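As a rough illustration of the carry-over idea, here is a minimal simulation under our own assumptions about the bookkeeping (one seat per election, with the winner's tally reduced by the total number of votes cast as the "cost of winning"); Frege's actual accounting, and the paper's modified variant, may differ in detail:

```python
# Minimal sketch (our assumptions, not Frege's exact rule): one seat per
# election; losers' votes carry over; the winner pays a "cost of winning",
# modeled here as deducting the total number of votes cast.

def frege_round(tallies, new_votes):
    """Add this election's votes to the running tallies, pick the winner,
    and charge the winner the cost of winning."""
    for cand, votes in new_votes.items():
        tallies[cand] = tallies.get(cand, 0) + votes
    winner = max(tallies, key=tallies.get)
    tallies[winner] -= sum(new_votes.values())  # cost of winning
    return winner

tallies = {}
votes = {"A": 60, "B": 30, "C": 10}  # a stable 60/30/10 electorate
seats = [frege_round(tallies, votes) for _ in range(10)]
print(seats)  # over 10 elections: A wins 6 seats, B wins 3, C wins 1
```

With a fixed 60/30/10 vote split, the simulated seat shares track the vote shares exactly over ten rounds, which is the kind of long-run proportionality the paper establishes formally.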
The advent of machine learning tools has led to the rise of data markets. These markets are characterized by multiple data purchasers interacting with a set of data sources. Data sources have more information about the quality of their data than the data purchasers do; additionally, data itself is a non-rivalrous good that can be shared with multiple parties at negligible marginal cost. In this paper, we study the multiple-principal, multiple-agent problem with non-rivalrous goods. Under the assumption that the principals' payoffs are quasilinear in the payments given to agents, we show that there is a fundamental degeneracy in the market for non-rivalrous goods. Specifically, for a general class of payment contracts, there will be an infinite set of generalized Nash equilibria. This multiplicity of equilibria also affects common refinements intended to select a unique equilibrium: both variational equilibria and normalized equilibria will be non-unique in general. This implies that most existing equilibrium concepts cannot provide predictions about the outcomes of the data markets emerging today. The results support the idea that modifications to the payment contracts themselves are unlikely to yield a unique equilibrium, and that either changes to the models of study or new equilibrium concepts will be required to determine unique equilibria in settings with multiple principals and a non-rivalrous good.
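To convey the flavor of this degeneracy, consider a deliberately simplified two-principal example (our own toy, not the paper's model): because the same dataset serves both principals at once, only the total payment to the data source is pinned down by its participation constraint, while the split between principals is free to vary.

```python
# Toy two-principal example (our simplification, not the paper's model):
# one data source produces a non-rivalrous dataset at cost c whenever
# total payments cover c; principal i values the data at v_i and is
# willing to pay any p_i <= v_i.  Every split (p1, p2) with
# p1 + p2 = c and 0 <= p_i <= v_i supports the same provision outcome.

c, v1, v2 = 1.0, 0.8, 0.9    # production cost and principals' valuations

lo = max(0.0, c - v2)        # p2 = c - p1 must satisfy p2 <= v2
hi = min(v1, c)              # p1 must satisfy p1 <= v1 (and p1 <= c)
print(f"any p1 in [{lo:.2f}, {hi:.2f}] supports provision")
# -> any p1 in [0.10, 0.80]: a continuum of payment profiles,
#    not a unique equilibrium point
```

The interval of admissible splits is the kind of indeterminacy that, in the full model, manifests as an infinite set of generalized Nash equilibria.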
Integrity of elections is vital to democratic systems, but it is frequently threatened by malicious actors. The study of the algorithmic complexity of manipulating election outcomes by changing an election's structural features is known as election control. One means of election control that has been proposed is to select the subset of issues that determine voter preferences over candidates. We study a variation of this model in which voters have judgments about the relative importance of issues, and a malicious actor can manipulate these judgments. We show that computing effective manipulations in this model is NP-hard even with two candidates or binary issues. However, we demonstrate that the problem is tractable with a constant number of voters or issues. Additionally, while the problem remains intractable when voters vote stochastically, we exhibit an important special case in which stochastic voting enables tractable manipulation.
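One natural reading of this model (our hypothetical encoding; the paper's formalization may differ) is that each voter holds stances on binary issues together with importance weights, votes for the candidate with the highest importance-weighted agreement, and the manipulator perturbs the weights:

```python
import numpy as np

# Hypothetical encoding (ours): binary issues, importance-weighted
# agreement voting, plurality winner.  The manipulator inflates the
# perceived importance of a single issue.

rng = np.random.default_rng(0)
n_voters, n_issues, n_cands = 7, 4, 2
voter_stances = rng.integers(0, 2, (n_voters, n_issues))
cand_stances = rng.integers(0, 2, (n_cands, n_issues))
weights = rng.random((n_voters, n_issues))   # voters' importance judgments

def plurality_winner(w):
    # agreement[v, c] = sum over issues of w[v, i] * 1[stances match]
    match = voter_stances[:, None, :] == cand_stances[None, :, :]
    agreement = (w[:, None, :] * match).sum(axis=2)
    votes = agreement.argmax(axis=1)             # each voter's favorite
    return np.bincount(votes, minlength=n_cands).argmax()

manipulated = weights.copy()
manipulated[:, 0] *= 10.0    # talk up issue 0 until it dominates
print("before:", plurality_winner(weights),
      "after:", plurality_winner(manipulated))
```

Deciding whether some perturbation of the weights flips the winner is the manipulation problem shown here to be NP-hard in general.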
Mechanism design has traditionally assumed that the set of participants is fixed and known to the mechanism (the market owner) in advance. In practice, however, the market owner can directly reach only a small number of participants (her neighbours). Hence the owner often needs costly promotions to recruit more participants in order to achieve desirable outcomes such as social welfare or revenue maximization. In this paper, we propose to incentivize existing participants to invite their neighbours, thereby attracting more participants. The difficulty is that participants would not invite each other if they are competitors. We discuss how to utilize the conflict of interest between the participants to incentivize them to invite each other and form larger markets. We highlight the early solutions and open the floor for discussion of the fundamental open questions in the settings of auctions, coalitional games, matching, and voting.
Many facts are learned through the intermediation of individuals with special access to information, such as law enforcement officers, officials with a security clearance, or experts with specific knowledge. This paper considers whether societies can learn about such facts when information is cheap to manipulate and produced sequentially, and when these individuals are devoid of ethical motives. The answer depends on an information attrition condition pertaining to the amount of evidence available, which distinguishes, for example, between reproducible scientific evidence and the evidence generated in a crime. Applications to institutional enforcement, social cohesion, scientific progress, and historical revisionism are discussed.
We use data on tenured and tenure-track faculty at ten public and private mathematics departments of various tiers in the United States as a case study to demonstrate the statistical and mathematical relationships among several variables, e.g., the number of publications and citations, professorial rank, and AMS Fellow status. We first conduct an exploratory data analysis of the departments. We then apply various statistical tools, including regression, artificial neural networks, and unsupervised learning, and compare the results obtained from the different methods. We conclude that, with more advanced models, it may be possible to design an automatic promotion algorithm that has the potential to be fairer, more efficient, and more consistent than the human approach.
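As a hedged sketch of the kind of supervised step described (entirely synthetic data and made-up feature names; the paper's dataset and models are its own), one could regress a promotion label on publication and citation counts:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in data (ours): publication/citation counts and a toy
# "full professor" label that correlates with research output.
rng = np.random.default_rng(42)
n = 200
publications = rng.poisson(40, n)
citations = rng.poisson(600, n)
score = publications + citations / 20 + rng.normal(0, 10, n)
is_full_prof = score > 70          # toy promotion label

X = np.column_stack([publications, citations])
model = make_pipeline(StandardScaler(), LogisticRegression())
acc = cross_val_score(model, X, is_full_prof, cv=5).mean()
print(f"cross-validated accuracy on toy data: {acc:.2f}")
```

The same template extends to the other tools mentioned (e.g., swapping in a small neural network or a clustering step), though any fairness or consistency claim would rest on the real faculty data, not a toy like this.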