Due to the Covid-19 pandemic, more than 500 US-based colleges and universities went test-optional for admissions and promised that they would not penalize applicants for not submitting test scores, part of a longer trend to rethink the role of testing in college admissions. However, it remains unclear how (and whether) a college can simultaneously use test scores for those who submit them while not penalizing those who do not, and what that promise even means. We formalize these questions and study how a college can overcome two challenges of optional testing: $\textit{strategic applicants}$ (those with low test scores can pretend not to have taken the test) and $\textit{informational gaps}$ (the college has more information on those who submit a test score than on those who do not). We find that colleges can indeed do so, if and only if they are able to use information on who has test access and are willing to randomize admissions.
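A minimal sketch of the strategic-withholding issue described above, under purely hypothetical assumptions (the parameter P_BLANK, the 70% test-access rate, and the score-equals-admission-probability rule are all our own illustrative choices, not the paper's model): when a naive college admits blank applications with a fixed probability, low-scoring applicants pool with applicants who lack test access.

    import random

    # Hypothetical toy model: applicants either have test access and a score in
    # [0, 1], or no access at all. A naive college admits submitters with
    # probability equal to their score and every non-submitter with P_BLANK.
    P_BLANK = 0.5

    def submits(has_access: bool, score: float) -> bool:
        """A strategic applicant submits only if the score beats P_BLANK."""
        return has_access and score >= P_BLANK

    random.seed(0)
    applicants = [(random.random() < 0.7, random.random()) for _ in range(10_000)]
    no_access = sum(1 for access, _ in applicants if not access)
    hiding = sum(1 for access, s in applicants if access and not submits(access, s))
    print(f"non-submitters without test access: {no_access}")
    print(f"non-submitters hiding a low score:  {hiding}")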
It is common in multiagent systems to make a distinction between strategic behavior and other forms of intentional but nonstrategic behavior: typically, that strategic agents model other agents while nonstrategic agents do not. However, a crisp boundary between these concepts has proven elusive. This problem is pervasive throughout the game-theoretic literature on bounded rationality and particularly critical in parts of the behavioral game theory literature that make an explicit distinction between the behavior of nonstrategic level-0 agents and strategic higher-level agents (e.g., the level-k and cognitive hierarchy models). Overall, work discussing bounded rationality rarely gives clear guidance on how the rationality of nonstrategic agents must be bounded, instead typically just singling out specific decision rules and informally asserting them to be nonstrategic (e.g., truthfully revealing private information; randomizing uniformly). In this work, we propose a new, formal characterization of nonstrategic behavior. Our main contribution is to show that it satisfies two properties: (1) it is general enough to capture all purportedly nonstrategic decision rules of which we are aware in the behavioral game theory literature; (2) behavior that obeys our characterization is distinct from strategic behavior in a precise sense.
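For context, a small sketch of the level-k hierarchy mentioned above, under the common (but here merely illustrative) specification in which level-0 agents randomize uniformly and a level-k agent pure-best-responds to a level-(k-1) opponent:

    import numpy as np

    def level_k_strategy(payoff_self, payoff_opp, k):
        """Mixed strategy of a level-k row player in a bimatrix game.

        Level 0 randomizes uniformly (one purportedly nonstrategic rule);
        level k best-responds to a level-(k-1) opponent. This is one common
        specification, used purely for illustration.
        """
        n_rows, _ = payoff_self.shape
        if k == 0:
            return np.full(n_rows, 1.0 / n_rows)
        # The opponent's level-(k-1) strategy, from the opponent's point of view.
        opp = level_k_strategy(payoff_opp.T, payoff_self.T, k - 1)
        expected = payoff_self @ opp            # expected payoff of each row
        best = np.zeros(n_rows)
        best[np.argmax(expected)] = 1.0         # pure best response
        return best

    # Example: matching pennies.
    A = np.array([[1, -1], [-1, 1]])
    for k in range(4):
        print(k, level_k_strategy(A, -A, k))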
We study the dynamic pricing problem faced by a monopolistic retailer who sells a storable product to forward-looking consumers. In this framework, the two major pricing policies (or mechanisms) studied in the literature are the preannounced (commitment) pricing policy and the contingent (threat or history-dependent) pricing policy. We analyse and compare these pricing policies in the setting where the good can be purchased over a finite time horizon in indivisible atomic quantities. First, we show that, given linear storage costs, the retailer can compute an optimal preannounced pricing policy in polynomial time by solving a dynamic program. Moreover, under such a policy, we show that consumers do not need to store units in order to anticipate price rises. Second, we show that, under the contingent pricing policy rather than the preannounced pricing mechanism, (i) prices can be lower, (ii) retailer revenues can be higher, and (iii) consumer surplus can be higher. This result is surprising, in that these three facts are in complete contrast to the case of a retailer selling divisible storable goods (Dudine et al., 2006). Third, we quantify exactly how much more profitable a contingent policy can be relative to a preannounced policy. Specifically, for a market with $N$ consumers, a contingent policy can produce a multiplicative factor of $\Omega(\log N)$ more revenue than a preannounced policy, and this bound is tight.
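A small illustrative sketch, not the paper's dynamic program: under a preannounced price path with a linear storage cost of c per unit per period, a forward-looking consumer who wants a unit in period t pays the cheapest "effective" price over all earlier purchase dates, and storing only pays off when an announced price rise exceeds the storage cost.

    def effective_prices(prices, c):
        """effective[t] = min over s <= t of prices[s] + c * (t - s)."""
        effective, best = [], float("inf")
        for p in prices:
            # Carrying the best earlier option forward one period costs c;
            # compare it with simply buying at today's announced price.
            best = min(best + c, p)
            effective.append(best)
        return effective

    prices = [4.0, 7.0, 5.0, 9.0]            # hypothetical preannounced path
    print(effective_prices(prices, c=1.0))   # [4.0, 5.0, 5.0, 6.0]: storage used
    print(effective_prices(prices, c=10.0))  # storage too costly: buy on demand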
The question of how people vote strategically under uncertainty has attracted much attention in several disciplines. Theoretical decision models have been proposed which vary in their assumptions about the sophistication of the voters and about the information made available to them about others' preferences and voting behavior. This work focuses on modeling strategic voting behavior under poll information. It proposes a new heuristic for voting behavior that weighs each candidate's chance of success, as indicated by the poll score, against the utility of the candidate given the voter's preferences. The model weights can be tuned individually for each voter. We compare this model with other relevant voting models from the literature on data obtained from a recently released large-scale study. We show that the new heuristic outperforms all other tested models. The remaining prediction errors can be partly explained by inconsistent voters who vote for (weakly) dominated candidates.
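A hedged sketch of a heuristic of this flavor; the multiplicative functional form and the per-voter weight lam are our own illustrative choices, not necessarily the model proposed in the paper:

    def heuristic_vote(utilities, poll_shares, lam):
        """Vote for the candidate maximizing poll_share**lam * utility.

        lam = 0 recovers truthful voting (the poll is ignored); larger lam
        makes the voter lean toward candidates the poll deems viable.
        """
        scores = {c: (poll_shares[c] ** lam) * utilities[c] for c in utilities}
        return max(scores, key=scores.get)

    utilities   = {"A": 1.0, "B": 0.6, "C": 0.2}     # this voter's preferences
    poll_shares = {"A": 0.10, "B": 0.45, "C": 0.45}  # published poll

    for lam in (0.0, 0.5, 2.0):
        print(lam, heuristic_vote(utilities, poll_shares, lam))  # A, B, B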
It is known that there are uncoupled learning heuristics that lead to Nash equilibrium in all finite games. Why should players use such learning heuristics, and where could they come from? We show that there is no uncoupled learning heuristic leading to Nash equilibrium in all finite games that a player has an incentive to adopt, that would be evolutionarily stable, or that could learn itself. Rather, a player has an incentive to strategically teach such a learning opponent in order to secure at least the Stackelberg leader payoff. The impossibility result remains intact when restricted to the classes of generic games, two-player games, potential games, games with strategic complements, or $2\times 2$ games, in which learning is known to behave well. More generally, it also applies to uncoupled learning heuristics that lead to correlated equilibria, rationalizable outcomes, iterated admissible outcomes, or minimal curb sets. A possibility result restricted to strategically trivial games fails if some generic games outside this class are considered as well.
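To make the Stackelberg benchmark concrete, a minimal sketch of the pure-commitment leader payoff in a finite two-player game, under the common tie-breaking convention that the follower resolves indifference in the leader's favor (an illustrative convention, not necessarily the paper's):

    import numpy as np

    def pure_stackelberg_value(leader_payoff, follower_payoff):
        """Leader's payoff from committing to a pure action when the follower
        best-responds, breaking ties in the leader's favor."""
        best = -np.inf
        for a in range(leader_payoff.shape[0]):
            follower_best = follower_payoff[a].max()
            responses = np.flatnonzero(follower_payoff[a] == follower_best)
            best = max(best, leader_payoff[a, responses].max())
        return best

    # In this game the unique Nash equilibrium gives the row player 2, but a
    # leader who teaches a best-responding learner to expect row 1 secures 3.
    L = np.array([[2, 4], [1, 3]])
    F = np.array([[1, 0], [0, 1]])
    print(pure_stackelberg_value(L, F))  # 3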
The phenomenon of residential segregation was captured by Schelling's famous segregation model, in which two types of agents are placed on a grid and an agent is content with her location if the fraction of her neighbors who have the same type as her is at least $\tau$, for some $0<\tau<1$. Discontent agents simply swap their location with a randomly chosen other discontent agent or jump to a random empty cell. We analyze a generalized game-theoretic model of Schelling segregation that allows more than two agent types and more general underlying graphs modeling the residential area. We show that both aspects heavily influence the dynamic properties and the tractability of finding an optimal placement. We map the boundary of when improving response dynamics (IRD), i.e., the natural approach for finding equilibrium states, are guaranteed to converge. To this end, we prove several sharp threshold results where guaranteed IRD convergence suddenly turns into the strongest possible non-convergence result: a violation of weak acyclicity. In particular, we show such threshold results also for Schelling's original model, which is in contrast to the standard assumption in many empirical papers. Furthermore, we show that in case of convergence, IRD find an equilibrium in $\mathcal{O}(m)$ steps, where $m$ is the number of edges in the underlying graph, and that this bound is met in empirical simulations starting from random initial agent placements.
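A toy simulation of improving response dynamics in a two-type, jump-only Schelling game on a grid (a simplification of the general graphs, multiple types, and swap moves studied above); it merely counts how many improving moves occur before the dynamics stops:

    import random

    random.seed(1)
    N, TAU = 10, 0.5
    # Roughly one third of the cells are left empty; types are 0 and 1.
    grid = {(x, y): random.choice([0, 1, None]) for x in range(N) for y in range(N)}

    def neighbors(cell):
        x, y = cell
        return [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if (x + dx, y + dy) in grid]

    def same_type_fraction(cell, agent_type):
        occupied = [n for n in neighbors(cell) if grid[n] is not None]
        if not occupied:
            return 1.0  # no occupied neighbors: trivially content
        return sum(grid[n] == agent_type for n in occupied) / len(occupied)

    steps = 0
    for _ in range(1000):  # safety cap; the dynamics typically stops far earlier
        improved = False
        for cell in list(grid):
            t = grid[cell]
            if t is None:
                continue
            current = same_type_fraction(cell, t)
            if current >= TAU:
                continue  # content agents do not move
            grid[cell] = None  # evaluate destinations as if the agent had left
            empties = [c for c in grid if grid[c] is None and c != cell]
            target = max(empties, key=lambda c: same_type_fraction(c, t))
            if same_type_fraction(target, t) > current:
                grid[target] = t  # improving jump
                steps += 1
                improved = True
            else:
                grid[cell] = t  # no improving move available: stay put
        if not improved:
            break

    m = sum(len(neighbors(c)) for c in grid) // 2  # edges of the grid graph
    print(f"improving moves performed: {steps}; edges m = {m}")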