The classic paper of Shapley and Shubik \cite{Shapley1971assignment} characterized the core of the assignment game using ideas from matching theory and LP-duality theory and their highly non-trivial interplay. Whereas the core of this game is always non-empty, that of the general graph matching game can be empty. This paper salvages the situation by giving an imputation in the $\frac{2}{3}$-approximate core for the latter. This bound is best possible, since it is the integrality gap of the natural underlying LP. Our profit allocation method goes further: the multiplier on the profit of an agent is often better than $\frac{2}{3}$ and lies in the interval $[\frac{2}{3}, 1]$, depending on how severely constrained the agent is. Next, we provide new insights showing how discerning core imputations of assignment games are by studying them through the lens of complementary slackness. We present a relationship between the competitiveness of individuals and teams of agents and the amount of profit they accrue in imputations that lie in the core, where by {\em competitiveness} we mean whether an individual or a team is matched in every/some/no maximum matching. This also sheds light on the phenomenon of degeneracy in assignment games, i.e., when the maximum weight matching is not unique. The core is a quintessential solution concept in cooperative game theory. It contains all ways of distributing the total worth of a game among agents in such a way that no sub-coalition has an incentive to secede from the grand coalition. Our imputation, in the $\frac{2}{3}$-approximate core, implies that a sub-coalition will gain at most a factor of $3/2$ by seceding, and less in typical cases.
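For orientation, the LP-duality setup underlying these results can be sketched as follows (a standard formulation, with notation chosen here for illustration rather than taken from the paper): for a graph $G = (V, E)$ with edge weights $w$, the fractional matching LP and its dual are
\[
\max \sum_{e \in E} w_e x_e \ \text{ s.t. } \ \sum_{e \ni v} x_e \le 1 \ \ \forall v \in V, \ \ x \ge 0,
\qquad
\min \sum_{v \in V} y_v \ \text{ s.t. } \ y_u + y_v \ge w_{uv} \ \ \forall (u, v) \in E, \ \ y \ge 0.
\]
In the bipartite (assignment) case, Shapley and Shubik showed that core imputations are precisely the optimal dual solutions $y$; for general graphs this relaxation has integrality gap $\frac{2}{3}$, which is the source of the approximation factor above.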
We describe our experience with designing and running a matching market for the Israeli Mechinot gap-year programs. The main conceptual challenge in the design of this market was the rich set of diversity considerations, which necessitated the development of an appropriate preference-specification language along with corresponding choice-function semantics, which we also theoretically analyze. Our contribution extends the existing toolbox for two-sided matching with soft constraints. This market was run for the first time in January 2018 and matched 1,607 candidates (out of a total of 3,120 candidates) to 35 different programs, has been run twice more since, and has been adopted by the Joint Council of the Mechinot gap-year programs for the foreseeable future.
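To give a flavor of choice functions with soft diversity constraints (the actual preference-specification language and choice-function semantics for the Mechinot market are developed in the paper; the sketch below is only a generic illustration with hypothetical names), a program with a fixed capacity and soft reserved seats per demographic category might select candidates as follows.

# Illustrative choice function with soft reserves (not the paper's semantics).
# 'candidates' are ordered by the program's priority; 'reserves' maps a category
# to a soft lower bound. Reserved seats are filled first by the highest-priority
# candidates of each category; unfilled reserved seats revert to open seats.
# Assumes the total reserved quota does not exceed the capacity.
def choose(candidates, category, capacity, reserves):
    chosen, remaining = [], list(candidates)
    for cat, quota in reserves.items():
        picked = [c for c in remaining if category[c] == cat][:quota]
        chosen.extend(picked)
        remaining = [c for c in remaining if c not in picked]
    open_seats = capacity - len(chosen)
    chosen.extend(remaining[:max(open_seats, 0)])
    return chosen

# Example: capacity 3, one seat softly reserved for category 'B'.
print(choose(["a1", "a2", "b1", "a3"],
             {"a1": "A", "a2": "A", "a3": "A", "b1": "B"},
             capacity=3, reserves={"B": 1}))  # ['b1', 'a1', 'a2']

The "soft" aspect is that a reserve never blocks seats: if too few candidates of a reserved category apply, the leftover seats are filled by priority order.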
The Arrow-Debreu extension of the classic Hylland-Zeckhauser scheme for a one-sided matching market -- called ADHZ in this paper -- has natural applications but has instances which do not admit equilibria. By introducing approximation, we define the $\epsilon$-approximate ADHZ model, and we give the following results.
* Existence of equilibrium under linear utility functions. We prove that the equilibrium satisfies Pareto optimality, approximate envy-freeness, and approximate weak core stability.
* A combinatorial polynomial-time algorithm for an $\epsilon$-approximate ADHZ equilibrium for the case of dichotomous, and more generally bi-valued, utilities.
* An instance of ADHZ, with dichotomous utilities and a strongly connected demand graph, which does not admit an equilibrium.
Since computing an equilibrium for HZ is likely to be highly intractable and because of the difficulty of extending HZ to more general utility functions, Hosseini and Vazirani proposed (a rich collection of) Nash-bargaining-based matching market models. For the dichotomous-utilities case of their model, linear Arrow-Debreu Nash bargaining one-sided matching market (1LAD), we give a combinatorial, strongly polynomial-time algorithm and show that it admits a rational convex program.
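For context on the Nash-bargaining-based models referenced above, a linear one-sided matching market with agents $A$, goods $G$, utilities $u_{ij}$, and zero disagreement utilities leads to a convex program of roughly the following form (a generic sketch of the Nash bargaining objective over the fractional-matching polytope, not necessarily the exact 1LAD formulation):
\[
\max \ \sum_{i \in A} \log \Big( \sum_{j \in G} u_{ij} x_{ij} \Big)
\quad \text{s.t.} \quad
\sum_{j \in G} x_{ij} \le 1 \ \ \forall i \in A, \qquad
\sum_{i \in A} x_{ij} \le 1 \ \ \forall j \in G, \qquad
x \ge 0.
\]
Informally, saying that 1LAD admits a rational convex program means that a program of this kind always has a rational optimal solution, despite the nonlinear objective.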
The problem of matching a query string to a directed graph, whose vertices are labeled by strings, has applications in different fields, from data mining to computational biology. Several variants of the problem have been considered, depending on whether the match is exact or approximate and, in the latter case, on which edit operations are considered and where they are allowed. In this paper we present results on the complexity of the approximate matching problem, where edit operations are symbol substitutions and are allowed either only on the graph labels or on both the graph labels and the query string. We introduce a variant of the problem that asks whether there exists a path in a graph that represents a query string with any number of edit operations, and we show that it is NP-complete, even when labels have length one and the alphabet is binary. Moreover, when the problem is parameterized by the length of the input string and graph labels have length one, we show that it is fixed-parameter tractable and unlikely to admit a polynomial kernel. The NP-completeness of this problem leads to the inapproximability (within any factor) of approximate matching when edit operations are allowed only on the graph labels. Moreover, we show that the variants of approximate string matching to graphs we consider are not fixed-parameter tractable when the parameter is the number of edit operations, even for graphs that have distance one from a DAG. The reduction for this latter result allows us to prove the inapproximability of the variant where edit operations can be applied to both the query string and the graph labels.
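For contrast with the hard variants above, the variant in which substitutions are allowed only on the query string (and each vertex carries a single character) is solvable in polynomial time by a standard dynamic program over (string position, vertex) pairs. The sketch below is a generic illustration of that baseline, not a construction from the paper.

# Minimum number of symbol substitutions on the query string s so that some path
# in a directed graph with single-character vertex labels spells s.
# Runs in O(|s| * (|V| + |E|)); assumes s is non-empty.
def min_substitutions(s, labels, adj):
    # labels: dict vertex -> single character; adj: dict vertex -> list of successors
    INF = float("inf")
    # dp[v] = min substitutions needed to spell s[:i+1] along a path ending at v
    dp = {v: (0 if labels[v] == s[0] else 1) for v in labels}
    for i in range(1, len(s)):
        new_dp = {v: INF for v in labels}
        for u in labels:
            if dp[u] == INF:
                continue
            for v in adj.get(u, []):
                cost = dp[u] + (0 if labels[v] == s[i] else 1)
                if cost < new_dp[v]:
                    new_dp[v] = cost
        dp = new_dp
    return min(dp.values(), default=INF)  # INF means no path of length |s| exists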
The attribution problem, that is, the problem of attributing a model's prediction to its base features, is well-studied. We extend the notion of attribution to also apply to feature interactions. The Shapley value is a commonly used method to attribute a model's prediction to its base features. We propose a generalization of the Shapley value called the Shapley-Taylor index that attributes the model's prediction to interactions of subsets of features up to some size k. The method is analogous to how the truncated Taylor series decomposes the function value at a certain point using its derivatives at a different point. In fact, we show that the Shapley-Taylor index is equal to the Taylor series of the multilinear extension of the set-theoretic behavior of the model. We axiomatize this method using the standard Shapley axioms -- linearity, dummy, symmetry and efficiency -- and an additional axiom that we call the interaction distribution axiom. This new axiom explicitly characterizes how interactions are distributed for a class of functions that model pure interaction. We contrast the Shapley-Taylor index against the previously proposed Shapley interaction index (cf. [9]) from the cooperative game theory literature. We also apply the Shapley-Taylor index to three models and identify interesting qualitative insights.
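For reference, the classical Shapley value being generalized here can be computed exactly for a small set function by direct enumeration of subsets; the snippet below is a generic illustration of that classical formula (it computes plain Shapley values, not the Shapley-Taylor index).

# Exact Shapley values of a set function v over n players, by subset enumeration:
# phi_i = sum over S not containing i of |S|!(n-|S|-1)!/n! * (v(S + {i}) - v(S)).
from itertools import combinations
from math import factorial

def shapley_values(v, n):
    phi = [0.0] * n
    players = range(n)
    for i in players:
        others = [j for j in players if j != i]
        for size in range(len(others) + 1):
            weight = factorial(size) * factorial(n - size - 1) / factorial(n)
            for S in combinations(others, size):
                phi[i] += weight * (v(frozenset(S) | {i}) - v(frozenset(S)))
    return phi

# Example: a pure interaction between features 0 and 1 plus a main effect of 2.
v = lambda S: (1.0 if {0, 1} <= S else 0.0) + (0.5 if 2 in S else 0.0)
print(shapley_values(v, 3))  # approximately [0.5, 0.5, 0.5]

The Shapley value splits the pure interaction evenly between features 0 and 1; the Shapley-Taylor index is designed to attribute such terms to the interacting subsets themselves.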
Recent advances in multi-task peer prediction have greatly expanded our knowledge about the power of multi-task peer prediction mechanisms. Various mechanisms have been proposed in different settings to elicit different types of information. But we still lack an understanding of when desirable mechanisms exist for a multi-task peer prediction problem. In this work, we study the elicitability of multi-task peer prediction problems. We consider a designer who has certain knowledge about the underlying information structure and wants to elicit certain information from a group of participants. Our goal is to infer the possibility of having a desirable mechanism based on the primitives of the problem. Our contribution is twofold. First, we provide a characterization of the elicitable multi-task peer prediction problems, assuming that the designer only uses scoring mechanisms, i.e., mechanisms that reward participants' reports for different tasks separately. The characterization uses a geometric approach based on the power diagram characterization in the single-task setting [Lambert and Shoham, 2009; Frongillo and Witkowski, 2017]. For general mechanisms, we also give a necessary condition for a multi-task problem to be elicitable. Second, we consider the case when the designer aims to elicit properties that are linear in the participants' posterior about the state of the world. We first show that in some cases, the designer essentially can only elicit the posterior itself. We then look into the case when the designer aims to elicit the participants' posteriors. We give a necessary condition for the posterior to be elicitable. This condition implies that the mechanisms proposed by Kong and Schoenebeck are already the best we can hope for in their setting, in the sense that their mechanisms can solve any problem instance that can possibly be elicited.
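As background for the geometric approach mentioned above: a power diagram with sites $c_1, \dots, c_m$ and weights $w_1, \dots, w_m$ partitions the belief space into cells of the form
\[
\mathrm{cell}(i) = \{\, p : \|p - c_i\|^2 - w_i \le \|p - c_j\|^2 - w_j \ \ \forall j \,\},
\]
and, in the single-task setting, a finite-valued property is elicitable by a truthful scoring mechanism exactly when the sets of beliefs mapped to each report form such a diagram. This display is a standard definition given only for orientation; the paper's multi-task characterization builds on it.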