
Finding Stable Matchings that are Robust to Errors in the Input

Published by: Tung Mai
Publication date: 2018
Research field: Informatics Engineering
Paper language: English





We study the problem of finding solutions to the stable matching problem that are robust to errors in the input and we obtain a polynomial time algorithm for a special class of errors. In the process, we also initiate work on a new structural question concerning the stable matching problem, namely finding relationships between the lattices of solutions of two nearby instances. Our main algorithmic result is the following: We identify a polynomially large class of errors, $D$, that can be introduced in a stable matching instance. Given an instance $A$ of stable matching, let $B$ be the random variable that represents the instance that results after introducing \emph{one} error from $D$, chosen via a given discrete probability distribution. The problem is to find a stable matching for $A$ that maximizes the probability of being stable for $B$ as well. Via new structural properties of the type described in the question stated above, we give a combinatorial polynomial time algorithm for this problem. We also show that the set of robust stable matchings for instance $A$, under probability distribution $p$, forms a sublattice of the lattice of stable matchings for $A$. We give an efficient algorithm for finding a succinct representation for this set; this representation has the property that any member of the set can be efficiently retrieved from it.
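To make the objective concrete, here is a minimal Python sketch, not the paper's combinatorial polynomial-time algorithm: it computes one stable matching of the instance $A$ with Gale-Shapley and then scores it by the probability that it stays stable after a single error. The error class used here (swapping two adjacent entries in one preference list) and all function names are illustrative assumptions, not the class $D$ defined in the paper.

```python
# Hypothetical sketch: score a stable matching of instance A by the
# probability that it remains stable after one random error.
# The "error" below (swapping two adjacent entries in one man's list)
# is only a stand-in for the error class D studied in the paper.

def gale_shapley(men_pref, women_pref):
    """Men-proposing Gale-Shapley; returns a dict mapping man -> woman."""
    free = list(men_pref)
    next_choice = {m: 0 for m in men_pref}
    engaged = {}  # woman -> man
    rank = {w: {m: i for i, m in enumerate(p)} for w, p in women_pref.items()}
    while free:
        m = free.pop()
        w = men_pref[m][next_choice[m]]
        next_choice[m] += 1
        if w not in engaged:
            engaged[w] = m
        elif rank[w][m] < rank[w][engaged[w]]:
            free.append(engaged[w])
            engaged[w] = m
        else:
            free.append(m)
    return {m: w for w, m in engaged.items()}

def is_stable(matching, men_pref, women_pref):
    """True iff no (man, woman) pair blocks the matching."""
    husband = {w: m for m, w in matching.items()}
    w_rank = {w: {m: i for i, m in enumerate(p)} for w, p in women_pref.items()}
    for m, prefs in men_pref.items():
        for w in prefs:
            if w == matching[m]:
                break  # m prefers his own partner to everyone below this point
            if w_rank[w][m] < w_rank[w][husband[w]]:
                return False  # (m, w) is a blocking pair
    return True

def robustness(matching, men_pref, women_pref, errors):
    """Probability that `matching` stays stable after one random error.
    `errors` maps (man, position) -> probability of swapping the entries
    at `position` and `position + 1` in that man's preference list."""
    total = 0.0
    for (man, pos), prob in errors.items():
        perturbed = {m: list(p) for m, p in men_pref.items()}
        perturbed[man][pos], perturbed[man][pos + 1] = (
            perturbed[man][pos + 1], perturbed[man][pos])
        if is_stable(matching, perturbed, women_pref):
            total += prob
    return total

men = {"m1": ["w1", "w2"], "m2": ["w1", "w2"]}
women = {"w1": ["m2", "m1"], "w2": ["m1", "m2"]}
M = gale_shapley(men, women)
print(M, robustness(M, men, women, {("m1", 0): 0.5, ("m2", 0): 0.5}))
```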




Read also

We show that the ratio of matched individuals to blocking pairs grows linearly with the number of propose--accept rounds executed by the Gale--Shapley algorithm for the stable marriage problem. Consequently, the participants can arrive at an almost stable matching even without full information about the problem instance; for each participant, knowing only its local neighbourhood is enough. In distributed-systems parlance, this means that if each person has only a constant number of acceptable partners, an almost stable matching emerges after a constant number of synchronous communication rounds. This holds even if ties are present in the preference lists. We apply our results to give a distributed $(2+\epsilon)$-approximation algorithm for maximum-weight matching in bicoloured graphs and a centralised randomised constant-time approximation scheme for estimating the size of a stable matching.
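The local-information claim can be illustrated with the following Python sketch of a round-limited, synchronous Gale-Shapley run together with a blocking-pair count; the function names and the complete, tie-free preference lists are assumptions made only for the illustration, and the sketch is not the distributed algorithm analysed in the paper.

```python
# Illustrative sketch: stop Gale-Shapley after a fixed number of synchronous
# propose--accept rounds and count how many blocking pairs remain.

def truncated_gale_shapley(men_pref, women_pref, rounds):
    """Run `rounds` synchronous rounds; returns a (possibly unstable)
    partial matching as a dict man -> woman."""
    next_choice = {m: 0 for m in men_pref}
    engaged = {}  # woman -> man
    w_rank = {w: {m: i for i, m in enumerate(p)} for w, p in women_pref.items()}
    for _ in range(rounds):
        matched_men = set(engaged.values())
        proposals = {}
        for m in men_pref:  # all currently free men propose in parallel
            if m not in matched_men and next_choice[m] < len(men_pref[m]):
                w = men_pref[m][next_choice[m]]
                next_choice[m] += 1
                proposals.setdefault(w, []).append(m)
        for w, suitors in proposals.items():  # each woman keeps her best offer
            best = min(suitors, key=lambda m: w_rank[w][m])
            if w not in engaged or w_rank[w][best] < w_rank[w][engaged[w]]:
                engaged[w] = best
    return {m: w for w, m in engaged.items()}

def blocking_pairs(matching, men_pref, women_pref):
    """All (man, woman) pairs that block the given partial matching."""
    w_rank = {w: {m: i for i, m in enumerate(p)} for w, p in women_pref.items()}
    husband = {w: m for m, w in matching.items()}
    blocks = []
    for m, prefs in men_pref.items():
        for w in prefs:
            if w == matching.get(m):
                break  # m does not prefer anyone ranked below his partner
            if w not in husband or w_rank[w][m] < w_rank[w][husband[w]]:
                blocks.append((m, w))
    return blocks
```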
Moran Feldman, Ariel Szarf (2021)
The problem of finding a maximum size matching in a graph (known as the maximum matching problem) is one of the most classical problems in computer science. Despite a significant body of work dedicated to the study of this problem in the data stream model, the state-of-the-art single-pass semi-streaming algorithm for it is still a simple greedy algorithm that computes a maximal matching, and this way obtains a 1/2-approximation. Some previous works described two/three-pass algorithms that improve over this approximation ratio by using their second and third passes to improve the above-mentioned maximal matching. One contribution of this paper continues this line of work by presenting new three-pass semi-streaming algorithms that work along these lines and obtain improved approximation ratios of 0.6111 and 0.5694 for triangle-free and general graphs, respectively. Unfortunately, a recent work (Konrad and Naidu, 2021) shows that the strategy of constructing a maximal matching in the first pass and then improving it in further passes has limitations. Additionally, this technique is unlikely to get us closer to single-pass semi-streaming algorithms obtaining a better than 1/2-approximation. Therefore, it is interesting to come up with algorithms that do something else with their first pass (we term such algorithms non-maximal-matching-first algorithms). No such algorithms are currently known (to the best of our knowledge), and the main contribution of this paper is describing such algorithms that obtain approximation ratios of 0.5384 and 0.5555 in two and three passes, respectively, for general graphs (the result for three passes improves over the previous state-of-the-art, but is worse than the result of this paper mentioned in the previous paragraph for general graphs).
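For reference, the single-pass greedy baseline that the multi-pass algorithms improve on is easy to state in code; the sketch below (with hypothetical names) keeps a streamed edge whenever both endpoints are still unmatched, which yields a maximal matching and hence a 1/2-approximation.

```python
# Single-pass greedy baseline: keep an edge if both endpoints are unmatched.
# The resulting maximal matching has at least half the maximum-matching size.

def greedy_maximal_matching(edge_stream):
    matched = set()
    matching = []
    for u, v in edge_stream:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching

# Example on the path a-b-c-d: the greedy pass keeps (a, b) and (c, d).
print(greedy_maximal_matching([("a", "b"), ("b", "c"), ("c", "d")]))
```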
We show fully polynomial time randomized approximation schemes (FPRAS) for counting matchings of a given size, or more generally sampling/counting monomer-dimer systems in planar, not-necessarily-bipartite, graphs. While perfect matchings on planar graphs can be counted exactly in polynomial time, counting non-perfect matchings was shown to be #P-hard by [Jer87], who also raised the question of whether efficient approximate counting is possible. We answer this affirmatively by showing that the multi-site Glauber dynamics on the set of monomers in a monomer-dimer system always mixes rapidly, and that this dynamics can be implemented efficiently on downward-closed families of graphs where counting perfect matchings is tractable. As further applications of our results, we show how to sample efficiently using multi-site Glauber dynamics from partition-constrained strongly Rayleigh distributions, and nonsymmetric determinantal point processes. In order to analyze mixing properties of the multi-site Glauber dynamics, we establish two notions for generating polynomials of discrete set-valued distributions: sector-stability and fractional log-concavity. These notions generalize well-studied properties like real-stability and log-concavity, but, unlike them, robustly degrade under useful transformations applied to the distribution. We relate these notions to pairwise correlations in the underlying distribution and the notion of spectral independence introduced by [ALO20], providing a new tool for establishing spectral independence based on geometry of polynomials. As a byproduct of our techniques, we show that polynomials avoiding roots in a sector of the complex plane must satisfy what we call fractional log-concavity; this extends a classic result established by [Gar59], who showed that homogeneous polynomials that have no roots in a half-plane must be log-concave over the positive orthant.
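As a toy illustration of the sampling problem only (this is a single-edge Metropolis chain, not the multi-site Glauber dynamics analysed in the paper, and all names are hypothetical), the sketch below samples matchings of a graph under the monomer-dimer weight $\lambda^{|M|}$.

```python
# Toy monomer-dimer sampler: a single-edge Metropolis chain whose stationary
# distribution weights a matching M proportionally to lam ** len(M).

import random

def sample_matching(edges, lam=1.0, steps=10_000, seed=0):
    rng = random.Random(seed)
    matching, covered = set(), set()
    for _ in range(steps):
        u, v = rng.choice(edges)
        if (u, v) in matching:
            if rng.random() < min(1.0, 1.0 / lam):  # propose removing the dimer
                matching.discard((u, v))
                covered -= {u, v}
        elif u not in covered and v not in covered:
            if rng.random() < min(1.0, lam):        # propose adding the dimer
                matching.add((u, v))
                covered |= {u, v}
    return matching

# Example: sample a (possibly non-perfect) matching of a 4-cycle.
print(sample_matching([("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")], lam=2.0))
```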
We provide necessary and sufficient conditions on the preferences of market participants for a unique stable matching in models of two-sided matching with non-transferable utility. We use the process of iterated deletion of unattractive alternatives (IDUA), a formalisation of the reduction procedure in Balinski and Ratier (1997), and we show that an instance of the matching problem possesses a unique stable matching if and only if IDUA collapses each participant's preference list to a singleton. (This is in a sense the matching problem analog of a strategic game being dominance solvable.)
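The IDUA reduction itself is not reproduced here, but uniqueness can also be tested with a standard fact about the lattice of stable matchings: the stable matching is unique exactly when the man-optimal and woman-optimal stable matchings coincide. The Python sketch below (with hypothetical names) implements that simpler, equivalent check rather than the IDUA procedure.

```python
# Simple uniqueness test: run Gale-Shapley from both sides and compare the
# two proposer-optimal matchings; they coincide iff the stable matching is
# unique. (This is an equivalent check, not the IDUA procedure.)

def gale_shapley(proposers, receivers):
    """Proposer-optimal stable matching as a dict proposer -> receiver."""
    free = list(proposers)
    nxt = {p: 0 for p in proposers}
    taken = {}  # receiver -> proposer
    rank = {r: {p: i for i, p in enumerate(lst)} for r, lst in receivers.items()}
    while free:
        p = free.pop()
        r = proposers[p][nxt[p]]
        nxt[p] += 1
        if r not in taken:
            taken[r] = p
        elif rank[r][p] < rank[r][taken[r]]:
            free.append(taken[r])
            taken[r] = p
        else:
            free.append(p)
    return {p: r for r, p in taken.items()}

def has_unique_stable_matching(men_pref, women_pref):
    man_optimal = gale_shapley(men_pref, women_pref)
    woman_optimal = gale_shapley(women_pref, men_pref)  # dict woman -> man
    return man_optimal == {m: w for w, m in woman_optimal.items()}
```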
For classification tasks, deep neural networks are prone to overfitting in the presence of label noise. Although existing methods are able to alleviate this problem at low noise levels, they encounter significant performance reduction at high noise levels, or even at medium noise levels when the label noise is asymmetric. To train classifiers that are universally robust to all noise levels, and that are not sensitive to any variation in the noise model, we propose a distillation-based framework that incorporates a new subcategory of Positive-Unlabeled learning. In particular, we shall assume that a small subset of any given noisy dataset is known to have correct labels, which we treat as positive, while the remaining noisy subset is treated as unlabeled. Our framework consists of the following two components: (1) We shall generate, via iterative updates, an augmented clean subset with additional reliable positive samples filtered from unlabeled samples; (2) We shall train a teacher model on this larger augmented clean set. With the guidance of the teacher model, we then train a student model on the whole dataset. Experiments were conducted on the CIFAR-10 dataset with synthetic label noise at multiple noise levels for both symmetric and asymmetric noise. The results show that our framework generally outperforms existing methods at medium to high noise levels. We also evaluated our framework on Clothing1M, a real-world noisy dataset, and we achieved a 2.94% improvement in accuracy over existing state-of-the-art methods.
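A heavily simplified sketch of the two components (iterative clean-set augmentation, then teacher-student training) is given below using scikit-learn classifiers as stand-ins; the classifier choice, confidence threshold, agreement-based filtering rule, and all names are illustrative assumptions, not the paper's deep-learning pipeline.

```python
# Simplified sketch: (1) grow the clean set with confidently, consistently
# labelled samples taken from the noisy pool; (2) train a teacher on the
# augmented clean set and distill it into a student trained on all data.

import numpy as np
from sklearn.linear_model import LogisticRegression

def augment_clean_set(X_clean, y_clean, X_noisy, y_noisy, threshold=0.9, rounds=3):
    Xc, yc = X_clean.copy(), y_clean.copy()
    for _ in range(rounds):
        model = LogisticRegression(max_iter=1000).fit(Xc, yc)
        proba = model.predict_proba(X_noisy)
        pred = model.classes_[proba.argmax(axis=1)]
        conf = proba.max(axis=1)
        keep = (conf >= threshold) & (pred == y_noisy)  # "reliable" samples
        if not keep.any():
            break
        Xc = np.vstack([Xc, X_noisy[keep]])
        yc = np.concatenate([yc, y_noisy[keep]])
        X_noisy, y_noisy = X_noisy[~keep], y_noisy[~keep]
    return Xc, yc

def distill(X_aug, y_aug, X_all):
    """Teacher trained on the augmented clean set; student trained on the
    whole dataset relabelled by the teacher."""
    teacher = LogisticRegression(max_iter=1000).fit(X_aug, y_aug)
    student = LogisticRegression(max_iter=1000).fit(X_all, teacher.predict(X_all))
    return student
```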