
The multi-returning secretary problem

Posted by: Jose Maria Grau
Publication date: 2020
Language: English





In this paper we consider the so-called multi-returning secretary problem, a version of the secretary problem in which each candidate has $m$ identical copies. The case $m=2$ has already been completely solved by several authors using different methods, but the case $m>2$ had not been satisfactorily solved yet. Here, we provide an efficient algorithm to compute the optimal threshold and the probability of success for every $m$. Moreover, we give a method to determine their asymptotic values based on the solution of a system of $m$ ODEs.
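The paper itself gives an exact algorithm for the optimal thresholds; as a rough, purely illustrative sketch of the setting (not the paper's method), the following Python snippet estimates by Monte Carlo the success probability of a simple single-cutoff rule for chosen values of $n$, $m$ and the cutoff. All function names and parameter values here are ours and hypothetical.

```python
import random

def cutoff_rule_success(n, m, cutoff, trials=100_000):
    """Monte Carlo estimate of the success probability of a single-cutoff rule
    for the multi-returning secretary setting: n candidates, each appearing as
    m identical copies, arrive in uniformly random order.  The rule rejects the
    first `cutoff` items and then accepts the first item at least as good as
    everything seen so far.  Success means accepting a copy of the best
    candidate.  (Illustration of the setting only, not the paper's algorithm.)"""
    wins = 0
    for _ in range(trials):
        seq = [q for q in range(n) for _ in range(m)]   # m copies of each rank
        random.shuffle(seq)
        best_seen = max(seq[:cutoff]) if cutoff else -1
        chosen = None
        for q in seq[cutoff:]:
            if q >= best_seen:          # ties count: copies are identical
                chosen = q
                break
        wins += (chosen == n - 1)
    return wins / trials

if __name__ == "__main__":
    n, m = 50, 2
    for cutoff in (10, 20, 30, 40, 50):
        print(cutoff, round(cutoff_rule_success(n, m, cutoff, trials=20_000), 3))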




Read also

In the classical secretary problem, one attempts to find the maximum of an unknown and unlearnable distribution through sequential search. In many real-world searches, however, distributions are not entirely unknown and can be learned through experience. To investigate learning in such a repeated secretary problem we conduct a large-scale behavioral experiment in which people search repeatedly from fixed distributions. In contrast to prior investigations that find no evidence for learning in the classical scenario, in the repeated setting we observe substantial learning resulting in near-optimal stopping behavior. We conduct a Bayesian comparison of multiple behavioral models which shows that participants' behavior is best described by a class of threshold-based models that contains the theoretically optimal strategy. Fitting such a threshold-based model to the data reveals players' estimated thresholds to be surprisingly close to the optimal thresholds after only a small number of games.
In the secretary problem we are faced with an online sequence of elements with values. Upon seeing an element we have to make an irrevocable take-it-or-leave-it decision. The goal is to maximize the probability of picking the element of maximum value. The most classic version of the problem is that in which the elements arrive in random order and their values are arbitrary. However, by varying the available information, new interesting problems arise. Also the case in which the arrival order is adversarial instead of random leads to interesting variants that have been considered in the literature. In this paper we study both the random order and adversarial order secretary problems with an additional twist. The values are arbitrary, but before starting the online sequence we independently sample each element with a fixed probability $p$. The sampled elements become our information or history set and the game is played over the remaining elements. We call these problems the random order secretary problem with $p$-sampling (ROS$p$ for short) and the adversarial order secretary problem with $p$-sampling (AOS$p$ for short). Our main result is to obtain best possible algorithms for both problems and all values of $p$. As $p$ grows to 1 the obtained guarantees converge to the optimal guarantees in the full information case. In the adversarial order setting, the best possible algorithm turns out to be a simple fixed threshold algorithm in which the optimal threshold is a function of $p$ only. In the random order setting we prove that the best possible algorithm is characterized by a fixed sequence of time thresholds, dictating at which point in time we should start accepting a value that is both a maximum of the online sequence and has a given ranking within the sampled elements.
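To make the $p$-sampling setup concrete, here is a hedged Python sketch of a sample-threshold rule. It is not the paper's optimal AOS$p$ algorithm (whose threshold is a specific function of $p$ that we do not reproduce); for illustration only, the sketch simply uses the maximum of the sampled elements as the acceptance threshold. All names and parameters are ours.

```python
import random

def sample_threshold_rule(values, p, rng=random):
    """Each element independently joins the history set with probability p;
    the game is then played online over the remaining elements, accepting the
    first one that exceeds the largest sampled value.  Returns the accepted
    value, or None if nothing is accepted.  (Illustrative rule only.)"""
    sampled, online = [], []
    for v in values:
        (sampled if rng.random() < p else online).append(v)
    threshold = max(sampled) if sampled else float("-inf")
    for v in online:                 # arrival order of the remaining elements
        if v > threshold:
            return v
    return None

def estimate_win_probability(values, p, trials=100_000):
    """Probability that the rule above picks the overall maximum (the maximum
    counts as lost if it falls into the sample)."""
    best = max(values)
    wins = sum(sample_threshold_rule(values, p) == best for _ in range(trials))
    return wins / trials

if __name__ == "__main__":
    vals = list(range(100))          # arbitrary values in a fixed (adversarial) order
    for p in (0.1, 0.3, 0.5, 0.9):
        print(p, round(estimate_win_probability(vals, p, trials=20_000), 3))
```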
We consider the online problem in which an intermediary trades identical items with a sequence of n buyers and n sellers, each of unit demand. We assume that the values of the traders are selected by an adversary and the sequence is randomly permuted. We give competitive algorithms for two objectives: welfare and gain-from-trade.
In the ordinal Matroid Secretary Problem (MSP), elements from a weighted matroid are presented in random order to an algorithm that must incrementally select a large-weight independent set. However, the algorithm can only compare pairs of revealed elements, without using their numerical values. An algorithm is $\alpha$ probability-competitive if every element from the optimum appears with probability $1/\alpha$ in the output. We present a technique to design algorithms with strong probability-competitive ratios, improving the guarantees for almost every matroid class considered in the literature: e.g., we get ratios of 4 for graphic matroids (improving on $2e$ by Korula and Pal [ICALP 2009]) and of 5.19 for laminar matroids (improving on 9.6 by Ma et al. [THEOR COMPUT SYST 2016]). We also obtain new results for superclasses of $k$-column-sparse matroids, for hypergraphic matroids, certain gammoids and graph packing matroids, and a $1+O(\sqrt{\log\rho/\rho})$ probability-competitive algorithm for uniform matroids of rank $\rho$, based on Kleinberg's $1+O(\sqrt{1/\rho})$ utility-competitive algorithm [SODA 2005] for that class. Our second contribution consists of algorithms for the ordinal MSP on arbitrary matroids of rank $\rho$. We devise an $O(\log\rho)$ probability-competitive algorithm and an $O(\log\log\rho)$ ordinal-competitive algorithm, a weaker notion of competitiveness but one stronger than the utility variant. These are based on the $O(\log\log\rho)$ utility-competitive algorithm by Feldman et al. [SODA 2015].
We study the success probability for a variant of the secretary problem, with noisy observations and multiple offline selection. Our formulation emulates, and is motivated by, problems involving noisy selection arising in the disciplines of stochastic simulation and simulation-based optimisation. In addition, we employ the philosophy of ordinal optimisation - involving an ordinal selection rule, and a percentile notion of goal softening for the success probability. As a result, it is shown that the success probability only depends on the underlying copula of the problem. Other general properties for the success probability are also presented. Specialising to the case of Gaussian copulas, we also derive an analytic lower bound for the success probability, which may then be inverted to find sufficiently large sample sizes that guarantee a high success probability arbitrarily close to one.
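As a hedged illustration of the ingredients above (noisy observations, an ordinal selection rule, percentile goal softening, and a Gaussian copula), the following Python sketch estimates such a success probability by Monte Carlo. It is not the paper's analytic bound; the correlation model and all parameter values are our own assumptions.

```python
import numpy as np

def success_probability(n, k, top_fraction, rho, trials=20_000, seed=0):
    """Monte Carlo sketch of ordinal offline selection under noise.  True
    values X and noisy observations Y are standard normals with correlation
    rho (a Gaussian copula).  We select the k candidates with the largest
    observed Y and declare success if at least one selected candidate has a
    true value in the top `top_fraction` of the population."""
    rng = np.random.default_rng(seed)
    g = max(1, int(top_fraction * n))        # size of the "good enough" set
    wins = 0
    for _ in range(trials):
        x = rng.standard_normal(n)
        noise = rng.standard_normal(n)
        y = rho * x + np.sqrt(1.0 - rho**2) * noise   # corr(X, Y) = rho
        selected = np.argsort(y)[-k:]                 # ordinal rule: top-k by Y
        good = np.argsort(x)[-g:]                     # true top-g candidates
        wins += bool(np.intersect1d(selected, good).size)
    return wins / trials

if __name__ == "__main__":
    for rho in (0.2, 0.5, 0.8):
        print(rho, success_probability(n=100, k=10, top_fraction=0.1, rho=rho))
```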