
How Best to Handle a Dicey Situation

Posted by Rishab Nithyanand
Publication date: 2014
Research field: Informatics Engineering
Paper language: English





We introduce the Destructive Object Handling (DOH) problem, which models aspects of many real-world allocation problems, such as shipping explosive munitions, scheduling processes in a cluster with fragile nodes, re-using passwords across multiple websites, and quarantining patients during a disease outbreak. In these problems, objects must be assigned to handlers, but each object has a probability of destroying itself and all the other objects allocated to the same handler. The goal is to maximize the expected value of the objects handled successfully. We show that finding the optimal allocation is $\mathsf{NP}$-complete, even if all the handlers are identical. We present an FPTAS when the number of handlers is constant. We note in passing that the same technique also yields a first FPTAS for the weapons-target allocation problem \cite{manne_wta} with a constant number of targets. We study the structure of DOH problems and find that they have a sort of phase transition -- in some instances it is better to spread risk evenly among the handlers, in others, one handler should be used as a "sacrificial lamb". We show that the problem is solvable in polynomial time if the destruction probabilities depend only on the handler to which an object is assigned; if all the handlers are identical and the objects all have the same value; or if each handler can be assigned at most one object. Finally, we empirically evaluate several heuristics based on a combination of greedy and genetic algorithms. The proposed heuristics return fairly high quality solutions to very large problem instances (up to 250 objects and 100 handlers) in tens of seconds.
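To make the objective concrete, here is a minimal Python sketch (my own illustration, not the authors' code) that evaluates the expected value of an allocation under the model described above, where everything assigned to a handler survives only if none of its objects self-destructs, together with a simple greedy baseline in the spirit of the heuristics mentioned in the abstract. The function names `expected_value` and `greedy_allocate` and the toy instance are invented for illustration.

```python
def expected_value(values, probs, assignment, num_handlers):
    """Expected total value of a (possibly partial) allocation.

    values[i]  : value of object i
    probs[i]   : probability that object i destroys itself and everything
                 else assigned to the same handler
    assignment : dict mapping object index -> handler index

    Under this reading of the model, all objects on a handler survive only
    if none of them self-destructs, so handler h contributes
    prod_{i on h}(1 - probs[i]) * sum_{i on h} values[i].
    """
    total = 0.0
    for h in range(num_handlers):
        objs = [i for i, hh in assignment.items() if hh == h]
        if not objs:
            continue
        survive = 1.0
        for i in objs:
            survive *= 1.0 - probs[i]
        total += survive * sum(values[i] for i in objs)
    return total


def greedy_allocate(values, probs, num_handlers):
    """Illustrative greedy baseline: take objects in decreasing value order
    and place each on the handler giving the largest gain in expected value."""
    assignment = {}
    for i in sorted(range(len(values)), key=lambda j: values[j], reverse=True):
        best = max(range(num_handlers),
                   key=lambda h: expected_value(values, probs,
                                                {**assignment, i: h},
                                                num_handlers))
        assignment[i] = best
    return assignment


if __name__ == "__main__":
    vals = [10.0, 8.0, 5.0, 1.0]      # hypothetical object values
    ps   = [0.05, 0.05, 0.50, 0.40]   # hypothetical destruction probabilities
    alloc = greedy_allocate(vals, ps, num_handlers=2)
    print(alloc, expected_value(vals, ps, alloc, num_handlers=2))
```

On instances with one very risky object, the marginal-gain rule tends to isolate it rather than mix it with valuable objects, which is the "sacrificial lamb" behavior the abstract alludes to.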




Read also

Many enhanced sampling techniques rely on the identification of a number of collective variables that describe all the slow modes of the system. By constructing a bias potential in this reduced space one is then able to sample efficiently and reconstruct the free energy landscape. In methods like metadynamics, the quality of these collective variables plays a key role in convergence efficiency. Unfortunately in many systems of interest it is not possible to identify an optimal collective variable, and one must deal with the non-ideal situation of a system in which some slow modes are not accelerated. We propose a two-step approach in which, by taking into account the residual multiscale nature of the problem, one is able to significantly speed up convergence. To do so, we combine an exploratory metadynamics run with an optimization of the free energy difference between metastable states, based on the recently proposed variationally enhanced sampling method. This new method is well parallelizable and is especially suited for complex systems, because of its simplicity and clear underlying physical picture.
Motivated by storage applications, we study the following data structure problem: An encoder wishes to store a collection of jointly-distributed files $\overline{X} := (X_1, X_2, \ldots, X_n) \sim \mu$ which are correlated ($H_\mu(\overline{X}) \ll \sum_i H_\mu(X_i)$), using as little (expected) memory as possible, such that each individual file $X_i$ can be recovered quickly with few (ideally constant) memory accesses. In the case of independent random files, a dramatic result by Pătrașcu (FOCS'08) and subsequently by Dodis, Pătrașcu and Thorup (STOC'10) shows that it is possible to store $\overline{X}$ using just a constant number of extra bits beyond the information-theoretic minimum space, while at the same time decoding each $X_i$ in constant time. However, in the (realistic) case where the files are correlated, much weaker results are known, requiring at least $\Omega(n/\mathrm{poly}\lg n)$ extra bits for constant decoding time, even for simple joint distributions $\mu$. We focus on the natural case of compressing Markov chains, i.e., storing a length-$n$ random walk on any (possibly directed) graph $G$. Denoting by $\kappa(G,n)$ the number of length-$n$ walks on $G$, we show that there is a succinct data structure storing a random walk using $\lg_2 \kappa(G,n) + O(\lg n)$ bits of space, such that any vertex along the walk can be decoded in $O(1)$ time on a word-RAM. For the harder task of matching the point-wise optimal space of the walk, i.e., the empirical entropy $\sum_{i=1}^{n-1} \lg(\deg(v_i))$, we present a data structure with $O(1)$ extra bits at the price of $O(\lg n)$ decoding time, and show that any improvement on this would lead to an improved solution on the long-standing Dictionary problem. All of our data structures support the online version of the problem with constant update and query time.
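As a small aid to reading the space bounds above, the following sketch (not from the paper) generates a random walk on a directed graph and computes the point-wise benchmark $\sum_{i=1}^{n-1} \lg(\deg(v_i))$, interpreting $\deg(v_i)$ as the out-degree, i.e., the bits needed to record each step as a choice among the current vertex's out-neighbors. The helper names and the toy graph are hypothetical.

```python
import math
import random


def empirical_entropy_bits(adj, walk):
    """Point-wise benchmark from the abstract: sum over the walk (except the
    last vertex) of lg(out-degree), the cost of recording each step as a
    choice among the current vertex's out-neighbors."""
    return sum(math.log2(len(adj[v])) for v in walk[:-1])


def random_walk(adj, start, length, seed=0):
    """A length-`length` random walk on the adjacency-list graph `adj`."""
    rng = random.Random(seed)
    walk = [start]
    for _ in range(length - 1):
        walk.append(rng.choice(adj[walk[-1]]))
    return walk


if __name__ == "__main__":
    # A small directed graph given as adjacency (out-neighbor) lists.
    adj = {0: [1, 2], 1: [0, 2, 3], 2: [0], 3: [1, 2]}
    w = random_walk(adj, start=0, length=16)
    print(w)
    print(f"empirical entropy: {empirical_entropy_bits(adj, w):.2f} bits")
```

A naive encoding would spend roughly $\lg n$ bits per step regardless of the graph; the benchmark above charges only $\lg(\deg(v_i))$ bits per step, which is what the paper's second data structure matches up to $O(1)$ extra bits.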
Best match graphs (BMGs) are vertex-colored digraphs that naturally arise in mathematical phylogenetics to formalize the notion of evolutionary closest genes w.r.t. an a priori unknown phylogenetic tree. BMGs are explained by unique least resolved trees. We prove that the property of a rooted, leaf-colored tree to be least resolved for some BMG is preserved by the contraction of inner edges. For the special case of two-colored BMGs, this leads to a characterization of the least resolved trees (LRTs) of binary-explainable trees and a simple, polynomial-time algorithm for the minimum cardinality completion of the arc set of a BMG to reach a BMG that can be explained by a binary tree.
Camera motion deblurring is an important low-level vision task for achieving better imaging quality. When a scene has outliers such as saturated pixels, the captured blurred image becomes more difficult to restore. In this paper, we propose a novel method to handle camera motion blur with outliers. We first propose an edge-aware scale-recurrent network (EASRN) to conduct deblurring. EASRN has a separate deblurring module that removes blur at multiple scales and an upsampling module that fuses different input scales. Then a salient edge detection network is proposed to supervise the training process and constrain the edge restoration. By simulating camera motion and adding various light sources, we can generate blurred images with saturation cutoff. Using the proposed data generation method, our network can learn to deal with outliers effectively. We evaluate our method on public test datasets including the GoPro dataset, Köhler's dataset and Lai's dataset. Both objective evaluation metrics and subjective visualization show that our method achieves better deblurring quality than other state-of-the-art approaches.
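The data-generation idea described above (simulate camera motion, add bright light sources, clip to produce saturation) can be approximated with a short sketch like the one below. This is a rough stand-in, not the authors' pipeline: the linear motion kernel, the point light sources, and the [0, 1] clipping range are all simplifying assumptions.

```python
import numpy as np
from scipy.ndimage import convolve


def motion_kernel(length=15, angle_deg=30.0):
    """A crude linear motion-blur kernel: a line of `length` pixels at `angle_deg`."""
    k = np.zeros((length, length))
    c = (length - 1) / 2.0
    theta = np.deg2rad(angle_deg)
    for t in np.linspace(-c, c, 4 * length):
        y = int(round(c + t * np.sin(theta)))
        x = int(round(c + t * np.cos(theta)))
        k[y, x] = 1.0
    return k / k.sum()


def blur_with_saturation(img, kernel, light_positions, light_intensity=3.0):
    """Simulate a blurred capture with saturated outliers:
    1) add bright point light sources to the sharp image,
    2) convolve with the motion kernel,
    3) clip to [0, 1] so the lights produce saturation cutoff."""
    hdr = img.copy()
    for (y, x) in light_positions:
        hdr[y, x] += light_intensity        # values well above the sensor range
    blurred = convolve(hdr, kernel, mode="reflect")
    return np.clip(blurred, 0.0, 1.0)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sharp = rng.uniform(0.2, 0.6, size=(64, 64))   # stand-in for a sharp image
    blurred = blur_with_saturation(sharp, motion_kernel(), [(10, 10), (40, 50)])
    print(blurred.shape, blurred.max())
```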
In this paper, we study PUSH-PULL style rumor spreading algorithms in the mobile telephone model, a variant of the classical telephone model in which each node can participate in at most one connection per round; i.e., you can no longer have multiple nodes pull information from the same source in a single round. Our model also includes two new parameterized generalizations: (1) the network topology can undergo a bounded rate of change (for a parameterized rate that spans from no changes to changes in every round); and (2) in each round, each node can advertise a bounded amount of information to all of its neighbors before connection decisions are made (for a parameterized number of bits that spans from no advertisement to large advertisements). We prove that in the mobile telephone model with no advertisements and no topology changes, PUSH-PULL style algorithms perform poorly with respect to a graph's vertex expansion and conductance as compared to the known tight results in the classical telephone model. We then prove, however, that if nodes are allowed to advertise a single bit in each round, a natural variation of PUSH-PULL terminates in time that matches (within logarithmic factors) this strategy's performance in the classical telephone model---even in the presence of frequent topology changes. We also analyze how the performance of this algorithm degrades as the rate of change increases toward the maximum possible amount. We argue that our model matches well the properties of emerging peer-to-peer communication standards for mobile devices, and that our efficient PUSH-PULL variation that leverages small advertisements and adapts well to topology changes is a good choice for rumor spreading in this increasingly important setting.
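For readers unfamiliar with the primitive being analyzed, the sketch below simulates classical PUSH-PULL rumor spreading on an undirected graph: in every round each informed node pushes the rumor to a uniformly random neighbor and each uninformed node pulls from one. Note that this is the classical telephone model; the mobile telephone model studied in the paper additionally restricts each node to at most one connection per round, and that matching constraint is not modeled here.

```python
import random


def push_pull_rounds(adj, source, seed=0):
    """Classical PUSH-PULL rumor spreading on a connected undirected graph.

    adj: dict mapping vertex -> list of neighbors.
    Returns the number of synchronous rounds until every vertex is informed.
    """
    rng = random.Random(seed)
    informed = {source}
    rounds = 0
    while len(informed) < len(adj):
        rounds += 1
        newly = set()
        for v in adj:
            u = rng.choice(adj[v])
            if v in informed:        # PUSH: informed v tells a random neighbor
                newly.add(u)
            elif u in informed:      # PULL: uninformed v asks a random neighbor
                newly.add(v)
        informed |= newly
    return rounds


if __name__ == "__main__":
    # Ring of 32 vertices.
    n = 32
    ring = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
    print(push_pull_rounds(ring, source=0))
```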