
Deleting edges to restrict the size of an epidemic

Published by: Kitty Meeks
Publication date: 2015
Research field: Informatics Engineering
Paper language: English





Motivated by applications in network epidemiology, we consider the problem of determining whether it is possible to delete at most $k$ edges from a given input graph (of small treewidth) so that the resulting graph avoids a set $\mathcal{F}$ of forbidden subgraphs; of particular interest is the problem of determining whether it is possible to delete at most $k$ edges so that the resulting graph has no connected component of more than $h$ vertices, as this bounds the worst-case size of an epidemic. While even this special case of the problem is NP-complete in general (even when $h=3$), we provide evidence that many of the real-world networks of interest are likely to have small treewidth, and we describe an algorithm which solves the general problem in time $2^{O(|\mathcal{F}|w^r)}n$ on an input graph having $n$ vertices and whose treewidth is bounded by a fixed constant $w$, if each of the subgraphs we wish to avoid has at most $r$ vertices. For the special case in which we wish only to ensure that no component has more than $h$ vertices, we improve on this to give an algorithm running in time $O((wh)^{2w}n)$, which we have implemented and tested on real datasets based on cattle movements.
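To make the special case concrete, here is a minimal brute-force sketch (in Python, with illustrative helper names not taken from the paper): it tries every set of at most $k$ edge deletions and checks component sizes. It is exponential in $k$ and only feasible for tiny graphs; the paper's $O((wh)^{2w}n)$ algorithm instead runs dynamic programming over a tree decomposition of width $w$.

```python
from itertools import combinations

def max_component_size(n, edges):
    """Size of the largest connected component of a graph on vertices 0..n-1 (union-find)."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
    sizes = {}
    for x in range(n):
        r = find(x)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values())

def can_limit_epidemic(n, edges, k, h):
    """True iff deleting at most k edges leaves no component with more than h vertices."""
    for d in range(k + 1):
        for deleted in combinations(range(len(edges)), d):
            dropped = set(deleted)
            kept = [e for i, e in enumerate(edges) if i not in dropped]
            if max_component_size(n, kept) <= h:
                return True
    return False

# A path on 5 vertices: one deletion splits it into components of sizes 3 and 2.
print(can_limit_epidemic(5, [(0, 1), (1, 2), (2, 3), (3, 4)], k=1, h=3))  # True
```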




Read also

Spreading processes on graphs are a natural model for a wide variety of real-world phenomena, including information spread over social networks and biological diseases spreading over contact networks. Often, the networks over which these processes spread are dynamic in nature, and can be modeled with temporal graphs. Here, we study the problem of deleting edges from a given temporal graph in order to reduce the number of vertices (temporally) reachable from a given starting point. This could be used to control the spread of a disease, rumour, etc. in a temporal graph. In particular, our aim is to find a temporal subgraph in which a process starting at any single vertex can be transferred to only a limited number of other vertices using a temporally-feasible path. We introduce a natural edge-deletion problem for temporal graphs and provide positive and negative results on its computational complexity and approximability.
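To illustrate the quantity this edge-deletion problem tries to reduce, here is a hedged sketch of temporal reachability under strictly time-increasing paths (one common convention; the function name is illustrative, not from the paper). An edge-deletion algorithm would then seek a small set of time-stamped edges whose removal shrinks this set for every starting vertex.

```python
def temporally_reachable(edges, source):
    """Vertices reachable from `source` along strictly time-increasing paths.

    A temporal graph is given as a list of time-stamped directed edges (u, v, t).
    """
    INF = float("inf")
    arrival = {source: -INF}  # earliest time we can be present at each vertex
    for u, v, t in sorted(edges, key=lambda e: e[2]):  # scan edges by timestamp
        # the edge is usable only if we reached u strictly before time t,
        # and it helps only if it improves our arrival time at v
        if arrival.get(u, INF) < t and t < arrival.get(v, INF):
            arrival[v] = t
    return set(arrival)

# Example: 1 -> 2 -> 3 is feasible (times 1 then 2), but the edge 3 -> 4
# happens too early (time 1 < arrival time 2), so vertex 4 is unreachable.
print(temporally_reachable([(1, 2, 1), (2, 3, 2), (3, 4, 1)], source=1))  # {1, 2, 3}
```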
We consider a natural variant of the well-known Feedback Vertex Set problem, namely the problem of deleting a small subset of vertices or edges to a full binary tree. This version of the problem is motivated by real-world scenarios that are best modeled by full binary trees.
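For concreteness, the target class can be recognised purely from degrees: an undirected full binary tree is a tree in which exactly one vertex (the root) has degree 2 and every other vertex has degree 1 or 3 (a single vertex also qualifies). A minimal sketch, with illustrative names not taken from the paper:

```python
def is_full_binary_tree(n, edges):
    """True iff the undirected graph on vertices 0..n-1 is a full binary tree."""
    if n == 1:
        return not edges  # a single vertex is a (trivial) full binary tree
    if len(edges) != n - 1:
        return False  # every tree on n vertices has exactly n - 1 edges
    adj = {v: [] for v in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, stack = {0}, [0]  # DFS to confirm connectivity
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    if len(seen) != n:
        return False
    degrees = [len(adj[v]) for v in range(n)]
    # exactly one degree-2 vertex (the root); every other vertex is a leaf
    # (degree 1) or an internal node with two children (degree 3)
    return degrees.count(2) == 1 and all(d in (1, 2, 3) for d in degrees)

print(is_full_binary_tree(3, [(0, 1), (0, 2)]))           # True
print(is_full_binary_tree(4, [(0, 1), (1, 2), (2, 3)]))   # False: two degree-2 vertices
```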
We present a sublinear time algorithm that allows one to sample multiple edges from a distribution that is pointwise $\epsilon$-close to the uniform distribution, in an amortized-efficient fashion. We consider the adjacency list query model, where access to a graph $G$ is given via degree and neighbor queries. The problem of sampling a single edge in this model has been raised by Eden and Rosenbaum (SOSA 18). Let $n$ and $m$ denote the number of vertices and edges of $G$, respectively. Eden and Rosenbaum provided upper and lower bounds of $\Theta^*(n/\sqrt{m})$ for sampling a single edge in general graphs (where $O^*(\cdot)$ suppresses $\mathrm{poly}(1/\epsilon)$ and $\mathrm{poly}(\log n)$ dependencies). We ask whether the query complexity lower bound for sampling a single edge can be circumvented when multiple samples are required. That is, can we get an improved amortized per-sample cost if we allow a preprocessing phase? We answer in the affirmative. We present an algorithm that, if one knows the number of required samples $q$ in advance, has an overall cost that is sublinear in $q$, namely, $O^*(\sqrt{q}\cdot(n/\sqrt{m}))$, which is strictly preferable to the $O^*(q\cdot(n/\sqrt{m}))$ cost resulting from $q$ invocations of the algorithm by Eden and Rosenbaum. Subsequent to a preliminary version of this work, Tětek and Thorup (arXiv, preprint) proved that this bound is essentially optimal.
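As background, here is a hedged sketch of the query model itself: naive rejection sampling returns a uniformly random edge using only degree and neighbour queries, at an expected cost of $O(n\,d_{\max}/m)$ queries per sample. This is neither the Eden-Rosenbaum algorithm nor the amortized one above; it only illustrates what the oracles provide, and the function and parameter names are assumptions.

```python
import random

def sample_edge_rejection(n, degree, neighbor, d_max):
    """Uniform random edge via rejection; `degree` and `neighbor` are the model's oracles."""
    while True:
        v = random.randrange(n)       # one uniform vertex query target
        i = random.randrange(d_max)   # a uniform slot in a conceptually padded adjacency list
        if i < degree(v):             # accept: each directed pair (v, i) has prob 1/(n*d_max),
            return (v, neighbor(v, i))  # so the accepted edge is uniform over all edges

# Tiny demo with oracles backed by an in-memory adjacency list (triangle plus a pendant vertex).
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
print(sample_edge_rejection(4, lambda v: len(adj[v]), lambda v, i: adj[v][i], d_max=3))
```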
We study the problem of approximating the largest root of a real-rooted polynomial of degree $n$ using its top $k$ coefficients and give nearly matching upper and lower bounds. We present algorithms with running time polynomial in $k$ that use the top $k$ coefficients to approximate the maximum root within a factor of $n^{1/k}$ and $1+O\left(\tfrac{\log n}{k}\right)^2$ when $k\leq \log n$ and $k>\log n$ respectively. We also prove corresponding information-theoretic lower bounds of $n^{\Omega(1/k)}$ and $1+\Omega\left(\frac{\log \frac{2n}{k}}{k}\right)^2$, and show strong lower bounds for a noisy version of the problem in which one is given access to approximate coefficients. This problem has applications in the context of the method of interlacing families of polynomials, which was used for proving the existence of Ramanujan graphs of all degrees, the solution of the Kadison-Singer problem, and bounding the integrality gap of the asymmetric traveling salesman problem. All of these involve computing the maximum root of certain real-rooted polynomials for which the top few coefficients are accessible in subexponential time. Our results yield an algorithm with a running time of $2^{\tilde{O}(\sqrt[3]{n})}$ for all of them.
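One way to see why the top coefficients carry enough information, assuming nonnegative roots $\lambda_1 \ge \dots \ge \lambda_n \ge 0$ (as in the interlacing-families applications; this sketch is not the paper's algorithm): Newton's identities recover the power sums from the top coefficients, and the $k$-th power sum pins down the largest root to within $n^{1/k}$.

```latex
% Newton's identities: the power sums p_j = \sum_i \lambda_i^j are determined
% by the elementary symmetric polynomials e_1, ..., e_k, i.e. by the top k
% coefficients of the monic input polynomial:
\[
  p_j \;=\; (-1)^{j-1}\, j\, e_j \;+\; \sum_{i=1}^{j-1} (-1)^{i-1} e_i\, p_{j-i},
  \qquad 1 \le j \le k.
\]
% With nonnegative roots, \lambda_1^k \le p_k \le n\,\lambda_1^k, so the
% computable statistic p_k^{1/k} approximates the largest root \lambda_1
% within the claimed factor:
\[
  \lambda_1 \;\le\; p_k^{1/k} \;\le\; n^{1/k}\, \lambda_1.
\]
```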
We develop a new methodology for the efficient computation of epidemic final size distributions for a broad class of Markovian models. We exploit a particular representation of the stochastic epidemic process to derive a method which is both computationally efficient and numerically stable. The algorithms we present are also physically transparent and so allow us to extend this method from the basic SIR model to a model with a phase-type infectious period and another with waning immunity. The underlying theory is applicable to many Markovian models where we wish to efficiently calculate hitting probabilities.
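As a concrete (if naive) instance of such a hitting-probability computation, the sketch below computes the exact final-size distribution of the basic Markovian SIR model by dynamic programming over the embedded jump chain; it is a minimal illustration under assumed rate conventions (per-pair infection rate $\beta$, per-individual recovery rate $\gamma$), not the authors' method.

```python
def sir_final_size_distribution(n, beta, gamma, i0=1):
    """P(total number ever infected = j) for a population of n, starting with i0 infectives.

    In state (s, i) the rates are beta*s*i (infection) and gamma*i (recovery),
    so the next event is an infection with probability beta*s / (beta*s + gamma).
    """
    s0 = n - i0
    prob = {(s0, i0): 1.0}  # probability the jump chain ever visits (s, i)
    dist = {}
    # Process states so every predecessor comes first: s only decreases
    # (infection), and i only decreases at fixed s (recovery).
    for s in range(s0, -1, -1):
        for i in range(n - s, 0, -1):
            p = prob.pop((s, i), 0.0)
            if p == 0.0:
                continue
            p_inf = beta * s / (beta * s + gamma) if s > 0 else 0.0
            if s > 0:  # infection: (s, i) -> (s - 1, i + 1)
                prob[(s - 1, i + 1)] = prob.get((s - 1, i + 1), 0.0) + p * p_inf
            if i == 1:  # recovery absorbs: epidemic dies with s susceptibles left
                dist[n - s] = dist.get(n - s, 0.0) + p * (1.0 - p_inf)
            else:       # recovery: (s, i) -> (s, i - 1)
                prob[(s, i - 1)] = prob.get((s, i - 1), 0.0) + p * (1.0 - p_inf)
    return dist  # maps final size (total ever infected) to its probability

# Example: final-size distribution for n = 5, infection rate 1 per pair, recovery rate 2.
print(sir_final_size_distribution(5, beta=1.0, gamma=2.0))
```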
