
A Linear Time Algorithm for Finding Minimum Spanning Tree Replacement Edges

Added by David Bader
Publication date: 2019
Language: English





Given an undirected, weighted graph, the minimum spanning tree (MST) is a tree that connects all of the vertices of the graph with minimum sum of edge weights. In real-world applications, network designers often seek to quickly find a replacement edge for each edge in the MST. For example, when a traffic accident closes a road in a transportation network, or a line goes down in a communication network, the replacement edge may reconnect the MST at lowest cost. In this paper, we consider the problem of finding the lowest-cost replacement edge for each edge of the MST. A previous algorithm by Tarjan takes $O(m \alpha(m, n))$ time, where $\alpha(m, n)$ is the inverse Ackermann function. Given the MST and the non-tree edges in sorted order, our algorithm is the first that runs in $O(m+n)$ time and $O(m+n)$ space to find all replacement edges. Moreover, it is easy to implement, and our experimental study demonstrates fast performance on several types of graphs. Additionally, since the most vital edge is the tree edge whose removal causes the highest replacement cost, our algorithm finds it in linear time.
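To make the setting concrete, here is a minimal Python sketch of the classical union-find approach that the paper improves on: given the MST and the non-tree edges sorted by weight, it assigns each tree edge its cheapest replacement in $O(m \alpha(m, n))$ time. This is not the paper's $O(m+n)$ algorithm; the graph representation and function names are illustrative assumptions.

```python
def replacement_edges(n, tree_adj, sorted_nontree_edges):
    """tree_adj: adjacency list of the MST, tree_adj[u] = list of (v, w).
    sorted_nontree_edges: non-tree edges (u, v, w) in increasing weight.
    Returns repl[v] = cheapest replacement weight for the tree edge
    (v, parent[v]), or None if removing that edge disconnects the graph."""
    parent = [-1] * n
    depth = [0] * n
    visited = [False] * n
    visited[0] = True
    stack = [0]                      # iterative DFS to root the tree at 0
    while stack:
        u = stack.pop()
        for v, _ in tree_adj[u]:
            if not visited[v]:
                visited[v] = True
                parent[v] = u
                depth[v] = depth[u] + 1
                stack.append(v)

    # jump[v]: skips over tree edges that already have a replacement
    # (union-find with path compression)
    jump = list(range(n))
    def find(v):
        while jump[v] != v:
            jump[v] = jump[jump[v]]  # path compression
            v = jump[v]
        return v

    repl = [None] * n
    for u, v, w in sorted_nontree_edges:
        u, v = find(u), find(v)
        while u != v:                # climb until the two endpoints meet
            if depth[u] < depth[v]:
                u, v = v, u
            repl[u] = w              # first edge covering it is cheapest
            jump[u] = parent[u]      # never revisit this tree edge
            u = find(u)
    return repl
```

Because the non-tree edges arrive in increasing weight order, the first non-tree edge whose tree path covers a given tree edge is that edge's cheapest replacement, so each tree edge is assigned exactly once.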



Given a graph $G=(V,E)$ and an integer $k \ge 1$, a $k$-hop dominating set $D$ of $G$ is a subset of $V$ such that, for every vertex $v \in V$, there exists a node $u \in D$ whose hop-distance from $v$ is at most $k$. A $k$-hop dominating set of minimum cardinality is called a minimum $k$-hop dominating set. In this paper, we present linear-time algorithms that find a minimum $k$-hop dominating set in unicyclic and cactus graphs. To achieve this, we show that the $k$-hop dominating set problem on unicyclic graphs reduces to the piercing circular arcs problem, and give a linear-time algorithm for piercing sorted circular arcs, which improves on the best known $O(n \log n)$-time algorithm.
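For intuition on the piercing step, here is a minimal sketch in the simpler non-circular setting: greedily piercing intervals on a line that are already sorted by right endpoint takes linear time. Handling the wrap-around arcs of the circular problem requires extra care not shown here; all names are illustrative assumptions.

```python
def pierce_sorted_intervals(intervals):
    """intervals: list of (left, right) sorted by right endpoint.
    Returns a minimum set of points such that every interval
    contains at least one point, in O(n) time."""
    points = []
    last = float("-inf")
    for left, right in intervals:
        if left > last:          # current interval not yet pierced
            last = right         # pierce as far right as possible
            points.append(right)
    return points

# e.g. pierce_sorted_intervals([(1, 3), (2, 5), (6, 8)]) -> [3, 8]
```

Placing each point at the right endpoint of the first uncovered interval is the standard exchange-argument greedy: no other point can cover more of the remaining intervals.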
Lelia Blin, 2009
The minimum spanning tree (MST) construction is a classical problem in distributed computing for building a globally minimized structure in a distributed manner. Self-stabilization is a versatile technique for forward recovery that permits handling any kind of transient fault in a unified manner. The loop-free property provides an interesting safety assurance in dynamic networks where edge costs change during the operation of the protocol. We present a new self-stabilizing MST protocol that improves on previously known approaches in several ways. First, it makes fewer system hypotheses, as the size of the network (or an upper bound on it) need not be known to the participants. Second, it is loop-free in the sense that it guarantees that a spanning tree structure is always preserved while edge costs change dynamically and the protocol adjusts to a new MST. Finally, its time complexity matches the best known results, while its space complexity makes it the most efficient protocol to date.
The minimum spanning tree clustering algorithm is capable of detecting clusters with irregular boundaries. In this paper we propose two minimum-spanning-tree-based clustering algorithms. The first algorithm produces k clusters with centers and guaranteed intra-cluster similarity; the radius and diameter of the k clusters are computed to measure their tightness, and the variance of the k clusters is computed to measure their compactness. The second algorithm creates a dendrogram using the k clusters as objects, with guaranteed inter-cluster similarity, and also finds the central cluster among the k clusters. The first algorithm uses a divisive approach, whereas the second uses an agglomerative approach. We use both approaches to find informative meta-similarity clusters. A sketch of the divisive baseline follows.
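As context, this is a minimal sketch of the standard divisive MST-clustering baseline (the paper's algorithms add center, radius/diameter, and variance computations on top of such a partition): removing the k-1 heaviest MST edges leaves k connected components. The representation and names are illustrative assumptions.

```python
def mst_clusters(n, mst_edges, k):
    """mst_edges: the n-1 MST edges as (w, u, v) tuples.
    Returns k clusters, each a list of vertex ids."""
    kept = sorted(mst_edges)[: max(0, n - k)]  # drop the k-1 heaviest edges

    adj = {v: [] for v in range(n)}
    for _, u, v in kept:
        adj[u].append(v)
        adj[v].append(u)

    seen, clusters = set(), []
    for s in range(n):               # collect connected components
        if s in seen:
            continue
        comp, stack = [], [s]
        seen.add(s)
        while stack:
            u = stack.pop()
            comp.append(u)
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        clusters.append(comp)
    return clusters
```

Cutting a tree's k-1 heaviest edges necessarily yields exactly k components, since each cut splits one component in two.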
Monika Henzinger, Pan Peng, 2020
We give two fully dynamic algorithms that maintain a $(1+\varepsilon)$-approximation of the weight $M$ of the minimum spanning forest (MSF) of an $n$-node graph $G$ with edge weights in $[1,W]$, for any $\varepsilon>0$. (1) Our deterministic algorithm takes $O(W^2 \log W / \varepsilon^3)$ worst-case update time, which is $O(1)$ if both $W$ and $\varepsilon$ are constants. Note that a lower bound by Patrascu and Demaine (SIAM J. Comput. 2006) shows that it takes $\Omega(\log n)$ time per operation to maintain the exact weight of the MSF, and this holds even in the unweighted case, i.e. for $W=1$. We further show that any deterministic data structure that dynamically maintains the $(1+\varepsilon)$-approximate weight of the MSF requires superconstant time per operation if $W \geq (\log n)^{\omega_n(1)}$. (2) Our randomized (Monte Carlo style) algorithm works with high probability and runs in worst-case $O(\frac{1}{\varepsilon^4}\log^3(\frac{1}{\varepsilon}))$ update time if $W$ is not too large, more specifically, if $W = O((m^*)^{1/6}/\log n)$, where $m^*$ is the minimum number of edges in the graph throughout all the updates. It works even against an adaptive adversary. We complement this result by showing that for any constants $\varepsilon,\alpha>0$ and $W=n^{\alpha}$, any (randomized) data structure that dynamically maintains the weight of the MSF of a graph $G$ with edge weights in $[1,W]$ and $W = \Omega(\varepsilon m^*)$ within a multiplicative factor of $(1+\varepsilon)$ takes $\Omega(\log n)$ time per operation.
Sarwan Ali, 2021
Cache replacement algorithms optimize the time a processor takes to process information by storing the information the processor needs now, and possibly in the future, so that it can be provided immediately when requested. A number of techniques (LIFO, FIFO, LRU, MRU, hybrid) are used to organize information so that the processor remains busy almost all the time, but every technique has limitations, which we try to overcome. We use Probabilistic Graphical Models (PGMs), which capture conditional dependencies between random variables using a directed or undirected graph. In our research, we exploit the Bayesian network technique to predict future requests by the processor. The main goal of the research was to increase the cache hit rate without increasing the size of the cache, while reducing or maintaining the overhead. Using the PGM technique, we achieved 7% more cache hits in the best-case scenario than the classical algorithms, which demonstrates the success of our technique as far as cache hits are concerned. Pre-eviction also proves to be a better technique for obtaining more cache hits, and combining pre-eviction and pre-fetching using a PGM yields the results this research set out to achieve.
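To illustrate the idea of prediction-driven eviction, here is a minimal sketch with a trivial frequency counter standing in for the paper's Bayesian-network predictor: on a miss with a full cache, evict the key the predictor scores as least likely to be requested again. The class and parameter names are illustrative assumptions, not the paper's implementation.

```python
from collections import Counter

class PredictiveCache:
    def __init__(self, capacity, predict):
        self.capacity = capacity
        self.predict = predict       # key -> estimated reuse likelihood
        self.store = {}

    def get(self, key, load):
        if key in self.store:        # cache hit
            return self.store[key]
        if len(self.store) >= self.capacity:
            victim = min(self.store, key=self.predict)
            del self.store[victim]   # evict least-likely-reused key
        self.store[key] = load(key)  # cache miss: fetch and insert
        return self.store[key]

# Usage with a frequency-count predictor (a crude stand-in for a PGM):
history = Counter()
cache = PredictiveCache(2, lambda k: history[k])
for req in ["a", "b", "a", "c", "a"]:
    history[req] += 1
    cache.get(req, load=lambda k: k.upper())
```

A real Bayesian-network predictor would replace the frequency counter with a model of conditional dependencies between requests, but the eviction loop stays the same.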
