
On the Configuration Space of Steiner Minimal Trees

Added by Nataliya Strelkova
Publication date: 2019
Language: English





Among other results, we prove the following theorem about Steiner minimal trees in $d$-dimensional Euclidean space: if two finite sets in $\mathbb{R}^d$ have unique and combinatorially equivalent Steiner minimal trees, then there is a homotopy between the two sets that maintains the uniqueness and the combinatorial structure of the Steiner minimal tree throughout the homotopy.
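Stated a little more formally, the theorem reads roughly as follows; the shorthand $\mathrm{SMT}(A)$ for the Steiner minimal tree of a finite set $A$ and the parametrization of the homotopy as a path of labelled point configurations are our own notation for this sketch, not necessarily the paper's.

```latex
% Sketch of the statement in our notation; "combinatorially equivalent" means,
% roughly, that the trees have the same topology with respect to corresponding
% terminals.
Let $A_0=\{a_1,\dots,a_n\}$ and $A_1=\{b_1,\dots,b_n\}$ be finite subsets of
$\mathbb{R}^d$, each with a unique Steiner minimal tree $\mathrm{SMT}(A_i)$,
and suppose $\mathrm{SMT}(A_0)$ and $\mathrm{SMT}(A_1)$ are combinatorially
equivalent. Then there is a continuous map
\[
  H\colon [0,1]\to(\mathbb{R}^d)^n, \qquad
  H(0)=(a_1,\dots,a_n), \quad H(1)=(b_1,\dots,b_n),
\]
such that for every $t\in[0,1]$ the configuration $H(t)$ has a unique Steiner
minimal tree, combinatorially equivalent to $\mathrm{SMT}(A_0)$.
```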



Related research

In the classical Steiner tree problem, given an undirected, connected graph $G=(V,E)$ with non-negative edge costs and a set of \emph{terminals} $T \subseteq V$, the objective is to find a minimum-cost tree $E' \subseteq E$ that spans the terminals. The problem is APX-hard; the best known approximation algorithm has a ratio of $\rho = \ln(4)+\varepsilon < 1.39$. In this paper, we study a natural generalization, the \emph{multi-level Steiner tree} (MLST) problem: given a nested sequence of terminals $T_{\ell} \subset \dots \subset T_1 \subseteq V$, compute nested trees $E_{\ell} \subseteq \dots \subseteq E_1 \subseteq E$ that span the corresponding terminal sets with minimum total cost. The MLST problem and variants thereof have been studied under various names, including Multi-level Network Design, Quality-of-Service Multicast tree, Grade-of-Service Steiner tree, and Multi-Tier tree. Several approximation results are known. We first present two simple $O(\ell)$-approximation heuristics. Based on these, we introduce a rudimentary composite algorithm that generalizes the above heuristics, and determine its approximation ratio by solving a linear program. We then present a method that guarantees the same approximation ratio using at most $2\ell$ Steiner tree computations. We compare these heuristics experimentally on various instances of up to 500 vertices using three different network generation models. We also present various integer linear programming (ILP) formulations for the MLST problem and compare their running times on these instances. To our knowledge, the composite algorithm achieves the best approximation ratio for up to $\ell=100$ levels, which is sufficient for most applications, such as network visualization or designing multi-level infrastructure.
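As a concrete illustration of the kind of simple heuristic mentioned in this abstract, here is a top-down sketch: solve the highest (smallest) terminal set first, then make already-bought edges free for the levels below so the computed trees stay nested. It leans on networkx's built-in 2-approximation for the single-level Steiner tree; the function name `multilevel_steiner_topdown` and the bookkeeping details are ours, not the paper's composite algorithm.

```python
# Minimal sketch of a top-down MLST heuristic (our illustration, not the
# paper's composite algorithm). The single-level solver is networkx's
# built-in Steiner tree 2-approximation.
import networkx as nx
from networkx.algorithms.approximation import steiner_tree


def multilevel_steiner_topdown(G, terminal_levels, weight="weight"):
    """terminal_levels is ordered from level ell (smallest set T_ell) down to
    level 1 (largest set T_1); returns one edge set per level, nested as
    E_ell <= ... <= E_1."""
    H = G.copy()
    bought = set()  # edges already paid for at a higher level
    levels = []
    for terminals in terminal_levels:
        tree = steiner_tree(H, list(terminals), weight=weight)
        bought |= {frozenset((u, v)) for u, v in tree.edges()}
        levels.append(set(bought))  # E_i = this tree plus everything above it
        for u, v in tree.edges():   # reusing these edges is free below
            H[u][v][weight] = 0
    return levels


if __name__ == "__main__":
    # Tiny example with three nested terminal sets.
    G = nx.Graph()
    G.add_weighted_edges_from(
        [(1, 2, 3), (2, 3, 2), (3, 4, 4), (1, 4, 10), (2, 5, 1), (5, 4, 2)]
    )
    solutions = multilevel_steiner_topdown(G, [{1, 4}, {1, 3, 4}, {1, 2, 3, 4, 5}])
    for i, edges in enumerate(solutions):
        print("level", len(solutions) - i, sorted(tuple(sorted(e)) for e in edges))
```

Each level's edge set is the union of its own tree with everything bought at higher levels, so its cost is bounded by the sum of the per-level tree costs and the nesting constraint holds by construction.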
Benoit Kloeckner, 2008
We study the Lipschitz structures on the geodesic compactification of a regular tree that are preserved by the automorphism group. They are shown to be similar to the compactifications introduced by William Floyd, and a complete description is given.
We study configuration space integral formulas for Milnor's homotopy link invariants, showing that they are in correspondence with certain linear combinations of trivalent trees. Our proof is essentially a combinatorial analysis of a certain space of trivalent homotopy link diagrams which corresponds to all finite type homotopy link invariants via configuration space integrals. An important ingredient is the fact that configuration space integrals take the shuffle product of diagrams to the product of invariants. We ultimately deduce a partial recipe for writing explicit integral formulas for products of Milnor invariants from trivalent forests. We also obtain cohomology classes in spaces of link maps from the same data.
Graph neural networks have been successful in many learning problems and real-world applications. A recent line of research explores the power of graph neural networks to solve combinatorial and graph algorithmic problems such as subgraph isomorphism, detecting cliques, and the traveling salesman problem. However, many NP-complete problems remain unexplored with this method. In this paper, we tackle the Steiner Tree Problem. We employ four learning frameworks to compute low-cost Steiner trees: feed-forward neural networks, graph neural networks, graph convolutional networks, and a graph attention model. We use these frameworks in two fundamentally different ways: (1) to train the models to learn the actual Steiner tree nodes, and (2) to train the models to learn good Steiner point candidates to be connected to the constructed tree using a shortest path in a greedy fashion. We illustrate the robustness of our heuristics on several random graph generation models as well as the SteinLib data library. Our findings suggest that the out-of-the-box application of GNN methods does worse than the classic 2-approximation method. However, when combined with a greedy shortest-path construction, it does slightly better than the 2-approximation algorithm. This result sheds light on the fundamental capabilities and limitations of graph learning techniques on classical NP-complete problems.
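The second use of the learned models, the greedy shortest-path construction, can be sketched as follows: given per-vertex scores (e.g. from a trained GNN) indicating how likely a vertex is to be a useful Steiner point, grow a tree by repeatedly attaching the cheapest-to-reach remaining terminal along a shortest path, with edge weights discounted near high-scoring vertices. The function name and the multiplicative discount are our illustration, not the paper's exact pipeline.

```python
# Hedged sketch of a score-guided greedy shortest-path Steiner construction.
import networkx as nx


def greedy_steiner_from_scores(G, terminals, scores, weight="weight"):
    """scores: dict vertex -> value in [0, 1]; assumes G is connected."""
    H = G.copy()
    for u, v, data in H.edges(data=True):
        # Paths through high-scoring vertices look cheaper (assumed heuristic).
        discount = 1.0 - 0.5 * (scores.get(u, 0.0) + scores.get(v, 0.0)) / 2.0
        data[weight] = data.get(weight, 1.0) * discount

    terminals = list(terminals)
    tree_nodes, tree_edges = {terminals[0]}, set()
    remaining = set(terminals[1:])
    while remaining:
        # Attach the terminal currently cheapest to reach from the partial tree.
        best_t, best_path, best_len = None, None, float("inf")
        for t in remaining:
            length, path = nx.multi_source_dijkstra(H, tree_nodes, t, weight=weight)
            if length < best_len:
                best_t, best_path, best_len = t, path, length
        tree_edges |= {frozenset(e) for e in zip(best_path, best_path[1:])}
        tree_nodes |= set(best_path)
        remaining.discard(best_t)
    return tree_edges
```

With all scores equal to zero this reduces to a plain greedy shortest-path Steiner heuristic; a trained model would supply the `scores` dictionary.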
In this paper we prove upper and lower bounds on the minimal spherical dispersion. In particular, we see that the inverse $N(\varepsilon,d)$ of the minimal spherical dispersion is, for fixed $\varepsilon>0$, up to logarithmic terms linear in the dimension $d$. We also derive upper and lower bounds on the expected dispersion for points chosen independently and uniformly at random from the Euclidean unit sphere.
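For context, a common way these quantities are set up is sketched below; the choice of spherical caps as test sets and the normalization of the measure are our assumptions and may differ in detail from the paper.

```latex
% Sketch of the standard definitions (our notation): sigma denotes the
% normalized surface measure on the unit sphere S^d.
\[
  \operatorname{disp}(P) \;=\; \sup\bigl\{\, \sigma(C) : C \subset \mathbb{S}^d
  \text{ a spherical cap with } C \cap P = \emptyset \,\bigr\},
  \qquad P \subset \mathbb{S}^d \text{ finite},
\]
\[
  N(\varepsilon, d) \;=\; \min\bigl\{\, n \in \mathbb{N} :
  \exists\, P \subset \mathbb{S}^d,\ |P| = n,\
  \operatorname{disp}(P) \le \varepsilon \,\bigr\}.
\]
```

In these terms, the abstract's claim is that for fixed $\varepsilon>0$ the quantity $N(\varepsilon,d)$ grows linearly in $d$ up to logarithmic factors.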