An enumeration kernel, as defined by Creignou et al. [Theory Comput. Syst. 2017] for a parameterized enumeration problem, consists of a kernelization algorithm that transforms each instance into one whose size is bounded by the parameter, plus a solution-lifting algorithm that efficiently enumerates all solutions of the original instance from the solutions of the kernel. We propose to consider two n…
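Read literally, the abstract describes a pair of algorithms. A hedged sketch of the two components in the abstract's own terms (the names $A$ and $L$ and the bound $f$ are chosen here for illustration and do not come from the paper): a kernelization $A\colon (x, k) \mapsto (x', k')$ with $|x'| + k' \le f(k)$ for some computable $f$, and a solution-lifting algorithm $L$ that, given $(x, k)$, $(x', k')$ and the solutions of the kernel $(x', k')$, efficiently enumerates all solutions of the original instance $(x, k)$.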
A tree decomposition of a graph facilitates computations by grouping vertices into bags that are interconnected in an acyclic structure; hence their importance in a plethora of problems such as query evaluation over databases and inference over probabilistic graphical models. The relative benefit from different tree decompositions is measured by diverse (sometimes complex) cost functions that vary from one application to another. For generic cost functions like width and fill-in, an optimal tree decomposition can be efficiently computed in some cases, notably when the number of minimal separators is bounded by a polynomial (due to Bouchitté and Todinca); we refer to this assumption as poly-MS. To cover the variety of cost functions needed in practice, it has recently been proposed to devise algorithms that enumerate many decomposition candidates for applications to choose from, using specialized, or even machine-learned, cost functions. We explore the ability to produce a large collection of high-quality tree decompositions. We present the first algorithm for ranked enumeration of the proper (non-redundant) tree decompositions, or equivalently minimal triangulations, under a wide class of cost functions that substantially generalizes the above generic ones. On the theoretical side, we establish a guarantee of polynomial delay if poly-MS is assumed, or if we are interested in tree decompositions of width bounded by a constant. We describe an experimental evaluation on graphs from various domains (including join queries, Bayesian networks, treewidth benchmarks, and random graphs), and explore both the applicability of the poly-MS assumption and the performance of our algorithm relative to the state of the art.
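As a point of reference for the bag structure described above (and not the ranked-enumeration algorithm of the paper), the following sketch builds a tree decomposition of a small graph with networkx's min-fill-in heuristic, assuming networkx is available:

# Illustrative only: a greedy min-fill-in heuristic from networkx, not the
# ranked-enumeration algorithm of the abstract above.
import networkx as nx
from networkx.algorithms.approximation import treewidth_min_fill_in

G = nx.cycle_graph(5)  # a small example graph: the 5-cycle

# Returns an upper bound on the width and a tree whose nodes are bags
# (frozensets of vertices) connected in an acyclic structure.
width, decomposition = treewidth_min_fill_in(G)

print("width:", width)  # the generic cost function: width = (max bag size) - 1
for bag in decomposition.nodes:
    print("bag:", sorted(bag))
for left, right in decomposition.edges:
    print("tree edge:", sorted(left), "-", sorted(right))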
This paper considers enumerating answers to similarity-join queries under dynamic updates: given two sets of $n$ points $A,B$ in $\mathbb{R}^d$, a metric $\phi(\cdot)$, and a distance threshold $r > 0$, report all pairs of points $(a, b) \in A \times B$ with $\phi(a,b) \le r$. Our goal is to store $A,B$ in a dynamic data structure that, whenever asked, can enumerate all result pairs with a worst-case delay guarantee, i.e., the time between enumerating two consecutive pairs is bounded. Furthermore, the data structure can be efficiently updated when a point is inserted into or deleted from $A$ or $B$. We propose several efficient data structures for answering similarity-join queries in low dimensions. For exact enumeration of the similarity join, we present near-linear-size data structures for the $\ell_1$ and $\ell_\infty$ metrics with $\log^{O(1)} n$ update time and delay. We show that such a data structure is not feasible for the $\ell_2$ metric for $d \ge 4$. For approximate enumeration of the similarity join, where the distance threshold is a soft constraint, we obtain a unified linear-size data structure for the $\ell_p$ metric, with $\log^{O(1)} n$ delay and update time. In high dimensions, we present an efficient data structure with a worst-case delay guarantee using locality-sensitive hashing (LSH).
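The paper's dynamic data structures are not reproduced here; the following is only a minimal static sketch of the underlying reporting problem for the $\ell_\infty$ metric, using a uniform grid with cell side $r$ so that every result pair meets in neighbouring cells (the function name and point encoding are illustrative assumptions):

# A simple static grid sketch for exact similarity join under the l_infinity
# metric; illustrative only, not the dynamic data structure from the abstract.
from collections import defaultdict
from itertools import product

def similarity_join_linf(A, B, r):
    """Yield all pairs (a, b) in A x B with max_i |a_i - b_i| <= r."""
    d = len(A[0])
    # Hash B into grid cells of side length r.
    grid = defaultdict(list)
    for b in B:
        cell = tuple(int(coord // r) for coord in b)
        grid[cell].append(b)
    # Each a only needs to inspect its own cell and the adjacent cells.
    for a in A:
        cell = tuple(int(coord // r) for coord in a)
        for offset in product((-1, 0, 1), repeat=d):
            neighbour = tuple(c + o for c, o in zip(cell, offset))
            for b in grid.get(neighbour, []):
                if max(abs(x - y) for x, y in zip(a, b)) <= r:
                    yield (a, b)

# Example: pairs of 2-D points within l_infinity distance 1.5.
A = [(0.0, 0.0), (3.0, 3.0)]
B = [(1.0, 1.0), (5.0, 5.0)]
print(list(similarity_join_linf(A, B, 1.5)))  # [((0.0, 0.0), (1.0, 1.0))]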
Let $\mathcal{T}^{(p)}_n$ be the set of $p$-ary labeled trees on $\{1,2,\dots,n\}$. The maximal decreasing subtree of a $p$-ary labeled tree is defined as the maximal $p$-ary subtree from the root in which all edges are decreasing. In this paper, we study a new refinement $\mathcal{T}^{(p)}_{n,k}$ of $\mathcal{T}^{(p)}_n$, which is the set of $p$-ary labeled trees whose maximal decreasing subtree has $k$ vertices.
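A minimal sketch of the notion of a maximal decreasing subtree, assuming a rooted labeled tree given as a child-list dictionary keyed by labels (this encoding is an assumption for illustration, not the paper's):

# Starting from the root, keep a child only if its label is smaller than
# its parent's label; the kept vertices form the maximal decreasing subtree.
def maximal_decreasing_subtree(root, children):
    """Return the vertex set of the maximal decreasing subtree rooted at `root`.

    `children[v]` lists the children of vertex v (vertices are their labels).
    """
    kept = {root}
    stack = [root]
    while stack:
        v = stack.pop()
        for c in children.get(v, []):
            if c < v:            # the edge v -> c is decreasing
                kept.add(c)
                stack.append(c)
    return kept

# Example: a binary (p = 2) labeled tree rooted at 5.
children = {5: [3, 7], 3: [1, 2], 7: [4, 6]}
print(sorted(maximal_decreasing_subtree(5, children)))  # [1, 2, 3, 5], so k = 4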
We investigate the single source shortest distance (SSSD) and all pairs shortest distance (APSD) problems as enumeration problems (on unweighted and integer weighted graphs), meaning that the elements $(u, v, d(u, v))$, where $u$ and $v$ are vertices with shortest distance $d(u, v)$, are produced and listed one by one without repetition. The performance is measured in the RAM model of computation with respect to preprocessing time and delay, i.e., the maximum time that elapses between two consecutive outputs. This point of view reveals that specific types of output (e.g., excluding the non-reachable pairs $(u, v, \infty)$, or excluding the self-distances $(u, u, 0)$) and the order of enumeration (e.g., sorted by distance, sorted row-wise with respect to the distance matrix) have a huge impact on the complexity of APSD while they appear to have no effect on SSSD. In particular, we show for APSD that enumeration without output restrictions is possible with delay in the order of the average degree. Excluding non-reachable pairs, or requesting the output to be sorted by distance, increases this delay to the order of the maximum degree. Further, for weighted graphs, a delay in the order of the average degree is also not possible without preprocessing or considering self-distances as output. In contrast, for SSSD we find that a delay in the order of the maximum degree without preprocessing is attainable and unavoidable for any of these requirements.
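For intuition on the delay measure, here is a minimal SSSD sketch on an unweighted graph: a plain BFS that yields each triple as soon as the target vertex is settled, so the gap between consecutive outputs is bounded by the degree of the vertex currently being expanded (this is only an illustration of the enumeration viewpoint, not the abstract's precise preprocessing-free bounds):

# Enumerate (s, v, d(s, v)) for all vertices v reachable from s, by distance.
from collections import deque

def enumerate_sssd(adj, s):
    """Yield shortest-distance triples from source s, one by one."""
    dist = {s: 0}
    queue = deque([s])
    yield (s, s, 0)                      # the self-distance (s, s, 0)
    while queue:
        u = queue.popleft()
        for v in adj.get(u, []):
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
                yield (s, v, dist[v])

adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1]}
print(list(enumerate_sssd(adj, 0)))
# [(0, 0, 0), (0, 1, 1), (0, 2, 1), (0, 3, 2)]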
We present an algorithm that enumerates all the minimal triangulations of a graph in incremental polynomial time. Consequently, we get an algorithm for enumerating all the proper tree decompositions, in incremental polynomial time, where proper means that the tree decomposition cannot be improved by removing or splitting a bag.
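A hedged illustration of one half of properness: a bag contained in a neighbouring bag is redundant and could simply be removed, and the check below detects exactly that (detecting bags that could be split is not attempted here; the networkx encoding with frozenset bags, as in the earlier sketch, is an assumption):

# Return True if some bag of the decomposition is a subset of an adjacent bag.
import networkx as nx

def has_redundant_bag(decomposition):
    return any(
        bag <= neighbour or neighbour <= bag
        for bag, neighbour in decomposition.edges
    )

# Example: the middle bag {2, 3} is contained in {1, 2, 3}, hence redundant.
T = nx.Graph()
T.add_edge(frozenset({1, 2, 3}), frozenset({2, 3}))
T.add_edge(frozenset({2, 3}), frozenset({3, 4}))
print(has_redundant_bag(T))  # True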
Petr A. Golovach, Christian Komusiewicz, Dieter Kratsch (2021). "Refined Notions of Parameterized Enumeration Kernels with Applications to Matching Cut Enumeration".