
Angle Covers: Algorithms and Complexity

Posted by Alexander Wolff
Publication date: 2019
Research field: Informatics Engineering (Computer Science)
Paper language: English





Consider a graph with a rotation system, namely, for every vertex, a circular ordering of the incident edges. Given such a graph, an angle cover maps every vertex to a pair of consecutive edges in the ordering -- an angle -- such that each edge participates in at least one such pair. We show that any graph of maximum degree 4 admits an angle cover, give a polynomial-time algorithm for deciding if a graph with no degree-3 vertices has an angle cover, and prove that, given a graph of maximum degree 5, it is NP-hard to decide whether it admits an angle cover. We also consider extensions of the angle cover problem where every vertex selects a fixed number $a>1$ of angles or where an angle consists of more than two consecutive edges. We show an application of angle covers to the problem of deciding if the 2-blowup of a planar graph has isomorphic thickness 2.
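To make the definition concrete, here is a minimal sketch (not from the paper; the dictionary-of-cyclic-edge-lists representation and all names such as `is_angle_cover` are illustrative assumptions) that checks whether a given choice of one angle per vertex forms an angle cover.

```python
# Illustration only: verify an angle cover for a graph given with a rotation
# system. `rotation[v]` is the circular order of edges incident to v; each edge
# is an (endpoint, endpoint) tuple. `angle_of[v] = i` selects the angle formed
# by rotation[v][i] and its cyclic successor.

def is_angle_cover(rotation, angle_of):
    if set(angle_of) != set(rotation):        # every vertex must pick one angle
        return False
    covered = set()
    for v, i in angle_of.items():
        inc = rotation[v]
        if len(inc) < 2:                      # no angle exists at degree < 2
            return False
        covered.add(frozenset(inc[i % len(inc)]))
        covered.add(frozenset(inc[(i + 1) % len(inc)]))
    all_edges = {frozenset(e) for inc in rotation.values() for e in inc}
    return covered == all_edges               # each edge lies in a chosen angle

# Example: the 4-cycle a-b-c-d; every vertex has degree 2, so its angle
# consists of both incident edges, and any choice covers all four edges.
rotation = {
    'a': [('a', 'b'), ('a', 'd')],
    'b': [('b', 'a'), ('b', 'c')],
    'c': [('c', 'b'), ('c', 'd')],
    'd': [('d', 'c'), ('d', 'a')],
}
print(is_angle_cover(rotation, {'a': 0, 'b': 0, 'c': 0, 'd': 0}))  # True
```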




Read also

The Turaev-Viro invariants are a powerful family of topological invariants for distinguishing between different 3-manifolds. They are invaluable for mathematical software, but current algorithms to compute them require exponential time. The invariants are parameterised by an integer $r \geq 3$. We resolve the question of complexity for $r=3$ and $r=4$, giving simple proofs that computing Turaev-Viro invariants for $r=3$ is polynomial time, but for $r=4$ is #P-hard. Moreover, we give an explicit fixed-parameter tractable algorithm for arbitrary $r$, and show through concrete implementation and experimentation that this algorithm is practical---and indeed preferable---to the prior state of the art for real computation.
Ailon et al. [SICOMP11] proposed self-improving algorithms for sorting and Delaunay triangulation (DT) when the input instances $x_1,\dots,x_n$ follow some unknown \emph{product distribution}. That is, $x_i$ comes from a fixed unknown distribution $\mathsf{D}_i$, and the $x_i$s are drawn independently. After spending $O(n^{1+\varepsilon})$ time in a learning phase, the subsequent expected running time is $O((n+H)/\varepsilon)$, where $H \in \{H_\mathrm{S}, H_\mathrm{DT}\}$, and $H_\mathrm{S}$ and $H_\mathrm{DT}$ are the entropies of the distributions of the sorting and DT output, respectively. In this paper, we allow dependence among the $x_i$s under the \emph{group product distribution}. There is a hidden partition of $[1,n]$ into groups; the $x_i$s in the $k$-th group are fixed unknown functions of the same hidden variable $u_k$; and the $u_k$s are drawn from an unknown product distribution. We describe self-improving algorithms for sorting and DT under this model when the functions that map $u_k$ to $x_i$s are well-behaved. After an $O(\mathrm{poly}(n))$-time training phase, we achieve $O(n + H_\mathrm{S})$ and $O(n\alpha(n) + H_\mathrm{DT})$ expected running times for sorting and DT, respectively, where $\alpha(\cdot)$ is the inverse Ackermann function.
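As a rough illustration of the two-phase self-improving framework described above (not the authors' algorithm; the shared quantile boundaries and all names here are simplifying assumptions), the sketch below learns bucket boundaries from training instances and then sorts fresh instances by routing values to buckets; the real algorithms use per-coordinate, entropy-optimal search structures to reach the stated $O(n + H_\mathrm{S})$ bound.

```python
# Much-simplified sketch of the self-improving idea (illustration only).
import bisect
import random

def learn(training_instances, n_buckets):
    # Learning phase: pool training values and take quantiles as bucket
    # boundaries (the real algorithm builds a structure per coordinate).
    pool = sorted(v for inst in training_instances for v in inst)
    step = max(1, len(pool) // n_buckets)
    return pool[step::step]

def sort_instance(x, boundaries):
    # Operation phase: route each value to its bucket, then sort the buckets,
    # which stay small in expectation for inputs from the learned distribution.
    buckets = [[] for _ in range(len(boundaries) + 1)]
    for v in x:
        buckets[bisect.bisect_left(boundaries, v)].append(v)
    out = []
    for b in buckets:
        out.extend(sorted(b))
    return out

random.seed(0)
training = [[random.gauss(i, 1.0) for i in range(20)] for _ in range(50)]
boundaries = learn(training, n_buckets=20)
test = [random.gauss(i, 1.0) for i in range(20)]
print(sort_instance(test, boundaries) == sorted(test))  # True
```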
The classic Ham-Sandwich theorem states that for any $d$ measurable sets in $\mathbb{R}^d$, there is a hyperplane that bisects them simultaneously. An extension by Barany, Hubard, and Jeronimo [DCG 2008] states that if the sets are convex and \emph{well-separated}, then for any given $\alpha_1, \dots, \alpha_d \in [0, 1]$, there is a unique oriented hyperplane that cuts off a respective fraction $\alpha_1, \dots, \alpha_d$ from each set. Steiger and Zhao [DCG 2010] proved a discrete analogue of this theorem, which we call the \emph{$\alpha$-Ham-Sandwich theorem}. They gave an algorithm to find the hyperplane in time $O(n (\log n)^{d-3})$, where $n$ is the total number of input points. The computational complexity of this search problem in high dimensions is open, quite unlike the complexity of the Ham-Sandwich problem, which is now known to be PPA-complete (Filos-Ratsikas and Goldberg [STOC 2019]). Recently, Fearnley, Gordon, Mehta, and Savani [ICALP 2019] introduced a new sub-class of CLS (Continuous Local Search) called \emph{Unique End-of-Potential Line} (UEOPL). This class captures problems in CLS that have unique solutions. We show that for the $\alpha$-Ham-Sandwich theorem, the search problem of finding the dividing hyperplane lies in UEOPL. This gives the first non-trivial containment of the problem in a complexity class and places it in the company of classic search problems such as finding the fixed point of a contraction map, the unique sink orientation problem and the $P$-matrix linear complementarity problem.
We initiate a study of algorithms with a focus on the computational complexity of individual elements, and introduce the fragile complexity of comparison-based algorithms as the maximal number of comparisons any individual element takes part in. We give a number of upper and lower bounds on the fragile complexity for fundamental problems, including Minimum, Selection, Sorting and Heap Construction. The results include both deterministic and randomized upper and lower bounds, and demonstrate a separation between the two settings for a number of problems. The depth of a comparator network is a straightforward upper bound on the worst-case fragile complexity of the corresponding fragile algorithm. We prove that fragile complexity is a different and strictly easier property than the depth of comparator networks, in the sense that for some problems a fragile complexity equal to the best network depth can be achieved with less total work and that with randomization, even a lower fragile complexity is possible.
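As a small illustration of the measure itself (not taken from the paper; the tournament and the name `tournament_min` are my own example), the sketch below counts how many comparisons each element takes part in during a tournament-style Minimum, whose fragile complexity is at most $\lceil \log_2 n \rceil$.

```python
# Illustration only: measure the fragile complexity of one run, i.e. the
# largest number of comparisons any single element participates in.
from collections import defaultdict

def tournament_min(values):
    comparisons = defaultdict(int)            # element index -> comparison count
    alive = list(range(len(values)))
    while len(alive) > 1:
        survivors = []
        for i in range(0, len(alive) - 1, 2):
            a, b = alive[i], alive[i + 1]
            comparisons[a] += 1
            comparisons[b] += 1
            survivors.append(a if values[a] <= values[b] else b)
        if len(alive) % 2 == 1:               # odd element advances for free
            survivors.append(alive[-1])
        alive = survivors
    fragile = max(comparisons.values(), default=0)
    return values[alive[0]], fragile

# For n = 8 the winner is compared once per round, so this run's fragile
# complexity is ceil(log2 8) = 3.
print(tournament_min([5, 3, 8, 1, 9, 2, 7, 4]))  # (1, 3)
```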
$\newcommand{\eps}{\varepsilon}$In learning theory, the VC dimension $d$ of a concept class $C$ is the most common way to measure its richness. In the PAC model $$\Theta\Big(\frac{d}{\eps} + \frac{\log(1/\delta)}{\eps}\Big)$$ examples are necessary and sufficient for a learner to output, with probability $1-\delta$, a hypothesis $h$ that is $\eps$-close to the target concept $c$. In the related agnostic model, where the samples need not come from a $c\in C$, we know that $$\Theta\Big(\frac{d}{\eps^2} + \frac{\log(1/\delta)}{\eps^2}\Big)$$ examples are necessary and sufficient to output a hypothesis $h\in C$ whose error is at most $\eps$ worse than the best concept in $C$. Here we analyze quantum sample complexity, where each example is a coherent quantum state. This model was introduced by Bshouty and Jackson, who showed that quantum examples are more powerful than classical examples in some fixed-distribution settings. However, Atici and Servedio, improved by Zhang, showed that in the PAC setting, quantum examples cannot be much more powerful: the required number of quantum examples is $$\Omega\Big(\frac{d^{1-\eta}}{\eps} + d + \frac{\log(1/\delta)}{\eps}\Big) \mbox{ for all } \eta > 0.$$ Our main result is that quantum and classical sample complexity are in fact equal up to constant factors in both the PAC and agnostic models. We give two approaches. The first is a fairly simple information-theoretic argument that yields the above two classical bounds and yields the same bounds for quantum sample complexity up to a $\log(d/\eps)$ factor. We then give a second approach that avoids the log-factor loss, based on analyzing the behavior of the Pretty Good Measurement on the quantum state identification problems that correspond to learning. This shows classical and quantum sample complexity are equal up to constant factors.
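To see the gap between the two classical bounds quoted above, here is a toy numerical comparison (illustration only; the hidden constants are dropped and the function names are my own).

```python
# Illustration only: the quoted Theta-bounds with constants dropped, showing
# the 1/eps (PAC) versus 1/eps^2 (agnostic) dependence.
from math import ceil, log

def pac_samples(d, eps, delta):
    return ceil(d / eps + log(1 / delta) / eps)

def agnostic_samples(d, eps, delta):
    return ceil(d / eps**2 + log(1 / delta) / eps**2)

for eps in (0.1, 0.01):
    print(eps, pac_samples(10, eps, 0.05), agnostic_samples(10, eps, 0.05))
# The paper's point is that allowing coherent quantum examples changes neither
# bound by more than a constant factor.
```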