
On the Complexity of Minimum Labeling Alignment of Two Genomes

Published by: Riccardo Dondi
Publication date: 2012
Research field: Informatics Engineering
Paper language: English





In this note we investigate the complexity of the Minimum Label Alignment problem and show that it is APX-hard; in particular, it admits no polynomial-time approximation scheme unless P = NP.


Read also

Bobkov, Houdré, and the last author introduced a Poincaré-type functional parameter, $\lambda_\infty$, of a graph $G$. They related $\lambda_\infty$ to the \emph{vertex expansion} of the graph via a Cheeger-type inequality, analogous to the inequality relating the spectral gap of the graph, $\lambda_2$, to its \emph{edge expansion}. While $\lambda_2$ can be computed efficiently, the computational complexity of $\lambda_\infty$ has remained an open question. Following the work of the second author with Raghavendra and Vempala, wherein the complexity of $\lambda_\infty$ was related to the so-called small-set expansion (SSE) problem, it has been believed that computing $\lambda_\infty$ is a hard problem. We confirm this conjecture by proving that computing $\lambda_\infty$ is indeed NP-hard, even for weighted trees. Our gadget further proves NP-hardness of computing the \emph{spread constant} of a weighted tree, i.e., a geometric measure of the graph introduced by Alon, Boppana, and Spencer in the context of deriving an asymptotic isoperimetric inequality for Cartesian products of graphs. We conclude this case by providing a fully polynomial time approximation scheme. We further study a generalization of the spread constant in the machine learning literature, namely the \emph{maximum variance embedding} problem. For trees, we provide fast combinatorial algorithms that avoid solving a semidefinite relaxation of the problem. On the other hand, for general graphs, we propose a randomized projection method that can outperform the optimal orthogonal projection, i.e., PCA, classically used for rounding of the optimum lifted solution (to the SDP relaxation) of the problem.
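For reference, the classical Cheeger-type inequality that the $\lambda_\infty$ result above parallels relates the spectral gap $\lambda_2$ of the normalized Laplacian to the conductance (edge expansion) $\phi(G)$; a standard formulation, with constants depending on the normalization, is $\lambda_2 / 2 \le \phi(G) \le \sqrt{2\lambda_2}$. The Bobkov–Houdré–Tetali result referenced above is, up to constants, the analogue with $\lambda_\infty$ in place of $\lambda_2$ and vertex expansion in place of $\phi(G)$.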
Motivation: The ability to generate massive amounts of sequencing data continues to overwhelm the processing capability of existing algorithms and compute infrastructures. In this work, we explore the use of hardware/software co-design and hardware acceleration to significantly reduce the execution time of short sequence alignment, a crucial step in analyzing sequenced genomes. We introduce Shouji, a highly-parallel and accurate pre-alignment filter that remarkably reduces the need for computationally-costly dynamic programming algorithms. The first key idea of our proposed pre-alignment filter is to provide high filtering accuracy by correctly detecting all common subsequences shared between two given sequences. The second key idea is to design a hardware accelerator that adopts modern FPGA (Field-Programmable Gate Array) architectures to further boost the performance of our algorithm. Results: Shouji significantly improves the accuracy of pre-alignment filtering by up to two orders of magnitude compared to the state-of-the-art pre-alignment filters, GateKeeper and SHD. Our FPGA-based accelerator is up to three orders of magnitude faster than the equivalent CPU implementation of Shouji. Using a single FPGA chip, we benchmark the benefits of integrating Shouji with five state-of-the-art sequence aligners, designed for different computing platforms. The addition of Shouji as a pre-alignment step reduces the execution time of the five state-of-the-art sequence aligners by up to 18.8x. Shouji can be adapted for any bioinformatics pipeline that performs sequence alignment for verification. Unlike most existing methods that aim to accelerate sequence alignment, Shouji does not sacrifice any of the aligner capabilities, as it does not modify or replace the alignment step. Availability: https://github.com/CMU-SAFARI/Shouji
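As an illustration of the pre-alignment idea (not Shouji's actual common-subsequence algorithm, which runs in FPGA hardware), a minimal software filter based on the classical q-gram lemma can reject candidate pairs before the costly dynamic-programming step; the function name and parameters below are illustrative only:

from collections import Counter

def qgram_filter(read: str, ref: str, max_edits: int, q: int = 4) -> bool:
    """Return True if the pair *may* align within max_edits edits.

    Based on the q-gram lemma: two length-n strings within edit
    distance e share at least (n - q + 1) - q*e q-grams, so a pair
    sharing fewer q-grams can be rejected without running alignment.
    """
    n = min(len(read), len(ref))
    threshold = (n - q + 1) - q * max_edits
    if threshold <= 0:          # filter cannot reject; pass to the aligner
        return True
    read_grams = Counter(read[i:i + q] for i in range(len(read) - q + 1))
    ref_grams = Counter(ref[i:i + q] for i in range(len(ref) - q + 1))
    shared = sum((read_grams & ref_grams).values())
    return shared >= threshold

# A rejected pair never reaches the costly dynamic-programming step.
print(qgram_filter("ACGTACGTACGT", "ACGTACGAACGT", max_edits=2))  # True
print(qgram_filter("AAAAAAAAAAAA", "CGCGCGCGCGCG", max_edits=1))  # False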
Simon Apers, Troy Lee (2020)
The minimum cut problem in an undirected and weighted graph $G$ is to find the minimum total weight of a set of edges whose removal disconnects $G$. We completely characterize the quantum query and time complexity of the minimum cut problem in the adjacency matrix model. If $G$ has $n$ vertices and edge weights at least $1$ and at most $\tau$, we give a quantum algorithm to solve the minimum cut problem using $\tilde O(n^{3/2}\sqrt{\tau})$ queries and time. Moreover, for every integer $1 \le \tau \le n$ we give an example of a graph $G$ with edge weights $1$ and $\tau$ such that solving the minimum cut problem on $G$ requires $\Omega(n^{3/2}\sqrt{\tau})$ many queries to the adjacency matrix of $G$. These results contrast with the classical randomized case where $\Omega(n^2)$ queries to the adjacency matrix are needed in the worst case even to decide if an unweighted graph is connected or not. In the adjacency array model, when $G$ has $m$ edges the classical randomized complexity of the minimum cut problem is $\tilde\Theta(m)$. We show that the quantum query and time complexity are $\tilde O(\sqrt{mn\tau})$ and $\tilde O(\sqrt{mn\tau} + n^{3/2})$, respectively, where again the edge weights are between $1$ and $\tau$. For dense graphs we give lower bounds on the quantum query complexity of $\Omega(n^{3/2})$ for $\tau > 1$ and $\Omega(\tau n)$ for any $1 \leq \tau \leq n$. Our query algorithm uses a quantum algorithm for graph sparsification by Apers and de Wolf (FOCS 2020) and results on the structure of near-minimum cuts by Kawarabayashi and Thorup (STOC 2015) and Rubinstein, Schramm and Weinberg (ITCS 2018). Our time-efficient implementation builds on Karger's tree packing technique (STOC 1996).
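For context, the classical randomized baseline behind Karger's tree-packing technique is random edge contraction; the following is a minimal unweighted sketch of that classical idea (illustrative only, not the quantum algorithm above):

import random

def karger_min_cut(edges, n, trials=None):
    """Estimate the min cut of an undirected, unweighted multigraph by
    repeated random edge contraction (classical Karger, not the
    tree-packing refinement). edges: list of (u, v) with u, v in range(n).
    """
    trials = trials or n * n  # more trials raise the success probability
    best = float("inf")
    for _ in range(trials):
        parent = list(range(n))

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]  # path compression
                x = parent[x]
            return x

        # Contracting edges in uniformly random order until two
        # super-vertices remain is a standard Karger implementation.
        remaining = n
        pool = edges[:]
        random.shuffle(pool)
        for u, v in pool:
            if remaining == 2:
                break
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
                remaining -= 1
        cut = sum(1 for u, v in edges if find(u) != find(v))
        best = min(best, cut)
    return best

# 4-cycle: the minimum cut removes 2 edges.
print(karger_min_cut([(0, 1), (1, 2), (2, 3), (3, 0)], n=4))  # 2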
We revisit the complexity of online computation in the cell probe model. We consider a class of problems where we are first given a fixed pattern or vector $F$ of $n$ symbols and then one symbol arrives at a time in a stream. After each symbol has arrived we must output some function of $F$ and the $n$-length suffix of the arriving stream. Cell probe bounds of $\Omega(\delta \lg n / w)$ have previously been shown for both convolution and Hamming distance in this setting, where $\delta$ is the size of a symbol in bits and $w \in \Omega(\lg n)$ is the cell size in bits. However, when $\delta$ is a constant, as it is in many natural situations, these previous results no longer give us non-trivial bounds. We introduce a new lop-sided information transfer proof technique which enables us to prove meaningful lower bounds even for constant size input alphabets. We use our new framework to prove an amortised cell probe lower bound of $\Omega(\lg^2 n / (w \cdot \lg\lg n))$ time per arriving bit for an online version of a well studied problem known as pattern matching with address errors. This is the first non-trivial cell probe lower bound for any online problem on bit streams that still holds when the cell sizes are large. We also show the same bound for online convolution conditioned on a new combinatorial conjecture related to Toeplitz matrices.
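To make the online setting concrete, here is a naive streaming reference sketch (names are illustrative; the point is the problem interface, not an optimal algorithm): after each arriving symbol it outputs the inner product of the fixed pattern $F$ with the current length-$n$ suffix, at $O(n)$ work per arrival.

from collections import deque

def online_convolution(F, stream):
    """Naive online convolution: after each arriving symbol, yield the
    inner product of the fixed pattern F with the length-n suffix of
    the stream (zero-padded before n symbols have arrived). Alignment
    conventions for convolution vary; this fixes one for illustration.
    """
    n = len(F)
    window = deque([0] * n, maxlen=n)  # sliding suffix buffer
    for symbol in stream:
        window.append(symbol)          # oldest symbol falls off the left
        yield sum(f * s for f, s in zip(F, window))

F = [1, 2, 3]
print(list(online_convolution(F, [1, 0, 1, 1])))  # [3, 2, 4, 5]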
Michael Lampis (2021)
A stable cut of a graph is a cut whose weight cannot be increased by changing the side of a single vertex. Equivalently, a cut is stable if all vertices have the (weighted) majority of their neighbors on the other side. In this paper we study Min Stable Cut, the problem of finding a stable cut of minimum weight, which is closely related to the Price of Anarchy of the Max Cut game. Since this problem is NP-hard, we study its complexity on graphs of low treewidth, low degree, or both. We show that the problem is weakly NP-hard on severely restricted trees, so bounding treewidth alone cannot make it tractable. We match this with a pseudo-polynomial DP algorithm running in time $(\Delta \cdot W)^{O(tw)} n^{O(1)}$, where $tw$ is the treewidth, $\Delta$ the maximum degree, and $W$ the maximum weight. On the other hand, bounding $\Delta$ is also not enough, as the problem is NP-hard for unweighted graphs of bounded degree. We therefore parameterize Min Stable Cut by both $tw + \Delta$ and obtain an FPT algorithm running in time $2^{O(\Delta tw)} (n + \log W)^{O(1)}$. Our main result is to provide a reduction showing that both aforementioned algorithms are essentially optimal, even if we replace treewidth by pathwidth: if there exists an algorithm running in $(nW)^{o(pw)}$ or $2^{o(\Delta pw)} (n + \log W)^{O(1)}$, then the ETH is false. Complementing this, we show that we can obtain an FPT approximation scheme parameterized by treewidth, if we consider almost-stable solutions. Motivated by these mostly negative results, we consider Unweighted Min Stable Cut. Here our results already imply a much faster exact algorithm running in time $\Delta^{O(tw)} n^{O(1)}$. We show that this is also probably essentially optimal: an algorithm running in $n^{o(pw)}$ would contradict the ETH.
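The definition of stability suggests the natural local search underlying the connection to the Max Cut game: flip any vertex whose own side holds the weighted majority of its neighborhood. Each flip strictly increases the cut weight, so with integer weights the process terminates at a stable cut. A minimal illustrative sketch follows; note it returns *a* stable cut, whereas finding the minimum-weight one is the NP-hard problem above.

def local_search_stable_cut(adj, side=None):
    """Local search for a stable cut.

    adj: dict mapping vertex -> dict of neighbour -> positive integer weight.
    side: optional initial 0/1 assignment; defaults to all on side 0.
    While some vertex has a strict weighted majority of its neighbours
    on its own side, flip it; each flip increases the cut weight, so
    the loop terminates at a stable cut.
    """
    side = side or {v: 0 for v in adj}
    improved = True
    while improved:
        improved = False
        for v in adj:
            same = sum(w for u, w in adj[v].items() if side[u] == side[v])
            other = sum(w for u, w in adj[v].items() if side[u] != side[v])
            if same > other:        # flipping v strictly increases the cut
                side[v] = 1 - side[v]
                improved = True
    return side

# Triangle with unit weights: any stable cut isolates one vertex.
adj = {0: {1: 1, 2: 1}, 1: {0: 1, 2: 1}, 2: {0: 1, 1: 1}}
print(local_search_stable_cut(adj))  # e.g. {0: 1, 1: 0, 2: 0}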