It is an open question whether there exists a polynomial-time algorithm for computing the rotation distance between pairs of extended ordered binary trees. The problem of computing the rotation distance between an arbitrary pair of trees (S, T) can be efficiently reduced to the problem of computing the rotation distance between a difficult pair of trees (S', T'), for which there is no known first step that is guaranteed to begin a minimal-length path. Of interest, therefore, is how to sample such difficult pairs of trees of a fixed size. We show that this can be done efficiently, and present an algorithm that runs in time $O(n^4)$.
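To make the rotation operation concrete, here is a minimal Python sketch of left and right rotations at a node of an ordered binary tree (the Node class and function names are illustrative, not taken from the paper); the rotation distance between S and T is the minimum number of such rotations needed to transform S into T.

    class Node:
        """A node of an ordered binary tree; absent children are None."""
        def __init__(self, left=None, right=None):
            self.left = left
            self.right = right

    def rotate_left(node):
        """Left rotation at `node`: its right child becomes the subtree root.
        Returns the new root of the rotated subtree."""
        pivot = node.right
        node.right = pivot.left   # pivot's left subtree moves under the old root
        pivot.left = node
        return pivot

    def rotate_right(node):
        """Right rotation at `node`, the inverse of rotate_left."""
        pivot = node.left
        node.left = pivot.right
        pivot.right = node
        return pivot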
We consider the problem of partial order production: arrange the elements of an unknown totally ordered set T into a target partially ordered set S, by comparing a minimum number of pairs in T. Special cases include sorting by comparisons, selection, multiple selection, and heap construction. We give an algorithm performing ITLB + o(ITLB) + O(n) comparisons in the worst case. Here, n denotes the size of the ground sets, and ITLB denotes a natural information-theoretic lower bound on the number of comparisons needed to produce the target partial order. Our approach is to replace the target partial order by a weak order (that is, a partial order with a layered structure) extending it, without increasing the information-theoretic lower bound too much. We then solve the problem by applying an efficient multiple selection algorithm. The overall complexity of our algorithm is polynomial. This answers a question of Yao (SIAM J. Comput. 18, 1989). We base our analysis on the entropy of the target partial order, a quantity that can be efficiently computed and provides a good estimate of the information-theoretic lower bound.
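As a concrete illustration (ours, not taken from the paper), the information-theoretic lower bound for producing a target partial order P on n elements is commonly taken as $\log_2(n!/e(P))$, where $e(P)$ is the number of linear extensions of P; the brute-force Python sketch below evaluates it for tiny instances.

    from itertools import permutations
    from math import factorial, log2

    def linear_extensions(n, relations):
        """Count linear extensions of the partial order on {0, ..., n-1}
        given as a set of pairs (a, b) meaning a must come before b."""
        count = 0
        for perm in permutations(range(n)):
            pos = {v: i for i, v in enumerate(perm)}
            if all(pos[a] < pos[b] for a, b in relations):
                count += 1
        return count

    def itlb(n, relations):
        """Information-theoretic lower bound log2(n! / e(P)) on comparisons."""
        return log2(factorial(n)) - log2(linear_extensions(n, relations))

    # Selecting the minimum of 4 elements: 0 must precede 1, 2 and 3.
    print(itlb(4, {(0, 1), (0, 2), (0, 3)}))   # log2(24 / 6) = 2.0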
Manacher's algorithm has been shown to be optimal for the longest palindromic substring problem. Many existing implementations of this algorithm, however, require the in-memory construction of an augmented string that is twice as long as the original string. Although this preprocessing step is in widespread use, it is neither economical nor necessary. We present a more efficient implementation of Manacher's algorithm, based on index mapping, that makes the string augmentation step obsolete.
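The key point can be sketched as follows (a minimal Python illustration with our own naming, not the paper's code): positions of the virtual augmented string "#c1#c2#...#cn#" are mapped back to indices of the original string on the fly, so the augmented string is never materialized.

    def longest_palindrome(s):
        """Longest palindromic substring via Manacher's algorithm, using an
        index mapping instead of building the augmented '#'-interleaved string."""
        n = len(s)
        if n == 0:
            return ""
        m = 2 * n + 1                     # length of the virtual augmented string

        def at(i):
            """Character of the virtual augmented string at position i."""
            return '#' if i % 2 == 0 else s[i // 2]

        p = [0] * m                       # palindrome radii in the virtual string
        center = right = 0
        for i in range(m):
            if i < right:
                p[i] = min(right - i, p[2 * center - i])
            while (i - p[i] - 1 >= 0 and i + p[i] + 1 < m
                   and at(i - p[i] - 1) == at(i + p[i] + 1)):
                p[i] += 1                 # expand around center i
            if i + p[i] > right:
                center, right = i, i + p[i]

        best = max(range(m), key=lambda i: p[i])
        length = p[best]
        start = (best - length) // 2      # map back to an index of s
        return s[start:start + length]

    print(longest_palindrome("babad"))    # -> "bab", a longest palindromic substring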
We consider the problem of sampling from a density of the form $p(x) \propto \exp(-f(x) - g(x))$, where $f: \mathbb{R}^d \rightarrow \mathbb{R}$ is a smooth and strongly convex function and $g: \mathbb{R}^d \rightarrow \mathbb{R}$ is a convex and Lipschitz function. We propose a new algorithm based on the Metropolis-Hastings framework, and prove that it mixes to within TV distance $\varepsilon$ of the target density in at most $O(d \log(d/\varepsilon))$ iterations. This guarantee extends previous results on sampling from distributions with smooth log densities ($g = 0$) to the more general composite non-smooth case, with the same mixing time up to a multiple of the condition number. Our method is based on a novel proximal-based proposal distribution that can be efficiently computed for a large class of non-smooth functions $g$.
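As an illustration of the general recipe (our own sketch; the proposal below is a generic proximal-gradient choice, not necessarily the paper's construction), the Python code runs Metropolis-Hastings for the composite target with $f(x) = \|x\|^2/2$ and $g(x) = \lambda \|x\|_1$, whose proximal map is soft-thresholding.

    import numpy as np

    def soft_threshold(v, t):
        """Proximal map of t * ||.||_1 (soft-thresholding)."""
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def proximal_mh(n_iters, x0, eta, lam, rng):
        """Metropolis-Hastings for p(x) ~ exp(-f(x) - g(x)) with
        f(x) = ||x||^2 / 2 and g(x) = lam * ||x||_1, using the proposal
        N(prox_{eta*g}(x - eta*grad f(x)), 2*eta*I) (an illustrative choice)."""
        d = x0.size

        def log_target(x):
            return -0.5 * np.dot(x, x) - lam * np.abs(x).sum()

        def prop_mean(x):
            return soft_threshold(x - eta * x, eta * lam)   # grad f(x) = x here

        def log_q(y, x):
            diff = y - prop_mean(x)             # unnormalized Gaussian log-density
            return -np.dot(diff, diff) / (4.0 * eta)

        x, samples = x0.copy(), []
        for _ in range(n_iters):
            y = prop_mean(x) + np.sqrt(2.0 * eta) * rng.standard_normal(d)
            log_alpha = (log_target(y) + log_q(x, y)) - (log_target(x) + log_q(y, x))
            if np.log(rng.uniform()) < log_alpha:           # accept/reject step
                x = y
            samples.append(x.copy())
        return np.array(samples)

    rng = np.random.default_rng(0)
    samples = proximal_mh(5000, np.zeros(2), eta=0.1, lam=1.0, rng=rng)
    print(samples[1000:].mean(axis=0))                      # roughly zero by symmetry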
With the rapid development of computer science, new sorting algorithms continue to emerge. Inspired by the decrease-and-conquer method, we propose a new sorting algorithm, Ultimately Heapsort. The algorithm consists of two parts: building a heap and adjusting a heap. Asymptotic and experimental analysis show that the time complexity of our algorithm is O(n log n) under any condition, and its space complexity is only O(1). In these respects, our algorithm compares favorably with previous sorting algorithms.
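For reference, the standard build-heap/adjust-heap structure of an in-place heapsort looks as follows (a generic Python sketch, not the authors' implementation):

    def heapsort(a):
        """In-place heapsort: build a max-heap, then repeatedly swap the root
        to the end and sift down. O(n log n) time, O(1) extra space."""
        n = len(a)

        def sift_down(i, size):
            while True:
                left, right, largest = 2 * i + 1, 2 * i + 2, i
                if left < size and a[left] > a[largest]:
                    largest = left
                if right < size and a[right] > a[largest]:
                    largest = right
                if largest == i:
                    return
                a[i], a[largest] = a[largest], a[i]
                i = largest

        for i in range(n // 2 - 1, -1, -1):    # part 1: build the heap
            sift_down(i, n)
        for end in range(n - 1, 0, -1):        # part 2: adjust the heap repeatedly
            a[0], a[end] = a[end], a[0]
            sift_down(0, end)

    data = [5, 2, 9, 1, 7]
    heapsort(data)
    print(data)                                # [1, 2, 5, 7, 9]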
The log-concave maximum likelihood estimator (MLE) problem asks: given a set of points $X_1, \ldots, X_n \in \mathbb{R}^d$, which log-concave density maximizes their likelihood? We present a characterization of the log-concave MLE that leads to an algorithm with runtime $\mathrm{poly}(n, d, \frac{1}{\epsilon}, r)$ for computing a log-concave distribution whose log-likelihood is at most $\epsilon$ less than that of the MLE, where $r$ is a parameter of the problem that is bounded by the $\ell_2$ norm of the vector of log-likelihoods of the MLE evaluated at $X_1, \ldots, X_n$.
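For reference, the underlying optimization problem is the standard log-concave MLE (the notation $\mathcal{F}_d$ for the class of log-concave densities on $\mathbb{R}^d$ is ours):

    $\hat{f}_n \in \arg\max_{f \in \mathcal{F}_d} \sum_{i=1}^{n} \log f(X_i)$

The algorithm returns a log-concave density whose log-likelihood $\sum_{i=1}^{n} \log f(X_i)$ is within $\epsilon$ of this maximum.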