
An Efficient Sorting Algorithm - Ultimate Heapsort (UHS)

Added by Feiyang Chen
Publication date: 2019
Language: English





Motivated by developments in computing theory, new sorting algorithms continue to emerge. Inspired by the decrease-and-conquer method, we propose a new sorting algorithm, Ultimate Heapsort (UHS). The algorithm consists of two parts: building a heap and adjusting a heap. Through asymptotic and experimental analysis, we show that the time complexity of our algorithm is O(n log n) in all cases, while its space complexity is only O(1). By these measures, our algorithm is superior to previous sorting algorithms.
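The paper's UHS construction is not reproduced on this page. As a reference point for the two phases the abstract names, the following is a minimal sketch of classic in-place heapsort (build a max-heap, then repeatedly swap out the maximum and re-adjust), not the authors' variant:

```python
def sift_down(a, start, end):
    """Restore the max-heap property for the subtree rooted at `start`,
    considering only indices up to `end` (inclusive)."""
    root = start
    while 2 * root + 1 <= end:
        child = 2 * root + 1
        if child + 1 <= end and a[child] < a[child + 1]:
            child += 1                       # pick the larger child
        if a[root] < a[child]:
            a[root], a[child] = a[child], a[root]
            root = child
        else:
            return

def heapsort(a):
    n = len(a)
    # Phase 1: build a max-heap bottom-up, O(n).
    for start in range(n // 2 - 1, -1, -1):
        sift_down(a, start, n - 1)
    # Phase 2: move the max to the end and re-adjust, O(n log n) total.
    for end in range(n - 1, 0, -1):
        a[0], a[end] = a[end], a[0]
        sift_down(a, 0, end - 1)
    return a

print(heapsort([5, 2, 9, 1, 7, 3]))  # [1, 2, 3, 5, 7, 9]
```

Both bounds quoted in the abstract are visible in this skeleton: the two loops perform O(n log n) comparisons in the worst case, and the sort works in place with O(1) auxiliary space.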




Related research

384 - Shoupu Wan 2020
Manacher's algorithm has been shown to be optimal for the longest palindromic substring problem. Many existing implementations of this algorithm, however, require in-memory construction of an augmented string that is twice as long as the original string. Although this preprocessing has found widespread use, it is neither economical nor necessary. We present a more efficient implementation of Manacher's algorithm based on index mapping that makes the string-augmentation step obsolete.
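To make the index-mapping idea concrete, here is a hedged sketch (not Wan's implementation): the augmented string t of length 2n + 1 is kept virtual, with index i mapping to '#' when i is even and to s[i // 2] when i is odd, so symmetric positions can be compared without ever building t:

```python
def longest_palindrome(s):
    """Manacher's algorithm without materializing the augmented string.

    Virtual augmented string t of length m = 2n + 1:
      t[i] = '#'        if i is even
      t[i] = s[i // 2]  if i is odd
    Two positions symmetric about a center share parity, so they match
    iff both are virtual '#' or the mapped characters are equal.
    """
    n = len(s)
    m = 2 * n + 1

    def eq(a, b):
        if a % 2 == 0:
            return True                  # both positions are '#'
        return s[a // 2] == s[b // 2]

    p = [0] * m                          # palindrome radius per center
    c = r = 0                            # rightmost known palindrome
    best_len = best_center = 0
    for i in range(m):
        if i < r:
            p[i] = min(r - i, p[2 * c - i])   # mirror shortcut
        while (i - p[i] - 1 >= 0 and i + p[i] + 1 < m
               and eq(i - p[i] - 1, i + p[i] + 1)):
            p[i] += 1
        if i + p[i] > r:
            c, r = i, i + p[i]
        if p[i] > best_len:
            best_len, best_center = p[i], i
    start = (best_center - best_len) // 2
    return s[start:start + best_len]

print(longest_palindrome("cabbad"))  # abba
```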
We consider the problem of partial order production: arrange the elements of an unknown totally ordered set T into a target partially ordered set S, by comparing a minimum number of pairs in T. Special cases include sorting by comparisons, selection, multiple selection, and heap construction. We give an algorithm performing ITLB + o(ITLB) + O(n) comparisons in the worst case. Here, n denotes the size of the ground set, and ITLB denotes a natural information-theoretic lower bound on the number of comparisons needed to produce the target partial order. Our approach is to replace the target partial order by a weak order (that is, a partial order with a layered structure) extending it, without increasing the information-theoretic lower bound too much. We then solve the problem by applying an efficient multiple selection algorithm. The overall complexity of our algorithm is polynomial. This answers a question of Yao (SIAM J. Comput. 18, 1989). We base our analysis on the entropy of the target partial order, a quantity that can be efficiently computed and provides a good estimate of the information-theoretic lower bound.
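The paper's worst-case-optimal algorithm is involved; the toy sketch below (hypothetical helper names, randomized rather than worst-case) only illustrates the multiple-selection step it relies on: partitioning an array into the layers of a weak order by recursively selecting at the layer boundaries.

```python
import random

def _partition(a, lo, hi, pivot_index):
    """Lomuto partition of a[lo:hi] around a[pivot_index];
    returns the pivot's final position."""
    a[pivot_index], a[hi - 1] = a[hi - 1], a[pivot_index]
    pivot, store = a[hi - 1], lo
    for i in range(lo, hi - 1):
        if a[i] < pivot:
            a[store], a[i] = a[i], a[store]
            store += 1
    a[store], a[hi - 1] = a[hi - 1], a[store]
    return store

def _quickselect(a, lo, hi, k):
    """Place the k-th smallest element of a[lo:hi] (0-based) at
    index lo + k, with no larger element to its left."""
    while hi - lo > 1:
        pos = _partition(a, lo, hi, random.randrange(lo, hi))
        if pos == lo + k:
            return
        if pos < lo + k:
            k -= pos + 1 - lo
            lo = pos + 1
        else:
            hi = pos

def multiple_select(a, lo, hi, ranks):
    """Arrange a[lo:hi] into weak-order layers: afterwards, for each
    boundary r in the sorted list `ranks` (0 < r < hi - lo), the slice
    a[lo:lo+r] holds the r smallest elements. Splitting on the median
    boundary gives expected O(n log s) comparisons for s boundaries."""
    if not ranks or hi - lo <= 1:
        return
    mid = ranks[len(ranks) // 2]
    _quickselect(a, lo, hi, mid - 1)
    multiple_select(a, lo, lo + mid, [r for r in ranks if r < mid])
    multiple_select(a, lo + mid, hi, [r - mid for r in ranks if r > mid])

data = [5, 3, 8, 1, 9, 2, 7]
multiple_select(data, 0, len(data), [2, 5])
print(data)  # e.g. [1, 2, | 5, 3, 7, | 9, 8]: layers of sizes 2, 3, 2
```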
91 - Sean Cleary, Roland Maio 2020
It is an open question whether there exists a polynomial-time algorithm for computing the rotation distances between pairs of extended ordered binary trees. The problem of computing the rotation distance between an arbitrary pair of trees, (S, T), can be efficiently reduced to the problem of computing the rotation distance between a difficult pair of trees (S', T'), for which there is no known first step that is guaranteed to begin a minimal-length path. Of interest, therefore, is how to sample such difficult pairs of trees of a fixed size. We show that it is possible to do so efficiently, and present such an algorithm that runs in time $O(n^4)$.
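The sampling procedure itself is in the paper; for readers unfamiliar with the metric, the sketch below (a hypothetical minimal Node type, not the authors' code) shows the single rotation move over which rotation distance is defined:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def rotate_left(root: Node) -> Node:
    """One left rotation at `root` (root.right must exist). Rotation
    distance between two trees of the same size is the minimum number
    of such local moves transforming one into the other."""
    pivot = root.right
    root.right = pivot.left
    pivot.left = root
    return pivot

def rotate_right(root: Node) -> Node:
    """Inverse move: one right rotation at `root` (root.left must exist)."""
    pivot = root.left
    root.left = pivot.right
    pivot.right = root
    return pivot
```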
The log-concave maximum likelihood estimator (MLE) problem asks: for a set of points $X_1, \ldots, X_n \in \mathbb{R}^d$, which log-concave density maximizes their likelihood? We present a characterization of the log-concave MLE that leads to an algorithm with runtime $\mathrm{poly}(n, d, \frac{1}{\epsilon}, r)$ that computes a log-concave distribution whose log-likelihood is at most $\epsilon$ less than that of the MLE, where $r$ is a parameter of the problem bounded by the $\ell_2$ norm of the vector of log-likelihoods of the MLE evaluated at $X_1, \ldots, X_n$.
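For reference, the underlying optimization problem (the standard definition of the log-concave MLE, not this paper's characterization of it) is

$$\hat{f}_n \in \arg\max_{f} \; \frac{1}{n} \sum_{i=1}^{n} \log f(X_i), \qquad \text{over densities } f = e^{\varphi} \text{ on } \mathbb{R}^d \text{ with } \varphi \text{ concave.}$$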
In this paper we study the problem of maintaining the strongly connected components of a graph in the presence of failures. In particular, we show that given a directed graph $G=(V,E)$ with $n=|V|$ and $m=|E|$, and an integer $k \geq 1$, there is an algorithm that computes, for any set $F$ of size at most $k$, the strongly connected components of the graph $G \setminus F$ in $O(2^{k} n \log^2 n)$ time. The running time of our algorithm is almost optimal, since just outputting the SCCs of $G \setminus F$ takes $\Omega(n)$ time. The algorithm uses a data structure of size $O(2^{k} n^2)$ that is computed in a polynomial-time preprocessing phase. Our result is obtained using a new observation on the relation between strongly connected components (SCCs) and reachability. More specifically, one of the main building blocks in our result is a restricted variant of the problem in which we only compute the strongly connected components that intersect a certain path. Restricting our attention to a path allows us to implicitly compute reachability between the path vertices and the rest of the graph in time that depends logarithmically rather than linearly on the size of the path. This observation alone, however, is not enough, since we also need an efficient way to represent the strongly connected components using paths. For this purpose we use a mixture of classical techniques, such as the heavy-path decomposition of Sleator and Tarjan and the classical depth-first search algorithm. Although these are by now standard techniques, we are not aware of any previous use of them in the context of dynamic maintenance of SCCs, and we therefore expect our new insights and this mixture of new and old techniques to be of independent interest.
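As a point of comparison only (not the paper's data structure), the naive baseline recomputes the SCCs of $G \setminus F$ from scratch on every query, e.g. with Kosaraju's two-pass algorithm in O(n + m) time per query; the paper's $O(2^k n \log^2 n)$ query bound removes the dependence on m. A hedged sketch of that baseline, assuming vertex failures and vertices labeled 0..n-1:

```python
from collections import defaultdict

def sccs_after_failures(n, edges, failed):
    """Naive baseline: SCCs of G \\ F via Kosaraju's algorithm, O(n + m)."""
    failed = set(failed)
    g, rg = defaultdict(list), defaultdict(list)
    for u, v in edges:
        if u not in failed and v not in failed:
            g[u].append(v)
            rg[v].append(u)
    # Pass 1: iterative DFS on G to compute finish order.
    order, seen = [], set(failed)
    for s in range(n):
        if s in seen:
            continue
        seen.add(s)
        stack = [(s, iter(g[s]))]
        while stack:
            u, it = stack[-1]
            v = next(it, None)
            if v is None:
                order.append(u)
                stack.pop()
            elif v not in seen:
                seen.add(v)
                stack.append((v, iter(g[v])))
    # Pass 2: sweep the reverse graph in decreasing finish order.
    comp, sccs = {}, []
    for s in reversed(order):
        if s in comp:
            continue
        comp[s], cur, stack = len(sccs), [], [s]
        while stack:
            u = stack.pop()
            cur.append(u)
            for v in rg[u]:
                if v not in comp:
                    comp[v] = len(sccs)
                    stack.append(v)
        sccs.append(cur)
    return sccs

print(sccs_after_failures(4, [(0, 1), (1, 0), (1, 2), (2, 3), (3, 2)], [1]))
# [[0], [3, 2]]: removing vertex 1 splits the graph
```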
