
Average-Case Analysis of Online Topological Ordering

 Added by Tobias Friedrich
Publication date: 2008
Language: English





Many applications like pointer analysis and incremental compilation require maintaining a topological ordering of the nodes of a directed acyclic graph (DAG) under dynamic updates. All known algorithms for this problem are either only analyzed for worst-case insertion sequences or only evaluated experimentally on random DAGs. We present the first average-case analysis of online topological ordering algorithms. We prove an expected runtime of O(n^2 polylog(n)) under insertion of the edges of a complete DAG in a random order for the algorithms of Alpern et al. (SODA, 1990), Katriel and Bodlaender (TALG, 2006), and Pearce and Kelly (JEA, 2006). This is much less than the best known worst-case bound O(n^{2.75}) for this problem.
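As a hedged illustration of the kind of algorithm analyzed here, the following Python sketch follows the general structure of the Pearce-Kelly approach: when an inserted edge violates the current ordering, it searches the affected region and locally reorders it. The class and method names are our own, and the published algorithm differs in bookkeeping details.

```python
class PearceKelly:
    """Online topological ordering: keep a valid ordering of a DAG while
    edges arrive one at a time (sketch in the spirit of Pearce-Kelly)."""

    def __init__(self, n):
        self.adj = [[] for _ in range(n)]   # outgoing edges
        self.radj = [[] for _ in range(n)]  # incoming edges
        self.ord = list(range(n))           # node -> position in the ordering

    def insert_edge(self, x, y):
        """Insert edge x -> y; return False (and change nothing) on a cycle."""
        if x == y:
            return False
        if self.ord[x] > self.ord[y]:       # ordering violated: repair it
            lo, hi = self.ord[y], self.ord[x]
            fwd = []                        # nodes reachable from y inside [lo, hi]
            if not self._dfs_fwd(y, hi, x, fwd, set()):
                return False                # y already reaches x: cycle
            bwd = []                        # nodes reaching x inside [lo, hi]
            self._dfs_bwd(x, lo, bwd, set())
            # Reassign the freed positions: backward set first, then forward
            # set, preserving the relative order inside each group.
            nodes = sorted(bwd, key=self.ord.__getitem__) + \
                    sorted(fwd, key=self.ord.__getitem__)
            for v, p in zip(nodes, sorted(self.ord[v] for v in nodes)):
                self.ord[v] = p
        self.adj[x].append(y)
        self.radj[y].append(x)
        return True

    def _dfs_fwd(self, v, hi, target, out, seen):
        seen.add(v)
        out.append(v)
        for w in self.adj[v]:
            if w == target:
                return False
            if w not in seen and self.ord[w] < hi:
                if not self._dfs_fwd(w, hi, target, out, seen):
                    return False
        return True

    def _dfs_bwd(self, v, lo, out, seen):
        seen.add(v)
        out.append(v)
        for w in self.radj[v]:
            if w not in seen and self.ord[w] > lo:
                self._dfs_bwd(w, lo, out, seen)
```

The quantity bounded in the average-case analysis is exactly this local repair work, summed over a random sequence of edge insertions.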



Related research

We study the satisfiability of ordering constraint satisfaction problems (CSPs) above average. We prove the conjecture of Gutin, van Iersel, Mnich, and Yeo that the satisfiability above average of ordering CSPs of arity $k$ is fixed-parameter tractable for every $k$. Previously, this was only known for $k=2$ and $k=3$. We also generalize this result to more general classes of CSPs, including CSPs with predicates defined by linear inequalities. To obtain our results, we prove a new Bonami-type inequality for the Efron-Stein decomposition. The inequality applies to functions defined on arbitrary product probability spaces. In contrast to other variants of the Bonami Inequality, it does not depend on the mass of the smallest atom in the probability space. We believe that this inequality is of independent interest.
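As a concrete illustration for arity $k=2$ (our own example, not taken from the paper): in Maximum Acyclic Subgraph each constraint requires one variable to precede another, and a uniformly random ordering satisfies each constraint with probability $1/2$, hence $m/2$ of the $m$ constraints in expectation. Satisfiability above average then asks whether some ordering satisfies at least $m/2 + t$ constraints, with $t$ as the parameter.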
Anthony Kleerekoper, 2016
The majority problem is a special case of the heavy hitters problem. Given a collection of coloured balls, the task is to identify the majority colour or to report that no such colour exists. Whilst the special case of two colours has been well studied, the average-case performance for arbitrarily many colours has not. In this paper, we present a heuristic analysis of the average-case performance of three deterministic algorithms that appear in the literature. We empirically validate our analysis with large-scale simulations.
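For the two-colour case mentioned above, the classic deterministic approach is the Boyer-Moore majority vote, which also solves strict majority over arbitrarily many colours in two passes. Whether it is among the three algorithms analyzed in the paper is not stated here, so the Python sketch below is purely illustrative.

```python
from typing import Iterable, Optional, TypeVar

T = TypeVar("T")

def majority(balls: Iterable[T]) -> Optional[T]:
    """Boyer-Moore majority vote: return the majority colour if one exists
    (i.e. it appears more than half the time), else None.
    Two passes, O(1) extra counters."""
    balls = list(balls)
    candidate, count = None, 0
    for colour in balls:              # pass 1: find the only possible candidate
        if count == 0:
            candidate, count = colour, 1
        elif colour == candidate:
            count += 1
        else:
            count -= 1
    if candidate is not None and balls.count(candidate) * 2 > len(balls):
        return candidate              # pass 2: verify it really is a majority
    return None
```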
Though competitive analysis is often a very good tool for the analysis of online algorithms, sometimes it gives no insight and sometimes it gives counter-intuitive results. Much work has gone into exploring other performance measures, in particular targeted at what seems to be the core problem with competitive analysis: the performance of an online algorithm is compared to that of a too powerful adversary. We consider a new approach to restricting the power of the adversary, by requiring that, when judging a given online algorithm, the optimal offline algorithm must perform as well as the online algorithm not just on the entire final request sequence, but also on every prefix of that sequence. This limits the adversary's usual advantage of being able to exploit that it knows the sequence continues beyond the current request. Through a collection of online problems, including machine scheduling, bin packing, dual bin packing, and seat reservation, we investigate the significance of this particular offline advantage.
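For reference, the standard notion being varied here (a textbook definition, not quoted from the paper): for a cost-minimization problem, an online algorithm ALG is $c$-competitive if there is a constant $b$ such that $\mathrm{ALG}(I) \le c \cdot \mathrm{OPT}(I) + b$ for every request sequence $I$, where $\mathrm{OPT}(I)$ is the optimal offline cost. The measure sketched above, as we read it, additionally requires the offline solution used in the comparison to do at least as well as ALG on every prefix of $I$, not only on $I$ itself.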
We consider an \emph{approximate} version of the trace reconstruction problem, where the goal is to recover an unknown string $s \in \{0,1\}^n$ from $m$ traces (each trace is generated independently by passing $s$ through a probabilistic insertion-deletion channel with rate $p$). We present a deterministic near-linear time algorithm for the average-case model, where $s$ is random, that uses only \emph{three} traces. It runs in near-linear time $\tilde O(n)$ and with high probability reports a string within edit distance $O(\epsilon p n)$ from $s$ for $\epsilon = \tilde O(p)$, which significantly improves over the straightforward bound of $O(pn)$. Technically, our algorithm computes a $(1+\epsilon)$-approximate median of the three input traces. To prove its correctness, our probabilistic analysis shows that an approximate median is indeed close to the unknown $s$. To achieve a near-linear time bound, we have to bypass the well-known dynamic programming algorithm that computes an optimal median in time $O(n^3)$.
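Since the channel is only described abstractly above, here is a minimal Python sketch of one common way to model an insertion-deletion channel with rate $p$; the paper's exact channel may differ in details such as where insertions occur.

```python
import random

def trace(s: str, p: float, rng: random.Random) -> str:
    """Generate one trace of the bit string s: independently for each
    position, insert a uniformly random bit before it with probability p,
    and delete the original bit with probability p.  (An assumed channel
    model for illustration only.)"""
    out = []
    for bit in s:
        if rng.random() < p:          # insertion before this position
            out.append(rng.choice("01"))
        if rng.random() >= p:         # keep the bit unless it is deleted
            out.append(bit)
    return "".join(out)

# Three independent traces of a random input string, as in the model above.
rng = random.Random(0)
s = "".join(rng.choice("01") for _ in range(1000))
traces = [trace(s, p=0.05, rng=rng) for _ in range(3)]
```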
We study the problems of testing isomorphism of polynomials, algebras, and multilinear forms. Our first main results are average-case algorithms for these problems. For example, we develop an algorithm that takes two cubic forms $f, g \in \mathbb{F}_q[x_1,\dots,x_n]$ and decides whether $f$ and $g$ are isomorphic in time $q^{O(n)}$ for most $f$. This average-case setting has direct practical implications, having been studied in multivariate cryptography since the 1990s. Our second result concerns the complexity of testing equivalence of alternating trilinear forms. This problem is of interest in both mathematics and cryptography. We show that this problem is polynomial-time equivalent to testing equivalence of symmetric trilinear forms, by showing that both are Tensor Isomorphism-complete (Grochow-Qiao, ITCS, 2021); therefore it is also equivalent to testing isomorphism of cubic forms over most fields.
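For context, the notion of isomorphism used for cubic forms is the standard one (stated here as background, with the caveat that the paper may phrase it slightly differently): $f, g \in \mathbb{F}_q[x_1,\dots,x_n]$ are isomorphic if there is an invertible matrix $A \in \mathrm{GL}_n(\mathbb{F}_q)$ such that $f(A\mathbf{x}) = g(\mathbf{x})$ as polynomials, i.e. $g$ arises from $f$ by an invertible linear change of variables.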