
Revisiting the Majority Problem: Average-Case Analysis with Arbitrarily Many Colours

Published by Anthony Kleerekoper
Publication date: 2016
Research field: Informatics Engineering
Paper language: English





The majority problem is a special case of the heavy hitters problem. Given a collection of coloured balls, the task is to identify the majority colour or state that no such colour exists. Whilst the special case of two colours has been well studied, the average-case performance for arbitrarily many colours has not. In this paper, we present a heuristic analysis of the average-case performance of three deterministic algorithms that appear in the literature. We empirically validate our analysis with large-scale simulations.
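The abstract does not name the three algorithms it analyses, but the textbook deterministic baseline for this task is the Boyer-Moore majority vote, which already copes with arbitrarily many colours: one pass maintains a candidate and a counter, and a second pass verifies the candidate. A minimal Python sketch (the function name is ours):

    def majority(balls):
        # Boyer-Moore majority vote: works for arbitrarily many colours.
        # Pass 1: maintain a candidate and a counter.
        candidate, count = None, 0
        for colour in balls:
            if count == 0:
                candidate, count = colour, 1
            elif colour == candidate:
                count += 1
            else:
                count -= 1
        # Pass 2: verify that the candidate really is a majority.
        if candidate is not None and balls.count(candidate) > len(balls) // 2:
            return candidate
        return None

    print(majority(["red", "blue", "red", "green", "red", "red"]))  # red
    print(majority(["red", "blue", "green"]))                       # None

The verification pass is what allows the algorithm to report that no majority colour exists.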




Read also

Many applications like pointer analysis and incremental compilation require maintaining a topological ordering of the nodes of a directed acyclic graph (DAG) under dynamic updates. All known algorithms for this problem are either only analyzed for worst-case insertion sequences or only evaluated experimentally on random DAGs. We present the first average-case analysis of online topological ordering algorithms. We prove an expected runtime of O(n^2 polylog(n)) under insertion of the edges of a complete DAG in a random order for the algorithms of Alpern et al. (SODA, 1990), Katriel and Bodlaender (TALG, 2006), and Pearce and Kelly (JEA, 2006). This is much less than the best known worst-case bound O(n^{2.75}) for this problem.
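For intuition, a minimal Python sketch in the spirit of the Pearce-Kelly algorithm, which repairs the order locally after each edge insertion (the data layout and helper names are our own simplification, not the paper's pseudocode):

    def dfs(adj, start, keep):
        # Iterative DFS restricted to nodes satisfying keep().
        seen, stack = set(), [start]
        while stack:
            v = stack.pop()
            if v in seen or not keep(v):
                continue
            seen.add(v)
            stack.extend(adj[v])
        return seen

    def add_edge(adj, radj, ord_, x, y):
        # Insert x -> y and restore a valid topological order locally.
        adj[x].add(y)
        radj[y].add(x)
        lb, ub = ord_[y], ord_[x]
        if lb > ub:   # ord_[x] < ord_[y]: the order is still valid
            return
        fwd = dfs(adj, y, lambda v: ord_[v] <= ub)    # reachable from y
        if x in fwd:
            raise ValueError("insertion would create a cycle")
        bwd = dfs(radj, x, lambda v: ord_[v] >= lb)   # nodes reaching x
        # Nodes reaching x must now precede nodes reachable from y;
        # reuse the same set of positions, smallest first.
        slots = sorted(ord_[v] for v in fwd | bwd)
        for v, s in zip(sorted(bwd, key=ord_.get) + sorted(fwd, key=ord_.get),
                        slots):
            ord_[v] = s

    adj = {v: set() for v in "abc"}
    radj = {v: set() for v in "abc"}
    ord_ = {"a": 0, "b": 1, "c": 2}
    add_edge(adj, radj, ord_, "c", "a")   # violates the order; gets repaired
    assert ord_["c"] < ord_["a"]

Only positions inside the affected region [ord(y), ord(x)] are touched, which is what makes the average-case cost low on random insertion orders.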
We consider an \emph{approximate} version of the trace reconstruction problem, where the goal is to recover an unknown string $s \in \{0,1\}^n$ from $m$ traces (each trace is generated independently by passing $s$ through a probabilistic insertion-deletion channel with rate $p$). We present a deterministic near-linear time algorithm for the average-case model, where $s$ is random, that uses only \emph{three} traces. It runs in near-linear time $\tilde{O}(n)$ and with high probability reports a string within edit distance $O(\epsilon p n)$ from $s$ for $\epsilon = \tilde{O}(p)$, which significantly improves over the straightforward bound of $O(pn)$. Technically, our algorithm computes a $(1+\epsilon)$-approximate median of the three input traces. To prove its correctness, our probabilistic analysis shows that an approximate median is indeed close to the unknown $s$. To achieve a near-linear time bound, we have to bypass the well-known dynamic programming algorithm that computes an optimal median in time $O(n^3)$.
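For intuition, a small sketch of one plausible insertion-deletion channel (our own simplified model; the paper's channel may differ in details such as the insertion alphabet and where insertions occur):

    import random

    def trace(s, p, rng=random):
        # Pass bit-string s through a simplified insertion-deletion channel:
        # before each symbol a uniform bit is inserted with probability p,
        # and the symbol itself is deleted with probability p.
        out = []
        for bit in s:
            if rng.random() < p:
                out.append(rng.choice("01"))   # insertion
            if rng.random() >= p:
                out.append(bit)                # symbol survives deletion
        return "".join(out)

    s = "".join(random.choice("01") for _ in range(32))
    traces = [trace(s, 0.1) for _ in range(3)]  # three traces, as in the paper
    print(s, traces)

The reconstruction task is to get edit-distance-close to $s$ from such traces alone; the paper's contribution is doing this deterministically from just three of them.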
We consider the sensitivity of algorithms for the maximum matching problem against edge and vertex modifications. Algorithms with low sensitivity are desirable because they are robust to edge failure or attack. In this work, we show a randomized $(1-\epsilon)$-approximation algorithm with worst-case sensitivity $O_{\epsilon}(1)$, which substantially improves upon the $(1-\epsilon)$-approximation algorithm of Varma and Yoshida (arXiv 2020) that obtains average sensitivity $n^{O(1/(1+\epsilon^2))}$, and show a deterministic $1/2$-approximation algorithm with sensitivity $\exp(O(\log^* n))$ for bounded-degree graphs. We show that any deterministic constant-factor approximation algorithm must have sensitivity $\Omega(\log^* n)$. Our results imply that randomized algorithms are strictly more powerful than deterministic ones in that the former can achieve sensitivity independent of $n$ whereas the latter cannot. We also show analogous results for vertex sensitivity, where we remove a vertex instead of an edge. As an application of our results, we give an algorithm for the online maximum matching with $O_{\epsilon}(n)$ total replacements in the vertex-arrival model. By comparison, Bernstein et al. (J. ACM 2019) gave an online algorithm that always outputs the maximum matching, but only for bipartite graphs and with $O(n \log n)$ total replacements. Finally, we introduce the notion of normalized weighted sensitivity, a natural generalization of sensitivity that accounts for the weights of deleted edges. We show that if all edges in a graph have polynomially bounded weight, then given a trade-off parameter $\alpha > 2$, there exists an algorithm that outputs a $\frac{1}{4\alpha}$-approximation to the maximum weighted matching in $O(m \log_{\alpha} n)$ time, with normalized weighted sensitivity $O(1)$. See paper for full abstract.
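To make the notion concrete, here is a small sketch (our own illustration, not the paper's algorithm) that measures the edge sensitivity of the classic deterministic greedy 1/2-approximate matching:

    def greedy_matching(edges):
        # Deterministic 1/2-approximation: scan edges in a fixed order and
        # take any edge whose endpoints are both still unmatched.
        matched, matching = set(), set()
        for u, v in edges:
            if u not in matched and v not in matched:
                matching.add((u, v))
                matched.update((u, v))
        return matching

    def edge_sensitivity(edges):
        # Largest change in the output when a single edge is removed,
        # measured as the size of the symmetric difference.
        base = greedy_matching(edges)
        return max(len(base ^ greedy_matching([f for f in edges if f != e]))
                   for e in edges)

    path = [(0, 1), (1, 2), (2, 3), (3, 4)]
    print(edge_sensitivity(path))  # 4: removing (0, 1) shifts the whole matching

Greedy is brittle in exactly this sense: deleting one edge of a path can flip every matched edge, and the paper's lower bound says any deterministic constant-factor algorithm must incur sensitivity $\Omega(\log^* n)$.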
We study the problems of testing isomorphism of polynomials, algebras, and multilinear forms. Our first main results are average-case algorithms for these problems. For example, we develop an algorithm that takes two cubic forms $f, g \in \mathbb{F}_q[x_1, \dots, x_n]$, and decides whether $f$ and $g$ are isomorphic in time $q^{O(n)}$ for most $f$. This average-case setting has direct practical implications, having been studied in multivariate cryptography since the 1990s. Our second result concerns the complexity of testing equivalence of alternating trilinear forms. This problem is of interest in both mathematics and cryptography. We show that this problem is polynomial-time equivalent to testing equivalence of symmetric trilinear forms, by showing that they are both Tensor Isomorphism-complete (Grochow-Qiao, ITCS, 2021), and is therefore equivalent to testing isomorphism of cubic forms over most fields.
Digital Elevation Models (DEMs) are important datasets for modelling the line of sight, such as for radio signals, sound waves and human vision. These are commonly analyzed using rotational sweep algorithms. However, such algorithms require large numbers of memory accesses to 2D arrays which, despite being regular, result in poor data locality in memory. Here, we propose a new methodology called skewed Digital Elevation Model (sDEM), which substantially improves the locality of memory accesses and increases the inherent parallelism involved in the computation of rotational sweep-based algorithms. In particular, sDEM applies a data restructuring technique before accessing the memory and performing the computation. To demonstrate the high efficiency of sDEM, we use the problem of total viewshed computation as a case study, considering different implementations for single-core, multi-core, single-GPU and multi-GPU platforms. We conducted two experiments to compare sDEM with (i) the most commonly used geographic information systems (GIS) software and (ii) the state-of-the-art algorithm. In the first experiment, sDEM is on average 8.8x faster than current GIS software, which, owing to their limitations, can consider only a few points. In the second experiment, sDEM is 827.3x faster than the state-of-the-art algorithm in the best case.
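To illustrate the general idea of restructuring data before a sweep (this toy skew is our own example, not the paper's exact sDEM transform), consider gathering anti-diagonal sweep lines into array columns:

    import numpy as np

    def skew_rows(dem):
        # Illustration only, not the paper's exact sDEM transform:
        # shifting row i right by i turns the 45-degree (anti-diagonal)
        # sweep lines of the original grid into columns of the skewed
        # array, so a sweep along such lines can read memory with a
        # regular, cache-friendly stride.
        n, m = dem.shape
        out = np.zeros((n, m + n - 1), dtype=dem.dtype)
        for i in range(n):
            out[i, i:i + m] = dem[i]
        return out

    dem = np.arange(16).reshape(4, 4)
    print(skew_rows(dem))

The toy version only shows the locality effect for one sweep direction; per the abstract, sDEM applies a more general restructuring before the rotational sweep and additionally exploits the parallelism this exposes.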