Ailon et al. [SICOMP11] proposed self-improving algorithms for sorting and Delaunay triangulation (DT) when the input instances $x_1,\cdots,x_n$ follow some unknown \emph{product distribution}. That is, $x_i$ comes from a fixed unknown distribution $\mathsf{D}_i$, and the $x_i$s are drawn independently. After spending $O(n^{1+\varepsilon})$ time in a learning phase, the subsequent expected running time is $O((n+H)/\varepsilon)$, where $H \in \{H_\mathrm{S}, H_\mathrm{DT}\}$, and $H_\mathrm{S}$ and $H_\mathrm{DT}$ are the entropies of the distributions of the sorting and DT outputs, respectively. In this paper, we allow dependence among the $x_i$s under the \emph{group product distribution}. There is a hidden partition of $[1,n]$ into groups; the $x_i$s in the $k$-th group are fixed unknown functions of the same hidden variable $u_k$; and the $u_k$s are drawn from an unknown product distribution. We describe self-improving algorithms for sorting and DT under this model when the functions that map $u_k$ to the $x_i$s are well-behaved. After an $O(\mathrm{poly}(n))$-time training phase, we achieve $O(n + H_\mathrm{S})$ and $O(n\alpha(n) + H_\mathrm{DT})$ expected running times for sorting and DT, respectively, where $\alpha(\cdot)$ is the inverse Ackermann function.
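To make the input model concrete, the following is a minimal, hypothetical Python sketch (not the paper's algorithm): it draws instances from a toy group product distribution in which each $x_i$ is a fixed linear function of its group's hidden variable $u_k$, learns global bucket boundaries from samples in a training phase, and then sorts by bucketing and concatenating, in the spirit of the bucket-based self-improving sorter of Ailon et al. All names (GROUPS, NUM_BUCKETS, draw_instance, ...) and the choice of linear functions are assumptions made for illustration; the real algorithm additionally builds per-element search structures to locate buckets in entropy-optimal time.

```python
import bisect
import random

n = 8
# Hidden partition of [0, n) into groups; x_i in group k is a fixed,
# unknown (here: linear) function of a single hidden variable u_k.
GROUPS = [[0, 1, 2, 3], [4, 5, 6, 7]]
SLOPES = [random.uniform(0.5, 2.0) for _ in range(n)]
OFFSETS = [random.uniform(-1.0, 1.0) for _ in range(n)]

def draw_instance():
    """Draw one instance x_1..x_n from the toy group product distribution."""
    x = [0.0] * n
    for k, group in enumerate(GROUPS):
        u_k = random.gauss(k, 1.0)                 # the u_k's are independent
        for i in group:
            x[i] = SLOPES[i] * u_k + OFFSETS[i]    # x_i = f_i(u_k)
    return x

# --- Training phase: learn shared bucket boundaries from sample instances ---
NUM_BUCKETS = 4
samples = [draw_instance() for _ in range(200)]
pooled = sorted(v for s in samples for v in s)
boundaries = [pooled[(j * len(pooled)) // NUM_BUCKETS]
              for j in range(1, NUM_BUCKETS)]

# --- Operation phase: bucket each element, then sort within buckets.
#     When the output distribution has low entropy, buckets are small
#     and the per-instance work stays close to linear. ---
def self_improving_sort(x):
    buckets = [[] for _ in range(NUM_BUCKETS)]
    for xi in x:
        buckets[bisect.bisect_left(boundaries, xi)].append(xi)
    out = []
    for b in buckets:
        out.extend(sorted(b))
    return out

print(self_improving_sort(draw_instance()))
```

In this sketch the boundaries are global quantiles of the pooled samples, so concatenating the sorted buckets yields a correctly sorted output regardless of how well the distribution was learned; the learned structure only affects speed, not correctness.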
Consider a graph with a rotation system, namely, for every vertex, a circular ordering of the incident edges. Given such a graph, an angle cover maps every vertex to a pair of consecutive edges in the ordering -- an angle -- such that each edge parti
We study self-improving sorting with hidden partitions. Our result is an optimal algorithm which runs in expected time $O(H(\pi(I)) + n)$, where $I$ is the given input which contains $n$ elements to be sorted, $\pi(I)$ is the output which are the ranks of all
This short survey of recent work in tile self-assembly discusses the use of simulation to classify and separate the computational and expressive power of self-assembly models. The journey begins with the result that there is a single universal tile s
Self-supervised feature representations have been shown to be useful for supervised classification, few-shot learning, and adversarial robustness. We show that features obtained using self-supervised learning are comparable to, or better than, superv
In a nutshell, we show that polynomials and nested polytopes are topologically, algebraically and algorithmically equivalent. Given two polytopes $A \subseteq B$ and a number $k$, the Nested Polytope Problem (NPP) asks if there exists a polytope $X$ on $k$ v