People usually regard algorithms as more abstract than the programs that implement them. The natural way to formalize this idea is that algorithms are equivalence classes of programs with respect to a suitable equivalence relation. We argue that no such equivalence relation exists.
We introduce the \emph{idemetric} property, which formalises the idea that most nodes in a graph have similar distances between them, and which turns out to be quite standard amongst small-world network models. Modulo reasonable sparsity assumptions, we are then able to show that a strong form of idemetricity is actually equivalent to a very weak expander condition (PUMP). This gives a direct way of producing short proofs that small-world network models such as the Watts-Strogatz model are strongly idemetric (for a wide range of parameters), and also provides further evidence that being idemetric is a common property. We then consider how satisfaction of the idemetric property is relevant to algorithm design. For idemetric graphs we observe, for example, that a single breadth-first search provides a solution to the all-pairs shortest paths problem, so long as one is prepared to accept paths which are of stretch close to 2 with high probability. Since we are able to show that Kleinberg's model is idemetric, these results contrast nicely with the well-known negative results of Kleinberg concerning efficient decentralised algorithms for finding short paths: for precisely the same model in which Kleinberg's negative results hold, we are able to show that very efficient (and decentralised) algorithms exist if one allows for reasonable preprocessing. For deterministic distributed routing algorithms we are also able to show that idemetric graphs require less routing information than the worst case in order to achieve stretch less than 3 with high probability: while $\Omega(n^2)$ routing information is required in the worst case for stretch strictly less than 3 on almost all pairs, for idemetric graphs the total routing information required is $O(n\log(n))$.
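To make the single-BFS idea concrete, here is a minimal Python sketch (our own illustration, not the paper's algorithm; the toy graph and the choice of hub node are assumptions). All pairwise distances are estimated by routing through one BFS root, which by the triangle inequality never underestimates the true distance and, on an idemetric graph, stays within stretch close to 2 with high probability.

```python
# Illustrative sketch (not the paper's algorithm): estimate all-pairs
# shortest-path distances from a single BFS, as suggested by the
# idemetric property. The toy graph and hub choice are assumptions.
from collections import deque

def bfs_distances(adj, root):
    """Standard BFS returning hop distances from root."""
    dist = {root: 0}
    queue = deque([root])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def estimate_distance(dist_from_hub, u, v):
    """Upper bound on d(u, v) via the hub c: d(u, c) + d(c, v).
    By the triangle inequality this never underestimates, and on an
    idemetric graph it is within stretch close to 2 w.h.p."""
    return dist_from_hub[u] + dist_from_hub[v]

# Toy usage on a small undirected graph given as an adjacency dict.
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
d = bfs_distances(adj, root=0)
print(estimate_distance(d, 1, 4))  # prints 4; the true distance is 2
```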
Trotter and Erdős found conditions for when a directed $m \times n$ grid graph on a torus is Hamiltonian. We consider the analogous graphs on a two-holed torus, and study their Hamiltonicity. We find an $\mathcal{O}(n^4)$ algorithm to determine the Hamiltonicity of one of these graphs and an $\mathcal{O}(\log(n))$ algorithm to find the number of diagonals, which are sets of vertices that force the directions of edges in any Hamiltonian cycle. We also show that there is a periodicity pattern in the graphs' Hamiltonicities if one of the sides of the grid is fixed; and we completely classify which graphs are Hamiltonian in the cases where $n=m$, $n=2$, the $m \times n$ graph has $1$ diagonal, or the $\frac{m}{2} \times \frac{n}{2}$ graph has $1$ diagonal.
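For the single-holed torus case that Trotter and Erdős settled, their criterion is simple to check computationally. The sketch below is our own illustrative code, stating the 1978 criterion as we recall it (Hamiltonicity of the directed product of cycles $C_m \times C_n$); it is not the $\mathcal{O}(n^4)$ two-holed-torus algorithm of this paper.

```python
# A hedged sketch of the Trotter-Erdos criterion for the torus case
# (as we recall the 1978 statement; our own illustrative code, not
# the paper's two-holed-torus algorithm).
from math import gcd

def torus_grid_is_hamiltonian(m, n):
    """C_m x C_n (directed) is Hamiltonian iff there exist positive
    integers d1, d2 with d1 + d2 = gcd(m, n), gcd(d1, m) = 1 and
    gcd(d2, n) = 1."""
    d = gcd(m, n)
    return any(gcd(d1, m) == 1 and gcd(d - d1, n) == 1
               for d1 in range(1, d))

print(torus_grid_is_hamiltonian(2, 2))  # True
print(torus_grid_is_hamiltonian(2, 3))  # False: gcd(2, 3) = 1
```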
Two-step phase-shifting interferometry has been a hot topic in recent years. We present a comparative study of 12 representative self-tuning algorithms for two-step phase-shifting interferometry. We evaluate the performance of these algorithms by estimating the phase step of synthetic and experimental fringe patterns using 3 different normalization processes, in order to retrieve the background, the amplitude modulation and the noise: Gabor Filter Banks (GFB), Deep Neural Networks (DNNs) and the Hilbert-Huang Transform (HHT). We present variants of state-of-the-art phase step estimation algorithms that use GFB and DNNs as normalization preprocessing, as well as the use of a robust estimator, the median, to estimate the phase step. We present experimental results comparing the combinations of the normalization processes and the two-step phase-shifting algorithms. Our study demonstrates that the quality of the phase retrieved from two-step interferograms depends more on the normalization process than on the phase step estimation method.
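As a concrete illustration of what a phase step estimator does, the following sketch implements one standard correlation-based estimator on normalized fringes (our own assumption; it is not claimed to be one of the 12 algorithms compared in the study): for $I_1 = \cos\varphi$ and $I_2 = \cos(\varphi + \delta)$, the spatial average $\langle I_1 I_2\rangle / \sqrt{\langle I_1^2\rangle\langle I_2^2\rangle}$ approximates $\cos\delta$ when $\varphi$ spans many fringes.

```python
# A minimal sketch of one standard phase-step estimator for two
# normalized fringe patterns I1 = cos(phi), I2 = cos(phi + delta).
# Illustrative correlation-based estimator, not one of the 12
# algorithms from the study; the synthetic phase below is assumed.
import numpy as np

def estimate_phase_step(i1, i2):
    """For zero-mean normalized fringes, <I1*I2>/sqrt(<I1^2><I2^2>)
    approximates cos(delta) when phi covers many fringes."""
    num = np.mean(i1 * i2)
    den = np.sqrt(np.mean(i1**2) * np.mean(i2**2))
    return np.arccos(np.clip(num / den, -1.0, 1.0))

# Synthetic test: a tilted phase with a true step of 1.2 rad.
x, y = np.meshgrid(np.linspace(0, 1, 256), np.linspace(0, 1, 256))
phi = 40 * x + 25 * y          # many fringes across the field
delta = 1.2
i1, i2 = np.cos(phi), np.cos(phi + delta)
print(estimate_phase_step(i1, i2))  # close to 1.2
```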
In this paper, extending the work of Milena Radnović and Serge Tabachnikov, we establish conditions for two different non-symmetric norms to define the same billiard reflection law.
Due to their highly luminous nature, gamma-ray bursts (GRBs) are useful tools for studying the early Universe (up to $z = 10$). We consider whether the available subset of Swift high-redshift GRBs is unusual when compared to analogous simulations of a bright low-redshift sample. By simulating data from the Burst Alert Telescope (BAT; Barthelmy et al. 2005), we obtain the light curves of these bright bursts over an extensive range of redshifts, revealing complicated evolution in properties of the prompt emission such as $T_{90}$.
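The two basic transformations underlying such simulations, cosmological time dilation and dimming with luminosity distance, can be sketched in a few lines of Python (a toy light curve and the Planck18 cosmology are assumptions here; the actual BAT simulations also account for band shifting and the detector response).

```python
# A hedged sketch of the two basic transformations used when moving a
# burst's light curve to higher redshift: cosmological time dilation
# and dimming with luminosity distance. Illustrative only (toy light
# curve, Planck18 cosmology assumed), not the BAT simulation pipeline;
# k-corrections and detector response are omitted.
import numpy as np
from astropy.cosmology import Planck18

def redshift_lightcurve(t, flux, z_from, z_to):
    """Stretch times by (1 + z_to)/(1 + z_from) and scale flux by the
    ratio of squared luminosity distances."""
    t_new = t * (1 + z_to) / (1 + z_from)
    dl_from = Planck18.luminosity_distance(z_from)
    dl_to = Planck18.luminosity_distance(z_to)
    scale = float((dl_from / dl_to) ** 2)  # dimensionless ratio
    return t_new, flux * scale

# Toy pulse observed at z = 0.5, moved out to z = 6.
t = np.linspace(0, 50, 500)                 # seconds
flux = np.exp(-0.5 * ((t - 10) / 3) ** 2)   # Gaussian pulse
t6, f6 = redshift_lightcurve(t, flux, 0.5, 6.0)
```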