This paper studies new properties of the front and back ends of a sorting network, and illustrates the utility of these in the search for new bounds on optimal sorting networks. Search focuses first on the outsides of the network and then on the inner part. All previous works focus only on properties of the front end of networks and on how to apply these to break symmetries in the search. The new, outside-in, properties help shed light on how sorting networks sort, and facilitate the computation of new bounds on optimal sorting networks. We present new parallel sorting networks for 17 to 20 inputs. For 17, 19, and 20 inputs these networks are faster than the previously known best networks. For 17 inputs, the new sorting network is shown to be optimal in the sense that no sorting network using fewer layers exists.
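To make the notions of layer and depth used above concrete, here is a minimal illustrative sketch (the 4-input network below is a standard textbook example, not one of the new 17-20 input networks from the paper): a sorting network is a fixed sequence of compare-exchange operations, grouped into layers of comparators acting on disjoint channels; all comparators in a layer can fire in parallel, and the depth of the network is its number of layers.

#include <stdio.h>

typedef struct { int lo, hi; } comparator;

/* Place the smaller of the two channel values on the lower-indexed channel. */
static void compare_exchange(int *v, comparator c) {
    if (v[c.lo] > v[c.hi]) { int t = v[c.lo]; v[c.lo] = v[c.hi]; v[c.hi] = t; }
}

int main(void) {
    /* A standard 4-input network: 5 comparators arranged in 3 layers,
       so its parallel running time (depth) is 3. */
    comparator layer1[] = { {0,1}, {2,3} };
    comparator layer2[] = { {0,2}, {1,3} };
    comparator layer3[] = { {1,2} };
    comparator *layers[] = { layer1, layer2, layer3 };
    int sizes[] = { 2, 2, 1 };

    int v[4] = { 3, 1, 4, 2 };
    for (int l = 0; l < 3; l++)            /* layers execute one after another */
        for (int i = 0; i < sizes[l]; i++) /* comparators within a layer are independent */
            compare_exchange(v, layers[l][i]);

    printf("%d %d %d %d\n", v[0], v[1], v[2], v[3]);  /* prints: 1 2 3 4 */
    return 0;
}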
This paper shows an application of the theory of sorting networks to facilitate the synthesis of optimized general-purpose sorting libraries. Standard sorting libraries are often based on combinations of the classic Quicksort algorithm with insertion sort applied as the base case for small fixed numbers of inputs. Unrolling the code for the base case by ignoring loop conditions eliminates branching and results in code which is equivalent to a sorting network. This enables the application of further program transformations based on sorting network optimizations, and eventually the synthesis of code from sorting networks. We show that when considering only the number of comparisons and swaps, theory predicts no real advantage for this approach. However, significant speed-ups are obtained when taking advantage of instruction-level parallelism and non-branching conditional assignment instructions, both of which are common in modern CPU architectures. We provide empirical evidence that using code synthesized from efficient sorting networks as the base case for Quicksort libraries results in significant real-world speed-ups.
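As a hedged sketch of the branch-free base-case idea described above (the function names and the 3-input example are mine, not taken from the paper): each compare-exchange is written with conditional expressions, which modern compilers typically lower to conditional-move or min/max instructions rather than branches, and unrolling a small sorting network yields straight-line code whose independent comparators the CPU can overlap.

#include <stdio.h>

/* Branch-free compare-exchange: typically compiled to cmov/min/max. */
static inline void cmp_exchange(int *a, int *b) {
    int lo = *a < *b ? *a : *b;
    int hi = *a < *b ? *b : *a;
    *a = lo;
    *b = hi;
}

/* Unrolled 3-input sorting network: 3 comparators, no loops, no branches.
   This is the kind of code that could replace insertion sort as the
   Quicksort base case for small inputs. */
static void sort3(int *v) {
    cmp_exchange(&v[0], &v[1]);
    cmp_exchange(&v[1], &v[2]);
    cmp_exchange(&v[0], &v[1]);
}

int main(void) {
    int v[3] = { 9, 2, 5 };
    sort3(v);
    printf("%d %d %d\n", v[0], v[1], v[2]);  /* prints: 2 5 9 */
    return 0;
}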
In recent work, we formalized the theory of optimal-size sorting networks with the goal of extracting a verified checker for the large-scale computer-generated proof that 25 comparisons are optimal when sorting 9 inputs, which required more than a decade of CPU time and produced 27 GB of proof witnesses. The checker uses an untrusted oracle based on these witnesses and is able to verify the smaller case of 8 inputs within a couple of days, but it did not scale to the full proof for 9 inputs. In this paper, we describe several non-trivial optimizations of the algorithm in the checker, obtained by appropriately changing the formalization and capitalizing on the symbiosis with an adequate implementation of the oracle. We provide experimental evidence of orders of magnitude improvements to both runtime and memory footprint for 8 inputs, and actually manage to check the full proof for 9 inputs.
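The following is a minimal sketch of the untrusted-oracle pattern, under the assumption (mine, not spelled out above) that a single proof witness is a channel permutation claiming that one comparator network subsumes another: the checker never trusts the oracle, but re-verifies on all 0/1 inputs that the permuted outputs of the first network are contained in the outputs of the second.

#include <stdbool.h>
#include <stdio.h>

#define N 4            /* number of channels (9 in the real proof) */
#define NVEC (1 << N)  /* number of 0/1 input vectors */

typedef struct { int lo, hi; } comparator;

/* Apply a comparator network to a 0/1 vector packed into the low N bits. */
static unsigned apply(const comparator *c, int len, unsigned x) {
    for (int i = 0; i < len; i++) {
        unsigned a = (x >> c[i].lo) & 1u, b = (x >> c[i].hi) & 1u;
        if (a > b) x ^= (1u << c[i].lo) | (1u << c[i].hi);  /* exchange the two bits */
    }
    return x;
}

/* Record the set of outputs of a network on all 0/1 inputs. */
static void outputs(const comparator *c, int len, bool out[NVEC]) {
    for (unsigned x = 0; x < NVEC; x++) out[apply(c, len, x)] = true;
}

/* Verify the oracle's claim: perm(outputs(A)) is a subset of outputs(B). */
static bool check_subsumption(const comparator *a, int la,
                              const comparator *b, int lb, const int perm[N]) {
    bool outA[NVEC] = { false }, outB[NVEC] = { false };
    outputs(a, la, outA);
    outputs(b, lb, outB);
    for (unsigned x = 0; x < NVEC; x++) {
        if (!outA[x]) continue;
        unsigned y = 0;                                   /* relabel channels by perm */
        for (int i = 0; i < N; i++) y |= ((x >> i) & 1u) << perm[i];
        if (!outB[y]) return false;                       /* witness rejected */
    }
    return true;
}

int main(void) {
    comparator a[] = { {0,1}, {2,3} }, b[] = { {0,2}, {1,3} };
    int perm[N] = { 0, 2, 1, 3 };                         /* oracle-supplied witness */
    printf("%s\n", check_subsumption(a, 2, b, 2, perm) ? "verified" : "rejected");
    return 0;
}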
Since the proof of the four color theorem in 1976, computer-generated proofs have become a reality in mathematics and computer science. During the last decade, formal proofs developed in verified proof assistants have been used to establish the validity of such computer-generated proofs. In this paper, we describe a formalized theory of size-optimal sorting networks. From this formalization we extract a certified checker that successfully verifies computer-generated proofs of optimality on up to 8 inputs. The checker relies on an untrusted oracle to shortcut the search for witnesses on more than 1.6 million NP-complete subproblems.
Peter Schneider (2014)
We consider several aspects of the generalized multi-plane gravitational lens theory, in which light rays from a distant source are affected by several main deflectors, and in addition by the tidal gravitational field of the large-scale matter distribution in the Universe when propagating between the main deflectors. Specifically, we derive a simple expression for the time-delay function in this case, making use of the general formalism for treating light propagation in inhomogeneous spacetimes which leads to the characterization of distance matrices between main lens planes. Applying Fermat's principle, an alternative form of the corresponding lens equation is derived, which connects the impact vectors in three consecutive main lens planes, and we show that this form of the lens equation is equivalent to the more standard one. For this, some general relations for cosmological distance matrices are derived. The generalized multi-plane lens situation admits a generalized mass-sheet transformation, which corresponds to uniform isotropic scaling in each lens plane, a corresponding scaling of the deflection angle, and the addition of a tidal matrix (mass sheet plus external shear) to each main lens. We show that the time delays for sources in all lens planes scale with the same factor under this generalized mass-sheet transformation, thus precluding the use of time-delay ratios to break the mass-sheet transformation.
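For orientation, in conventional notation (not necessarily the paper's), the standard multi-plane lens equation that the generalized formalism above extends reads
\[
  \boldsymbol{\theta}_j \;=\; \boldsymbol{\theta}_1 \;-\; \sum_{i=1}^{j-1} \frac{D_{ij}}{D_j}\, \hat{\boldsymbol{\alpha}}_i(\boldsymbol{\theta}_i),
\]
where \(\boldsymbol{\theta}_1\) is the observed image position, \(\boldsymbol{\theta}_i\) the impact angle in the \(i\)-th main lens plane, \(\hat{\boldsymbol{\alpha}}_i\) the deflection angle of the \(i\)-th main lens, and \(D_j\), \(D_{ij}\) the angular-diameter distances from the observer to plane \(j\) and from plane \(i\) to plane \(j\); the generalization described above replaces these scalar distance factors by distance matrices and adds tidal deflections between the main planes.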
Peter Schneider (2014)
Strong gravitational lensing of sources with different redshifts has been used to determine cosmological distance ratios, which in turn depend on the expansion history. Hence, such systems are viewed as potential tools for constraining cosmological parameters. Here we show that in lens systems with two distinct source redshifts, of which the nearest one contributes to the light deflection towards the more distant one, there exists an invariance transformation which leaves all strong lensing observables unchanged (except the product of time delay and Hubble constant), generalizing the well-known mass-sheet transformation (MST) in single-plane lens systems. The transformation preserves the relative distribution of mass and light, so that a 'mass-follows-light' assumption does not fix the MST. All time delays (from sources on both planes) scale with the same factor -- time-delay ratios are therefore invariant under the MST. Changing cosmological parameters, and thus distance ratios, is essentially equivalent to such a mass-sheet transformation. As an example, we discuss the double source plane system SDSSJ0946+1006, which has been recently studied by Collett and Auger, and show that variations of cosmological parameters within reasonable ranges lead to only a small mass-sheet transformation in both lens planes. Hence, the ability to extract cosmological information from such systems depends heavily on the ability to break the mass-sheet degeneracy.
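For comparison, the single-plane mass-sheet transformation that is generalized above can be written, in standard notation (not necessarily the paper's), as
\[
  \kappa(\boldsymbol{\theta}) \;\to\; \kappa_\lambda(\boldsymbol{\theta}) \;=\; \lambda\,\kappa(\boldsymbol{\theta}) + (1-\lambda), \qquad \Delta t \;\to\; \lambda\,\Delta t ,
\]
i.e., rescaling the convergence and adding a uniform sheet leaves image positions and flux ratios unchanged while every time delay is scaled by \(\lambda\), which is why only the product of time delay and Hubble constant is affected.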
This paper describes a computer-assisted non-existence proof of nine-input sorting networks consisting of 24 comparators, hence showing that the 25-comparator sorting network found by Floyd in 1964 is optimal. As a corollary, we obtain that the 29-comparator network found by Waksman in 1969 is optimal when sorting ten inputs. This closes the two smallest open instances of the optimal-size sorting network problem, which have been open since the results of Floyd and Knuth from 1966 proving optimality for sorting networks of up to eight inputs. The proof involves a combination of two methodologies: one based on exploiting the abundance of symmetries in sorting networks, and the other based on an encoding of the problem into propositional satisfiability. We illustrate that, while each of these can single-handedly solve smaller instances of the problem, it is their combination which leads to an efficient solution for nine inputs.
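One elementary fact underlying both methodologies is the 0/1 principle: a comparator network sorts all inputs if and only if it sorts every binary input. The brute-force check below is only an illustration of that principle on a small standard network; the actual proof machinery combining symmetry breaking with a SAT encoding is far more elaborate.

#include <stdbool.h>
#include <stdio.h>

typedef struct { int lo, hi; } comparator;

/* 0/1 principle check: run the network on every binary input and test
   whether the result is sorted (all zeros on the low channels). */
static bool sorts_all_binary(const comparator *net, int len, int n) {
    for (unsigned x = 0; x < (1u << n); x++) {
        unsigned v = x;
        for (int i = 0; i < len; i++) {
            unsigned a = (v >> net[i].lo) & 1u, b = (v >> net[i].hi) & 1u;
            if (a > b) v ^= (1u << net[i].lo) | (1u << net[i].hi);
        }
        int ones = 0;
        for (unsigned t = v; t; t >>= 1) ones += (int)(t & 1u);
        unsigned sorted = ones ? ((1u << ones) - 1u) << (n - ones) : 0u;
        if (v != sorted) return false;
    }
    return true;
}

int main(void) {
    /* The standard optimal 5-comparator network on 4 channels. */
    comparator net[] = { {0,1}, {2,3}, {0,2}, {1,3}, {1,2} };
    printf("%s\n", sorts_all_binary(net, 5, 4) ? "sorts" : "does not sort");
    return 0;
}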
Previous work identifying depth-optimal $n$-channel sorting networks for $9 \leq n \leq 16$ is based on exploiting symmetries of the first two layers. However, the naive generate-and-test approach typically applied does not scale. This paper revisits the problem of generating two-layer prefixes modulo symmetries. An improved notion of symmetry is provided and a novel technique based on regular languages and graph isomorphism is shown to generate the set of non-symmetric representations. An empirical evaluation demonstrates that the new method outperforms the generate-and-test approach by orders of magnitude and easily scales up to $n=40$.
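For illustration, here is a hedged sketch of the naive generate-and-test symmetry check that the paper improves upon; two two-layer prefixes (with equal layer sizes) are treated as symmetric when some permutation of the channels maps each layer onto the corresponding layer of the other prefix, and the check simply tries all n! relabelings, which is exactly the part that does not scale. The paper's improved notion of symmetry and its regular-language/graph-isomorphism technique are not reproduced here.

#include <stdbool.h>
#include <stdio.h>

#define N 4  /* number of channels; kept tiny for the sketch */

typedef struct { int lo, hi; } comparator;

/* Does perm map every comparator of layer p onto some comparator of layer q? */
static bool layer_maps(const comparator *p, const comparator *q, int len,
                       const int perm[N]) {
    for (int i = 0; i < len; i++) {
        int a = perm[p[i].lo], b = perm[p[i].hi];
        bool found = false;
        for (int j = 0; j < len && !found; j++)
            found = (a == q[j].lo && b == q[j].hi) || (a == q[j].hi && b == q[j].lo);
        if (!found) return false;
    }
    return true;
}

/* Build a channel permutation one entry at a time; at the leaves, test it. */
static bool equivalent(const comparator *p1, const comparator *q1, int l1,
                       const comparator *p2, const comparator *q2, int l2,
                       int perm[N], bool used[N], int pos) {
    if (pos == N)
        return layer_maps(p1, p2, l1, perm) && layer_maps(q1, q2, l2, perm);
    for (int c = 0; c < N; c++) {
        if (used[c]) continue;
        used[c] = true; perm[pos] = c;
        if (equivalent(p1, q1, l1, p2, q2, l2, perm, used, pos + 1)) return true;
        used[c] = false;
    }
    return false;
}

int main(void) {
    /* Two two-layer prefixes on 4 channels (layer 1, then layer 2). */
    comparator a1[] = { {0,1}, {2,3} }, a2[] = { {0,2}, {1,3} };
    comparator b1[] = { {0,2}, {1,3} }, b2[] = { {0,1}, {2,3} };
    int perm[N]; bool used[N] = { false };
    printf("%s\n", equivalent(a1, a2, 2, b1, b2, 2, perm, used, 0)
                       ? "symmetric" : "not symmetric");
    return 0;
}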
Peter Schneider (2007)
In weak gravitational lensing, the image distortion caused by shear measures the projected tidal gravitational field of the deflecting mass distribution. To lowest order, the shear is proportional to the mean image ellipticity. If the image sizes are not small compared to the scale over which the shear varies, higher-order distortions occur, called flexion. For ordinary weak lensing, the observable quantity is not the shear, but the reduced shear, owing to the mass-sheet degeneracy. Likewise, the flexion itself is unobservable. Rather, higher-order image distortions measure the reduced flexion, i.e., derivatives of the reduced shear. We derive the corresponding lens equation in terms of the reduced flexion and calculate the resulting relation between brightness moments of source and image. Assuming an isotropic distribution of source orientations, estimates for the reduced shear and flexion are obtained; these are then tested with simulations. In particular, the presence of flexion affects the determination of the reduced shear. The results of these simulations yield the amount of bias of the estimators, as a function of the shear and flexion. We point out and quantify a fundamental limitation of the flexion formalism, in terms of the product of reduced flexion and source size. If this product increases above the derived threshold, multiple images of the source are formed locally, and the formalism breaks down. Finally, we show how a general (reduced) flexion field can be decomposed into its four components: two of them are due to a shear field, carrying an E- and B-mode in general. The other two components do not correspond to a shear field; they can also be split up into corresponding E- and B-modes.
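As a point of reference for the observability statements above, the reduced shear in standard notation is
\[
  g \;=\; \frac{\gamma}{1-\kappa},
\]
which is invariant under the mass-sheet degeneracy \(\kappa \to \lambda\kappa + (1-\lambda)\), \(\gamma \to \lambda\gamma\); analogously, the reduced flexion referred to above is built from derivatives of \(g\), rather than from the (unobservable) flexion itself.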