The computation of Feynman integrals often involves square roots. One way to obtain a solution in terms of multiple polylogarithms is to rationalize these square roots by a suitable variable change. We present a program that can be used to find such transformations. After an introduction to the theoretical background, we explain in detail how to use the program in practice.
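To illustrate the kind of variable change such a program searches for, consider a textbook one-variable example (not necessarily one produced by the package itself): the square root of 1 - x^2 is rationalized by the substitution x = 2t/(1 + t^2),

\[
1 - x^2 \;=\; 1 - \frac{4t^2}{(1+t^2)^2} \;=\; \frac{(1-t^2)^2}{(1+t^2)^2},
\qquad\text{so}\qquad
\sqrt{1-x^2} \;=\; \frac{1-t^2}{1+t^2},
\]

a rational function of the new variable t (up to an overall sign), so integrands containing this root become amenable to integration in terms of multiple polylogarithms.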
We present OGRe, a modern Mathematica package for tensor calculus, designed to be both powerful and user-friendly. The package can be used in a variety of contexts where tensor calculations are needed, in both mathematics and physics, but it is especially suitable for general relativity. By implementing an object-oriented design paradigm, OGRe makes it easy to calculate arbitrarily complicated tensor formulas, and it automatically transforms between index configurations and coordinate systems behind the scenes as needed, eliminating a common class of user errors by making it impossible to combine tensors in inconsistent ways. Other features include displaying tensors in various forms, automatic calculation of curvature tensors and geodesic equations, easy importing and exporting of tensors between sessions, optimized algorithms and parallelization for improved performance, and more.
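The following minimal Python sketch illustrates only the underlying object-oriented idea; it is not OGRe's Mathematica interface, and the class and method names (Vector, components, dot) are hypothetical. A tensor object stores its components per index configuration and uses the metric to raise or lower indices on demand, so the user never has to reconcile index positions by hand.

    # Illustrative sketch (not OGRe's actual API): a rank-1 tensor that stores its
    # components per index configuration and converts automatically via the metric.
    import numpy as np

    class Vector:
        def __init__(self, metric, components, index="up"):
            self.g = np.asarray(metric, dtype=float)    # metric g_{mu nu}
            self.g_inv = np.linalg.inv(self.g)          # inverse metric g^{mu nu}
            self._comp = {index: np.asarray(components, dtype=float)}

        def components(self, index):
            """Return components in the requested index position, raising or
            lowering with the metric behind the scenes if needed."""
            if index not in self._comp:
                if index == "down":                      # v_mu = g_{mu nu} v^nu
                    self._comp["down"] = self.g @ self._comp["up"]
                else:                                    # v^mu = g^{mu nu} v_nu
                    self._comp["up"] = self.g_inv @ self._comp["down"]
            return self._comp[index]

        def dot(self, other):
            """Contract two vectors; index positions are reconciled automatically."""
            return float(self.components("up") @ other.components("down"))

    # Minkowski metric, signature (-,+,+,+): a unit timelike vector has norm -1.
    eta = np.diag([-1.0, 1.0, 1.0, 1.0])
    u = Vector(eta, [1.0, 0.0, 0.0, 0.0])   # four-velocity of an observer at rest
    print(u.dot(u))                          # -1.0

Because both vectors expose whichever index position the contraction needs, the user cannot accidentally pair two like-positioned indices; this is the consistency guarantee the abstract refers to.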
Our goal is compression of massive-scale grid-structured data, such as the multi-terabyte output of a high-fidelity computational simulation. For such data sets, we have developed a new software package called TuckerMPI, a parallel C++/MPI package for compressing distributed data. The approach treats the data as a tensor, i.e., a multidimensional array, and computes its truncated Tucker decomposition, a higher-order analogue of the truncated singular value decomposition of a matrix. The result is a low-rank approximation of the original tensor-structured data. Compression efficiency is achieved by detecting latent global structure within the data, in contrast to most compression methods, which focus on local structure. In this work, we describe TuckerMPI, our implementation of the truncated Tucker decomposition, including details of the data distribution and in-memory layouts, the parallel and serial implementations of the key kernels, and an analysis of the storage, communication, and computational costs. We test the software on 4.5-terabyte and 6.7-terabyte data sets distributed across hundreds of nodes (thousands of MPI processes), achieving compression rates between 100 and 200,000$\times$, which equates to 99-99.999% compression (depending on the desired accuracy), in substantially less time than it would take to even read the same data set from a parallel filesystem. Moreover, we show that our method also allows reconstruction of partial or down-sampled data on a single node, without a parallel computer, so long as the reconstructed portion is small enough to fit on a single machine, e.g., reconstructing/visualizing a single down-sampled time step or computing summary statistics.
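For reference, the following is a small serial Python/NumPy sketch of a sequentially truncated higher-order SVD, the kind of truncated Tucker computation TuckerMPI performs at scale; it is not the package's distributed C++/MPI implementation, and the function name and rank choices are illustrative only.

    # Serial sketch of a truncated Tucker decomposition (ST-HOSVD) with NumPy.
    import numpy as np

    def truncated_tucker(X, ranks):
        """Return (core, factors): a rank-truncated Tucker approximation of X."""
        core = np.asarray(X, dtype=float)
        factors = []
        for mode, r in enumerate(ranks):
            # Unfold the current core along this mode (rows index the mode).
            unfolded = np.moveaxis(core, mode, 0).reshape(core.shape[mode], -1)
            # The leading left singular vectors form the factor matrix for this mode.
            U = np.linalg.svd(unfolded, full_matrices=False)[0][:, :r]
            factors.append(U)
            # Shrink the core along this mode by projecting onto the retained subspace.
            shrunk = (U.T @ unfolded).reshape((r,) + core.shape[:mode] + core.shape[mode + 1:])
            core = np.moveaxis(shrunk, 0, mode)
        return core, factors

    # Compress a small synthetic 3-way array and report the achieved compression ratio
    # (random data has little latent structure; real simulation data compresses far
    # more accurately at the same ranks).
    X = np.random.rand(40, 50, 60)
    core, factors = truncated_tucker(X, ranks=(5, 5, 5))
    stored = core.size + sum(U.size for U in factors)
    print(f"compression ratio ~ {X.size / stored:.0f}x")

Only the small core tensor and one thin factor matrix per mode are stored, which is where the compression comes from; reconstruction multiplies the core back out by the factors, and can be restricted to the slices of interest.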
Feynman integral computations in theoretical high energy particle physics frequently involve square roots in the kinematic variables. Physicists often want to solve Feynman integrals in terms of multiple polylogarithms. One way to obtain a solution in terms of these functions is to rationalize all occurring square roots by a suitable variable change. In this paper, we give a rigorous definition of rationalizability for square roots of ratios of polynomials. We show that the problem of deciding whether a single square root is rationalizable can be reformulated in geometrical terms. Using this approach, we give easy criteria to decide rationalizability in most cases of square roots in one and two variables. We also give partial results and strategies to prove or disprove rationalizability of sets of square roots. We apply the results to many examples from actual computations in high energy particle physics.
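Informally, and deferring the precise conditions to the paper, the notion can be phrased as follows: the square root

\[
\sqrt{\frac{p(x_1,\dots,x_n)}{q(x_1,\dots,x_n)}}
\]

is rationalizable if there is a rational change of variables $x_i = f_i(t_1,\dots,t_n)$ under which

\[
\frac{p(f_1,\dots,f_n)}{q(f_1,\dots,f_n)} \;=\; r(t_1,\dots,t_n)^2
\]

for some rational function $r$ of the new variables, subject to a non-degeneracy requirement on the map $f$ that the paper makes precise. Roughly speaking, the geometric reformulation associates to the square root the hypersurface $u^2 q = p$ and ties rationalizability to the existence of a rational parametrization of that hypersurface.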
We describe in this paper new design techniques used in the C++ exact linear algebra library LinBox, intended to make the library safer and easier to use while keeping it generic and efficient. First, we review the new simplified structure for containers, based on our "founding scope allocation" model. We explain the design choices and their impact on coding: unification of our matrix classes, a clearer model for matrices and submatrices, etc. Then we present a variation of the "strategy" design pattern organized as a controller-plugin system: the controller (solution) chooses among plug-ins (algorithms) that always call back the controller for subtasks. We give examples using the solution mul. Finally, we present a benchmark architecture that serves two purposes: providing the user with easier ways to produce graphs, and creating a framework for automatically tuning the library and supporting regression testing.
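The controller-plugin mechanism can be rendered schematically as follows. This is an illustrative Python sketch of the pattern, not LinBox's C++ interface; the function names echo the mul solution mentioned above, but the implementations and the size threshold are hypothetical. The controller selects a plug-in, and the plug-in hands every sub-product back to the controller, so each subtask re-enters the dispatch logic.

    # Schematic controller-plugin variation of the strategy pattern (illustrative;
    # not LinBox's C++ API). The controller ("solution") picks an algorithm
    # ("plug-in"); the plug-in calls the controller back for its subtasks.

    def mul(A, B):
        """Controller for the 'mul' solution: dispatch to a plug-in by problem size."""
        n = len(A)
        if n <= 64 or n % 2:
            return mul_classic(A, B)     # small or odd-sized: schoolbook product
        return mul_blocked(A, B)         # large even-sized: recursive blocked product

    def mul_classic(A, B):
        """Plug-in: cubic schoolbook matrix product."""
        n = len(A)
        return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
                for i in range(n)]

    def mul_blocked(A, B):
        """Plug-in: split into 2x2 blocks and hand each sub-product back to mul()."""
        n, h = len(A), len(A) // 2
        sub = lambda M, r, c: [row[c:c + h] for row in M[r:r + h]]
        add = lambda X, Y: [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
        C = [[0] * n for _ in range(n)]
        for i in (0, h):
            for j in (0, h):
                # C_ij = A_i0 * B_0j + A_i1 * B_1j, each dispatched via the controller.
                block = add(mul(sub(A, i, 0), sub(B, 0, j)),
                            mul(sub(A, i, h), sub(B, h, j)))
                for r in range(h):
                    C[i + r][j:j + h] = block[r]
        return C

The point of the call-back is that every sub-product, whatever its size, goes through the controller again, so the best available algorithm is re-selected at each level rather than fixed once at the top.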
Exascale computing will feature novel and potentially disruptive hardware architectures, and exploiting these to their full potential is non-trivial. Numerical modelling frameworks based on finite difference methods are currently limited by the static nature of their hand-coded discretisation schemes and may have to be rewritten repeatedly to run efficiently on new hardware. In contrast, OpenSBLI uses code generation to derive the model's code from a high-level specification. Users focus on the equations to solve, without concerning themselves with the detailed implementation. Source-to-source translation is used to tailor the code and enable its execution on a variety of hardware.
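The snippet below is a generic Python/SymPy illustration of this code-generation idea, not OpenSBLI's actual interface: a continuous derivative is stated at a high level, a finite-difference approximation is derived symbolically, and C source is emitted from it. The array-access symbols and the output variable name are assumptions made for the example.

    # Generic illustration of code generation from a high-level specification
    # (not OpenSBLI's API): derive a finite-difference stencil symbolically and
    # emit C source for it.
    import sympy as sp

    x, h = sp.symbols("x h")
    u = sp.Function("u")

    # High-level specification: the continuous term d^2 u / dx^2.
    term = sp.Derivative(u(x), x, 2)

    # Discretise it on a three-point stencil {x - h, x, x + h}.
    stencil = term.as_finite_difference([x - h, x, x + h])

    # Replace point samples u(x + k*h) by array accesses u[i+k] and print C code.
    ui_m1, ui, ui_p1 = (sp.Symbol(s) for s in ("u[i-1]", "u[i]", "u[i+1]"))
    expr = stencil.subs({u(x - h): ui_m1, u(x): ui, u(x + h): ui_p1})
    print(sp.ccode(expr, assign_to="d2udx2"))
    # -> d2udx2 = (u[i-1] - 2*u[i] + u[i+1])/pow(h, 2);   (or an equivalent form)

Changing the stencil, the order of the scheme, or the target backend then only changes the generation step, not the user's specification, which is the portability argument made above.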