As we rapidly approach the frontiers of ultra-large computing resources, software optimization is becoming of paramount interest to scientific application developers who wish to efficiently leverage all available on-node computing capabilities and thereby improve the requisite science-per-watt metric. The scientific application of interest here is the Basic Math Library (BML), which provides a single interface for the linear algebra operations frequently used in the Quantum Molecular Dynamics (QMD) community. The provision of a single interface implies an abstraction layer, which in turn suggests commonalities in the code base; any optimization or tuning introduced in the core of the code base can therefore improve the performance of the library as a whole. With that in mind, we survey the entire BML code base and extract common snippets of code in the form of micro-kernels. We introduce several optimization strategies into these micro-kernels, including (1) strength reduction, (2) memory alignment for large arrays, (3) Non-Uniform Memory Access (NUMA)-aware allocations to enforce data locality, and (4) appropriate thread affinity and bindings to enhance overall multi-threaded performance. After introducing these optimizations, we benchmark the micro-kernels and compare the run time before and after optimization on several target architectures. Finally, we use the results as a guide for propagating the optimization strategies into the BML code base. As a demonstration, we test the efficacy of these optimization strategies by comparing the benchmark and optimized versions of the code.
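As a rough illustration of how optimizations (1)-(3) above might look in a micro-kernel, the following C/OpenMP sketch is provided; it is not taken from the BML code base, and names such as alloc_first_touch are hypothetical.

#define _POSIX_C_SOURCE 200112L
#include <stdlib.h>

/* 64-byte aligned allocation with NUMA-friendly "first touch":
   each OpenMP thread initializes (and thereby page-faults in) the
   portion of the array it will later operate on, so those pages
   land on that thread's local memory node. */
double *alloc_first_touch(size_t n)
{
    double *a = NULL;
    if (posix_memalign((void **)&a, 64, n * sizeof(double)) != 0)
        return NULL;
    #pragma omp parallel for schedule(static)
    for (size_t i = 0; i < n; i++)
        a[i] = 0.0;
    return a;
}

/* Strength reduction: hoist one division out of the loop and replace
   the per-element divide with a multiply. */
void scale(double *x, size_t n, double denom)
{
    const double inv = 1.0 / denom;
    #pragma omp parallel for schedule(static)
    for (size_t i = 0; i < n; i++)
        x[i] *= inv;
}

Thread affinity and binding, item (4), are typically controlled outside the code, for example through the OMP_PROC_BIND and OMP_PLACES environment variables.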
The life-cycle of a partial differential equation (PDE) solver is often characterized by three development phases: the development of a stable numerical discretization, development of a correct (verified) implementation, and the optimization of the implementation for different computer architectures. Often it is only after significant time and effort has been invested that the performance bottlenecks of a PDE solver are fully understood, and the precise details varies between different computer architectures. One way to mitigate this issue is to establish a reliable performance model that allows a numerical analyst to make reliable predictions of how well a numerical method would perform on a given computer architecture, before embarking upon potentially long and expensive implementation and optimization phases. The availability of a reliable performance model also saves developer effort as it both informs the developer on what kind of optimisations are beneficial, and when the maximum expected performance has been reached and optimisation work should stop. We show how discretization of a wave equation can be theoretically studied to understand the performance limitations of the method on modern computer architectures. We focus on the roofline model, now broadly used in the high-performance computing community, which considers the achievable performance in terms of the peak memory bandwidth and peak floating point performance of a computer with respect to algorithmic choices. A first principles analysis of operational intensity for key time-stepping finite-difference algorithms is presented. With this information available at the time of algorithm design, the expected performance on target computer systems can be used as a driver for algorithm design.
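For reference, the roofline bound referred to above relates attainable floating-point performance to the machine's peak compute rate and memory bandwidth through the operational intensity $I$ (flop per byte moved):

$$ P_{\mathrm{attainable}} = \min\bigl(P_{\mathrm{peak}},\; I \cdot B_{\mathrm{peak}}\bigr), \qquad I = \frac{\text{floating-point operations}}{\text{bytes transferred to/from memory}}. $$

Stencil-based finite-difference time stepping typically has low $I$ and therefore sits on the bandwidth-limited side of the roofline, which is what makes such an a priori analysis informative for algorithm design.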
Performance tests and analyses are critical to effective HPC software development and are central components in the design and implementation of computational algorithms for achieving faster simulations on existing and future computing architectures for large-scale application problems. In this paper, we explore performance and space-time trade-offs for important compute-intensive kernels of large-scale numerical solvers for PDEs that govern a wide range of physical applications. We consider a sequence of PDE-motivated bake-off problems designed to establish best practices for efficient high-order simulations across a variety of codes and platforms. We measure peak performance (degrees of freedom per second) on a fixed number of nodes and identify effective code optimization strategies for each architecture. In addition to peak performance, we identify the minimum time to solution at 80% parallel efficiency. The performance analysis is based on spectral and p-type finite elements but is equally applicable to a broad spectrum of numerical PDE discretizations, including finite difference, finite volume, and h-type finite elements.
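For concreteness, the metrics above are conventionally defined as follows (the exact normalization used in such studies may differ): throughput in degrees of freedom per second and strong-scaling parallel efficiency relative to a base node count $P_0$,

$$ \text{DOF/s} = \frac{n_{\text{dof}} \cdot n_{\text{steps}}}{T_{\text{solve}}}, \qquad E(P) = \frac{P_0\, T(P_0)}{P\, T(P)}, $$

so the minimum time to solution at 80% parallel efficiency is the smallest $T(P)$ over node counts $P$ satisfying $E(P) \ge 0.8$.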
We describe a strategy for code modernisation of Gadget, a widely used community code for computational astrophysics. The focus of this work is on node-level performance optimisation, targeting current multi- and many-core Intel® architectures. We identify and isolate a sample code kernel that is representative of a typical Smoothed Particle Hydrodynamics (SPH) algorithm. The code modifications include optimisation of the threading parallelism, a change of the data layout to Structure of Arrays (SoA), auto-vectorisation, and algorithmic improvements in the particle sorting. We obtain shorter execution time and improved threading scalability on both Intel Xeon® ($2.6\times$ on Ivy Bridge) and Xeon Phi™ ($13.7\times$ on Knights Corner) systems. Initial tests of the optimised code result in $19.1\times$ faster execution on the second-generation Xeon Phi (Knights Landing), thus demonstrating the portability of the devised optimisation solutions to upcoming architectures.
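To illustrate the data-layout change, a minimal C sketch follows; the field names are hypothetical and do not correspond to Gadget's actual particle structure.

#define NPART 1024

/* Array of Structures (AoS): fields of different particles are
   interleaved in memory, which hinders unit-stride vector loads
   over any single field. */
struct particle_aos {
    double pos[3], vel[3];
    double rho;   /* density          */
    double h;     /* smoothing length */
};

/* Structure of Arrays (SoA): each field is contiguous, so a loop that
   touches only rho[] streams through memory with unit stride and
   auto-vectorises readily. */
struct particle_soa {
    double pos_x[NPART], pos_y[NPART], pos_z[NPART];
    double vel_x[NPART], vel_y[NPART], vel_z[NPART];
    double rho[NPART];
    double h[NPART];
};

void scale_density(struct particle_soa *p, double factor)
{
    for (int i = 0; i < NPART; i++)
        p->rho[i] *= factor;   /* unit-stride, trivially vectorisable */
}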
Many cloud service providers (CSPs) offer on-demand service at a price, with a small delay. We propose a QoS-differentiated model in which multiple SLAs deliver both on-demand service for latency-critical users and delayed services for delay-tolerant users at lower prices. Two architectures are considered to fulfill the SLAs. The first is based on priority queues. The second simply separates the servers into multiple modules, one per SLA. As an ecosystem, we show that the proposed framework is dominant-strategy incentive compatible. Although the first architecture appears more prevalent in the literature, we prove the superiority of the second architecture, under which we further leverage queueing theory to determine the optimal SLA delays and prices. Finally, the viability of the proposed framework is validated through a numerical comparison with the on-demand service, exhibiting a revenue improvement in excess of 200%. Our results can help CSPs design optimal delay-differentiated services and choose appropriate serving architectures.
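As a simple illustration of how queueing theory ties an SLA delay to provisioned capacity (an M/M/1 assumption made here purely for exposition; the paper's model may differ), with arrival rate $\lambda$ and service rate $\mu$ the mean response time, and the resulting capacity requirement for a delay bound $d$, are

$$ T = \frac{1}{\mu - \lambda} \quad (\lambda < \mu), \qquad T \le d \;\Longrightarrow\; \mu \ge \lambda + \frac{1}{d}. $$

Under the second architecture, a module serving one SLA can be dimensioned from a relation of this kind, with looser delay bounds translating into lower capacity requirements.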
Currently available quantum computing hardware platforms have limited 2-qubit connectivity among their addressable qubits. In order to run a generic quantum algorithm on such a platform, one has to transform the initial logical quantum circuit describing the algorithm into an equivalent one that obeys the connectivity restrictions. In this work we construct a circuit synthesis scheme that takes as input the qubit connectivity graph and a quantum circuit over the gate set generated by $\{\mathrm{CNOT}, R_{Z}\}$, and outputs a circuit that respects the connectivity of the device. As a concrete application, we apply our techniques to Google's Bristlecone 72-qubit quantum chip connectivity, IBM's Tokyo 20-qubit quantum chip connectivity, and Rigetti's Acorn 19-qubit quantum chip connectivity. In addition, we compare the performance of our scheme as a function of the sparseness of randomly generated quantum circuits. Note: Recently, the authors of arXiv:1904.00633 independently presented a similar optimization scheme. Our work is independent of arXiv:1904.00633, being a longer version of the seminar presented by Beatrice Nash at Dagstuhl Seminar 18381: Quantum Programming Languages, pg. 120, September 2018, Dagstuhl, Germany; the slide deck is available online at https://materials.dagstuhl.de/files/18/18381/18381.BeatriceNash.Slides.pdf.
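For context on why connectivity restrictions force circuit transformations, recall the standard SWAP-based baseline (a generic technique, not the synthesis scheme constructed in this work): a CNOT between non-adjacent qubits $a$ and $c$ can be routed through an intermediate qubit $b$, and each SWAP itself costs three adjacent CNOTs,

$$ \mathrm{CNOT}_{a,c} = \mathrm{SWAP}_{b,c}\,\mathrm{CNOT}_{a,b}\,\mathrm{SWAP}_{b,c}, \qquad \mathrm{SWAP}_{b,c} = \mathrm{CNOT}_{b,c}\,\mathrm{CNOT}_{c,b}\,\mathrm{CNOT}_{b,c}. $$

This gate-count overhead is precisely what connectivity-aware synthesis of $\{\mathrm{CNOT}, R_{Z}\}$ circuits aims to reduce.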