
Cell processor implementation of a MILC lattice QCD application

Posted by: Steven Gottlieb
Publication date: 2009
Paper language: English





We present results of the implementation of one MILC lattice QCD application (simulation with dynamical clover fermions using the hybrid molecular dynamics R algorithm) on the Cell Broadband Engine processor. Fifty-four individual computational kernels responsible for 98.8% of the overall execution time were ported to the Cell's Synergistic Processing Elements (SPEs). The remaining application framework, including MPI-based distributed code execution, was left to the Cell's PowerPC processor. We observe that we only infrequently achieve more than 10 GFLOPS with any of the kernels, which is just over 4% of the Cell's peak performance. At the same time, many of the kernels sustain a bandwidth close to 20 GB/s, which is 78% of the Cell's peak. This indicates that the application performance is limited by the bandwidth between main memory and the SPEs. In spite of this limitation, speedups of 8.7x (for an 8x8x16x16 lattice) and 9.6x (for a 16x16x16x16 lattice) were achieved when comparing a 3.2 GHz Cell processor to a single core of a 2.33 GHz Intel Xeon processor. When the code is scaled up to execute on a dual-Cell blade and compared with a quad-core dual-chip Intel Xeon blade, the speedups are 1.5x (8x8x16x16 lattice) and 4.1x (16x16x16x16 lattice).
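The bandwidth-limited behaviour described above can be checked with a back-of-the-envelope roofline estimate. The sketch below uses only numbers implied by the abstract itself (~10 GFLOPS at ~20 GB/s, and a 25.6 GB/s peak inferred from "20 GB/s is 78% of peak"); the roofline framing is our illustration, not the paper's own analysis.

```python
def attainable_gflops(bandwidth_gbs, flops_per_byte):
    """Roofline ceiling for a bandwidth-bound kernel:
    attainable FLOP rate = sustained bandwidth x arithmetic intensity."""
    return bandwidth_gbs * flops_per_byte

# ~10 GFLOPS at ~20 GB/s implies an arithmetic intensity of
# roughly 0.5 flops/byte for these kernels.
intensity = 10.0 / 20.0

# Even at the full 25.6 GB/s memory bandwidth, such a kernel cannot
# exceed ~12.8 GFLOPS, far below the SPEs' compute peak.
print(attainable_gflops(25.6, intensity))  # -> 12.8
```

This is why the kernels sit near the memory-bandwidth roof rather than the compute roof: raising GFLOPS further would require raising the flops-per-byte ratio of the kernels, not faster SPEs.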


Read also

We report an implementation of a code for SU(3) matrix multiplication on Cell/B.E., which is part of our project, Lattice Tool Kit on Cell/B.E. On the QS20, the speed of the matrix multiplication on the SPEs in single precision is 227 GFLOPS; it becomes 20 GFLOPS (this value was remeasured and corrected) together with data transfer from main memory by DMA, which is 4.6% of the hardware peak speed (460 GFLOPS) and 7.4% of the theoretical peak speed of this calculation (268.77 GFLOPS). We briefly describe our tuning procedure.
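For orientation, the kernel being tuned is just a 3x3 complex matrix product. A plain, unoptimized Python sketch (not the tuned SPE code) makes the flop count behind the GFLOPS figures explicit:

```python
def su3_mult(a, b):
    """Product of two 3x3 complex matrices (e.g. SU(3) gauge links).
    Each of the 9 output entries takes 3 complex multiplies (6 real
    flops each) and 2 complex adds (2 real flops each), so one
    product costs 9 * (3*6 + 2*2) = 198 real flops."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# Sanity check: multiplying by the identity returns the matrix.
identity = [[1 if i == j else 0 for j in range(3)] for i in range(3)]
m = [[complex(i, j) for j in range(3)] for i in range(3)]
assert su3_mult(identity, m) == m
```

The 198-flops-per-product count is what connects the sustained DMA bandwidth to the achievable GFLOPS in measurements like the one above.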
We investigate the implementation of lattice Quantum Chromodynamics (QCD) code on the Intel AVX-512 architecture. The most time-consuming part of numerical simulations of lattice QCD is the solver of linear equations for a large sparse matrix that represents the strong interaction among quarks. To establish widely applicable prescriptions, we examine rather general methods for the SIMD architecture of AVX-512, such as using intrinsics and manual prefetching, for the matrix multiplication. Based on experience on the Oakforest-PACS system, a large-scale cluster composed of Intel Xeon Phi Knights Landing processors, we discuss performance tuning exploiting AVX-512 and code design for SIMD architectures and massively parallel machines. We observe that the same code runs efficiently on an Intel Xeon Skylake-SP machine.
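The sparse-matrix solver referred to above is typically a Krylov method such as conjugate gradient. A minimal, unvectorized Python sketch of CG is shown below; the production codes discussed here instead apply a Dirac-type operator as the `matvec` and vectorize it with AVX-512 intrinsics, which this sketch does not attempt to show.

```python
def conjugate_gradient(matvec, b, tol=1e-12, max_iter=1000):
    """Solve A x = b for symmetric positive-definite A, where
    matvec(v) applies A (in lattice QCD, a sparse Dirac-type operator)."""
    x = [0.0] * len(b)
    r = list(b)          # residual r = b - A x, with x starting at 0
    p = list(r)          # initial search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        ap = matvec(p)
        alpha = rs / sum(pi * api for pi, api in zip(p, ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# Tiny dense stand-in for the sparse operator:
A = [[4.0, 1.0], [1.0, 3.0]]
x = conjugate_gradient(
    lambda v: [sum(aij * vj for aij, vj in zip(row, v)) for row in A],
    [1.0, 2.0])
# x is close to the exact solution [1/11, 7/11]
```

Almost all the time in such a solver goes into `matvec`, which is why the abstract focuses its SIMD and prefetching effort on the matrix multiplication.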
Kristen Marsh, Randy Lewis (2013)
Proposals for physics beyond the standard model often include new colored particles at or beyond the scale of electroweak symmetry breaking. Any new particle with a sufficient lifetime will bind with standard model gluons and quarks to form a spectrum of new hadrons. Here we focus on colored particles in the octet, decuplet, 27-plet, 28-plet and 35-plet representations of SU(3) color because these can form hadrons without valence quarks. In every case, lattice creation operators are constructed for all angular momentum, parity and charge conjugation quantum numbers. Computations with fully-dynamical lattice QCD configurations produce numerical results for mass splittings within this new hadron spectrum. A previous quenched lattice study explored the octet case for certain quantum number choices, and our findings provide a reassessment of those early results.
Our knowledge of the QCD phase diagram at finite baryon chemical potential $\mu_{B}$ is limited by the well-known sign problem. The path integral measure, in the standard determinantal approach, becomes complex at finite $\mu_{B}$, so that standard Monte Carlo techniques cannot be directly applied. As the sign problem is representation dependent, by a suitable choice of the fundamental degrees of freedom that parameterize the partition function it can become mild enough that reweighting techniques can be used. A successful formulation, capable of taming the sign problem, has been known for decades in the limiting case $\beta \to 0$, where performing the gauge integration first gives rise to a dual formulation in terms of color singlets (the MDP formulation). Going beyond the strong coupling limit represents a serious challenge, as the gauge integrals involved in the computation are only partially known analytically and become strongly coupled for $\beta > 0$. We will present explicit formulae for all the integrals relevant for ${\rm SU}(N)$ gauge theories discretised \`a la Wilson, and will discuss how they can be used to obtain a positive dual formulation, valid for all $\beta$, for pure Yang-Mills theory.
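The simplest of the gauge integrals involved, the one-link integral over the Haar measure, illustrates how gauge integration produces color singlets at strong coupling. These are standard textbook results, quoted here only for orientation, not the new formulae announced in the abstract:

```latex
\int_{{\rm SU}(N)} \! dU \; U_{ij} = 0,
\qquad
\int_{{\rm SU}(N)} \! dU \; U_{ij}\,(U^\dagger)_{kl}
  = \frac{1}{N}\,\delta_{il}\,\delta_{jk} .
```

Because a single link averages to zero, only link configurations whose color indices contract to singlets survive the gauge integration, which is the origin of the dual (MDP) degrees of freedom at $\beta = 0$.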
QPACE is a novel parallel computer which has been developed to be used primarily for lattice QCD simulations. The compute power is provided by the IBM PowerXCell 8i processor, an enhanced version of the Cell processor that is used in the PlayStation 3. The QPACE nodes are interconnected by a custom, application-optimized 3-dimensional torus network implemented on an FPGA. To achieve the very high packaging density of 26 TFlops per rack, a new water cooling concept has been developed and successfully realized. In this paper we give an overview of the architecture and highlight some important technical details of the system. Furthermore, we provide initial performance results and report on the installation of 8 QPACE racks providing an aggregate peak performance of 200 TFlops.