
A Comparative Study of 2D Numerical Methods with GPU Computing

Published by: Jonathan Regele
Publication date: 2017
Research field: Informatics engineering
Paper language: English





Graphics Processing Unit (GPU) computing is becoming an alternative computing platform for numerical simulations. However, it is not clear which numerical scheme provides the highest computational efficiency for different types of problems. To this end, the numerical accuracy and computational work of several numerical methods are compared using a GPU computing implementation. The Correction Procedure via Reconstruction (CPR), Discontinuous Galerkin (DG), Nodal Discontinuous Galerkin (NDG), Spectral Difference (SD), and Finite Volume (FV) methods are investigated at various reconstruction orders. Both smooth and discontinuous two-dimensional cases are considered. For discontinuous problems, MUSCL schemes are employed with FV, while CPR, DG, NDG, and SD use slope limiting. The computation time to reach a set error criterion and the total time to complete the solutions are compared across the methods. It is shown that while FV methods can produce solutions with low computational times, they produce larger errors than high-order methods for smooth problems at the same order of accuracy. For discontinuous problems, the methods agree well with one another in terms of solution profiles, and the total computational times of FV, CPR, and SD are comparable.
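The abstract reports comparisons only; the paper's code is not reproduced here. As a rough illustration of the per-cell work a GPU finite-volume method performs, the CUDA kernel below advances a 1D periodic linear-advection problem by one forward-Euler step of a MUSCL scheme with a minmod limiter. The 1D setup, the names, and the first-order time stepping are all illustrative assumptions, not the paper's 2D implementation.

    #include <cuda_runtime.h>
    #include <math.h>

    // Minmod slope limiter: zero at local extrema, otherwise the
    // smaller-magnitude one-sided slope.
    __device__ double minmod(double a, double b) {
        if (a * b <= 0.0) return 0.0;
        return (fabs(a) < fabs(b)) ? a : b;
    }

    // One forward-Euler step of a MUSCL finite-volume scheme for the
    // 1D linear advection equation u_t + a u_x = 0 with a > 0, on a
    // periodic grid of n cells; dtdx = dt / dx.
    __global__ void musclStep(const double* u, double* uNew, int n,
                              double a, double dtdx) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;

        int im2 = (i - 2 + n) % n, im1 = (i - 1 + n) % n, ip1 = (i + 1) % n;

        // Limited slopes in cells i-1 and i.
        double sL = minmod(u[im1] - u[im2], u[i]   - u[im1]);
        double sR = minmod(u[i]   - u[im1], u[ip1] - u[i]);

        // Upwind fluxes through faces i-1/2 and i+1/2, built from the
        // left (upwind, since a > 0) reconstructed face states.
        double fLeft  = a * (u[im1] + 0.5 * sL);
        double fRight = a * (u[i]   + 0.5 * sR);

        uNew[i] = u[i] - dtdx * (fRight - fLeft);
    }

A comparison like the paper's would pair such a kernel with higher-order time integration (e.g. Runge-Kutta) and 2D flux evaluations.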


Read also

We implement exact triangle counting in graphs on the GPU using three different methodologies: subgraph matching to a triangle pattern; programmable graph analytics, with a set-intersection approach; and a matrix formulation based on sparse matrix-matrix multiplies. All three deliver best-of-class performance over CPU implementations and over comparable GPU implementations, with the graph-analytic approach achieving the best performance due to its ability to exploit efficient filtering steps to remove unnecessary work and its high-performance set-intersection core.
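As a hedged illustration of the set-intersection core mentioned above (not the authors' code), the CUDA kernel below counts, for each edge (u, v) of a CSR-stored graph with sorted adjacency lists, the common neighbors of u and v. The array names and the one-thread-per-edge mapping are assumptions made for clarity.

    #include <cuda_runtime.h>

    // For each undirected edge (u, v), count common neighbors of u and
    // v by merging their sorted adjacency lists. Summed over all edges,
    // each triangle is counted three times unless the edge list is
    // pre-filtered so each triangle is discovered exactly once.
    __global__ void countTriangles(const int* rowPtr, const int* colIdx,
                                   const int* edgeU, const int* edgeV,
                                   int numEdges, unsigned long long* total) {
        int e = blockIdx.x * blockDim.x + threadIdx.x;
        if (e >= numEdges) return;

        int u = edgeU[e], v = edgeV[e];
        int i = rowPtr[u], iEnd = rowPtr[u + 1];
        int j = rowPtr[v], jEnd = rowPtr[v + 1];
        unsigned long long count = 0;

        // Standard two-pointer intersection of sorted lists.
        while (i < iEnd && j < jEnd) {
            int a = colIdx[i], b = colIdx[j];
            if (a == b)     { ++count; ++i; ++j; }
            else if (a < b) { ++i; }
            else            { ++j; }
        }
        if (count) atomicAdd(total, count);
    }

Real implementations add the filtering and load-balancing steps the abstract credits for the performance win, for example assigning a whole warp to edges with long adjacency lists.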
Multisplit is a broadly useful parallel primitive that permutes its input data into contiguous buckets or bins, where the function that categorizes an element into a bucket is provided by the programmer. Due to the lack of an efficient multisplit on GPUs, programmers often choose to implement multisplit with a sort. One way is to first generate an auxiliary array of bucket IDs and then sort the input data based on it. When lower-indexed buckets contain smaller-valued keys, another way is to sort the input data directly. Both methods are inefficient and require more work than necessary: the former requires more expensive data movement, while the latter spends unnecessary effort sorting elements within each bucket. In this work, we provide a parallel model and multiple implementations for the multisplit problem. Our principal focus is multisplit for a small (up to 256) number of buckets. We use warp-synchronous programming models and emphasize warp-wide communication to avoid branch divergence and reduce memory usage. We also hierarchically reorder input elements to achieve better coalescing of global memory accesses. On a GeForce GTX 1080 GPU, we can reach a peak throughput of 18.93 Gkeys/s (or 11.68 Gpairs/s) for a key-only (or key-value) multisplit. Finally, we demonstrate how multisplit can be used as a building block for radix sort. In our multisplit-based sort implementation, we achieve performance comparable to the fastest GPU sort routines, sorting 32-bit keys (and key-value pairs) with a throughput of 3.0 Gkeys/s (and 2.1 Gpairs/s).
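To make the warp-synchronous idea concrete (a sketch under stated assumptions, not the paper's implementation), the CUDA kernel below shows the ballot-based peer-finding pattern that lets lanes sharing a bucket aggregate their count with a single atomic per group. It assumes bucket IDs are precomputed, numBuckets is at most 256, and n is a multiple of the block size so all 32 lanes of every warp are active.

    #include <cuda_runtime.h>

    // Warp-synchronous bucket counting: lanes that share a bucket find
    // each other with ballots, so each peer group issues one atomic
    // instead of one per element. Assumes every lane is active (n is a
    // multiple of the block size) and bucket IDs are in [0, numBuckets).
    __global__ void warpBucketCount(const unsigned* bucketIds, int n,
                                    int numBuckets, unsigned* counts) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        unsigned b = bucketIds[i];
        int lane = threadIdx.x & 31;

        // Intersect ballots over the bits of the bucket ID: 'peers'
        // ends up holding exactly the lanes whose bucket equals mine.
        unsigned peers = 0xffffffffu;
        for (int bit = 0; (1 << bit) < numBuckets; ++bit) {
            unsigned vote = __ballot_sync(0xffffffffu, (b >> bit) & 1u);
            peers &= ((b >> bit) & 1u) ? vote : ~vote;
        }

        // The lowest lane of each peer group adds the group size once.
        if (lane == __ffs(peers) - 1)
            atomicAdd(&counts[b], (unsigned)__popc(peers));
    }

On Volta and newer GPUs, the bit-by-bit ballot loop can be replaced with a single __match_any_sync call; a full multisplit additionally computes per-warp exclusive offsets and reorders elements for coalesced writes, as the abstract describes.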
We propose a Hermite spectral method for the spatially inhomogeneous Boltzmann equation. For the inverse-power-law model, we generalize an approximate quadratic collision operator defined in the normalized and dimensionless setting to an operator for arbitrary distribution functions. An efficient algorithm with a fast transform is introduced to discretize this new collision operator. The method is tested for one-dimensional benchmark microflow problems.
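For orientation (the abstract itself states no formulas), a Hermite spectral method typically expands the velocity dependence of the distribution function in Hermite polynomials weighted by a Gaussian, e.g.

    f(t, x, v) \approx \sum_{|\alpha| \le M} f_\alpha(t, x) \, H_\alpha(v) \, e^{-|v|^2 / 2},

so the Boltzmann equation becomes a system of evolution equations for the coefficients f_\alpha(t, x), and the collision operator acts on the coefficient vector. The truncation order M and the fixed standard-Gaussian weight shown here are generic illustrative choices; the paper's contribution is precisely to extend the approximate quadratic collision operator beyond the normalized, dimensionless setting to arbitrary distribution functions.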
We review several parallel tempering schemes and examine their main ingredients for accuracy and efficiency. The present study covers two methods for selecting temperatures and several choices for the exchange of replicas, including a recent novel all-pair exchange method. We compare the resulting schemes and measure specific heat errors and efficiency using the two-dimensional (2D) Ising model. Our tests suggest that an earlier proposal to relate the number of local moves to the canonical correlation times is one of the key ingredients for increasing efficiency, and protocols using cluster algorithms are found to be very effective. Some of the protocols are also tested for efficiency and ground-state production in 3D spin glass models, where we find that a simple nearest-neighbor approach using a local n-fold-way algorithm is the most effective. Finally, we present evidence that the asymptotic limits of the ground-state energy for the isotropic case and for an anisotropic case of the 3D spin-glass model are very close and may even coincide.
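As standard background (not a detail taken from the abstract), all the schemes reviewed here rest on the replica-exchange step: a proposed swap of configurations between replicas at inverse temperatures \beta_i and \beta_j with current energies E_i and E_j is accepted with probability

    P_{acc} = \min\{ 1, \exp[ (\beta_i - \beta_j)(E_i - E_j) ] \},

so a swap is always accepted when the colder replica currently holds the higher energy and is exponentially suppressed otherwise. The protocols compared in the paper differ in how temperatures are selected, which replica pairs are proposed, and how often exchanges are attempted.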
Due to the surge in the volume of data generated and rapid advances in Artificial Intelligence (AI) techniques such as machine learning and deep learning, traditional computing models have become inadequate for processing enormous volumes of data and the complex application logic needed to extract intrinsic information. Computing accelerators such as Graphics Processing Units (GPUs) have become the de facto SIMD computing systems for many big data and machine learning applications. At the same time, the traditional computing model has gradually shifted from conventional ownership-based computing to the subscription-based cloud computing model. However, the lack of programming models and frameworks for developing cloud-native applications that seamlessly utilize both CPU and GPU resources in the cloud has become a bottleneck for rapid application development. To support this demand for simultaneous heterogeneous resource usage, programming models and new frameworks are needed to manage the underlying resources effectively. Aneka has emerged as a popular PaaS computing model for developing cloud applications using multiple programming models (Thread, Task, and MapReduce) in a single container on the .NET platform. Since Aneka addresses MIMD application development using CPU-based resources, while GPU programming models such as CUDA are designed for SIMD application development, this chapter discusses a GPU PaaS computing model for Aneka Clouds that enables rapid cloud application development for .NET platforms. Popular open-source GPU libraries are integrated into the existing Aneka task programming model, and the scheduling policies are extended to automatically identify GPU machines and schedule GPU tasks accordingly. A case study on image processing demonstrates the system, which has been built using the PaaS Aneka SDKs and the CUDA library.