
Implementation of 3D degridding algorithm on the NVIDIA GPUs using CUDA

 Added by Karel Adámek
 Publication date: 2021
 Field: Physics
 Language: English





Practical aperture synthesis imaging algorithms work by iterating between estimating the sky brightness distribution and comparing a prediction based on this estimate with the measured data (visibilities). Accuracy in the latter step is crucial but is made difficult by the irregular and non-planar sampling of the data by the telescope. In this work we present a GPU implementation of 3D degridding which accurately deals with these two difficulties and is designed for distributed operation. We address the load-balancing issues caused by the large variation in the number of visibilities that must be computed. Using CUDA and NVIDIA GPUs we measure performance of up to 1.2 billion visibilities per second.
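
As a rough illustration of the prediction step described above (not the paper's implementation, which additionally handles load balancing across subgrids and distributed operation), the CUDA kernel below predicts one visibility per thread by convolving a small support window of a chosen w-plane with an oversampled gridding convolution function. All names, the w-plane selection, and the data layout are assumptions made for this sketch.

```cuda
#include <cuda_runtime.h>
#include <cuComplex.h>

// Minimal per-visibility de-gridding sketch: one thread predicts one visibility
// by summing grid values weighted by a separable gridding convolution function.
// Grid-edge checks are omitted for brevity; names and layout are illustrative.
__global__ void degrid_visibilities(
    cuFloatComplex       *vis_out,   // predicted visibilities
    const float          *u,         // u coordinate in grid cells
    const float          *v,         // v coordinate in grid cells
    const int            *w_plane,   // pre-selected w-plane index per visibility
    const cuFloatComplex *grid,      // stack of w-planes, each grid_size x grid_size
    const float          *gcf,       // oversampled 1D gridding convolution function
    int n_vis, int grid_size, int support, int oversample)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n_vis) return;

    int iu = (int)floorf(u[i]);
    int iv = (int)floorf(v[i]);
    int fu = (int)roundf((u[i] - iu) * oversample);   // sub-cell offset into the GCF
    int fv = (int)roundf((v[i] - iv) * oversample);

    const cuFloatComplex *plane = grid + (size_t)w_plane[i] * grid_size * grid_size;

    cuFloatComplex sum = make_cuFloatComplex(0.0f, 0.0f);
    for (int dv = -support; dv <= support; dv++) {
        float kv = gcf[abs(dv * oversample - fv)];
        for (int du = -support; du <= support; du++) {
            float ku = gcf[abs(du * oversample - fu)];
            cuFloatComplex g = plane[(size_t)(iv + dv) * grid_size + (iu + du)];
            sum = cuCaddf(sum, make_cuFloatComplex(ku * kv * cuCrealf(g),
                                                   ku * kv * cuCimagf(g)));
        }
    }
    vis_out[i] = sum;
}
```

A naive one-thread-per-visibility mapping like this is exactly where the load imbalance mentioned in the abstract appears, since the work per region of the grid varies strongly with how densely it is sampled.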




Related research

Conventional GPU implementations of Strassen's algorithm (Strassen) typically rely on the existing high-performance matrix multiplication (GEMM), trading space for time. As a result, such approaches can only achieve practical speedup for relatively large, squarish matrices due to the extra memory overhead, and their usage is limited by the considerable workspace requirement. We present novel Strassen primitives for GPUs that can be composed to generate a family of Strassen algorithms. Our algorithms utilize both the memory and thread hierarchies on GPUs, reusing shared memory and register files inherited from GEMM, fusing additional operations, and avoiding extra workspace. We further exploit intra- and inter-kernel parallelism by batching, streaming, and employing atomic operations. We also develop a performance model for NVIDIA Volta GPUs to select the appropriate blocking parameters and predict the performance for GEMM and Strassen. Overall, our 1-level Strassen can achieve up to 1.11x speedup with a crossover point as small as 1,536 compared to cublasSgemm on an NVIDIA Tesla V100 GPU. With additional workspace, our 2-level Strassen can achieve 1.19x speedup with a crossover point at 7,680.
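
For context, the sketch below shows the conventional GEMM-composed 1-level Strassen that the paper improves upon: the seven half-size products are formed with cublasSgemm, quadrant sums with cublasSgeam, and explicit workspace (T1, T2, M) is required. It assumes square, even-sized, column-major matrices; the paper's contribution is precisely to avoid this extra workspace by fusing the additions into the GEMM kernels.

```cuda
#include <cublas_v2.h>
#include <cuda_runtime.h>

// Conventional workspace-based 1-level Strassen, C = A * B, as an illustrative
// baseline (not the paper's fused primitives). T1, T2 and M are (n/2) x (n/2)
// device buffers; all matrices are column-major with leading dimension n.
static void strassen_1level(cublasHandle_t handle, int n,
                            const float *A, const float *B, float *C,
                            float *T1, float *T2, float *M)
{
    const int hn = n / 2;
    const float one = 1.0f, zero = 0.0f;

    // Quadrant views (column-major): each shares the parent's leading dimension n.
    const float *A11 = A, *A21 = A + hn, *A12 = A + (size_t)hn * n, *A22 = A12 + hn;
    const float *B11 = B, *B21 = B + hn, *B12 = B + (size_t)hn * n, *B22 = B12 + hn;
    float *C11 = C, *C21 = C + hn, *C12 = C + (size_t)hn * n, *C22 = C12 + hn;

    // T = a*X + b*Y on half-size operands with the given leading dimensions.
    auto add = [&](float *T, float a, const float *X, int ldx,
                             float b, const float *Y, int ldy) {
        cublasSgeam(handle, CUBLAS_OP_N, CUBLAS_OP_N, hn, hn,
                    &a, X, ldx, &b, Y, ldy, T, hn);
    };
    // M = X * Y on half-size operands.
    auto mul = [&](const float *X, int ldx, const float *Y, int ldy) {
        cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, hn, hn, hn,
                    &one, X, ldx, Y, ldy, &zero, M, hn);
    };
    // Cq += s * M (in-place geam: the output aliases the second operand).
    auto acc = [&](float *Cq, float s) {
        cublasSgeam(handle, CUBLAS_OP_N, CUBLAS_OP_N, hn, hn,
                    &s, M, hn, &one, Cq, n, Cq, n);
    };

    cudaMemset(C, 0, (size_t)n * n * sizeof(float));

    add(T1, 1, A11, n,  1, A22, n); add(T2, 1, B11, n,  1, B22, n);
    mul(T1, hn, T2, hn);            acc(C11,  1); acc(C22, 1);   // M1
    add(T1, 1, A21, n,  1, A22, n);
    mul(T1, hn, B11, n);            acc(C21,  1); acc(C22, -1);  // M2
    add(T2, 1, B12, n, -1, B22, n);
    mul(A11, n, T2, hn);            acc(C12,  1); acc(C22, 1);   // M3
    add(T2, 1, B21, n, -1, B11, n);
    mul(A22, n, T2, hn);            acc(C11,  1); acc(C21, 1);   // M4
    add(T1, 1, A11, n,  1, A12, n);
    mul(T1, hn, B22, n);            acc(C11, -1); acc(C12, 1);   // M5
    add(T1, 1, A21, n, -1, A11, n); add(T2, 1, B11, n,  1, B12, n);
    mul(T1, hn, T2, hn);            acc(C22,  1);                // M6
    add(T1, 1, A12, n, -1, A22, n); add(T2, 1, B21, n,  1, B22, n);
    mul(T1, hn, T2, hn);            acc(C11,  1);                // M7
}
```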
While most robotics simulation libraries are built for low-dimensional and intrinsically serial tasks, soft-body and multi-agent robotics have created a demand for simulation environments that can model many interacting bodies in parallel. Despite the increasing interest in these fields, no existing simulation library addresses the challenge of providing a unified, highly-parallelized, GPU-accelerated interface for simulating large robotic systems. Titan is a versatile CUDA-based C++ robotics simulation library that employs a novel asynchronous computing model for GPU-accelerated simulations of robotics primitives. The innovative GPU architecture design permits simultaneous optimization and control on the CPU while the GPU runs asynchronously, enabling rapid topology optimization and reinforcement learning iterations. Kinematics are solved with a massively parallel integration scheme that incorporates constraints and environmental forces. We report dramatically improved performance over CPU-based baselines, simulating as many as 300 million primitive updates per second, while allowing flexibility for a wide range of research applications. We present several applications of Titan to high-performance simulations of soft-body and multi-agent robots.
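
As a hedged sketch of what such a massively parallel primitive update can look like (not Titan's actual API), the kernel below advances one point mass per thread with gravity, a penalty ground contact, and semi-implicit Euler integration; launched on a non-default stream, it leaves the host free for optimization or learning, in the spirit of the asynchronous model described above.

```cuda
#include <cuda_runtime.h>

// Illustrative mass-spring integration step: one thread per point mass. The
// struct layout, the penalty contact, and parameter names are assumptions.
struct Mass { float3 pos, vel, force; float m; };

__global__ void integrate_masses(Mass *masses, int n, float dt,
                                 float gravity, float ground_k)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    Mass b = masses[i];
    b.force.z += b.m * gravity;                             // gravity (negative value)
    if (b.pos.z < 0.0f) b.force.z += -ground_k * b.pos.z;   // penalty ground contact

    float inv_m = 1.0f / b.m;
    b.vel.x += b.force.x * inv_m * dt;                      // semi-implicit Euler
    b.vel.y += b.force.y * inv_m * dt;
    b.vel.z += b.force.z * inv_m * dt;
    b.pos.x += b.vel.x * dt;
    b.pos.y += b.vel.y * dt;
    b.pos.z += b.vel.z * dt;

    b.force = make_float3(0.0f, 0.0f, 0.0f);                // clear accumulator
    masses[i] = b;
}
```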
Hamid Tabani, 2021
Advanced Driver Assistance Systems (ADAS) and Autonomous Driving (AD) bring unprecedented performance requirements for automotive systems. Graphics Processing Unit (GPU) based platforms have been deployed with the aim of meeting these requirements, with the NVIDIA Jetson TX2 and its high-performance successor, the NVIDIA AGX Xavier, being relevant representatives. However, to what extent high-performance GPU configurations are appropriate for ADAS and AD workloads remains an open question. This paper analyzes this concern and provides valuable insights into this question by modeling two recent automotive NVIDIA GPU-based platforms, namely the TX2 and the AGX Xavier. In particular, our work assesses their microarchitectural parameters against relevant benchmarks, identifying GPU setups that deliver increased performance within a similar cost envelope, or that decrease hardware costs while preserving original performance levels. Overall, our analysis identifies opportunities for the optimization of automotive GPUs to further increase system efficiency.
With AMD reinforcing their ambition in the scientific high performance computing ecosystem, we extend the hardware scope of the Ginkgo linear algebra package to feature a HIP backend for AMD GPUs. In this paper, we report and discuss the porting effort from CUDA, the extension of the HIP framework to add missing features such as cooperative groups, the performance price of compiling HIP code for AMD architectures, and the design of a library providing native backends for NVIDIA and AMD GPUs while minimizing code duplication by using a shared code base.
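
Cooperative groups, mentioned above as a feature the authors had to supply themselves on the HIP side, look as follows in CUDA; a warp-level reduction of this kind is typical of the device code a library like Ginkgo needs to port (the kernel below is an illustrative example, not Ginkgo code).

```cuda
#include <cooperative_groups.h>
#include <cuda_runtime.h>
namespace cg = cooperative_groups;

// Illustrative CUDA kernel using cooperative groups: each warp-sized tile
// reduces its values with shuffle operations, then one lane per tile
// accumulates into the global result.
__global__ void block_sum(const float *in, float *out, int n)
{
    cg::thread_block block = cg::this_thread_block();
    cg::thread_block_tile<32> warp = cg::tiled_partition<32>(block);

    int i = blockIdx.x * blockDim.x + threadIdx.x;
    float v = (i < n) ? in[i] : 0.0f;

    // Tree reduction within the 32-thread tile.
    for (int offset = warp.size() / 2; offset > 0; offset /= 2)
        v += warp.shfl_down(v, offset);

    if (warp.thread_rank() == 0)
        atomicAdd(out, v);
}
```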
We introduce hankl, a lightweight Python implementation of the FFTLog algorithm for Cosmology. The FFTLog algorithm is an extension of the Fast Fourier Transform (FFT) for logarithmically spaced periodic sequences. It can be used to efficiently compute Hankel transformations, which are paramount for many modern cosmological analyses that are based on the power spectrum or the 2-point correlation function multipoles. The code is well-tested, open source, and publicly available.
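
For context, the Hankel transform that FFTLog evaluates can be written (in one common convention; the powers of k and r in the integrand vary between implementations) as

\[ \tilde{a}(k) = \int_0^\infty a(r)\, J_\mu(kr)\, k\, \mathrm{d}r , \]

where \(J_\mu\) is a Bessel function of order \(\mu\). FFTLog evaluates this for a sequence \(a(r_j)\) sampled at logarithmically spaced points \(r_j\) by treating the sequence as periodic in \(\ln r\), which reduces the transform to a fast Fourier transform.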