
Accelerating the Rate of Astronomical Discovery with GPU-Powered Clusters

Added by: Christopher Fluke
Publication date: 2011
Fields: Physics
Language: English





In recent years, the Graphics Processing Unit (GPU) has emerged as a low-cost alternative for high performance computing, enabling impressive speed-ups for a range of scientific computing applications. Early adopters in astronomy are already benefiting from adapting their codes to take advantage of the GPU's massively parallel processing paradigm. I give an introduction to, and overview of, the use of GPUs in astronomy to date, highlighting the adoption and application trends from the first ~100 GPU-related publications in astronomy. I discuss the opportunities and challenges of utilising GPU computing clusters, such as the new Australian GPU supercomputer, gSTAR, for accelerating the rate of astronomical discovery.
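To make the "massively parallel processing paradigm" concrete, here is a minimal CUDA sketch (not drawn from any of the codes surveyed in the paper): each GPU thread handles one array element, so a single kernel launch operates on the whole array at once. The array size and scale factor are arbitrary choices for illustration.

```cuda
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

// One thread per array element: the "massively parallel" pattern at its simplest.
__global__ void scale(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)                      // guard against threads past the end of the array
        data[i] *= factor;
}

int main()
{
    const int n = 1 << 20;          // ~1 million elements (arbitrary)
    std::vector<float> host(n, 1.0f);

    float *dev = nullptr;
    cudaMalloc((void**)&dev, n * sizeof(float));
    cudaMemcpy(dev, host.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    int threads = 256;
    int blocks  = (n + threads - 1) / threads;  // enough blocks to cover all elements
    scale<<<blocks, threads>>>(dev, 2.0f, n);

    cudaMemcpy(host.data(), dev, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(dev);
    printf("host[0] = %f\n", host[0]);          // expect 2.0
    return 0;
}
```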

Related research

Traditional analysis techniques may not be sufficient for astronomers to make the best use of the data sets that current and future instruments, such as the Square Kilometre Array and its Pathfinders, will produce. By utilizing the incredible pattern-recognition ability of the human mind, scientific visualization provides an excellent opportunity for astronomers to gain valuable new insight and understanding of their data, particularly when used interactively in 3D. The goal of our work is to establish the feasibility of a real-time 3D monitoring system for data going into the Australian SKA Pathfinder archive. Based on CUDA, an increasingly popular development tool, our work utilizes the massively parallel architecture of modern graphics processing units (GPUs) to provide astronomers with interactive 3D volume rendering for multi-spectral data sets. Unlike other approaches, we are targeting real-time interactive visualization of data sets larger than GPU memory while giving special attention to data with a low signal-to-noise ratio - two critical aspects for astronomy that are missing from most existing scientific visualization software packages. Our framework enables the astronomer to interact with the geometrical representation of the data, and to control the volume rendering process to generate a better representation of their data sets.
A. H. Hassan, C. J. Fluke (2011)
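The out-of-core aspect described above (data sets larger than GPU memory) generally comes down to streaming fixed-size bricks through device buffers. The CUDA sketch below illustrates that pattern under simple assumptions (a double-buffered loop over equally sized bricks, with a placeholder per-voxel kernel); it is not the framework's actual implementation, and all sizes are invented.

```cuda
#include <cuda_runtime.h>

// Placeholder per-voxel kernel; a real renderer would resample the brick or
// apply a transfer function here.
__global__ void process_brick(float *brick, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        brick[i] = fmaxf(brick[i], 0.0f);   // stand-in for real per-voxel work
}

int main()
{
    const int brick_voxels = 64 * 64 * 64;  // bricks sized to fit comfortably in GPU memory
    const int num_bricks   = 8;             // stand-in for a cube far larger than the GPU

    // Pinned host memory allows asynchronous host-to-device copies.
    float *host_cube = nullptr;
    cudaMallocHost((void**)&host_cube, (size_t)num_bricks * brick_voxels * sizeof(float));
    for (int i = 0; i < num_bricks * brick_voxels; ++i)
        host_cube[i] = 0.0f;                // stands in for a data cube loaded from disk

    float *dev_brick[2];
    cudaStream_t stream[2];
    for (int s = 0; s < 2; ++s) {
        cudaMalloc((void**)&dev_brick[s], brick_voxels * sizeof(float));
        cudaStreamCreate(&stream[s]);
    }

    // Double-buffered streaming: while one brick is being processed on the GPU,
    // the next brick is uploading on the other stream.
    for (int b = 0; b < num_bricks; ++b) {
        int s = b % 2;
        cudaMemcpyAsync(dev_brick[s], host_cube + (size_t)b * brick_voxels,
                        brick_voxels * sizeof(float), cudaMemcpyHostToDevice, stream[s]);
        process_brick<<<(brick_voxels + 255) / 256, 256, 0, stream[s]>>>(dev_brick[s], brick_voxels);
    }
    cudaDeviceSynchronize();

    for (int s = 0; s < 2; ++s) { cudaFree(dev_brick[s]); cudaStreamDestroy(stream[s]); }
    cudaFreeHost(host_cube);
    return 0;
}
```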
Upcoming and future astronomy research facilities will systematically generate terabyte-sized data sets, moving astronomy into the Petascale data era. While such facilities will provide astronomers with unprecedented levels of accuracy and coverage, the increases in dataset size and dimensionality will pose serious computational challenges for many current astronomy data analysis and visualization tools. With such data sizes, even simple data analysis tasks (e.g. calculating a histogram or computing the data minimum/maximum) may not be achievable without access to a supercomputing facility. To effectively handle such dataset sizes, which exceed today's single-machine memory and processing limits, we present a framework that exploits the distributed power of GPUs and many-core CPUs, with the goal of providing data analysis and visualization tasks as a service for astronomers. By mixing shared and distributed memory architectures, our framework effectively utilizes the underlying hardware infrastructure, handling both batched and real-time data analysis and visualization tasks. Offering such functionality as a service in a Software-as-a-Service manner will reduce the total cost of ownership, provide an easy-to-use tool to the wider astronomical community, and enable more optimized utilization of the underlying hardware infrastructure.
A. H. Hassan, C. J. Fluke (2012)
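As an illustration of the "simple data analysis tasks" mentioned above, the sketch below computes a histogram on a single GPU with one thread per data value and atomic updates of the bins. The data, bin count, and value range are invented for the example; a distributed deployment like the one described would still need to merge per-node results.

```cuda
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

// Each thread bins one value; atomicAdd combines counts from all threads safely.
__global__ void histogram(const float *data, int n, unsigned int *bins,
                          int nbins, float lo, float hi)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    int b = (int)((data[i] - lo) / (hi - lo) * nbins);
    if (b < 0) b = 0;
    if (b >= nbins) b = nbins - 1;          // clamp values on the upper edge
    atomicAdd(&bins[b], 1u);
}

int main()
{
    const int n = 1 << 20, nbins = 64;      // sizes chosen only for the example
    std::vector<float> host(n);
    for (int i = 0; i < n; ++i) host[i] = (float)i / n;   // synthetic data in [0, 1)

    float *d_data; unsigned int *d_bins;
    cudaMalloc((void**)&d_data, n * sizeof(float));
    cudaMalloc((void**)&d_bins, nbins * sizeof(unsigned int));
    cudaMemcpy(d_data, host.data(), n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemset(d_bins, 0, nbins * sizeof(unsigned int));

    histogram<<<(n + 255) / 256, 256>>>(d_data, n, d_bins, nbins, 0.0f, 1.0f);

    std::vector<unsigned int> bins(nbins);
    cudaMemcpy(bins.data(), d_bins, nbins * sizeof(unsigned int), cudaMemcpyDeviceToHost);
    printf("bin[0] = %u\n", bins[0]);
    // In a distributed deployment, per-node histograms like this one would be
    // summed across nodes (e.g. with an MPI reduction) to cover the full dataset.
    cudaFree(d_data); cudaFree(d_bins);
    return 0;
}
```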
We present a framework to interactively volume-render three-dimensional data cubes using distributed ray-casting and volume bricking over a cluster of workstations powered by one or more graphics processing units (GPUs) and a multi-core CPU. The main design target for this framework is to provide an in-core visualization solution able to provide three-dimensional interactive views of terabyte-sized data cubes. We tested the presented framework using a computing cluster comprising 64 nodes with a total of 128 GPUs. The framework proved to be scalable, rendering a 204 GB data cube at an average of 30 frames per second. Our performance analyses also compare the NVIDIA Tesla 1060 and 2050 GPU architectures and examine the effect of increasing the visualization output resolution on rendering performance. Although our initial focus, and the examples presented in this work, is volume rendering of spectral data cubes from radio astronomy, we contend that our approach has applicability to other disciplines where close to real-time volume rendering of terabyte-order 3D data sets is a requirement.
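Distributed volume renderers of this kind typically merge the partial images produced by each node with front-to-back alpha compositing (the "over" operator). The kernel below is a generic CUDA sketch of that per-pixel step, not the framework's own code; the frame size and blank input buffers are placeholders.

```cuda
#include <cuda_runtime.h>

// Merge a "far" partial image into a "near" one with the front-to-back "over"
// operator: out = near + (1 - near_alpha) * far, evaluated per pixel in parallel.
__global__ void composite_over(float4 *near_img, const float4 *far_img, int npix)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= npix) return;
    float4 n = near_img[i];
    float4 f = far_img[i];
    float  t = 1.0f - n.w;          // remaining transparency of the near image
    float4 out;
    out.x = n.x + t * f.x;
    out.y = n.y + t * f.y;
    out.z = n.z + t * f.z;
    out.w = n.w + t * f.w;
    near_img[i] = out;
}

int main()
{
    const int npix = 800 * 600;     // arbitrary frame size for the example
    float4 *d_near, *d_far;
    cudaMalloc((void**)&d_near, npix * sizeof(float4));
    cudaMalloc((void**)&d_far,  npix * sizeof(float4));
    cudaMemset(d_near, 0, npix * sizeof(float4));   // fully transparent placeholder images
    cudaMemset(d_far,  0, npix * sizeof(float4));

    // In a cluster renderer, each node would supply its own partial image,
    // composited in depth order; here two blank buffers stand in for them.
    composite_over<<<(npix + 255) / 256, 256>>>(d_near, d_far, npix);
    cudaDeviceSynchronize();

    cudaFree(d_near); cudaFree(d_far);
    return 0;
}
```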
High-fidelity flow simulation of complex geometries at high Reynolds number ($Re$) remains very challenging and requires more powerful computational capability from HPC systems. However, the development of HPC systems based on traditional CPU architectures faces bottlenecks due to high power consumption and technical difficulties. Heterogeneous-architecture computation has emerged as a promising solution to these difficulties. GPU acceleration has been applied to low-order CFD solvers on structured grids and to high-order solvers on unstructured meshes. High-order finite difference methods on structured grids possess many advantages, e.g. high efficiency, robustness and low storage; however, the strong dependence among points in a high-order finite difference scheme still limits its application on GPU platforms. In the present work, we propose a set of hardware-aware techniques to optimize the efficiency of data transfer between CPU and GPU, and of communication between GPUs. An in-house multi-block structured CFD solver with high-order finite difference methods on curvilinear coordinates is ported to the GPU platform and achieves satisfactory performance, with a maximum speedup of around 2000x over a single CPU core. This work provides an efficient solution for applying GPU computing to CFD simulations with certain high-order finite difference methods on current heterogeneous GPU computers. Tests show that significant acceleration can be achieved on different GPUs.
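One common hardware-aware way to hide CPU-GPU transfer cost is to overlap the copy of boundary (halo) data, needed for communication with neighbouring GPUs, with the update of interior points, using separate CUDA streams and pinned host memory. The sketch below illustrates that overlap with stand-in kernels and arbitrary sizes; it is not the solver described above.

```cuda
#include <cuda_runtime.h>

// Stand-in kernels for a structured-grid stencil update (interior vs. boundary cells).
__global__ void update_interior(float *u, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) u[i] += 1.0f;
}
__global__ void update_boundary(float *halo, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) halo[i] += 1.0f;
}

int main()
{
    const int interior = 1 << 20, halo = 1 << 10;   // arbitrary sizes for the example

    float *d_interior, *d_halo, *h_halo;
    cudaMalloc((void**)&d_interior, interior * sizeof(float));
    cudaMalloc((void**)&d_halo,     halo * sizeof(float));
    cudaMallocHost((void**)&h_halo, halo * sizeof(float));  // pinned memory enables async copies
    cudaMemset(d_interior, 0, interior * sizeof(float));
    cudaMemset(d_halo,     0, halo * sizeof(float));

    cudaStream_t s_comp, s_comm;
    cudaStreamCreate(&s_comp);
    cudaStreamCreate(&s_comm);

    // Update the halo first, then download it for exchange with a neighbouring rank
    // (e.g. via MPI) while the much larger interior update runs concurrently.
    update_boundary<<<(halo + 255) / 256, 256, 0, s_comm>>>(d_halo, halo);
    cudaMemcpyAsync(h_halo, d_halo, halo * sizeof(float), cudaMemcpyDeviceToHost, s_comm);
    update_interior<<<(interior + 255) / 256, 256, 0, s_comp>>>(d_interior, interior);

    cudaDeviceSynchronize();   // both streams done: halo ready on host, interior updated

    cudaFree(d_interior); cudaFree(d_halo); cudaFreeHost(h_halo);
    cudaStreamDestroy(s_comp); cudaStreamDestroy(s_comm);
    return 0;
}
```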
We have performed a new search for radio pulsars in archival data of the intermediate and high Galactic latitude parts of the Southern High Time Resolution Universe pulsar survey. This is the first time the entire dataset has been searched for binary pulsars, an achievement enabled by GPU-accelerated dedispersion and periodicity search codes nearly 50 times faster than the previously used pipeline. Candidate selection was handled entirely by a Machine Learning algorithm, allowing for the assessment of 17.6 million candidates in a few person-days. We have also introduced an outlier detection algorithm for efficient radio-frequency interference (RFI) mitigation on folded data, a new approach that enabled the discovery of pulsars previously masked by RFI. We discuss implications for future searches, particularly the importance of expanding work on RFI mitigation to improve survey completeness. In total we discovered 23 previously unknown sources, including 6 millisecond pulsars and at least 4 pulsars in binary systems. We also found an elusive but credible redback candidate that we have yet to confirm.
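GPU dedispersion of the kind mentioned above is, at its simplest, a shift-and-sum over frequency channels: each channel is delayed according to the cold-plasma dispersion law before the channels are added into a single time series. The CUDA sketch below implements that brute-force form for a single trial dispersion measure; the observing parameters (channel count, bandwidth, sampling time) are invented for the example, and the code is not the survey's actual pipeline.

```cuda
#include <vector>
#include <cuda_runtime.h>

// Brute-force incoherent dedispersion for one trial dispersion measure (DM).
// Per-channel delay: dt = 4.148808e3 s * DM * (f^-2 - f_ref^-2), f in MHz, DM in pc cm^-3.
__global__ void dedisperse(const float *data,   // [nchan * nsamp], channel-major filterbank
                           float *series,       // [nout] dedispersed time series
                           const int *shift,    // per-channel delay in samples
                           int nchan, int nsamp, int nout)
{
    int t = blockIdx.x * blockDim.x + threadIdx.x;
    if (t >= nout) return;
    float sum = 0.0f;
    for (int c = 0; c < nchan; ++c)
        sum += data[c * nsamp + t + shift[c]];  // shift each channel, then sum
    series[t] = sum;
}

int main()
{
    // Arbitrary survey-like parameters, chosen only for the example.
    const int   nchan = 512, nsamp = 1 << 16;
    const float dm = 50.0f, f_hi = 1500.0f, bw = 300.0f, tsamp = 64e-6f; // MHz, MHz, s

    // Per-channel delays relative to the highest-frequency channel.
    std::vector<int> shift(nchan);
    int max_shift = 0;
    for (int c = 0; c < nchan; ++c) {
        float f  = f_hi - bw * c / nchan;
        float dt = 4.148808e3f * dm * (1.0f / (f * f) - 1.0f / (f_hi * f_hi));
        shift[c] = (int)(dt / tsamp + 0.5f);
        if (shift[c] > max_shift) max_shift = shift[c];
    }
    const int nout = nsamp - max_shift;   // samples for which every channel is available

    float *d_data, *d_series; int *d_shift;
    cudaMalloc((void**)&d_data,   nchan * nsamp * sizeof(float));
    cudaMalloc((void**)&d_series, nout * sizeof(float));
    cudaMalloc((void**)&d_shift,  nchan * sizeof(int));
    cudaMemset(d_data, 0, nchan * nsamp * sizeof(float));   // stand-in for real filterbank data
    cudaMemcpy(d_shift, shift.data(), nchan * sizeof(int), cudaMemcpyHostToDevice);

    dedisperse<<<(nout + 255) / 256, 256>>>(d_data, d_series, d_shift, nchan, nsamp, nout);
    cudaDeviceSynchronize();

    cudaFree(d_data); cudaFree(d_series); cudaFree(d_shift);
    return 0;
}
```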