
High-resolution imaging on TPUs

Published by: Fantine Huot
Publication date: 2019
Research field: Informatics Engineering
Paper language: English





The rapid evolution of artificial intelligence (AI) is leading to a new generation of hardware accelerators optimized for deep learning. Some of the designs of these accelerators are general enough to allow their use for other computationally intensive tasks beyond AI. Cloud tensor processing units (TPUs) are one such example. Here, we demonstrate a novel approach using TensorFlow on Cloud TPUs to implement a high-resolution imaging technique called full-waveform inversion. Higher-order numerical stencils leverage the efficient matrix multiplication offered by the Cloud TPU, and the halo exchange benefits from the dedicated high-speed interchip connection. The performance is competitive when compared with Tesla V100 graphics processing units and shows promise for future computation- and memory-intensive imaging applications.
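To make the stencil-as-matmul idea concrete, below is a minimal NumPy sketch of a 2-D acoustic wavefield update in which a finite-difference second-derivative stencil is applied along each axis as a dense matrix multiplication, the operation TPUs excel at. The grid size, stencil order, velocity model, and leapfrog update are illustrative assumptions, not details taken from the paper.

import numpy as np

def second_derivative_matrix(n, dx, coeffs=(-1/12, 4/3, -5/2, 4/3, -1/12)):
    # Banded matrix for a 4th-order-accurate second derivative; multiplying
    # a wavefield by it applies the stencil along one axis as a dense matmul.
    D = np.zeros((n, n))
    for off, c in zip(range(-2, 3), coeffs):
        D += np.diag(np.full(n - abs(off), c), k=off)
    return D / dx**2

# Illustrative 2-D acoustic update: u_tt = v^2 * (u_zz + u_xx).
n, dx, dt = 128, 10.0, 1e-3
v = np.full((n, n), 2000.0)        # assumed constant velocity model (m/s)
u_prev = np.zeros((n, n))
u = np.random.randn(n, n) * 1e-3   # toy initial wavefield
D = second_derivative_matrix(n, dx)

laplacian = D @ u + u @ D.T        # stencil along each axis as a matmul
u_next = 2*u - u_prev + (v * dt)**2 * laplacian

On a real TPU the same contractions would run on the matrix units via TensorFlow, with the grid partitioned across cores and halos exchanged over the inter-chip links.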




Read also

Advanced magnetic resonance (MR) image reconstruction methods, such as compressed sensing and subspace-based imaging, are formulated as large-scale, iterative optimization problems. Given the large number of reconstructions required in practical clinical use, the computation time of these advanced reconstruction methods is often unacceptable. In this work, we propose using Google's Tensor Processing Units (TPUs) to accelerate MR image reconstruction. The TPU is an application-specific integrated circuit (ASIC) for machine learning applications, which has recently been used to solve large-scale scientific computing problems. As a proof of concept, we implement the alternating direction method of multipliers (ADMM) in TensorFlow to reconstruct images on TPUs. The reconstruction is based on multi-channel, sparsely sampled, radial-trajectory $k$-space data with sparsity constraints. The forward and inverse non-uniform Fourier transform operations are formulated in terms of matrix multiplications, as in the discrete Fourier transform. The sparsifying transform and its adjoint operations are formulated as convolutions. Data decomposition is applied to the measured $k$-space data such that the aforementioned tensor operations are localized within individual TPU cores. The data decomposition and the inter-core communication strategy are designed in accordance with the TPU interconnect network topology in order to minimize the communication time. The accuracy and high parallel efficiency of the proposed TPU-based image reconstruction method are demonstrated through numerical examples.
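As a toy illustration of the matrix-multiplication formulation of the non-uniform Fourier transform, the 1-D NumPy sketch below builds the forward operator explicitly; the sample counts, grid, and random data are assumptions, and the multi-channel radial pipeline, sparsity constraints, and ADMM iterations are not shown.

import numpy as np

rng = np.random.default_rng(0)
k = rng.uniform(-0.5, 0.5, size=64)   # assumed nonuniform k-space locations
x = np.arange(32) - 16                # image-domain grid positions

# Forward non-uniform DFT as an explicit matrix: entry (m, n) is
# exp(-2*pi*i * k_m * x_n), so the transform is a single dense matmul.
A = np.exp(-2j * np.pi * np.outer(k, x))

image = rng.standard_normal(32)
kspace = A @ image                    # forward transform (matmul)
backproj = A.conj().T @ kspace        # adjoint transform (matmul)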
Xiao Zhang, 2020
The last ten years have witnessed the fast spread of massively parallel computing clusters, from leading supercomputing facilities down to the average university computing center. Many companies in the private sector have undergone a similar evolution. In this scenario, the seamless integration of software and middleware libraries is a key ingredient to ensure portability of scientific codes and guarantee them an extended lifetime. In this work, we describe the integration of the ChASE library, a modern parallel eigensolver, into an existing legacy code for the first-principles computation of optical properties of materials via solution of the Bethe-Salpeter equation for the optical polarization function. Our numerical tests show that, as a result of integrating ChASE and parallelizing the reading routine, the code experiences a remarkable speedup and greatly improved scaling behavior on both multi- and many-core architectures. We demonstrate that such a modernized BSE code will, by fully exploiting parallel computing architectures and file systems, enable domain scientists to accurately study complex material systems that were not accessible before.
In this work, we present two parallel algorithms for the large-scale discrete Fourier transform (DFT) on Tensor Processing Unit (TPU) clusters. The two parallel algorithms correspond to two formulations of the DFT: one is based on the Kronecker product, specifically dense matrix multiplications between the input data and the Vandermonde matrix, denoted KDFT in this work; the other is based on the famous Cooley-Tukey algorithm and phase adjustment, denoted FFT in this work. Both the KDFT and FFT formulations take full advantage of the TPU's strength in matrix multiplications. The KDFT formulation allows direct use of nonuniform inputs without an additional step. In the two parallel algorithms, the same data decomposition strategy is applied to the input data. Through the data decomposition, the dense matrix multiplications in KDFT and FFT are kept local within TPU cores and can be performed completely in parallel. The communication among TPU cores is achieved through the one-shuffle scheme in both parallel algorithms, in which sending and receiving data take place simultaneously between two neighboring cores and along the same direction on the interconnect network. The one-shuffle scheme is designed for the interconnect topology of TPU clusters, minimizing the time required by the communication among TPU cores. Both KDFT and FFT are implemented in TensorFlow. The three-dimensional complex DFT is performed on an example of dimension $8192 \times 8192 \times 8192$ with a full TPU Pod: the run time of KDFT is 12.66 seconds and that of FFT is 8.3 seconds. A scaling analysis is provided to demonstrate the high parallel efficiency of the two DFT implementations on TPUs.
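The KDFT formulation can be sketched in a few lines: the multi-dimensional DFT factors into one Vandermonde (DFT) matrix per axis, i.e. a Kronecker product, so the whole transform reduces to dense matrix multiplications. The toy NumPy version below (size 8 instead of 8192, with no data decomposition or one-shuffle communication) merely checks that formulation against np.fft.fftn.

import numpy as np

def dft_matrix(n):
    # Vandermonde/DFT matrix W with W[j, k] = exp(-2*pi*i*j*k / n).
    j = np.arange(n)
    return np.exp(-2j * np.pi * np.outer(j, j) / n)

n = 8                        # toy size; the paper runs 8192^3 on a TPU Pod
W = dft_matrix(n)
x = np.random.randn(n, n, n)

# 3-D DFT as one dense contraction per axis (the Kronecker-product view).
X = np.einsum('ai,bj,ck,ijk->abc', W, W, W, x)
assert np.allclose(X, np.fft.fftn(x))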
The Jaccard similarity index is an important measure of the overlap of two sets, widely used in machine learning, computational genomics, information retrieval, and many other areas. We design and implement SimilarityAtScale, the first communication-efficient distributed algorithm for computing the Jaccard similarity among pairs of large datasets. Our algorithm provides an efficient encoding of this problem into a multiplication of sparse matrices. Both the encoding and the sparse matrix product are performed in a way that minimizes data movement in terms of communication and synchronization costs. We apply our algorithm to obtain the similarity among all pairs of a set of large samples of genomes. This task is a key part of modern metagenomics analysis and an ever-growing need due to the increasing availability of high-throughput DNA sequencing data. The resulting scheme is the first to enable accurate Jaccard distance derivations for massive datasets, using large-scale distributed-memory systems. We package our routines in a tool, called GenomeAtScale, that combines the proposed algorithm with tools for processing input sequences. Our evaluation on real data illustrates that one can use GenomeAtScale to effectively employ tens of thousands of processors to reach new frontiers in large-scale genomic and metagenomic analysis. While GenomeAtScale can be used to foster DNA research, the more general underlying SimilarityAtScale algorithm may be used for high-performance distributed similarity computations in other data analytics application domains.
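The encoding into a sparse matrix product can be illustrated on a single node: with a binary incidence matrix of samples by features, one sparse matmul yields all pairwise intersection sizes, from which Jaccard similarities follow. The SciPy sketch below uses made-up sizes and omits the distributed, communication-minimizing machinery of SimilarityAtScale.

import numpy as np
from scipy import sparse

# Rows are samples (e.g. genomes), columns are features (e.g. k-mers);
# M[i, j] = 1 if sample i contains feature j. Sizes are illustrative.
M = sparse.random(5, 1000, density=0.05, format='csr', random_state=0)
M.data[:] = 1.0                     # binarize the incidence matrix

inter = (M @ M.T).toarray()         # |A_i ∩ A_j| via one sparse matmul
sizes = np.asarray(M.sum(axis=1)).ravel()
union = sizes[:, None] + sizes[None, :] - inter
jaccard = inter / union             # pairwise Jaccard similarity matrix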
Commercial graphics processors (GPUs) have high compute capacity at very low cost, which makes them attractive for general-purpose scientific computing. In this paper we show how graphics processors can be used for N-body simulations to obtain improvements in performance over current-generation CPUs. We have developed a highly optimized algorithm for performing the O(N^2) force calculations that constitute the major part of stellar and molecular dynamics simulations. In some of the calculations, we achieve sustained performance of nearly 100 GFlops on an ATI X1900XTX. The performance on GPUs is comparable to specialized processors such as GRAPE-6A and MDGRAPE-3, but at a fraction of the cost. Furthermore, the wide availability of GPUs has significant implications for cluster computing and distributed computing efforts like Folding@Home.
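The O(N^2) all-pairs force kernel that dominates such simulations can be written directly; the NumPy sketch below is a plain vectorized version with an assumed Plummer softening length and unit system, not the paper's optimized GPU implementation.

import numpy as np

def pairwise_accelerations(pos, mass, G=1.0, eps=1e-3):
    # Direct-summation gravitational accelerations with Plummer softening.
    r = pos[None, :, :] - pos[:, None, :]    # (N, N, 3) separation vectors
    d2 = (r**2).sum(axis=-1) + eps**2        # softened squared distances
    inv_d3 = d2**-1.5
    np.fill_diagonal(inv_d3, 0.0)            # drop self-interaction terms
    return G * (r * (mass[None, :, None] * inv_d3[:, :, None])).sum(axis=1)

pos = np.random.randn(256, 3)
mass = np.ones(256)
acc = pairwise_accelerations(pos, mass)      # shape (256, 3)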