In this work, we present two parallel algorithms for the large-scale discrete Fourier transform (DFT) on Tensor Processing Unit (TPU) clusters. The two parallel algorithms correspond to two formulations of the DFT: one is based on the Kronecker product, specifically dense matrix multiplications between the input data and the Vandermonde matrix, denoted KDFT in this work; the other is based on the Cooley-Tukey algorithm and phase adjustment, denoted FFT in this work. Both formulations take full advantage of the TPU's strength in matrix multiplication. The KDFT formulation allows direct use of nonuniform inputs without an additional step. Both parallel algorithms apply the same data-decomposition strategy to the input data. Through this decomposition, the dense matrix multiplications in KDFT and FFT remain local to individual TPU cores and can be performed completely in parallel. Communication among TPU cores is achieved through the one-shuffle scheme in both algorithms, in which each core simultaneously sends data to and receives data from its neighboring core along the same direction of the interconnect network. The one-shuffle scheme is designed for the interconnect topology of TPU clusters and minimizes the time required for communication among TPU cores. Both KDFT and FFT are implemented in TensorFlow. The three-dimensional complex DFT is performed on an example of dimension $8192 \times 8192 \times 8192$ with a full TPU Pod: the run time of KDFT is 12.66 seconds and that of FFT is 8.3 seconds. Scaling analysis is provided to demonstrate the high parallel efficiency of the two DFT implementations on TPUs.
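The KDFT formulation described above amounts to multiplying the input by the Vandermonde (DFT) matrix, which is exactly the dense matrix multiplication a TPU accelerates. A minimal single-core NumPy sketch of this formulation follows; it is not the authors' distributed TensorFlow implementation, and the nonuniform sample locations `t` are only an illustrative parameter:

```python
import numpy as np

def kdft_1d(x, t=None):
    """1D DFT as a dense matrix multiplication with a Vandermonde matrix.

    x : complex samples of length N
    t : optional nonuniform sample locations in [0, N); defaults to 0..N-1
    """
    N = len(x)
    if t is None:
        t = np.arange(N)            # uniform sampling
    k = np.arange(N)
    # Vandermonde (DFT) matrix: W[k, n] = exp(-2*pi*i * k * t[n] / N)
    W = np.exp(-2j * np.pi * np.outer(k, t) / N)
    return W @ x                    # dense matrix-vector product

# With uniform sampling the result coincides with the standard FFT;
# passing nonuniform locations t needs no additional preprocessing step.
x = np.random.randn(8) + 1j * np.random.randn(8)
assert np.allclose(kdft_1d(x), np.fft.fft(x))
```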
In this paper we demonstrate a methodology for parallelizing the computation of large one-dimensional discrete fast Fourier transforms (DFFTs) on multi-core Intel Xeon processors. DFFT implementations based on the recursive Cooley-Tukey method have to control cache utilization, memory bandwidth, and vector hardware usage, and at the same time scale across multiple threads or compute nodes. Our method builds on the single-threaded Intel Math Kernel Library (MKL) implementation of the DFFT and uses the Intel Cilk Plus framework for thread parallelism. We demonstrate the ability of Intel Cilk Plus to handle parallel recursion with nested loop-centric parallelism without tuning the code to the number of cores or cache metrics. The result of our work is a library called EFFT that performs 1D DFTs of size $2^N$ for $N \geq 21$ faster than the corresponding Intel MKL parallel DFT implementation by up to 1.5x, and faster than FFTW by up to 2.5x. The code of EFFT is available for free download under the GPLv3 license. This work provides a new efficient DFFT implementation and at the same time serves as an educational example of how computer science problems with complex parallel patterns can be optimized for high performance using the Intel Cilk Plus framework.
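The recursion that EFFT parallelizes is the radix-2 Cooley-Tukey split; the sketch below shows its serial structure in Python. The library itself builds on MKL kernels and spawns the independent sub-transforms with Cilk Plus, which is not reproduced here:

```python
import numpy as np

def ct_fft(x):
    """Radix-2 decimation-in-time Cooley-Tukey FFT (length must be 2**N)."""
    n = len(x)
    if n == 1:
        return x
    # The two half-size sub-transforms are independent; an EFFT-style code
    # would spawn them as parallel tasks (e.g. cilk_spawn) at the top levels.
    even = ct_fft(x[0::2])
    odd  = ct_fft(x[1::2])
    twiddle = np.exp(-2j * np.pi * np.arange(n // 2) / n)
    return np.concatenate([even + twiddle * odd,
                           even - twiddle * odd])

x = np.random.randn(2**10) + 1j * np.random.randn(2**10)
assert np.allclose(ct_fft(x), np.fft.fft(x))
```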
The study of general-purpose computation on GPUs (Graphics Processing Units) can improve the image-processing capability of microcomputer systems. This paper studies the parallelism of the different stages of the decimation-in-time radix-2 FFT algorithm, designs the butterfly and scramble kernels, and implements a 2D FFT on the GPU. The experimental results demonstrate the validity of the approach and its advantage over a general-purpose CPU, especially for large input sizes. The approach can also be generalized to other similar transforms.
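The "scramble" and "butterfly" kernels correspond to the two phases of the iterative decimation-in-time radix-2 algorithm: a bit-reversal permutation followed by $\log_2 N$ butterfly stages, in which all butterflies of a stage are independent and therefore map naturally to GPU threads. A single-threaded Python sketch of this structure (not the paper's GPU kernels):

```python
import numpy as np

def iterative_fft(x):
    """Iterative radix-2 DIT FFT: scramble pass, then log2(N) butterfly stages.
    The length of x must be a power of two."""
    a = np.asarray(x, dtype=complex).copy()
    n = len(a)
    bits = n.bit_length() - 1
    # "Scramble" kernel: bit-reversal permutation of the input.
    rev = np.array([int(format(i, f'0{bits}b')[::-1], 2) for i in range(n)])
    a = a[rev]
    # "Butterfly" kernels: one stage per level; butterflies within a stage
    # are independent, which is what a GPU kernel exploits.
    size = 2
    while size <= n:
        half = size // 2
        w = np.exp(-2j * np.pi * np.arange(half) / size)
        for start in range(0, n, size):
            u = a[start:start + half].copy()
            v = a[start + half:start + size] * w
            a[start:start + half] = u + v
            a[start + half:start + size] = u - v
        size *= 2
    return a

x = np.random.randn(16) + 1j * np.random.randn(16)
assert np.allclose(iterative_fft(x), np.fft.fft(x))
```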
The rapid evolution of artificial intelligence (AI) is leading to a new generation of hardware accelerators optimized for deep learning. Some of the designs of these accelerators are general enough to allow their use for other computationally intensive tasks beyond AI. Cloud tensor processing units (TPUs) are one such example. Here, we demonstrate a novel approach using TensorFlow on Cloud TPUs to implement a high-resolution imaging technique called full-waveform inversion. Higher-order numerical stencils leverage the efficient matrix multiplication offered by the Cloud TPU, and the halo exchange benefits from the dedicated high-speed interchip connection. The performance is competitive when compared with Tesla V100 graphics processing units and shows promise for future computation- and memory-intensive imaging applications.
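The idea of expressing higher-order stencils as matrix multiplications can be illustrated with a small sketch; the example below applies a standard fourth-order central-difference stencil for the second derivative as a banded matrix-vector product. This illustrates the principle only and is not the authors' TPU code, which uses TensorFlow and higher-order stencils with halo exchange across chips:

```python
import numpy as np

# Standard 4th-order central-difference weights for the second derivative.
coeffs = np.array([-1/12, 4/3, -5/2, 4/3, -1/12])

def second_derivative_matmul(u, dx):
    """Apply the stencil as a dense (banded) matrix multiplication."""
    n = len(u)
    D = np.zeros((n, n))
    for offset, c in zip(range(-2, 3), coeffs):
        D += c * np.eye(n, k=offset)          # banded stencil matrix
    return (D @ u) / dx**2                    # stencil applied as a matmul

x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
u = np.sin(x)
d2u = second_derivative_matmul(u, x[1] - x[0])
# Interior points should match -sin(x) to 4th-order accuracy.
assert np.allclose(d2u[2:-2], -np.sin(x)[2:-2], atol=1e-6)
```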
The discrete Fourier transform (DFT) is an important operator that acts on the Hilbert space of complex-valued functions on the ring $\mathbb{Z}/N\mathbb{Z}$. In the case where $N = p$ is an odd prime number, we exhibit a canonical basis of eigenvectors for the DFT. The transition matrix from the standard basis to the canonical basis defines a novel transform which we call the discrete oscillator transform (DOT for short). Finally, we describe a fast algorithm for computing the discrete oscillator transform in certain cases.
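For reference, with one common (unitary) normalization convention the operator in question is

$$ (\mathcal{F}f)(k) = \frac{1}{\sqrt{N}} \sum_{n \in \mathbb{Z}/N\mathbb{Z}} f(n)\, e^{-2\pi i n k / N}, \qquad f \colon \mathbb{Z}/N\mathbb{Z} \to \mathbb{C}, $$

which satisfies $\mathcal{F}^4 = \mathrm{Id}$, so its eigenvalues lie in $\{\pm 1, \pm i\}$; the nontrivial question addressed above is the choice of a canonical eigenbasis.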
Imaging data from upcoming radio telescopes requires processing to be distributed at large scales. This paper presents a distributed Fourier transform algorithm for radio interferometry processing. It generates arbitrary grid chunks with full non-coplanarity corrections while minimising memory residency, data transfer, and compute work. We utilise window functions to isolate the influence between regions of grid and image space. This allows us to distribute image data between nodes and construct parts of grid space exactly when and where they are needed. The developed prototype easily handles image data terabytes in size while generating visibilities at high throughput and accuracy. Scaling is demonstrated to be better than cubic in baseline length, reducing the risk involved in growing radio astronomy processing to the Square Kilometre Array and similar telescopes.
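The role of the window functions can be illustrated with a toy one-dimensional sketch: smooth windows forming a partition of unity split image space into facets whose transforms can be computed independently and summed in grid space. The actual distributed algorithm further exploits the windows' concentration to bound each facet's influence on the grid; the raised-cosine windows below are purely illustrative:

```python
import numpy as np

n = 512
x = np.arange(n)
image = np.random.randn(n)

# Two overlapping windows forming a partition of unity (w_left + w_right = 1).
t = np.clip((x - (n // 2 - 32)) / 64, 0, 1)
w_left = 0.5 * (1 + np.cos(np.pi * t))   # 1 on the left, rolls off to 0
w_right = 1.0 - w_left

# By linearity, grid space is the sum of the independently transformed facets.
grid_full   = np.fft.fft(image)
grid_facets = np.fft.fft(w_left * image) + np.fft.fft(w_right * image)
assert np.allclose(grid_full, grid_facets)
```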