
A compression scheme for radio data in high performance computing

Posted by: Kiyoshi Masui
Publication date: 2015
Research field: Physics
Paper language: English





We present a procedure for efficiently compressing astronomical radio data for high performance applications. Integrated, post-correlation data are first passed through a nearly lossless rounding step which compares the precision of the data to a generalized and calibration-independent form of the radiometer equation. This allows the numerical precision to be reduced in a way that has an insignificant impact on the data. The newly developed Bitshuffle lossless compression algorithm is subsequently applied. When the algorithm is used in conjunction with the HDF5 library and data format, data produced by the CHIME Pathfinder telescope are compressed to 28% of their original size and decompression throughputs in excess of 1 GB/s are obtained on a single core.
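
A minimal sketch of the two stages described above, assuming (time, frequency) visibilities with a per-sample noise estimate from the radiometer equation. The filter constants bitshuffle.h5.H5FILTER and H5_COMPRESS_LZ4 and the compression_opts layout follow the bitshuffle Python package's documented usage, but the array shapes, noise values, and file name are illustrative assumptions, not values from the paper:

import numpy as np
import h5py
import bitshuffle.h5  # importing this module registers the Bitshuffle HDF5 filter

def round_to_noise(data, noise_sigma, fraction=0.1):
    # Nearly lossless rounding: quantize each sample to a power-of-two
    # granularity that is a small fraction of its thermal-noise level, so the
    # added quantization noise is negligible while the low-order bits become
    # highly compressible.
    granularity = 2.0 ** np.floor(np.log2(fraction * noise_sigma))
    return granularity * np.round(data / granularity)

# Illustrative data: (time, frequency) visibilities plus a radiometer-equation
# noise estimate for each sample.
vis = np.random.normal(size=(1024, 4096)).astype(np.float32)
noise_sigma = np.full_like(vis, 1.0)

rounded = round_to_noise(vis, noise_sigma)

with h5py.File("vis.h5", "w") as f:
    f.create_dataset(
        "vis",
        data=rounded,
        chunks=(128, 4096),
        compression=bitshuffle.h5.H5FILTER,
        compression_opts=(0, bitshuffle.h5.H5_COMPRESS_LZ4),  # 0 = automatic block size
    )
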




Read also

A large-N correlator that makes use of Field Programmable Gate Arrays and Graphics Processing Units has been deployed as the digital signal processing system for the Long Wavelength Array station at Owens Valley Radio Observatory (LWA-OV), to enable the Large Aperture Experiment to Detect the Dark Ages (LEDA). The system samples a ~100 MHz baseband and processes signals from 512 antennas (256 dual polarization) over a ~58 MHz instantaneous sub-band, achieving 16.8 Tops/s and 0.236 Tbit/s throughput in a 9 kW envelope and single-rack footprint. The output data rate is 260 MB/s for 9-second time averaging of cross-power and 1-second averaging of total-power data. At deployment, the LWA-OV correlator was the largest in production in terms of N and the third largest in terms of complex multiply-accumulations, after the Very Large Array and the Atacama Large Millimeter Array. The correlator's comparatively fast development time and low cost establish a practical foundation for the scalability of a modular, heterogeneous computing architecture.
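
As a rough illustration of the "large-N" scaling, the number of correlation products grows quadratically with the number of signal paths; a minimal sketch using the antenna and polarization counts quoted in the abstract (the per-channel product count is a simple combinatorial fact, not a figure from the paper):

# 256 dual-polarization antennas give 512 signal paths; counting auto- and
# cross-correlations gives N*(N+1)/2 products per frequency channel.
n_inputs = 512
n_products = n_inputs * (n_inputs + 1) // 2
print(n_products)  # 131328 correlation products per channel
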
RF system-on-chip (RFSoC) devices provide the potential for implementing a complete radio astronomy receiver on a single board, but the performance of the integrated analogue-to-digital converters is critical. We have evaluated the performance of the data converters in the Xilinx ZU28DR RFSoC, which are 12-bit, 8-fold interleaved converters with a maximum sample speed of 4.096 gigasamples per second (GSPS). We measured the spurious-free dynamic range (SFDR), signal-to-noise and distortion (SINAD), effective number of bits (ENOB), intermodulation distortion (IMD) and cross-talk between adjacent channels over the bandwidth of 2.048 GHz. We both captured data for off-line analysis with floating-point arithmetic and implemented a real-time integer-arithmetic spectrometer on the RFSoC. The performance of the ADCs is sufficient for radio astronomy applications and close to the vendor specifications in most scenarios. We have carried out spectral integrations of up to 100 s and stability tests over tens of hours, and find thermal-noise-limited performance over these timescales.
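
Two of the quoted metrics are directly related through the standard converter formula SINAD = 6.02 * ENOB + 1.76 dB; a minimal sketch of that conversion (the 60 dB input is an illustrative value, not a measurement from the paper):

def enob_from_sinad(sinad_db):
    # Standard relation for an ideal quantizer: ENOB = (SINAD - 1.76) / 6.02.
    return (sinad_db - 1.76) / 6.02

print(enob_from_sinad(60.0))  # a 60 dB SINAD corresponds to roughly 9.7 effective bits
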
S. Bilir, E. Gogus, O. Onal Tas (2015)
We propose a new performance indicator to evaluate the productivity of research institutions through their disseminated scientific papers. The new quality measure includes two principal components: the normalized impact factor of the journal in which the paper was published, and the number of citations received per year since publication. In both components, the scientific impacts are weighted by the contribution of authors from the evaluated institution. As a whole, our new metric, the institutional performance score, takes into account both journal-based impact and article-specific impacts. We apply this new scheme to evaluate the research output performance of Turkish institutions specialized in astronomy and astrophysics in the period 1998-2012. We discuss the implications of the new metric and emphasize its benefits in comparison with other proposed institutional performance indicators.
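
A minimal sketch of how such a score could be assembled, under the assumption (this structure is an illustration only, not the paper's exact definition) that each paper contributes its normalized journal impact factor plus its citations per year, weighted by the institution's share of the author list:

def institutional_score(papers):
    # papers: list of dicts with illustrative keys; the actual weighting used
    # in the paper may differ.
    score = 0.0
    for p in papers:
        author_share = p["inst_authors"] / p["total_authors"]
        score += author_share * (p["normalized_if"] + p["citations"] / p["years_since_pub"])
    return score

papers = [{"inst_authors": 2, "total_authors": 5, "normalized_if": 1.4,
           "citations": 30, "years_since_pub": 6}]
print(institutional_score(papers))
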
Power-spectrum analysis is an important tool providing critical information about a signal. The range of applications extends from communication systems to DNA sequencing. If there is interference present on a transmitted signal, it could be due to a natural cause or superimposed deliberately. In the latter case, its early detection and analysis become important. In such situations, given a small observation window, a quick look at the power spectrum can reveal a great deal of information, including the frequency and source of the interference. In this paper, we present our design of an FPGA-based reconfigurable platform for high-performance power-spectrum analysis. This allows for real-time data acquisition and processing of samples of the incoming signal in a small time frame. The processing consists of computing the power, its average and its peak over a set of input values. This platform sustains simultaneous data streams on each of the four input channels.
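
A minimal sketch of the per-frame processing described (average and peak power over a set of input samples), written in NumPy rather than as the FPGA implementation; the frame length and the random test signal are illustrative assumptions:

import numpy as np

def frame_power_stats(samples, frame_len=1024):
    # Split the stream into frames, compute each frame's power spectrum, then
    # return the average and peak power per frequency bin across frames.
    n_frames = len(samples) // frame_len
    frames = samples[: n_frames * frame_len].reshape(n_frames, frame_len)
    spectra = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    return spectra.mean(axis=0), spectra.max(axis=0)

avg_power, peak_power = frame_power_stats(np.random.normal(size=65536))
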
With the upcoming generation of telescopes, cluster-scale strong gravitational lenses will act as an increasingly relevant probe of cosmology and dark matter. The better-resolved data produced by current and future facilities require faster and more efficient lens-modeling software. Consequently, we present Lenstool-HPC, a strong gravitational lens modeling and map generation tool based on High Performance Computing (HPC) techniques and the renowned Lenstool software. We also showcase the HPC concepts astronomers need in order to increase computation speed through massively parallel execution on supercomputers. Lenstool-HPC was developed using lens-modeling algorithms with high amounts of parallelism. Each algorithm was implemented as a highly optimised CPU, GPU and hybrid CPU-GPU version. The software was deployed and tested on the Piz Daint cluster of the Swiss National Supercomputing Centre (CSCS). Lenstool-HPC's perfectly parallel lens map generation and derivative computation achieve a factor-30 speed-up using only one GPU compared to Lenstool. Lenstool-HPC's hybrid lens-model fit generation, tested at Hubble Space Telescope precision, is scalable up to 200 CPU-GPU nodes and is faster than Lenstool using only 4 CPU-GPU nodes.
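
The "perfectly parallel" character of map generation comes from each pixel's deflection being independent of every other pixel; a minimal sketch for a single singular isothermal sphere lens (the Einstein radius, field of view and lens model are illustrative assumptions and not Lenstool-HPC's API):

import numpy as np

def sis_deflection_map(n_pix=2048, fov=100.0, theta_e=10.0):
    # Build a pixel grid (arcsec) and evaluate the SIS deflection at every
    # pixel; each pixel is independent, so this loop-free evaluation maps
    # directly onto massively parallel (GPU) execution.
    x = np.linspace(-fov / 2, fov / 2, n_pix)
    xx, yy = np.meshgrid(x, x)
    r = np.hypot(xx, yy)
    r[r == 0] = 1e-12  # guard against division by zero at the lens centre
    return theta_e * xx / r, theta_e * yy / r

alpha_x, alpha_y = sis_deflection_map()
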