
Fireball streak detection with minimal CPU processing requirements for the Desert Fireball Network data processing pipeline

Added by Martin Towner
Publication date: 2019
Field: Physics
Language: English





The detection of fireball streaks in astronomical imagery can be carried out by a variety of methods. The Desert Fireball Network (DFN) uses a network of cameras to track and triangulate incoming fireballs in order to recover meteorites with known orbits. Fireball detection is done on-camera, but due to the design constraints imposed by remote deployment, the cameras are limited in processing power and time. We describe the processing software used for fireball detection under these constrained circumstances. A cascading approach was implemented, whereby computationally simple filters are used to discard uninteresting portions of the images, allowing more computationally expensive analysis of the remainder. This allows a full night's worth of data, over one thousand 36-megapixel images, to be processed each day using a low-power single-board computer. The algorithms chosen give a single-camera detection rate for large fireballs of better than 96 percent when compared with manual inspection, although significant numbers of false positives are generated. By ensuring that each fireball has multiple chances of double-station detection, the overall network detection rate for triangulated large fireballs is estimated to be better than 99.8 percent.
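To make the cascading idea concrete, here is a minimal sketch in Python/NumPy of the general pattern the abstract describes: a cheap per-tile test over the whole frame, followed by an expensive streak test on only the tiles that survive. The tile size, thresholds, and both test functions are illustrative assumptions, not the DFN's actual algorithms.

```python
import numpy as np

TILE = 256  # illustrative tile size; the actual DFN tiling may differ

def cheap_stage(tile, prev_tile, threshold=8.0):
    """First-pass filter: mean absolute difference against the previous
    exposure. One subtraction per pixel, cheap enough to run everywhere."""
    diff = np.abs(tile.astype(np.float32) - prev_tile.astype(np.float32))
    return diff.mean() > threshold

def expensive_stage(diff_tile, min_pixels=10, elongation=20.0):
    """Second-pass check, run only on surviving tiles: is the bright residue
    elongated like a streak? Approximated here by the anisotropy of the
    brightest pixels' coordinate cloud; a real detector might fit a line
    or use a Hough transform instead."""
    ys, xs = np.nonzero(diff_tile > np.percentile(diff_tile, 99.5))
    if xs.size < min_pixels:
        return False
    evals = np.linalg.eigvalsh(np.cov(np.vstack([xs, ys])))
    return evals[1] > elongation * max(evals[0], 1e-6)

def detect(image, previous):
    """Cascade: run the cheap filter over every tile, the expensive one
    only on the tiles that pass."""
    candidates = []
    for y in range(0, image.shape[0] - TILE + 1, TILE):
        for x in range(0, image.shape[1] - TILE + 1, TILE):
            tile = image[y:y+TILE, x:x+TILE]
            prev = previous[y:y+TILE, x:x+TILE]
            if cheap_stage(tile, prev):
                diff = np.abs(tile.astype(np.float32) - prev.astype(np.float32))
                if expensive_stage(diff):
                    candidates.append((x, y))
    return candidates
```

The economics of the cascade come from the asymmetry: the one-operation-per-pixel difference runs over all ~36 megapixels, while the eigen-decomposition runs only on the handful of tiles that pass the first cut.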



Related research


Context. The HIFI instrument on the Herschel Space Observatory performed over 9100 astronomical observations, almost 900 of which were calibration observations, in the course of the nearly four-year Herschel mission. The data from each observation had to be converted from raw telemetry into calibrated products for inclusion in the Herschel Science Archive. Aims. The HIFI pipeline was designed to provide robust conversion from raw telemetry into calibrated data throughout all phases of the HIFI mission, supporting pre-launch laboratory testing as well as routine mission operations. Methods. A modular software design allowed components to be easily added, removed, amended, and/or extended as the understanding of the HIFI data developed during and after mission operations. Results. The HIFI pipeline processed data from all HIFI observing modes within the Herschel automated processing environment as well as within an interactive environment. The same software can be used by the general astronomical community to reprocess any standard HIFI observation. The pipeline also recorded the consistency of processing results and provided automated quality reports. Many pipeline modules had been in use since the HIFI pre-launch instrument-level testing. Conclusions. Processing in steps facilitated data analysis to discover and address instrument artefacts and uncertainties. The availability of the same pipeline components from pre-launch throughout the mission made for well-understood, tested, and stable processing. A smooth transition from one phase to the next significantly enhanced processing reliability and robustness.
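The modular design described above follows a common pattern: an ordered chain of named steps that can be added, removed, or swapped without touching the rest of the chain. The sketch below shows that pattern generically; the names and structure are illustrative assumptions, not the HIFI pipeline's actual API.

```python
from typing import Callable, Dict, List, Tuple

# A pipeline step takes a product dictionary and returns an updated one.
Step = Callable[[Dict], Dict]

class ModularPipeline:
    """Ordered, named steps that can be added, removed, or replaced in
    isolation as understanding of the data improves (illustrative sketch)."""

    def __init__(self) -> None:
        self.steps: List[Tuple[str, Step]] = []

    def add(self, name: str, step: Step) -> "ModularPipeline":
        self.steps.append((name, step))
        return self

    def remove(self, name: str) -> "ModularPipeline":
        self.steps = [(n, s) for n, s in self.steps if n != name]
        return self

    def run(self, product: Dict) -> Dict:
        for name, step in self.steps:
            product = step(product)
            # Record provenance, in the spirit of automated quality reports.
            product.setdefault("history", []).append(name)
        return product

# Usage: the same chain can serve lab testing and flight operations.
pipeline = (ModularPipeline()
            .add("decompress", lambda p: {**p, "frames": "decompressed"})
            .add("calibrate", lambda p: {**p, "flux": "calibrated"}))
result = pipeline.run({"telemetry": "raw"})
```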
Shifan Zuo, Jixia Li, Yichao Li (2020)
The Tianlai project is a 21cm intensity mapping experiment aimed at detecting dark energy by measuring the baryon acoustic oscillation (BAO) features in the large-scale structure power spectrum. This experiment provides an opportunity to test data processing methods for cosmological 21cm signal extraction, which is still a great challenge in current radio astronomy research. The 21cm signal is much weaker than the foregrounds and is easily affected by imperfections in the instrumental responses. Furthermore, processing the large volumes of interferometer data poses a practical challenge. We have developed a data processing pipeline software called tlpipe to process the drift scan survey data from the Tianlai experiment. It performs offline data processing tasks such as radio frequency interference (RFI) flagging, array calibration, binning, and map-making. It also includes utility functions needed for data analysis, such as data selection, transformation, and visualization. A number of new algorithms are implemented, for example the eigenvector decomposition method for array calibration and Tikhonov regularization for $m$-mode analysis. In this paper we describe the design and implementation of tlpipe and illustrate its functions with some analysis of real data. Finally, we outline directions for future development of this publicly available code.
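The Tikhonov-regularized step can be written compactly: for a linear model $v = Am + n$ relating measured visibilities $v$ to sky modes $m$, the regularized estimate is $\hat{m} = (A^{H}A + \lambda I)^{-1} A^{H} v$. The sketch below assumes a dense design matrix and generic shapes; it illustrates the regularization itself, not tlpipe's actual implementation.

```python
import numpy as np

def tikhonov_solve(A, v, lam):
    """Minimize ||A m - v||^2 + lam * ||m||^2 via the normal equations.
    A: (n_measurements, n_modes) complex design matrix; v: visibility data.
    The ridge term lam * I damps poorly constrained modes that an
    unregularized least-squares inverse would amplify."""
    AH = A.conj().T
    return np.linalg.solve(AH @ A + lam * np.eye(A.shape[1]), AH @ v)

# Toy usage with random data (shapes only; real m-mode matrices come from
# the telescope's beam transfer functions):
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 20)) + 1j * rng.standard_normal((100, 20))
v = A @ rng.standard_normal(20) + 0.01 * rng.standard_normal(100)
m_hat = tikhonov_solve(A, v, lam=1.0)
```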
We describe the processing of the PHANGS-ALMA survey and present the PHANGS-ALMA pipeline, a public software package that processes calibrated interferometric and total power data into science-ready data products. PHANGS-ALMA is a large, high-resolution survey of CO J=2-1 emission from nearby galaxies. The observations combine ALMA's main 12-m array, the 7-m array, and total power observations, and use mosaics of dozens to hundreds of individual pointings. We describe the processing of the u-v data, imaging and deconvolution, linear mosaicking, combining interferometer and total power data, noise estimation, masking, data product creation, and quality assurance. Our pipeline has a general design and can also be applied to VLA and ALMA observations of other spectral lines and continuum emission. We highlight our recipe for deconvolution of complex spectral line observations, which combines multiscale clean, single-scale clean, and automatic mask generation in a way that appears robust and effective. We also emphasize our two-track approach to masking and data product creation. We construct one set of broadly masked data products, which have high completeness but significant contamination by noise, and another set of strictly masked data products, which have high confidence but exclude faint, low signal-to-noise emission. Our quality assurance tests, supported by simulations, demonstrate that 12-m+7-m deconvolved data recover a total flux that is significantly closer to the total power flux than the 7-m deconvolved data alone. In the appendices, we measure the stability of the ALMA total power calibration in PHANGS-ALMA and test the performance of popular short-spacing correction algorithms.
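The two-track idea can be illustrated with a hysteresis-style masking sketch: a strict mask from a high signal-to-noise cut, and a broad mask that keeps lower-S/N voxels only where they connect to a strict core. The thresholds and the connectivity recipe here are illustrative assumptions, not the PHANGS-ALMA pipeline's actual parameters.

```python
import numpy as np
from scipy import ndimage

def two_track_masks(snr_cube, strict_cut=4.0, broad_cut=2.0):
    """Build two mask tracks from a signal-to-noise cube.
    strict: high confidence, excludes faint emission (snr > strict_cut).
    broad: high completeness, grows each strict core out to broad_cut,
    at the price of more contamination by noise."""
    strict = snr_cube > strict_cut
    low = snr_cube > broad_cut           # strict is a subset of low
    labels, _ = ndimage.label(low)       # connected low-S/N regions
    cores = np.unique(labels[strict])    # regions containing a strict core
    broad = np.isin(labels, cores[cores > 0])
    return strict, broad
```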
The Earth is impacted by 35-40 metre-scale objects every year. These meteoroids are the low-mass end of impactors that can do damage on the ground. Despite this, they are very poorly surveyed and characterised: too infrequent for ground-based fireball observation efforts, and too small to be efficiently detected by NEO telescopic surveys whilst still in interplanetary space. We want to evaluate the suitability of different instruments for characterising metre-scale impactors and where they come from. We use data collected over the first three years of operation of the continent-scale Desert Fireball Network and compare the results with other published results as well as with orbital sensors. We find that although the orbital sensors have the advantage of using the entire planet as collecting area, there are several serious problems with the accuracy of the data, notably the reported velocity vector, which is key to deriving an accurate pre-impact orbit and calculating meteorite fall positions. We also outline dynamic range issues that fireball networks face when observing large meteoroid entries.
Processing of raw data from modern astronomical instruments is nowadays often carried out using dedicated software, so-called pipelines, which largely run in automated operation. In this paper we describe the data reduction pipeline of the Multi Unit Spectroscopic Explorer (MUSE) integral field spectrograph operated at ESO's Paranal observatory. This spectrograph is a complex machine: it records data of 1152 separate spatial elements on detectors in its 24 integral field units. Efficiently handling such data requires sophisticated software, a high degree of automation, and parallelization. We describe in detail the algorithms of all processing steps that operate on calibrations and science data, and explain how the raw science data are transformed into calibrated datacubes. We finally check the quality of selected procedures and output data products, and demonstrate that the pipeline provides datacubes ready for scientific analysis.
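Because each of the 24 integral field units can be reduced independently at the calibration stage, the parallelization is a natural map over IFUs. The sketch below shows that pattern with a stand-in worker; it is a generic illustration, not the MUSE pipeline's actual interface.

```python
from concurrent.futures import ProcessPoolExecutor

N_IFU = 24  # MUSE records data in 24 integral field units

def reduce_ifu(ifu_id: int) -> dict:
    """Stand-in for per-IFU reduction (bias subtraction, flat-fielding,
    wavelength calibration). Because the IFUs are independent at this
    stage, each can run on its own core."""
    # ... real work would read and calibrate this IFU's raw CCD frame ...
    return {"ifu": ifu_id, "status": "reduced"}

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        per_ifu_products = list(pool.map(reduce_ifu, range(1, N_IFU + 1)))
    # Downstream steps would merge these into a single calibrated datacube.
```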
