
Observation data pre-processing and scientific data products generation of POLAR

Added by: Zhengheng Li
Publication date: 2019
Field: Physics
Language: English





POLAR is a compact space-borne detector initially designed to measure the polarization of hard X-rays emitted by Gamma-Ray Bursts in the energy range 50-500 keV. The instrument was successfully launched on board the Chinese space laboratory Tiangong-2 (TG-2) on 2016 September 15 and switched on a few days later; from then on, POLAR produced tens of gigabytes of raw detection data in orbit every day, which were transferred to the ground. Before launch, a complete pipeline and the related software were designed and developed to quickly pre-process all the raw POLAR data, both science data and engineering data, and to generate the high-level scientific data products suitable for later science analysis. This pipeline has been in successful use at the POLAR Science Data Center of the Institute of High Energy Physics (IHEP) since POLAR was launched and switched on. This paper presents a detailed introduction to the pipeline and some of its core algorithms.
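
The abstract does not include code, but the pre-processing stage it describes (splitting raw telemetry into packets, separating science from engineering data, and rejecting corrupted frames) can be illustrated with a minimal sketch. Everything below is hypothetical: the packet length, header layout, APID values and checksum scheme are stand-ins, not POLAR's actual telemetry format.

```python
import struct
import zlib  # CRC-32 used here as a stand-in checksum

PACKET_LEN = 1024         # hypothetical fixed packet length in bytes
SCIENCE_APID = 0x50       # hypothetical application IDs for the two
ENGINEERING_APID = 0x51   # packet streams mentioned in the abstract

def split_packets(raw: bytes):
    """Yield fixed-length packets from a raw telemetry dump."""
    for off in range(0, len(raw) - PACKET_LEN + 1, PACKET_LEN):
        yield raw[off:off + PACKET_LEN]

def classify(packet: bytes):
    """Return ('science' | 'engineering' | None, payload), dropping
    packets whose checksum does not match (e.g. transfer errors)."""
    (apid,) = struct.unpack_from(">H", packet, 0)              # 2-byte header
    (stored,) = struct.unpack_from(">I", packet, PACKET_LEN - 4)
    payload = packet[2:PACKET_LEN - 4]
    if zlib.crc32(payload) != stored:
        return None, payload
    if apid == SCIENCE_APID:
        return "science", payload
    if apid == ENGINEERING_APID:
        return "engineering", payload
    return None, payload
```

In a real pipeline the science payloads would then be decoded into event lists and calibrated into the high-level products the abstract refers to.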



Related research

The Zwicky Transient Facility (ZTF) is a new robotic time-domain survey currently in progress using the Palomar 48-inch Schmidt Telescope. ZTF uses a 47 square degree field with a 600 megapixel camera to scan the entire northern visible sky at rates of ~3760 square degrees/hour to median depths of g ~ 20.8 and r ~ 20.6 mag (AB, 5σ in 30 sec). We describe the Science Data System that is housed at IPAC, Caltech. This comprises the data-processing pipelines, alert production system, data archive, and user interfaces for accessing and analyzing the products. The real-time pipeline employs a novel image-differencing algorithm, optimized for the detection of point-source transient events. These events are vetted for reliability using a machine-learned classifier and combined with contextual information to generate data-rich alert packets. The packets become available for distribution typically within 13 minutes (95th percentile) of observation. Detected events are also linked to generate candidate moving-object tracks using a novel algorithm. Objects that move fast enough to streak in the individual exposures are also extracted and vetted. The reconstructed astrometric accuracy per science image with respect to Gaia is typically 45 to 85 milliarcsec. This is the RMS per axis on the sky for sources extracted with photometric S/N >= 10. The derived photometric precision (repeatability) at bright unsaturated fluxes varies between 8 and 25 millimag. Photometric calibration accuracy with respect to Pan-STARRS1 is generally better than 2%. The products support a broad range of scientific applications: fast and young supernovae, rare flux transients, variable stars, eclipsing binaries, variability from active galactic nuclei, counterparts to gravitational wave sources, a more complete census of Type Ia supernovae, and Solar System objects.
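
The abstract names the image-differencing algorithm but does not reproduce it; the sketch below is a deliberately simplified stand-in for difference imaging in general, not ZTF's actual method: blur the reference as a crude PSF match, subtract, and keep pixel groups above a robust noise threshold.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, label

def difference_candidates(science, reference, match_sigma=1.5, nsigma=5.0):
    """Toy difference imaging: Gaussian-blur the reference as a crude
    PSF match, subtract, and flag pixel groups above nsigma times a
    robust (MAD-based) noise estimate of the difference image."""
    diff = science - gaussian_filter(reference, match_sigma)
    noise = 1.4826 * np.median(np.abs(diff - np.median(diff)))
    labels, n = label(diff > nsigma * noise)   # connect adjacent pixels
    centers = [np.argwhere(labels == i).mean(axis=0) for i in range(1, n + 1)]
    return diff, centers
```

A production system would fit a spatially varying matching kernel and vet each candidate, as the abstract describes, rather than thresholding raw pixels.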
The high beam current and sub-angstrom resolution of aberration-corrected scanning transmission electron microscopes have enabled electron energy loss spectroscopic (EELS) mapping with atomic resolution. These spectral maps are often dose-limited and spatially oversampled, leading to low counts per channel, and are thus highly sensitive to errors in background estimation. However, by taking advantage of redundancy in the dataset, one can improve background estimation and increase chemical sensitivity. We consider two such approaches, a linear combination of power laws and local background averaging, that reduce background error and improve signal extraction. Principal components analysis (PCA) can also be used to analyze spectrum images, but the poor peak-to-background ratio in EELS can lead to serious artifacts if raw EELS data are PCA filtered. We identify common artifacts and discuss alternative approaches. These algorithms are implemented within the Cornell Spectrum Imager, an open-source software package for spectroscopic analysis.
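
As an illustration of the background-sensitivity problem the abstract describes, the standard single power-law fit looks like the sketch below; the paper's linear-combination-of-power-laws and local-averaging variants refine this baseline. The fit window values in the usage comment are hypothetical.

```python
import numpy as np

def powerlaw_background(energy, counts, fit_lo, fit_hi):
    """Fit A * E**(-r) over the pre-edge window [fit_lo, fit_hi] by
    linear regression in log-log space, then extrapolate it across the
    whole energy axis so it can be subtracted under the edge."""
    win = (energy >= fit_lo) & (energy <= fit_hi) & (counts > 0)
    slope, intercept = np.polyfit(np.log(energy[win]), np.log(counts[win]), 1)
    return np.exp(intercept) * energy ** slope   # slope = -r

# Hypothetical usage: fit a 400-450 eV pre-edge window, then subtract.
# signal = counts - powerlaw_background(energy, counts, 400.0, 450.0)
```

As we read the abstract, local background averaging amounts to performing this fit on spatially averaged neighboring spectra, so the low-count fit window is less noisy.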
Scientists are drawn to synchrotrons and accelerator-based light sources because of their brightness, coherence and flux. The rate of improvement in brightness and detector technology has outpaced the Moore's law growth seen for computers, networks, and storage, and is enabling novel observations and discoveries with faster frame rates, larger fields of view, higher resolution, and higher dimensionality. Here we present an integrated software/algorithmic framework designed to capitalize on high-throughput experiments, and describe the streamlined processing pipeline of ptychography data analysis. The pipeline provides throughput, compression, and resolution, as well as rapid feedback to the microscope operators.
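
The abstract describes the pipeline's throughput and feedback properties rather than the reconstruction math; for context, the inner loop of many ptychography codes is an ePIE-style update like the generic textbook sketch below, which is not necessarily what this pipeline implements.

```python
import numpy as np

def epie_update(obj, probe, measured_amp, pos, alpha=1.0):
    """One generic ePIE-style step at scan position pos = (row, col):
    enforce the measured diffraction amplitude in Fourier space, then
    correct the illuminated object patch."""
    r, c = pos
    h, w = probe.shape
    patch = obj[r:r + h, c:c + w]
    exit_wave = patch * probe
    fw = np.fft.fft2(exit_wave)
    fw = measured_amp * np.exp(1j * np.angle(fw))   # amplitude constraint
    revised = np.fft.ifft2(fw)
    obj[r:r + h, c:c + w] = patch + alpha * np.conj(probe) * (
        revised - exit_wave) / np.abs(probe).max() ** 2
    return obj
```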
We present CosmoHub (https://cosmohub.pic.es), a web application based on Hadoop to perform interactive exploration and distribution of massive cosmological datasets. Recent cosmology seeks to unveil the nature of both dark matter and dark energy by mapping the large-scale structure of the Universe, through the analysis of massive amounts of astronomical data that have been growing over the last (and future) decades with the digitization and automation of the experimental techniques. CosmoHub, hosted and developed at the Port d'Informació Científica (PIC), provides support to a worldwide community of scientists without requiring the end user to know any Structured Query Language (SQL). It serves data of several large international collaborations such as the Euclid space mission, the Dark Energy Survey (DES), the Physics of the Accelerating Universe Survey (PAUS) and the MareNostrum Institut de Ciències de l'Espai (MICE) numerical simulations. While originally developed as a PostgreSQL relational database web frontend, this work describes the current version of CosmoHub, built on top of Apache Hive, which facilitates scalable reading, writing and managing of huge datasets. As CosmoHub's datasets are seldom modified, Hive is a better fit. Over 60 TiB of catalogued information and $50 \times 10^9$ astronomical objects can be interactively explored using an integrated visualization tool which includes 1D histogram and 2D heatmap plots. In our current implementation, online exploration of datasets of $10^9$ objects can be done on a timescale of tens of seconds. Users can also download customized subsets of data in standard formats generated in a few minutes.
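
The 1D histograms CosmoHub serves interactively boil down to a fixed-width bucketing aggregation that a Hive backend can push down to the data; a minimal local equivalent of that aggregation (names hypothetical) is:

```python
import numpy as np

def histogram_1d(values, lo, hi, nbins):
    """Fixed-width binning, the same aggregation a SQL backend could
    express as GROUP BY floor((x - lo) / width); values outside
    [lo, hi) are dropped rather than clipped."""
    width = (hi - lo) / nbins
    idx = np.floor((values - lo) / width).astype(int)
    idx = idx[(idx >= 0) & (idx < nbins)]
    return np.bincount(idx, minlength=nbins)
```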
The Mars Odyssey Neutron Spectrometer (MONS), designed and built by Los Alamos National Laboratory, has been in excellent health, operating from February 2002 to the present. MONS measures the neutron leakage albedo from galactic cosmic ray bombardment of Mars. These signals can indicate the presence of near-surface water deposits on Mars, and can also be used to study properties of the seasonal polar CO$_2$ ice caps. This work outlines a new analysis of the MONS data that results in new and extended time-series maps of MONS thermal and epithermal neutron data. The new data are compared to previous publications on the MONS instrument. We then present preliminary results on the inter-annual variability in the polar regions of Mars, based on 8 Mars years of MONS data from the new dataset.
