
Multi-component Decomposition of Astronomical Spectra by Compressed Sensing

Publication date: 2019
Field: Physics
Language: English





The signal measured by an astronomical spectrometer may be due to radiation from a multi-component mixture of plasmas with a range of physical properties (e.g. temperature, Doppler velocity). Confusion between multiple components may be exacerbated if the spectrometer sensor is illuminated by overlapping spectra dispersed from different slits, with each slit being exposed to radiation from a different portion of an extended astrophysical object. We use a compressed sensing method to robustly retrieve the different components. This method can be adopted for a variety of spectrometer configurations, including single-slit, multi-slit (e.g., the proposed MUlti-slit Solar Explorer mission; MUSE) and slot spectrometers (which produce overlappograms).
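The retrieval step can be illustrated with a minimal sketch: a dictionary whose columns are candidate line profiles (e.g., one per Doppler shift) and a simple iterative soft-thresholding (ISTA) solver that recovers a sparse mixture from a blended spectrum. All names and parameters below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Toy sketch: each column of D is a candidate line profile (e.g. one
# Doppler-shifted component); the blended spectrum y mixes two of them.
# Sparse recovery via ISTA for min_x ||y - D x||^2 / 2 + lam * ||x||_1.
# D, lam, and the component indices are illustrative assumptions.

rng = np.random.default_rng(0)
wave = np.linspace(-5.0, 5.0, 200)              # wavelength grid
centers = np.linspace(-4.0, 4.0, 11)            # candidate line centers
D = np.exp(-0.5 * ((wave[:, None] - centers[None, :]) / 0.3) ** 2)
D /= np.linalg.norm(D, axis=0)                  # unit-norm dictionary columns

x_true = np.zeros(11)
x_true[2], x_true[8] = 1.0, 0.6                 # two plasma components
y = D @ x_true + 0.01 * rng.standard_normal(wave.size)

def ista(D, y, lam=0.02, n_iter=500):
    """Iterative shrinkage-thresholding for the l1-regularized fit."""
    L = np.linalg.norm(D, 2) ** 2               # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        z = x - D.T @ (D @ x - y) / L           # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

x_hat = ista(D, y)
print(np.flatnonzero(np.abs(x_hat) > 0.1))      # indices of detected components
```

The same structure carries over to multi-slit or slot configurations: the dictionary simply grows to include profiles from every slit that can illuminate a given sensor pixel.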




Related Research

Multiline techniques assuming similar line profiles have become a standard tool in stellar astronomy for increasing the signal-to-noise ratio (SNR) of spectropolarimetric measurements. However, because they rely on the widely used weak-field approximation, their benefits could not so far be exploited for solar observations, where a large variety of Stokes profiles emerge from local magnetic fields and measuring weak fields in the quiet Sun remains a challenge. The method presented here permits us to analyze many lines with arbitrary Zeeman splitting and to use Stokes IQUV spectra simultaneously to determine a common line profile with the SNR increased by orders of magnitude. The latter provides a valuable constraint for determining field strengths separately for each contributing absorber. This method, Zeeman component decomposition (ZCD), extends our recently developed technique of Nonlinear Deconvolution with Deblending (NDD; Sennhauser et al. 2009), which accounts for the nonlinearity in blended profiles. Equipped with these abilities, ZCD is a powerful tool for further increasing the informative value of high-precision polarimetric observations.
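The SNR gain from combining many lines can be illustrated with a toy weighted least-squares estimate of a common profile. This is a sketch of the general multiline idea only, not the ZCD algorithm; all quantities below (grid, weights, noise level) are invented for the demonstration.

```python
import numpy as np

# Illustrative sketch: if N lines share a common profile Z(v), each
# scaled by a known line weight w_i, a least-squares estimate of Z from
# all lines at once beats a single line's noise by roughly
# sqrt(sum(w_i^2)) / max|w_i| (about sqrt(N) for equal weights).

rng = np.random.default_rng(1)
v = np.linspace(-20.0, 20.0, 101)                 # velocity grid (km/s)
Z = 1.0 - 0.5 * np.exp(-0.5 * (v / 5.0) ** 2)     # common line profile
n_lines, sigma = 100, 0.05                        # 100 lines, per-pixel noise

w = rng.uniform(0.5, 1.0, n_lines)                # line depths (weights)
obs = 1.0 + w[:, None] * (Z - 1.0)[None, :]       # each line = scaled profile
obs += sigma * rng.standard_normal(obs.shape)

# weighted least squares for the common residual profile (Z - 1)
resid_hat = (w @ (obs - 1.0)) / np.sum(w ** 2)
Z_hat = 1.0 + resid_hat

single_err = np.std(obs[0] - (1.0 + w[0] * (Z - 1.0)))   # noise in one line
multi_err = np.std(Z_hat - Z)                            # noise in the stack
print(single_err / multi_err)                            # approximate SNR gain
```

ZCD itself must additionally handle arbitrary Zeeman splitting and nonlinear blending, which is what distinguishes it from this linear toy model.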
We present a two-dimensional multi-component photometric decomposition of 404 galaxies from the CALIFA Data Release 3. They comprise all galaxies in the final CALIFA data release with no clear signs of interaction and that are not strongly inclined. Galaxies are modelled in the g, r, and i SDSS images including, when appropriate, a nuclear point source, bulge, bar, and an exponential or broken disc component. We use a human-supervised approach to determine the optimal number of structures to be included in the fit. The dataset, including the photometric parameters of the CALIFA sample, is released together with statistical errors and a visual analysis of the quality of each fit. The analysis of the photometric components reveals a clear segregation of the structural composition of galaxies with stellar mass. At high masses (log(Mstar/Msun)>11), the galaxy population is dominated by galaxies modelled with a single Sersic or a bulge+disc with a bulge-to-total (B/T) luminosity ratio B/T>0.2. At intermediate masses (9.5<log(Mstar/Msun)<11), galaxies described with bulge+disc but B/T<0.2 are preponderant, whereas, at the low-mass end (log(Mstar/Msun)<9.5), the prevailing population consists of galaxies modelled with either pure discs or nuclear point sources+discs (i.e., no discernible bulge). We find that 57% of the volume-corrected sample of disc galaxies in the CALIFA sample host a bar. This bar fraction shows a significant drop with increasing galaxy mass in the range 9.5<log(Mstar/Msun)<11.5. The analysis of the extended multi-component radial profiles yields a volume-corrected distribution of 62%, 28%, and 10% for the so-called Type I, Type II, and Type III disc profiles, respectively. These fractions are in discordance with previous findings. We argue that the different methodologies used to detect the breaks are the main cause for these differences.
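For readers unfamiliar with the ingredients of such fits, here is a minimal sketch of a bulge+disc model (a Sersic bulge plus an exponential disc) and its bulge-to-total luminosity ratio. The parameter values are illustrative, not CALIFA fits.

```python
import numpy as np

# Hypothetical bulge+disc sketch: a Sersic bulge plus an exponential
# disc, with B/T computed by numerically integrating each axisymmetric
# 1-D profile over 2*pi*r dr. All parameter values are made up.

def sersic(r, Ie, Re, n):
    """Sersic surface-brightness profile; b_n approximation valid for n > 0.36."""
    b = 2.0 * n - 1.0 / 3.0 + 4.0 / (405.0 * n)   # Ciotti & Bertin approximation
    return Ie * np.exp(-b * ((r / Re) ** (1.0 / n) - 1.0))

def exp_disc(r, I0, h):
    """Exponential disc surface-brightness profile."""
    return I0 * np.exp(-r / h)

def radial_flux(r, I):
    """Total flux of an axisymmetric profile: trapezoidal integral of 2*pi*r*I(r)."""
    f = 2.0 * np.pi * r * I
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r)))

r = np.linspace(1e-3, 100.0, 200_000)             # radius grid (arcsec)
bulge = sersic(r, Ie=1.0, Re=2.0, n=4.0)          # de Vaucouleurs-like bulge
disc = exp_disc(r, I0=0.5, h=8.0)

flux_b = radial_flux(r, bulge)
flux_d = radial_flux(r, disc)
bt = flux_b / (flux_b + flux_d)
print(f"B/T = {bt:.2f}")
```

A real 2-D decomposition fits such components (plus bars and point sources) to images with ellipticity, position angle, and PSF convolution, but the B/T statistic quoted above has exactly this meaning.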
This paper shows that compressed sensing realized by means of regularized deconvolution and the Finite Isotropic Wavelet Transform is effective and reliable in hard X-ray solar imaging. The method utilizes the Finite Isotropic Wavelet Transform with the Meyer function as the mother wavelet. Further, compressed sensing is realized by optimizing a sparsity-promoting regularized objective function by means of the Fast Iterative Shrinkage-Thresholding Algorithm (FISTA). Finally, the regularization parameter is selected by means of the Miller criterion. The method is applied to both synthetic data mimicking Spectrometer/Telescope for Imaging X-rays (STIX) measurements and experimental observations provided by the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI). The performance of the method is compared with the results provided by standard visibility-based reconstruction methods. The results show that the application of the sparsity constraint and the use of a continuous, isotropic framework for the wavelet transform provide notable spatial accuracy and significantly reduce the ringing effects due to the instrument point spread functions.
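The optimizer can be sketched in isolation: FISTA applied to a sparsity-promoting l1 objective for a toy 1-D deconvolution. The Gaussian blur matrix and all parameters below are stand-ins for the instrument response, not STIX/RHESSI specifics, and no wavelet transform is included in this sketch.

```python
import numpy as np

# FISTA sketch for min_x ||A x - y||^2 / 2 + lam * ||x||_1, the class of
# sparsity-promoting objectives used in the paper. A is a toy Gaussian
# blur standing in for the point spread function; lam and the source
# positions are illustrative assumptions.

rng = np.random.default_rng(2)
n = 128
x_true = np.zeros(n)
x_true[30], x_true[80] = 2.0, 1.0               # two compact sources

pix = np.arange(n)
A = np.exp(-0.5 * ((pix[:, None] - pix[None, :]) / 2.0) ** 2)
A /= A.sum(axis=0)                              # flux-preserving blur columns
y = A @ x_true + 0.005 * rng.standard_normal(n)

def fista(A, y, lam=0.01, n_iter=300):
    """Fast iterative shrinkage-thresholding (Beck-Teboulle style)."""
    L = np.linalg.norm(A, 2) ** 2               # step size from the spectral norm
    x = np.zeros(A.shape[1])
    z, t = x.copy(), 1.0
    for _ in range(n_iter):
        g = z - A.T @ (A @ z - y) / L           # gradient step at extrapolated point
        x_new = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)  # momentum extrapolation
        x, t = x_new, t_new
    return x

x_hat = fista(A, y)
print(np.flatnonzero(x_hat > 0.3))              # recovered source pixels
```

The momentum schedule is what distinguishes FISTA from plain ISTA: it improves the convergence rate from O(1/k) to O(1/k^2) in objective value.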
Helium atom scattering (HAS) is a well-established technique for examining the surface structure and dynamics of materials at atomic-scale resolution. Helium spin-echo spectroscopy, a variant of HAS, opens up the possibility of compressing the data-acquisition process. Compressed sensing (CS) methods demonstrating the compressibility of spin-echo spectra are presented. In addition, wavelet-based CS approximations, founded on a new continuous CS approach, are used to construct continuous spectra that are compatible with variable transformations to the energy/momentum-transfer domain. Moreover, recent developments on structured multilevel sampling, which are empirically and theoretically shown to substantially improve upon state-of-the-art CS techniques, are implemented. These techniques are demonstrated on several examples, including phonon spectra from a gold surface.
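The compressibility claim is easy to illustrate: a smooth synthetic spectrum, transformed with a simple Haar wavelet, is well approximated by a small fraction of its coefficients. This is a minimal sketch with an invented signal, not actual spin-echo data, and Haar is only the simplest possible wavelet choice.

```python
import numpy as np

# Compressibility sketch: Haar-transform a smooth synthetic spectrum,
# keep only the largest 10% of coefficients, and check that the inverse
# transform still reproduces the signal to small relative error.

def haar(x):
    """Full multilevel Haar transform (length must be a power of two)."""
    x = x.astype(float).copy()
    n = len(x)
    while n > 1:
        a = (x[0:n:2] + x[1:n:2]) / np.sqrt(2.0)   # approximation coefficients
        d = (x[0:n:2] - x[1:n:2]) / np.sqrt(2.0)   # detail coefficients
        x[: n // 2], x[n // 2 : n] = a, d
        n //= 2
    return x

def ihaar(c):
    """Inverse of haar()."""
    c = c.astype(float).copy()
    n, N = 1, len(c)
    while n < N:
        a, d = c[:n].copy(), c[n : 2 * n].copy()
        c[0 : 2 * n : 2] = (a + d) / np.sqrt(2.0)
        c[1 : 2 * n : 2] = (a - d) / np.sqrt(2.0)
        n *= 2
    return c

N = 1024
t = np.linspace(0.0, 1.0, N)
spec = np.exp(-0.5 * ((t - 0.3) / 0.02) ** 2) \
     + 0.6 * np.exp(-0.5 * ((t - 0.7) / 0.05) ** 2)  # toy two-peak spectrum

c = haar(spec)
k = N // 10                                          # keep 10% of coefficients
thresh = np.sort(np.abs(c))[-k]
c_sparse = np.where(np.abs(c) >= thresh, c, 0.0)
rel_err = np.linalg.norm(ihaar(c_sparse) - spec) / np.linalg.norm(spec)
print(f"kept {k}/{N} coefficients, relative error {rel_err:.3e}")
```

It is precisely this concentration of signal energy in few coefficients that CS exploits to reduce the number of spin-echo measurements required.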
We introduce a recursive algorithm for performing compressed sensing on streaming data. The approach consists of a) recursive encoding, where we sample the input stream via overlapping windows and make use of the previous measurement in obtaining the next one, and b) recursive decoding, where the signal estimate from the previous window is used to achieve faster convergence in an iterative optimization scheme applied to decode the new one. To remove estimation bias, a two-step estimation procedure is proposed, comprising support-set detection and signal-amplitude estimation. Estimation accuracy is enhanced by a non-linear voting method and by averaging estimates over multiple windows. We analyze the computational complexity and estimation error, and show that the normalized error variance asymptotically goes to zero for sublinear sparsity. Our simulation results show a speed-up of an order of magnitude over traditional CS, while obtaining significantly lower reconstruction error under mild conditions on the signal magnitudes and the noise level.
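The recursive-decoding idea can be sketched as follows: consecutive overlapping windows of a sparse stream are decoded with ISTA, and the estimate from window t, shifted by the hop size, warm-starts the solver on window t+1. This is an illustrative toy, not the paper's exact algorithm, and all sizes and parameters below are assumptions.

```python
import numpy as np

# Recursive-decoding sketch: warm-start ISTA on window t+1 with the
# shifted estimate from window t. The sensing matrix, window/hop sizes,
# and sparsity level are illustrative assumptions.

rng = np.random.default_rng(3)
N, M, hop = 100, 40, 10                          # window length, measurements, shift
stream = np.zeros(300)
stream[rng.choice(300, 12, replace=False)] = rng.uniform(1.0, 2.0, 12)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # fixed sensing matrix

def ista(Phi, y, x0, lam=0.02, tol=1e-4, max_iter=5000):
    """ISTA with an iterate-change stopping rule; returns estimate and iterations."""
    L = np.linalg.norm(Phi, 2) ** 2
    x = x0.copy()
    for k in range(max_iter):
        g = x - Phi.T @ (Phi @ x - y) / L
        x_new = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)
        if np.linalg.norm(x_new - x) < tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter

# window t: decode from scratch
x1 = stream[0:N]
rec1, _ = ista(Phi, Phi @ x1, np.zeros(N))

# window t+1: warm-start from the previous estimate shifted by `hop`
x2 = stream[hop : hop + N]
warm = np.roll(rec1, -hop)
warm[-hop:] = 0.0                                # newly entered samples are unknown
rec2, it_warm = ista(Phi, Phi @ x2, warm)
rec2_cold, it_cold = ista(Phi, Phi @ x2, np.zeros(N))
print(it_warm, it_cold)                          # warm start should need fewer iterations
```

The paper's full scheme adds the measurement-reuse step on the encoding side, plus debiasing and voting across windows, which this sketch omits.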
