
An efficient and flexible Abel-inversion method for noisy data

Added by Igor Antokhin
Publication date: 2016
Fields: Physics
Language: English





We propose an efficient and flexible method for solving the Abel integral equation of the first kind, which appears frequently in astrophysics, physics, chemistry, and the applied sciences. This equation represents an ill-posed problem, so solving it requires some kind of regularization. Our method is based on solving the equation on a so-called compact set of functions and/or using Tikhonov's regularization. The a priori constraints on the unknown function that define a compact set are very loose and can be set using simple physical considerations. Tikhonov's regularization by itself does not require any explicit a priori constraints on the unknown function and can be used independently of such constraints or in combination with them. Various target degrees of smoothness of the unknown function may be set, as required by the problem at hand. The advantage of the method, apart from its flexibility, is that it gives uniform convergence of the approximate solution to the exact solution as the errors of the input data tend to zero. The method is illustrated on several simulated models with known solutions. An example of an astrophysical application of the method is also given.
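
As a rough illustration of the Tikhonov branch of such an inversion (the compact-set branch, which would add constraints such as monotonicity or non-negativity, is not shown), the following sketch discretizes the forward Abel transform into a matrix and stabilizes the inversion with a second-derivative penalty. This is a minimal sketch under assumed conventions, not the authors' code; the grid, the test profile, and the regularization parameter are all illustrative.

```python
# Minimal sketch of Abel inversion with Tikhonov regularization.
# Forward Abel transform:  g(y) = 2 * int_y^R f(r) r / sqrt(r^2 - y^2) dr,
# discretized into a matrix A; the inversion is stabilized by penalizing
# the discrete second derivative of f.
import numpy as np

def abel_matrix(r):
    """Crude rectangle-rule discretization of the forward Abel operator
    (the singular point r = y is simply skipped)."""
    n = len(r)
    dr = r[1] - r[0]
    A = np.zeros((n, n))
    for i, y in enumerate(r):
        for j in range(i + 1, n):
            rj = r[j]
            A[i, j] = 2.0 * rj / np.sqrt(rj**2 - y**2) * dr
    return A

def tikhonov_invert(A, g, lam):
    """Solve min ||A f - g||^2 + lam ||D2 f||^2, D2 = second differences."""
    n = A.shape[1]
    D2 = np.diff(np.eye(n), 2, axis=0)          # discrete second derivative
    lhs = A.T @ A + lam * D2.T @ D2
    return np.linalg.solve(lhs, A.T @ g)

# Simulated test: recover a Gaussian radial profile from noisy projections.
r = np.linspace(0.0, 1.0, 200)
f_true = np.exp(-(r / 0.3) ** 2)
A = abel_matrix(r)
g = A @ f_true + 0.01 * np.random.randn(len(r))  # noisy "observed" data
f_rec = tikhonov_invert(A, g, lam=1e-3)
print("max reconstruction error:", np.abs(f_rec - f_true).max())
```
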

Related research

We present an elegant method of determining the eigensolutions of the induction and the dynamo equation in a fluid embedded in a vacuum. The magnetic field is expanded in a complete set of functions. The new method is based on the biorthogonality of the adjoint electric current and the vector potential, with an inner product defined by a volume integral over the fluid domain. The advantage of this method is that the velocity and the dynamo coefficients of the induction and the dynamo equation do not have to be differentiated, so even numerically determined tabulated values of the coefficients produce reasonable results. We provide test calculations and compare with published results obtained by the classical treatment based on the biorthogonality of the magnetic field and its adjoint. We especially consider dynamos with mean-field coefficients determined from direct numerical simulations of the geodynamo and compare with initial-value calculations and full MHD simulations.
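
The operator in such problems is generally not self-adjoint, so the expansion rests on biorthogonality between the eigenvectors of the operator and those of its adjoint. The sketch below illustrates that idea on a plain matrix, far simpler than the discretized induction/dynamo equation and not the authors' method: the adjoint eigenvectors supply the inner products that give expansion coefficients without differentiating the expanded field.

```python
# Generic illustration of biorthogonal expansion for a non-self-adjoint operator.
import numpy as np

rng = np.random.default_rng(4)
L = rng.standard_normal((6, 6))              # stand-in for a discretized operator

evals, R = np.linalg.eig(L)                  # right eigenvectors (columns)
evals_adj, S = np.linalg.eig(L.T.conj())     # eigenvectors of the adjoint

# Match each adjoint eigenvector to the conjugate of an eigenvalue of L
order = [np.argmin(np.abs(evals_adj - lam.conj())) for lam in evals]
S = S[:, order]

# Biorthogonality: S^H R is diagonal (up to normalization)
G = S.conj().T @ R
print("off-diagonal leakage:", np.max(np.abs(G - np.diag(np.diag(G)))))

# Expand an arbitrary vector in the right eigenvectors using the adjoint set
b = rng.standard_normal(6)
coeffs = (S.conj().T @ b) / np.diag(G)
print("reconstruction error:", np.linalg.norm(R @ coeffs - b))
```
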
For submillimeter spectroscopy with ground-based single-dish telescopes, removing the noise contribution from the Earth's atmosphere and the instrument is essential. For this purpose, here we propose a new method based on a data-scientific approach. The key technique is a statistical matrix decomposition that automatically separates the signals of astronomical emission lines from the drift-noise components in the fast-sampled (1--10 Hz) time-series spectra obtained by a position-switching (PSW) observation. Because the proposed method does not apply subtraction between two sets of noisy data (i.e., on-source and off-source spectra), it improves the observation sensitivity by a factor of $\sqrt{2}$. It also reduces artificial signals such as baseline ripples on a spectrum, which may further improve the effective sensitivity. We demonstrate this improvement using spectroscopic data of emission lines toward a high-redshift galaxy observed with a 2-mm receiver on the 50-m Large Millimeter Telescope (LMT). Since the proposed method is carried out offline and no additional measurements are required, it offers an instant improvement on spectra already reduced with the conventional method. It also enables efficient deep spectroscopy with future 50-m class large submillimeter single-dish telescopes, where fast PSW observations by a mechanical antenna or mirror drive are difficult to achieve.
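
A hedged sketch of the underlying idea, with a plain truncated SVD standing in for the statistical matrix decomposition actually used in the paper: the fast-sampled time-frequency data matrix is modeled as a low-rank drift term plus a faint line signal plus white noise, the low-rank drift model is subtracted, and only then are the on-source dumps averaged, so no on-minus-off differencing is needed. All array shapes and amplitudes below are made up for illustration.

```python
# Toy separation of a drifting baseline from a faint emission line
# in fast-sampled position-switching spectra via truncated SVD.
import numpy as np

rng = np.random.default_rng(0)
n_time, n_chan = 600, 256            # fast-sampled dumps x spectral channels

# Synthetic data: smooth drifting baseline + weak Gaussian line + white noise
chan = np.arange(n_chan)
drift = np.outer(np.sin(np.linspace(0, 6, n_time)), chan / n_chan) \
        + np.outer(np.linspace(1.0, 1.3, n_time), np.ones(n_chan))
line = 0.05 * np.exp(-0.5 * ((chan - 128) / 4.0) ** 2)        # the signal
on = rng.random(n_time) < 0.5                                  # on-source dumps
data = drift + np.outer(on, line) + 0.02 * rng.standard_normal((n_time, n_chan))

# Model the drift with the first few singular vectors and remove it
U, s, Vt = np.linalg.svd(data, full_matrices=False)
k = 2                                                          # rank of the drift model
drift_model = (U[:, :k] * s[:k]) @ Vt[:k]
cleaned = data - drift_model

# Average the cleaned on-source dumps to get the final spectrum
spectrum = cleaned[on].mean(axis=0)
print("peak channel:", spectrum.argmax())                      # ~128 if the line survives
```
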
We present flame, a pipeline for reducing spectroscopic observations obtained with multi-slit near-infrared and optical instruments. Because of its flexible design, flame can be easily applied to data obtained with a wide variety of spectrographs. The flexibility is due to a modular architecture, which allows changes and customizations to the pipeline and relegates the instrument-specific parts to a single module. At the core of the data reduction is the transformation from observed pixel coordinates (x, y) to rectified coordinates (lambda, gamma). This transformation consists of the polynomial functions lambda(x,y) and gamma(x,y), which are derived from arc or sky emission lines and from slit-edge tracing, respectively. The use of 2D transformations allows one to wavelength-calibrate and rectify the data in just one interpolation step. Furthermore, the gamma(x,y) transformation also includes the spatial misalignment between frames, which can be measured from a reference star observed simultaneously with the science targets. The misalignment can then be fully corrected during the rectification, without having to resample the data further. Sky subtraction can be performed via nodding and/or modeling of the sky spectrum; the combination of the two methods typically yields the best results. We illustrate the pipeline by showing examples of data reduction for a near-infrared instrument (LUCI at the Large Binocular Telescope) and an optical one (LRIS at the Keck telescope).
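
The core of the rectification step, fitting smooth 2D polynomials lambda(x, y) and gamma(x, y) to reference points and then evaluating them over the whole detector, can be sketched with ordinary least squares. This is not flame's implementation; the polynomial degree, the fabricated reference points, and the coefficient values are purely illustrative.

```python
# Sketch of fitting 2D polynomial transformations lambda(x, y), gamma(x, y)
# and evaluating them on every detector pixel, so the frame can be
# wavelength-calibrated and rectified in a single interpolation step.
import numpy as np

def poly2d_design(x, y, deg=3):
    """Design matrix with all monomials x**i * y**j, i + j <= deg."""
    cols = [x**i * y**j for i in range(deg + 1)
                         for j in range(deg + 1 - i)]
    return np.column_stack(cols)

def fit_poly2d(x, y, values, deg=3):
    """Least-squares fit of values(x, y) to a 2D polynomial."""
    coeffs, *_ = np.linalg.lstsq(poly2d_design(x, y, deg), values, rcond=None)
    return coeffs

# Hypothetical reference points: pixel positions of identified arc/sky lines
# (fabricated here from a known transformation just to exercise the fit).
rng = np.random.default_rng(1)
x_ref = rng.uniform(0, 2048, 500)
y_ref = rng.uniform(0, 200, 500)
lam_ref = 1.50 + 2.4e-4 * x_ref + 1e-6 * y_ref + 3e-9 * x_ref * y_ref  # microns
gam_ref = 0.25 * y_ref + 2e-4 * x_ref                                  # arcsec

lam_coeffs = fit_poly2d(x_ref, y_ref, lam_ref)
gam_coeffs = fit_poly2d(x_ref, y_ref, gam_ref)

# Evaluate the fitted transformation for every detector pixel; a single
# interpolation onto a regular (lambda, gamma) grid then rectifies the frame.
xx, yy = np.meshgrid(np.arange(2048.0), np.arange(200.0))
lam_map = (poly2d_design(xx.ravel(), yy.ravel()) @ lam_coeffs).reshape(yy.shape)
gam_map = (poly2d_design(xx.ravel(), yy.ravel()) @ gam_coeffs).reshape(yy.shape)
print(lam_map.shape, gam_map.shape)
```
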
We introduce SoFiA, a flexible software application for the detection and parameterization of sources in 3D spectral-line datasets. SoFiA combines, for the first time in a single piece of software, a set of new source-finding and parameterization algorithms developed on the way to future HI surveys with ASKAP (WALLABY, DINGO) and APERTIF. It is designed to enable the general use of these new algorithms by the community on a broad range of datasets. The key advantages of SoFiA are the ability to: search for line emission on multiple scales to detect 3D sources in a complete and reliable way, taking into account noise-level variations and the presence of artefacts in a data cube; estimate the reliability of individual detections; look for signal in arbitrarily large data cubes using a catalogue of 3D coordinates as a prior; and provide a wide range of source parameters and output products which facilitate further analysis by the user. We highlight the modularity of SoFiA, which makes it a flexible package allowing users to select and apply only the algorithms useful for their data and science questions. This modularity also makes it possible to easily expand SoFiA to include additional methods as they become available. The full SoFiA distribution, including a dedicated graphical user interface, is publicly available for download.
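
The multi-scale search can be pictured as a "smooth-and-clip" loop: smooth the cube with a set of kernels, threshold each smoothed copy at a multiple of its own (robustly estimated) noise, and take the union of the masks. The sketch below is a generic toy version of that idea, not SoFiA's implementation; the kernel sizes, the threshold, and the injected source are arbitrary.

```python
# Toy multi-scale "smooth-and-clip" source finding in a 3D spectral-line cube.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(2)
cube = rng.standard_normal((64, 128, 128))            # (channel, y, x) noise cube
cube[28:36, 60:70, 60:70] += 0.8                      # faint extended "source"

kernels = [(0, 0, 0), (0, 2, 2), (3, 2, 2), (3, 0, 0)]    # (chan, y, x) sigmas
threshold = 5.0                                            # in units of local rms
mask = np.zeros(cube.shape, dtype=bool)

for sig in kernels:
    smoothed = ndimage.gaussian_filter(cube, sigma=sig) if any(sig) else cube
    rms = 1.4826 * np.median(np.abs(smoothed - np.median(smoothed)))  # robust rms
    mask |= np.abs(smoothed) > threshold * rms

# Merge connected voxels into discrete sources and report basic parameters
labels, n_src = ndimage.label(mask)
print("detected sources:", n_src)
for src in ndimage.find_objects(labels):
    print("bounding box (chan, y, x):", src)
```
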
We present an error analysis and further numerical investigations of the Parameterized-Background Data-Weak (PBDW) formulation for variational data assimilation (state estimation) proposed in [Y. Maday, A. T. Patera, J. D. Penn, M. Yano, Int. J. Numer. Meth. Eng., 102(5), 933-965]. The PBDW algorithm is a state estimation method involving reduced models. It aims at approximating an unknown function $u^{\rm true}$ living in a high-dimensional Hilbert space from $M$ measurement observations given in the form $y_m = \ell_m(u^{\rm true}),\ m=1,\dots,M$, where the $\ell_m$ are linear functionals. The method approximates $u^{\rm true}$ with $\hat{u} = \hat{z} + \hat{\eta}$. The \emph{background} $\hat{z}$ belongs to an $N$-dimensional linear space $\mathcal{Z}_N$ built from reduced modelling of a parameterized mathematical model, and the \emph{update} $\hat{\eta}$ belongs to the space $\mathcal{U}_M$ spanned by the Riesz representers of $(\ell_1,\dots,\ell_M)$. When the measurements are noisy, i.e., $y_m = \ell_m(u^{\rm true}) + \epsilon_m$ with $\epsilon_m$ a noise term, the classical PBDW formulation is not robust in the sense that, if $N$ increases, the reconstruction accuracy degrades. In this paper, we propose to address this issue with an extension of the classical formulation, which consists in searching for the background $\hat{z}$ either on the whole of $\mathcal{Z}_N$ in the noise-free case, or on a well-chosen subset $\mathcal{K}_N \subset \mathcal{Z}_N$ in the presence of noise. The restriction to $\mathcal{K}_N$ makes the reconstruction nonlinear and is the key to making the algorithm significantly more robust against noise. We further present an \emph{a priori} error and stability analysis, and we illustrate the efficiency of the approach on several numerical examples.
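
In a finite-dimensional setting with the Euclidean inner product, the classical (noise-free) PBDW estimate reduces to a small saddle-point linear system, since the Riesz representer of an observation functional $\ell_m(u) = q_m^{\top} u$ is simply the vector $q_m$. The sketch below solves that system for a fabricated background space and fabricated sensors; it covers only the classical formulation, not the restricted, nonlinear variant on $\mathcal{K}_N$ proposed in the paper, and all names and dimensions are illustrative.

```python
# Linear-algebra sketch of the classical (noise-free) PBDW state estimate.
import numpy as np

rng = np.random.default_rng(3)
n, N, M = 400, 5, 12                     # state dim, background dim, observations

x = np.linspace(0, 1, n)
Z = np.column_stack([np.sin((k + 1) * np.pi * x) for k in range(N)])   # background space Z_N
Q = np.column_stack([np.exp(-0.5 * ((x - c) / 0.03) ** 2)              # localized "sensors"
                     for c in np.linspace(0.1, 0.9, M)])

# Ground truth: something in Z_N plus a small component outside it
u_true = Z @ np.array([1.0, -0.5, 0.3, 0.0, 0.2]) + 0.05 * np.cos(9 * np.pi * x)
y = Q.T @ u_true                         # noise-free observations y_m = l_m(u_true)

# PBDW saddle-point system:  [A  B] [a]   [y]
#                            [B' 0] [c] = [0],   A = Q'Q, B = Q'Z,
# giving the update eta = Q a in U_M and the background z = Z c in Z_N.
A, B = Q.T @ Q, Q.T @ Z
K = np.block([[A, B], [B.T, np.zeros((N, N))]])
sol = np.linalg.solve(K, np.concatenate([y, np.zeros(N)]))
a, c = sol[:M], sol[M:]
u_hat = Z @ c + Q @ a

print("relative state error:", np.linalg.norm(u_hat - u_true) / np.linalg.norm(u_true))
```
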
