
Precision in high resolution absorption line modelling, analytic Voigt derivatives, and optimisation methods

Added by John Webb
Publication date: 2021
Field: Physics
Language: English





This paper describes the optimisation theory on which VPFIT, a non-linear least-squares program for modelling absorption spectra, is based. Particular attention is paid to precision. Voigt function derivatives have previously been calculated using numerical finite difference approximations. We show how these can instead be computed analytically using Taylor series expansions and look-up tables. We introduce a new optimisation method for an efficient descent path to the best-fit, combining the principles used in both the Gauss-Newton and Levenberg-Marquardt algorithms. A simple practical fix for ill-conditioning is described, a common problem when modelling quasar absorption systems. We also summarise how unbiased modelling depends on using an appropriate information criterion to guard against over- or under-fitting. The methods and the new implementations introduced in this paper are aimed at optimal usage of future data from facilities such as ESPRESSO/VLT and HIRES/ELT, particularly for the most demanding applications such as searches for spacetime variations in fundamental constants and attempts to detect cosmological redshift drift.
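The paper's analytic derivatives are built from Taylor series expansions and look-up tables; as a rough sketch of the general idea of replacing finite-difference derivative approximations with closed-form expressions, the snippet below uses SciPy's Faddeeva function w(z), for which H(a, u) = Re[w(u + ia)] and w'(z) = -2zw(z) + 2i/sqrt(pi). This is a generic illustration, not VPFIT's implementation, and the function name is ours.

```python
# Voigt function H(a, u) = Re[w(u + i*a)] and its closed-form partial
# derivatives, obtained from the identity w'(z) = -2*z*w(z) + 2i/sqrt(pi)
# rather than from finite differences. A generic sketch, not VPFIT's code.
import numpy as np
from scipy.special import wofz

def voigt_and_derivatives(a, u):
    """Return H(a, u) and its analytic partial derivatives dH/du, dH/da."""
    z = u + 1j * a
    w = wofz(z)                    # Faddeeva function w(z)
    K, L = w.real, w.imag          # K = Re[w] = H(a, u), L = Im[w]
    dH_du = -2.0 * (u * K - a * L)                         # Re[w'(z)]
    dH_da = 2.0 * (u * L + a * K) - 2.0 / np.sqrt(np.pi)   # Re[i*w'(z)]
    return K, dH_du, dH_da

# Sanity check against a central finite difference in u:
a, u, h = 0.01, np.linspace(-5.0, 5.0, 11), 1e-6
H, dH_du, dH_da = voigt_and_derivatives(a, u)
fd = (voigt_and_derivatives(a, u + h)[0]
      - voigt_and_derivatives(a, u - h)[0]) / (2.0 * h)
print(np.max(np.abs(dH_du - fd)))  # small; limited by the finite difference
```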



Related research

Pykat is a Python package which extends the popular optical interferometer modelling software Finesse. It provides a more modern and efficient user interface for conducting complex numerical simulations, as well as enabling the use of Python's extensive scientific software ecosystem. In this paper we highlight the relationship between Pykat and Finesse, how it is used, and provide an illustrative example of how it has helped to better understand the characteristics of the current generation of gravitational-wave interferometers.
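A minimal sketch of the workflow described above, assuming the kat()/parse()/run() interface shown in Pykat's documentation; the toy model and detector names here are our own, and exact method names may differ between versions.

```python
import pykat

# A two-node toy model: a 1 W laser, 1 m of space, and a photodiode,
# with the laser power swept from 0 to 10 W over 100 steps.
kat = pykat.finesse.kat()
kat.parse("""
l laser 1.0 0 n0
s space 1.0 n0 n1
pd pdet n1
xaxis laser P lin 0 10 100
""")

out = kat.run()                      # hand the model to Finesse
print(out.x[:3], out["pdet"][:3])    # swept values, detector readings
```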
The advent of the X-ray Free Electron Laser (XFEL) has made it possible to record snapshots of biological entities injected into the X-ray beam before the onset of radiation damage. Algorithmic means must then be used to determine the snapshot orientations and reconstruct the three-dimensional structure of the object. Existing approaches are limited in reconstruction resolution to at best 1/30th of the object diameter, with the computational expense increasing as the eighth power of the ratio of diameter to resolution. We present an approach capable of exploiting object symmetries to recover three-dimensional structure to 1/100th of the object diameter, and thus reconstruct the structure of the satellite tobacco necrosis virus to atomic resolution. Combined with the previously demonstrated capability to operate at ultralow signal, our approach offers the highest reconstruction resolution for XFEL snapshots to date, and provides a potentially powerful alternative route for analysis of data from crystalline and nanocrystalline objects.
Model fitting is possibly the most widespread problem in science. Classical approaches include the use of least-squares fitting procedures and maximum likelihood methods to estimate the values of the parameters in the model. However, in recent years, Bayesian inference tools have gained traction. Usually, Markov chain Monte Carlo (MCMC) methods are applied to inference problems, but they present some disadvantages, particularly when comparing different models fitted to the same dataset. Other Bayesian methods can deal with this issue in a natural and effective way. We have implemented an importance sampling algorithm adapted to Bayesian inference problems in which the power of the noise in the observations is not known a priori. The main advantage of importance sampling is that the model evidence can be derived directly from the so-called importance weights, while MCMC methods demand considerable postprocessing. The use of our adaptive target, adaptive importance sampling (ATAIS) method is shown by inferring, on the one hand, the parameters of a simulated flaring event which includes a damped oscillation and, on the other hand, real data from the Kepler mission. ATAIS includes a novel automatic adaptation of the target distribution. It automatically estimates the variance of the noise in the model. ATAIS admits parallelisation, which decreases the computational run-times notably. We compare our method against a nested sampling method within a model selection problem.
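To make the stated advantage concrete, here is a generic self-normalised importance sampling sketch in which the evidence estimate is simply the average of the raw importance weights. The linear toy model, prior, and proposal are invented for illustration; this is textbook importance sampling, not the ATAIS algorithm, which additionally adapts the target and estimates the noise variance.

```python
# Generic importance sampling: evidence p(y) = E_q[ p(y|m) p(m) / q(m) ],
# so the evidence falls out of the importance weights directly.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x + rng.normal(0.0, 0.1, x.size)   # synthetic data, true slope 2

def log_likelihood(m, sigma=0.1):
    r = y - m * x
    return -0.5 * np.sum((r / sigma) ** 2) - x.size * np.log(sigma * np.sqrt(2 * np.pi))

def log_prior(m):  # N(0, 10^2) prior on the slope
    return -0.5 * (m / 10.0) ** 2 - np.log(10.0 * np.sqrt(2 * np.pi))

# Proposal q = N(1.5, 1); importance weight w_i = p(y|m_i) p(m_i) / q(m_i).
m_s = rng.normal(1.5, 1.0, 20000)
log_q = -0.5 * (m_s - 1.5) ** 2 - np.log(np.sqrt(2 * np.pi))
log_w = np.array([log_likelihood(m) + log_prior(m) for m in m_s]) - log_q

w = np.exp(log_w - log_w.max())               # stabilised weights
evidence = w.mean() * np.exp(log_w.max())     # Monte Carlo estimate of p(y)
posterior_mean = np.sum(w * m_s) / np.sum(w)  # self-normalised estimate
print(posterior_mean, evidence)
```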
D.M. Harrington, J.R. Kuhn (2010)
Stellar spectropolarimetry is a relatively new remote sensing tool for exploring stellar atmospheres and circumstellar environments. We present the results of our HiVIS survey and a multi-wavelength ESPaDOnS follow-up campaign showing detectable linear polarization signatures in many lines for most obscured stars. This survey shows that polarization signatures at and below 0.1% across many lines are common in stars with often much larger H-alpha signatures. These smaller signatures are near the limit of typical systematic errors in most night-time spectropolarimeters. In an effort to increase our precision and efficiency for detecting small signals, we designed and implemented the new HiVIS bi-directionally clocked detector synchronized with the new liquid-crystal polarimeter package. We can now record multiple independent polarized spectra in a single exposure on identical pixels and have demonstrated 10^-4 relative polarimetric precision. The new detector allows the movement of charge on the device to be synchronized with phase changes in the liquid-crystal variable retarders at rates of >5 Hz. It also allows for more efficient observing on bright targets by effectively increasing the pixel well depth. With the new detector, low- and high-resolution modes, and polarization calibrations for the instrument and telescope, we substantially reduce limitations to the precision and accuracy of this new spectropolarimetric tool.
Adaptive filtering is a powerful class of control theoretic concepts useful in extracting information from noisy data sets or performing forward prediction in time for a dynamic system. The broad utilization of the associated algorithms makes them attractive targets for similar problems in the quantum domain. To date, however, the construction of adaptive filters for quantum systems has typically been carried out in terms of stochastic differential equations for weak, continuous quantum measurements, as used in linear quantum systems such as optical cavities. Discretized measurement models are not as easily treated in this framework, but are frequently employed in quantum information systems leveraging projective measurements. This paper presents a detailed analysis of several technical innovations that enable classical filtering of discrete projective measurements, useful for adaptively learning system dynamics, noise properties, or hardware performance variations in classically correlated measurement data from quantum devices. In previous work we studied a specific case of this framework, in which noise and calibration errors on qubit arrays could be efficiently characterized in space; here, we present a generalized analysis of filtering in quantum systems and demonstrate that the traditional convergence properties of nonlinear classical filtering hold using single-shot projective measurements. These results are important early demonstrations indicating that a range of concepts and techniques from classical nonlinear filtering theory may be applied to the characterization of quantum systems involving discretized projective measurements, paving the way for broader adoption of control theoretic techniques in quantum technology.
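As a toy illustration of classical filtering on discretized, single-shot projective measurements, the sketch below runs a grid-based recursive Bayesian filter tracking a slowly drifting outcome probability. The grid, drift kernel, and drift rates are invented for illustration; this is a generic predict/update filter of our own construction, not the algorithm analysed in the paper.

```python
# Grid-based Bayesian filter over single-shot 0/1 projective outcomes:
# predict (random-walk drift of p) then update (Bernoulli likelihood).
import numpy as np

rng = np.random.default_rng(1)
grid = np.linspace(0.0, 1.0, 201)          # grid over p = Prob(outcome = 1)
posterior = np.ones_like(grid) / grid.size

# Transition kernel: let p drift a little between shots.
drift = np.exp(-0.5 * ((grid[:, None] - grid[None, :]) / 0.02) ** 2)
drift /= drift.sum(axis=0)                  # columns are proper distributions

p_true, estimates = 0.3, []
for t in range(500):
    p_true = np.clip(p_true + rng.normal(0.0, 0.005), 0.0, 1.0)  # slow drift
    bit = rng.random() < p_true                                  # single shot
    posterior = drift @ posterior                                # predict
    posterior *= grid if bit else (1.0 - grid)                   # update
    posterior /= posterior.sum()
    estimates.append(grid @ posterior)                           # posterior mean

print(f"true p = {p_true:.3f}, filtered estimate = {estimates[-1]:.3f}")
```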
