
Optimal Estimation of Several Linear Parameters in the Presence of Lorentzian Thermal Noise

Added by Jason Steffen
Publication date: 2009
Field: Physics
Language: English





In a previous article we developed an approach to the optimal (minimum variance, unbiased) statistical estimation technique for the equilibrium displacement of a damped, harmonic oscillator in the presence of thermal noise. Here, we expand that work to include the optimal estimation of several linear parameters from a continuous time series. We show that working in the basis of the thermal driving force both simplifies the calculations and provides additional insight into why various approximate (not optimal) estimation techniques perform as they do. To illustrate this point, we compare the variance of the optimal estimator that we derive for thermal noise with those of two approximate methods which, like the optimal estimator, suppress the contribution to the variance that would come from the irrelevant, resonant motion of the oscillator. We discuss how these methods fare when the dominant noise process is either white displacement noise or noise with a power spectral density that is inversely proportional to the frequency ($1/f$ noise). We also construct, in the basis of the driving force, an estimator that performs well for a mixture of white noise and thermal noise. To find the optimal multi-parameter estimators for thermal noise, we derive and illustrate a generalization of traditional matrix methods for parameter estimation that can accommodate continuous data. We discuss how this approach may help refine the design of experiments, as it allows an exact, quantitative comparison of the precision of estimated parameters under various data acquisition and data analysis strategies.
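For discretely sampled data with a known noise covariance, the minimum-variance, unbiased estimator of several linear parameters reduces to generalized least squares. The sketch below is a toy discrete-data illustration of that matrix formulation only; the paper itself works with continuous time series in the basis of the thermal driving force, which this sketch does not reproduce. The design matrix, covariance model, and all numbers are illustrative assumptions.

# A minimal discrete-data sketch of generalized least squares with a known
# noise covariance C (illustrative stand-in for the continuous formulation).
import numpy as np

rng = np.random.default_rng(0)

t = np.linspace(0.0, 10.0, 500)             # sample times
A = np.column_stack([np.ones_like(t), t])   # design matrix: offset plus linear drift
theta_true = np.array([1.0, 0.3])           # "true" parameters for the demo

# Toy stationary noise covariance: exponentially correlated noise as a stand-in
# for the correlated displacement noise of a thermally driven oscillator.
tau = 0.5
C = np.exp(-np.abs(t[:, None] - t[None, :]) / tau)
y = A @ theta_true + np.linalg.cholesky(C) @ rng.standard_normal(t.size)

# Minimum-variance unbiased (generalized least squares) estimate:
# theta_hat = (A^T C^{-1} A)^{-1} A^T C^{-1} y
Ci_A = np.linalg.solve(C, A)
Ci_y = np.linalg.solve(C, y)
cov_theta = np.linalg.inv(A.T @ Ci_A)       # parameter covariance (A^T C^{-1} A)^{-1}
theta_hat = cov_theta @ (A.T @ Ci_y)

print("estimates:", theta_hat)
print("1-sigma uncertainties:", np.sqrt(np.diag(cov_theta)))

The printed parameter covariance is what makes the exact, quantitative comparison of data-acquisition and analysis strategies possible: it depends only on the design matrix and the noise covariance, not on the particular noise realization.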



Related research

We examine the problem of construction of confidence intervals within the basic single-parameter, single-iteration variation of the method of quasi-optimal weights. Two kinds of distortions of such intervals due to insufficiently large samples are examined, both allowing an analytical investigation. First, a criterion is developed for validity of the assumption of asymptotic normality together with a recipe for the corresponding corrections. Second, a method is derived to take into account the systematic shift of the confidence interval due to the non-linearity of the theoretical mean of the weight as a function of the parameter to be estimated. A numerical example illustrates the two corrections.
We study the possibility of taking bosonic systems subject to quadratic Hamiltonians and a noisy thermal environment to non-classical stationary states by feedback loops based on weak measurements and conditioned linear driving. We derive general analytical upper bounds for the single mode squeezing and multimode entanglement at steady state, depending only on the Hamiltonian parameters and on the number of thermal excitations of the bath. Our findings show that, rather surprisingly, a larger number of thermal excitations in the bath allows for larger steady-state squeezing and entanglement if the efficiency of the optimal continuous measurements conditioning the feedback loop is high enough. We also consider the performance of feedback strategies based on homodyne detection and show that, at variance with the optimal measurements, it degrades with increasing temperature.
Satoru Tokuda, Kenji Nagata, 2016
The heuristic identification of peaks from noisy complex spectra often leads to misunderstanding of the physical and chemical properties of matter. In this paper, we propose a framework based on Bayesian inference, which enables us to separate multipeak spectra into single peaks statistically and consists of two steps. The first step is estimating both the noise variance and the number of peaks as hyperparameters based on Bayes free energy, which generally is not analytically tractable. The second step is fitting the parameters of each peak function to the given spectrum by calculating the posterior density, which has a problem of local minima and saddles since multipeak models are nonlinear and hierarchical. Our framework enables the escape from local minima or saddles by using the exchange Monte Carlo method and calculates Bayes free energy via the multiple histogram method. We discuss a simulation demonstrating how efficient our framework is and show that estimating both the noise variance and the number of peaks prevents overfitting, overpenalizing, and misunderstanding the precision of parameter estimation.
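As a deliberately simplified stand-in for the decomposition step described above, the sketch below fits a two-Gaussian model to a noisy synthetic spectrum by nonlinear least squares. It does not implement the paper's Bayesian model selection or exchange Monte Carlo; the peak shapes, noise level, and initial guesses are illustrative only, and the sensitivity to those initial guesses is precisely the local-minimum problem the paper addresses.

# A simplified multipeak fit: nonlinear least squares for a sum of two Gaussians.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(3)

def two_peaks(x, a1, mu1, s1, a2, mu2, s2):
    """Sum of two Gaussian peaks with amplitudes a, centers mu, widths s."""
    return (a1 * np.exp(-0.5 * ((x - mu1) / s1) ** 2)
            + a2 * np.exp(-0.5 * ((x - mu2) / s2) ** 2))

x = np.linspace(0.0, 10.0, 400)
true = (1.0, 3.0, 0.5, 0.7, 6.0, 0.8)
y = two_peaks(x, *true) + 0.05 * rng.standard_normal(x.size)

# Initial guesses matter: multipeak models have local minima and saddles,
# which is why the full framework resorts to exchange Monte Carlo.
p0 = (0.8, 2.5, 0.4, 0.5, 6.5, 1.0)
popt, pcov = curve_fit(two_peaks, x, y, p0=p0)
print("fitted parameters:", np.round(popt, 3))
print("1-sigma uncertainties:", np.round(np.sqrt(np.diag(pcov)), 3))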
David W. Hogg, 2021
There are many uses for linear fitting; the context here is interpolation and denoising of data, as when you have calibration data and you want to fit a smooth, flexible function to those data. Or you want to fit a flexible function to de-trend a time series or normalize a spectrum. In these contexts, investigators often choose a polynomial basis, or a Fourier basis, or wavelets, or something equally general. They also choose an order, or number of basis functions to fit, and (often) some kind of regularization. We discuss how this basis-function fitting is done, with ordinary least squares and extensions thereof. We emphasize that it is often valuable to choose far more parameters than data points, despite folk rules to the contrary: Suitably regularized models with enormous numbers of parameters generalize well and make good predictions for held-out data; over-fitting is not (mainly) a problem of having too many parameters. It is even possible to take the limit of infinite parameters, at which, if the basis and regularization are chosen correctly, the least-squares fit becomes the mean of a Gaussian process. We recommend cross-validation as a good empirical method for model selection (for example, setting the number of parameters and the form of the regularization), and jackknife resampling as a good empirical method for estimating the uncertainties of the predictions made by the model. We also give advice for building stable computational implementations.
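As a concrete illustration of the basis-function fitting discussed above, the sketch below fits a heavily over-parameterized Fourier basis to a small toy data set with ridge (L2) regularization. The choice of basis, number of basis functions, regularization strength, and data are illustrative assumptions, not the author's specific recipe; in practice the regularization would be set by cross-validation as recommended above.

# A minimal sketch of over-parameterized basis-function fitting with ridge regularization.
import numpy as np

rng = np.random.default_rng(1)

n_data, n_basis = 30, 200                   # deliberately many more parameters than data
x = np.sort(rng.uniform(0.0, 1.0, n_data))
y = np.sin(2.0 * np.pi * x) + 0.1 * rng.standard_normal(n_data)

def fourier_design(x, n_basis):
    """Columns: a constant term, then sine/cosine pairs of increasing frequency."""
    cols = [np.ones_like(x)]
    for k in range(1, n_basis // 2 + 1):
        cols.append(np.sin(2.0 * np.pi * k * x))
        cols.append(np.cos(2.0 * np.pi * k * x))
    return np.column_stack(cols)[:, :n_basis]

A = fourier_design(x, n_basis)
lam = 1e-3                                   # regularization strength (set by cross-validation in practice)

# Ridge solution: beta = (A^T A + lam I)^{-1} A^T y
beta = np.linalg.solve(A.T @ A + lam * np.eye(n_basis), A.T @ y)

x_new = np.linspace(0.0, 1.0, 5)
print("predictions at new points:", np.round(fourier_design(x_new, n_basis) @ beta, 3))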
Langevin models are frequently used to model various stochastic processes in different fields of natural and social sciences. They are adapted to measured data by estimation techniques such as maximum likelihood estimation, Markov chain Monte Carlo methods, or the non-parametric direct estimation method introduced by Friedrich et al. The latter has the distinction of being very effective in the context of large data sets. Due to their $\delta$-correlated noise, standard Langevin models are limited to Markovian dynamics. A non-Markovian Langevin model can be formulated by introducing a hidden component that realizes correlated noise. For the estimation of such a partially observed diffusion, a different version of the direct estimation method was introduced by Lehle et al. However, this procedure has the limitation that the correlation length of the noise component must be small compared to that of the measured component. In this work we propose another version of the direct estimation method that does not include this restriction. Via this method it is possible to deal with large data sets of a wider range of examples in an effective way. We discuss the abilities of the proposed procedure using several synthetic examples.
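For reference, the sketch below shows the standard Markovian direct estimation method that the above builds on: the drift and diffusion functions are read off from conditional moments of the increments, binned in the state variable. It uses a simulated Ornstein-Uhlenbeck process as stand-in data and does not implement the non-Markovian extension proposed in the paper; all parameter values are illustrative.

# Direct (non-parametric) estimation of drift and diffusion from conditional moments.
import numpy as np

rng = np.random.default_rng(2)

# Simulate an Ornstein-Uhlenbeck process dx = -x dt + sqrt(2 D) dW as test data.
dt, n_steps, D = 1e-3, 500_000, 0.5
x = np.empty(n_steps)
x[0] = 0.0
noise = rng.standard_normal(n_steps - 1)
for i in range(n_steps - 1):
    x[i + 1] = x[i] - x[i] * dt + np.sqrt(2.0 * D * dt) * noise[i]

# Conditional moments of the increments, binned in x.
dx = np.diff(x)
bin_edges = np.linspace(-2.0, 2.0, 21)
bin_idx = np.digitize(x[:-1], bin_edges)          # interior bins are 1..20

drift = np.full(bin_edges.size - 1, np.nan)       # D1(x) ~ <dx | x> / dt
diffusion = np.full(bin_edges.size - 1, np.nan)   # D2(x) ~ <dx^2 | x> / (2 dt)
for b in range(1, bin_edges.size):
    mask = bin_idx == b
    if mask.sum() > 100:                          # skip poorly populated bins
        drift[b - 1] = dx[mask].mean() / dt
        diffusion[b - 1] = (dx[mask] ** 2).mean() / (2.0 * dt)

centers = 0.5 * (bin_edges[:-1] + bin_edges[1:])
print("bin centers:          ", np.round(centers, 1))
print("estimated drift D1(x): ", np.round(drift, 2))
print("mean diffusion D2 (true 0.5):", float(np.round(np.nanmean(diffusion), 3)))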