
Estimation and uncertainty quantification for the output from quantum simulators

Added by Kody Law
Publication date: 2019
Language: English





The problem of estimating certain distributions over $\{0,1\}^d$ is considered here. The distribution represents a quantum system of $d$ qubits, where there are non-trivial dependencies between the qubits. A maximum entropy approach is adopted to reconstruct the distribution from exact moments or observed empirical moments. The Robbins-Monro algorithm is used to solve the intractable maximum entropy problem, by constructing an unbiased estimator of the un-normalized target with a sequential Monte Carlo sampler at each iteration. In the case of empirical moments, this coincides with a maximum likelihood estimator. A Bayesian formulation is also considered in order to quantify posterior uncertainty. Several approaches are proposed in order to tackle this challenging problem, based on recently developed methodologies. In particular, unbiased estimators of the gradient of the log posterior are constructed and used within a provably convergent Langevin-based Markov chain Monte Carlo method. The methods are illustrated on classically simulated output from quantum simulators.
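The moment-matching step can be sketched with a toy Robbins-Monro iteration. This is an illustrative stand-in only: it draws exact samples by enumerating the $2^d$ states rather than using the paper's sequential Monte Carlo sampler, and the feature set (single-qubit means plus one invented pairwise moment) and the "true" parameter are made-up examples, not the paper's setup.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
d = 3
states = np.array(list(itertools.product([0, 1], repeat=d)), dtype=float)

def features(x):
    # sufficient statistics: single-qubit means plus one pairwise term
    return np.concatenate([x, [x[0] * x[1]]])

Phi = np.array([features(s) for s in states])  # shape (2^d, d+1)

def probs(theta):
    # maximum-entropy / exponential-family form p(x) ∝ exp(theta . phi(x))
    w = np.exp(Phi @ theta)
    return w / w.sum()

# target moments, here generated from a "true" parameter
theta_true = np.array([0.5, -0.3, 0.8, 1.2])
mu = probs(theta_true) @ Phi

# Robbins-Monro: step along an unbiased estimate of the log-likelihood gradient
theta = np.zeros(4)
for k in range(1, 20001):
    idx = rng.choice(len(states), size=64, p=probs(theta))
    grad = mu - Phi[idx].mean(axis=0)   # unbiased: E[sample moments] = model moments
    theta += (1.0 / k**0.7) * grad
```

The fixed point of the iteration is the parameter whose model moments match `mu`, which is the maximum entropy (equivalently, maximum likelihood) solution in this family.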



Related research

Mengyang Gu, Yimin Luo, Yue He (2021)
Differential dynamic microscopy (DDM) is a form of video image analysis that combines the sensitivity of scattering and the direct visualization benefits of microscopy. DDM is broadly useful in determining dynamical properties, including the intermediate scattering function, for many spatiotemporally correlated systems. Despite its straightforward analysis, DDM has not been fully adopted as a routine characterization tool, largely due to computational cost and lack of algorithmic robustness. We present a comprehensive statistical framework that aims at quantifying error, reducing the computational order, and enhancing the robustness of DDM analysis. We quantify the error and propagate an independent noise term to derive a closed-form expression of the expected value and variance of the observed image structure function. Significantly, we propose an unbiased estimator of the mean of the noise in the observed image structure function, which can be determined experimentally and significantly improves the accuracy of applications of DDM. Furthermore, through use of Gaussian Process Regression (GPR), we find that predictive samples of the image structure function require only around 1% of the Fourier Transforms of the observed quantities. This vastly reduces computational cost, while preserving information of the quantities of interest, such as quantiles of the image scattering function, for subsequent analysis. The approach, which we call DDM with Uncertainty Quantification (DDM-UQ), is validated using both simulations and experiments with respect to accuracy and computational efficiency, as compared with conventional DDM and multiple particle tracking. Overall, we propose that DDM-UQ lays the foundation for important new applications of DDM, as well as for high-throughput characterization.
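The observed image structure function at the heart of DDM can be sketched in a few lines. This is a minimal illustration on synthetic noise frames: the stack, frame size, and lag are invented, and the paper's noise propagation and GPR steps are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)
T, N = 32, 64
frames = rng.standard_normal((T, N, N))  # synthetic image stack, stand-in data

def image_structure_function(frames, lag):
    # D(q, dt): time-average of |FFT(I(t+dt)) - FFT(I(t))|^2 over all frame pairs
    diff = np.fft.fft2(frames[lag:] - frames[:-lag], axes=(1, 2))
    return (np.abs(diff) ** 2).mean(axis=0)

D = image_structure_function(frames, lag=4)  # one 2D structure function per lag
```

In practice `D` is then azimuthally averaged over wavevector magnitude `q`; repeating over many lags is what makes naive DDM expensive, which is the cost the paper's GPR step reduces.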
Standard approaches for uncertainty quantification in cardiovascular modeling pose challenges due to the large number of uncertain inputs and the significant computational cost of realistic three-dimensional simulations. We propose an efficient uncertainty quantification framework utilizing a multilevel multifidelity (MLMF) Monte Carlo estimator to improve the accuracy of hemodynamic quantities of interest while maintaining reasonable computational cost. This is achieved by leveraging three cardiovascular model fidelities, each with varying spatial resolution, to rigorously quantify the variability in hemodynamic outputs. We employ two low-fidelity models to construct several different estimators. Our goal is to investigate and compare the efficiency of estimators built from combinations of these low-fidelity and high-fidelity models. We demonstrate this framework on healthy and diseased models of aortic and coronary anatomy, including uncertainties in material property and boundary condition parameters. We seek to demonstrate that for this application it is possible to accelerate the convergence of the estimators by utilizing a MLMF paradigm. Therefore, we compare our approach to Monte Carlo and multilevel Monte Carlo estimators based only on three-dimensional simulations. We demonstrate significant reduction in total computational cost with the MLMF estimators. We also examine the differing properties of the MLMF estimators in healthy versus diseased models, as well as global versus local quantities of interest. As expected, global quantities and healthy models show larger reductions than local quantities and diseased models, as the latter rely more heavily on the highest fidelity model evaluations. In all cases, our workflow coupling Dakota's MLMF estimators with the SimVascular cardiovascular modeling framework makes uncertainty quantification feasible for constrained computational budgets.
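The multifidelity idea can be illustrated with a two-level estimator on toy functions: run the cheap model on many samples, and correct its bias with the high-minus-low discrepancy on a small sample. The `high_fidelity` and `low_fidelity` functions and the sample sizes here are invented stand-ins, not the paper's cardiovascular models or its full three-level MLMF scheme.

```python
import numpy as np

rng = np.random.default_rng(2)

def high_fidelity(x):
    return np.sin(x) + 0.05 * x**2   # stand-in for an expensive 3D simulation

def low_fidelity(x):
    return np.sin(x)                 # cheap, correlated surrogate

# two-level estimator: E[HF] = E[LF] + E[HF - LF],
# with each expectation estimated at a budget matched to its cost
x_cheap = rng.normal(size=100_000)   # many cheap evaluations
x_corr = rng.normal(size=500)        # few expensive correction evaluations
estimate = (low_fidelity(x_cheap).mean()
            + (high_fidelity(x_corr) - low_fidelity(x_corr)).mean())
```

Because the discrepancy `HF - LF` has much smaller variance than `HF` itself, the correction term converges with far fewer expensive evaluations than a plain Monte Carlo estimate would need.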
Chris J. Oates (2021)
The lectures were prepared for the École Thématique sur les Incertitudes en Calcul Scientifique (ETICS) in September 2021.
It is not unusual for a data analyst to encounter data sets distributed across several computers. This can happen for reasons such as privacy concerns, efficiency of likelihood evaluations, or just the sheer size of the whole data set. This presents new challenges to statisticians, as even computing simple summary statistics such as the median becomes computationally challenging. Furthermore, if other advanced statistical methods are desired, novel computational strategies are needed. In this paper we propose a new approach for distributed analysis of massive data that is suitable for generalized fiducial inference and is based on a careful implementation of a divide-and-conquer strategy combined with importance sampling. The proposed approach requires only a small amount of communication between nodes, and is shown to be asymptotically equivalent to using the whole data set. Unlike most existing methods, the proposed approach produces uncertainty measures (such as confidence intervals) in addition to point estimates for parameters of interest. The proposed approach is also applied to the analysis of a large set of solar images.
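A minimal sketch of divide-and-conquer combined with importance sampling, on a toy normal-mean problem rather than generalized fiducial inference: each "node" evaluates only its own shard's log-likelihood at shared candidate draws, and the master combines the node outputs into importance weights. The model, shard count, and proposal are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# a normal-mean problem with unit variance, data split across three "nodes"
data = rng.normal(loc=2.0, scale=1.0, size=900)
shards = np.array_split(data, 3)

# the master draws candidate parameters from a broad proposal q = N(0, 5^2)
theta = rng.normal(0.0, 5.0, size=5000)

def node_loglik(shard, theta):
    # each node communicates only this vector, not its raw data
    return -0.5 * ((shard[:, None] - theta[None, :]) ** 2).sum(axis=0)

# combine: weight = (flat prior x full likelihood) / proposal density
log_w = sum(node_loglik(s, theta) for s in shards)
log_w += 0.5 * (theta / 5.0) ** 2          # divide out the proposal, up to a constant
w = np.exp(log_w - log_w.max())            # stabilized exponentiation
w /= w.sum()

post_mean = (w * theta).sum()              # close to the full-data mean
```

The communication cost is one vector of log-likelihood values per node, independent of shard size, which mirrors the small-communication property the abstract emphasizes.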
Deep learning-based object pose estimators are often unreliable and overconfident especially when the input image is outside the training domain, for instance, with sim2real transfer. Efficient and robust uncertainty quantification (UQ) in pose estimators is critically needed in many robotic tasks. In this work, we propose a simple, efficient, and plug-and-play UQ method for 6-DoF object pose estimation. We ensemble 2-3 pre-trained models with different neural network architectures and/or training data sources, and compute their average pairwise disagreement against one another to obtain the uncertainty quantification. We propose four disagreement metrics, including a learned metric, and show that the average distance (ADD) is the best learning-free metric and it is only slightly worse than the learned metric, which requires labeled target data. Our method has several advantages compared to the prior art: 1) our method does not require any modification of the training process or the model inputs; and 2) it needs only one forward pass for each model. We evaluate the proposed UQ method on three tasks where our uncertainty quantification yields much stronger correlations with pose estimation errors than the baselines. Moreover, in a real robot grasping task, our method increases the grasping success rate from 35% to 90%.
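The average-pairwise-disagreement idea can be sketched directly: given an ensemble of pose hypotheses, compute the mean ADD between each pair. Random rotations stand in for the pre-trained models' predictions here, and the point cloud is synthetic; the learned disagreement metric from the paper is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(4)

pts = rng.standard_normal((500, 3))   # synthetic object model points

def random_pose():
    # random rotation via QR (sign-fixed to det +1), plus a small translation
    q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    if np.linalg.det(q) < 0:
        q[:, 0] *= -1
    return q, rng.standard_normal(3) * 0.01

def add_distance(pose_a, pose_b, pts):
    # ADD: mean distance between model points transformed by the two poses
    ra, ta = pose_a
    rb, tb = pose_b
    return np.linalg.norm((pts @ ra.T + ta) - (pts @ rb.T + tb), axis=1).mean()

poses = [random_pose() for _ in range(3)]  # stand-ins for the ensemble's predictions
pairs = [(i, j) for i in range(3) for j in range(i + 1, 3)]
uncertainty = np.mean([add_distance(poses[i], poses[j], pts) for i, j in pairs])
```

Each ensemble member needs only one forward pass, and the scalar `uncertainty` is the quantity the paper correlates with pose error.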
