
Data-Driven Computational Methods: Parameter and Operator Estimations (Chapter 1)

Posted by John Harlim
Publication date: 2018
Research field: Physics
Language: English
Author: John Harlim





Modern scientific computational methods are undergoing a transformative change; big data and statistical learning methods now have the potential to outperform the classical first-principles modeling paradigm. This book bridges this transition, connecting the theory of probability, stochastic processes, functional analysis, numerical analysis, and differential geometry. It describes two classes of computational methods to leverage data for modeling dynamical systems. The first is concerned with data fitting algorithms to estimate parameters in parametric models that are postulated on the basis of physical or dynamical laws. The second class is on operator estimation, which uses the data to nonparametrically approximate the operator generated by the transition function of the underlying dynamical systems. This self-contained book is suitable for graduate studies in applied mathematics, statistics, and engineering. Carefully chosen elementary examples with supplementary MATLAB codes and appendices covering the relevant prerequisite materials are provided, making it suitable for self-study.
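To make the distinction between the two classes concrete, here is a rough sketch in Python (the book's supplementary codes are in MATLAB; the toy model, variable names, and estimators below are illustrative assumptions, not taken from the book): a parametric least-squares fit of the drift and noise coefficients of an Ornstein-Uhlenbeck-type model, followed by a nonparametric estimate of the one-step transition (Markov) operator built from the same time series.

```python
# Minimal sketch (not from the book): contrast parametric fitting with a
# nonparametric estimate of the transition operator, on toy 1-D data.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: Ornstein-Uhlenbeck path dx = -a*x dt + s dW, with a and s "unknown".
a_true, s_true, dt, n = 1.5, 0.4, 0.01, 20000
x = np.zeros(n)
for k in range(n - 1):
    x[k + 1] = x[k] - a_true * x[k] * dt + s_true * np.sqrt(dt) * rng.normal()

# Class 1: parametric estimation. Regress the increments on x to recover a,
# and use the residual variance to recover s (a crude least-squares fit).
dx = np.diff(x)
a_hat = -np.sum(x[:-1] * dx) / (np.sum(x[:-1] ** 2) * dt)
s_hat = np.sqrt(np.var(dx + a_hat * x[:-1] * dt) / dt)
print("parametric estimates:", a_hat, s_hat)

# Class 2: nonparametric operator estimation. Discretize the state space and
# estimate the one-step transition (Markov) matrix directly from the data,
# without postulating a parametric model.
nbins = 30
edges = np.linspace(x.min(), x.max(), nbins + 1)
idx = np.clip(np.digitize(x, edges) - 1, 0, nbins - 1)
P = np.zeros((nbins, nbins))
for i, j in zip(idx[:-1], idx[1:]):
    P[i, j] += 1.0
P /= np.maximum(P.sum(axis=1, keepdims=True), 1.0)  # row-stochastic estimate
print("leading eigenvalues of estimated operator:",
      np.sort(np.linalg.eigvals(P).real)[-3:])
```

The parametric route presupposes the form of the model and only estimates its coefficients, whereas the operator estimate makes no such assumption and instead approximates the transition function itself from the data.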




Read also

In this paper, we consider a surrogate modeling approach using a data-driven nonparametric likelihood function constructed on a manifold on which the data lie (or to which they are close). The proposed method represents the likelihood function using a spectral expansion formulation known as the kernel embedding of the conditional distribution. To respect the geometry of the data, we employ this spectral expansion using a set of data-driven basis functions obtained from the diffusion maps algorithm. The theoretical error estimate suggests that the error bound of the approximate data-driven likelihood function is independent of the variance of the basis functions, which allows us to determine the amount of training data needed for accurate likelihood function estimation. Supporting numerical results demonstrating the robustness of the data-driven likelihood functions for parameter estimation are given on instructive examples involving stochastic and deterministic differential equations. When the dimension of the data manifold is strictly less than the dimension of the ambient space, we found that the proposed approach (which does not require knowledge of the data manifold) is superior to likelihood functions constructed using standard parametric basis functions defined on the ambient coordinates. In an example where the data manifold is not smooth and unknown, the proposed method is more robust than an existing polynomial chaos surrogate model, the non-intrusive spectral projection, which assumes a parametric likelihood.
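A central ingredient here is the set of data-driven basis functions produced by the diffusion maps algorithm. The following is a minimal sketch of that step only (Gaussian kernel, the simplest density normalization, and toy data on a noisy circle are all assumptions for illustration; this is not the authors' implementation):

```python
# Sketch of diffusion-maps basis functions on data sampled near a manifold,
# here a noisy circle embedded in R^2.
import numpy as np

rng = np.random.default_rng(1)
theta = rng.uniform(0, 2 * np.pi, 500)
data = np.c_[np.cos(theta), np.sin(theta)] + 0.01 * rng.normal(size=(500, 2))

# Pairwise squared distances and a Gaussian kernel with bandwidth eps.
d2 = ((data[:, None, :] - data[None, :, :]) ** 2).sum(-1)
eps = np.median(d2)
K = np.exp(-d2 / eps)

# Density normalization (alpha = 1) removes the sampling-density bias; a row
# normalization then yields a Markov matrix whose eigenvectors serve as basis
# functions adapted to the data manifold.
q = K.sum(axis=1)
K_tilde = K / np.outer(q, q)
P = K_tilde / K_tilde.sum(axis=1, keepdims=True)

# Leading eigenvectors = data-driven basis functions (Fourier-like modes on the circle).
eigvals, eigvecs = np.linalg.eig(P)
order = np.argsort(-eigvals.real)
basis = eigvecs[:, order[:5]].real
print("leading diffusion eigenvalues:", eigvals.real[order[:5]])
```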
384 - I. Grabec 2007
Redundancy of experimental data is the basic statistic from which the complexity of a natural phenomenon and the proper number of experiments needed for its exploration can be estimated. The redundancy is expressed by the entropy of information pertaining to the probability density function of experimental variables. Since the calculation of entropy is inconvenient due to integration over a range of variables, an approximate expression for redundancy is derived that includes only a sum over the set of experimental data about these variables. The approximation makes feasible an efficient estimation of the redundancy of data along with the related experimental information and information cost function. From the experimental information the complexity of the phenomenon can be simply estimated, while the proper number of experiments needed for its exploration can be determined from the minimum of the cost function. The performance of the approximate estimation of these statistics is demonstrated on two-dimensional normally distributed random data.
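One common way to realize such a sum-over-data approximation (offered here only as an illustration of the idea, not necessarily Grabec's exact expression) is a resubstitution estimate of the entropy built on a kernel density estimate of the data:

```python
# Illustrative sketch: replace the entropy integral -int p log p by a
# resubstitution sum over the data, using a Gaussian kernel density estimate,
# for two-dimensional normally distributed random data.
import numpy as np

rng = np.random.default_rng(2)
n, sigma = 400, 0.3                      # sample size and kernel width (assumed)
x = rng.normal(size=(n, 2))              # 2-D standard normal data

# Kernel density estimate evaluated at the data points themselves.
d2 = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)
p_hat = np.exp(-d2 / (2 * sigma**2)).sum(axis=1) / (n * 2 * np.pi * sigma**2)

# Entropy approximated by a sum over the data instead of an integral.
H_hat = -np.mean(np.log(p_hat))
H_exact = np.log(2 * np.pi * np.e)       # exact entropy of a standard 2-D normal
print(H_hat, H_exact)
```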
191 - I. Grabec 2007
The extraction of a physical law y = y_0(x) from joint experimental data about x and y is treated. The joint, the marginal and the conditional probability density functions (PDF) are expressed by given data over an estimator whose kernel is the instrument scattering function. As an optimal estimator of y_0(x) the conditional average is proposed. The analysis of its properties is based upon a new definition of prediction quality. The joint experimental information and the redundancy of joint measurements are expressed by the relative entropy. With the number of experiments the redundancy on average increases, while the experimental information converges to a certain limit value. The difference between this limit value and the experimental information at a finite number of data represents the discrepancy between the experimentally determined and the true properties of the phenomenon. The sum of the discrepancy measure and the redundancy is utilized as a cost function. By its minimum a reasonable number of data for the extraction of the law y_0(x) is specified. The mutual information is defined by the marginal and the conditional PDFs of the variables. The ratio between mutual information and marginal information is used to indicate which variable is the independent one. The properties of the introduced statistics are demonstrated on deterministically and randomly related variables.
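The conditional-average estimator itself is simple to write down; the sketch below uses a Gaussian kernel as a stand-in for the instrument scattering function and a quadratic toy law, both assumptions made purely for illustration:

```python
# Kernel conditional average (Nadaraya-Watson form) as an estimator of y_0(x).
import numpy as np

rng = np.random.default_rng(3)
xd = rng.uniform(-1, 1, 300)
yd = xd**2 + 0.05 * rng.normal(size=300)   # noisy samples of the "law" y_0(x) = x^2

def conditional_average(x_query, sigma=0.1):
    """Estimate y_0(x) as the kernel-weighted (conditional) average of the data."""
    w = np.exp(-(x_query - xd) ** 2 / (2 * sigma**2))
    return np.sum(w * yd) / np.sum(w)

print(conditional_average(0.5))   # close to 0.25
```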
63 - Gh. Adam , S. Adam 1999
A subtractionless method for solving Fermi surface sheets (FSS), from measured $n$-axis-projected momentum distribution histograms by the two-dimensional angular correlation of the positron-electron annihilation radiation (2D-ACAR) technique, is discussed. The window least squares statistical noise smoothing filter described in Adam et al., NIM A 337 (1993) 188, is first refined such that the window free radial parameters (WRP) are optimally adapted to the data. In an ideal single crystal, the specific jumps induced in the WRP distribution by the existing Fermi surface jumps yield straightforward information on the resolved FSS. In a real crystal, the smearing of the derived WRP optimal values, which originates from positron annihilations with electrons at crystal imperfections, is ruled out by median smoothing of the obtained distribution over symmetry defined stars of bins. The analysis of a gigacount 2D-ACAR spectrum, measured on the archetypal high-$T_c$ compound $YBa_2Cu_3O_{7-\delta}$ at room temperature, illustrates the method. Both electronic FSS, the ridge along the $\Gamma X$ direction and the pillbox centered at the $S$ point of the first Brillouin zone, are resolved.
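As a rough, generic illustration of the median-smoothing step over symmetry-equivalent bins (not the authors' code; the 4-fold square symmetry and the random stand-in data are assumptions), one can write:

```python
# Median smoothing of a 2-D histogram over symmetry-equivalent bins, assuming
# a simple square symmetry so each "star" contains a bin and its mirror images.
import numpy as np

rng = np.random.default_rng(4)
h = rng.normal(size=(64, 64))            # stand-in for a noisy 2D-ACAR-derived map

# The four symmetry images of the map under the assumed point group.
images = [h, h[::-1, :], h[:, ::-1], h[::-1, ::-1]]

# Replace each bin by the median over its symmetry star, which suppresses
# outliers such as those caused by annihilations at crystal imperfections.
h_smoothed = np.median(np.stack(images), axis=0)
print(h_smoothed.shape)
```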
77 - Stefan Schmitt 2016
A selection of unfolding methods commonly used in High Energy Physics is compared. The methods discussed here are: bin-by-bin correction factors, matrix inversion, template fit, Tikhonov regularisation and two examples of iterative methods. Two procedures to choose the strength of the regularisation are tested, namely the L-curve scan and a scan of global correlation coefficients. The advantages and disadvantages of the unfolding methods and choices of the regularisation strength are discussed using a toy example.
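A minimal toy version of one of these methods, Tikhonov-regularised unfolding with a second-difference (curvature) penalty, is sketched below; the response matrix, noise level, and choice of regularisation operator are illustrative assumptions rather than the paper's setup:

```python
# Toy Tikhonov-regularised unfolding of a binned "truth" spectrum x from a
# smeared measurement y = A x.
import numpy as np

rng = np.random.default_rng(5)
nb = 20
x_true = np.exp(-np.linspace(0, 3, nb))              # steeply falling truth spectrum

# Response (smearing) matrix: each truth bin leaks into neighbouring measured bins.
A = np.zeros((nb, nb))
for i in range(nb):
    for j in range(nb):
        A[i, j] = np.exp(-0.5 * (i - j) ** 2 / 1.5**2)
A /= A.sum(axis=0, keepdims=True)

y = A @ x_true + 0.01 * rng.normal(size=nb)          # smeared, noisy measurement

# Second-difference (curvature) matrix used as the regularisation operator L.
L = np.diff(np.eye(nb), 2, axis=0)

def unfold(tau):
    """Solve min ||A x - y||^2 + tau ||L x||^2 via the normal equations."""
    return np.linalg.solve(A.T @ A + tau * L.T @ L, A.T @ y)

# The regularisation strength tau would be chosen e.g. by an L-curve scan;
# here we simply compare a weakly and a strongly regularised solution.
for tau in (1e-6, 1e-2):
    print(tau, np.round(unfold(tau)[:5], 3))
```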