
Long-timescale predictions from short-trajectory data: A benchmark analysis of the trp-cage miniprotein

Submitted by Aaron Dinner
Publication date: 2020
Research field: Physics
Paper language: English


Elucidating physical mechanisms with statistical confidence from molecular dynamics simulations can be challenging owing to the many degrees of freedom that contribute to collective motions. To address this issue, we recently introduced a dynamical Galerkin approximation (DGA) [Thiede et al., J. Chem. Phys. 150, 244111 (2019)], in which chemical kinetic statistics that satisfy equations of dynamical operators are represented by a basis expansion. Here, we reformulate this approach, clarifying (and reducing) the dependence on the choice of lag time. We present a new projection of the reactive current onto collective variables and provide improved estimators for rates and committors. We also present simple procedures for constructing suitable smoothly varying basis functions from arbitrary molecular features. To evaluate estimators and basis sets numerically, we generate and carefully validate a dataset of short trajectories for the unfolding and folding of the trp-cage miniprotein, a well-studied system. Our analysis demonstrates a comprehensive strategy for characterizing reaction pathways quantitatively.
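To make the basis-expansion idea concrete, the sketch below shows one common way a DGA-style committor estimate can be assembled from short trajectories: correlation-type sums between basis functions at a configuration and at the stopped, lag-shifted configuration define a small linear system for the expansion coefficients. The helper callables `basis`, `in_a`, and `in_b` and the frame-wise stopping rule are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

def dga_committor_coefficients(trajs, basis, in_a, in_b, lag):
    """Galerkin (DGA-style) estimate of the committor q = P(hit B before A).

    trajs : list of (T, d) arrays of molecular features from short trajectories
    basis : callable, (n, d) -> (n, m) basis functions that vanish on A and B
    in_a, in_b : callables returning boolean masks marking states in A / B
    lag   : lag time in frames

    Returns the coefficient vector v; the committor estimate at a
    configuration x is then approximately 1_B(x) + basis(x) @ v.
    """
    lhs, rhs = None, None
    for x in trajs:
        phi = basis(x)                    # (T, m) basis values along the trajectory
        g = in_b(x).astype(float)         # guess function: 1 in B, 0 elsewhere
        stopped = in_a(x) | in_b(x)       # frames where the stopped process halts
        if lhs is None:
            m = phi.shape[1]
            lhs, rhs = np.zeros((m, m)), np.zeros(m)
        for t in range(len(x) - lag):
            # stop at the first entry into A or B within the lag window
            hits = np.nonzero(stopped[t:t + lag + 1])[0]
            s = t + (hits[0] if hits.size else lag)
            lhs += np.outer(phi[t], phi[s] - phi[t])
            rhs -= phi[t] * (g[s] - g[t])
    return np.linalg.lstsq(lhs, rhs, rcond=None)[0]
```

In practice the basis functions are built from molecular features (as the abstract notes) and the same correlation sums can be reused across lag times to check robustness of the estimate.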




Read also

182 - I. Grabec 2007
The extraction of a physical law y = y_0(x) from joint experimental data about x and y is treated. The joint, the marginal and the conditional probability density functions (PDF) are expressed by given data over an estimator whose kernel is the instrument scattering function. As an optimal estimator of y_0(x) the conditional average is proposed. The analysis of its properties is based upon a new definition of prediction quality. The joint experimental information and the redundancy of joint measurements are expressed by the relative entropy. With the number of experiments the redundancy on average increases, while the experimental information converges to a certain limit value. The difference between this limit value and the experimental information at a finite number of data represents the discrepancy between the experimentally determined and the true properties of the phenomenon. The sum of the discrepancy measure and the redundancy is utilized as a cost function. By its minimum a reasonable number of data for the extraction of the law y_0(x) is specified. The mutual information is defined by the marginal and the conditional PDFs of the variables. The ratio between mutual information and marginal information is used to indicate which variable is the independent one. The properties of the introduced statistics are demonstrated on deterministically and randomly related variables.
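As a concrete illustration of the conditional-average estimator described above, the sketch below computes a kernel-weighted mean of the measured y values at each query point, with a Gaussian kernel standing in for the instrument scattering function. The kernel width `sigma` and the toy quadratic law are assumptions used only for demonstration.

```python
import numpy as np

def conditional_average(x_data, y_data, x_query, sigma):
    """Conditional-average estimate of y_0(x): kernel-weighted mean of the
    measured y values, with a Gaussian kernel as the scattering function."""
    x_query = np.atleast_1d(x_query)
    # Gaussian weights between each query point and each measurement
    w = np.exp(-0.5 * ((x_query[:, None] - np.asarray(x_data)[None, :]) / sigma) ** 2)
    return (w @ np.asarray(y_data)) / w.sum(axis=1)

# toy example: noisy samples of a quadratic law y = x**2
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
y = x**2 + 0.05 * rng.standard_normal(200)
print(conditional_average(x, y, [0.0, 0.5], sigma=0.1))  # roughly 0.0 and 0.25
```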
When dealing with non-stationary systems, for which many time series are available, it is common to divide time into epochs, i.e. smaller time intervals, and deal with short time series in the hope of having some form of approximate stationarity on that time scale. We can then study time evolution by looking at properties as a function of the epochs. This leads to singular correlation matrices and thus poor statistics. In the present paper, we propose an ensemble technique to deal with a large set of short time series without any consideration of non-stationarity. We randomly select subsets of time series and thus create an ensemble of non-singular correlation matrices. As the selection possibilities are binomially large, we obtain good statistics for eigenvalues of correlation matrices, which are typically not independent. Once we have defined the ensemble, we analyze its behavior for constant and block-diagonal correlations and compare numerics with analytic results for the corresponding correlated Wishart ensembles. We discuss differences resulting from spurious correlations due to repetitive use of time series. The usefulness of this technique should extend beyond the stationary case if, on the time scale of the epochs, we have quasi-stationarity at least for most epochs.
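The resampling idea can be sketched in a few lines: draw random subsets of the available time series, compute the correlation matrix of each subset (non-singular as long as the subset size does not exceed the series length), and pool the eigenvalues. The toy data with a common mode below is an assumption used only to exercise the function.

```python
import numpy as np

def subset_correlation_ensemble(series, subset_size, n_draws, seed=None):
    """Ensemble of correlation matrices from random subsets of K time series.

    series : (K, T) array of K short time series of length T
    subset_size : number of series per draw (keep subset_size <= T so each
                  correlation matrix has full rank)
    Returns the pooled eigenvalues of all drawn correlation matrices.
    """
    rng = np.random.default_rng(seed)
    K, T = series.shape
    eigenvalues = []
    for _ in range(n_draws):
        idx = rng.choice(K, size=subset_size, replace=False)
        c = np.corrcoef(series[idx])            # (subset_size, subset_size)
        eigenvalues.append(np.linalg.eigvalsh(c))
    return np.concatenate(eigenvalues)

# toy example: 200 series of length 50 sharing a weak common mode
rng = np.random.default_rng(1)
common = rng.standard_normal(50)
data = 0.3 * common + rng.standard_normal((200, 50))
evals = subset_correlation_ensemble(data, subset_size=30, n_draws=500, seed=2)
```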
372 - I. Grabec 2007
Redundancy of experimental data is the basic statistic from which the complexity of a natural phenomenon and the proper number of experiments needed for its exploration can be estimated. The redundancy is expressed by the entropy of information pertaining to the probability density function of experimental variables. Since the calculation of entropy is inconvenient due to integration over a range of variables, an approximate expression for redundancy is derived that includes only a sum over the set of experimental data about these variables. The approximation makes feasible an efficient estimation of the redundancy of data along with the related experimental information and information cost function. From the experimental information the complexity of the phenomenon can be simply estimated, while the proper number of experiments needed for its exploration can be determined from the minimum of the cost function. The performance of the approximate estimation of these statistics is demonstrated on two-dimensional normally distributed random data.
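One standard way to replace the entropy integral by a sum over the data, in the spirit of the approximation described above, is the resubstitution estimate: evaluate a kernel density estimate at the sample points themselves and average the negative log density. The Gaussian kernel and its width are assumptions; this is not the paper's exact expression for the redundancy.

```python
import numpy as np

def resubstitution_entropy(samples, sigma):
    """Approximate differential entropy of a 1D dataset by evaluating a
    Gaussian kernel density estimate at the sample points (a sum over data)
    instead of integrating over the variable's range."""
    x = np.asarray(samples, dtype=float)
    n = len(x)
    diff = (x[:, None] - x[None, :]) / sigma
    # kernel density estimate evaluated at each sample point
    p = np.exp(-0.5 * diff**2).sum(axis=1) / (n * sigma * np.sqrt(2 * np.pi))
    return -np.mean(np.log(p))

rng = np.random.default_rng(0)
data = rng.standard_normal(500)
# for a standard normal the true differential entropy is about 1.42 nats
print(resubstitution_entropy(data, sigma=0.3))
```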
63 - Gh. Adam, S. Adam 1999
A subtractionless method for solving Fermi surface sheets (FSS), from measured $n$-axis-projected momentum distribution histograms by the two-dimensional angular correlation of the positron-electron annihilation radiation (2D-ACAR) technique, is discussed. The window least squares statistical noise smoothing filter described in Adam et al., NIM A 337 (1993) 188, is first refined such that the window free radial parameters (WRP) are optimally adapted to the data. In an ideal single crystal, the specific jumps induced in the WRP distribution by the existing Fermi surface jumps yield straightforward information on the resolved FSS. In a real crystal, the smearing of the derived WRP optimal values, which originates from positron annihilations with electrons at crystal imperfections, is ruled out by median smoothing of the obtained distribution over symmetry-defined stars of bins. The analysis of a gigacount 2D-ACAR spectrum, measured on the archetypal high-$T_c$ compound $YBa_2Cu_3O_{7-\delta}$ at room temperature, illustrates the method. Both electronic FSS, the ridge along the $\Gamma X$ direction and the pillbox centered at the $S$ point of the first Brillouin zone, are resolved.
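The median-smoothing step over symmetry-defined stars of bins can be illustrated generically: for a square 2D histogram, replace each bin by the median over its orbit under the point group of the square, which suppresses isolated outlier bins. This is a simplified stand-in under assumed square symmetry, not a reimplementation of the paper's procedure on the WRP distribution.

```python
import numpy as np

def median_over_symmetry_stars(hist):
    """Replace each bin of a square 2D histogram by the median over its star
    of symmetry-equivalent bins under the point group of the square
    (rotations by 90 degrees and mirror reflections)."""
    h = np.asarray(hist, dtype=float)
    assert h.shape[0] == h.shape[1], "expects a square histogram"
    # the eight images of the array under the square's point-group operations
    images = [h, np.rot90(h, 1), np.rot90(h, 2), np.rot90(h, 3)]
    images += [img.T for img in images]
    return np.median(np.stack(images), axis=0)

# toy example: symmetric signal plus one defect-like outlier bin
grid = np.add.outer(np.abs(np.arange(-5, 6)), np.abs(np.arange(-5, 6))).astype(float)
grid[2, 7] += 50.0
smoothed = median_over_symmetry_stars(grid)
```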
229 - B. P. Datta 2011
In isotope ratio mass spectrometry (IRMS), any sample (S) measurement is performed as a relative difference ((S/W)δi) from a working lab reference (W), but the result is evaluated relative to a recommended standard (D): (S/D)δi. It is thus assumed that different source-specific results ((S1/D)δi, (S2/D)δi) would represent their sources (S1, S2), and be accurately intercomparable. However, the assumption has never been checked. In this manuscript we carry out this task by considering a system such as CO2+-IRMS. We present a model for a priori predicting output uncertainty. Our study shows that scale conversion, even with the aid of auxiliary reference standard(s) Ai(s), cannot make (S/D)δi free from W; and the ((S/W)δi, (A1/W)δi, (A2/W)δi) → (S/D)δi conversion formula normally used in the literature is invalid. Besides, the latter relation has been worked out, which leads to, e.g., fJ([(S/W)δJCO2 ± p%], [(A1/W)δJCO2 ± p%], [(A2/W)δJCO2 ± p%]) = ((S/D)δJCO2 ± 4.5p%), whereas FJ([(S/W)δJCO2 ± p%], [(A1/W)δJCO2 ± p%]) = ((S/D)δJCO2 ± 1.2p%). That is, contrary to the general belief (Nature 1978, 271, 534), scale conversion employing one rather than two Ai standards should ensure that (S/D)δi is more accurate. However, a more valuable finding is that the transformation of any δ-estimate into its absolute value helps improve accuracy, while any reverse process enhances uncertainty. Thus, equally accurate though the absolute estimates of isotopic-CO2 and constituent elemental isotopic abundance ratios could be, in contradistinction any differential estimate is shown to be less accurate. Further, for S and D similar to each other, any absolute estimate is shown to turn out nearly absolutely accurate but any (S/D)δ value really absurd. That is, estimated source-specific absolute values, rather than corresponding differential results, should really represent their sources, and/or be closely intercomparable.
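For orientation, the sketch below shows the elementary identity behind δ-scale conversion with a single working reference, (R_S/R_D) = (R_S/R_W)(R_W/R_D), together with a simple Monte Carlo propagation of a relative input uncertainty. The paper's own fJ and FJ relations, which involve auxiliary reference standards, are not reproduced here, and the numerical inputs are placeholders.

```python
import numpy as np

def delta_scale_conversion(delta_sw, delta_wd):
    """Exact two-point scale conversion of delta values (as fractions, not permil):
    since R_S/R_D = (R_S/R_W) * (R_W/R_D), the deltas combine as
    delta_SD = delta_SW + delta_WD + delta_SW * delta_WD."""
    return delta_sw + delta_wd + delta_sw * delta_wd

def monte_carlo_output_uncertainty(delta_sw, delta_wd, rel_err, n=100_000, seed=0):
    """Propagate a relative uncertainty on both measured deltas to the
    converted value by simple Monte Carlo sampling."""
    rng = np.random.default_rng(seed)
    sw = delta_sw * (1 + rel_err * rng.standard_normal(n))
    wd = delta_wd * (1 + rel_err * rng.standard_normal(n))
    out = delta_scale_conversion(sw, wd)
    return out.mean(), out.std()

# placeholder inputs: 1% relative uncertainty on each measured delta
print(monte_carlo_output_uncertainty(0.010, -0.005, rel_err=0.01))
```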