
Uncertainty evaluation in the estimates of isotopic abundances and atomic weight of any element: a unique application of the theory of uncertainty for derived results

Published by: B. P. Datta
Publication date: 2020
Research field: Physics
Paper language: English
Author: B. P. Datta





It has previously been shown that any measurement-system-specific relationship (SSR), i.e. mathematical model Y_d = f_d({X_m}), is characterized by certain parameters that predetermine the achievable accuracy/uncertainty (e_d^Y) of the desired result y_d. Here we clarify how the element-specific expressions for isotopic abundances and/or atomic weight can be distinguished from one another by these parameters, and how the achievable accuracy can even be predicted a priori. It is thus shown that, irrespective of whether the measurement uncertainty (u_m) is purely random in origin or not, e_d^Y should be a systematic parameter. Further, according to the factors governing its properties, any SSR belongs to either the variable-independent (F.1) or the variable-dependent (F.2) family of SSRs/models. The SSRs considered here are shown to be members of the F.2 family. That is, it is pointed out, and explained why, the uncertainty (e) of determining either an isotopic abundance or an atomic weight should vary, even for given measurement accuracy(ies) u_m(s), as a function of the measurable variable(s) X_m(s). However, the required computational step is shown to behave as an error sink in the overall process of indirect measurement in question.
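As an illustration of how the result-uncertainty depends on the measurable variable itself, the following minimal Python sketch propagates, to first order, the relative uncertainty of a single measured isotope ratio R = N2/N1 into the atomic weight of a two-isotope element. The isotopic masses (roughly those of 10B and 11B) and the 0.1 % ratio uncertainty are illustrative assumptions, not values taken from the paper.

```python
# A minimal sketch of first-order uncertainty propagation for the atomic
# weight of a two-isotope element determined from one measured isotope
# ratio R = N2/N1.  The masses below (approx. 10B and 11B) and the assumed
# ratio uncertainty are illustrative only.

M1, M2 = 10.0129, 11.0093   # isotopic masses (u), illustrative values

def abundances(R):
    """Isotopic abundances computed from the measured ratio R = N2/N1."""
    x1 = 1.0 / (1.0 + R)
    return x1, 1.0 - x1

def atomic_weight(R):
    x1, x2 = abundances(R)
    return x1 * M1 + x2 * M2

def uncertainty_amplification(R):
    """First-order factor relating the relative uncertainty of the atomic
    weight A(R) = (M1 + R*M2)/(1 + R) to that of R:
        dA/A = factor * dR/R,  factor = R*(M2 - M1)/((1 + R)*(M1 + R*M2)).
    """
    return R * (M2 - M1) / ((1.0 + R) * (M1 + R * M2))

if __name__ == "__main__":
    u_R = 0.001   # assumed 0.1 % relative uncertainty in the measured ratio
    for R in (0.1, 0.25, 1.0, 4.0, 10.0):
        f = uncertainty_amplification(R)
        print(f"R = {R:5.2f}  A = {atomic_weight(R):8.4f} u  "
              f"|dA/A| = {abs(f) * u_R:.2e}  (amplification {f:+.4f})")
```

Even for a fixed u_R, the resulting |dA/A| changes with R, and the amplification factor stays well below one, which is the sense in which the computational step acts as an error sink.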




Read also

B. P. Datta (2011)
Any physicochemical variable (Y_m) is always determined from certain measured variables {X_i}. The uncertainties {u_i} of measuring {X_i} are generally ensured a priori to be acceptable. However, there is no general method for assessing the uncertainty (e_m) in the desired Y_m, i.e. irrespective of its system-specific relationship (SSR) with {X_i} and/or the causes of {u_i}. We therefore study the behaviour of different typical SSRs. The study shows that any SSR is characterized by a set of parameters which govern e_m. That is, e_m is shown to represent a net SSR-driven (purely systematic) change in the u_i(s); and it cannot vary according to whether the u_i(s) are caused by statistical or systematic reasons, or both. We thus present the general relationship of e_m with the u_i(s), and discuss how it can be used to predict a priori the requirements for an evaluated Y_m to be representative, and hence to set guidelines for designing experiments and also truly appropriate evaluation models. Say Y_m = f_m({X_i}_{i=1}^N); then, although e_m = g_m({u_i}_{i=1}^N), N is not a key factor in governing e_m. However, simply by varying f_m, e_m is demonstrated to either equal a u_i, or exceed u_i, or even fall below u_i. Further, the limiting error (d_m^{Lim.}) in determining Y_m is also shown to be decided by f_m (the SSR). Thus, all SSRs are classified into two groups: (I) the SSRs that can never lead d_m^{Lim.} to be zero; and (II) the SSRs that enable d_m^{Lim.} to be zero. In fact, the theoretical tool (SSR) is, by its pros and cons, no different from any discrete experimental means of a study, and bears resemblance to chemical reactions as well.
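The following minimal Python sketch illustrates, with generic worst-case first-order propagation, the claim that the output uncertainty e_m is decided by the form of f_m rather than by the number of inputs: for the same 1 % input uncertainties, a sum, a product, and a near-cancelling difference yield e_m equal to, larger than, and much larger than u_i, respectively. The three models are generic examples, not the paper's specific SSRs.

```python
# A minimal numerical sketch of the idea that the output uncertainty e_m
# is fixed by the form of the model f_m.  The three models and the
# worst-case first-order propagation are generic illustrations.

def rel_err_sum(x1, x2, u1, u2):
    """Y = X1 + X2: e_m is a weighted mean of u1, u2 (never exceeds max u)."""
    return (x1 * u1 + x2 * u2) / (x1 + x2)

def rel_err_product(x1, x2, u1, u2):
    """Y = X1 * X2: relative uncertainties add, so e_m exceeds each u_i."""
    return u1 + u2

def rel_err_difference(x1, x2, u1, u2):
    """Y = X1 - X2: e_m can greatly exceed u_i when X1 is close to X2."""
    return (x1 * u1 + x2 * u2) / abs(x1 - x2)

x1, x2 = 10.0, 9.0
u1 = u2 = 0.01          # 1 % relative uncertainty on each input
print("sum       :", rel_err_sum(x1, x2, u1, u2))        # ~0.010 (= u_i)
print("product   :", rel_err_product(x1, x2, u1, u2))    # 0.020  (> u_i)
print("difference:", rel_err_difference(x1, x2, u1, u2)) # 0.190  (>> u_i)
```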
Previously, we presented a new interpretation of quantum mechanics which revealed that it is indeed possible to have a local hidden variable consistent with Bell's inequality experiments. In that article we suggested that the local hidden variable is associated with vacuum fluctuations. In this article we expound upon that notion by introducing the Theory of Vacuum Texture (TVT). Here we show that the highly restrictive assumption of quantized energy levels in a system can be replaced with the simpler, less restrictive postulate that a threshold must be exceeded for energy to be released. With this new postulate, the model of blackbody radiation is shown to be consistent with experiment. We also show that the threshold condition contributes to a localized vacuum energy, which leads us to conclude that the uncertainty principle is a statistical effect. These conditions also naturally lead to the prediction that massive particles transition to an ordered state at low temperatures. In addition, we show that the thermodynamic laws must be modified to include two heat baths with temperatures $T$ for the dissipative energy levels and $T_{V}$ ($\gg T$) for the localized vacuum energy. In total, we show that our threshold postulate agrees with experimental observations of blackbody radiation, the uncertainty principle and quantum statistics without the need of invoking quantum weirdness.
When the cooling rate $v$ is smaller than a certain material-dependent threshold, the glass transition temperature $T_g$ becomes, to a certain degree, a material parameter nearly independent of the cooling rate. The common method of determining $T_g$ is to extrapolate the viscosity $\nu$ of the liquid state at temperatures not far above the freezing conditions down to lower temperatures, where the liquid freezes and the viscosity is hardly measurable. It is generally accepted that the glass transition occurs when the viscosity drops by $13 \leq n \leq 17$ orders of magnitude. The accuracy of $T_g$ therefore depends on the extrapolation quality. We propose here an algorithm for a unique determination of $T_g$. The idea is to unambiguously extrapolate $\nu(T)$ to low temperatures without relying upon a specific model. This can be done using numerical analytical continuation of the $\nu(T)$ function from above $T_g$, where it is measurable, to $T \gtrsim T_g$. For the numerical analytical continuation, we use the Pade approximant method.
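A minimal Python sketch of the general recipe, assuming synthetic VFT-type viscosity data, a [1/1] rational (Pade-type) approximant and a log10(viscosity) = 12 stopping criterion, none of which are taken from the paper: the approximant is fitted in the measurable range above $T_g$ and then continued to lower temperatures to locate $T_g$.

```python
import numpy as np

# Sketch: fit a rational (Pade-type) approximant to viscosity data above
# T_g and analytically continue it downward to locate T_g.  The VFT
# parameters, noise level, approximant order and the log10(eta) = 12
# criterion are illustrative assumptions.

rng = np.random.default_rng(0)
A, B, T0 = -5.0, 600.0, 300.0                      # VFT parameters (synthetic)
T_data = np.linspace(450.0, 700.0, 40)             # "measured" range above T_g
y_data = A + B / (T_data - T0) + rng.normal(0.0, 0.02, T_data.size)

# Linearized least-squares fit of y ~ (p0 + p1*T) / (1 + q1*T).
X = np.column_stack([np.ones_like(T_data), T_data, -T_data * y_data])
p0, p1, q1 = np.linalg.lstsq(X, y_data, rcond=None)[0]

def pade_log_eta(T):
    """The fitted Pade approximant, continued below the measured range."""
    return (p0 + p1 * T) / (1.0 + q1 * T)

# Locate T_g as the temperature where the continued log10(viscosity)
# reaches 12, using a simple bisection (the function decreases with T).
lo, hi = 320.0, 450.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if pade_log_eta(mid) > 12.0:
        lo = mid
    else:
        hi = mid
Tg_pade = 0.5 * (lo + hi)
Tg_true = T0 + B / (12.0 - A)                      # exact value for the VFT model
print(f"Pade-extrapolated T_g ~ {Tg_pade:.1f} K  (exact VFT value {Tg_true:.1f} K)")
```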
We consider the core reinforcement-learning problem of on-policy value function approximation from a batch of trajectory data, and focus on various issues of Temporal Difference (TD) learning and Monte Carlo (MC) policy evaluation. The two methods are known to achieve complementary bias-variance trade-off properties, with TD tending to achieve lower variance but potentially higher bias. In this paper, we argue that the larger bias of TD can be a result of the amplification of local approximation errors. We address this by proposing an algorithm that adaptively switches between TD and MC in each state, thus mitigating the propagation of errors. Our method is based on learned confidence intervals that detect biases of TD estimates. We demonstrate in a variety of policy evaluation tasks that this simple adaptive algorithm performs competitively with the best approach in hindsight, suggesting that learned confidence intervals are a powerful technique for adapting policy evaluation to use TD or MC returns in a data-driven way.
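The sketch below is a simplified tabular stand-in for the idea, not the authors' algorithm: it computes batch MC and TD(0) value estimates for a toy reward chain and switches to the MC estimate in any state where the TD estimate falls outside a naive confidence interval around the MC mean. The environment, step size and interval construction are assumptions made for illustration.

```python
import numpy as np

# Tabular sketch: batch MC vs TD(0) value estimates, with a crude per-state
# switch to MC when TD disagrees with a simple MC confidence interval.

rng = np.random.default_rng(0)
n_states, gamma = 5, 0.9

def sample_episode():
    """Deterministic chain 0 -> ... -> terminal, with noisy rewards."""
    s, traj = 0, []
    while s < n_states:
        traj.append((s, 1.0 + rng.normal(0.0, 0.5)))
        s += 1
    return traj

episodes = [sample_episode() for _ in range(200)]

# Monte Carlo: average discounted returns observed from each state.
mc_returns = [[] for _ in range(n_states)]
for ep in episodes:
    G = 0.0
    for s, r in reversed(ep):
        G = r + gamma * G
        mc_returns[s].append(G)
V_mc = np.array([np.mean(g) for g in mc_returns])
ci_half = np.array([1.96 * np.std(g) / np.sqrt(len(g)) for g in mc_returns])

# TD(0): bootstrapped one-step updates, repeated sweeps over the batch.
V_td = np.zeros(n_states + 1)          # last entry = terminal state (value 0)
for _ in range(50):
    for ep in episodes:
        for s, r in ep:
            V_td[s] += 0.05 * (r + gamma * V_td[s + 1] - V_td[s])
V_td = V_td[:n_states]

# Per-state switch: keep TD unless it disagrees with the MC interval.
V_adaptive = np.where(np.abs(V_td - V_mc) <= ci_half, V_td, V_mc)
print("MC      :", np.round(V_mc, 3))
print("TD(0)   :", np.round(V_td, 3))
print("adaptive:", np.round(V_adaptive, 3))
```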
To interpret uncertainty estimates from differentiable probabilistic models, recent work has proposed generating Counterfactual Latent Uncertainty Explanations (CLUEs). However, for a single input, such approaches could output a variety of explanations due to the lack of constraints placed on the explanation. Here we augment the original CLUE approach, to provide what we call $\delta$-CLUE. CLUE indicates $\it{one}$ way to change an input, while remaining on the data manifold, such that the model becomes more confident about its prediction. We instead return a $\it{set}$ of plausible CLUEs: multiple, diverse inputs that are within a $\delta$ ball of the original input in latent space, all yielding confident predictions.
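The sketch below illustrates only the search loop behind $\delta$-CLUE: starting from several random perturbations of a latent code, it performs gradient ascent on model confidence while projecting back into a $\delta$ ball around the original point. A toy logistic model stands in for the deep probabilistic model and its latent space, and the objective and diversity mechanism (random restarts here, explicit constraints in the paper) are simplified assumptions, not the paper's CLUE objective.

```python
import numpy as np

# Sketch of a delta-ball counterfactual search with a toy logistic model
# standing in for the probabilistic model's decoder + classifier.

rng = np.random.default_rng(1)
w, b = np.array([2.0, -1.5]), 0.3          # toy model parameters (assumed)

def confidence(z):
    """Confidence of the toy model at latent point z (max class probability)."""
    p = 1.0 / (1.0 + np.exp(-(w @ z + b)))
    return max(p, 1.0 - p)

def grad_confidence(z):
    p = 1.0 / (1.0 + np.exp(-(w @ z + b)))
    g = p * (1.0 - p) * w                  # dp/dz
    return g if p >= 0.5 else -g           # ascend whichever class is winning

def delta_clue(z0, delta=0.5, n_clues=4, steps=200, lr=0.05):
    clues = []
    for _ in range(n_clues):
        z = z0 + rng.normal(0.0, 0.1, size=z0.shape)   # random restart
        for _ in range(steps):
            z = z + lr * grad_confidence(z)
            shift = z - z0                              # project back into the
            norm = np.linalg.norm(shift)                # delta-ball around z0
            if norm > delta:
                z = z0 + shift * (delta / norm)
        clues.append(z)
    return clues

z0 = np.array([0.1, 0.2])
print("original confidence:", round(confidence(z0), 3))
for z in delta_clue(z0):
    print("CLUE:", np.round(z, 3), "confidence:", round(confidence(z), 3))
```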