
Optimising experimental design in neutron reflectometry

Posted by James Durant
Publication date: 2021
Research field: Physics
Paper language: English





Using the Fisher information (FI), the design of neutron reflectometry experiments can be optimised, leading to greater confidence in parameters of interest and better use of experimental time [Durant, Wilkins, Butler, & Cooper (2021). J. Appl. Cryst. 54, 1100-1110]. In this work, the FI is utilised in optimising the design of a wide range of reflectometry experiments. Two lipid bilayer systems are investigated to determine the optimal choice of measurement angles and liquid contrasts, in addition to the ratio of the total counting time that should be spent measuring each condition. The reduction in parameter uncertainties with the addition of underlayers to these systems is then quantified, using the FI, and validated through the use of experiment simulation and Bayesian sampling methods. For a one-shot measurement of a degrading lipid monolayer, it is shown that the common practice of measuring null-reflecting water is indeed optimal, but that the optimal measurement angle is dependent on the deuteration state of the monolayer. Finally, the framework is used to demonstrate the feasibility of measuring magnetic signals as small as $0.01\mu_{B}/\text{atom}$ in layers only $20\,\text{\AA}$ thick, given the appropriate experimental design, and that time to reach a given level of confidence in the small magnetic moment is quantifiable.
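To make the Fisher-information comparison of candidate measurement conditions concrete, the following is a minimal numerical sketch, not the authors' code: the one-parameter model, incident flux and q-ranges are illustrative placeholders rather than the paper's lipid or magnetic systems, and Poisson counting statistics are assumed.

import numpy as np

# Sketch: compare two hypothetical measurement angles by the Fisher
# information they provide about a single 'thickness' parameter,
# assuming Poisson-distributed counts.

def model_counts(q, thickness, time):
    """Hypothetical expected counts: a damped fringe pattern whose
    period depends on the thickness parameter (arbitrary units)."""
    reflectivity = np.exp(-q) * (1.0 + 0.5 * np.cos(q * thickness))
    flux = 1e4  # assumed incident flux per unit time (placeholder)
    return time * flux * reflectivity

def fisher_information(q, thickness, time, step=1e-4):
    """FI for Poisson counts: F = sum_i (dN_i/dtheta)^2 / N_i, with the
    derivative estimated by central finite differences."""
    n = model_counts(q, thickness, time)
    dn = (model_counts(q, thickness + step, time)
          - model_counts(q, thickness - step, time)) / (2.0 * step)
    return np.sum(dn**2 / n)

# The two candidate angles are represented only by the q-ranges they reach.
q_low_angle = np.linspace(0.01, 0.1, 50)
q_high_angle = np.linspace(0.1, 0.3, 50)

for label, q in [("low angle", q_low_angle), ("high angle", q_high_angle)]:
    fi = fisher_information(q, thickness=30.0, time=100.0)
    # Cramer-Rao: the best achievable standard deviation is 1/sqrt(FI).
    print(f"{label}: thickness uncertainty bound = {fi**-0.5:.5f}")

The condition giving the larger Fisher information has the smaller Cramér-Rao bound and would be preferred; the paper carries out this kind of comparison over measurement angles, contrasts and counting-time splits for realistic reflectometry models.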




Read also

An approach based on the Fisher information (FI) is developed to quantify the maximum information gain and optimal experimental design in neutron reflectometry experiments. In these experiments, the FI can be analytically calculated and used to provide sub-second predictions of parameter uncertainties. This approach can be used to influence real-time decisions about measurement angle, measurement time, contrast choice and other experimental conditions based on parameters of interest. The FI provides a lower bound on parameter estimation uncertainties and these are shown to decrease with the square root of measurement time, providing useful information for the planning and scheduling of experimental work. As the FI is computationally inexpensive to calculate, it can be computed repeatedly during the course of an experiment, saving costly beam time by signalling that sufficient data has been obtained; or saving experimental datasets by signalling that an experiment needs to continue. The approach's predictions are validated through the introduction of an experiment simulation framework that incorporates instrument-specific incident flux profiles, and through the investigation of measuring the structural properties of a phospholipid bilayer.
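As a brief illustration of the square-root-of-time behaviour quoted above (a sketch under the standard assumption of Poisson counting statistics, not text taken from the paper): if the expected counts in bin $i$ are $N_i(\theta) = t\,\lambda_i(\theta)$ for counting time $t$ and count rates $\lambda_i$, the Fisher information matrix is

$$F_{jk}(\theta) = \sum_i \frac{1}{N_i(\theta)}\,\frac{\partial N_i}{\partial \theta_j}\,\frac{\partial N_i}{\partial \theta_k} = t \sum_i \frac{1}{\lambda_i(\theta)}\,\frac{\partial \lambda_i}{\partial \theta_j}\,\frac{\partial \lambda_i}{\partial \theta_k},$$

so the Cramér-Rao lower bound on each parameter uncertainty, $\sigma_{\theta_j} \ge \sqrt{[F^{-1}(\theta)]_{jj}}$, scales as $1/\sqrt{t}$.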
Deviations from Brownian motion leading to anomalous diffusion are ubiquitously found in transport dynamics, playing a crucial role in phenomena from quantum physics to life sciences. The detection and characterization of anomalous diffusion from the measurement of an individual trajectory are challenging tasks, which traditionally rely on calculating the mean squared displacement of the trajectory. However, this approach breaks down for cases of important practical interest, e.g., short or noisy trajectories, ensembles of heterogeneous trajectories, or non-ergodic processes. Recently, several new approaches have been proposed, mostly building on the ongoing machine-learning revolution. Aiming to perform an objective comparison of methods, we gathered the community and organized an open competition, the Anomalous Diffusion challenge (AnDi). Participating teams independently applied their own algorithms to a commonly-defined dataset including diverse conditions. Although no single method performed best across all scenarios, the results revealed clear differences between the various approaches, providing practical advice for users and a benchmark for developers.
Amplitude analysis is a powerful technique to study hadron decays. A significant complication in these analyses is the treatment of instrumental effects, such as background and selection efficiency variations, in the multidimensional kinematic phase space. This paper reviews conventional methods to estimate efficiency and background distributions and outlines the methods of density estimation using Gaussian processes and artificial neural networks. Such techniques see widespread use elsewhere, but have not gained popularity in use for amplitude analyses. Finally, novel applications of these models are proposed, to estimate background density in the signal region from the sidebands in multiple dimensions, and a more general method for model-assisted density estimation using artificial neural networks.
Andrew Fowlie, 2021
I would like to thank Junk and Lyons (arXiv:2009.06864) for beginning a discussion about replication in high-energy physics (HEP). Junk and Lyons ultimately argue that HEP learned its lessons the hard way through past failures and that other fields could learn from our procedures. They emphasize that experimental collaborations would risk their legacies were they to make a type-1 error in a search for new physics and outline the vigilance taken to avoid one, such as data blinding and a strict $5\sigma$ threshold. The discussion, however, ignores an elephant in the room: there are regularly anomalies in searches for new physics that result in substantial scientific activity but don't replicate with more data.
Evaluated nuclear data uncertainties are often perceived as unrealistic, most often because they are thought to be too small. The impact of this issue in applied nuclear science has been discussed widely in recent years. Commonly suggested causes are: poor estimates of specific error components, neglect of uncertainty correlations, and overlooked known error sources. However, instances have been reported where very careful, objective assessments of all known error sources have been made with realistic error magnitudes and correlations provided, yet the resulting evaluated uncertainties still appear to be inconsistent with observed scatter of predicted mean values. These discrepancies might be attributed to significant unrecognized sources of uncertainty (USU) that limit the accuracy to which these physical quantities can be determined. The objective of our work has been to develop procedures for revealing and including USU estimates in nuclear data evaluations involving experimental input data. We conclude that the presence of USU may be revealed, and estimates of magnitudes made, through quantitative analyses. This paper identifies several specific clues that can be explored by evaluators in identifying the existence of USU. It then describes numerical procedures to generate quantitative estimates of USU magnitudes. Key requirements for these procedures to be viable are that sufficient numbers of data points be available, for statistical reasons, and that additional supporting information about the measurements be provided by the experimenters. Realistic examples are described to illustrate these procedures and demonstrate their outcomes as well as limitations. Our work strongly supports the view that USU is an important issue in nuclear data evaluation, with significant consequences for applications, and that this topic warrants further investigation by the nuclear science community.