Amplitude analysis is a powerful technique to study hadron decays. A significant complication in these analyses is the treatment of instrumental effects, such as background and variations of the selection efficiency, across the multidimensional kinematic phase space. This paper reviews conventional methods for estimating efficiency and background distributions and outlines density-estimation methods based on Gaussian processes and artificial neural networks. Such techniques see widespread use elsewhere but have not yet gained popularity in amplitude analyses. Finally, novel applications of these models are proposed: estimating the background density in the signal region from the sidebands in multiple dimensions, and a more general method for model-assisted density estimation using artificial neural networks.
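As a concrete illustration of the Gaussian-process approach mentioned above, the sketch below smooths a one-dimensional sideband histogram into a background density estimate. It assumes scikit-learn and NumPy; the toy sample, binning, and kernel choice are illustrative assumptions and are not taken from the paper.

```python
# Minimal sketch: smoothing a sideband histogram with a Gaussian process to
# obtain a background density estimate (illustrative; not the paper's code).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
sideband = rng.exponential(scale=2.0, size=5000)        # toy background sample

counts, edges = np.histogram(sideband, bins=40, range=(0.0, 10.0))
centres = 0.5 * (edges[:-1] + edges[1:])

# Regress bin contents on bin centres; the white-noise term absorbs the
# Poisson scatter of the bins so the prediction stays smooth.
kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=float(np.mean(counts)))
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(centres.reshape(-1, 1), counts)

grid = np.linspace(0.0, 10.0, 200).reshape(-1, 1)
density, sigma = gp.predict(grid, return_std=True)      # smooth estimate + uncertainty
```

The same idea extends to several kinematic dimensions by giving the kernel one length scale per variable, at the cost of a larger regression problem.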
The traditional approach in HEP analysis software is to loop over every event and every object via the ROOT framework. This method follows an imperative paradigm, in which the code is tied to the storage format and to the steps of execution. A more desirable strategy would be to implement a declarative language, such that the storage medium and the execution strategy are not part of the abstraction model. This will become increasingly important for managing the large datasets collected by the LHC and the HL-LHC. A new analysis description language (ADL) inspired by functional programming, FuncADL, was developed using Python as a host language. The expressiveness of this language was tested by implementing example analysis tasks designed to benchmark the functionality of ADLs. Many simple selections are expressible in a declarative way with FuncADL, which can be used as an interface to retrieve filtered data. Some limitations were identified, but the design of the language allows for future extensions to add missing features. FuncADL is part of a suite of analysis software tools being developed by the Institute for Research and Innovation in Software for High Energy Physics (IRIS-HEP). These tools will be available to develop highly scalable physics analyses for the LHC.
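To make the paradigm difference concrete, the sketch below contrasts an imperative event loop with a declarative, functional-style query in plain Python. It deliberately does not use the FuncADL API itself; the toy event records and the 30 GeV jet-pT cut are illustrative assumptions.

```python
# Illustrative contrast between the imperative and declarative styles discussed
# above, written in plain Python (this is not the FuncADL API itself).
events = [
    {"jets": [45.0, 12.0, 33.0]},   # toy events: jet pT values in GeV
    {"jets": [8.0, 22.0]},
    {"jets": [60.0]},
]

# Imperative: the loop fixes both the iteration order and the storage layout.
selected_pts = []
for event in events:
    for pt in event["jets"]:
        if pt > 30.0:
            selected_pts.append(pt)

# Declarative: the selection is described as a composition of operations,
# leaving the backend free to decide how (and where) to execute it.
query = (pt
         for event in events
         for pt in event["jets"]
         if pt > 30.0)
assert list(query) == selected_pts
```

In a declarative ADL the analyst writes only the second form, as a query against a dataset, and the backend chooses how to access the data and execute the selection.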
The RooStatsCms (RSC) software framework allows analysis modelling and combination, together with statistical studies and access to sophisticated graphics routines for the visualisation of results. The goal of the project is to complement existing analyses by means of their combination and accurate statistical studies.
I would like to thank Junk and Lyons (arXiv:2009.06864) for beginning a discussion about replication in high-energy physics (HEP). Junk and Lyons ultimately argue that HEP learned its lessons the hard way through past failures and that other fields could learn from our procedures. They emphasize that experimental collaborations would risk their legacies were they to make a type-1 error in a search for new physics, and they outline the vigilance taken to avoid one, such as data blinding and a strict $5\sigma$ threshold. The discussion, however, ignores an elephant in the room: there are regularly anomalies in searches for new physics that result in substantial scientific activity but do not replicate with more data.
Evaluated nuclear data uncertainties are often perceived as unrealistic, most often because they are thought to be too small. The impact of this issue in applied nuclear science has been discussed widely in recent years. Commonly suggested causes are: poor estimates of specific error components, neglect of uncertainty correlations, and overlooked known error sources. However, instances have been reported where very careful, objective assessments of all known error sources have been made with realistic error magnitudes and correlations provided, yet the resulting evaluated uncertainties still appear to be inconsistent with observed scatter of predicted mean values. These discrepancies might be attributed to significant unrecognized sources of uncertainty (USU) that limit the accuracy to which these physical quantities can be determined. The objective of our work has been to develop procedures for revealing and including USU estimates in nuclear data evaluations involving experimental input data. We conclude that the presence of USU may be revealed, and estimates of magnitudes made, through quantitative analyses. This paper identifies several specific clues that can be explored by evaluators in identifying the existence of USU. It then describes numerical procedures to generate quantitative estimates of USU magnitudes. Key requirements for these procedures to be viable are that sufficient numbers of data points be available, for statistical reasons, and that additional supporting information about the measurements be provided by the experimenters. Realistic examples are described to illustrate these procedures and demonstrate their outcomes as well as limitations. Our work strongly supports the view that USU is an important issue in nuclear data evaluation, with significant consequences for applications, and that this topic warrants further investigation by the nuclear science community.
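One simple quantitative clue of the kind described above is excess scatter of measurements relative to their stated uncertainties. The sketch below applies a reduced chi-square (Birge-ratio style) consistency test and scans for the extra variance needed to restore consistency; it is a generic illustration with made-up numbers, not the evaluation procedure developed in this work.

```python
# Sketch: flagging excess scatter among nominally consistent measurements and
# estimating the extra (unrecognized) variance needed to restore consistency.
# Generic illustration with made-up numbers; not the paper's procedure.
import numpy as np

values = np.array([2.31, 2.45, 2.28, 2.52, 2.19, 2.48])   # toy measured values
sigmas = np.array([0.03, 0.04, 0.03, 0.05, 0.04, 0.03])   # stated uncertainties

def reduced_chi2(extra_var=0.0):
    var = sigmas ** 2 + extra_var
    mean = np.sum(values / var) / np.sum(1.0 / var)        # weighted mean
    return np.sum((values - mean) ** 2 / var) / (len(values) - 1)

print(f"reduced chi-square with stated uncertainties: {reduced_chi2():.1f}")

# A value well above 1 hints at an unrecognized source of uncertainty (USU).
# Scan for the additional variance that brings the reduced chi-square to ~1.
if reduced_chi2() > 1.0:
    grid = np.linspace(0.0, 0.1, 10001)
    chis = np.array([reduced_chi2(v) for v in grid])
    extra_var = grid[np.argmin(np.abs(chis - 1.0))]
    print(f"implied USU standard deviation: {np.sqrt(extra_var):.3f}")
```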
Using the Fisher information (FI), the design of neutron reflectometry experiments can be optimised, leading to greater confidence in parameters of interest and better use of experimental time [Durant, Wilkins, Butler & Cooper (2021). J. Appl. Cryst. 54, 1100-1110]. In this work, the FI is utilised in optimising the design of a wide range of reflectometry experiments. Two lipid bilayer systems are investigated to determine the optimal choice of measurement angles and liquid contrasts, in addition to the ratio of the total counting time that should be spent measuring each condition. The reduction in parameter uncertainties with the addition of underlayers to these systems is then quantified using the FI and validated through the use of experiment simulation and Bayesian sampling methods. For a one-shot measurement of a degrading lipid monolayer, it is shown that the common practice of measuring null-reflecting water is indeed optimal, but that the optimal measurement angle depends on the deuteration state of the monolayer. Finally, the framework is used to demonstrate the feasibility of measuring magnetic signals as small as $0.01\mu_{B}/\text{atom}$ in layers only $20~\text{\AA}$ thick, given the appropriate experimental design, and that the time to reach a given level of confidence in the small magnetic moment is quantifiable.
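For a Poisson counting experiment the Fisher information takes the standard form $F = J^{T}\,\mathrm{diag}(1/\mu)\,J$, where $J$ is the Jacobian of the expected counts $\mu$ with respect to the parameters. The sketch below evaluates this for a two-parameter toy model and shows how parameter uncertainties (from the Cramér-Rao bound) shrink with counting time; the toy model and numbers are illustrative assumptions, not the reflectivity model used in the cited framework.

```python
# Minimal sketch: Fisher information for a Poisson counting experiment,
# F = J^T diag(1/mu) J, with parameter uncertainties from the inverse of F.
# The two-parameter toy model below is illustrative, not a reflectivity model.
import numpy as np

def expected_counts(theta, q, time):
    scale, decay = theta
    return time * scale * np.exp(-decay * q)               # toy expected counts

def fisher_information(theta, q, time, eps=1e-6):
    mu = expected_counts(theta, q, time)
    jac = np.empty((len(q), len(theta)))
    for j in range(len(theta)):                            # numerical Jacobian
        step = np.zeros_like(theta)
        step[j] = eps
        jac[:, j] = (expected_counts(theta + step, q, time)
                     - expected_counts(theta - step, q, time)) / (2 * eps)
    return jac.T @ np.diag(1.0 / mu) @ jac

theta = np.array([100.0, 5.0])
q = np.linspace(0.01, 0.3, 50)

for time in (1.0, 4.0):                                    # counting-time comparison
    f = fisher_information(theta, q, time)
    sigma = np.sqrt(np.diag(np.linalg.inv(f)))             # Cramér-Rao bound
    print(f"time {time:>4}: parameter uncertainties {sigma}")
```

Quadrupling the counting time halves the predicted uncertainties, which is the kind of trade-off the FI framework lets one quantify before any beam time is spent.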