Presently, models for the parameterization of cross sections for nodal diffusion nuclear reactor calculations at different conditions, using histories and branches, are developed from reactor physics expertise and by trial and error. In this paper we describe the development and application of a novel graph theoretic approach (GTA) for generating the expressions used to evaluate cross sections in a nodal diffusion code. The GTA generalizes existing nodal cross section models into a non-orthogonal, extensible, dimensional parameter space. Furthermore, it utilizes a rigorous calculus on graphs to formulate partial derivatives. GTA cross section models can be generated in a number of ways; in our current work we explore a step-wise regression and a complete Taylor series expansion of the parameterized cross sections. To establish proof-of-principle of the GTA, we compare numerical results of GTA-generated cross section evaluations with those of traditional models for canonical PWR case matrices and AP1000 lattice designs.
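As a minimal sketch of the kind of parameterized cross-section model the abstract describes (not the GTA itself), the example below fits a first-order Taylor-style expansion of a macroscopic cross section over a small two-parameter case matrix by least-squares regression. The state parameters (boron concentration and moderator density), the reference state point, and all numerical values are illustrative assumptions, not data from the paper.

```python
import numpy as np

# Hypothetical case matrix: a cross section tabulated over branches in
# boron concentration (ppm) and moderator density (g/cm^3).
boron = np.array([0.0, 600.0, 1200.0, 0.0, 600.0, 1200.0])
density = np.array([0.66, 0.66, 0.66, 0.74, 0.74, 0.74])
sigma = np.array([0.0280, 0.0285, 0.0290, 0.0275, 0.0281, 0.0287])

# Reference (nominal) state point for the expansion.
b0, d0 = 600.0, 0.70
db, dd = boron - b0, density - d0

# Taylor-like model: sigma ~ c0 + c1*db + c2*dd + c3*db*dd,
# fitted to the case matrix by linear least squares.
A = np.column_stack([np.ones_like(db), db, dd, db * dd])
coeffs, *_ = np.linalg.lstsq(A, sigma, rcond=None)

def evaluate(b, d):
    """Evaluate the fitted cross-section model at a state point (b, d)."""
    return coeffs @ np.array([1.0, b - b0, d - d0, (b - b0) * (d - d0)])
```

In a nodal diffusion code, an expression of this form would be evaluated at each node's instantaneous state parameters; the GTA's contribution, per the abstract, is systematizing how such expressions and their partial derivatives are constructed.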
A fundamental problem in quantum computation and quantum information is finding the minimum quantum dimension needed for a task. For tasks involving state preparation and measurements, this problem can be addressed using only the input-output correlations. This has been applied to Bell, prepare-and-measure, and Kochen-Specker contextuality scenarios. Here, we introduce a novel approach to quantum dimension witnessing for scenarios with one preparation and several measurements, which uses the graphs of mutual exclusivity between sets of measurement events. We present the concepts and tools needed for graph-theoretic quantum dimension witnessing and illustrate their use by identifying new quantum dimension witnesses, including a family that can certify arbitrarily high quantum dimensions with few events.
We present a new approach to a classical problem in statistical physics: estimating the partition function and other thermodynamic quantities of the ferromagnetic Ising model. Markov chain Monte Carlo methods for this problem have been well-studied, although an algorithm that is truly practical remains elusive. Our approach takes advantage of the fact that, for a fixed bond strength, studying the ferromagnetic Ising model is a question of counting particular subgraphs of a given graph. We combine graph theory and heuristic sampling to determine coefficients that are independent of temperature and that, once obtained, can be used to determine the partition function and to compute physical quantities such as mean energy, mean magnetic moment, specific heat, and magnetic susceptibility.
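One standard form of the subgraph-counting connection mentioned above is the high-temperature expansion, in which the partition function at fixed bond strength J factors into temperature-dependent terms and temperature-independent counts of "even" subgraphs (subgraphs where every vertex has even degree): Z = 2^N cosh(beta*J)^|E| * sum_k c_k tanh(beta*J)^k. The sketch below, a toy illustration rather than the paper's heuristic sampling scheme, counts the c_k exactly for a 4-cycle by brute force and then evaluates Z and the mean energy at any temperature.

```python
import itertools
import math

# Toy graph: a 4-cycle. Edges are vertex pairs.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
n_vertices = 4

# Temperature-independent coefficients: c[k] counts subgraphs with k edges
# in which every vertex has even degree ("even" subgraphs).
c = [0] * (len(edges) + 1)
for r in range(len(edges) + 1):
    for subset in itertools.combinations(edges, r):
        deg = [0] * n_vertices
        for u, v in subset:
            deg[u] += 1
            deg[v] += 1
        if all(d % 2 == 0 for d in deg):
            c[r] += 1

def log_Z(beta, J=1.0):
    """High-temperature expansion: Z = 2^N cosh(bJ)^|E| sum_k c_k tanh(bJ)^k."""
    t = math.tanh(beta * J)
    s = sum(c[k] * t**k for k in range(len(c)))
    return (n_vertices * math.log(2)
            + len(edges) * math.log(math.cosh(beta * J))
            + math.log(s))

def mean_energy(beta, J=1.0, h=1e-5):
    """Mean energy via E = -d ln Z / d beta (central difference)."""
    return -(log_Z(beta + h, J) - log_Z(beta - h, J)) / (2 * h)
```

Once the coefficients c_k are in hand, any thermodynamic quantity expressible through derivatives of ln Z (mean energy, specific heat, and so on) follows without re-sampling at each temperature, which is the practical advantage the abstract points to.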
This paper updates and complements a previously published evaluation of computational methods for total and partial cross sections, relevant to modeling the photoelectric effect in Monte Carlo particle transport. It examines calculation methods that have become available since the publication of the previous paper, some of which claim improvements over previous calculations; it tests them with statistical methods against the same sample of experimental data collected for the previous evaluation. No statistically significant improvements are observed with respect to the calculation method identified in the previous paper as the state of the art for the intended purpose, encoded in the EPDL97 data library. Some of the more recent computational methods exhibit significantly lower capability to reproduce experimental measurements than the existing alternatives.
Classical molecular dynamics (MD) simulations enable modeling of materials and examination of microscopic details that are not accessible experimentally. The predictive capability of MD relies on the force field (FF) used to describe interatomic interactions. FF parameters are typically determined to reproduce selected material properties computed from density functional theory (DFT) and/or measured experimentally. A common practice in parameterizing FFs is to use least-squares local minimization algorithms. Genetic algorithms (GAs) have also been demonstrated as a viable global optimization approach, even for complex FFs. However, an understanding of the relative effectiveness and efficiency of different optimization techniques for the determination of FF parameters is still lacking. In this work, we evaluate various FF parameter optimization schemes, using as an example a training data set calculated from DFT for different polymorphs of IrO$_2$. The Morse functional form is chosen for the pairwise interactions, and the optimization of the parameters against the training data is carried out using (1) multi-start local optimization algorithms: Simplex, Levenberg-Marquardt, and POUNDERS, (2) a single-objective GA, and (3) a multi-objective GA. Using random search as a baseline, we compare the algorithms in terms of the lowest error reached and the number of function evaluations required. We also compare the effectiveness of the different approaches for FF parameterization using a test data set with known ground truth (i.e., generated from a specific Morse FF). We find that the performance of the optimization approaches differs when using the test data vs. the DFT data. Overall, this study provides insight for selecting a suitable optimization method for FF parameterization, which in turn can enable more accurate prediction of material properties and chemical phenomena.
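A minimal sketch of one of the schemes described, multi-start local least-squares fitting of a Morse pair potential against a test set with known ground truth, is given below. SciPy's Levenberg-Marquardt-style `least_squares` stands in for the paper's specific implementations, and the parameter values and ranges are illustrative assumptions, not the paper's.

```python
import numpy as np
from scipy.optimize import least_squares

def morse(r, D, a, r0):
    """Morse pair energy: D*((1 - exp(-a*(r - r0)))**2 - 1)."""
    return D * ((1.0 - np.exp(-a * (r - r0)))**2 - 1.0)

# Synthetic test set with known ground truth, as in the paper's test case:
# energies generated from a specific (here, illustrative) Morse FF.
true_params = (0.8, 1.5, 2.0)   # D, a, r0 (units are nominal)
r = np.linspace(1.5, 4.0, 40)
e_train = morse(r, *true_params)

def residuals(p):
    return morse(r, *p) - e_train

# Multi-start local optimization: run Levenberg-Marquardt least squares
# from several random initial guesses and keep the lowest-cost fit.
rng = np.random.default_rng(0)
best = None
for _ in range(10):
    p0 = rng.uniform([0.1, 0.5, 1.0], [2.0, 3.0, 3.0])
    fit = least_squares(residuals, p0, method="lm")
    if best is None or fit.cost < best.cost:
        best = fit
```

Because the ground truth is known, recovering `true_params` (to within numerical tolerance) verifies the optimizer; against real DFT data no exact minimum exists, which is one reason the abstract reports different relative performance on the two data sets.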
In Monte Carlo particle transport codes, it is often important to adjust reaction cross sections to reduce the variance of calculations of relatively rare events, in a technique known as non-analogous Monte Carlo. We present the theory and sample code for a Geant4 process which allows the cross section of a G4VDiscreteProcess to be scaled, while adjusting track weights so as to mitigate the effects of altered primary beam depletion induced by the cross section change. This makes it possible to increase the cross section of nuclear reactions by factors exceeding 10^4 (in appropriate cases), without distorting the results of energy deposition calculations or coincidence rates. The procedure is also valid for bias factors less than unity, which is useful, for example, in problems that involve computation of particle penetration deep into a target, such as occurs in atmospheric showers or in shielding.
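The weight-correction identity underlying this kind of biasing can be illustrated without Geant4. In the sketch below (a toy one-dimensional slab problem with assumed numbers, not the paper's code), the free path is sampled from an exponential with the cross section scaled by a factor b, and each scored event carries the weight w = p(x)/q(x) = (1/b) exp((b-1)*sigma*x), so the tally remains an unbiased estimate of the analog interaction probability.

```python
import math
import random

random.seed(1)

sigma = 1e-3     # analog macroscopic cross section (1/cm): a rare interaction
bias = 1e3       # cross-section scaling factor b
L = 1.0          # slab thickness (cm)
n = 100_000      # number of histories

# Analytic analog probability of interacting within the slab.
exact = 1.0 - math.exp(-sigma * L)

tally = 0.0
for _ in range(n):
    # Sample the free path from the *biased* exponential, rate b*sigma.
    x = -math.log(random.random()) / (bias * sigma)
    if x < L:
        # Weight ratio of analog to biased path pdfs restores the
        # analog expectation: w = (1/b) * exp((b - 1) * sigma * x).
        tally += math.exp((bias - 1.0) * sigma * x) / bias
estimate = tally / n
```

With the analog cross section, only about one history in a thousand would interact inside the slab; with b = 10^3 most histories contribute a (small) weight, which is the variance-reduction effect the abstract describes. The same weight ratio applied per flight step is what compensates for the altered depletion of the primary beam.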