Purpose: A reliable model to simulate nuclear interactions is fundamental for ion therapy. We have already shown that BLOB (Boltzmann-Langevin One Body), a model developed to simulate heavy-ion interactions up to a few hundred MeV/u, can also simulate $^{12}$C reactions in the same energy domain. However, its computation time is too long for any medical application. For this reason, we explore the possibility of emulating it with a deep learning algorithm. Methods: The BLOB final state is a Probability Density Function (PDF) giving the probability of finding a nucleon at a given position in phase space. We discretised this PDF and trained a Variational Auto-Encoder (VAE) to reproduce the resulting discrete PDF. As a proof of concept, we developed and trained a VAE to emulate BLOB in simulating the interactions of $^{12}$C with $^{12}$C at 62 MeV/u. To gain more control over the generation, we forced the VAE latent space to be organised with respect to the impact parameter ($b$) by training a classifier of $b$ jointly with the VAE. Results: The distributions obtained from the VAE are similar to the input ones, and the computation time needed to use the VAE as a generator is negligible. Conclusions: We show that it is possible to use a deep learning approach to emulate a model developed to simulate nuclear reactions in the energy range of interest for ion therapy. We foresee implementing the generation part in C++ and interfacing it with the most widely used Monte Carlo toolkit, Geant4.
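The discretisation and generation steps described above can be sketched as follows. This is a minimal toy illustration, not the paper's VAE: a continuous phase-space PDF is binned into a normalised grid (the discrete PDF the VAE learns), and new nucleon cells are then sampled from that grid, which is how a trained generator's output would be used. The Gaussian toy distribution, grid size and sample counts are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy continuous PDF: nucleon positions drawn from a 2D Gaussian
# (a stand-in for one (x, p) plane of the real 6D phase space).
samples = rng.normal(loc=0.0, scale=1.0, size=(10000, 2))

# Discretise into a normalised histogram -> the discrete PDF a VAE would learn.
hist, xedges, yedges = np.histogram2d(samples[:, 0], samples[:, 1],
                                      bins=32, range=[[-4, 4], [-4, 4]])
pdf = hist / hist.sum()

# Generation step: draw nucleon cells from the discrete PDF
# (what a trained decoder's output would be used for).
flat = pdf.ravel()
cells = rng.choice(flat.size, size=1000, p=flat)
ix, iy = np.unravel_index(cells, pdf.shape)
```

In the actual method the decoder emits the discrete PDF directly, so the expensive transport calculation is replaced by this cheap sampling step.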
We present a deep learning model trained to emulate radiative transfer during the epoch of cosmological reionization. CRADLE (Cosmological Reionization And Deep LEarning) is an autoencoder convolutional neural network that takes two-dimensional maps of the star number density and the gas density field at z=6 as inputs and predicts 3D maps of the reionization times $\mathrm{t_{reion}}$ as outputs. These predicted fields alone are sufficient to describe the global reionization history of the intergalactic medium in a given simulation. We trained the model on one simulation and tested the predictions on another simulation with the same parameters but different initial conditions. The model successfully predicts $\mathrm{t_{reion}}$ maps that are in good agreement with the test simulation. We used the power spectrum of the $\mathrm{t_{reion}}$ field as an indicator to validate our model. We show that the network predicts large scales almost perfectly but is somewhat less accurate at smaller scales. While the current model is already well suited to obtaining average estimates of the reionization history, we expect it can be further improved with larger training samples, better data pre-processing and finer tuning of hyper-parameters. Emulators of this kind could be used systematically to rapidly obtain the evolving HII regions associated with hydro-only simulations, and could be seen as precursors of fully emulated physics solvers for future generations of simulations.
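The validation metric mentioned above, a power spectrum of the predicted field, can be sketched in a few lines. This is an illustrative 2D implementation under standard FFT conventions, not the paper's pipeline: the field is Fourier-transformed, the squared amplitudes are binned isotropically in $|k|$, and the binned averages form the spectrum compared between prediction and simulation. The function name and binning choices are assumptions.

```python
import numpy as np

def power_spectrum(field, nbins=16):
    """Isotropically binned power spectrum of a square 2D field (toy sketch)."""
    n = field.shape[0]
    fk = np.fft.fftn(field)
    power = np.abs(fk) ** 2 / field.size          # squared Fourier amplitudes
    kfreq = np.fft.fftfreq(n) * n                 # integer wavenumbers
    kx, ky = np.meshgrid(kfreq, kfreq, indexing="ij")
    knorm = np.sqrt(kx ** 2 + ky ** 2).ravel()
    bins = np.linspace(0.5, n // 2, nbins + 1)    # radial |k| bins
    counts, _ = np.histogram(knorm, bins=bins)
    sums, _ = np.histogram(knorm, bins=bins, weights=power.ravel())
    return sums / np.maximum(counts, 1)           # mean power per |k| bin

field = np.random.default_rng(0).standard_normal((64, 64))
pk = power_spectrum(field)
```

Comparing such spectra bin by bin is what reveals the large-scale/small-scale accuracy split reported in the abstract.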
An accurate description of interactions between thermal neutrons (below 4 eV) and materials is key to simulating the transport of neutrons in a wide range of applications such as criticality safety, reactor physics, compact accelerator-driven neutron sources, radiological shielding and nuclear instrumentation, to name a few. While the Monte Carlo transport code Geant4 was initially developed to simulate particle physics experiments, its use has spread to neutronics applications, requiring evaluated cross-sections for neutrons and gammas between $0$ and $20$ MeV (the so-called neutron High Precision, HP, package), as well as a proper offline or on-the-fly treatment of these cross-sections. In this paper we point out limitations affecting the Geant4 (version 10.07.p01) thermal neutron treatment and the associated nuclear data libraries, through comparisons with the reference Monte Carlo neutron transport code Tripoli, version 11, and we present the results of various modifications of the Geant4 neutron-HP package required to overcome these limitations. In addition, to broaden the support of nuclear data libraries compatible with Geant4, a nuclear data processing tool has been developed and validated, allowing the code to be used with, for example, the ENDF/B-VIII.0 and JEFF-3.3 libraries. These changes should be taken into account in an upcoming Geant4 release.
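The cross-sections discussed above enter transport codes like Geant4 and Tripoli through one elementary Monte Carlo step: sampling a neutron's free-flight distance from the total macroscopic cross-section $\Sigma_t$ via $d = -\ln(\xi)/\Sigma_t$. The sketch below illustrates only this generic textbook step; the $\Sigma_t$ value is an assumed placeholder, not taken from any evaluated library.

```python
import numpy as np

rng = np.random.default_rng(42)

# Assumed total macroscopic cross-section [1/cm]; a real code would
# evaluate it from library data at the neutron's current energy.
sigma_t = 0.5

# Sample free-flight distances from the exponential attenuation law:
# d = -ln(xi) / Sigma_t, with xi uniform in (0, 1].
xi = 1.0 - rng.random(100000)
distances = -np.log(xi) / sigma_t

# The sample mean should approach the mean free path 1 / Sigma_t = 2.0 cm.
mean_free_path = distances.mean()
```

Because this step is repeated billions of times per simulation, any bias in the thermal cross-section treatment propagates directly into the transported flux, which is why the comparisons against a reference code matter.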
The energy loss of energetic ions in solids is crucial in many fields, and accurate prediction of the ion stopping power is a long-standing goal. Despite great efforts, it remains very difficult to find a universal model that accurately predicts the ion stopping power in distinct target materials. Deep learning is a newly emerged approach to multi-factor physical problems that can mine deeply implicit relations among parameters, which makes it a powerful tool for energy loss prediction. In this work, we developed an energy loss prediction model based on deep learning. Where experimental data are available, our model gives predictions with an average absolute difference close to 5.7%, on the same level as other widely used programs such as SRIM. In the regime without experimental data, our model still maintains high performance and is more reliable than existing models. The range of Au ions in SiC can be calculated with a relative error of 0.6-25% for ions in the energy range of 700-10000 keV, much better than the results calculated with SRIM. Moreover, our model supports the reciprocity conjecture for ion stopping power in solids proposed by P. Sigmund, which has been known for a long time but could hardly be tested with any of the existing stopping power models. This high-accuracy energy loss prediction model is important for research on ion-solid interaction mechanisms and the many related applications of energetic ions, such as semiconductor fabrication, nuclear energy systems and space facilities.
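The link between a stopping-power curve and the ion range quoted above is the standard continuous-slowing-down approximation, $R(E_0) = \int_0^{E_0} dE / S(E)$: given any predictor of $S(E)$, the range follows by integration. The sketch below uses a made-up toy stopping-power function purely to show the integration step; it is not the paper's deep learning model or SRIM.

```python
import numpy as np

def stopping_power(E):
    """Toy S(E) in keV/nm with a Bragg-like rise and fall; purely illustrative."""
    return 0.5 * np.sqrt(E) / (1.0 + E / 5000.0)

def csda_range(E0, n=20000):
    """Range in nm from R(E0) = integral of dE / S(E), simple Riemann sum."""
    E = np.linspace(E0 * 1e-6, E0, n)   # start just above 0 to avoid S(0) = 0
    dE = E[1] - E[0]
    return np.sum(1.0 / stopping_power(E)) * dE

r = csda_range(1000.0)  # toy range for a 1000 keV ion
```

Since the integrand is positive, the range grows monotonically with the initial energy, which is why range comparisons over the 700-10000 keV interval are a sensitive test of the underlying stopping-power prediction.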
Nuclear spin-dependent parity violation arises from weak interactions between electrons and nucleons, and from nuclear anapole moments. We outline a method to measure such effects, using a Stark-interference technique to determine the mixing between opposite-parity rotational/hyperfine levels of ground-state molecules. The technique is applicable to nuclei over a wide range of atomic number, in diatomic species that are theoretically tractable for interpretation. This should provide data on anapole moments of many nuclei, and on previously unmeasured neutral weak couplings.
Reduced Order Modeling (ROM) for engineering applications has been a major research focus in the past few decades, owing to the unprecedented physical insight into turbulence offered by high-fidelity CFD. The primary goal of a ROM is to model the key physics and features of a flow-field without computing the full Navier-Stokes (NS) equations. This is accomplished by projecting the high-dimensional dynamics onto a low-dimensional subspace, typically using dimensionality reduction techniques such as Proper Orthogonal Decomposition (POD) coupled with Galerkin projection. In this work, we demonstrate a deep learning-based approach to building a ROM from the POD basis of canonical DNS datasets, for turbulent flow control applications. We find that a type of recurrent neural network, the Long Short-Term Memory (LSTM), which has primarily been used for problems such as speech modeling and language translation, shows attractive potential for modeling the temporal dynamics of turbulence. Additionally, we introduce the Hurst exponent as a tool to study LSTM behavior on non-stationary data, and uncover useful characteristics that may aid ROM development for a variety of applications.
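The POD projection described above can be sketched via the SVD of a snapshot matrix: the left singular vectors give the spatial modes, and projecting the snapshots onto a few leading modes yields the low-dimensional temporal coefficients that a sequence model such as an LSTM would learn to advance. The data below are synthetic random snapshots, not a DNS dataset, and the sizes are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

n_points, n_snapshots, n_modes = 200, 50, 5
snapshots = rng.standard_normal((n_points, n_snapshots))  # columns = flow snapshots

# POD works on fluctuations about the mean flow.
mean_flow = snapshots.mean(axis=1, keepdims=True)
fluct = snapshots - mean_flow

# SVD of the snapshot matrix: left singular vectors are the POD modes,
# ordered by the energy (singular value) they capture.
U, s, Vt = np.linalg.svd(fluct, full_matrices=False)
basis = U[:, :n_modes]                        # truncated POD spatial basis
coeffs = basis.T @ fluct                      # temporal coefficients (LSTM input)
reconstruction = mean_flow + basis @ coeffs   # rank-n_modes flow approximation
```

In the ROM setting, the LSTM is trained on the time series `coeffs` rather than the full field, which is what makes the approach cheap relative to solving the NS equations.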