Single-particle imaging with X-ray free-electron lasers depends crucially on algorithms that merge large numbers of weak diffraction patterns despite missing measurements of parameters such as particle orientations. The Expand-Maximize-Compress (EMC) algorithm is highly effective at merging single-particle diffraction patterns with missing orientation values, but most implementations exhaustively sample the space of missing parameters and may become computationally prohibitive as the number of degrees of freedom extends beyond orientation angles. Here we describe how the EMC algorithm can be modified to employ Metropolis Monte Carlo sampling rather than grid sampling, which may be favorable for cases with more than three missing parameters. Using simulated data, we compare this variant to the standard EMC algorithm. Higher-dimensional cases of mixed target species and variable X-ray fluence are also explored.
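As a minimal sketch of the sampling step in question (assuming a generic log-likelihood over the missing parameters; the function and variable names below are illustrative, not the authors' implementation), a random-walk Metropolis chain over orientation and fluence parameters could look like this:

```python
import numpy as np

rng = np.random.default_rng(0)

def metropolis_sample(log_likelihood, x0, n_steps=1000, step=0.1):
    """Random-walk Metropolis over the missing parameters x
    (e.g. three orientation angles plus fluence or a species label)."""
    x = np.asarray(x0, dtype=float).copy()
    logp = log_likelihood(x)
    samples = []
    for _ in range(n_steps):
        x_prop = x + step * rng.standard_normal(x.shape)
        logp_prop = log_likelihood(x_prop)
        # accept with probability min(1, p_prop / p_current)
        if np.log(rng.random()) < logp_prop - logp:
            x, logp = x_prop, logp_prop
        samples.append(x.copy())
    return np.array(samples)

# Toy usage: a Gaussian stand-in for the pattern-versus-model log-likelihood.
chain = metropolis_sample(lambda x: -0.5 * np.sum(x**2), x0=np.zeros(4))
print(chain.mean(axis=0))
```

Unlike a grid, the chain's cost per iteration does not grow exponentially with the number of missing parameters, which is the motivation stated above.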
We present MadFlow, the first general multi-purpose framework for Monte Carlo (MC) event simulation of particle physics processes designed to take full advantage of hardware accelerators, in particular graphics processing units (GPUs). Automatically generating all the components required for MC simulation of a generic physics process, and deploying them on hardware accelerators, remains a major challenge. To address it, we design a workflow and code library that allow the user to simulate custom processes through the MadGraph5_aMC@NLO framework, together with a plugin that generates and exports specialized code in a GPU-friendly format. The exported code includes analytic expressions for matrix elements and phase space. The simulation is performed using the VegasFlow and PDFFlow libraries, which automatically deploy the full simulation on systems with different hardware acceleration capabilities, such as multi-threading CPU, single-GPU and multi-GPU setups. The package also provides an asynchronous procedure for storing unweighted events. Crucially, although only leading order is automated, the library provides all the ingredients necessary to build full complex Monte Carlo simulators in a modern, extensible and maintainable way. We show simulation results at leading order for multiple processes on different hardware configurations.
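To illustrate the execution model rather than the MadFlow API itself, the following sketch shows the batched, device-agnostic Monte Carlo evaluation pattern that libraries such as VegasFlow automate; it is plain TensorFlow, and the toy integrand is a stand-in for a matrix element times a phase-space weight (all names are illustrative):

```python
import tensorflow as tf

N_DIM = 3

@tf.function  # traced once; runs unchanged on CPU or GPU
def integrand(x):
    # toy stand-in for |M|^2 times a phase-space weight
    return tf.reduce_prod(tf.sin(3.141592653589793 * x), axis=1)

def mc_integrate(n_events=1_000_000):
    # the whole batch of events is evaluated in one vectorized call,
    # which is what makes accelerator deployment efficient
    x = tf.random.uniform((n_events, N_DIM), dtype=tf.float64)
    f = integrand(x)
    mean = tf.reduce_mean(f)
    err = tf.math.reduce_std(f) / tf.sqrt(tf.cast(n_events, tf.float64))
    return mean, err

value, error = mc_integrate()
print(f"I = {value.numpy():.6f} +/- {error.numpy():.6f}")
```

The key design point is that the per-event loop is replaced by a single tensor operation over the event batch, so the same code saturates a multi-threaded CPU or a GPU without modification.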
We introduce a variant of the Hybrid Monte Carlo (HMC) algorithm to address large-deviation statistics in stochastic hydrodynamics. Based on the path-integral approach to stochastic (partial) differential equations, our HMC algorithm samples space-time histories of the dynamical degrees of freedom under the influence of random noise. First, we validate and benchmark the HMC algorithm by reproducing multiscale properties of the one-dimensional Burgers equation driven by Gaussian, white-in-time noise. Second, we show how to implement an importance sampling protocol that enhances, by orders of magnitude, the probability of sampling extreme and rare events, making it possible to estimate moments of field variables of extremely high order (up to 30 and beyond). By employing reweighting techniques, we map the biased configurations back to the original probability measure in order to probe their statistical importance. Finally, we show that by biasing the system towards very intense negative gradients, the HMC algorithm is able to explore the statistical fluctuations around instanton configurations. Our results are also relevant to lattice gauge theory, since they provide insight into reweighting techniques.
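The core HMC update alternates momentum refreshment, leapfrog integration, and a Metropolis accept/reject test. Below is a minimal, generic sketch (the potential U, its gradient, and the step sizes are illustrative placeholders, not the paper's discretized path action); in the importance-sampling protocol described above, one would add a bias (tilt) term to U and later reweight samples by the exponential of that tilt.

```python
import numpy as np

rng = np.random.default_rng(1)

def hmc_step(q, U, grad_U, eps=0.01, n_leapfrog=20):
    """One HMC update of the state q under potential U(q) = -log p(q)."""
    p = rng.standard_normal(q.shape)            # fresh Gaussian momenta
    q_new, p_new = q.copy(), p.copy()
    # leapfrog integration of the fictitious Hamiltonian dynamics
    p_new -= 0.5 * eps * grad_U(q_new)
    for _ in range(n_leapfrog - 1):
        q_new += eps * p_new
        p_new -= eps * grad_U(q_new)
    q_new += eps * p_new
    p_new -= 0.5 * eps * grad_U(q_new)
    # Metropolis test on the energy change corrects the integrator error
    dH = U(q_new) - U(q) + 0.5 * (np.sum(p_new**2) - np.sum(p**2))
    return q_new if np.log(rng.random()) < -dH else q
```

In the path-integral setting, q would be the flattened space-time noise history, and grad_U is available analytically from the action, which is what makes HMC attractive for such high-dimensional states.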
In these proceedings we present MadFlow, a new framework for automating Monte Carlo (MC) simulation of particle physics processes on graphics processing units (GPUs). To automate MC simulation for a generic set of processes, we design a program that allows the user to simulate custom processes through the MadGraph5_aMC@NLO framework. The pipeline includes a first stage in which the analytic expressions for matrix elements and phase space are generated and exported in a GPU-friendly format. The simulation is then performed using the VegasFlow and PDFFlow libraries, which automatically deploy the full simulation on systems with different hardware acceleration capabilities, such as multi-threading CPU, single-GPU and multi-GPU setups. We show preliminary results for leading-order simulations on different hardware configurations.
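One small ingredient of such a pipeline that is easy to show in isolation is hit-or-miss unweighting, which turns a batch of weighted events into unit-weight events. The sketch below is a generic illustration of that step, not MadFlow's event-handling code:

```python
import numpy as np

rng = np.random.default_rng(2)

def unweight(weights):
    """Hit-or-miss unweighting: keep event i with probability w_i / w_max.
    Surviving events all carry unit weight."""
    w = np.asarray(weights, dtype=float)
    return rng.random(w.size) < w / w.max()

# Toy usage: boolean mask of accepted events for a batch of weighted events.
keep = unweight(rng.exponential(size=10_000))
print(keep.sum(), "unit-weight events kept")
```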
In Monte Carlo particle transport codes it is often important to adjust reaction cross sections to reduce the variance of calculations of relatively rare events, a technique known as non-analog Monte Carlo. We present the theory and sample code for a Geant4 process which allows the cross section of a G4VDiscreteProcess to be scaled, while adjusting track weights so as to mitigate the effects of the altered primary-beam depletion induced by the cross-section change. This makes it possible to increase the cross section of nuclear reactions by factors exceeding 10^4 (in appropriate cases) without distorting the results of energy deposition calculations or coincidence rates. The procedure is also valid for bias factors less than unity, which is useful, for example, in problems involving the computation of particle penetration deep into a target, such as occurs in atmospheric showers or in shielding.
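To make the weight algebra concrete: if the physical cross section sigma is scaled to b*sigma, a track that interacts at depth x must carry weight (1/b)*exp((b-1)*sigma*x), and a track that survives a length L must carry weight exp((b-1)*sigma*L), so that expectations are preserved. The following standalone numerical check (plain Python, not Geant4 code; all symbols are illustrative) compares the biased estimator against the analog interaction probability 1 - exp(-sigma*L):

```python
import numpy as np

rng = np.random.default_rng(3)
sigma, b, L, n = 1e-3, 1e2, 1.0, 200_000   # true xsec, bias factor, slab depth, tracks

x = rng.exponential(1.0 / (b * sigma), n)  # free paths drawn with biased xsec
interacted = x < L
w = np.where(interacted,
             np.exp((b - 1) * sigma * x) / b,   # weight if the track interacted
             np.exp((b - 1) * sigma * L))       # weight if it punched through

# The weighted interaction fraction reproduces the analog expectation,
# but with ~b times more interaction events contributing to the tally.
print("biased estimate :", w[interacted].sum() / n)
print("analog value    :", 1 - np.exp(-sigma * L))
```

The survivor weight is what compensates the artificially enhanced beam depletion when tracks continue past the biased region.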
Parallel tempering Monte Carlo has proven to be an efficient method for optimization and sampling applications. An optimized temperature set enhances the efficiency of the algorithm through more frequent replica visits to the temperature limits. Approaches for finding an optimal temperature set fall into two main categories. Methods in the first category distribute the replicas such that the swapping ratio between neighbouring replicas is constant and independent of the temperature values. Techniques in the second category, including the feedback-optimized method, instead aim for a temperature distribution with higher density at simulation bottlenecks, resulting in temperature-dependent replica-exchange probabilities. In this paper, we compare the performance of various temperature-setting methods on both sparse and fully-connected spin-glass problems, as well as on fully-connected Wishart problems with planted solutions; these two classes of problems have either continuous or discontinuous phase transitions in the order parameter. Our results demonstrate no performance advantage for the methods that promote nonuniform swapping probabilities on spin-glass problems, where the order parameter changes smoothly between phases at the critical temperature. However, on Wishart problems, which have a first-order phase transition at low temperatures, the feedback-optimized method exhibits a time-to-solution speedup of at least a factor of two over the other approaches.
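For reference, the sketch below shows the two ingredients these methods tune: a geometric temperature ladder (a common constant-swap-ratio baseline) and the Metropolis test for exchanging neighbouring replicas. The function names and toy values are illustrative, not taken from the paper; feedback-optimized schemes would replace the geometric ladder with one whose density adapts to the measured replica flow.

```python
import numpy as np

rng = np.random.default_rng(4)

def geometric_ladder(t_min, t_max, n):
    """Geometric temperature set, the usual uniform-swap-ratio baseline."""
    return t_min * (t_max / t_min) ** (np.arange(n) / (n - 1))

def try_swap(E_i, E_j, T_i, T_j):
    """Metropolis test for exchanging neighbouring replicas i and j:
    accept with probability min(1, exp((1/T_i - 1/T_j) * (E_i - E_j)))."""
    delta = (1.0 / T_i - 1.0 / T_j) * (E_i - E_j)
    return np.log(rng.random()) < delta

# Toy usage: an 8-replica ladder and one swap attempt between its ends.
T = geometric_ladder(0.5, 5.0, 8)
print(T)
print(try_swap(E_i=-10.0, E_j=-12.0, T_i=T[0], T_j=T[1]))
```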