Numerical simulations of comminution processes inside the vial of a ball mill are performed using the Monte Carlo method. The internal dynamics is represented by a recently developed model based on a Hamiltonian involving the impact and surrounding electromagnetic potentials. The paper focuses on the behavior of the normalized macroscopic pressure, $P/P_0$, in terms of the system temperature and the milled powder mass. The results provide theoretical justification that high efficiency is expected in the low system-temperature region. It is argued that keeping the system temperature as low as possible is crucial to prevent agglomeration, which is a severe obstacle to further comminution.
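As a rough illustration of the Monte Carlo machinery referred to above, the sketch below runs a canonical Metropolis sampler over a toy potential and scans a thermodynamic observable against temperature. The potential `v_toy` and the observable are placeholders of my own, not the paper's Hamiltonian with impact and electromagnetic terms.

```python
import numpy as np

rng = np.random.default_rng(0)

def v_toy(x):
    # Placeholder potential standing in for the paper's impact +
    # electromagnetic terms, which are not reproduced here.
    return 0.5 * np.sum(x**2)

def metropolis(n_steps, n_particles, temperature, step=0.1):
    """Canonical Metropolis sampling of a toy Hamiltonian; returns the mean energy."""
    beta = 1.0 / temperature
    x = rng.normal(size=n_particles)
    e = v_toy(x)
    energies = []
    for _ in range(n_steps):
        i = rng.integers(n_particles)          # pick one coordinate to perturb
        trial = x.copy()
        trial[i] += step * rng.normal()
        e_trial = v_toy(trial)
        # Standard Metropolis accept/reject with Boltzmann weight exp(-beta*dE)
        if e_trial <= e or rng.random() < np.exp(-beta * (e_trial - e)):
            x, e = trial, e_trial
        energies.append(e)
    return np.mean(energies)

# Scan the observable over temperature, analogous to tracking P/P0 versus T.
for T in (0.5, 1.0, 2.0):
    print(T, metropolis(20000, 50, T))
```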
Two theory-driven models of electron ionization cross sections, the Binary-Encounter-Bethe model and the Deutsch-Mark model, have been designed and implemented; they are intended to extend the simulation capabilities of the Geant4 toolkit. The resulting values, along with the cross sections included in the EEDL data library, have been compared against an extensive set of experimental data covering more than 50 elements across the periodic table.
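For context, the Binary-Encounter-Bethe cross section has a closed form (Kim and Rudd) that sums a per-orbital contribution built from the binding energy B, orbital kinetic energy U, and occupation number N. The sketch below is a minimal stand-alone evaluation of that formula, not the Geant4 or EEDL code, and the hydrogen orbital parameters in the example are only illustrative.

```python
import numpy as np

A0 = 0.529177e-10   # Bohr radius in m
RYD = 13.6057       # Rydberg energy in eV

def beb_cross_section(T, orbitals):
    """Total BEB electron-ionization cross section (m^2) at incident energy T (eV).

    `orbitals` is a list of (B, U, N) tuples: binding energy B (eV),
    orbital kinetic energy U (eV), and occupation number N.
    """
    sigma = 0.0
    for B, U, N in orbitals:
        if T <= B:                       # ionization channel closed below B
            continue
        t, u = T / B, U / B
        S = 4.0 * np.pi * A0**2 * N * (RYD / B)**2
        sigma += S / (t + u + 1.0) * (
            0.5 * np.log(t) * (1.0 - 1.0 / t**2)
            + 1.0 - 1.0 / t
            - np.log(t) / (t + 1.0)
        )
    return sigma

# Example: atomic hydrogen, a single orbital with B = U = 13.6 eV and N = 1.
print(beb_cross_section(100.0, [(13.6, 13.6, 1)]))
```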
Accurate simulations of isotropic permanent magnets require taking the magnetization process into account and considering the anisotropic, nonlinear, and hysteretic material behaviour near the saturation configuration. An efficient method for the solution of the magnetostatic Maxwell equations, including the description of isotropic permanent magnets, is presented. The algorithm can easily be implemented on top of existing finite element methods and does not require a full characterization of the hysteresis of the magnetic material. Stray-field measurements of an isotropic permanent magnet and simulation results are in good agreement and highlight the importance of a proper description of the isotropic material.
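For orientation, the underlying magnetostatic problem (∇·B = 0 with B = μ₀(H + M) and H = −∇φ, hence ∇²φ = ∇·M) can be sketched with a simple periodic scalar-potential solve. The snippet below assumes a fixed, known magnetization on a uniform 2-D grid; it says nothing about the paper's finite-element algorithm or its treatment of hysteresis, and all numerical values are illustrative.

```python
import numpy as np

MU0 = 4e-7 * np.pi

def demag_field(mx, my, dx):
    """H = -grad(phi) from div(B) = 0 with B = mu0*(H + M), periodic boundaries.

    Solves laplace(phi) = div(M) spectrally on a uniform 2-D grid.
    """
    n = mx.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                              # avoid division by zero (mean mode)
    div_m = 1j * kx * np.fft.fft2(mx) + 1j * ky * np.fft.fft2(my)
    phi = -div_m / k2
    phi[0, 0] = 0.0
    hx = np.real(np.fft.ifft2(-1j * kx * phi))  # H = -grad(phi)
    hy = np.real(np.fft.ifft2(-1j * ky * phi))
    return hx, hy

# Uniformly magnetized square block inside an air box (assumed, ferrite-like values).
n, dx = 128, 1e-3
mx = np.zeros((n, n)); my = np.zeros((n, n))
my[48:80, 48:80] = 8e5                          # magnetization in A/m
hx, hy = demag_field(mx, my, dx)
bx, by = MU0 * hx, MU0 * (hy + my)
print(by[64, 64])                               # B inside the magnet (T)
```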
In these proceedings we present MadFlow, a new framework for the automation of Monte Carlo (MC) simulation on graphics processing units (GPUs) for particle physics processes. In order to automate MC simulation for a generic number of processes, we design a program that allows the user to simulate custom processes through the MadGraph5_aMC@NLO framework. The pipeline includes a first stage in which the analytic expressions for the matrix elements and the phase space are generated and exported in a GPU-compatible format. The simulation is then performed using the VegasFlow and PDFFlow libraries, which automatically deploy the full simulation on systems with different hardware acceleration capabilities, such as multi-threading CPU, single-GPU, and multi-GPU setups. We show some preliminary results for leading-order simulations on different hardware configurations.
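The event-wise structure that such a pipeline parallelizes can be illustrated with a plain Monte Carlo estimate over a unit hypercube: sample phase-space points, evaluate a weight per event, and accumulate mean and error. The sketch below uses a toy integrand named `matrix_element` of my own; it is not the MadFlow, VegasFlow, or PDFFlow API and performs no importance sampling.

```python
import numpy as np

rng = np.random.default_rng(0)

def matrix_element(x):
    # Toy stand-in for a squared matrix element evaluated on phase-space
    # points x in the unit hypercube (not a real MadGraph5_aMC@NLO process).
    return np.prod(np.exp(-5.0 * x), axis=-1)

def mc_integrate(f, dims, n_events):
    """Plain Monte Carlo estimate of the integral of f over [0,1]^dims."""
    x = rng.random((n_events, dims))          # one row per event
    w = f(x)                                  # per-event weights
    return w.mean(), w.std() / np.sqrt(n_events)

estimate, error = mc_integrate(matrix_element, dims=3, n_events=10**6)
print(f"integral = {estimate:.6f} +/- {error:.6f}")
# Exact value for this toy integrand: ((1 - exp(-5)) / 5) ** 3 ~ 0.00784
```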
Numerical solution of reaction-diffusion equations in three dimensions is one of the most challenging applied mathematical problems. Since these simulations are very time consuming, any idea or strategy aimed at reducing CPU time is an important topic of research. A general and robust approach is the parallelization of the source code. Recently, the technological development of graphics hardware has made it possible to use desktop video cards to solve numerically intensive problems. We present a powerful parallel computing framework to solve reaction-diffusion equations numerically using Graphics Processing Units (GPUs) with CUDA. Four different reaction-diffusion problems were solved: (i) diffusion of a chemically inert compound, (ii) Turing pattern formation, (iii) phase separation in the wake of a moving diffusion front, and (iv) air pollution dispersion; in addition, both the Shared method and the Moving Tiles method were tested. Our results show that the parallel implementation achieves typical acceleration factors of 5-40 compared to a single-threaded CPU implementation on a 2.8 GHz desktop computer.
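As a point of reference for what each GPU thread computes in such a solver, the sketch below performs the corresponding explicit finite-difference update in NumPy on a 2-D grid, with an assumed Gray-Scott-type reaction term as a stand-in for the Turing-pattern case. It is not the paper's CUDA implementation, and the parameters are illustrative.

```python
import numpy as np

def step(u, v, du, dv, f, dt, dx):
    """One explicit Euler step of a two-species reaction-diffusion system
    with periodic boundaries; np.roll realizes the 5-point Laplacian stencil
    that a CUDA kernel would evaluate once per grid point."""
    def lap(a):
        return (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
                np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4.0 * a) / dx**2
    ru, rv = f(u, v)
    return u + dt * (du * lap(u) + ru), v + dt * (dv * lap(v) + rv)

def gray_scott(u, v, F=0.037, k=0.06):
    # Assumed illustrative kinetics, not the reaction terms used in the paper.
    uvv = u * v * v
    return -uvv + F * (1.0 - u), uvv - (F + k) * v

rng = np.random.default_rng(1)
u = np.ones((256, 256)); v = np.zeros((256, 256))
v[118:138, 118:138] = 0.5 + 0.1 * rng.random((20, 20))   # small central seed
for _ in range(2000):
    u, v = step(u, v, du=0.16, dv=0.08, f=gray_scott, dt=1.0, dx=1.0)
```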