The Lorentz equations describe the motion of electrically charged particles in electric and magnetic fields and are used widely in plasma physics. The most popular numerical algorithm for solving them is the Boris method, a variant of the Störmer-Verlet algorithm. The Boris method is phase-space volume conserving, and simulated particles typically remain near the correct trajectory. However, it is only second-order accurate. Therefore, in scenarios where it is not enough to know that a particle stays on the right trajectory, but one needs to know where on the trajectory the particle is at a given time, the Boris method requires very small time steps to deliver accurate phase information, making it computationally expensive. We derive an improved version of the high-order Boris spectral deferred correction algorithm (Boris-SDC) by adopting a convergence acceleration strategy for second-order problems based on the Generalised Minimum Residual (GMRES) method. Our new algorithm is easy to implement as it still relies on the standard Boris method. Like Boris-SDC, it can deliver arbitrary order of accuracy through a simple change of a runtime parameter, but it possesses better long-term energy stability. We demonstrate for two examples, a magnetic mirror trap and the Solov'ev equilibrium, that the new method can deliver better accuracy at lower computational cost compared to the standard Boris method. While our examples are motivated by tracking ions in the magnetic field of a nuclear fusion reactor, the introduced algorithm can potentially deliver similar improvements in efficiency for other applications.
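For reference, the standard Boris push on which the accelerated Boris-SDC method builds can be sketched as follows. This is a minimal, illustrative implementation of the classical half-kick/rotation/half-kick update; the field-evaluation interface (`E(x)`, `B(x)` as callables) and the charge-to-mass ratio `q_m` are assumptions for the sketch, not code from the paper.

```python
import numpy as np

def boris_push(x, v, E, B, q_m, dt):
    """One step of the classical Boris push for dv/dt = q/m (E + v x B).

    x, v  : 3-vectors (position, velocity)
    E, B  : callables returning the fields at a position
    q_m   : charge-to-mass ratio
    dt    : time step
    """
    # First half acceleration by the electric field
    v_minus = v + 0.5 * q_m * dt * E(x)
    # Rotation of the velocity around the magnetic field direction
    t = 0.5 * q_m * dt * B(x)
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)
    # Second half acceleration, then position update
    v_new = v_plus + 0.5 * q_m * dt * E(x)
    x_new = x + dt * v_new
    return x_new, v_new
```

Because the magnetic update is a pure rotation, the particle speed is conserved exactly when E = 0, which is the property behind the method's good long-term behaviour.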
We show that for the simulation of crack propagation in quasi-brittle, two-dimensional solids, very good results can be obtained with an embedded strong discontinuity quadrilateral finite element that has incompatible modes. Even more importantly, we demonstrate that these results can be obtained without using a crack tracking algorithm. Therefore, the simulation of crack patterns with several cracks, including branching, becomes possible. The avoidance of a tracking algorithm is mainly enabled by the application of a novel, local (Gauss-point based) criterion for crack nucleation, which determines the time of embedding the localisation line as well as its position and orientation. We treat the crack evolution in terms of a thermodynamical framework, with softening variables describing internal dissipative mechanisms of material degradation. As shown by numerical examples, many elements in the mesh may develop a crack, but only some of them actually open and/or slide, dissipate fracture energy, and eventually form the crack pattern. The novel approach has been implemented for statics and dynamics, and the results of challenging computed examples (including Kalthoff's test) illustrate its very satisfying performance. It effectively overcomes an unfavourable restriction of the standard embedded strong discontinuity formulations, namely that only the propagation of a single crack can be simulated. Moreover, it is computationally fast and straightforward to implement. Our numerical solutions match the results of experimental tests and previously reported numerical results in terms of crack pattern, dissipated fracture energy, and load-displacement curve.
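A local Gauss-point nucleation criterion of the kind described above can be sketched as follows. This is a hedged illustration using a common Rankine-type (maximum principal stress) choice, which determines both the moment of embedding and the orientation of the localisation line; the paper's actual criterion may differ in detail, and the function and parameter names are hypothetical.

```python
import numpy as np

def crack_nucleation(stress, f_t):
    """Rankine-type check at a single Gauss point (illustrative sketch).

    stress : 2x2 plane stress tensor [[sxx, sxy], [sxy, syy]]
    f_t    : tensile strength of the material

    Returns (nucleate, n): whether a crack should be embedded, and the
    crack normal, taken as the major principal stress direction.
    """
    eigvals, eigvecs = np.linalg.eigh(np.asarray(stress, dtype=float))
    sigma_1 = eigvals[-1]          # major principal stress
    n = eigvecs[:, -1]             # its direction, used as crack normal
    return sigma_1 >= f_t, n
```

The localisation line would then be embedded through the element perpendicular to `n`, so that the discontinuity opens against the largest tensile stress.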
Stiffness degradation and progressive failure of composite laminates are complex processes involving evolution and multi-mode interactions among fiber fractures, intra-ply matrix cracks, and inter-ply delaminations. This paper presents a novel finite element model capable of explicitly treating such discrete failures in laminates of arbitrary layup. Matching of nodes is guaranteed at potential crack bifurcations to ensure correct displacement jumps near crack tips and explicit load transfer among cracks. The model is entirely geometry-based (no mesh prerequisite), with distinct segments assembled together using surface-based tie constraints, and thus requires no element partitioning or enrichment. Several numerical examples are included to demonstrate the model's ability to generate results that are in qualitative and quantitative agreement with experimental observations on both damage evolution and tensile strength of specimens. The present model is believed to be unique in realizing simultaneous and accurate coupling of all three types of failures in laminates having arbitrary ply angles and layup.
We construct a high-order adaptive time stepping scheme for vesicle suspensions with viscosity contrast. The high-order accuracy is achieved using a spectral deferred correction (SDC) method, and adaptivity is achieved by estimating the local truncation error from the numerical drift of quantities that are constant in the exact physics. Numerical examples demonstrate that our method can handle suspensions with vesicles that are tumbling, tank-treading, or both. Moreover, we demonstrate that a user-prescribed tolerance can be automatically achieved for simulations with long time horizons.
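Once a local error estimate is available (here from the drift of physically conserved quantities), the step size can be adapted with a standard controller. The following is a generic textbook step-size rule, shown only to illustrate the mechanism; the safety factor, clipping bounds, and the use of the SDC sweep's local order are assumptions, not values from the paper.

```python
def adapt_dt(dt, err, tol, order, safety=0.9):
    """Propose the next time step from an estimated local error.

    dt     : current time step
    err    : estimated local error (e.g. drift of a conserved quantity)
    tol    : user-prescribed tolerance
    order  : order of accuracy of the time stepper

    Scales dt by (tol/err)^(1/(order+1)), damped by a safety factor and
    clipped so the step never changes by more than a factor of two.
    """
    factor = safety * (tol / err) ** (1.0 / (order + 1))
    return dt * min(2.0, max(0.5, factor))
```

A step whose estimated error exceeds the tolerance would be rejected and retried with the smaller proposed `dt`, which is how a user-prescribed tolerance can be maintained over long time horizons.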
We present a novel approach which aims at high-performance uncertainty quantification for cardiac electrophysiology simulations. Employing the monodomain equation to model the transmembrane potential inside the cardiac cells, we evaluate the effect of spatially correlated perturbations of the heart fibers on the statistics of the resulting quantities of interest. Our methodology relies on a close integration of multilevel quadrature methods, parallel iterative solvers, and space-time finite element discretizations, allowing for a fully parallelized framework in space, time, and stochastics. Extensive numerical studies are presented to evaluate convergence rates and to compare the performance of single-level methods, i.e. standard Monte Carlo (MC) and quasi-Monte Carlo (QMC), with multilevel strategies, i.e. multilevel Monte Carlo (MLMC) and multilevel quasi-Monte Carlo (MLQMC), on hierarchies of nested meshes. Finally, we employ a recently suggested variant of the multilevel approach for non-nested meshes to deal with a realistic heart geometry.
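The multilevel strategies compared above all rest on the MLMC telescoping identity E[Q_L] = E[Q_0] + sum_l E[Q_l - Q_{l-1}], where most samples are taken on coarse, cheap levels. A minimal sketch of the plain MLMC estimator is given below; the `sampler(level, n)` interface, which returns paired fine- and coarse-level evaluations driven by the same random inputs, is an assumed interface for illustration, not the framework's API.

```python
import numpy as np

def mlmc_estimate(sampler, n_samples):
    """Plain multilevel Monte Carlo estimator (illustrative sketch).

    sampler(level, n) : returns two arrays of length n with evaluations of
                        Q_level and Q_{level-1} on the same random inputs
                        (Q_{-1} is taken as 0 on the coarsest level)
    n_samples         : samples per level, e.g. [N0, N1, ..., NL]

    Sums the sample means of the level differences, exploiting that
    Var[Q_l - Q_{l-1}] shrinks as the meshes resolve the solution.
    """
    total = 0.0
    for level, n in enumerate(n_samples):
        fine, coarse = sampler(level, n)
        total += np.mean(fine - coarse)
    return total
```

MLQMC follows the same telescoping structure but replaces the pseudo-random samples on each level with quasi-Monte Carlo point sets.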
Getting good speedup -- let alone high parallel efficiency -- for parallel-in-time (PinT) integration examples can be frustratingly difficult. The high complexity and large number of parameters in PinT methods can easily (and unintentionally) lead to numerical experiments that overestimate the algorithm's performance. In the tradition of Bailey's article "Twelve ways to fool the masses when giving performance results on parallel computers", we discuss and demonstrate pitfalls to avoid when evaluating the performance of PinT methods. Despite being written in a light-hearted tone, this paper is intended to raise awareness that there are many ways to unintentionally fool yourself and others, and that by avoiding these fallacies more meaningful PinT performance results can be obtained.