Modelling neutral beam injection (NBI) in fusion reactors requires computing the trajectories of large ensembles of particles. Slowing-down times of up to one second combined with nanosecond time steps make these simulations computationally very costly. This paper explores the performance of BGSDC, a new numerical time stepping method, for tracking ions generated by NBI in the DIII-D and JET reactors. BGSDC is a high-order generalisation of the Boris method, combining it with spectral deferred corrections and the Generalized Minimal Residual method (GMRES). Without collision modelling, where numerical drift can be quantified accurately, we find that BGSDC can deliver higher-quality particle distributions than the standard Boris integrator at comparable cost, or comparable distributions at lower cost. With collision models, quantifying accuracy is difficult, but we show that BGSDC produces stable distributions at larger time steps than Boris.
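For context on the integrator that BGSDC generalises, the following is a minimal sketch of one step of the standard (second-order) Boris push in Python. It is not BGSDC itself: the spectral deferred correction sweeps and GMRES acceleration are not reproduced, and the fields, units, and step size in the usage example are placeholders.

```python
import numpy as np

def boris_push(x, v, E, B, dt, q=1.0, m=1.0):
    """One step of the classical Boris scheme for dx/dt = v,
    dv/dt = (q/m) * (E + v x B), with fields evaluated at x."""
    # Half acceleration by the electric field
    v_minus = v + 0.5 * dt * (q / m) * E
    # Rotation by the magnetic field
    t = 0.5 * dt * (q / m) * B
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)
    # Second half acceleration and position update
    v_new = v_plus + 0.5 * dt * (q / m) * E
    x_new = x + dt * v_new
    return x_new, v_new

# Illustrative usage: gyration in a uniform magnetic field
x, v = np.zeros(3), np.array([1.0, 0.0, 0.0])
E, B = np.zeros(3), np.array([0.0, 0.0, 1.0])
for _ in range(1000):
    x, v = boris_push(x, v, E, B, dt=0.05)
```

In a pure magnetic field this update conserves the particle speed exactly, which is the property that makes Boris-type schemes attractive for long slowing-down simulations.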
The tritium breeding ratio (TBR) is an essential quantity for the design of modern and next-generation D-T fueled nuclear fusion reactors. Representing the ratio between tritium fuel generated in breeding blankets and fuel consumed during reactor runtime, the TBR depends on reactor geometry and material properties in a complex manner. In this work, we explored the training of surrogate models to produce a cheap but high-quality approximation for a Monte Carlo TBR model in use at the UK Atomic Energy Authority. We investigated possibilities for dimensional reduction of its feature space, reviewed 9 families of surrogate models for potential applicability, and performed hyperparameter optimisation. Here we present the performance and scaling properties of these models, the fastest of which, an artificial neural network, demonstrated $R^2=0.985$ and a mean prediction time of $0.898\,\mu\mathrm{s}$, representing a relative speedup of $8\cdot 10^6$ with respect to the expensive MC model. We further present a novel adaptive sampling algorithm, Quality-Adaptive Surrogate Sampling, capable of interfacing with any of the individually studied surrogates. Our preliminary testing on a toy TBR theory has demonstrated the efficacy of this algorithm for accelerating the surrogate modelling process.
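The surrogate workflow described above can be illustrated with a small, hypothetical sketch: a scikit-learn neural network is fit to samples of a stand-in function and scored on held-out data. The function `expensive_tbr_model` below is invented purely for illustration and is not the UKAEA Monte Carlo model, and the Quality-Adaptive Surrogate Sampling algorithm is not reproduced.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Hypothetical stand-in for the expensive Monte Carlo TBR model:
# any smooth function of the (scaled) design parameters will do here.
def expensive_tbr_model(X):
    return 1.1 + 0.3 * np.sin(3 * X[:, 0]) * np.exp(-X[:, 1]) + 0.1 * X[:, 2] ** 2

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(5000, 3))   # toy reactor parameters
y = expensive_tbr_model(X)                  # "labels" from the slow model

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
surrogate.fit(X_tr, y_tr)

print("R^2 on held-out samples:", r2_score(y_te, surrogate.predict(X_te)))
```

Once trained, the surrogate answers queries in microseconds rather than the minutes-to-hours of a Monte Carlo run, which is the source of the large relative speedup quoted above.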
In this paper we describe the development and first tests of a neutron spectrometer designed for high-flux environments, such as those found in fast nuclear reactors. The spectrometer is based on the conversion of neutrons impinging on $^6$Li into an $\alpha$ particle and a triton ($t$), whose total energy comprises the initial neutron energy and the reaction $Q$-value. The $^6$LiF converter layer is sandwiched between two CVD diamond detectors, which measure the two reaction products in coincidence. The spectrometer was calibrated at two neutron energies, using well-known thermal and 3 MeV neutron fluxes. The measured neutron detection efficiency varies from 4.2$\times 10^{-4}$ to 3.5$\times 10^{-8}$ for thermal and 3 MeV neutrons, respectively. These values are in agreement with Geant4 simulations and close to simple estimates based on the knowledge of the $^6$Li(n,$\alpha$)$t$ cross section. The energy resolution of the spectrometer was found to be better than 100 keV when using 5 m cables between the detector and the preamplifiers.
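The energy reconstruction implied by the abstract (the summed product energies comprise the neutron energy plus the reaction $Q$-value) can be written down directly. In the sketch below, the 4.78 MeV $Q$-value is the standard nuclear-data value for $^6$Li(n,$\alpha$)$t$ rather than a number taken from the paper, and the example deposition energies are illustrative only.

```python
Q_LI6_N_ALPHA_T = 4.78  # MeV, standard Q-value of the 6Li(n,alpha)t reaction

def neutron_energy(e_alpha, e_triton, q_value=Q_LI6_N_ALPHA_T):
    """Reconstruct the incident neutron energy (MeV) from the energies of
    the two reaction products measured in coincidence: E_n = E_a + E_t - Q."""
    return e_alpha + e_triton - q_value

# Hypothetical coincidence event: ~2.3 MeV and ~5.5 MeV deposited in the
# two diamond detectors corresponds to a ~3 MeV incident neutron.
print(neutron_energy(2.3, 5.5))  # ~3.0 MeV
```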
A computational fluid dynamics (CFD) simulation framework for predicting complex flows is developed on the Tensor Processing Unit (TPU) platform. The TPU architecture features accelerated dense matrix multiplication, large high-bandwidth memory, and a fast inter-chip interconnect, which make it attractive for high-performance scientific computing. The CFD framework solves the variable-density Navier-Stokes equations using a low-Mach approximation, and the governing equations are discretized by a finite difference method on a collocated structured mesh. It uses graph-based TensorFlow as the programming paradigm. The accuracy and performance of this framework are studied both numerically and analytically, focusing specifically on the effects of TPU-native single-precision floating-point arithmetic on solution accuracy. The algorithm and implementation are validated with canonical 2D and 3D Taylor-Green vortex simulations. To demonstrate the capability for simulating turbulent flows, simulations are conducted for two configurations, namely decaying homogeneous isotropic turbulence and a turbulent planar jet. Both simulations show good statistical agreement with reference solutions. The performance analysis shows linear weak scaling and super-linear strong scaling up to a full TPU v3 pod with 2048 cores.
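The discretisation style described above (finite differences on a structured mesh expressed as TensorFlow graph operations in single precision) can be illustrated with a small sketch. This is not the authors' framework; it is just a periodic central-difference Laplacian in float32, checked against a Taylor-Green-like field whose exact Laplacian is known.

```python
import tensorflow as tf

def laplacian_2d(f, dx):
    """Second-order central-difference Laplacian on a periodic,
    collocated structured mesh, written as TensorFlow ops."""
    d2x = (tf.roll(f, -1, axis=0) - 2.0 * f + tf.roll(f, 1, axis=0)) / dx**2
    d2y = (tf.roll(f, -1, axis=1) - 2.0 * f + tf.roll(f, 1, axis=1)) / dx**2
    return d2x + d2y

# Taylor-Green-like field in single precision (TPU-native float32)
n, L = 256, 2.0 * 3.14159265
x = tf.linspace(0.0, L, n + 1)[:-1]
xx, yy = tf.meshgrid(x, x, indexing="ij")
u = tf.cast(tf.sin(xx) * tf.cos(yy), tf.float32)

lap_u = laplacian_2d(u, dx=L / n)
# For sin(x)cos(y), the exact Laplacian is -2*u; report the discrete error.
print(float(tf.reduce_max(tf.abs(lap_u + 2.0 * u))))
```

Expressing stencils as whole-array tensor operations like this is what lets the XLA/TPU toolchain fuse and distribute them; the error printed at the end reflects both the second-order truncation error and float32 round-off.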
We present a particle method for estimating the curvature of interfaces in volume-of-fluid simulations of multiphase flows. The method is well suited for under-resolved interfaces and is shown to be more accurate than the parabolic fitting typically employed in such cases. The curvature is computed from the equilibrium positions of particles constrained to circular arcs and attracted to the interface. The proposed particle method is combined with the method of height functions at higher resolutions and is shown to outperform current combinations of height functions and parabolic fitting. The algorithm is conceptually simple and straightforward to implement in new and existing software frameworks for multiphase flow simulations, thus enhancing their capabilities in challenging flow problems. We evaluate the proposed hybrid method on a number of two- and three-dimensional benchmark flow problems and illustrate its capabilities in simulations of flows involving bubble coalescence and turbulent multiphase flows.
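The idea of recovering curvature from circular arcs can be illustrated with a simple algebraic (Kåsa) least-squares circle fit, where the curvature is the inverse of the fitted radius. This is only a stand-in for, not a reproduction of, the paper's particle-equilibrium formulation, and the sample points below are synthetic.

```python
import numpy as np

def curvature_from_points(x, y):
    """Algebraic (Kasa) least-squares circle fit through 2D interface
    points; returns the curvature as the inverse of the fitted radius."""
    # (x-a)^2 + (y-b)^2 = R^2  rewritten as  2ax + 2by + c = x^2 + y^2,
    # with c = R^2 - a^2 - b^2, is linear in (a, b, c).
    A = np.column_stack([2.0 * x, 2.0 * y, np.ones_like(x)])
    rhs = x**2 + y**2
    (xc, yc, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    radius = np.sqrt(c + xc**2 + yc**2)
    return 1.0 / radius

# Noisy samples from an arc of radius 0.25 -> expected curvature ~4
theta = np.linspace(0.0, 0.8 * np.pi, 20)
x = 0.25 * np.cos(theta) + 1e-3 * np.random.randn(theta.size)
y = 0.25 * np.sin(theta) + 1e-3 * np.random.randn(theta.size)
print(curvature_from_points(x, y))  # ~4.0
```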
Solving linear systems and computing eigenvalues are two fundamental problems in linear algebra. For solving linear systems, many efficient quantum algorithms have been discovered. For computing eigenvalues, efficient quantum algorithms currently exist for Hermitian and unitary matrices, but the general case is far from fully understood. Combining quantum phase estimation, the quantum algorithm for solving linear differential equations, and quantum singular value estimation, we propose two quantum algorithms for computing the eigenvalues of two classes of matrices: diagonalizable matrices that have only real eigenvalues, and normal matrices. The output of the quantum algorithms is a superposition of the eigenvalues and the corresponding eigenvectors. The complexities are dominated by solving a linear system of ODEs and performing quantum singular value estimation, which can usually be done efficiently on a quantum computer. In the special case when the matrix $M$ is $s$-sparse, the complexity is $\widetilde{O}(s\rho^2\kappa^2/\epsilon^2)$ for diagonalizable matrices with only real eigenvalues, and $\widetilde{O}(s\rho|M|_{\max}/\epsilon^2)$ for normal matrices. Here $\rho$ is an upper bound on the eigenvalues, $\kappa$ is the condition number of the eigenvalue problem, and $\epsilon$ is the precision to which the eigenvalues are approximated. We also extend the quantum algorithm to diagonalizable matrices with complex eigenvalues under an extra assumption.
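A purely classical sketch can clarify the problem class and the parameters in the complexity bounds: a diagonalizable, non-Hermitian matrix with a real spectrum, the eigenvalue bound $\rho$, and a conditioning measure $\kappa$ (taken here, as one common reading, to be the condition number of the eigenvector matrix, as in the Bauer-Fike theorem; the paper's precise definition may differ). The quantum algorithms themselves are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6

# Build a diagonalizable, non-Hermitian matrix with real eigenvalues:
# M = V diag(lambda) V^{-1} with a random eigenvector matrix V.
lam = rng.uniform(-1.0, 1.0, n)          # real spectrum, |lambda| <= rho = 1
V = rng.normal(size=(n, n))
M = V @ np.diag(lam) @ np.linalg.inv(V)

eigvals = np.linalg.eigvals(M)
kappa = np.linalg.cond(V)                # one measure of eigenproblem conditioning

print(np.sort(eigvals.real))             # recovers lam up to round-off
print(np.max(np.abs(eigvals.imag)))      # imaginary parts are round-off only
print("kappa =", kappa)
```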