
FSEI-GPU: GPU accelerated simulations of the fluid-structure-electrophysiology interaction in the left heart

Published by Francesco Viola
Publication date: 2021
Research field: Physics
Paper language: English





The reliability of cardiovascular computational models depends on the accurate solution of the hemodynamics, the realistic characterization of the hyperelastic and electric properties of the tissues, and the correct description of their interaction. The resulting fluid-structure-electrophysiology interaction (FSEI) thus requires an immense computational power, usually available only in large supercomputing centers, and long times to obtain results even when many CPU cores are used (MPI parallelization). In recent years, graphics processing units (GPUs) have emerged as a convenient platform for high-performance computing, as they allow for considerable reductions of the time-to-solution. This approach is particularly appealing if the tool has to support medical decisions, which require solutions within short times, possibly obtained with local computational resources. Accordingly, our multi-physics solver has been ported to GPU architectures using CUDA Fortran to tackle fast and accurate hemodynamics simulations of the human heart without resorting to large-scale supercomputers. This work describes the use of CUDA to accelerate the FSEI on heterogeneous clusters, where the CPUs and GPUs are used synergistically with minor modifications of the original source code. The resulting GPU-accelerated code solves a single heartbeat within a few hours (from three to ten, depending on the grid resolution) running on an on-premises computing facility made of a few GPU cards, which can be easily installed in a medical laboratory or a hospital, thus opening the way towards systematic computational fluid dynamics (CFD) aided diagnostics.
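The time-to-solution gains from offloading the solver to GPUs can be put in perspective with Amdahl's law: if a fraction p of the runtime is accelerated by a factor s, the overall speedup is 1/((1-p) + p/s). A minimal sketch follows; the values of p and s below are purely illustrative and are not taken from the paper.

```python
def amdahl_speedup(p, s):
    """Overall speedup when a fraction p of the runtime is
    accelerated by a factor s (Amdahl's law)."""
    return 1.0 / ((1.0 - p) + p / s)

# Illustrative (hypothetical) numbers: offloading 95% of the work
# with a 40x kernel speedup gives roughly a 13.6x overall gain.
print(round(amdahl_speedup(0.95, 40.0), 1))
```

The formula makes clear why the authors' synergistic use of CPUs and GPUs matters: any portion of the FSEI pipeline left on the CPU bounds the achievable overall speedup.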




Read also

The aortic valve is a three-leaflet passive structure that, driven by pressure differences between the left ventricle and the aorta, opens and closes during the heartbeat to ensure the correct stream direction and flow rate. In elderly individuals or because of particular pathologies, the valve leaflets can stiffen, thus impairing the valve functioning and, in turn, the pumping efficiency of the heart. Using a multi-physics left heart model accounting for the electrophysiology, the active contraction of the myocardium, the hemodynamics and the related fluid-structure interaction, we have investigated the changes in the flow features for different severities of the aortic valve stenosis. We have found that, in addition to the increase of the transvalvular pressure drop and of the systolic jet velocity, a stenotic aortic valve significantly alters the wall shear stresses and their spatial distribution over the aortic arch and valve leaflets, which may induce a remodelling process of the ventricular myocardium. The numerical results from the multi-physics model are fully consistent with the clinical experience, thus further opening the way for computational engineering aided medical diagnostics.
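The transvalvular pressure drop and systolic jet velocity mentioned above are linked, in clinical practice, by the simplified Bernoulli relation used in echocardiography: dP [mmHg] ~= 4 v^2, with the jet velocity v in m/s. A back-of-the-envelope sketch (this is the standard clinical estimate, not the pressure computation of the paper's model):

```python
def transvalvular_gradient_mmHg(v_jet_m_per_s):
    """Peak pressure drop across the valve from the simplified
    Bernoulli relation: dP [mmHg] ~= 4 * v**2, with v in m/s."""
    return 4.0 * v_jet_m_per_s ** 2

# A systolic jet of 4 m/s implies a peak gradient of about 64 mmHg,
# in the range conventionally associated with severe stenosis.
print(transvalvular_gradient_mmHg(4.0))  # 64.0
```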
Magnetohydrodynamical (MHD) dynamos emerge in many different astrophysical situations where turbulence is present, but the interaction between large-scale (LSD) and small-scale dynamos (SSD) is not fully understood. We performed a systematic study of turbulent dynamos driven by isotropic forcing in isothermal MHD with magnetic Prandtl number of unity, focusing on the exponential growth stage. Both helical and non-helical forcing were employed to separate the effects of LSD and SSD in a periodic domain. Reynolds numbers (Rm) up to $\approx 250$ were examined and multiple resolutions used for convergence checks. We ran our simulations with the Astaroth code, designed to accelerate 3D stencil computations on graphics processing units (GPUs) and to employ multiple GPUs with peer-to-peer communication. We observed a speedup of $\approx 35$ in single-node performance compared to the widely used multi-CPU MHD solver Pencil Code. We estimated the growth rates both from the averaged magnetic fields and their power spectra. At low Rm, LSD growth dominates, but at high Rm SSD appears to dominate in both helically and non-helically forced cases. Pure SSD growth rates follow a logarithmic scaling as a function of Rm. Probability density functions of the magnetic field from the growth stage exhibit SSD behaviour in helically forced cases even at intermediate Rm. We estimated mean-field turbulence transport coefficients using closures like the second-order correlation approximation (SOCA). They yield growth rates similar to the directly measured ones and provide evidence of $\alpha$ quenching. Our results are consistent with the SSD inhibiting the growth of the LSD at moderate Rm, while the dynamo growth is enhanced at higher Rm.
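Estimating a growth rate during the exponential stage, as done above from the averaged magnetic fields, amounts to a log-linear fit: if B(t) = B0 * exp(gamma * t), then ln B is linear in t with slope gamma. A minimal sketch of that fit (the authors' actual estimation procedure is not reproduced here):

```python
import math

def growth_rate(times, amplitudes):
    """Least-squares slope of ln(B) versus t: for an exponentially
    growing field B(t) = B0 * exp(gamma * t), the slope of the
    log-linear fit is the growth rate gamma."""
    logs = [math.log(b) for b in amplitudes]
    n = len(times)
    t_mean = sum(times) / n
    l_mean = sum(logs) / n
    num = sum((t - t_mean) * (l - l_mean) for t, l in zip(times, logs))
    den = sum((t - t_mean) ** 2 for t in times)
    return num / den

# Synthetic data with gamma = 0.5 recovers the input rate.
ts = [0.1 * k for k in range(50)]
bs = [2.0 * math.exp(0.5 * t) for t in ts]
print(round(growth_rate(ts, bs), 6))  # 0.5
```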
We present the results of large scale simulations of 4th order nonlinear partial differential equations of diffusion type that are typically encountered when modeling dynamics of thin fluid films on substrates. The simulations are based on the alternate direction implicit (ADI) method, with the main part of the computational work carried out in the GPU computing environment. Efficient and accurate computations allow for simulations on large computational domains in three spatial dimensions (3D) and for long computational times. We apply the methods developed to the particular problem of instabilities of thin fluid films of nanoscale thickness. The large scale of the simulations minimizes the effects of boundaries, and also allows for simulating domains of the size encountered in published experiments. As an outcome, we can analyze the development of instabilities with an unprecedented level of detail. A particular focus is on analyzing the manner in which instability develops, in particular regarding differences between spinodal and nucleation types of dewetting for linearly unstable films, as well as instabilities of metastable films. Simulations in 3D allow for consideration of some recent results that were previously obtained in the 2D geometry (J. Fluid Mech. 841, 925 (2018)). Some of the new results include using Fourier transforms as well as topological invariants (Betti numbers) to distinguish the outcomes of spinodal and nucleation types of instabilities, describing in precise terms the complex processes that lead to the formation of satellite drops, as well as distinguishing the shape of the evolving film front in linearly unstable and metastable regimes. We also discuss direct comparison between simulations and available experimental results for nematic liquid crystal and polymer films.
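The ADI method named above works by splitting each implicit time step into direction-by-direction half-steps, so that every half-step reduces to many independent tridiagonal solves along one grid direction. The core building block is the Thomas algorithm; a minimal pure-Python sketch (not the authors' GPU implementation) is:

```python
def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a, diagonal b,
    super-diagonal c and right-hand side d (Thomas algorithm).
    a[0] and c[-1] are unused.  O(n): one forward elimination
    sweep followed by back substitution."""
    n = len(d)
    cp = [0.0] * n  # modified super-diagonal
    dp = [0.0] * n  # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# In an ADI scheme, a solve like this is applied row by row for the
# x-direction half-step, then column by column for the y-direction.
```

On a GPU, the rows (or columns) are independent, which is what makes the ADI structure attractive for massively parallel hardware.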
A modern graphics processing unit (GPU) is able to perform massively parallel scientific computations at low cost. We extend our implementation of the checkerboard algorithm for the two dimensional Ising model [T. Preis et al., J. Comp. Phys. 228, 4468 (2009)] in order to overcome the memory limitations of a single GPU, which enables us to simulate significantly larger systems. Using multi-spin coding techniques, we are able to accelerate simulations on a single GPU by factors up to 35 compared to an optimized single Central Processing Unit (CPU) core implementation which employs multi-spin coding. By combining the Compute Unified Device Architecture (CUDA) with the Message Passing Interface (MPI) on the CPU level, a single Ising lattice can be updated by a cluster of GPUs in parallel. For large systems, the computation time scales nearly linearly with the number of GPUs used. As proof of concept we reproduce the critical temperature of the 2D Ising model using finite size scaling techniques.
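The checkerboard idea exploits the fact that, with nearest-neighbor coupling, sites of one checkerboard color share no bonds with each other, so each half-sweep can be updated fully in parallel. A serial pure-Python sketch of one such Metropolis sweep (without the multi-spin coding or CUDA/MPI machinery of the paper):

```python
import math
import random

def checkerboard_sweep(spins, beta):
    """One Metropolis sweep of the 2D Ising model (J = 1, periodic
    boundaries): update all 'black' sites, then all 'white' sites.
    Sites of one color have no mutual neighbors, so each half-sweep
    could run fully in parallel -- the property the GPU code exploits."""
    n = len(spins)
    for color in (0, 1):
        for i in range(n):
            for j in range(n):
                if (i + j) % 2 != color:
                    continue
                nb = (spins[(i + 1) % n][j] + spins[(i - 1) % n][j]
                      + spins[i][(j + 1) % n] + spins[i][(j - 1) % n])
                dE = 2.0 * spins[i][j] * nb  # energy cost of flipping
                if dE <= 0 or random.random() < math.exp(-beta * dE):
                    spins[i][j] = -spins[i][j]

random.seed(0)
n = 16
lattice = [[1] * n for _ in range(n)]
for _ in range(100):
    # beta = 1.0 is well above beta_c ~ 0.4407: ordered phase.
    checkerboard_sweep(lattice, beta=1.0)
m = abs(sum(map(sum, lattice))) / n ** 2  # stays close to 1 when ordered
```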
WarpX is a general purpose electromagnetic particle-in-cell code that was originally designed to run on many-core CPU architectures. We describe the strategy followed to allow WarpX to use the GPU-accelerated nodes on OLCF's Summit supercomputer, a strategy we believe will extend to the upcoming machines Frontier and Aurora. We summarize the challenges encountered, lessons learned, and give current performance results on a series of relevant benchmark problems.
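A core step of any particle-in-cell code like WarpX is depositing particle charge onto the grid. A minimal cloud-in-cell (linear weighting) sketch in 1D follows; this is a generic illustration of the technique, not WarpX's implementation (which is 3D, vectorized, and GPU-resident):

```python
def deposit_charge(positions, weights, nx, dx):
    """Cloud-in-cell charge deposition on a periodic 1D grid: each
    particle's weight is split between its two nearest grid points in
    proportion to proximity.  Assumes positions lie in [0, nx * dx).
    Total deposited charge equals total particle charge."""
    rho = [0.0] * nx
    for x, w in zip(positions, weights):
        s = x / dx
        i = int(s) % nx          # left grid point
        frac = s - int(s)        # fractional distance to it
        rho[i] += w * (1.0 - frac)
        rho[(i + 1) % nx] += w * frac
    return rho

# Three particles (hypothetical data) on a 4-cell periodic grid.
rho = deposit_charge([0.25, 1.6, 3.9], [1.0, 1.0, 2.0], nx=4, dx=1.0)
```

Charge conservation of this scheme (the deposited total always matches the particle total) is one reason linear weighting is the standard baseline in PIC codes.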
