
Network Compression for Machine-Learnt Fluid Simulations

Added by Peetak Mitra
Publication date: 2021
Fields: Physics
Language: English





Multi-scale, multi-fidelity numerical simulations form the pillar of scientific applications related to numerically modeling fluids. However, simulating fluid behavior governed by the non-linear Navier-Stokes equations is often computationally expensive. Physics-informed machine learning methods are a viable alternative and as such have seen great interest in the community [refer to Kutz (2017); Brunton et al. (2020); Duraisamy et al. (2019) for detailed reviews on this topic]. For full physics emulators, the cost of network inference is often trivial. However, in the current paradigm of data-driven fluid mechanics, models are built as surrogates for complex sub-processes. These models are then used in conjunction with Navier-Stokes solvers, which makes ML model inference an important factor in terms of algorithmic latency. With the ever-growing size of networks, which are often overparameterized, exploring effective network compression techniques becomes not only relevant but critical for engineering systems design. In this study, we explore the applicability of pruning and quantization (FP32 to int8) methods for one such application relevant to modeling fluid turbulence. Post-compression, we demonstrate the improvement in the accuracy of network predictions and build intuition in the process by comparing the compressed to the original network state.
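The two compression techniques named in the abstract can be sketched in a framework-agnostic way. The snippet below is a minimal illustration, assuming magnitude-based pruning and symmetric per-tensor int8 quantization; the function names and the exact scheme are our assumptions, not the paper's recipe:

```python
import numpy as np

def magnitude_prune(w, sparsity):
    """Zero out the smallest-magnitude fraction `sparsity` of the weights."""
    k = int(np.floor(sparsity * w.size))
    if k == 0:
        return w.copy()
    # Threshold at the k-th smallest absolute weight.
    thresh = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    pruned = w.copy()
    pruned[np.abs(pruned) <= thresh] = 0.0
    return pruned

def quantize_int8(w):
    """Symmetric per-tensor quantization of FP32 weights to int8."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Map int8 codes back to approximate FP32 values."""
    return q.astype(np.float32) * scale
```

In practice both steps are usually followed by fine-tuning to recover accuracy; the round-trip error of the quantizer above is bounded by half a quantization step.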


Read More

A discontinuous Galerkin (DG) method suitable for large-scale astrophysical simulations on Cartesian meshes as well as arbitrary static and moving Voronoi meshes is presented. Most major astrophysical fluid dynamics codes use a finite volume (FV) approach. We demonstrate that the DG technique offers distinct advantages over FV formulations on both static and moving meshes. The DG method is also easily generalized to higher than second-order accuracy without requiring the use of extended stencils to estimate derivatives (thereby making the scheme highly parallelizable). We implement the technique in the AREPO code for solving the fluid and the magnetohydrodynamic (MHD) equations. By examining various test problems, we show that our new formulation provides improved accuracy over FV approaches of the same order, and reduces post-shock oscillations and artificial diffusion of angular momentum. In addition, the DG method makes it possible to represent magnetic fields in a locally divergence-free way, improving the stability of MHD simulations and moderating global divergence errors, and is a viable alternative for solving the MHD equations on meshes where Constrained Transport (CT) cannot be applied. We find that the DG procedure on a moving mesh is more sensitive to the choice of slope limiter than its FV counterpart. Therefore, future work to improve the performance of the DG scheme even further will likely involve the design of optimal slope limiters. As presently constructed, our technique offers the potential of improved accuracy in astrophysical simulations using the moving-mesh AREPO code as well as those employing adaptive mesh refinement (AMR).
We demonstrate neural-network runtime prediction for complex, many-parameter, massively parallel, heterogeneous-physics simulations running on cloud-based MPI clusters. Because individual simulations are so expensive, it is crucial to train the network on a limited dataset despite the potentially large input space of the physics at each point in the spatial domain. We achieve this using a two-part strategy. First, we perform data-driven static load balancing using regression coefficients extracted from small simulations, which both improves parallel performance and reduces the dependency of the runtime on the precise spatial layout of the heterogeneous physics. Second, we divide the execution time of these load-balanced simulations into computation and communication, factoring crude asymptotic scalings out of each term, and training neural nets for the remaining factor coefficients. This strategy is implemented for Meep, a popular and complex open-source electrodynamics simulation package, and is validated for heterogeneous simulations drawn from published engineering models.
This paper presents a novel generative model to synthesize fluid simulations from a set of reduced parameters. A convolutional neural network is trained on a collection of discrete, parameterizable fluid simulation velocity fields. Due to the capability of deep learning architectures to learn representative features of the data, our generative model is able to accurately approximate the training data set, while providing plausible interpolated in-betweens. The proposed generative model is optimized for fluids by a novel loss function that guarantees divergence-free velocity fields at all times. In addition, we demonstrate that we can handle complex parameterizations in reduced spaces, and advance simulations in time by integrating in the latent space with a second network. Our method models a wide variety of fluid behaviors, thus enabling applications such as fast construction of simulations, interpolation of fluids with different parameters, time re-sampling, latent space simulations, and compression of fluid simulation data. Reconstructed velocity fields are generated up to 700x faster than re-simulating the data with the underlying CPU solver, while achieving compression rates of up to 1300x.
We present a topology-based method for mesh partitioning in three-dimensional discrete fracture network (DFN) simulations that takes advantage of the intrinsic multi-level nature of a DFN. DFN models are used to simulate flow and transport through low-permeability fractured media in the subsurface by explicitly representing fractures as discrete entities. The governing equations for flow and transport are numerically integrated on computational meshes generated on the interconnected fracture networks. Modern high-fidelity DFN simulations require high-performance computing on multiple processors, where performance and scalability depend partially on obtaining a high-quality partition of the mesh to balance workloads and minimize communication across all processors. The discrete structure of a DFN naturally lends itself to various graph representations. We develop two applications of the multilevel graph partitioning algorithm to partition the mesh of a DFN. In the first, we project a partition of the graph based on the DFN topology onto the mesh of the DFN, and in the second, this projection is used as the initial condition for further partitioning refinement of the mesh. We compare the performance of these methods with standard multilevel graph partitioning using graph-based metrics (cut, imbalance, partitioning time), computation-based metrics (FLOPS, iterations, solver time), and total run time. The DFN-based and the mesh-based partitioning methods are comparable in terms of the graph-based metrics, but the time required to obtain the partition is several orders of magnitude faster using the DFN-based partitions. In turn, this hybrid method outperformed both of the other methods in terms of the total run time.
We propose a customized convolutional neural network based autoencoder called a hierarchical autoencoder, which allows us to extract nonlinear autoencoder modes of flow fields while preserving the contribution order of the latent vectors. As preliminary tests, the proposed method is first applied to a cylinder wake at $Re_D$ = 100 and its transient process. It is found that the proposed method can extract the features of these laminar flow fields as the latent vectors while keeping the order of their energy content. The present hierarchical autoencoder is further assessed with a two-dimensional $y-z$ cross-sectional velocity field of turbulent channel flow at $Re_{\tau}$ = 180 in order to examine its applicability to turbulent flows. It is demonstrated that the turbulent flow field can be efficiently mapped into the latent space by utilizing the hierarchical model with a concept of ordered autoencoder mode family. The present results suggest that the proposed concept can be extended to meet various demands in fluid dynamics including reduced order modeling and its combination with linear theory-based methods by using its ability to arrange the order of the extracted nonlinear modes.
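The "ordered mode family" idea above can be illustrated with a linear stand-in: fit one mode at a time, each on the residual left by the modes before it, so latent components come out ordered by captured energy. The sketch below uses power iteration on snapshot data; the function names and the linear (PCA-like) setting are our simplifications of the paper's convolutional architecture:

```python
import numpy as np

def ordered_modes(X, n_modes, iters=200):
    """Greedily extract linear modes from snapshot matrix X (snapshots x features),
    each fit on the residual of the previous ones, so the mode ordering by
    captured energy is fixed by construction."""
    X = X - X.mean(axis=0)          # center the snapshots
    residual = X.copy()
    modes, energies = [], []
    for _ in range(n_modes):
        # Initialize from the largest residual snapshot, then power-iterate.
        v = residual[np.argmax(np.linalg.norm(residual, axis=1))]
        v = v / np.linalg.norm(v)
        for _ in range(iters):
            v = residual.T @ (residual @ v)
            v /= np.linalg.norm(v)
        coeffs = residual @ v
        energies.append(float(np.sum(coeffs ** 2)))
        residual = residual - np.outer(coeffs, v)   # deflate
        modes.append(v)
    return np.array(modes), energies
```

Because each mode is fit on the deflated residual, later modes are orthogonal to earlier ones and their energies are non-increasing; the hierarchical autoencoder generalizes this ordering to nonlinear modes.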
