
Explicit physics-informed neural networks for non-linear upscaling closure: the case of transport in tissues

Submitted by Ehsan Taghizadeh
Publication date: 2021
Research field: Physics
Paper language: English





In this work, we use a combination of formal upscaling and data-driven machine learning to explicitly close a nonlinear transport and reaction process in a multiscale tissue. The classical effectiveness-factor model is used to formulate the macroscale reaction kinetics. We train a multilayer perceptron network using training data generated by direct numerical simulations over microscale examples. Once trained, the network is used to numerically solve the upscaled (coarse-grained) differential equation describing mass transport and reaction in two example tissues. The network is explicit in the sense that it is trained using macroscale concentrations and gradients of concentration as components of the feature space. Network training and solutions to the macroscale transport equations were computed for two tissue types (brain and liver) that exhibit markedly different geometrical complexity and spatial scale (cell size and sample size). The upscaled solutions for the average concentration are compared with solutions obtained from the microscale concentration fields by a posteriori averaging. Two outcomes of this work are of particular note: 1) the trained network exhibits good generalizability, predicting the effectiveness factor with high fidelity for realistically structured tissues despite the significantly different scale and geometry of the two example tissue types; and 2) the approach yields an upscaled PDE whose effectiveness factor is predicted (implicitly) by the trained neural network. This latter result emphasizes our purposeful connection between conventional averaging methods and the use of machine learning for closure; it contrasts with some machine learning methods for upscaling in which the exact form of the macroscale equation remains unknown.
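
As a rough illustration of this closure strategy, the sketch below (not the authors' code; the network sizes, the first-order kinetics, the constants, and the periodic 1-D grid are all assumptions) shows an MLP that maps the macroscale concentration and its gradient magnitude, the feature space named above, to an effectiveness factor, which then closes an explicit time step of an upscaled reaction-diffusion balance:

    import torch
    import torch.nn as nn

    class EffectivenessNet(nn.Module):
        # eta = f(c_bar, |grad c_bar|); the feature choice follows the
        # abstract, the architecture and sizes are illustrative only.
        def __init__(self, hidden=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(2, hidden), nn.Tanh(),
                nn.Linear(hidden, hidden), nn.Tanh(),
                nn.Linear(hidden, 1), nn.Softplus())  # keep eta positive

        def forward(self, c_bar, grad_c):
            return self.net(torch.stack([c_bar, grad_c], dim=-1)).squeeze(-1)

    # One explicit time step of a closed 1-D macroscale balance of the form
    #   dc/dt = D_eff * d2c/dx2 - eta(c, |dc/dx|) * k * c
    # (a hypothetical first-order kinetics stand-in; periodic boundaries).
    def step(c, eta_net, D_eff=1.0, k=1.0, dx=1e-2, dt=1e-5):
        lap = (torch.roll(c, -1) - 2 * c + torch.roll(c, 1)) / dx**2
        grad = (torch.roll(c, -1) - torch.roll(c, 1)) / (2 * dx)
        with torch.no_grad():
            eta = eta_net(c, grad.abs())
        return c + dt * (D_eff * lap - eta * k * c)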




Read also

Guofei Pang, Lu Lu, 2018
Physics-informed neural networks (PINNs) are effective in solving integer-order partial differential equations (PDEs) based on scattered and noisy data. PINNs employ standard feedforward neural networks (NNs) with the PDEs explicitly encoded into the NN using automatic differentiation, while the sum of the mean-squared PDE residuals and the mean-squared error in initial/boundary conditions is minimized with respect to the NN parameters. We extend PINNs to fractional PINNs (fPINNs) to solve space-time fractional advection-diffusion equations (fractional ADEs), and we demonstrate their accuracy and effectiveness in solving multi-dimensional forward and inverse problems with forcing terms whose values are only known at randomly scattered spatio-temporal coordinates (black-box forcing terms). A novel element of the fPINNs is the hybrid approach that we introduce for constructing the residual in the loss function, using automatic differentiation for the integer-order operators and numerical discretization for the fractional operators. We consider 1D time-dependent fractional ADEs and compare white-box (WB) and black-box (BB) forcing. We observe that for BB forcing, fPINNs outperform the finite difference method (FDM). Subsequently, we consider multi-dimensional time-, space-, and space-time-fractional ADEs using the directional fractional Laplacian, and we observe relative errors of $10^{-4}$. Finally, we solve several inverse problems in 1D, 2D, and 3D to identify the fractional orders, diffusion coefficients, and transport velocities, and we obtain accurate results even in the presence of significant noise.
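
The hybrid residual construction described above can be sketched compactly. The following is a minimal, hedged illustration for the model problem u_t = D_x^alpha u (the one-sided Grünwald-Letnikov operator, the auxiliary-point placement, and the omitted forcing are simplifications, not the authors' implementation): automatic differentiation supplies the integer-order time derivative, and a weighted sum over shifted evaluations of the same network supplies the fractional derivative:

    import torch

    def gl_weights(alpha, n):
        # Grunwald-Letnikov weights w_k = (-1)^k * binom(alpha, k)
        w = torch.empty(n)
        w[0] = 1.0
        for k in range(1, n):
            w[k] = w[k - 1] * (k - 1 - alpha) / k
        return w

    def fpinn_residual(u_net, x, t, alpha, h):
        # Integer-order part (u_t) by automatic differentiation
        xt = torch.stack([x, t], dim=-1).requires_grad_(True)
        u = u_net(xt).squeeze(-1)
        u_t = torch.autograd.grad(u.sum(), xt, create_graph=True)[0][:, 1]
        # Fractional part by numerical discretization: left-sided GL sum
        # over shifted points, evaluated with the same network
        w = gl_weights(alpha, len(x))
        frac = []
        for i in range(len(x)):
            xs = x[i] - h * torch.arange(i + 1, dtype=x.dtype)
            ts = t[i].expand_as(xs)
            uv = u_net(torch.stack([xs, ts], dim=-1)).squeeze(-1)
            frac.append((w[: i + 1] * uv).sum() / h**alpha)
        return u_t - torch.stack(frac)  # residual of u_t = D_x^alpha u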
Multifidelity simulation methodologies are often used in an attempt to judiciously combine low-fidelity and high-fidelity simulation results in an accuracy-increasing, cost-saving way. Candidates for this approach are simulation methodologies for which there are fidelity differences connected with significant computational cost differences. Physics-informed neural networks (PINNs) are candidates for these types of approaches due to the significant difference in training times required when different fidelities (expressed in terms of architecture width and depth as well as optimization criteria) are employed. In this paper, we propose a particular multifidelity approach applied to PINNs that exploits low-rank structure. We demonstrate that width, depth, and optimization criteria can be used as parameters related to model fidelity, and show numerical justification of cost differences in training due to fidelity parameter choices. We test our multifidelity scheme on various canonical forward PDE models that have been presented in the emerging PINNs literature.
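
A minimal sketch of fidelity as an architectural knob, under stated assumptions (the widths, depths, and combination strategy are illustrative; the paper's low-rank construction is not reproduced here):

    import torch.nn as nn

    def mlp(width, depth):
        # Fully connected tanh network; width and depth act as fidelity knobs
        layers, d_in = [], 2
        for _ in range(depth):
            layers += [nn.Linear(d_in, width), nn.Tanh()]
            d_in = width
        return nn.Sequential(*layers, nn.Linear(d_in, 1))

    low_fidelity  = mlp(width=10, depth=2)   # cheap: few optimizer steps
    high_fidelity = mlp(width=64, depth=6)   # costly: more steps, e.g. L-BFGS
    # A multifidelity combination might train high_fidelity only on the
    # discrepancy left by low_fidelity's prediction rather than from scratch.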
We introduce the concept of a Graph-Informed Neural Network (GINN), a hybrid approach combining deep learning with probabilistic graphical models (PGMs) that acts as a surrogate for physics-based representations of multiscale and multiphysics systems. GINNs address the twin challenges of removing intrinsic computational bottlenecks in physics-based models and generating large data sets for estimating probability distributions of quantities of interest (QoIs) with a high degree of confidence. Both the selection of the complex physics learned by the NN and its supervised learning/prediction are informed by the PGM, which includes the formulation of structured priors for tunable control variables (CVs) to account for their mutual correlations and ensure physically sound CV and QoI distributions. GINNs accelerate the prediction of QoIs essential for simulation-based decision-making where generating sufficient sample data using physics-based models alone is often prohibitively expensive. Using a real-world application grounded in supercapacitor-based energy storage, we describe the construction of GINNs from a Bayesian network-embedded homogenized model for supercapacitor dynamics, and demonstrate their ability to produce kernel density estimates of relevant non-Gaussian, skewed QoIs with tight confidence intervals.
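
A rough sketch of the GINN workflow under stated assumptions (a correlated Gaussian stands in for the Bayesian network, and the surrogate, its training, and all numbers are invented for illustration): sample control variables from a structured prior, push them through a cheap NN surrogate, and form a kernel density estimate of the quantity of interest:

    import numpy as np
    import torch
    import torch.nn as nn
    from scipy.stats import gaussian_kde

    # Structured prior over control variables; the covariance encodes
    # their mutual correlations (values made up for illustration)
    mean = np.array([1.0, 0.5])
    cov = np.array([[0.04, 0.01],
                    [0.01, 0.02]])
    cvs = np.random.multivariate_normal(mean, cov, size=100_000)

    # NN surrogate for the physics-based map from CVs to the QoI; in a
    # GINN it is trained on a modest number of expensive model runs
    surrogate = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1))

    with torch.no_grad():
        qoi = surrogate(torch.tensor(cvs, dtype=torch.float32)).numpy().ravel()
    density = gaussian_kde(qoi)   # smoothed, possibly skewed QoI density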
We employ physics-informed neural networks (PINNs) to infer properties of biological materials using synthetic data. In particular, we successfully apply PINNs to inferring the thrombus permeability and visco-elastic modulus from thrombus deformation data, which can be described by the fourth-order Cahn-Hilliard and Navier-Stokes equations. In PINNs, the partial differential equations are encoded into the loss function, where partial derivatives can be obtained through automatic differentiation (AD). In addition, to tackle the challenge of calculating the fourth-order derivative in the Cahn-Hilliard equation with AD, we introduce an auxiliary network along with the main neural network to approximate the second derivative of the energy potential term. Our model can simultaneously predict unknown parameters and velocity, pressure, and deformation gradient fields by merely training with partial information among all data, i.e., phase-field and pressure measurements, and is also highly flexible in sampling within the spatio-temporal domain for data acquisition. We validate our model against numerical solutions from the spectral/hp element method (SEM) and demonstrate its robustness by training it with noisy measurements. Our results show that PINNs can accurately infer the material properties with noisy synthetic data, and thus they have great potential for inferring these properties from experimental multi-modality and multi-fidelity data.
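
One way to avoid fourth-order automatic differentiation, sketched below under assumptions (a 1-D mixed formulation with a double-well potential and illustrative constants; the flow coupling and the authors' exact auxiliary-network target are omitted), is to let an auxiliary network carry the chemical potential so each residual involves at most second derivatives:

    import torch
    import torch.nn as nn

    phi_net = nn.Sequential(nn.Linear(2, 50), nn.Tanh(), nn.Linear(50, 1))  # phase field
    mu_net = nn.Sequential(nn.Linear(2, 50), nn.Tanh(), nn.Linear(50, 1))   # auxiliary

    def d_dx_dt(y, xt):
        g = torch.autograd.grad(y.sum(), xt, create_graph=True)[0]
        return g[:, 0:1], g[:, 1:2]

    def ch_residuals(xt, M=1.0, kappa=1e-2):
        # Mixed form of Cahn-Hilliard: two second-order residuals instead
        # of one fourth-order equation, so AD stops at second derivatives
        xt = xt.requires_grad_(True)
        phi, mu = phi_net(xt), mu_net(xt)
        phi_x, phi_t = d_dx_dt(phi, xt)
        phi_xx, _ = d_dx_dt(phi_x, xt)
        mu_x, _ = d_dx_dt(mu, xt)
        mu_xx, _ = d_dx_dt(mu_x, xt)
        r1 = phi_t - M * mu_xx                      # phi_t = M * (mu)_xx
        r2 = mu - (phi**3 - phi) + kappa * phi_xx   # mu = f'(phi) - kappa*phi_xx
        return r1, r2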
Inverse design arises in a variety of areas in engineering, such as acoustics, mechanics, thermal/electronic transport, electromagnetism, and optics. Topology optimization is a major form of inverse design, where we optimize a designed geometry to achieve targeted properties and the geometry is parameterized by a density function. This optimization is challenging because it has a very high dimensionality and is usually constrained by partial differential equations (PDEs) and additional inequalities. Here, we propose a new deep learning method -- physics-informed neural networks with hard constraints (hPINNs) -- for solving topology optimization. hPINN leverages the recent development of PINNs for solving PDEs, and thus does not rely on any numerical PDE solver. However, all the constraints in PINNs are soft constraints, and hence we impose hard constraints by using the penalty method and the augmented Lagrangian method. We demonstrate the effectiveness of hPINN for a holography problem in optics and a fluid problem of Stokes flow. We achieve the same objective as conventional PDE-constrained optimization methods based on adjoint methods and numerical PDE solvers, but find that the design obtained from hPINN is often simpler and smoother for problems whose solution is not unique. Moreover, the implementation of inverse design with hPINN can be easier than that of conventional methods.
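
The penalty-to-multiplier mechanics can be shown generically. The sketch below is a toy augmented-Lagrangian loop, not the hPINN implementation: the 1-D model, the target field, the averaged scalar constraint (hPINN enforces the PDE pointwise, with per-point multipliers), and every constant are placeholders:

    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
    x = torch.linspace(0.0, 1.0, 64).unsqueeze(-1)

    def objective(m):
        # Placeholder design objective: mismatch with a target field
        return ((m(x) - torch.sin(torch.pi * x)) ** 2).mean()

    def constraint(m):
        # Placeholder equality constraint c = 0: residual of u' = pi*cos(pi*x)
        xr = x.clone().requires_grad_(True)
        u = m(xr)
        u_x = torch.autograd.grad(u.sum(), xr, create_graph=True)[0]
        return (u_x - torch.pi * torch.cos(torch.pi * xr)).mean()

    lam, mu = 0.0, 1.0                     # multiplier and penalty weight
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for outer in range(20):                # augmented-Lagrangian outer loop
        for inner in range(500):           # unconstrained subproblem
            opt.zero_grad()
            c = constraint(model)
            loss = objective(model) + lam * c + 0.5 * mu * c ** 2
            loss.backward()
            opt.step()
        lam += mu * constraint(model).item()  # multiplier update
        mu *= 2.0                             # tighten the penalty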