
Physics-Informed Machine Learning with Conditional Karhunen-Loève Expansions

Posted by David Barajas-Solano
Publication date: 2019
Research field: Physics
Paper language: English





We present a new physics-informed machine learning approach for the inversion of PDE models with heterogeneous parameters. In our approach, the space-dependent partially-observed parameters and states are approximated via Karhunen-Loève expansions (KLEs). Each of these KLEs is then conditioned on its corresponding measurements, resulting in low-dimensional models of the parameters and states that resolve observed data. Finally, the coefficients of the KLEs are estimated by minimizing the norm of the residual of the PDE model evaluated at a finite set of points in the computational domain, ensuring that the reconstructed parameters and states are consistent with both the observations and the PDE model to an arbitrary level of accuracy. In our approach, KLEs are constructed using the eigendecomposition of covariance models of spatial variability. For the model parameters, we employ a parameterized covariance model calibrated on parameter observations; for the model states, the covariance is estimated from a number of forward simulations of the PDE model corresponding to realizations of the parameters drawn from their KLE. We apply the proposed approach to identifying heterogeneous log-diffusion coefficients in diffusion equations from spatially sparse measurements of the log-diffusion coefficient and the solution of the diffusion equation. We find that the proposed approach compares favorably against state-of-the-art point estimates such as maximum a posteriori estimation and physics-informed neural networks.
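As a rough illustration of the workflow described in the abstract, the sketch below (not the authors' code; the grid, covariance model, measurement locations, and penalty weights are all assumed) expands both a 1D log-diffusion field and the corresponding state in truncated KLEs built from an exponential covariance, and estimates their coefficients by least-squares minimization of the finite-difference PDE residual. For brevity, the paper's explicit conditioning of the KLEs on measurements is replaced here by weighted data-misfit terms in the same objective.

# Minimal sketch of the KLE-based PDE inversion idea: both the log-diffusion field
# y(x) and the state u(x) are expanded in truncated KLEs, and their coefficients are
# found by minimizing the finite-difference residual of -(exp(y) u')' = f together
# with a misfit to sparse observations. All problem settings are illustrative.
import numpy as np
from scipy.optimize import least_squares

n, L = 101, 1.0                     # grid size and domain length (assumed)
x = np.linspace(0.0, L, n)
h = x[1] - x[0]
f = np.ones(n)                      # source term (assumed)

def kle_modes(corr_len, var, n_modes):
    """Truncated KLE modes from an exponential covariance on the grid."""
    C = var * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
    lam, vec = np.linalg.eigh(C)
    idx = np.argsort(lam)[::-1][:n_modes]
    return vec[:, idx] * np.sqrt(np.maximum(lam[idx], 0.0))

Phi_y = kle_modes(corr_len=0.3, var=1.0, n_modes=8)    # modes for log-diffusion y
Phi_u = kle_modes(corr_len=0.3, var=1.0, n_modes=12)   # modes for state u

# synthetic "truth" and sparse observation locations (purely for demonstration)
rng = np.random.default_rng(0)
y_true = Phi_y @ rng.standard_normal(Phi_y.shape[1])
obs_y_idx = np.array([10, 40, 70, 95])
obs_u_idx = np.array([20, 50, 80])

def solve_pde(y):
    """Reference finite-difference solve of -(exp(y) u')' = f with u(0)=u(L)=0."""
    k = np.exp(y)
    k_half = 0.5 * (k[:-1] + k[1:])
    A = np.zeros((n, n))
    A[0, 0] = A[-1, -1] = 1.0
    for i in range(1, n - 1):
        A[i, i - 1] = -k_half[i - 1] / h**2
        A[i, i] = (k_half[i - 1] + k_half[i]) / h**2
        A[i, i + 1] = -k_half[i] / h**2
    b = f.copy(); b[0] = b[-1] = 0.0
    return np.linalg.solve(A, b)

u_true = solve_pde(y_true)

def residuals(theta):
    xi, eta = theta[:Phi_y.shape[1]], theta[Phi_y.shape[1]:]
    y, u = Phi_y @ xi, Phi_u @ eta
    k_half = 0.5 * (np.exp(y)[:-1] + np.exp(y)[1:])
    flux = k_half * np.diff(u) / h
    pde = -np.diff(flux) / h - f[1:-1]              # PDE residual at interior points
    data_y = y[obs_y_idx] - y_true[obs_y_idx]       # misfit to y measurements
    data_u = u[obs_u_idx] - u_true[obs_u_idx]       # misfit to u measurements
    bc = np.array([u[0], u[-1]])                    # Dirichlet boundary conditions
    return np.concatenate([pde, 10.0 * data_y, 10.0 * data_u, 10.0 * bc])

theta0 = np.zeros(Phi_y.shape[1] + Phi_u.shape[1])
sol = least_squares(residuals, theta0)
y_hat = Phi_y @ sol.x[:Phi_y.shape[1]]
print("relative error in reconstructed log-diffusion field:",
      np.linalg.norm(y_hat - y_true) / np.linalg.norm(y_true))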




Read also

Starlight subtraction algorithms based on the method of Karhunen-Loève eigenimages have proved invaluable to exoplanet direct imaging. However, they scale poorly in runtime when paired with differential imaging techniques. In such observations, reference frames and frames to be starlight-subtracted are drawn from the same set of data, requiring a new subset of references (and eigenimages) for each frame processed to avoid self-subtraction of the signal of interest. The data rates of extreme adaptive optics instruments are such that the only way to make this computationally feasible has been to downsample the data. We develop a technique that updates a pre-computed singular value decomposition of the full dataset to remove frames (i.e. a downdate) without a full recomputation, yielding the modified eigenimages. This not only enables analysis of much larger data volumes in the same amount of time, but also exhibits near-linear scaling in runtime as the number of observations increases. We apply this technique to archival data and investigate its scaling behavior for very large numbers of frames $N$. The resulting algorithm provides speed improvements of $2.6\times$ (for 200 eigenimages at $N = 300$) to $140\times$ (at $N = 10^4$), with the advantage only increasing as $N$ grows. This algorithm has allowed us to substantially accelerate KLIP even for modest $N$, and will let us quickly explore how KLIP parameters affect exoplanet characterization in large $N$ datasets.
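As a simplified illustration of the reuse idea behind this result (this is not the authors' SVD downdating algorithm; the frame count, pixel count, and number of modes are arbitrary), the sketch below caches the Gram matrix of all reference frames once, so each leave-one-out reference set only needs the eigendecomposition of a sliced matrix rather than a recomputation of the covariance from the image data.

# Simplified illustration of reusing precomputed quantities when each science frame
# must be excluded from its own reference set. KL eigenimages are obtained from the
# Gram matrix of the reference frames; caching that matrix once means each
# leave-one-out set only needs a sliced (N-1)x(N-1) eigendecomposition.
import numpy as np

rng = np.random.default_rng(1)
N, npix, n_modes = 300, 4096, 20          # frames, pixels per frame, eigenimages kept
R = rng.standard_normal((N, npix))        # stand-in for the mean-subtracted frames

G = R @ R.T                               # Gram matrix, computed once for the full set

def eigenimages_without(i, n_modes=n_modes):
    """KL eigenimages of all frames except frame i, reusing the cached Gram matrix."""
    keep = np.delete(np.arange(N), i)
    G_sub = G[np.ix_(keep, keep)]                     # leave-one-out Gram matrix
    lam, V = np.linalg.eigh(G_sub)                    # ascending eigenvalues
    lam, V = lam[::-1][:n_modes], V[:, ::-1][:, :n_modes]
    return (V / np.sqrt(lam)).T @ R[keep]             # eigenimages, shape (n_modes, npix)

def klip_subtract(i):
    """Project frame i onto its leave-one-out eigenimages and subtract."""
    Z = eigenimages_without(i)
    coeffs = Z @ R[i]
    return R[i] - coeffs @ Z

residual = klip_subtract(0)
print(residual.shape)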
We introduce conditional PINNs (physics-informed neural networks) for estimating the solution of classes of eigenvalue problems. The concept of PINNs is expanded to learn not only the solution of one particular differential equation but the solutions to a class of problems. We demonstrate this idea by estimating the coercive field of permanent magnets, which depends on the width and strength of local defects. When the neural network incorporates the physics of magnetization reversal, training can be achieved in an unsupervised way; there is no need to generate labeled training data. The presented test cases have been rigorously studied in the past, allowing a detailed and straightforward comparison with analytical solutions. We show that a single deep neural network can learn the solution of partial differential equations for an entire class of problems.
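A minimal sketch of the conditional-PINN idea on a toy problem (this is not the paper's micromagnetic setup; the ODE, network size, and training schedule are illustrative assumptions): a single network receives the coordinate together with a problem parameter and is trained purely from the physics residual, so it learns the solution over the whole parameter range without labeled data.

# Toy conditional PINN: learn u(x; a) satisfying u' + a*u = 0, u(0) = 1,
# for every a in [0.5, 2], with no labeled training data.
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(3000):
    x = torch.rand(256, 1, requires_grad=True)            # collocation points in [0, 1]
    a = 0.5 + 1.5 * torch.rand(256, 1)                     # problem parameter in [0.5, 2]
    u = net(torch.cat([x, a], dim=1))
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    residual = du + a * u                                   # physics residual of u' + a*u = 0
    u0 = net(torch.cat([torch.zeros(64, 1), 0.5 + 1.5 * torch.rand(64, 1)], dim=1))
    loss = (residual ** 2).mean() + ((u0 - 1.0) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# after training, the same network approximates u(x; a) = exp(-a*x) for any a in range
x_test, a_test = torch.full((1, 1), 0.5), torch.full((1, 1), 1.0)
print(net(torch.cat([x_test, a_test], dim=1)).item(), torch.exp(torch.tensor(-0.5)).item())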
In this work we present a new physics-informed machine learning model that can be used to analyze kinematic data from an instrumented mouthguard and detect impacts to the head. Monitoring player impacts is vitally important to understanding and protecting against injuries such as concussion. Typically, a combination of video analysis and sensor data is used to ascertain that the recorded events are true impacts and not false positives. In fact, due to the nature of using wearable devices in sports, false positives vastly outnumber the true positives, yet manual video analysis is time-consuming. This imbalance leads traditional machine learning approaches to exhibit poor performance in both detecting true positives and preventing false negatives. Here, we show that by simulating head impacts numerically using a standard Finite Element head-neck model, a large dataset of synthetic impacts can be created to augment the gathered, verified impact data from mouthguards. This combined physics-informed machine learning impact detector reported improved performance on test datasets compared to traditional impact detectors, with a negative predictive value of 88% and a positive predictive value of 87%. Consequently, this model reported the best results to date for an impact detection algorithm for American Football, achieving an F1 score of 0.95. In addition, this physics-informed machine learning impact detector was able to accurately detect true and false impacts from a test dataset at rates of 90% and 100%, respectively, relative to a purely manual video analysis workflow. Saving over 12 hours of manual video analysis for a modest dataset, at an overall accuracy of 92%, these results indicate that this model could be used in place of, or alongside, traditional video analysis to allow for larger-scale and more efficient impact detection in sports such as American Football.
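The augmentation idea can be sketched as follows (the feature dimensions, data, and classifier are placeholders, not the paper's detector): verified real mouthguard events are pooled with a large set of synthetic impacts produced by a finite-element head-neck model, and a single classifier is trained on the combined kinematic features to separate true impacts from false positives.

# Physics-informed data augmentation sketch: pool real and FE-simulated impacts,
# then train one classifier on the combined feature set. All data are stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_score

rng = np.random.default_rng(0)

# rows are kinematic feature vectors (e.g. peak linear/angular acceleration)
X_real, y_real = rng.standard_normal((500, 16)), rng.integers(0, 2, 500)   # verified mouthguard events
X_synth = rng.standard_normal((5000, 16)) + 1.0                            # FE-simulated true impacts
y_synth = np.ones(5000, dtype=int)                                         # all synthetic events are impacts

X = np.vstack([X_real, X_synth])        # physics-informed augmentation of the training set
y = np.concatenate([y_real, y_synth])

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
X_test, y_test = rng.standard_normal((200, 16)), rng.integers(0, 2, 200)
print("positive predictive value:", precision_score(y_test, clf.predict(X_test)))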
Reduced models describing the Lagrangian dynamics of the Velocity Gradient Tensor (VGT) in Homogeneous Isotropic Turbulence (HIT) are developed under the Physics-Informed Machine Learning (PIML) framework. We consider the VGT both at the Kolmogorov scale and at a coarse-grained scale within the inertial range of HIT. Building reduced models requires resolving the pressure Hessian and sub-filter contributions, which is accomplished by constructing them from the integrity bases and invariants of the VGT. The developed models can be expressed using the extended Tensor Basis Neural Network (TBNN). Physical constraints, such as Galilean invariance, rotational invariance, and the incompressibility condition, are thus embedded in the models explicitly. Our PIML models are trained on Lagrangian data from a high-Reynolds-number Direct Numerical Simulation (DNS). To validate the results, we perform a comprehensive out-of-sample test. We observe that the PIML model provides an improved representation of the magnitude and orientation of the small-scale pressure Hessian contributions. Statistics of the flow, as indicated by the joint PDF of the second and third invariants of the VGT, show good agreement with the ground-truth DNS data. A number of other important features describing the structure of HIT are also reproduced successfully by the model. We have also identified challenges in modeling inertial-range dynamics, which indicate that a richer modeling strategy is required. This helps us identify important directions for future research, in particular toward including inertial-range geometry in the TBNN.
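A minimal sketch of the tensor-basis construction described above (illustrative only; the authors' extended TBNN uses a richer basis and invariant set): the unclosed tensor is written as a sum of basis tensors built from the strain-rate and rotation-rate parts of the VGT, with scalar coefficients predicted by a small network from the corresponding invariants, so rotational invariance is embedded by construction.

# Toy tensor-basis network: coefficients from invariants times equivariant basis tensors.
import torch

class TensorBasisNet(torch.nn.Module):
    def __init__(self, n_basis=5, n_invariants=2):
        super().__init__()
        self.mlp = torch.nn.Sequential(
            torch.nn.Linear(n_invariants, 64), torch.nn.Tanh(),
            torch.nn.Linear(64, n_basis),
        )

    def forward(self, A):                       # A: (batch, 3, 3) velocity gradient tensor
        S = 0.5 * (A + A.transpose(1, 2))       # strain-rate (symmetric part)
        W = 0.5 * (A - A.transpose(1, 2))       # rotation-rate (antisymmetric part)
        I = torch.eye(3).expand_as(A)
        basis = torch.stack([I, S, S @ S, W @ W, S @ W - W @ S], dim=1)   # (batch, 5, 3, 3)
        invariants = torch.stack([
            torch.einsum("bij,bji->b", S, S),   # tr(S S)
            torch.einsum("bij,bji->b", W, W),   # tr(W W)
        ], dim=1)
        coeffs = self.mlp(invariants)           # (batch, 5) scalar coefficients
        return torch.einsum("bk,bkij->bij", coeffs, basis)

model = TensorBasisNet()
A = torch.randn(8, 3, 3)                        # stand-in velocity gradient samples
print(model(A).shape)                           # modeled 3x3 tensor per sample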
Physics-Informed Neural Networks (PINNs) have recently shown great promise as a way of incorporating physics-based domain knowledge, including fundamental governing equations, into neural network models for many complex engineering systems. They have been particularly effective for inverse problems, where boundary conditions may be ill-defined, and for data-absent scenarios, where typical supervised learning approaches will fail. Here, we further explore the use of this modeling methodology for surrogate modeling of a fluid dynamical system, and demonstrate additional, previously undiscussed advantages of such a methodology over conventional data-driven approaches: 1) improved predictive performance even with an incomplete description of the underlying physics; 2) improved robustness of the model to noise in the dataset; 3) reduced effort to reach convergence when optimizing for a new, previously unseen scenario by transfer optimization of a pre-existing model. We find that the inclusion of a physics-based regularization term can substantially improve the equivalent data-driven surrogate model in many substantive ways, including an order-of-magnitude improvement in test error when the dataset is very noisy, and a 2-3x improvement when only partial physics is included. In addition, we propose a novel transfer optimization scheme for use in such surrogate modeling scenarios and demonstrate an approximately 3x improvement in speed to convergence and an order-of-magnitude improvement in predictive performance over conventional Xavier initialization when training for new scenarios.
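The two ingredients highlighted here can be sketched on a toy problem (the 1D heat equation, network sizes, and weights are illustrative assumptions, not the paper's fluid-dynamics setup): a surrogate trained on noisy data with an added physics-residual regularization term, and transfer optimization by initializing a new scenario's network from a previously trained one instead of Xavier initialization.

# Toy surrogate with physics regularization (u_t = alpha * u_xx) plus transfer initialization.
import math
import torch

def make_net():
    return torch.nn.Sequential(
        torch.nn.Linear(2, 64), torch.nn.Tanh(),
        torch.nn.Linear(64, 64), torch.nn.Tanh(),
        torch.nn.Linear(64, 1),
    )

def train(net, x_data, t_data, u_data, alpha=0.1, lam=1.0, steps=2000):
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for _ in range(steps):
        # supervised misfit on (possibly noisy) observations
        u_pred = net(torch.cat([x_data, t_data], dim=1))
        data_loss = ((u_pred - u_data) ** 2).mean()
        # physics regularization at random collocation points
        x = torch.rand(256, 1, requires_grad=True)
        t = torch.rand(256, 1, requires_grad=True)
        u = net(torch.cat([x, t], dim=1))
        u_t = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
        u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
        u_xx = torch.autograd.grad(u_x.sum(), x, create_graph=True)[0]
        physics_loss = ((u_t - alpha * u_xx) ** 2).mean()
        loss = data_loss + lam * physics_loss
        opt.zero_grad(); loss.backward(); opt.step()
    return net

# scenario A: train from scratch on noisy samples of an assumed exact solution
x0, t0 = torch.rand(200, 1), torch.rand(200, 1)
u0 = torch.exp(-0.1 * math.pi**2 * t0) * torch.sin(math.pi * x0) + 0.05 * torch.randn(200, 1)
net_a = train(make_net(), x0, t0, u0)

# scenario B: start from scenario A's weights (transfer initialization instead of Xavier);
# the same data are reused here only for brevity, a new scenario would bring its own dataset
net_b = make_net()
net_b.load_state_dict(net_a.state_dict())
net_b = train(net_b, x0, t0, u0, steps=500)   # typically converges in far fewer steps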