We present the Cholesky-factored symmetric positive definite neural network (SPD-NN) for modeling constitutive relations in dynamical equations. Instead of directly predicting the stress, the SPD-NN trains a neural network to predict the Cholesky factor of a tangent stiffness matrix, based on which the stress is calculated in the incremental form. As a result of the special structure, SPD-NN weakly imposes convexity on the strain energy function, satisfies time consistency for path-dependent materials, and therefore improves numerical stability, especially when the SPD-NN is used in finite element simulations. Depending on the types of available data, we propose two training methods, namely direct training for strain and stress pairs and indirect training for loads and displacement pairs. We demonstrate the effectiveness of SPD-NN on hyperelastic, elasto-plastic, and multiscale fiber-reinforced plate problems from solid mechanics. The generality and robustness of the SPD-NN make it a promising tool for a wide range of constitutive modeling applications.
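The structural trick at the heart of SPD-NN can be sketched in a few lines: if a network's raw outputs are interpreted as the entries of a lower-triangular factor $L$, then $LL^T$ is symmetric positive semi-definite by construction, no matter what the network emits. A minimal numpy sketch of this assembly (the function name, output dimension, and incremental update shown are illustrative, not the authors' code):

```python
import numpy as np

def spd_from_cholesky_factor(theta, n=3):
    """Assemble a tangent stiffness from unconstrained outputs `theta`
    (e.g. the last layer of a network), read as a Cholesky factor."""
    L = np.zeros((n, n))
    L[np.tril_indices(n)] = theta   # fill the lower triangle
    return L @ L.T                  # symmetric PSD by construction

# Any real output vector yields a valid stiffness; the stress is then
# updated incrementally, d_sigma = K @ d_eps.
theta = np.random.randn(6)          # 6 = n*(n+1)/2 entries for n = 3
K = spd_from_cholesky_factor(theta)
d_eps = np.array([1e-3, 0.0, 0.0])
d_sigma = K @ d_eps
```

Because the factorization guarantees positive semi-definiteness of the predicted stiffness, convexity of the strain energy is weakly imposed regardless of the network weights.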
Positive semi-definite matrices commonly occur as normal matrices of least squares problems in statistics or as kernel matrices in machine learning and approximation theory. They are typically large and dense, so algorithms for solving systems with such a matrix can be very costly. A core idea for reducing the computational complexity is to approximate the matrix by one of low rank. The optimal and well-understood choice is based on the eigenvalue decomposition of the matrix. Unfortunately, this is computationally very expensive. Cheaper methods are based on Gaussian elimination, but they require pivoting. We will show how invariant matrix theory provides explicit error formulas for an averaged error based on volume sampling. The formula leads to ratios of elementary symmetric polynomials of the eigenvalues. We discuss some new and old bounds and include several examples where an expected error norm can be computed exactly.
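The "optimal choice" referred to above is the classical truncated eigendecomposition: keep the $k$ largest eigenpairs, and the spectral-norm error equals the $(k{+}1)$-st largest eigenvalue. A minimal numpy sketch of this baseline (illustrative only; it is the expensive reference method, not the volume-sampling approach of the abstract):

```python
import numpy as np

def best_rank_k_psd(A, k):
    """Optimal rank-k approximation of a PSD matrix: keep the k
    largest eigenpairs of its eigendecomposition."""
    w, V = np.linalg.eigh(A)              # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:k]         # indices of the k largest
    return (V[:, idx] * w[idx]) @ V[:, idx].T

# The spectral-norm error is exactly the (k+1)-st largest eigenvalue.
rng = np.random.default_rng(0)
G = rng.standard_normal((6, 6))
A = G @ G.T                               # random PSD test matrix
Ak = best_rank_k_psd(A, 3)
w_desc = np.sort(np.linalg.eigvalsh(A))[::-1]
err = np.linalg.norm(A - Ak, 2)
```

The cheaper pivoted-elimination and volume-sampling methods discussed in the abstract trade this exact optimality for lower cost, which is what the expected-error formulas quantify.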
In an iterative approach for solving linear systems with ill-conditioned, symmetric positive definite (SPD) kernel matrices, both fast matrix-vector products and fast preconditioning operations are required. Fast (linear-scaling) matrix-vector products are available by expressing the kernel matrix in an $\mathcal{H}^2$ representation or an equivalent fast multipole method representation. Preconditioning such matrices, however, requires a structured matrix approximation that is more regular than the $\mathcal{H}^2$ representation, such as the hierarchically semiseparable (HSS) matrix representation, which provides fast solve operations. Previously, an algorithm was presented to construct an HSS approximation to an SPD kernel matrix that is guaranteed to be SPD. However, this algorithm has quadratic cost and was only designed for recursive binary partitionings of the points defining the kernel matrix. This paper presents a general algorithm for constructing an SPD HSS approximation. Importantly, the algorithm uses the $\mathcal{H}^2$ representation of the SPD matrix to reduce its computational complexity from quadratic to quasilinear. Numerical experiments illustrate how this SPD HSS approximation performs as a preconditioner for solving linear systems arising from a range of kernel functions.
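The interplay of the two required operations can be illustrated by a generic preconditioned conjugate gradient loop that touches the matrix and the preconditioner only through callables; in the setting of the abstract, the matvec would come from the $\mathcal{H}^2$ representation and the solve from the SPD HSS approximation. In this sketch (ours, not the paper's algorithm) a simple Jacobi preconditioner stands in for the HSS solve:

```python
import numpy as np

def pcg(matvec, b, prec_solve, tol=1e-10, maxit=200):
    """Preconditioned conjugate gradients with matrix-free operators:
    `matvec(v)` applies A, `prec_solve(r)` applies the SPD preconditioner."""
    x = np.zeros_like(b)
    r = b - matvec(x)
    z = prec_solve(r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxit):
        Ap = matvec(p)
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = prec_solve(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

rng = np.random.default_rng(4)
G = rng.standard_normal((20, 20))
A = G @ G.T + 20 * np.eye(20)            # SPD test matrix
b = rng.standard_normal(20)
d = np.diag(A)                            # Jacobi stand-in for an HSS solve
x = pcg(lambda v: A @ v, b, lambda r: r / d)
```

Both callables must be fast for the overall iteration to scale, which is exactly why the abstract pairs a quasilinear $\mathcal{H}^2$ matvec with a structured HSS solve.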
Spatial symmetries and invariances play an important role in the description of materials. When modelling material properties, it is important to be able to respect such invariances. Here we discuss how to model and generate random ensembles of tensors where one wants to prescribe certain classes of spatial symmetries and invariances for the whole ensemble, while at the same time demanding that the mean or expected value of the ensemble be subject to a possibly higher spatial invariance class. Our special interest is in the class of physically symmetric and positive definite tensors, as they appear often in the description of materials. As the set of positive definite tensors is not a linear space, but rather an open convex cone in the linear vector space of physically symmetric tensors, it may be advantageous to widen the notion of mean to the so-called Fréchet mean, which is based on distance measures between positive definite tensors other than the usual Euclidean one. For the sake of simplicity, as well as to expose the main idea as clearly as possible, we limit ourselves here to second-order tensors. It is shown how the random ensemble can be modelled and generated, with fine control of the spatial symmetry or invariance of the whole ensemble, as well as of its Fréchet mean, independently in its scaling and directional aspects. As an example, a 2D and a 3D model of steady-state heat conduction in a human proximal femur, a bone with high material anisotropy, are explored. The bone is modelled with a random thermal conductivity tensor, and the numerical results show the distinct impact of incorporating different material uncertainties (scaling, orientation, and prescribed material symmetry) into the constitutive model on the desired quantities of interest, such as temperature distribution and heat flux.
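The separation of scaling and directional aspects can be sketched for a second-order tensor in 2D: draw positive eigenvalues (scaling) and a rotation angle (orientation) independently, then reassemble an SPD conductivity tensor. This toy sampler is ours; the parameter names and distributions are illustrative and not the construction of the paper:

```python
import numpy as np

def random_spd_2x2(rng, mean_logk=0.0, sd_logk=0.3, sd_phi=0.2):
    """Sample a random 2x2 SPD tensor with independent control of
    scaling (log-normal eigenvalues) and orientation (random rotation).
    Parameters are hypothetical, chosen only for illustration."""
    k = np.exp(rng.normal(mean_logk, sd_logk, size=2))  # positive eigenvalues
    phi = rng.normal(0.0, sd_phi)                        # orientation angle
    c, s = np.cos(phi), np.sin(phi)
    R = np.array([[c, -s], [s, c]])
    return R @ np.diag(k) @ R.T          # SPD by construction

rng = np.random.default_rng(3)
K = random_spd_2x2(rng)
```

Setting `sd_phi=0` would freeze the orientation (a higher symmetry class for the directional part), while `sd_logk` alone then controls the scaling uncertainty, mirroring the independent control discussed above.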
In this work, we consider symmetric positive definite pencils depending on two parameters. That is, we are concerned with the generalized eigenvalue problem $A(x)-\lambda B(x)$, where $A$ and $B$ are symmetric matrix-valued functions in ${\mathbb R}^{n\times n}$, smoothly depending on parameters $x\in \Omega\subset {\mathbb R}^2$; further, $B$ is also positive definite. In general, the eigenvalues of this multiparameter problem will not be smooth, the lack of smoothness resulting from eigenvalues being equal at some parameter values (conical intersections). We first give general theoretical results on the smoothness of eigenvalues and eigenvectors for the present generalized eigenvalue problem, and hence for the corresponding projections, and then perform a numerical study of the statistical properties of coalescing eigenvalues for pencils where $A$ and $B$ are either full or banded, for several bandwidths. Our numerical study is performed with respect to a random matrix ensemble that respects the underlying engineering problems motivating our study.
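For fixed parameters $x$, the pencil $A-\lambda B$ with $B$ SPD reduces to a standard symmetric eigenvalue problem through a Cholesky factorization of $B$, which is why its eigenvalues are real and its eigenvectors can be taken $B$-orthonormal. A minimal numpy sketch of this reduction (the helper name is ours, not the authors' code):

```python
import numpy as np

def pencil_eigh(A, B):
    """Solve A v = lambda B v for symmetric A and SPD B via the
    Cholesky reduction B = L L^T, C = L^{-1} A L^{-T}."""
    L = np.linalg.cholesky(B)
    Linv = np.linalg.inv(L)
    C = Linv @ A @ Linv.T             # standard symmetric problem
    w, Y = np.linalg.eigh(C)          # real eigenvalues, ascending
    V = Linv.T @ Y                    # back-transform: V is B-orthonormal
    return w, V

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4)); A = A + A.T          # symmetric
G = rng.standard_normal((4, 4)); B = G @ G.T + 4 * np.eye(4)  # SPD
w, V = pencil_eigh(A, B)
```

Tracking `w` over a grid of parameter values $x$ is how the conical intersections mentioned above show up numerically: two branches of `w` touch at isolated points of $\Omega$.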
We present a class of reduced basis (RB) methods for the iterative solution of parametrized symmetric positive-definite (SPD) linear systems. The essential ingredients are a Galerkin projection of the underlying parametrized system onto a reduced basis space to obtain a reduced system; an adaptive greedy algorithm to efficiently determine sampling parameters and associated basis vectors; an offline-online computational procedure and a multi-fidelity approach to decouple the construction and application phases of the reduced basis method; and solution procedures to employ the reduced basis approximation as a {\em stand-alone iterative solver} or as a {\em preconditioner} in the conjugate gradient method. We present numerical examples to demonstrate the performance of the proposed methods in comparison with multigrid methods. Numerical results show that, when applied to linear systems resulting from discretizing the Poisson equation, the speed of convergence of our methods matches or surpasses that of the multigrid-preconditioned conjugate gradient method, while their computational cost per iteration is significantly smaller, providing a feasible alternative when the multigrid approach is out of reach due to timing or memory constraints for large systems. Moreover, numerical results verify that this new class of reduced basis methods, when applied as a stand-alone solver or as a preconditioner, is capable of achieving accuracy at the level of the {\em truth approximation}, which is far beyond the RB level.
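The first ingredient, the Galerkin projection, can be sketched directly: project the large SPD system onto the reduced basis, solve the small reduced system, and lift the result back. A minimal numpy sketch under simplifying assumptions (a fixed orthonormal basis `Q`; function name is illustrative):

```python
import numpy as np

def rb_galerkin_solve(A, b, Q):
    """Galerkin projection of an SPD system A x = b onto the reduced
    basis Q (columns assumed linearly independent): solve the small
    system (Q^T A Q) y = Q^T b and lift back, x_rb = Q y."""
    Ar = Q.T @ A @ Q                  # reduced SPD system (k x k)
    br = Q.T @ b
    y = np.linalg.solve(Ar, br)
    return Q @ y

# If the true solution happens to lie in span(Q), the Galerkin
# projection recovers it exactly.
rng = np.random.default_rng(2)
G = rng.standard_normal((8, 8))
A = G @ G.T + 8 * np.eye(8)           # SPD test matrix
Q, _ = np.linalg.qr(rng.standard_normal((8, 3)))
x_true = Q @ rng.standard_normal(3)
x_rb = rb_galerkin_solve(A, A @ x_true, Q)
```

The greedy sampling and offline-online decomposition described above are precisely about choosing `Q` so that solutions for new parameter values lie close to its span, making this cheap reduced solve an effective solver or preconditioner.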