
Energy scaling laws for geometrically linear elasticity models for microstructures in shape memory alloys

Posted by: Sergio Conti
Publication date: 2020
Research field:
Language: English





We consider a singularly-perturbed two-well problem in the context of planar geometrically linear elasticity to model a rectangular martensitic nucleus in an austenitic matrix. We derive the scaling regimes for the minimal energy in terms of the problem parameters, which represent the shape of the nucleus, the quotient of the elastic moduli of the two phases, the surface energy constant, and the volume fraction of the two martensitic variants. We identify several different scaling regimes, which are distinguished either by the exponents in the parameters, or by logarithmic corrections, for which we have matching upper and lower bounds.
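For orientation, a schematic singularly perturbed two-well energy of the kind studied in this class of problems can be written as below; this is an illustrative template only, since the actual functional of the paper additionally encodes the modulus quotient, the surface-energy constant, and the volume fraction of the two variants.

\[
E_{\varepsilon}(u,\chi) = \int_{\Omega} \bigl| e(u) - \chi\, e_1 - (1-\chi)\, e_2 \bigr|^{2}\, dx + \varepsilon\, |D\chi|(\Omega),
\qquad e(u) = \tfrac{1}{2}\bigl(Du + Du^{T}\bigr),
\]

where $\Omega$ is the reference domain, $e_1, e_2$ are the two martensitic strains, $\chi \colon \Omega \to \{0,1\}$ selects between them, and the $\varepsilon$-term penalizes the interfacial area between the variants.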




Read also

Thilo Simon, 2017
We analyze generic sequences for which the geometrically linear energy \[E_\eta(u,\chi) := \eta^{-\frac{2}{3}} \int_{B_{0}(1)} \left| e(u) - \sum_{i=1}^3 \chi_i e_i \right|^2 \, dx + \eta^{\frac{1}{3}} \sum_{i=1}^3 |D\chi_i|(B_{0}(1))\] remains bounded in the limit $\eta \to 0$. Here $e(u) := \tfrac{1}{2}(Du + Du^T)$ is the (linearized) strain of the displacement $u$, the strains $e_i$ correspond to the martensite strains of a shape memory alloy undergoing cubic-to-tetragonal transformations, and $\chi_i \colon B_{0}(1) \to \{0,1\}$ is the partition into phases. In this regime it is known that, in addition to simple laminates, branched structures are also possible, which, if austenite were present, would enable the alloy to form habit planes. In an ansatz-free manner we prove that the alignment of macroscopic interfaces between martensite twins is as predicted by well-known rank-one conditions. Our proof proceeds via the non-convex, non-discrete-valued differential inclusion \[e(u) \in \bigcup_{1 \leq i \neq j \leq 3} \operatorname{conv}\{e_i, e_j\}\] satisfied by the weak limits of bounded-energy sequences, of which we classify all solutions. In particular, there exist no convex integration solutions of the inclusion with complicated geometric structures.
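Concretely, membership in the convex hull of two of the strains means that at almost every point the limiting strain is a convex combination of them: with a volume-fraction function $\theta$ (notation introduced here only for illustration),

\[
e(u)(x) = \theta(x)\, e_i + \bigl(1 - \theta(x)\bigr)\, e_j, \qquad \theta(x) \in [0,1],
\]

for some pair $i \neq j$.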
There is a recent trend in machine learning to increase model quality by growing models to sizes previously thought to be unreasonable. Recent work has shown that autoregressive generative models with cross-entropy objective functions exhibit smooth power-law relationships, or scaling laws, that predict model quality from model size, training set size, and the available compute budget. These scaling laws allow one to choose nearly optimal hyper-parameters given constraints on available training data, model parameter count, or training computation budget. In this paper, we demonstrate that acoustic models trained with an auto-predictive coding loss behave as if they are subject to similar scaling laws. We extend previous work to jointly predict loss due to model size, to training set size, and to the inherent irreducible loss of the task. We find that the scaling laws accurately match model performance over two orders of magnitude in both model size and training set size, and make predictions about the limits of model performance.
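As an illustration of how such joint scaling laws are typically fitted, the sketch below assumes a common additive parametric form $L(N,D) = E + A N^{-\alpha} + B D^{-\beta}$ (irreducible loss plus model-size and data-size terms) and synthetic data; the functional form, constants, and data are assumptions for illustration, not the parametrization or measurements from the paper.

```python
# Minimal sketch of fitting an assumed joint scaling law
#   L(N, D) = E + A * N**(-alpha) + B * D**(-beta),
# where N is the parameter count, D the training-set size, and E the
# irreducible loss. All constants and data below are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def scaling_law(X, E, A, alpha, B, beta):
    N, D = X
    return E + A * N ** (-alpha) + B * D ** (-beta)

# Synthetic "measurements" generated from known constants plus small noise.
rng = np.random.default_rng(0)
N = np.tile([1e6, 3e6, 1e7, 3e7, 1e8, 3e8], 2)
D = np.repeat([1e8, 1e9], 6)
loss = scaling_law((N, D), 1.5, 8.0, 0.15, 20.0, 0.2) + rng.normal(0.0, 0.01, N.size)

# Fit the five constants, then extrapolate to a larger model and dataset.
popt, _ = curve_fit(scaling_law, (N, D), loss,
                    p0=[1.0, 10.0, 0.1, 10.0, 0.1], maxfev=20000)
E, A, alpha, B, beta = popt
print(f"fitted: E={E:.2f}, alpha={alpha:.3f}, beta={beta:.3f}")
print("predicted loss at N=1e9, D=1e10:", scaling_law((1e9, 1e10), *popt))
```

In practice the fitted exponents and the irreducible-loss term are what allow extrapolation beyond the measured range of model and dataset sizes.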
Richard Craster, 2018
We make precise some results on the cloaking of displacement fields in linear elasticity. In the spirit of transformation media theory, the transformed governing equations in the Cosserat and Willis frameworks are shown to be equivalent to certain high-contrast small-defect problems for the usual Navier equations. We discuss near-cloaking for elasticity systems via a regularized transform and perform numerical experiments to illustrate our near-cloaking results. We also study the sharpness of the estimates from [H. Ammari, H. Kang, K. Kim and H. Lee, J. Diff. Eq. 254, 4446-4464 (2013)], wherein the convergence of the solutions to the transmission problems is investigated when the Lamé parameters in the inclusion tend to extreme values. Both soft and hard inclusion limits are studied, and we also touch upon the finite frequency case. Finally, we propose an approximate isotropic cloak algorithm for a symmetrized Cosserat cloak.
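For reference (in notation chosen here, not taken from the paper), the usual static Navier equations of isotropic linear elasticity, written with the Lamé parameters $\lambda$ and $\mu$ and a body force $f$, read

\[
\mu\, \Delta u + (\lambda + \mu)\, \nabla (\nabla \cdot u) + f = 0,
\]

and it is for this system that the transformed Cosserat and Willis equations are related to high-contrast small-defect problems.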
We consider a strongly nonlinear PDE system describing solid-solid phase transitions in shape memory alloys. The system accounts for the evolution of an order parameter (related to different symmetries of the crystal lattice in the phase configurations), of the stress (and the displacement), and of the absolute temperature. The resulting equations present several technical difficulties to be tackled: in particular, we emphasize the presence of nonlinear coupling terms, higher-order dissipative contributions, and possibly multivalued operators. As for the evolution of temperature, a highly nonlinear parabolic equation has to be solved for a right-hand side that is controlled only in $L^1$. We prove the existence of a solution for a regularized version by use of a time discretization technique. Then, we perform suitable a priori estimates which allow us to pass to the limit and find a weak global-in-time solution to the system.
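Schematically, and as a generic description of the technique rather than the paper's specific scheme, a time discretization of this kind replaces the evolution problem by a sequence of stationary problems: with time step $\tau = T/n$ one solves, for $k = 1, \dots, n$,

\[
\frac{u^{k} - u^{k-1}}{\tau} + \mathcal{A}(u^{k}) = f^{k},
\]

derives a priori estimates that are uniform in $\tau$, and then passes to the limit $\tau \to 0$ in the piecewise-constant or piecewise-linear interpolants of the discrete solutions.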
We study empirical scaling laws for language model performance on the cross-entropy loss. The loss scales as a power-law with model size, dataset size, and the amount of compute used for training, with some trends spanning more than seven orders of magnitude. Other architectural details such as network width or depth have minimal effects within a wide range. Simple equations govern the dependence of overfitting on model/dataset size and the dependence of training speed on model size. These relationships allow us to determine the optimal allocation of a fixed compute budget. Larger models are significantly more sample-efficient, such that optimally compute-efficient training involves training very large models on a relatively modest amount of data and stopping significantly before convergence.
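As a generic template (not the exact fitted equations of the paper), such single-variable power-law trends can be written as

\[
L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N}, \qquad L(D) \approx \left(\frac{D_c}{D}\right)^{\alpha_D},
\]

with constants $N_c, D_c$ and exponents $\alpha_N, \alpha_D$ fitted to measured losses; trading off the two terms under a fixed compute budget is what yields the compute-optimal allocation described above.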