
Elliptic-regularization of nonpotential perturbations of doubly-nonlinear gradient flows of nonconvex energies: A variational approach

Published by: Stefano Melchionna
Publication date: 2017
Paper language: English





This paper presents a variational approach to doubly-nonlinear (gradient) flows (P) of nonconvex energies with nonpotential perturbations (i.e., perturbation terms without any potential structure). An elliptic-in-time regularization $\mathrm{(P)}_\varepsilon$ of the original equation is introduced, and a variational approach and a fixed-point argument are then employed to prove existence of strong solutions to the regularized equations. More precisely, we first introduce a functional (defined for each entire trajectory and involving a small approximation parameter $\varepsilon$) whose Euler-Lagrange equation corresponds to the elliptic-in-time regularization of an unperturbed (i.e., without nonpotential perturbations) doubly-nonlinear flow. Second, due to the presence of the nonpotential perturbation, a fixed-point argument is performed to construct strong solutions $u_\varepsilon$ to the elliptic-in-time regularized equations $\mathrm{(P)}_\varepsilon$: the minimization problem mentioned above defines an operator $S$ whose fixed points correspond to solutions $u_\varepsilon$ of $\mathrm{(P)}_\varepsilon$. Finally, a strong solution to the original equation (P) is obtained by passing to the limit of $u_\varepsilon$ as $\varepsilon \to 0$. Applications of the abstract theory developed in the present paper to concrete PDEs are also exhibited.
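To fix ideas, the structure described above can be sketched schematically as follows. The dissipation potential $\psi$, the energy $\phi$, the perturbation operator $B$, and the exponential weight are notational assumptions of this sketch, not taken verbatim from the paper:

```latex
% Doubly-nonlinear flow (P) with a nonpotential perturbation B
% (\psi: dissipation potential, \phi: nonconvex energy -- assumed notation):
\[
  \partial\psi\bigl(u'(t)\bigr) + \partial\phi\bigl(u(t)\bigr)
  + B\bigl(t, u(t)\bigr) \ni 0, \qquad t \in (0, T).
\]
% A weighted-energy-dissipation (WED) type functional defined over entire
% trajectories; its Euler-Lagrange equation yields an elliptic-in-time
% regularization of the unperturbed flow:
\[
  I_\varepsilon(u) = \int_0^T e^{-t/\varepsilon}
  \Bigl( \psi\bigl(u'(t)\bigr)
  + \tfrac{1}{\varepsilon}\,\phi\bigl(u(t)\bigr) \Bigr)\,\mathrm{d}t.
\]
```

Minimizers of such a functional solve a problem that is elliptic in time (hence the regularizing effect), and the weight concentrates at $t = 0$ as $\varepsilon \to 0$, which is what allows the causal limit equation to be recovered.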




Read also

In this paper we present a variational technique that handles coarse-graining and passing to a limit in a unified manner. The technique is based on a duality structure, which is present in many gradient flows and other variational evolutions, and which often arises from a large-deviations principle. It has three main features: (A) a natural interaction between the duality structure and the coarse-graining, (B) application to systems with non-dissipative effects, and (C) application to coarse-graining of approximate solutions which solve the equation only to some error. As examples, we use this technique to solve three limit problems: the overdamped limit of the Vlasov-Fokker-Planck equation and the small-noise limit of randomly perturbed Hamiltonian systems with one and with many degrees of freedom.
We propose a variational form of the BDF2 method as an alternative to the commonly used minimizing movement scheme for the time-discrete approximation of gradient flows in abstract metric spaces. Assuming uniform semi-convexity (but no smoothness) of the augmented energy functional, we prove well-posedness of the method and convergence of the discrete approximations to a curve of steepest descent. In a smooth Hilbertian setting, classical theory would predict a convergence order of two in time; in the general metric setting and under our weak hypotheses, we prove a convergence order of one-half. Further, we illustrate these results with numerical experiments for gradient flows on a compact Riemannian manifold, in a Hilbert space, and in the $L^2$-Wasserstein metric.
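In the smooth Hilbertian setting mentioned above, the variational BDF2 step reduces to the classical BDF2 recursion for $u' = -\nabla E(u)$; each step minimizes $\frac{1}{\tau}\|u-u_{n-1}\|^2 - \frac{1}{4\tau}\|u-u_{n-2}\|^2 + E(u)$. A minimal sketch under that simplification (the function names and the quadratic toy energy are illustrative assumptions, not the metric-space construction of the paper):

```python
import numpy as np

def bdf2_gradient_flow(grad_E_solve, u0, u1, tau, n_steps):
    """March the gradient flow u' = -grad E(u) with BDF2.

    Each step solves (3 u - 4 u_prev + u_prevprev) / (2 tau) = -grad E(u),
    the Euler-Lagrange equation of the variational BDF2 functional
        (1/tau) |u - u_prev|^2 - (1/(4 tau)) |u - u_prevprev|^2 + E(u).
    `grad_E_solve(rhs, tau)` must return the u solving
        3 u / (2 tau) + grad E(u) = rhs
    (a smooth-setting stand-in for the metric-space minimization).
    """
    traj = [u0, u1]
    for _ in range(n_steps):
        rhs = (4.0 * traj[-1] - traj[-2]) / (2.0 * tau)
        traj.append(grad_E_solve(rhs, tau))
    return np.array(traj)

# Quadratic toy energy E(u) = lam/2 * u^2, so grad E(u) = lam * u and the
# implicit equation is linear: u = rhs / (3/(2 tau) + lam).
lam, tau = 1.0, 0.01
solve = lambda rhs, tau: rhs / (3.0 / (2.0 * tau) + lam)

u0 = 1.0
u1 = u0 / (1.0 + tau * lam)        # one implicit Euler step to start
traj = bdf2_gradient_flow(solve, u0, u1, tau, n_steps=100)
exact = np.exp(-lam * tau * np.arange(len(traj)))
print(np.max(np.abs(traj - exact)))  # small discretization error
```

On this linear test problem the iterates decay monotonically toward the minimizer and track the exact solution $e^{-\lambda t}$ to second-order accuracy, consistent with the classical smooth theory (not with the order-one-half metric result, which concerns far weaker hypotheses).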
Matrix completion has attracted much interest in the past decade in machine learning and computer vision. For low-rank promotion in matrix completion, the nuclear norm penalty is convenient due to its convexity but suffers from a bias problem. Recently, various algorithms using nonconvex penalties have been proposed, among which the proximal gradient descent (PGD) algorithm is one of the most efficient and effective. For the nonconvex PGD algorithm, whether it converges to a local minimizer, and at what rate, has remained unclear. This work provides a nontrivial analysis of the PGD algorithm in the nonconvex case. Besides convergence to a stationary point for a generalized nonconvex penalty, we give a deeper analysis of a popular and important class of nonconvex penalties with discontinuous thresholding functions. For such penalties, we establish finite-rank convergence, convergence to a restricted strictly local minimizer, and an eventually linear convergence rate of the PGD algorithm. Moreover, convergence to a local minimizer is proved for the hard-thresholding penalty. Our result is the first to show that nonconvex regularized matrix completion has only restricted strictly local minimizers, and that the PGD algorithm can converge to such minimizers with an eventually linear rate under certain conditions. The PGD algorithm is also illustrated via experiments. Code is available at https://github.com/FWen/nmc.
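As a minimal sketch of the kind of iteration discussed above (a gradient step on the data-fit term followed by a discontinuous thresholding prox on the singular values), assuming an illustrative step size, threshold, and toy data rather than the exact setting analyzed in the paper:

```python
import numpy as np

def svd_hard_threshold(X, thresh):
    """Proximal map of a hard-thresholding (l0-type) spectral penalty:
    singular values <= thresh are set to zero, the rest are kept."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s[s <= thresh] = 0.0
    return (U * s) @ Vt

def pgd_matrix_completion(M_obs, mask, thresh, step=1.0, n_iter=200):
    """PGD for: min_X 0.5*||mask*(X - M_obs)||_F^2 + hard spectral penalty.
    The gradient of the data-fit term is mask*(X - M_obs); each iteration
    takes a gradient step and applies the thresholding prox."""
    X = np.where(mask, M_obs, 0.0)
    for _ in range(n_iter):
        X = svd_hard_threshold(X - step * mask * (X - M_obs), thresh)
    return X

# Rank-1 toy problem with roughly half of the entries observed.
rng = np.random.default_rng(0)
M = rng.normal(size=(8, 1)) @ rng.normal(size=(1, 8))
mask = rng.random(M.shape) < 0.5
X_hat = pgd_matrix_completion(M, mask, thresh=0.5)
```

The discontinuity of the prox (a singular value jumps to zero the moment it crosses the threshold) is exactly what makes the convergence analysis of such penalties delicate, and it is also what produces the finite-rank behavior of the iterates.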
In this paper we discuss a family of viscous Cahn-Hilliard equations with a non-smooth viscosity term. This system may be viewed as an approximation of a forward-backward parabolic equation. The resulting problem is highly nonlinear, coupling two nonlinearities with the diffusion term in the same equation. In particular, we prove existence of solutions for the related initial and boundary value problem. Under suitable assumptions, we also establish uniqueness and continuous dependence on the data.
This paper addresses the existence and regularity of weak solutions for a fully parabolic model of chemotaxis with prevention of overcrowding, which degenerates in a two-sided fashion and includes an extra nonlinearity represented by a $p$-Laplacian diffusion term. To prove the existence of weak solutions, a Schauder fixed-point argument is applied to a regularized problem and the compactness method is used to pass to the limit. The local Hölder regularity of weak solutions is established using the method of intrinsic scaling. The results are a contribution to showing, qualitatively, to what extent the properties of the classical Keller-Segel chemotaxis models are preserved in a more general setting. Some numerical examples illustrate the model.