
Second-order edge-penalization in the Ambrosio-Tortorelli functional

Posted by: Martin Burger
Publication date: 2015
Language: English





We propose and study two variants of the Ambrosio-Tortorelli functional in which the first-order penalization of the edge variable $v$ is replaced by a second-order term depending on the Hessian or on the Laplacian of $v$, respectively. We show that both variants provide an elliptic approximation of the Mumford-Shah functional in the sense of $\Gamma$-convergence. In particular, the variant with the Laplacian penalization can be implemented without additional difficulties compared to the standard Ambrosio-Tortorelli functional, yet the computational results indicate several advantages. First of all, the diffuse approximation of the edge contours appears smoother and clearer for minimizers of the second-order functional. Moreover, the convergence of alternating minimization algorithms seems improved for the new functional. We illustrate these findings with several computational results.
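For context, the following LaTeX sketch records the standard Ambrosio-Tortorelli functional and the second-order replacements of the edge penalty described above. The normalization of the constants and the $\varepsilon^3$ scaling of the second-order terms are our assumptions for illustration, not taken verbatim from the paper.

% Standard Ambrosio-Tortorelli functional (weights normalized to 1):
\[
  AT_\varepsilon(u,v) \;=\; \int_\Omega (u-g)^2 \,\mathrm{d}x
  \;+\; \int_\Omega v^2\,|\nabla u|^2 \,\mathrm{d}x
  \;+\; \int_\Omega \Big( \varepsilon\,|\nabla v|^2
  + \frac{(1-v)^2}{4\varepsilon} \Big)\,\mathrm{d}x .
\]
% Second-order variants: the first-order edge penalty is replaced by a
% Hessian or a Laplacian term; the \varepsilon^3 scaling is an assumed
% choice that keeps the optimal transition profile of width O(\varepsilon).
\[
  \varepsilon\,|\nabla v|^2 \;\rightsquigarrow\; \varepsilon^3\,|\nabla^2 v|^2
  \quad\text{(Hessian variant)},
  \qquad
  \varepsilon\,|\nabla v|^2 \;\rightsquigarrow\; \varepsilon^3\,|\Delta v|^2
  \quad\text{(Laplacian variant)}.
\]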




Read also

Motivated by models of fracture mechanics, this paper is devoted to the analysis of unilateral gradient flows of the Ambrosio-Tortorelli functional, where unilaterality comes from an irreversibility constraint on the fracture density. In the spirit of gradient flows in metric spaces, such evolutions are defined in terms of curves of maximal unilateral slope and are constructed by means of implicit Euler schemes. An asymptotic analysis in the Mumford-Shah regime is also carried out. It shows convergence towards a generalized heat equation outside a time-increasing crack set.
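As a minimal sketch of the implicit Euler (minimizing movement) construction mentioned above: with the convention that $v = 1$ is the sound state, irreversibility of the fracture reads $v \le v_k$, and one step for the edge variable takes the form below. The notation is ours rather than the paper's.

\[
  v_{k+1} \;\in\; \operatorname*{arg\,min}_{v \,\le\, v_k}
  \Big\{\, AT_\varepsilon(u_k, v) \;+\; \frac{1}{2\tau}\,\| v - v_k \|_{L^2(\Omega)}^2 \,\Big\},
\]
% \tau > 0 is the time step; letting \tau -> 0 produces a curve of
% maximal unilateral slope.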
Yufei Yu, Weizhang Huang (2017)
The Ambrosio-Tortorelli functional is a phase-field approximation of the Mumford-Shah functional that has been widely used for image segmentation. The approximation has the advantages of being easy to implement, maintaining the segmentation ability, and $\Gamma$-converging to the Mumford-Shah functional. However, it has been observed in actual computation that the segmentation ability of the Ambrosio-Tortorelli functional varies significantly with different values of the parameter, and it even fails to $\Gamma$-converge to the original functional in some cases. In this paper we present an asymptotic analysis of the gradient flow equation of the Ambrosio-Tortorelli functional and show that the functional can have different segmentation behavior for small but finite values of the regularization parameter, and eventually loses its segmentation ability as the parameter goes to zero when the input image is treated as a continuous function. This is consistent with the existing observations as well as the numerical examples presented in this work. A selection strategy for the regularization parameter and a scaling procedure for the solution are devised based on the analysis. Numerical results show that they lead to good segmentation with the Ambrosio-Tortorelli functional for real images.
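The gradient flow equations analyzed here are, formally, the $L^2$ gradient flow of the Ambrosio-Tortorelli functional. A sketch up to constant rescalings, with fidelity weight $\beta$ and input image $g$ (our normalization, not necessarily the paper's):

\[
\begin{aligned}
  \partial_t u &= \nabla\cdot\big( v^2\,\nabla u \big) \;-\; \beta\,(u - g),\\
  \partial_t v &= \varepsilon\,\Delta v \;+\; \frac{1-v}{4\varepsilon} \;-\; v\,|\nabla u|^2 .
\end{aligned}
\]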
In this paper we study second-order master equations arising from mean field games with common noise over an arbitrary time duration. A classical solution typically requires the monotonicity condition (or a small time duration) and sufficiently smooth data. While keeping the monotonicity condition, our goal is to relax the regularity of the data, which is an open problem in the literature. In particular, we do not require any differentiability in terms of the measures, which prevents us from obtaining classical solutions. We propose three weaker notions of solutions, named "good solutions", "weak solutions", and "viscosity solutions", respectively, and establish the wellposedness of the master equation under all three notions. We emphasize that, due to the game nature, one cannot expect a comparison principle even for classical solutions. The key to global (in time) wellposedness is a uniform a priori estimate for the Lipschitz continuity of the solution in the measures. The monotonicity condition is crucial for this uniform estimate, and thus for the existence of the global solution, but is not needed for uniqueness. To facilitate our analysis, we construct a smooth mollifier for functions on Wasserstein space, which is new in the literature and interesting in its own right. As an important application of our results, we prove the convergence of the Nash system, a high-dimensional system of PDEs arising from the corresponding $N$-player game, under mild regularity requirements. We also prove a propagation of chaos property for the associated optimal trajectories.
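The monotonicity condition invoked here is, in the standard mean field game literature, the Lasry-Lions monotonicity of the coupling. A common statement, in our notation (with $F$ the running cost coupling a player to the population distribution):

\[
  \int_{\mathbb{R}^d} \big( F(x,\mu_1) - F(x,\mu_2) \big)\, \mathrm{d}(\mu_1 - \mu_2)(x) \;\ge\; 0
  \qquad \text{for all probability measures } \mu_1, \mu_2 .
\]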
Zeng-hui Yang (2021)
We derive the second-order approximation (PT2) to the ensemble correlation energy functional by applying Görling-Levy perturbation theory to ensemble density-functional theory (EDFT). Its performance is checked by calculating excitation energies with the direct ensemble correction method in 1D model systems and 3D atoms, using numerically exact Kohn-Sham orbitals and potentials. Compared with the exchange-only approximation, the inclusion of the ensemble PT2 correlation improves the excitation energies in 1D model systems in most cases, including double excitations and charge-transfer excitations. However, the excitation energies for atoms are generally worse with PT2. We find that the failure of PT2 in atoms is due to the two contributions of an orbital-dependent functional to excitation energies being inconsistent in the calculations. We also analyze the convergence of PT2 excitation energies with respect to the number of unoccupied orbitals.
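For orientation, the ground-state Görling-Levy second-order correlation energy has the familiar sum-over-states form below; the ensemble version used in the paper generalizes this expression. The notation (Kohn-Sham determinants $\Phi_n$, total Kohn-Sham energies $E_n^{\mathrm{KS}}$, Hartree-exchange potential operator $\hat V_{\mathrm{Hx}}$) is a schematic assumption rather than the paper's:

\[
  E_c^{(2)} \;=\; \sum_{n \neq 0}
  \frac{\big| \langle \Phi_0 \,|\, \hat V_{ee} - \hat V_{\mathrm{Hx}} \,|\, \Phi_n \rangle \big|^2}
       {E_0^{\mathrm{KS}} - E_n^{\mathrm{KS}}} .
\]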
Optimization in machine learning, both theoretical and applied, is presently dominated by first-order gradient methods such as stochastic gradient descent. Second-order optimization methods, which involve second derivatives and/or second-order statistics of the data, are far less prevalent despite strong theoretical properties, due to their prohibitive computation, memory, and communication costs. In an attempt to bridge this gap between theoretical and practical optimization, we present a scalable implementation of a second-order preconditioned method (concretely, a variant of full-matrix Adagrad) that, along with several critical algorithmic and numerical improvements, provides significant convergence and wall-clock time improvements compared to conventional first-order methods on state-of-the-art deep models. Our novel design effectively utilizes the prevalent heterogeneous hardware architecture for training deep models, consisting of a multicore CPU coupled with multiple accelerator units. We demonstrate superior performance compared to the state of the art on very large learning tasks such as machine translation with Transformers, language modeling with BERT, click-through rate prediction on Criteo, and image classification on ImageNet with ResNet-50.
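As a point of reference for the method described above, the exact (and, at deep-model scale, impractical) full-matrix Adagrad update that such preconditioned methods approximate is:

\[
  G_t \;=\; \sum_{s=1}^{t} g_s\, g_s^{\top},
  \qquad
  x_{t+1} \;=\; x_t \;-\; \eta\, G_t^{-1/2}\, g_t ,
\]
where $g_s$ are stochastic gradients, $\eta$ is the step size, and $G_t^{-1/2}$ is the inverse matrix square root. Scalable variants typically replace the full statistic $G_t$ with smaller structured (for example, block or Kronecker-factored) approximations; the specific structure used in this paper is not detailed in the abstract.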