
A Proximal-Gradient Algorithm for Crystal Surface Evolution

Posted by Jeremy Marzuola
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





As a counterpoint to recent numerical methods for crystal surface evolution, which agree well with microscopic dynamics but suffer from significant stiffness that prevents simulation on fine spatial grids, we develop a new numerical method based on the macroscopic partial differential equation, leveraging its formal structure as the gradient flow of the total variation energy, with respect to a weighted $H^{-1}$ norm. This gradient flow structure relates to several metric space gradient flows of recent interest, including 2-Wasserstein flows and their generalizations to nonlinear mobilities. We develop a novel semi-implicit time discretization of the gradient flow, inspired by the classical minimizing movements scheme (known as the JKO scheme in the 2-Wasserstein case). We then use a primal dual hybrid gradient (PDHG) method to compute each element of the semi-implicit scheme. In one dimension, we prove convergence of the PDHG method to the semi-implicit scheme, under general integrability assumptions on the mobility and its reciprocal. Finally, by taking finite difference approximations of our PDHG method, we arrive at a fully discrete numerical algorithm, with iterations that converge at a rate independent of the spatial discretization: in particular, the convergence properties do not deteriorate as we refine our spatial grid. We close with several numerical examples illustrating the properties of our method, including facet formation at local maxima, pinning at local minima, and convergence as the spatial and temporal discretizations are refined.
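The following is only a rough sketch, not the authors' weighted $H^{-1}$ scheme: it applies a generic primal dual hybrid gradient (Chambolle-Pock) iteration to one-dimensional total variation denoising with a quadratic data term, purely to illustrate the primal-dual structure the paper builds on. The function names, step sizes, and test signal are assumptions made for this example.

import numpy as np

def grad(u):
    # forward difference (length n-1), homogeneous Neumann boundary
    return np.diff(u)

def grad_adj(p):
    # adjoint of the forward-difference operator
    out = np.zeros(len(p) + 1)
    out[1:] += p
    out[:-1] -= p
    return out

def pdhg_tv_denoise(f, lam=0.5, tau=0.25, sigma=0.25, iters=500):
    # Generic PDHG iteration for min_u 0.5*||u - f||^2 + lam*||grad u||_1.
    # Convergence requires tau*sigma*||grad||^2 <= 1; in 1D, ||grad||^2 <= 4.
    u = f.copy()
    u_bar = u.copy()
    p = np.zeros(len(f) - 1)
    for _ in range(iters):
        # dual step: ascent in p, then projection onto {|p_i| <= lam}
        # (the prox of the convex conjugate of lam*||.||_1)
        p = np.clip(p + sigma * grad(u_bar), -lam, lam)
        # primal step: prox of the quadratic data term 0.5*||u - f||^2
        u_next = (u - tau * grad_adj(p) + tau * f) / (1.0 + tau)
        # extrapolation step
        u_bar = 2.0 * u_next - u
        u = u_next
    return u

# example: TV-denoise a noisy step signal; flat facets emerge in the result
f = np.concatenate([np.zeros(50), np.ones(50)]) + 0.1 * np.random.randn(100)
u = pdhg_tv_denoise(f)

The same splitting pattern, a dual projection step for the total variation term, a primal proximal step for the data or metric term, and an extrapolation, is the structure that the paper's semi-implicit scheme adapts to the weighted $H^{-1}$ setting.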




Read also

Chenglong Bao, Chang Chen, 2020
In this paper, we compute the stationary states of the multicomponent phase-field crystal model by formulating it as a block constrained minimization problem. The original infinite-dimensional non-convex minimization problem is approximated by a finite-dimensional constrained non-convex minimization problem after an appropriate spatial discretization. To efficiently solve the above optimization problem, we propose a so-called adaptive block Bregman proximal gradient (AB-BPG) algorithm that fully exploits the problem's block structure. The proposed method updates each order parameter alternately, and the update order of blocks can be chosen in a deterministic or random manner. In addition, we choose the step size by developing a practical line search approach such that the generated sequence either keeps energy dissipation or has a controllable subsequence with energy dissipation. The convergence of the proposed method is established, using the Bregman divergence, without requiring global Lipschitz continuity of the derivative of the bulk energy part. Numerical results on computing stationary ordered structures in binary, ternary, and quinary component coupled-mode Swift-Hohenberg models show a significant acceleration over many existing methods.
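As a generic illustration of a single Bregman proximal gradient update (not the block-wise AB-BPG algorithm summarized above), the sketch below uses the negative-entropy kernel on the probability simplex, for which the Bregman proximal map has a closed multiplicative form. The function name, least-squares objective, and step size are assumptions made for the example.

import numpy as np

def entropic_bregman_step(x, grad_f, step):
    # One Bregman proximal-gradient (mirror-descent) update on the probability
    # simplex with kernel h(x) = sum_i x_i*log(x_i):
    #   x_new = argmin_{z in simplex} <grad_f, z> + (1/step) * D_h(z, x),
    # whose minimizer is the multiplicative update below.
    y = x * np.exp(-step * grad_f)
    return y / y.sum()

# example: minimize f(x) = 0.5*||A x - b||^2 over the simplex
A = np.random.randn(30, 5)
b = np.random.randn(30)
x = np.full(5, 1.0 / 5.0)
step = 1.0 / np.linalg.norm(A, 2) ** 2   # conservative step size
for _ in range(500):
    x = entropic_bregman_step(x, A.T @ (A @ x - b), step)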
Cadzow's algorithm is a signal denoising and recovery method designed for signals corresponding to low-rank Hankel matrices. In this paper we first introduce a Fast Cadzow's algorithm, developed by incorporating a novel subspace projection to reduce the high computational cost of the SVD in Cadzow's algorithm. Then a Gradient method and a Fast Gradient method are proposed to address the non-decreasing MSE issue that arises when applying Cadzow's or the Fast Cadzow's algorithm for signal denoising. Extensive empirical performance comparisons demonstrate that the proposed algorithms complete the denoising and recovery tasks more efficiently and effectively.
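For orientation, here is a rough sketch of the classical Cadzow iteration that the Fast Cadzow's, Gradient, and Fast Gradient methods above improve upon: alternate a rank-r truncated SVD of the Hankel matrix with anti-diagonal averaging to restore the Hankel structure. The function names, window length, and sinusoidal test signal are illustrative assumptions.

import numpy as np

def hankel(y, L):
    # L x (n-L+1) Hankel matrix built from the signal y
    n = len(y)
    return np.array([y[i:i + n - L + 1] for i in range(L)])

def antidiag_average(H):
    # project back onto Hankel structure by averaging each anti-diagonal
    L, M = H.shape
    y = np.zeros(L + M - 1)
    counts = np.zeros(L + M - 1)
    for i in range(L):
        y[i:i + M] += H[i]
        counts[i:i + M] += 1.0
    return y / counts

def cadzow(x, rank, L, iters=20):
    # classical Cadzow iteration: alternate rank truncation and Hankel projection
    y = np.asarray(x, dtype=float).copy()
    for _ in range(iters):
        H = hankel(y, L)
        U, s, Vt = np.linalg.svd(H, full_matrices=False)
        H_r = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # rank-r truncated SVD
        y = antidiag_average(H_r)
    return y

# example: denoise a sum of two real sinusoids (rank-4 Hankel structure)
t = np.arange(128)
noisy = np.sin(0.2 * t) + 0.5 * np.sin(0.5 * t) + 0.3 * np.random.randn(128)
denoised = cadzow(noisy, rank=4, L=64)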
A direct reconstruction algorithm based on Calderon's linearization method for the reconstruction of isotropic conductivities is proposed for anisotropic conductivities in two dimensions. To overcome the non-uniqueness of the anisotropic inverse conductivity problem, the entries of the unperturbed anisotropic tensors are assumed known a priori, and it remains to reconstruct the multiplicative scalar field. The quasi-conformal map in the plane facilitates the Calderon-based approach for anisotropic conductivities. The method is demonstrated on discontinuous radially symmetric conductivities of high and low contrast.
The inverse problem in Acousto-Electric tomography concerns the reconstruction of the electric conductivity in a domain from knowledge of the power density function in the interior of the body. This interior power density results from currents prescribed at boundary electrodes, and can be obtained through electrostatic boundary measurements together with auxiliary acoustic measurements. In Electrical Impedance Tomography, the complete electrode model is known to be the most accurate model for forward modelling. In this paper, the reconstruction problem of Acousto-Electric tomography is posed using the (smooth) complete electrode model, and a Levenberg-Marquardt iteration is formulated in appropriate function spaces. This results in a system of partial differential equations to be solved in each iteration. To increase the computational efficiency and stability, a strategy based on both the complete electrode model and the continuum model with Dirichlet boundary condition is proposed. The system of equations is implemented numerically for a two-dimensional scenario and the algorithm is tested on two different numerical phantoms, a heart-and-lung model and a human brain model. Several numerical experiments are carried out, confirming the feasibility, accuracy, and stability of the methods.
Decentralized optimization is a powerful paradigm that finds applications in engineering and learning design. This work studies decentralized composite optimization problems with non-smooth regularization terms. Most existing gradient-based proximal decentralized methods are known to converge to the optimal solution with sublinear rates, and it remains unclear whether this family of methods can achieve global linear convergence. To tackle this problem, this work assumes the non-smooth regularization term is common across all networked agents, which is the case for many machine learning problems. Under this condition, we design a proximal gradient decentralized algorithm whose fixed point coincides with the desired minimizer. We then provide a concise proof that establishes its linear convergence. In the absence of the non-smooth term, our analysis technique covers the well-known EXTRA algorithm and provides useful bounds on the convergence rate and step size.
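As background for the proximal gradient family discussed above, the sketch below shows the centralized proximal gradient (ISTA) iteration for l1-regularized least squares; the decentralized algorithm of the paper additionally mixes iterates across networked agents, which this sketch does not attempt. The function names and step-size rule are assumptions made for the example.

import numpy as np

def soft_threshold(x, t):
    # proximal operator of t*||.||_1
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def proximal_gradient(A, b, lam, step, iters=500):
    # ISTA for min_x 0.5*||A x - b||^2 + lam*||x||_1
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - b)                         # gradient of the smooth term
        x = soft_threshold(x - step * g, step * lam)  # prox of the non-smooth term
    return x

# example: recover a sparse vector from noisy linear measurements
A = np.random.randn(50, 20)
x_true = np.zeros(20)
x_true[:3] = [1.0, -2.0, 0.5]
b = A @ x_true + 0.05 * np.random.randn(50)
step = 1.0 / np.linalg.norm(A, 2) ** 2                # 1 / Lipschitz constant of the gradient
x_hat = proximal_gradient(A, b, lam=0.1, step=step)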