
A fast two-point gradient algorithm based on sequential subspace optimization method for nonlinear ill-posed problems

Posted by Guangyu Gao
Publication date: 2019
Language: English





In this paper, we propose and analyze a fast two-point gradient algorithm for solving nonlinear ill-posed problems, based on the sequential subspace optimization method. A complete convergence analysis is provided under the classical assumptions for iterative regularization methods. The design of the two-point gradient method involves the choice of the combination parameters, which is discussed systematically. Furthermore, detailed numerical simulations are presented for an inverse potential problem; they show that the proposed method strongly decreases the number of iterations and significantly reduces the overall computational time.
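
To make the iteration concrete, here is a minimal sketch of a generic two-point gradient loop with a discrepancy-principle stopping rule. It assumes a simple Nesterov-type combination parameter k/(k+3) and a fixed step size omega, and it is not the SESOP-based variant proposed in the paper; the toy diagonal operator F(x) = x + x^3 is purely illustrative.

    import numpy as np

    def tpg(F, dF, y_delta, x0, delta, tau=1.1, omega=0.05, max_iter=5000):
        """Generic two-point gradient iteration with a discrepancy-principle stop."""
        x_prev, x = x0.copy(), x0.copy()
        for k in range(max_iter):
            lam = k / (k + 3.0)                      # assumed Nesterov-type combination parameter
            z = x + lam * (x - x_prev)               # extrapolated (two-point) iterate
            r = F(z) - y_delta
            if np.linalg.norm(r) <= tau * delta:     # stop once the residual reaches the noise level
                return z, k
            x_prev, x = x, z - omega * dF(z).T @ r   # gradient step on 0.5 * ||F(z) - y_delta||^2
        return x, max_iter

    # toy usage with a nonlinear diagonal operator F(x)_i = x_i + x_i^3
    F = lambda x: x + x**3
    dF = lambda x: np.diag(1.0 + 3.0 * x**2)
    rng = np.random.default_rng(0)
    x_true = np.linspace(0.0, 1.0, 50)
    noise = rng.standard_normal(50)
    delta = 1e-3
    y_delta = F(x_true) + delta * noise / np.linalg.norm(noise)
    x_rec, stop_k = tpg(F, dF, y_delta, np.zeros(50), delta)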




Read also

In this paper, we consider Nesterov's Accelerated Gradient method for solving nonlinear inverse and ill-posed problems. Known to be a fast gradient-based iterative method for solving well-posed convex optimization problems, this method also leads to promising results for ill-posed problems. Here, we provide a convergence analysis of this method for ill-posed problems, based on the assumption of a locally convex residual functional. Furthermore, we demonstrate the usefulness of the method on a number of numerical examples based on a nonlinear diagonal operator and on an inverse problem in auto-convolution.
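
For reference, a commonly used form of Nesterov's accelerated (two-point) iteration for a nonlinear operator equation $F(x) = y^{\delta}$ with noise level $\delta$ is the following, with a fixed step size $\omega$ and a design parameter $\alpha \geq 3$ (the exact variant analyzed in the paper may differ in these choices):

$$ z_k = x_k + \frac{k-1}{k+\alpha-1}\,(x_k - x_{k-1}), \qquad x_{k+1} = z_k - \omega\, F'(z_k)^{*}\bigl(F(z_k) - y^{\delta}\bigr), $$

stopped at the first index $k$ with $\|F(x_k) - y^{\delta}\| \leq \tau\delta$ for some $\tau > 1$ (discrepancy principle).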
Classical optimization techniques often formulate the feasibility of a problem as set, equality, or inequality constraints. However, explicitly designing these constraints is challenging for complex real-world applications, and overly strict constraints may even lead to intractable optimization problems. On the other hand, it is still hard to incorporate data-dependent information into conventional numerical iterations. To partially address these limits, and inspired by the leader-follower gaming perspective, this work first introduces a bilevel-type formulation to jointly investigate the feasibility and optimality of nonconvex and nonsmooth optimization problems. We then develop an algorithmic framework that couples forward-backward proximal computations to optimize our established bilevel leader-follower model. We prove its convergence and estimate the convergence rate. Furthermore, a learning-based extension is developed, in which we establish an unrolling strategy to incorporate data-dependent network architectures into our iterations. Fortunately, it can be proved that, under some mild checking conditions, all our original convergence results are preserved for this learnable extension. As a nontrivial byproduct, we demonstrate how to apply this ensemble-like methodology to address different low-level vision tasks. Extensive experiments verify the theoretical results and show the advantages of our method against existing state-of-the-art approaches.
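
The forward-backward proximal computation referred to above is, in its plainest form, the proximal gradient iteration. The sketch below shows it for a hypothetical composite objective f(x) + mu * ||x||_1 with a least-squares f; the bilevel leader-follower coupling and the learned architectures of the paper are not reproduced here.

    import numpy as np

    def soft_threshold(v, t):
        """Proximal operator of t * ||.||_1 (the 'backward' step)."""
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def forward_backward(grad_f, x0, step, mu, n_iter=200):
        x = x0.copy()
        for _ in range(n_iter):
            x = soft_threshold(x - step * grad_f(x), step * mu)  # forward (gradient) then backward (prox)
        return x

    # usage with f(x) = 0.5 * ||A x - b||^2
    rng = np.random.default_rng(0)
    A, b = rng.standard_normal((30, 100)), rng.standard_normal(30)
    grad_f = lambda x: A.T @ (A @ x - b)
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L with L = ||A||_2^2
    x_hat = forward_backward(grad_f, np.zeros(100), step, mu=0.1)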
The aim of this paper is to investigate the use of an entropic projection method for the iterative regularization of linear ill-posed problems. We derive a closed-form solution for the iterates and analyze their convergence behaviour both in the case of reconstructing general nonnegative unknowns and in that of recovering probability distributions. Moreover, we discuss several variants of the algorithm and relations to other methods in the literature. The effectiveness of the approach is studied numerically in several examples.
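
As a rough illustration of the entropic-projection idea (not the closed-form iterates derived in the paper), the sketch below runs a generic exponentiated-gradient / entropic mirror-descent update for A x ≈ b, which keeps the iterates positive and, optionally, renormalizes them onto the probability simplex; the step size and the random test operator are illustrative assumptions.

    import numpy as np

    def entropic_iteration(A, b, x0, step=0.05, n_iter=2000, simplex=True):
        x = x0.copy()
        for _ in range(n_iter):
            g = A.T @ (A @ x - b)        # gradient of 0.5 * ||A x - b||^2
            x = x * np.exp(-step * g)    # multiplicative (entropic) update keeps x > 0
            if simplex:
                x = x / x.sum()          # renormalize when recovering a probability distribution
        return x

    # usage: recover a probability vector from a few linear measurements
    rng = np.random.default_rng(1)
    p_true = rng.dirichlet(np.ones(20))
    A = rng.standard_normal((10, 20))
    b = A @ p_true
    p_rec = entropic_iteration(A, b, np.full(20, 1.0 / 20))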
In multiple scientific and technological applications we face the problem of having low-dimensional data to be explained by a linear model defined in a high-dimensional parameter space. The difference in dimensionality makes the problem ill-defined: the model is consistent with the data for many values of its parameters. The objective is to find the probability distribution of parameter values consistent with the data, a problem that can be cast as the exploration of a high-dimensional convex polytope. In this work we introduce a novel algorithm to solve this problem efficiently. It provides results that are statistically indistinguishable from currently used numerical techniques while its running time scales linearly with the system size. We show that the algorithm performs robustly in many abstract and practical applications. As working examples, we simulate the effects of restricting reaction fluxes on the space of feasible phenotypes of a genome-scale E. coli metabolic network and infer the traffic flow between origin and destination nodes in a real communication network.
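
For orientation, the sketch below implements a standard hit-and-run sampler for a polytope written as {x : C x <= d}, a common Monte Carlo baseline for this kind of polytope exploration; it is not the paper's algorithm, and it assumes a bounded polytope given in inequality form together with an interior starting point.

    import numpy as np

    def hit_and_run(C, d, x0, n_samples=1000, rng=None):
        """Samples from {x : C x <= d}, starting from an interior point x0 (bounded polytope assumed)."""
        if rng is None:
            rng = np.random.default_rng(0)
        x, samples = x0.copy(), []
        for _ in range(n_samples):
            u = rng.standard_normal(x.size)
            u /= np.linalg.norm(u)        # random direction
            s = d - C @ x                 # slacks (> 0 in the interior)
            cu = C @ u
            # x + t * u stays feasible for t in [t_lo, t_hi]; boundedness keeps both sets nonempty
            t_hi = np.min(s[cu > 0] / cu[cu > 0])
            t_lo = np.max(s[cu < 0] / cu[cu < 0])
            x = x + rng.uniform(t_lo, t_hi) * u
            samples.append(x.copy())
        return np.array(samples)

    # usage: sample the box [0, 1]^5 written as C x <= d
    n = 5
    C = np.vstack([np.eye(n), -np.eye(n)])
    d = np.concatenate([np.ones(n), np.zeros(n)])
    S = hit_and_run(C, d, np.full(n, 0.5), n_samples=2000)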
Zhongxiao Jia, Yanfei Yang (2018)
Based on the joint bidiagonalization process of a large matrix pair $\{A, L\}$, we propose and develop an iterative regularization algorithm for large-scale linear discrete ill-posed problems in general-form regularization: $\min \|Lx\|$ subject to $x \in \mathcal{S} = \{x \mid \|Ax - b\| \leq \tau \|e\|\}$, where $e$ is Gaussian white noise, $\tau > 1$ is slightly greater than one, and $L$ is a regularization matrix. Our algorithm differs from the hybrid one proposed by Kilmer et al., which is based on the same process but solves the general-form Tikhonov regularization problem $\min_x \left\{\|Ax - b\|^2 + \lambda^2 \|Lx\|^2\right\}$. We prove that the iterates take the form of attractive filtered generalized singular value decomposition (GSVD) expansions, where the filters are given explicitly. This result and its analysis show that the method has the desired semi-convergence property and give insight into its regularizing effects. We use the L-curve criterion or the discrepancy principle to determine the stopping index $k^*$. The algorithm is simple and effective, and numerical experiments illustrate that it often computes more accurate regularized solutions than the hybrid one.
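
For contrast, the general-form Tikhonov problem $\min_x\{\|Ax-b\|^2 + \lambda^2\|Lx\|^2\}$ solved by the hybrid method can be rewritten as a single stacked least-squares system. The dense, small-scale sketch below does exactly that; it is only an illustration of the formulation, not the iterative joint-bidiagonalization algorithm proposed in the paper, and the test problem and regularization matrix are assumptions.

    import numpy as np

    def tikhonov_general_form(A, b, L, lam):
        """Solve min_x ||A x - b||^2 + lam^2 ||L x||^2 via the stacked system [[A], [lam*L]] x ~ [[b], [0]]."""
        K = np.vstack([A, lam * L])
        rhs = np.concatenate([b, np.zeros(L.shape[0])])
        x, *_ = np.linalg.lstsq(K, rhs, rcond=None)
        return x

    # usage with a forward-difference regularization matrix L
    rng = np.random.default_rng(2)
    n = 50
    A = rng.standard_normal((60, n))
    x_true = np.sin(np.linspace(0.0, np.pi, n))
    b = A @ x_true + 1e-2 * rng.standard_normal(60)
    L = np.diff(np.eye(n), axis=0)   # (n-1) x n first-difference operator
    x_reg = tikhonov_general_form(A, b, L, lam=0.1)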