
Analysis of a greedy reconstruction algorithm

Published by: Julien Salomon
Publication date: 2020
Language: English





A novel and detailed convergence analysis is presented for a greedy algorithm that was previously introduced for operator reconstruction problems in the field of quantum mechanics. The algorithm is based on an offline/online decomposition of the reconstruction process and on an ansatz for the unknown operator built from an a priori chosen set of linearly independent matrices. The convergence analysis focuses on linear-quadratic (optimization) problems governed by linear differential systems and reveals how strongly the performance of the greedy algorithm depends on the observability properties of the system and on the choice of basis elements. Moreover, the analysis allows us to make a precise (and in some sense optimal) choice of basis elements in the linear case and leads to the introduction of a new and more robust optimized greedy reconstruction algorithm. This optimized approach also applies to nonlinear Hamiltonian reconstruction problems, and its efficiency is demonstrated by numerical experiments.
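The abstract does not spell out the algorithm, so the following is only a schematic sketch of the offline/online idea it describes: offline, greedily pick measurements that are most sensitive to each new basis direction; online, fit the expansion coefficients from data by least squares. The toy linear observation model (probe pairs acting on the operator) and all names are assumptions for illustration, not the paper's construction, which works with controlled quantum dynamics.

import numpy as np

rng = np.random.default_rng(0)
n, J, K = 4, 5, 40                      # state dimension, basis size, candidate probes

# A priori ansatz: linearly independent basis matrices B_1, ..., B_J.
basis = [rng.standard_normal((n, n)) for _ in range(J)]
alpha_true = rng.standard_normal(J)
H_true = sum(a * B for a, B in zip(alpha_true, basis))

# Hypothetical linear observation model: probing with (v, w) returns w^T H v.
probes = [(rng.standard_normal(n), rng.standard_normal(n)) for _ in range(K)]
def observe(H, probe):
    v, w = probe
    return w @ H @ v

# Offline stage: for each basis direction, greedily pick the probe whose induced
# functional (restricted to the span of the basis) is most sensitive to that
# direction, after projecting out what earlier probes already resolve.
chosen, rows = [], []
for k in range(J):
    def sensitivity(p):
        row = np.array([observe(B, p) for B in basis])
        if rows:
            R = np.array(rows)
            coef, *_ = np.linalg.lstsq(R.T, row, rcond=None)
            row = row - R.T @ coef      # component orthogonal to earlier functionals
        return abs(row[k])
    best = max(probes, key=sensitivity)
    chosen.append(best)
    rows.append(np.array([observe(B, best) for B in basis]))

# Online stage: fit the coefficients from measurements of the true operator.
A = np.array(rows)
y = np.array([observe(H_true, p) for p in chosen])
alpha_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.abs(alpha_hat - alpha_true).max())   # small: coefficients recovered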




Read also

We suggest a new greedy strategy for convex optimization in Banach spaces and prove its convergence rates under suitable behavior of the modulus of uniform smoothness of the objective function.
Interest in networks of dynamical systems has been increasing in recent years, especially because of their ability to model and describe a large variety of phenomena and behaviors. We propose a technique, based on Wiener filtering, which provides general theoretical guarantees for the detection of links in a network of dynamical systems. For a large class of networks that we name self-kin, we formulate sufficient conditions for the correct detection of a link. For networks not belonging to this class, we give conditions for the correct detection of links belonging to the smallest self-kin network containing the actual one.
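The abstract above only names the technique, so here is a hedged, minimal sketch of one common way to detect links with a finite-lag Wiener (least-squares) filter: regress each node's signal on lagged signals of all nodes and flag pairs whose filter energy is non-negligible. The simulated network and the threshold are illustrative assumptions, not the paper's construction or its self-kin conditions.

import numpy as np

rng = np.random.default_rng(1)
N, T, L = 5, 5000, 3                    # nodes, samples, filter lags

# Simulate a sparse linear dynamical network with one-step interactions.
W = np.zeros((N, N))
for (i, j) in [(1, 0), (2, 1), (3, 1), (4, 3)]:
    W[i, j] = 0.8
x = np.zeros((T, N))
for t in range(1, T):
    x[t] = 0.3 * x[t - 1] + x[t - 1] @ W.T + 0.5 * rng.standard_normal(N)

# For each target node, fit a least-squares (Wiener) filter from lagged
# signals of all nodes and measure each source node's filter energy.
links = np.zeros((N, N), dtype=bool)
for i in range(N):
    cols = [x[L - l - 1:T - l - 1, j] for j in range(N) for l in range(L)]
    X = np.column_stack(cols)           # lags 1..L of every node
    y = x[L:T, i]
    h, *_ = np.linalg.lstsq(X, y, rcond=None)
    energy = np.linalg.norm(h.reshape(N, L), axis=1)
    links[i] = energy > 0.1             # illustrative threshold
    links[i, i] = False                 # ignore self-loops

print(links.astype(int))
print((W != 0).astype(int))             # compare with the true adjacency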
Bandit learning is characterized by the tension between long-term exploration and short-term exploitation. However, as has recently been noted, in settings in which the choices of the learning algorithm correspond to important decisions about individ ual people (such as criminal recidivism prediction, lending, and sequential drug trials), exploration corresponds to explicitly sacrificing the well-being of one individual for the potential future benefit of others. This raises a fairness concern. In such settings, one might like to run a greedy algorithm, which always makes the (myopically) optimal decision for the individuals at hand - but doing this can result in a catastrophic failure to learn. In this paper, we consider the linear contextual bandit problem and revisit the performance of the greedy algorithm. We give a smoothed analysis, showing that even when contexts may be chosen by an adversary, small perturbations of the adversarys choices suffice for the algorithm to achieve no regret, perhaps (depending on the specifics of the setting) with a constant amount of initial training data. This suggests that generically (i.e. in slightly perturbed environments), exploration and exploitation need not be in conflict in the linear setting.
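A minimal sketch of the greedy algorithm in this setting, under illustrative assumptions: each arm keeps a ridge-regression estimate of its reward parameter, contexts are perturbed by Gaussian noise as in the smoothed model, and the algorithm always pulls the myopically best arm with no exploration bonus. The constants and the random stand-in for the adversary's contexts are assumptions, not the paper's setup.

import numpy as np

rng = np.random.default_rng(2)
d, K, T, sigma = 5, 3, 2000, 0.1        # dimension, arms, rounds, perturbation scale

theta = rng.standard_normal((K, d))     # unknown arm parameters
A = [np.eye(d) for _ in range(K)]       # ridge Gram matrices (lambda = 1)
b = [np.zeros(d) for _ in range(K)]

regret = 0.0
for t in range(T):
    base = rng.standard_normal((K, d))  # stand-in for adversarially chosen contexts
    ctx = base + sigma * rng.standard_normal((K, d))   # smoothing perturbation
    est = np.array([ctx[k] @ np.linalg.solve(A[k], b[k]) for k in range(K)])
    k = int(np.argmax(est))             # greedy: no exploration bonus
    reward = ctx[k] @ theta[k] + 0.1 * rng.standard_normal()
    A[k] += np.outer(ctx[k], ctx[k])    # update the pulled arm's estimate only
    b[k] += reward * ctx[k]
    regret += max(ctx[j] @ theta[j] for j in range(K)) - ctx[k] @ theta[k]

print(f"average regret after {T} rounds: {regret / T:.3f}")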
Many algorithms for maximizing a monotone submodular function subject to a knapsack constraint rely on the natural greedy heuristic. We present a novel refined analysis of this greedy heuristic which enables us to: $(1)$ reduce the enumeration in the tight $(1-e^{-1})$-approximation of [Sviridenko 04] from subsets of size three to two; $(2)$ present an improved upper bound of $0.42945$ for the classic algorithm which returns the better of a single element and the output of the greedy heuristic.
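For concreteness, here is a sketch of the classic algorithm this abstract refers to: ratio greedy plus a comparison with the best single feasible element. The coverage objective is just one example of a monotone submodular function, and the helper names are hypothetical.

def coverage(S, sets):
    # Monotone submodular example: number of items covered by the chosen sets.
    return len(set().union(*(sets[i] for i in S))) if S else 0

def greedy_knapsack(sets, cost, budget):
    S, spent = [], 0.0
    remaining = set(range(len(sets)))
    while True:
        # Pick the feasible element with the best marginal-gain-to-cost ratio.
        best, best_ratio = None, 0.0
        for i in remaining:
            if spent + cost[i] > budget:
                continue
            gain = coverage(S + [i], sets) - coverage(S, sets)
            if gain / cost[i] > best_ratio:
                best, best_ratio = i, gain / cost[i]
        if best is None:
            break
        S.append(best)
        spent += cost[best]
        remaining.remove(best)
    # Guard against the known bad case for pure ratio greedy: also consider
    # the best single element that fits the budget, and return the better.
    singles = [i for i in range(len(sets)) if cost[i] <= budget]
    best_single = max(singles, key=lambda i: coverage([i], sets), default=None)
    if best_single is not None and coverage([best_single], sets) > coverage(S, sets):
        return [best_single]
    return S

sets = [{1, 2, 3}, {3, 4}, {5}, {1, 2, 3, 4, 5, 6}]
cost = [1.0, 1.0, 1.0, 2.5]
print(greedy_knapsack(sets, cost, budget=3.0))   # the single big set wins here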
The alternating direction method of multipliers (ADMM) is a popular method for solving convex separable minimization problems with linear equality constraints. The generalization of the two-block ADMM to the three-block ADMM is not trivial, since the three-block ADMM does not converge in general. Many variants of the three-block ADMM have been developed with guaranteed convergence. Besides the ADMM, the alternating minimization algorithm (AMA) is also an important algorithm for solving convex separable minimization problems with linear equality constraints. The AMA was first proposed by Tseng, and it is equivalent to the forward-backward splitting algorithm applied to the corresponding dual problem. In this paper, we design a variant of the three-block AMA, which is derived by applying an inertial extension of the three-operator splitting algorithm to the dual problem. Compared with the three-block ADMM, the first subproblem of the proposed algorithm only minimizes the Lagrangian function. As a by-product, we obtain a relaxed version of the algorithm of Davis and Yin. Under mild conditions on the parameters, we establish the convergence of the proposed algorithm in infinite-dimensional Hilbert spaces. Finally, we conduct numerical experiments on the stable principal component pursuit (SPCP) problem to verify the efficiency and effectiveness of the proposed algorithm.
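The paper's inertial three-block variant is not reproduced here; as background, the following is a minimal sketch of Tseng's two-block AMA on a toy lasso-type problem, showing the feature the abstract highlights: the first subproblem minimizes the plain (not augmented) Lagrangian. The problem data and step size are illustrative assumptions.

import numpy as np

def soft(v, t):
    # Proximal operator of t * ||.||_1 (soft thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# Toy problem: min 0.5||x - a||^2 + mu*||z||_1  subject to  x = z.
a = np.array([3.0, -0.5, 1.2, -2.0])
mu, rho = 1.0, 1.0                      # rho < 2 works since f is 1-strongly convex
lam = np.zeros_like(a)
for _ in range(200):
    x = a - lam                         # plain Lagrangian step: argmin_x 0.5||x-a||^2 + <lam, x>
    z = soft(x + lam / rho, mu / rho)   # augmented Lagrangian step in z
    lam = lam + rho * (x - z)           # dual ascent on the constraint x = z

print(z)                                # should match the closed-form solution
print(soft(a, mu))                      # soft(a, mu) is the minimizer here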