
Dualization and Automatic Distributed Parameter Selection of Total Generalized Variation via Bilevel Optimization

Posted by Carlos Rautenberg
Publication date: 2020
Paper language: English





Total Generalized Variation (TGV) regularization in image reconstruction relies on an infimal-convolution-type combination of generalized first- and second-order derivatives. This helps to avoid the staircasing effect of Total Variation (TV) regularization, while still preserving sharp contrasts in images. The associated regularization effect crucially hinges on two parameters whose proper adjustment represents a challenging task. In this work, a bilevel optimization framework with a suitable statistics-based upper level objective is proposed in order to automatically select these parameters. The framework allows for spatially varying parameters, thus enabling better recovery in high-detail image areas. A rigorous dualization framework is established, and for the numerical solution, two Newton-type methods for the lower level problem, i.e. the image reconstruction problem, and two bilevel TGV algorithms are introduced. Denoising tests confirm that automatically selected distributed regularization parameters generally lead to improved reconstructions when compared to results for scalar parameters.
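For orientation, a standard way to write second-order TGV with spatially varying weights, together with the generic bilevel structure used to select those weights, is sketched below. The notation (weight maps $\alpha_0$, $\alpha_1$, symmetrized gradient $\mathcal{E}$, noisy data $f$, upper level objective $J$) follows common usage in the TGV literature and is an assumption here, not quoted from the paper.

```latex
% Second-order TGV with spatially varying weights \alpha_0, \alpha_1
% (\mathcal{E} denotes the symmetrized gradient acting on the vector field w):
\[
  \mathrm{TGV}^{2}_{\alpha}(u)
  \;=\; \min_{w}\;
  \int_{\Omega} \alpha_1(x)\,\lvert \nabla u - w\rvert \,\mathrm{d}x
  \;+\; \int_{\Omega} \alpha_0(x)\,\lvert \mathcal{E} w\rvert \,\mathrm{d}x .
\]
% Generic bilevel structure: the upper level selects the weight maps, the
% lower level is the TGV denoising problem for noisy data f:
\[
  \min_{\alpha \,\ge\, 0}\; J(u_{\alpha})
  \quad\text{s.t.}\quad
  u_{\alpha} \in \operatorname*{arg\,min}_{u}\;
  \tfrac{1}{2}\,\lVert u - f \rVert_{L^{2}(\Omega)}^{2}
  + \mathrm{TGV}^{2}_{\alpha}(u).
\]
```

Allowing $\alpha_0$ and $\alpha_1$ to vary over $\Omega$ is what makes it possible to regularize smooth regions strongly while leaving fine detail comparatively untouched.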



Read also

Structured convex optimization on weighted graphs finds numerous applications in machine learning and computer vision. In this work, we propose a novel adaptive preconditioning strategy for proximal algorithms on this problem class. Our preconditioner is driven by a sharp analysis of the local linear convergence rate depending on the active set at the current iterate. We show that a nested-forest decomposition of the inactive edges yields a guaranteed local linear convergence rate. Further, we propose a practical greedy heuristic which realizes such nested decompositions and show in several numerical experiments that our preconditioning strategy, when applied to the proximal gradient or primal-dual hybrid gradient algorithm, achieves competitive performance. Our results suggest that local convergence analysis can serve as a guideline for selecting variable metrics in proximal algorithms.
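As a rough illustration of the variable-metric idea behind such preconditioning, the sketch below runs a diagonally preconditioned projected gradient method on the dual of a tiny graph total-variation denoising problem. The chain graph, the Pock-Chambolle-style diagonal step sizes, and all names are assumptions chosen for the example; this is not the nested-forest preconditioner of the paper.

```python
import numpy as np

def preconditioned_dual_prox_grad(D, f, lam, n_iter=500):
    """
    Solve   min_x 0.5*||x - f||^2 + lam*||D x||_1   (graph TV, D = edge incidence)
    via its dual   min_y 0.5*||D^T y - f||^2  s.t. |y_e| <= lam,
    using projected gradient steps with one diagonal step size per edge.
    """
    # Diagonal step sizes, safe by diagonal dominance of |D||D|^T.
    P = 1.0 / (np.abs(D) @ np.abs(D.T) @ np.ones(D.shape[0]))
    y = np.zeros(D.shape[0])
    for _ in range(n_iter):
        grad = D @ (D.T @ y - f)                 # gradient of the dual objective
        y = np.clip(y - P * grad, -lam, lam)     # preconditioned step + box projection
    return f - D.T @ y                           # recover the primal solution

# Tiny chain graph: 4 nodes, 3 edges.
D = np.array([[-1.0, 1.0, 0.0, 0.0],
              [0.0, -1.0, 1.0, 0.0],
              [0.0, 0.0, -1.0, 1.0]])
f = np.array([0.0, 0.1, 1.0, 1.1])
print(preconditioned_dual_prox_grad(D, f, lam=0.2))
```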
296 - Feihu Huang, Heng Huang, 2021
Bilevel optimization has been widely applied to many machine learning problems such as hyperparameter optimization, policy optimization and meta learning. Although many bilevel optimization methods have recently been proposed to solve bilevel optimization problems, they still suffer from high computational complexities and do not consider the more general bilevel problems with nonsmooth regularization. In this paper, we therefore propose a class of efficient bilevel optimization methods based on Bregman distance. In our methods, we use the mirror descent iteration to solve the outer subproblem of the bilevel problem by using strongly-convex Bregman functions. Specifically, we propose a bilevel optimization method based on Bregman distance (BiO-BreD) for solving deterministic bilevel problems, which achieves lower computational complexities than the best known results. We also propose a stochastic bilevel optimization method (SBiO-BreD) for solving stochastic bilevel problems based on stochastic approximated gradients and Bregman distance. Further, we propose an accelerated version of the SBiO-BreD method (ASBiO-BreD) using the variance-reduction technique. Moreover, we prove that ASBiO-BreD improves on the best known computational complexities with respect to the condition number $\kappa$ and the target accuracy $\epsilon$ for finding an $\epsilon$-stationary point of nonconvex-strongly-convex bilevel problems. In particular, our methods can solve bilevel optimization problems with nonsmooth regularization at a lower computational complexity.
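The outer update in such methods is a mirror descent step with respect to a strongly convex Bregman function. A minimal sketch of that building block, using the negative-entropy Bregman function on the probability simplex (the classical exponentiated-gradient update), is given below; it illustrates only the Bregman-based step, not the full BiO-BreD/SBiO-BreD algorithms, and the toy objective is an assumption.

```python
import numpy as np

def mirror_descent_step(x, grad, eta):
    """
    One mirror descent step on the probability simplex with the
    negative-entropy Bregman function (exponentiated-gradient update).
    """
    y = x * np.exp(-eta * grad)      # unnormalized multiplicative update
    return y / y.sum()               # Bregman projection back onto the simplex

# Toy problem: minimize f(x) = 0.5*||x - c||^2 over the simplex.
c = np.array([0.7, 0.2, 0.1])
x = np.ones(3) / 3
for _ in range(100):
    x = mirror_descent_step(x, x - c, eta=0.5)
print(x)   # approaches c, which already lies in the simplex
```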
This work is concerned with the design and effects of the synchronization gains on the synchronization problem for a class of networked distributed parameter systems. The networked systems, assumed to be described by the same evolution equation in a Hilbert space, differ in their initial conditions. The proposed synchronization controllers aim at achieving both the control objective and the synchronization objective. To enhance the synchronization, as measured by the norm of the pairwise state difference of the networked systems, an adaptation of the gains is proposed. An alternative design arrives at constant gains that are optimized with respect to an appropriate measure of synchronization. A subsequent formulation casts the control and synchronization design problem into an optimal control problem for the aggregate systems. An extensive numerical study examines the various aspects of the optimization and adaptation of the gains on the control and synchronization of networked 1D parabolic differential equations.
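A finite-dimensional analogue may help to picture the gain adaptation: the sketch below couples two identical linear ODE systems through a gain that grows with the squared synchronization error, a standard adaptive law. The dynamics matrix, adaptation rate and time step are illustrative assumptions; the paper itself works with PDE (distributed parameter) systems.

```python
import numpy as np

# Two identical linear systems with different initial conditions, coupled
# through an adaptive synchronization gain k (toy ODE analogue, forward Euler).
A = np.array([[0.0, 1.0], [-1.0, -0.1]])                # common dynamics
x1, x2 = np.array([1.0, 0.0]), np.array([-1.0, 0.5])    # different initial states
k, gamma, dt = 0.0, 5.0, 1e-3

for _ in range(20000):
    e = x1 - x2                                          # pairwise state difference
    dx1 = A @ x1 - k * e
    dx2 = A @ x2 + k * e
    k += dt * gamma * (e @ e)                            # adapt the synchronization gain
    x1, x2 = x1 + dt * dx1, x2 + dt * dx2

print(np.linalg.norm(x1 - x2), k)   # error shrinks; k settles to a finite value
```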
Optimization in distributed networks plays a central role in almost all distributed machine learning problems. In principle, the use of distributed task allocation reduces computational time, allowing better response rates and higher data reliability. However, for these computational algorithms to run effectively in complex distributed systems, they must compensate for communication asynchrony, network node failures, and delays known as stragglers. These issues can change the effective connection topology of the network, which may vary over time, thus hindering the optimization process. In this paper, we propose a new distributed unconstrained optimization algorithm for minimizing a convex function, adaptable to a parameter server network. In particular, the network worker nodes solve their local optimization problems, allowing the computation of their local coded gradients, which are sent to different server nodes. Within this parameter server platform, each server node aggregates its communicated local gradients, allowing convergence to the desired optimizer. The algorithm is robust to worker node failures, disconnections, or delaying nodes known as stragglers. One way to overcome the straggler problem is to allow coding over the network. We further extend this coding framework to enhance the convergence of the proposed algorithm under such varying network topologies. By using coding and utilizing evaluations of gradients with uniformly bounded delay, we further enhance the performance of the proposed algorithm. Finally, we implement the proposed scheme in MATLAB and provide comparative results demonstrating the effectiveness of the proposed framework.
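To illustrate the coding idea in its simplest form, the sketch below uses a replication-based (fractional-repetition style) assignment: each data block is stored by one worker in each of s+1 groups, so the server can reassemble the full gradient from any n-s responders. Block counts, the straggler pattern, and all names are assumptions chosen for the example, not the scheme of the paper.

```python
import numpy as np

n, s, d = 6, 2, 4                     # 6 workers, tolerate 2 stragglers, gradient dim 4
blocks = n // (s + 1)                 # 2 distinct data blocks, each replicated 3 times
holds = [w % blocks for w in range(n)]             # block held by each worker

rng = np.random.default_rng(0)
block_grads = rng.standard_normal((blocks, d))      # partial gradient of each data block

# Each responding worker sends (block id, partial gradient of its block).
responders = [0, 1, 2, 5]                           # workers 3 and 4 are stragglers
messages = {w: (holds[w], block_grads[holds[w]]) for w in responders}

recovered, seen = np.zeros(d), set()
for w, (b, g) in messages.items():
    if b not in seen:                 # take the first surviving copy of each block
        recovered += g
        seen.add(b)

print(np.allclose(recovered, block_grads.sum(axis=0)))   # True: full gradient recovered
```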
This paper proposes a new algorithm -- the Single-timescale Double-momentum Stochastic Approximation (SUSTAIN) -- for tackling stochastic unconstrained bilevel optimization problems. We focus on bilevel problems where the lower level subproblem is strongly-convex and the upper level objective function is smooth. Unlike prior works which rely on two-timescale or double-loop techniques, we design a stochastic momentum-assisted gradient estimator for both the upper and lower level updates. The latter allows us to control the error in the stochastic gradient updates due to inaccurate solutions of both subproblems. If the upper objective function is smooth but possibly non-convex, we show that SUSTAIN requires $\mathcal{O}(\epsilon^{-3/2})$ iterations (each using $\mathcal{O}(1)$ samples) to find an $\epsilon$-stationary solution. The $\epsilon$-stationary solution is defined as the point whose squared norm of the gradient of the outer function is less than or equal to $\epsilon$. The total number of stochastic gradient samples required for the upper and lower level objective functions matches the best-known complexity for single-level stochastic gradient algorithms. We also analyze the case when the upper level objective function is strongly-convex.
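The momentum-assisted estimator referred to here is of the STORM type: the previous estimate is corrected by a gradient difference evaluated at the same sample and blended with a fresh gradient. A minimal single-level sketch on a noisy quadratic is shown below; it illustrates only that recursion, not the full SUSTAIN algorithm with its coupled upper and lower level updates, and the step sizes are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
b = np.array([1.0, -2.0])

def stoch_grad(x, noise):
    """Stochastic gradient of 0.5*||x - b||^2 with additive sampling noise."""
    return x - b + noise

eta, alpha = 0.1, 0.5
x = np.zeros(2)
d = stoch_grad(x, 0.1 * rng.standard_normal(2))      # initial gradient estimate
for t in range(300):
    x_new = x - eta * d                               # single-timescale update
    noise = 0.1 * rng.standard_normal(2)              # one fresh sample per iteration
    # momentum-assisted recursion: reuse the same sample at both iterates
    d = stoch_grad(x_new, noise) + (1 - alpha) * (d - stoch_grad(x, noise))
    x = x_new
print(x)   # approximately b = [1, -2]
```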