
Efficient Regularization of Squared Curvature

Published by: Yuri Boykov
Publication date: 2013
Research field: Informatics Engineering
Paper language: English





Curvature has received increased attention as an important alternative to length-based regularization in computer vision. In contrast to length, it preserves elongated structures and fine details. Existing approaches are either inefficient or have low angular resolution and yield results with strong block artifacts. We derive a new model for computing squared curvature based on integral geometry. The model counts responses of straight-line triple cliques. The corresponding energy decomposes into submodular and supermodular pairwise potentials. We show that this energy can be efficiently minimized even for high angular resolutions using the trust region framework. Our results confirm that we obtain accurate and visually pleasing solutions without strong artifacts at reasonable run times.
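
The following is a minimal sketch of the triple-clique counting idea, not the paper's exact construction: we assume a straight-line clique (p - d, p, p + d) "responds" when the center pixel lies inside the binary region while both collinear neighbors do not, and that responses are accumulated over a set of orientations with placeholder weights (the paper derives its weights from integral geometry).

```python
import numpy as np

def triple_clique_energy(mask, offsets, weights):
    """Count weighted responses of straight-line triple cliques.

    mask    : 2D boolean array (the binary segmentation).
    offsets : list of (dy, dx) integer displacements, one per orientation;
              more offsets means higher angular resolution.
    weights : per-orientation weights (placeholders here; the paper
              derives them from integral geometry).

    Assumed response pattern: clique (p - d, p, p + d) fires when the
    center pixel is inside the region and both collinear neighbors are
    outside, signalling a bend of the boundary at p.
    """
    h, w = mask.shape
    energy = 0.0
    for (dy, dx), wgt in zip(offsets, weights):
        prev = np.roll(mask, (dy, dx), axis=(0, 1))    # value at p - d
        nxt = np.roll(mask, (-dy, -dx), axis=(0, 1))   # value at p + d
        fire = mask & ~prev & ~nxt
        # Invalidate border rows/columns wrapped around by np.roll.
        ay, ax = abs(dy), abs(dx)
        if ay:
            fire[:ay, :] = False
            fire[h - ay:, :] = False
        if ax:
            fire[:, :ax] = False
            fire[:, w - ax:] = False
        energy += wgt * fire.sum()
    return energy

# A smooth disk triggers no unit-radius responses; a one-pixel bump does.
offsets = [(0, 1), (1, 0), (1, 1), (1, -1)]
weights = [1.0] * len(offsets)
yy, xx = np.mgrid[:33, :33]
disk = (yy - 16) ** 2 + (xx - 16) ** 2 < 100
print(triple_clique_energy(disk, offsets, weights))    # 0.0
spiked = disk.copy()
spiked[6, 16] = True                                   # sharp protrusion
print(triple_clique_energy(spiked, offsets, weights))  # > 0
```

With unit-radius offsets a smooth disk produces no responses while a one-pixel protrusion does, which is the qualitative behavior a squared-curvature penalty should exhibit.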


Read also

Many applications in vision require estimation of thin structures such as boundary edges, surfaces, roads, blood vessels, neurons, etc. Unlike most previous approaches, we simultaneously detect and delineate thin structures with sub-pixel localization and real-valued orientation estimation. This is an ill-posed problem that requires regularization. We propose an objective function combining detection likelihoods with a prior minimizing curvature of the center-lines or surfaces. Unlike simple block-coordinate descent, we develop a novel algorithm that performs joint optimization of location and detection variables more effectively. Our lower bound optimization algorithm applies to quadratic or absolute curvature. The proposed early vision framework is sufficiently general that it can be used in many higher-level applications. We illustrate the advantage of our approach on a range of 2D and 3D examples.
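
As a rough illustration of a curvature prior on center-lines, here is a sketch of a common discrete approximation: accumulate squared turning angles along a polyline, normalized by segment length. The discretization and helper name are assumptions for illustration; the paper's algorithm jointly optimizes location and detection variables, which this snippet does not attempt.

```python
import numpy as np

def polyline_squared_curvature(pts):
    """Discrete squared-curvature energy of a center-line.

    At each interior vertex, measure the turning angle between the two
    incident segments and accumulate angle**2 divided by the average
    segment length (an illustrative discretization of integrated
    squared curvature).
    """
    pts = np.asarray(pts, dtype=float)
    a, b, c = pts[:-2], pts[1:-1], pts[2:]
    u, v = b - a, c - b
    lu = np.linalg.norm(u, axis=1)
    lv = np.linalg.norm(v, axis=1)
    cos = np.clip((u * v).sum(axis=1) / (lu * lv), -1.0, 1.0)
    theta = np.arccos(cos)                   # turning angle per vertex
    return float((theta ** 2 / (0.5 * (lu + lv))).sum())

# A straight line costs nothing; a right-angle bend is penalized.
print(polyline_squared_curvature([(0, 0), (1, 0), (2, 0), (3, 0)]))  # 0.0
print(polyline_squared_curvature([(0, 0), (1, 0), (1, 1), (1, 2)]))  # ~2.47
```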
Parameter pruning is a promising approach to CNN compression and acceleration that eliminates redundant model parameters with tolerable performance loss. Despite its effectiveness, existing regularization-based parameter pruning methods usually drive weights towards zero with large, constant regularization factors, neglecting the fact that the expressiveness of CNNs is fragile and requires a gentler form of regularization so the networks can adapt during pruning. To solve this problem, we propose a new regularization-based pruning method (named IncReg) that incrementally assigns different regularization factors to different weight groups based on their relative importance; its effectiveness is demonstrated on popular CNNs in comparison with state-of-the-art methods.
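
A hedged sketch of the incremental, importance-aware schedule described above: rank weight groups by a simple importance proxy (L1 norm is an assumption here) and ramp the regularization factors of the less important groups up over training, instead of applying one large constant factor. The exact schedule, proxy, and parameter names are illustrative, not IncReg's published recipe.

```python
import numpy as np

def incremental_reg_factors(groups, step, max_reg=1e-3, ramp_steps=1000):
    """Assign per-group regularization factors that grow over training.

    groups : list of weight arrays (e.g., one per conv filter).
    step   : current training iteration.

    Groups with smaller L1 norm (assumed importance proxy) get factors
    that ramp toward max_reg, while important groups stay lightly
    regularized, so the network adapts gradually during pruning.
    """
    importance = np.array([np.abs(g).sum() for g in groups])
    ranks = importance.argsort().argsort()        # 0 = least important
    rel = 1.0 - ranks / max(len(groups) - 1, 1)   # 1.0 = least important
    ramp = min(step / ramp_steps, 1.0)            # grow factors over time
    return max_reg * ramp * rel

# Three filters of increasing magnitude: the weakest gets the largest factor.
rng = np.random.default_rng(0)
filters = [rng.normal(scale=s, size=(3, 3)) for s in (0.01, 0.1, 1.0)]
print(incremental_reg_factors(filters, step=500))
```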
We describe the supersymmetric completion of several curvature-squared invariants for ${\cal N}=(1,0)$ supergravity in six dimensions. The construction of the invariants is based on a close interplay between superconformal tensor calculus and recently developed superspace techniques to study general off-shell supergravity-matter couplings. In the case of minimal off-shell Poincaré supergravity based on the dilaton-Weyl multiplet coupled to a linear multiplet as a conformal compensator, we describe off-shell supersymmetric completions for all three possible purely gravitational curvature-squared terms in six dimensions: Riemann, Ricci, and scalar curvature squared. A linear combination of these invariants describes the off-shell completion of the Gauss-Bonnet term, recently presented in arXiv:1706.09330. We study properties of the Einstein-Gauss-Bonnet supergravity, which plays a central role in the effective low-energy description of $\alpha'$-corrected string theory compactified to six dimensions, including a detailed analysis of the spectrum about the ${\rm AdS}_3 \times {\rm S}^3$ solution. We also present a novel locally superconformal invariant based on a higher-derivative action for the linear multiplet. This invariant, which includes gravitational curvature-squared terms, can be defined coupled to either the standard-Weyl or the dilaton-Weyl multiplet for conformal supergravity. In the first case, we show how the addition of this invariant to the supersymmetric Einstein-Hilbert term leads to a dynamically generated cosmological constant and non-supersymmetric (A)dS$_6$ solutions. In the dilaton-Weyl multiplet, the new off-shell invariant includes Ricci and scalar curvature-squared terms and possesses a nontrivial dependence on the dilaton field.
Yanyan Niu, Shicheng Xu (2021)
Let $n \ge 2$ and $k \ge 1$ be two integers. Let $M$ be an isometrically immersed closed $n$-submanifold of codimension $k$ that is homotopic to a point in a complete manifold $N$, where the sectional curvature of $N$ is no more than $\delta < 0$. We prove that the total squared mean curvature of $M$ in $N$ and the first non-zero eigenvalue $\lambda_1(M)$ of $M$ satisfy $$\lambda_1(M) \le n\left(\delta + \frac{1}{\operatorname{Vol} M}\int_M |H|^2 \operatorname{dvol}\right).$$ Equality implies that $M$ is minimally immersed in a metric sphere after being lifted to the universal cover of $N$. This completely settles an open problem raised by E. Heintze in 1988.
State-of-the-art classifiers have been shown to be largely vulnerable to adversarial perturbations. One of the most effective strategies to improve robustness is adversarial training. In this paper, we investigate the effect of adversarial training on the geometry of the classification landscape and decision boundaries. We show in particular that adversarial training leads to a significant decrease in the curvature of the loss surface with respect to inputs, leading to a drastically more linear behaviour of the network. Using a locally quadratic approximation, we provide theoretical evidence of a strong relation between large robustness and small curvature. To further show the importance of reduced curvature for improving robustness, we propose a new regularizer that directly minimizes the curvature of the loss surface and leads to adversarial robustness on par with adversarial training. Besides being a more efficient and principled alternative to adversarial training, the proposed regularizer confirms our claims on the importance of exhibiting quasi-linear behavior in the vicinity of data points in order to achieve robustness.
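
A minimal sketch of a curvature regularizer in the spirit of this abstract, using PyTorch: penalize the finite-difference change of the input gradient along a random direction, a proxy for the Hessian-vector norm of the loss with respect to the input. The direction choice, scaling, and hyperparameters are assumptions, not the paper's exact recipe.

```python
import torch

def curvature_penalty(model, loss_fn, x, y, h=1e-2):
    """Finite-difference curvature proxy: squared norm of the change of
    the input gradient along a random unit direction z, approximating
    ||H z||^2 where H is the Hessian of the loss w.r.t. the input.
    """
    x = x.detach().requires_grad_(True)
    z = torch.randn_like(x)
    z = z / z.flatten(1).norm(dim=1).view(-1, *([1] * (x.dim() - 1)))
    g0 = torch.autograd.grad(loss_fn(model(x), y), x, create_graph=True)[0]
    g1 = torch.autograd.grad(loss_fn(model(x + h * z), y), x,
                             create_graph=True)[0]
    return ((g1 - g0).flatten(1).norm(dim=1) ** 2).mean() / h ** 2

# Toy usage (hypothetical shapes): add the penalty to the task loss.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(784, 10))
x = torch.randn(8, 1, 28, 28)
y = torch.randint(0, 10, (8,))
ce = torch.nn.functional.cross_entropy
loss = ce(model(x), y) + 0.1 * curvature_penalty(model, ce, x, y)
loss.backward()
```

Because the penalty is built with create_graph=True, its gradient flows back into the model parameters, so it can be minimized jointly with the task loss during training.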