
A Color Elastica Model for Vector-Valued Image Regularization

Published by: Hao Liu
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





Models related to Euler's elastica energy have proven to be useful for many applications, including image processing. Extending elastica models to color images and multi-channel data is a challenging task, as stable and consistent numerical solvers for these geometric models often involve high-order derivatives. As with the single-channel Euler's elastica model and the total variation (TV) models, geometric measures that involve high-order derivatives can help when considering image formation models that minimize elastic properties. In the past, the Polyakov action from high-energy physics has been successfully applied to color image processing. Here, we introduce an addition to the Polyakov action for color images that minimizes the color manifold curvature. The color image curvature is computed by applying the Laplace-Beltrami operator to the color image channels. When reduced to gray-scale images, with an appropriate scaling between space and color, the proposed model minimizes Euler's elastica acting on the image level sets. Finding a minimizer for the proposed nonlinear geometric model is a challenge we address in this paper. Specifically, we present an operator-splitting method to minimize the proposed functional. The nonlinearity is decoupled by introducing three vector-valued and matrix-valued variables. The problem is then converted into solving for the steady state of an associated initial-value problem. The initial-value problem is time-split into three fractional steps, such that each sub-problem has a closed-form solution or can be solved by fast algorithms. The efficiency and robustness of the proposed method are demonstrated by systematic numerical experiments.
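To make the geometric quantities above concrete, here is a minimal NumPy sketch, not the authors' code, of the Beltrami-framework objects the abstract refers to: the induced metric of the color image manifold and the Laplace-Beltrami operator applied channel-wise to obtain the color curvature. The space/color scaling parameter `beta` is an assumed name, and the finite-difference discretization is only illustrative.

```python
import numpy as np

def induced_metric(img, beta):
    """Metric g = I + beta^2 * sum_c grad(u_c) grad(u_c)^T (2x2 per pixel)."""
    gx = np.gradient(img, axis=1)   # x-derivatives, shape (H, W, 3)
    gy = np.gradient(img, axis=0)   # y-derivatives
    g11 = 1.0 + beta**2 * np.sum(gx * gx, axis=2)
    g12 =       beta**2 * np.sum(gx * gy, axis=2)
    g22 = 1.0 + beta**2 * np.sum(gy * gy, axis=2)
    return g11, g12, g22

def laplace_beltrami(u, g11, g12, g22):
    """Delta_g u = (1/sqrt(det g)) div( sqrt(det g) g^{-1} grad u )."""
    det = g11 * g22 - g12**2
    sq = np.sqrt(np.maximum(det, 1e-12))
    ux = np.gradient(u, axis=1)
    uy = np.gradient(u, axis=0)
    # contravariant gradient components, weighted by sqrt(det g)
    vx = sq * ( g22 * ux - g12 * uy) / det
    vy = sq * (-g12 * ux + g11 * uy) / det
    div = np.gradient(vx, axis=1) + np.gradient(vy, axis=0)
    return div / sq

def color_curvature(img, beta=1.0):
    """Channel-wise Laplace-Beltrami of a float RGB image of shape (H, W, 3)."""
    g11, g12, g22 = induced_metric(img, beta)
    return np.stack([laplace_beltrami(img[..., c], g11, g12, g22)
                     for c in range(img.shape[2])], axis=2)
```

In this sketch, penalizing the magnitude of `color_curvature` plays the role of the curvature addition to the Polyakov action described in the abstract.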




Read also

Image segmentation is a fundamental topic in image processing and has been studied for many decades. Deep learning-based supervised segmentation models have achieved state-of-the-art performance, but most of them are limited to pixel-wise loss functions for training, without geometrical constraints. Inspired by Euler's elastica model and recent active contour models introduced into the field of deep learning, we propose a novel active contour with elastica (ACE) loss function incorporating elastica (curvature and length) and region information as geometrically natural constraints for image segmentation tasks. We introduce the mean curvature, i.e., the average of all principal curvatures, as a more effective image prior to represent curvature in our ACE loss function. Furthermore, based on the definition of the mean curvature, we propose a fast solution to approximate the ACE loss in three dimensions (3D) by using Laplace operators for 3D image segmentation. We evaluate our ACE loss function on four 2D and 3D natural and biomedical image datasets. Our results show that the proposed loss function outperforms other mainstream loss functions on different segmentation networks. Our source code is available at https://github.com/HiLab-git/ACELoss.
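As an illustration of how curvature and length can enter a training loss, the following PyTorch sketch combines a Laplacian-based curvature term with a gradient-magnitude length term and a pixel-wise region term. It is a hedged approximation in the spirit of the ACE loss described above, not the released implementation (see the linked repository for that); the weights `a`, `b`, `lam` are illustrative, `pred` is a soft mask in [0, 1] of shape (N, 1, H, W), and `target` is a float tensor of the same shape.

```python
import torch
import torch.nn.functional as F

def elastica_style_loss(pred, target, a=1e-3, b=1e-3, lam=1.0, eps=1e-8):
    # Laplacian kernel approximating the curvature of the mask level sets
    lap_kernel = torch.tensor([[0., 1., 0.],
                               [1., -4., 1.],
                               [0., 1., 0.]],
                              device=pred.device, dtype=pred.dtype).view(1, 1, 3, 3)
    curvature = F.conv2d(pred, lap_kernel, padding=1)

    # gradient magnitude of the soft mask (contour length term)
    gx = pred[..., :, 1:] - pred[..., :, :-1]
    gy = pred[..., 1:, :] - pred[..., :-1, :]
    length = torch.sqrt(gx.pow(2)[..., :-1, :] + gy.pow(2)[..., :, :-1] + eps)

    # elastica term: (a + b * curvature^2) integrated along the contour
    elastica = ((a + b * curvature.pow(2)[..., :-1, :-1]) * length).mean()

    # region term: pixel-wise fidelity to the ground-truth mask
    region = F.binary_cross_entropy(pred.clamp(eps, 1 - eps), target)

    return lam * elastica + region
```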
Euler's elastica model has a wide range of applications in image processing and computer vision. However, the non-convexity, non-smoothness, and nonlinearity of the associated energy functional make its minimization a challenging task, further complicated by the presence of high-order derivatives in the model. In this article we propose a new operator-splitting algorithm to minimize the Euler's elastica functional. This algorithm is obtained by applying an operator-splitting based time discretization scheme to an initial-value problem (dynamical flow) associated with the optimality system (a system of multivalued equations). The sub-problems associated with the three fractional steps of the splitting scheme either have closed-form solutions or can be handled by fast dedicated solvers. Compared with earlier approaches relying on ADMM (Alternating Direction Method of Multipliers), the new method has, essentially, only the time discretization step as a free parameter to choose, resulting in a very robust and stable algorithm. The simplicity of the sub-problems and the modularity of the scheme make this algorithm quite efficient. Applications to the numerical solution of smoothing test problems demonstrate the efficiency and robustness of the proposed methodology.
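The fractional-step structure described in both abstracts above can be summarized by the following schematic Python loop. The three sub-step functions are placeholders for the closed-form or fast dedicated solvers; the stopping rule and the step size `dt` are illustrative assumptions, not values from the articles.

```python
import numpy as np

def split_to_steady_state(u0, substep_1, substep_2, substep_3,
                          dt=0.1, tol=1e-6, max_iter=1000):
    """Lie/Marchuk-Yanenko style splitting: march the initial-value problem
    in fractional steps until the dynamical flow reaches a steady state."""
    u = u0.copy()
    for _ in range(max_iter):
        u_prev = u
        u = substep_1(u, dt)   # first fractional step (e.g. closed-form update)
        u = substep_2(u, dt)   # second fractional step
        u = substep_3(u, dt)   # third fractional step (e.g. fast linear solve)
        if np.linalg.norm(u - u_prev) <= tol * (np.linalg.norm(u_prev) + 1e-12):
            break              # steady state reached: u is the minimizer estimate
    return u
```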
Miklos Laczkovich (2020)
Let $G$ be a topological Abelian semigroup with unit, let $E$ be a Banach space, and let $C(G,E)$ denote the set of continuous functions $f\colon G\to E$. A function $f\in C(G,E)$ is a generalized polynomial if there is an $n\ge 0$ such that $\Delta_{h_1}\ldots\Delta_{h_{n+1}} f=0$ for every $h_1,\ldots,h_{n+1}\in G$, where $\Delta_h$ is the difference operator. We say that $f\in C(G,E)$ is a polynomial if it is a generalized polynomial and the linear span of its translates is of finite dimension; $f$ is a w-polynomial if $u\circ f$ is a polynomial for every $u\in E^*$, and $f$ is a local polynomial if it is a polynomial on every finitely generated subsemigroup. We show that each of the classes of polynomials, w-polynomials, generalized polynomials, and local polynomials is contained in the next class. If $G$ is an Abelian group and has a dense subgroup with finite torsion-free rank, then these classes coincide. We introduce the classes of exponential polynomials and w-exponential polynomials as well, establish their representations and their connection with polynomials and w-polynomials. We also investigate spectral synthesis and analysis in the class $C(G,E)$. It is known that if $G$ is a compact Abelian group and $E$ is a Banach space, then spectral synthesis holds in $C(G,E)$. On the other hand, we show that if $G$ is an infinite and discrete Abelian group and $E$ is a Banach space of infinite dimension, then even spectral analysis fails in $C(G,E)$. If, however, $G$ is discrete, has finite torsion-free rank, and $E$ is a Banach space of finite dimension, then spectral synthesis holds in $C(G,E)$.
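For readers unfamiliar with the notation, the difference operator used above is the standard one, and the stated condition bounds the degree:

```latex
\[
  \Delta_h f(x) = f(x+h) - f(x), \qquad x,\,h \in G,
\]
\[
  \Delta_{h_1}\cdots\Delta_{h_{n+1}} f \equiv 0
  \quad\text{for all } h_1,\dots,h_{n+1}\in G
  \;\Longleftrightarrow\; f \text{ is a generalized polynomial of degree at most } n.
\]
```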
We consider total variation minimization for manifold-valued data. We propose a cyclic proximal point algorithm and a parallel proximal point algorithm to minimize TV functionals with $\ell^p$-type data terms in the manifold case. These algorithms are based on iterative geodesic averaging, which makes them easily applicable to a large class of data manifolds. As an application, we consider denoising images which take their values in a manifold. We apply our algorithms to diffusion tensor images, interferometric SAR images, as well as sphere- and cylinder-valued images. For the class of Cartan-Hadamard manifolds (which includes the data space in diffusion tensor imaging) we show the convergence of the proposed TV minimizing algorithms to a global minimizer.
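A minimal sketch of the geodesic-averaging building block behind such proximal point schemes, illustrated on the unit sphere (the data manifolds in the paper differ, and names here are assumptions for illustration): the proximal map of $\lambda\, d(p,q)$ pulls the two points toward each other along the connecting geodesic, with the step capped at the midpoint.

```python
import numpy as np

def sphere_geodesic_point(p, q, t):
    """Point at parameter t in [0, 1] on the geodesic from p to q on S^2 (slerp)."""
    cos_ang = np.clip(np.dot(p, q), -1.0, 1.0)
    ang = np.arccos(cos_ang)
    if ang < 1e-12:
        return p.copy()
    return (np.sin((1 - t) * ang) * p + np.sin(t * ang) * q) / np.sin(ang)

def prox_pairwise_tv(p, q, lam):
    """Proximal map of lam * d(p, q): move each point toward the other."""
    d = np.arccos(np.clip(np.dot(p, q), -1.0, 1.0))
    t = min(lam / max(d, 1e-12), 0.5)   # step capped at the geodesic midpoint
    return sphere_geodesic_point(p, q, t), sphere_geodesic_point(q, p, t)
```

A cyclic proximal point iteration then alternates this pairwise TV proximal map over neighboring pixels with the proximal map of the data term.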
In this paper, we consider the sparse regularization of manifold-valued data with respect to an interpolatory wavelet/multiscale transform. We propose and study variational models for this task and provide results on their well-posedness. We present algorithms for a numerical realization of these models in the manifold setup. Further, we provide experimental results to show the potential of the proposed schemes for applications.