
Multidimensional TV-Stokes for image processing

Added by Bin Wu
Publication date: 2020
Language: English





A complete multidimensional TV-Stokes model is proposed, based on smoothing a gradient field in the first step and reconstructing the multidimensional image from that gradient field in the second. It is the correct extension of the original two-dimensional TV-Stokes model to multiple dimensions. A numerical algorithm using Chambolle's semi-implicit dual formula is proposed. Numerical results for denoising 3D images and movies are presented; they show excellent performance in avoiding the staircase effect and preserving fine structures.
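To make the two-step structure concrete, below is a minimal NumPy sketch of a multidimensional TV-Stokes-style denoiser: step one smooths each component of the noisy image's gradient field with a regularized TV term, and step two reconstructs the image whose gradient best fits the smoothed field. It uses plain explicit gradient descent rather than Chambolle's semi-implicit dual algorithm used in the paper, and all parameter values (`lam`, `mu`, `tau`, `eps`) are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def grad(u):
    """Forward-difference gradient: one array per axis, zero on the last slice."""
    g = []
    for a in range(u.ndim):
        d = np.zeros_like(u)
        sl = [slice(None)] * u.ndim
        sl[a] = slice(0, -1)
        d[tuple(sl)] = np.diff(u, axis=a)
        g.append(d)
    return g

def div(p):
    """Backward-difference divergence, the exact negative adjoint of grad."""
    out = np.zeros_like(p[0])
    for a, pa in enumerate(p):
        d = pa.copy()
        cur = [slice(None)] * pa.ndim;  cur[a] = slice(1, None)
        prv = [slice(None)] * pa.ndim;  prv[a] = slice(0, -1)
        d[tuple(cur)] -= pa[tuple(prv)]
        last = [slice(None)] * pa.ndim; last[a] = slice(-1, None)
        pen = [slice(None)] * pa.ndim;  pen[a] = slice(-2, -1)
        d[tuple(last)] = -pa[tuple(pen)]
        out += d
    return out

def tv_smooth(f, lam=0.1, tau=0.05, iters=200, eps=0.05):
    """Gradient descent on 0.5*||v - f||^2 + lam * smoothed-TV(v)."""
    v = f.copy()
    for _ in range(iters):
        g = grad(v)
        mag = np.sqrt(sum(gi ** 2 for gi in g) + eps ** 2)
        v -= tau * ((v - f) - lam * div([gi / mag for gi in g]))
    return v

def tvstokes_denoise(f, lam=0.1, mu=0.5, tau=0.05, iters=200):
    """Two-step sketch: (1) TV-smooth each component of the noisy gradient
    field, (2) reconstruct the image whose gradient fits the smoothed field."""
    n = [tv_smooth(gi, lam=lam, tau=tau, iters=iters) for gi in grad(f)]
    u = f.copy()
    for _ in range(iters):
        u -= tau * (-div([gi - ni for gi, ni in zip(grad(u), n)]) + mu * (u - f))
    return u

if __name__ == "__main__":
    # Small synthetic 3D volume (a movie would be treated as an x-y-t array).
    rng = np.random.default_rng(0)
    clean = np.zeros((32, 32, 16))
    clean[8:24, 8:24, 4:12] = 1.0
    noisy = clean + 0.2 * rng.standard_normal(clean.shape)
    denoised = tvstokes_denoise(noisy)
    print("RMS error before/after:",
          np.sqrt(np.mean((noisy - clean) ** 2)),
          np.sqrt(np.mean((denoised - clean) ** 2)))
```

Because the gradient and divergence operators loop over all axes of the input array, the same sketch runs unchanged on 2D images, 3D volumes, or movies.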




Related research

We propose a set of iterative regularization algorithms for the TV-Stokes model to restore images corrupted by Gaussian noise. These extend the iterative regularization algorithm proposed for the classical Rudin-Osher-Fatemi (ROF) model for image reconstruction, a single-step model involving a scalar field smoothing, to the TV-Stokes model, a two-step model involving a vector field smoothing in the first step and a scalar field smoothing in the second. The iterative regularization algorithms proposed here are Richardson-iteration-like. Experimental results show an improvement over the original method in the quality of the restored image. Convergence analysis and numerical experiments are presented.
Bin Wu, Xue-Cheng Tai, 2020
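The residual add-back form of iterative regularization known from the ROF literature can be sketched as a small outer loop around any denoiser. The sketch below assumes that structure; whether it coincides exactly with the authors' Richardson-like scheme is not stated in the abstract, and the Gaussian filter only stands in for the ROF or TV-Stokes denoising step.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def iterative_regularization(f, denoise, n_outer=5):
    """Add-back-the-residual outer loop around a denoiser.

    At each outer step the removed signal (the residual f - u_k) is added
    back to the data before denoising again, which progressively restores
    contrast and texture lost in the first denoising pass.
    """
    v = np.zeros_like(f)          # accumulated residual
    u = np.zeros_like(f)
    for _ in range(n_outer):
        u = denoise(f + v)        # inner step: any smoother, e.g. TV-Stokes
        v = v + (f - u)           # outer step: add back what was removed
    return u

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    clean = np.kron(rng.random((8, 8)), np.ones((16, 16)))   # blocky test image
    noisy = clean + 0.1 * rng.standard_normal(clean.shape)
    # Stand-in denoiser: Gaussian smoothing (the papers use ROF / TV-Stokes).
    u = iterative_regularization(noisy, lambda x: gaussian_filter(x, sigma=2.0))
    print("error after iterative regularization:", np.linalg.norm(u - clean))
```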
This paper presents a fully coupled TV-Stokes model and proposes an algorithm based on alternating minimization of the objective functional, whose first iteration is exactly the modified TV-Stokes model proposed earlier. The model is a generalization of the second-order Total Generalized Variation model. A convergence analysis is given.
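A minimal sketch of the alternating-minimization control flow is given below, with the two subproblem solvers passed in as callables; the quadratic toy objective is only there to make the loop runnable and is not the coupled TV-Stokes functional.

```python
import numpy as np

def alternating_minimization(update_n, update_u, n0, u0, n_iters=20):
    """Coordinate-wise (alternating) minimization of a coupled objective J(n, u).

    update_n(u) should return (an approximation of) argmin_n J(n, u), and
    update_u(n) the same for the image variable.  With a TV-Stokes-type
    splitting, the first pass is exactly the two-step model: smooth the
    field first, then reconstruct the image from it.
    """
    n, u = n0, u0
    for _ in range(n_iters):
        n = update_n(u)
        u = update_u(n)
    return n, u

if __name__ == "__main__":
    # Toy coupled quadratic J(n, u) = ||n - A u||^2 + ||u - f||^2,
    # used only to exercise the control flow with exact subproblem solvers.
    rng = np.random.default_rng(2)
    A = rng.random((5, 5)); f = rng.random(5)
    upd_n = lambda u: A @ u
    upd_u = lambda n: np.linalg.solve(A.T @ A + np.eye(5), A.T @ n + f)
    n, u = alternating_minimization(upd_n, upd_u, np.zeros(5), f.copy())
    print("converged u:", u)
```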
Low-rank approximations of original samples are playing an increasingly important role in many recently proposed mathematical models from data science. A natural initial requirement is that these representations inherit the original structures or properties. With this aim, we propose a new multi-symplectic method based on the Lanczos bidiagonalization to compute the partial singular triplets of JRS-symmetric matrices. These singular triplets can be used to reconstruct optimal low-rank approximations while preserving the intrinsic multi-symmetry. The augmented Ritz and harmonic Ritz vectors are used to perform implicit restarting to obtain a satisfactory bidiagonal matrix for calculating the $k$ largest or smallest singular triplets, respectively. We also apply the new multi-symplectic Lanczos algorithms to color face recognition and to color video compression and reconstruction. Numerical experiments indicate their superiority over state-of-the-art algorithms.
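For orientation, a plain (unstructured) Golub-Kahan-Lanczos bidiagonalization for approximating the largest singular triplets is sketched below; it omits the implicit restarting and the JRS-symmetry / multi-symplectic structure preservation that are the paper's actual contribution.

```python
import numpy as np

def lanczos_bidiag(A, k, rng=None):
    """k steps of Golub-Kahan-Lanczos bidiagonalization (no restarting).

    Returns U (m x k), B (k x k upper bidiagonal), V (n x k) with A V ~= U B.
    The singular values of B approximate the extreme singular values of A.
    """
    rng = rng or np.random.default_rng(0)
    m, n = A.shape
    U = np.zeros((m, k)); V = np.zeros((n, k))
    alphas = np.zeros(k); betas = np.zeros(k - 1)
    v = rng.standard_normal(n); v /= np.linalg.norm(v)
    beta, u_prev = 0.0, np.zeros(m)
    for j in range(k):
        u = A @ v - beta * u_prev
        alpha = np.linalg.norm(u); u /= alpha
        U[:, j], V[:, j], alphas[j] = u, v, alpha
        w = A.T @ u - alpha * v
        w -= V[:, :j + 1] @ (V[:, :j + 1].T @ w)   # full reorthogonalization
        beta = np.linalg.norm(w)
        if j < k - 1:
            betas[j] = beta
            v = w / beta
            u_prev = u
    B = np.diag(alphas) + np.diag(betas, 1)
    return U, B, V

if __name__ == "__main__":
    # Test matrix with a well-separated, decaying singular spectrum.
    rng = np.random.default_rng(3)
    Q1, _ = np.linalg.qr(rng.standard_normal((200, 120)))
    Q2, _ = np.linalg.qr(rng.standard_normal((120, 120)))
    A = Q1 @ np.diag(1.0 / (1 + np.arange(120))) @ Q2.T
    U, B, V = lanczos_bidiag(A, k=20, rng=rng)
    approx = np.linalg.svd(B, compute_uv=False)[:5]
    exact = np.linalg.svd(A, compute_uv=False)[:5]
    print("largest singular values (Lanczos / exact):")
    print(np.round(approx, 4), np.round(exact, 4))
```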
Wavelets are closely related to Schrödinger's wave functions and Born's interpretation. Analogously to the formation of atomic orbitals, it is proposed to combine anti-symmetric wavelets into orbital wavelets. The proposed approach allows the dimension of the wavelets to be increased through this process. New orbital 2D wavelets are introduced for the decomposition of still images, showing that it is possible to perform analysis simultaneously at two distinct scales. An example of such an image analysis is shown.
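For readers unfamiliar with multiscale image analysis, the sketch below shows a standard two-level 2D wavelet decomposition using PyWavelets with an ordinary Daubechies wavelet; it only illustrates what analysis at two distinct scales means and does not implement the orbital wavelets constructed in the paper.

```python
import numpy as np
import pywt

# Two-level 2D wavelet decomposition of a still image.  Each level exposes
# approximation and detail coefficients at a different scale.
rng = np.random.default_rng(4)
image = rng.random((128, 128))

coeffs = pywt.wavedec2(image, wavelet="db2", level=2)
cA2, (cH2, cV2, cD2), (cH1, cV1, cD1) = coeffs
print("approximation at level 2:", cA2.shape)
print("details at level 2:", cH2.shape, " details at level 1:", cH1.shape)

# Perfect-reconstruction check (slice in case of a one-pixel size mismatch).
recon = pywt.waverec2(coeffs, wavelet="db2")[:128, :128]
print("max reconstruction error:", np.abs(recon - image).max())
```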
Scientific computations or measurements may result in huge volumes of data. Often these can be thought of as representing a real-valued function on a high-dimensional domain and can be conceptually arranged in the format of a tensor of high degree, stored in some truncated or lossy compressed format. We look at some common post-processing tasks which are not obvious in the compressed format, as such huge data sets cannot be stored in their entirety, and the value of an element is not readily accessible through a simple look-up. The tasks we consider are finding the location of the maximum or minimum, finding the minimum and maximum of a function of the data, finding the indices of all elements lying in some interval (i.e. a level set), the number of elements with a value in such a level set, the probability of an element being in a particular level set, and the mean and variance of the total collection. The algorithms described are fixed-point iterations of particular functions of the tensor, which then exhibit the desired result. For this, the data is considered as an element of a high-degree tensor space, although in an abstract sense the algorithms are independent of the representation of the data as a tensor. All that is required is that the data can be considered as an element of an associative, commutative algebra with an inner product. Such an algebra is isomorphic to a commutative sub-algebra of the usual matrix algebra, allowing the use of matrix algorithms to accomplish the mentioned tasks. We allow the actual computational representation to be a lossy compression, and we allow the algebra operations to be performed in an approximate fashion, so as to maintain a high compression level. One such example which we address explicitly is the representation of the data as a tensor with compression in the form of a low-rank representation.
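The flavor of these fixed-point iterations can be shown on a dense array: repeated Hadamard squaring with renormalization turns the data into an indicator of its maximal entries, which locates the maximum, while the mean and variance reduce to inner products with the all-ones tensor. In the compressed setting the same operations are carried out on a low-rank representation with truncation after each product; the dense NumPy version below is only a sketch of the iteration itself, not of the compressed arithmetic.

```python
import numpy as np

def argmax_by_fixed_point(a, n_iters=30):
    """Locate the maximum entry of an array by a fixed-point iteration.

    Shift the data to be nonnegative, then repeat Hadamard squaring followed
    by normalization; the iterate converges to an indicator of the maximal
    entries, whose support gives the argmax.  In the compressed-tensor setting
    the identical iteration is run on a low-rank representation, with
    truncation after every Hadamard product.
    """
    w = a - a.min()                # nonnegative, same argmax as `a`
    w = w / w.max()                # scale to [0, 1] (assumes `a` is not constant)
    for _ in range(n_iters):
        w = w * w                  # Hadamard square: sharpens the peak
        w = w / np.linalg.norm(w)  # renormalize to avoid under-/overflow
    return np.unravel_index(np.argmax(w), a.shape)

def mean_and_variance(a):
    """Mean and variance written as inner products with the all-ones tensor,
    the form in which they are evaluated in a compressed representation."""
    ones = np.ones_like(a)
    n = a.size
    mean = np.vdot(a, ones) / n
    var = np.vdot(a - mean * ones, a - mean * ones) / n
    return mean, var

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    data = rng.standard_normal((20, 30, 40))   # stands in for huge tensor data
    print("fixed-point argmax:", argmax_by_fixed_point(data))
    print("numpy argmax:      ", np.unravel_index(np.argmax(data), data.shape))
    print("mean / variance:   ", mean_and_variance(data))
```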