Low-rank approximations of original samples play an increasingly important role in many recently proposed mathematical models in data science. A natural initial requirement is that these representations inherit the structures or properties of the original data. With this aim, we propose a new multi-symplectic method based on Lanczos bidiagonalization to compute partial singular triplets of JRS-symmetric matrices. These singular triplets can be used to reconstruct optimal low-rank approximations while preserving the intrinsic multi-symmetry. Augmented Ritz and harmonic Ritz vectors are used to perform implicit restarting to obtain a satisfactory bidiagonal matrix for computing the $k$ largest or smallest singular triplets, respectively. We also apply the new multi-symplectic Lanczos algorithms to color face recognition and to color video compression and reconstruction. Numerical experiments indicate their superiority over state-of-the-art algorithms.
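The core building block named in this abstract is Lanczos (Golub-Kahan) bidiagonalization, which reduces the singular value problem for a large matrix to the SVD of a small bidiagonal matrix. The following is a minimal sketch of that plain real-valued building block only; the paper's multi-symplectic variant for JRS-symmetric matrices, the reorthogonalization, and the implicit restarting are not shown.

```python
import numpy as np

def lanczos_bidiag_svd(A, k, seed=0):
    """Sketch: k steps of Golub-Kahan (Lanczos) bidiagonalization produce
    orthonormal U, V and an upper-bidiagonal B with A V = U B; the SVD of
    the small B then yields approximate singular triplets of A."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    U = np.zeros((m, k)); V = np.zeros((n, k)); B = np.zeros((k, k))
    v = rng.standard_normal(n); v /= np.linalg.norm(v)
    beta, u_prev = 0.0, np.zeros(m)
    for j in range(k):
        V[:, j] = v
        p = A @ v - beta * u_prev          # new left direction
        alpha = np.linalg.norm(p); u = p / alpha
        U[:, j] = u; B[j, j] = alpha
        r = A.T @ u - alpha * v            # new right direction
        beta = np.linalg.norm(r)
        if j + 1 < k:
            v = r / beta; B[j, j + 1] = beta
        u_prev = u
    Ub, s, Vbt = np.linalg.svd(B)
    # Approximate singular triplets (u_i, s_i, v_i) of A:
    return U @ Ub, s, V @ Vbt.T
```

In practice one keeps k much smaller than n and restarts (as the abstract describes) rather than running the recurrence to completion.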
Scientific computations or measurements may produce huge volumes of data. Often these can be thought of as representing a real-valued function on a high-dimensional domain and can be conceptually arranged as a tensor of high degree, stored in some truncated or lossy compressed format. We look at some common post-processing tasks that are not obvious to carry out in the compressed format, since such huge data sets cannot be stored in their entirety and the value of an individual element is not accessible through simple look-up. The tasks we consider are: finding the location of the maximum or minimum, or the minimum and maximum of a function of the data; finding the indices of all elements whose values lie in some interval, i.e. level sets; the number of elements with a value in such a level set; the probability of an element lying in a particular level set; and the mean and variance of the total collection. The algorithms to be described are fixed-point iterations of particular functions of the tensor, which then exhibit the desired result. For this, the data is considered as an element of a high-degree tensor space, although, in an abstract sense, the algorithms are independent of the representation of the data as a tensor. All that we require is that the data can be considered as an element of an associative, commutative algebra with an inner product. Such an algebra is isomorphic to a commutative sub-algebra of the usual matrix algebra, which allows matrix algorithms to be used to accomplish the tasks mentioned. We allow the actual computational representation to be a lossy compression, and we allow the algebra operations to be performed approximately, so as to maintain a high compression level. One example which we address explicitly is the representation of the data as a tensor compressed in the form of a low-rank representation.
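To make the idea of "fixed-point iterations whose limit exhibits the desired result" concrete, here is a minimal sketch of one such iteration in the elementwise (Hadamard) algebra: repeatedly multiplying by the data and renormalizing drives the iterate toward an indicator of the location of the maximum. The function name and the plain-numpy setting are illustrative assumptions; in the paper's setting every operation would be performed approximately on a compressed low-rank representation.

```python
import numpy as np

def argmax_by_hadamard_iteration(d, iters=60):
    """Fixed-point iteration in the commutative elementwise algebra:
    x <- (x * d) / max(x * d).  For d > 0 with a unique maximum, the
    iterate x = (d / d_max)^k converges to the indicator vector of the
    maximizing index, so the location of the maximum is 'exhibited' by
    the limit without any element-by-element look-up."""
    x = np.ones_like(d, dtype=float)
    for _ in range(iters):
        x = x * d
        x = x / x.max()   # renormalize so the largest entry is 1
    return x              # approximately the indicator of the argmax
```

The same pattern, with other functions applied in the algebra, underlies the level-set and counting tasks listed above.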
A complete multidimensional TV-Stokes model is proposed, based on smoothing a gradient field in a first step and reconstructing the multidimensional image from this gradient field. It is the correct extension of the original two-dimensional TV-Stokes model to multiple dimensions. A numerical algorithm using Chambolle's semi-implicit dual formula is proposed. Numerical results for denoising 3D images and movies are presented. They show excellent performance in avoiding the staircase effect and preserving fine structures.
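Chambolle's semi-implicit dual iteration, which the abstract builds on, can be sketched for the basic 2D ROF model (rather than the paper's multidimensional TV-Stokes steps). This is a minimal illustration under standard assumptions (forward-difference gradient, its negative adjoint as divergence, step size tau <= 1/8); the function names are ours.

```python
import numpy as np

def grad(u):
    # Forward differences with zero boundary
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    # Negative adjoint of grad
    d = np.zeros_like(px)
    d[0, :] = px[0, :]; d[1:-1, :] = px[1:-1, :] - px[:-2, :]; d[-1, :] = -px[-2, :]
    d[:, 0] += py[:, 0]; d[:, 1:-1] += py[:, 1:-1] - py[:, :-2]; d[:, -1] -= py[:, -2]
    return d

def chambolle_tv(f, lam=0.1, tau=0.125, iters=100):
    """Chambolle's semi-implicit dual iteration for the 2D ROF model
    min_u ||u - f||^2 / (2*lam) + TV(u): update the dual field p and
    recover u = f - lam * div(p)."""
    px = np.zeros_like(f); py = np.zeros_like(f)
    for _ in range(iters):
        gx, gy = grad(div(px, py) - f / lam)
        denom = 1.0 + tau * np.sqrt(gx**2 + gy**2)
        px = (px + tau * gx) / denom
        py = (py + tau * gy) / denom
    return f - lam * div(px, py)
```

In the TV-Stokes setting the same dual update is applied to the gradient-field smoothing and reconstruction subproblems rather than directly to the image.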
Optimization of conflicting functions is of paramount importance in decision making, and real-world applications frequently involve data that are uncertain or unknown, resulting in multi-objective optimization (MOO) problems of stochastic type. We study the stochastic multi-gradient (SMG) method, seen as an extension of the classical stochastic gradient method for single-objective optimization. At each iteration of the SMG method, a stochastic multi-gradient direction is calculated by solving a quadratic subproblem, and it is shown that this direction is biased even when all individual gradient estimators are unbiased. We establish rates for computing a point on the Pareto front, of order similar to what is known for the stochastic gradient method in both the convex and strongly convex cases. The analysis handles the bias in the multi-gradient and the a priori unknown weights of the limiting Pareto point. The SMG method is framed into a Pareto-front-type algorithm for the computation of the entire Pareto front. The Pareto-front SMG algorithm is capable of robustly determining Pareto fronts for a number of synthetic test problems. It can be applied to any stochastic MOO problem arising from supervised machine learning, and we report results for logistic binary classification, where the multiple objectives correspond to data groups from distinct sources.
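The quadratic subproblem defining the multi-gradient direction is the minimum-norm element of the convex hull of the objectives' gradients; for two objectives it has a closed-form solution. The sketch below shows that two-objective case only (the function name is ours). Plugging in stochastic gradient estimates gives the biased stochastic multi-gradient direction the abstract analyzes.

```python
import numpy as np

def multi_gradient_two(g1, g2):
    """Common-descent (multi-gradient) direction for two objectives:
    the negative of the minimum-norm point of conv{g1, g2}, i.e.
    minimize ||lam*g1 + (1-lam)*g2||^2 over lam in [0, 1]."""
    diff = g1 - g2
    denom = diff @ diff
    # Closed-form minimizer, clipped to the simplex [0, 1]
    lam = 0.5 if denom == 0 else np.clip((g2 - g1) @ g2 / denom, 0.0, 1.0)
    return -(lam * g1 + (1.0 - lam) * g2)
```

When the gradients point in opposite directions the minimum-norm point is zero, signalling a Pareto-critical point; otherwise the returned direction has nonpositive inner product with both gradients, so it is a descent direction for both objectives.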
This paper is concerned with solving ill-posed tensor linear equations. Such equations may arise from finite-difference discretization of high-dimensional convection-diffusion problems, or when partial differential equations in many dimensions are discretized by spectral collocation methods. Here, we propose the Tensor Golub--Kahan bidiagonalization (TGKB) algorithm in conjunction with the well-known Tikhonov regularization method to solve such problems. Theoretical results are presented that discuss the conditioning of the Stein tensor equation and reveal how the TGKB process can be exploited for general tensor equations. In the last section, some classical test problems are examined to illustrate numerically the feasibility of the proposed algorithms, and applications to color image restoration are also considered.
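The regularization half of the TGKB + Tikhonov pairing can be illustrated in the simplest matrix setting: Tikhonov replaces the ill-posed least-squares problem by a penalized one that a stacked system solves stably. This is a minimal sketch of that standard step only, solved directly; in the paper the same small regularized problem is solved in the projected (bidiagonal) coordinates produced by the TGKB process rather than with the full operator.

```python
import numpy as np

def tikhonov(A, b, mu):
    """Tikhonov-regularized solution of A x ~ b:
    min_x ||A x - b||^2 + mu^2 ||x||^2, computed via the equivalent
    stacked least-squares system [A; mu*I] x ~ [b; 0]."""
    m, n = A.shape
    A_aug = np.vstack([A, mu * np.eye(n)])
    b_aug = np.concatenate([b, np.zeros(n)])
    return np.linalg.lstsq(A_aug, b_aug, rcond=None)[0]
```

Larger mu shrinks the solution norm, trading data fit for stability; choosing mu (e.g. by the discrepancy principle) is part of the method the abstract refers to.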
By applying the MC algorithm and the Bauer-Muir transformation for continued fractions, in this paper we give six examples showing how to establish an infinite set of continued fraction formulas for certain Ramanujan-type series involving, for example, Catalan's constant and the exponential function.
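Continued fraction formulas of this kind can be checked numerically by evaluating the fraction with a backward recurrence. The sketch below does this for the classical simple continued fraction of the exponential constant, e = [2; 1, 2, 1, 1, 4, 1, 1, 6, ...] (the function names are ours; the paper's formulas involve general, not simple, continued fractions, for which the same backward-evaluation idea applies).

```python
import math

def eval_cf(a0, partial_quotients):
    """Evaluate the simple continued fraction
    a0 + 1/(a1 + 1/(a2 + ...)) by backward recurrence."""
    x = 0.0
    for a in reversed(partial_quotients):
        x = 1.0 / (a + x)
    return a0 + x

def e_quotients(n):
    """First n partial quotients after a0 = 2 in the known expansion
    e = [2; 1, 2, 1, 1, 4, 1, 1, 6, ...], i.e. blocks (1, 2k, 1)."""
    q, k = [], 1
    while len(q) < n:
        q += [1, 2 * k, 1]
        k += 1
    return q[:n]
```

With a few dozen partial quotients the backward recurrence already matches e to well beyond single precision, which makes it a convenient sanity check for derived formulas.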