
System theory and orthogonal multi-wavelets

Added by Maria Charina
Publication date: 2016
Language: English





In this paper we provide a complete and unifying characterization of compactly supported univariate scalar orthogonal wavelets and vector-valued or matrix-valued orthogonal multi-wavelets. This characterization is based on classical results from system theory and basic linear algebra. In particular, we show that the corresponding wavelet and multi-wavelet masks are identified with a transfer function $$ F(z) = A + Bz\,(I - Dz)^{-1}C, \quad z \in \mathbb{D} = \{z \in \mathbb{C} : |z| < 1\}, $$ of a conservative linear system. The complex matrices $A, B, C, D$ define a block circulant unitary matrix. Our results show that there are no intrinsic differences between the elegant wavelet construction by Daubechies and any other construction of vector-valued or matrix-valued multi-wavelets. The structure of the unitary matrix defined by $A, B, C, D$ allows us to parametrize in a systematic way all classes of possible wavelet and multi-wavelet masks, together with the masks of the corresponding refinable functions.
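As an illustration of this identification, the sketch below (Python/NumPy) uses a generic random unitary matrix standing in for the paper's block circulant one, with block sizes chosen arbitrarily for the example; it partitions the matrix into blocks $A, B, C, D$ and checks numerically that $F(z)$ is unitary on the unit circle, as conservativity requires.

```python
import numpy as np

# Illustrative sketch (not the paper's block circulant construction):
# build a random unitary matrix, split it into blocks A, B, C, D, and
# evaluate the transfer function F(z) = A + B z (I - D z)^{-1} C.
rng = np.random.default_rng(0)
p, q = 2, 4                                   # illustrative block sizes
M = rng.standard_normal((p + q, p + q)) + 1j * rng.standard_normal((p + q, p + q))
U, _ = np.linalg.qr(M)                        # random unitary matrix

A, B = U[:p, :p], U[:p, p:]
C, D = U[p:, :p], U[p:, p:]

def F(z):
    """Transfer function of the conservative system (A, B, C, D)."""
    return A + z * (B @ np.linalg.solve(np.eye(q) - z * D, C))

# Inside the unit disk F(z) is a contraction; on the unit circle it is
# unitary, reflecting the conservativity of the underlying system.
Fz = F(np.exp(0.7j))
print(np.allclose(Fz.conj().T @ Fz, np.eye(p)))   # True
```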



Related research

We present algorithms to numerically evaluate Daubechies wavelets and scaling functions to high relative accuracy. These algorithms refine the suggestion of Daubechies and Lagarias to evaluate functions defined by two-scale difference equations using splines, carefully choosing among a family of rapidly convergent interpolators which effectively capture all the smoothness present in the function and whose error term admits a small asymptotic constant. We are also able to efficiently compute derivatives, though with a smoothness-induced reduction in accuracy. An implementation is provided in the Boost Software Library.
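The paper's spline-based interpolators are not reproduced here; the minimal sketch below only illustrates the underlying two-scale machinery, evaluating the D4 (db2) scaling function at dyadic points by the classical exact recursion of Daubechies and Lagarias. The coefficients are the standard D4 refinement mask and the resolution is an arbitrary choice for the example.

```python
import numpy as np

# Evaluate the D4 (db2) scaling function phi at dyadic points from its
# two-scale equation phi(x) = sum_k c_k phi(2x - k), supp(phi) = [0, 3].
s3 = np.sqrt(3.0)
c = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / 4.0   # refinement mask

# Values at the integers: (phi(1), phi(2)) is the eigenvector for
# eigenvalue 1 of the matrix below, normalized so the values sum to 1.
M = np.array([[c[1], c[0]],
              [c[3], c[2]]])
w, V = np.linalg.eig(M)
v = np.real(V[:, np.argmin(np.abs(w - 1.0))])
v /= v.sum()

L = 10                                   # dyadic resolution 2**-L
phi = np.zeros(3 * 2**L + 1)             # phi on the grid k / 2**L in [0, 3]
phi[2**L], phi[2 * 2**L] = v             # phi(1), phi(2); phi(0) = phi(3) = 0

# Fill finer dyadic points level by level with the two-scale relation.
for j in range(1, L + 1):
    step = 2**(L - j)
    for i in range(step, len(phi), 2 * step):       # new points at level j
        idx = 2 * i - np.arange(4) * 2**L           # indices of phi(2x - k)
        ok = (idx >= 0) & (idx < len(phi))
        phi[i] = c[ok] @ phi[idx[ok]]

print(phi[2**L], phi[2 * 2**L])          # (1+sqrt(3))/2, (1-sqrt(3))/2
```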
We present a class of fast subspace tracking algorithms based on orthogonal iterations for structured matrices/pencils that can be represented as small-rank perturbations of unitary matrices. The algorithms rely upon an updated data-sparse factorization -- named the LFR factorization -- using orthogonal Hessenberg matrices. These new subspace trackers reach a complexity of only $O(nk^2)$ operations per time update, where $n$ and $k$ are the size of the matrix and of the small-rank perturbation, respectively.
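The LFR factorization itself is not reproduced here; the sketch below only shows the plain orthogonal iteration these trackers are built on, run at the unstructured $O(n^2k)$ cost on a dense symmetric test matrix whose spectral gap is engineered for the demonstration.

```python
import numpy as np

# Plain orthogonal iteration on a dense matrix; the paper's O(n k^2)
# trackers exploit the LFR factorization instead, which is not
# reproduced here. The spectral gap below is engineered for the demo.
rng = np.random.default_rng(0)
n, k = 50, 3
d = np.linspace(1.0, 0.1, n)
d[:k] = [10.0, 9.0, 8.0]               # dominant eigenvalues, clear gap
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = (V * d) @ V.T                      # symmetric test matrix V diag(d) V^T

Q, _ = np.linalg.qr(rng.standard_normal((n, k)))
for _ in range(50):                    # in true tracking, A varies slowly
    Q, _ = np.linalg.qr(A @ Q)         # multiply and re-orthonormalize

# Q spans the dominant invariant subspace: the principal-angle cosines
# between range(Q) and the top eigenvectors are all close to 1.
print(np.linalg.svd(Q.T @ V[:, :k], compute_uv=False))
```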
Karim Halaseh, Tommi Muller, 2020
In this paper we study the problem of recovering a tensor network decomposition of a given tensor $\mathcal{T}$ in which the tensors at the vertices of the network are orthogonally decomposable. Specifically, we consider tensor networks in the form of tensor trains (aka matrix product states). When the tensor train has length 2, and the orthogonally decomposable tensors at the two vertices of the network are symmetric, we show how to recover the decomposition by considering random linear combinations of slices. Furthermore, if the tensors at the vertices are symmetric but not orthogonally decomposable, we show that a whitening procedure can transform the problem into an orthogonal one, thereby yielding a solution for the decomposition of the tensor. When the tensor network has length 3 or more and the tensors at the vertices are symmetric and orthogonally decomposable, we provide an algorithm for recovering them subject to some rank conditions. Finally, in the case of tensor trains of length two in which the tensors at the vertices are orthogonally decomposable but not necessarily symmetric, we show that the decomposition problem reduces to a novel matrix decomposition problem, that of an orthogonal matrix multiplied by diagonal matrices on either side. We provide two solutions for the full-rank tensor case using Sinkhorn's theorem and the Procrustes algorithm, respectively, and show that the Procrustes-based solution can be generalized to any rank case. We conclude with a multitude of open problems in linear and multilinear algebra that arose in our study.
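As a hedged illustration of the slice trick in its simplest setting, the sketch below recovers a single symmetric, orthogonally decomposable 3-tensor (not a full tensor train) from one random linear combination of its slices; sizes and weights are arbitrary choices for the example.

```python
import numpy as np

# For a symmetric, orthogonally decomposable T = sum_i lam_i v_i^{(x)3},
# a random combination of slices is M = sum_i lam_i <v_i, w> v_i v_i^T,
# so an eigendecomposition of M recovers the v_i.
rng = np.random.default_rng(0)
n, r = 6, 4

V, _ = np.linalg.qr(rng.standard_normal((n, n)))
V = V[:, :r]                                    # orthonormal components v_i
lam = rng.uniform(1.0, 2.0, r)
T = np.einsum('i,ai,bi,ci->abc', lam, V, V, V)  # T = sum_i lam_i v_i^{(x)3}

w = rng.standard_normal(n)
M = np.einsum('abc,c->ab', T, w)                # random combination of slices
evals, U = np.linalg.eigh(M)
idx = np.argsort(-np.abs(evals))[:r]            # the r nonzero eigenvalues

# Up to sign and order, the corresponding eigenvectors match the v_i:
print(np.round(np.abs(U[:, idx].T @ V), 3))     # permutation-like matrix
```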
Chao Zeng, 2021
The orthogonal decomposition factorizes a tensor into a sum of an orthogonal list of rank-one tensors. We present several properties of the orthogonal rank. We find that a subtensor may have a larger orthogonal rank than the whole tensor, and we prove the lower semicontinuity of the orthogonal rank. The lower semicontinuity guarantees the existence of low orthogonal rank approximations. To fit the orthogonal decomposition, we propose an algorithm based on the augmented Lagrangian method and guarantee the orthogonality by a novel orthogonalization procedure. Numerical experiments show that the proposed method has a great advantage over the existing methods for strongly orthogonal decompositions in terms of the approximation error.
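The paper's novel orthogonalization procedure is not spelled out in the abstract, so it is not reproduced here; the sketch below shows a standard stand-in, projecting a factor matrix onto the nearest matrix with orthonormal columns via the polar decomposition, the kind of step an orthogonality-constrained fitting loop requires.

```python
import numpy as np

# Standard orthogonalization stand-in (not the paper's procedure):
# project a factor matrix onto the nearest matrix with orthonormal
# columns, i.e. take its polar factor.
def orthonormalize(X):
    """Nearest matrix with orthonormal columns in the Frobenius norm."""
    U, _, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ Vt

rng = np.random.default_rng(0)
X = rng.standard_normal((8, 3))         # unconstrained factor iterate
Q = orthonormalize(X)
print(np.allclose(Q.T @ Q, np.eye(3)))  # True: columns are orthonormal
```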
Tamara G. Kolda, 2015
We consider the problem of decomposing a real-valued symmetric tensor as the sum of outer products of real-valued, pairwise orthogonal vectors. Such decompositions do not generally exist, but we show that some symmetric tensor decomposition problems can be converted to orthogonal problems following the whitening procedure proposed by Anandkumar et al. (2012). If an orthogonal decomposition of an $m$-way $n$-dimensional symmetric tensor exists, we propose a novel method to compute it that reduces to an $n \times n$ symmetric matrix eigenproblem. We provide numerical results demonstrating the effectiveness of the method.
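The sketch below illustrates the whitening step the method builds on; the sizes, the weights, and the availability of the second-moment matrix $M_2$ are all assumptions made for the example.

```python
import numpy as np

# Whitening (after Anandkumar et al., 2012): given
# T = sum_i lam_i a_i^{(x)3} with independent but non-orthogonal a_i and
# the matrix M2 = sum_i lam_i a_i a_i^T (assumed available), a map W with
# W^T M2 W = I turns T into an orthogonally decomposable tensor.
rng = np.random.default_rng(0)
n, r = 5, 3

A = rng.standard_normal((n, r))                 # non-orthogonal components
lam = rng.uniform(1.0, 2.0, r)
T = np.einsum('i,ai,bi,ci->abc', lam, A, A, A)
M2 = (A * lam) @ A.T                            # second-moment matrix

s, U = np.linalg.eigh(M2)                       # rank-r PSD eigendecomposition
s, U = s[-r:], U[:, -r:]                        # keep the r positive eigenvalues
W = U / np.sqrt(s)                              # whitening map, W^T M2 W = I

Tw = np.einsum('abc,ai,bj,ck->ijk', T, W, W, W) # whitened r x r x r tensor
B = np.sqrt(lam) * (W.T @ A)                    # b_i = sqrt(lam_i) W^T a_i
print(np.allclose(B.T @ B, np.eye(r)))          # True: the b_i are orthonormal
print(np.allclose(Tw, np.einsum('i,ai,bi,ci->abc', lam**-0.5, B, B, B)))
```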
