
Learning Schatten--von Neumann Operators

Added by Puoya Tabaghi
Publication date: 2019
Language: English





We study the learnability of a class of compact operators known as Schatten--von Neumann operators. These operators between infinite-dimensional function spaces play a central role in a variety of applications in learning theory and inverse problems. We address the question of the sample complexity of learning Schatten--von Neumann operators and provide an upper bound on the number of measurements required for the empirical risk minimizer to generalize with arbitrary precision and probability, as a function of the class parameter $p$. Our results give generalization guarantees for regression of infinite-dimensional signals from infinite-dimensional data. Next, we adapt the representer theorem of Abernethy et al. to show that empirical risk minimization over an a priori infinite-dimensional, non-compact set can be converted to a convex, finite-dimensional optimization problem over a compact set. In summary, the class of $p$-Schatten--von Neumann operators is probably approximately correct (PAC)-learnable via a practical convex program for any $p < \infty$.
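The Schatten $p$-norm at the heart of this operator class is the $\ell_p$ norm of the singular-value sequence. A minimal NumPy sketch, with finite-dimensional matrices standing in for the compact operators (an illustration of the definition, not the paper's code):

```python
import numpy as np

def schatten_norm(A, p):
    """Schatten p-norm: the l_p norm of the singular values of A."""
    s = np.linalg.svd(A, compute_uv=False)
    return np.sum(s ** p) ** (1.0 / p)

# Sanity checks against familiar special cases:
A = np.array([[3.0, 0.0],
              [0.0, 4.0]])
print(schatten_norm(A, 1))  # p = 1: nuclear norm, 3 + 4 = 7
print(schatten_norm(A, 2))  # p = 2: Frobenius norm, sqrt(9 + 16) = 5
```

For $p = 1$ this recovers the nuclear norm and for $p = 2$ the Frobenius (Hilbert-Schmidt) norm, the two cases most often used as regularizers in practice.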

Related research

We study differential operators on complete Riemannian manifolds which act on sections of a bundle of finite-type modules over a von Neumann algebra with a trace. We prove relative index and Callias-type index theorems for the von Neumann indices of such operators. We apply these results to obtain a version of Atiyah's $L^2$-index theorem, which states that the index of a Callias-type operator on a non-compact manifold $M$ is equal to the $\Gamma$-index of its lift to a Galois cover of $M$. We also prove the cobordism invariance of the index of Callias-type operators. In particular, we give a new proof of the cobordism invariance of the von Neumann index of operators on compact manifolds.
Ammar Shaker, Shujian Yu (2021)
The similarity of feature representations plays a pivotal role in the success of domain adaptation and generalization. Feature similarity includes both the invariance of marginal distributions and the closeness of conditional distributions given the desired response $y$ (e.g., class labels). Unfortunately, traditional methods typically learn such features without fully taking the information in $y$ into consideration, which in turn may lead to a mismatch of the conditional distributions or a mix-up of the discriminative structures underlying the data distributions. In this work, we introduce the recently proposed von Neumann conditional divergence to improve transferability across multiple domains. We show that this new divergence is differentiable and well suited to quantifying the functional dependence between features and $y$. Given multiple source tasks, we integrate this divergence to capture discriminative information in $y$ and design novel learning objectives assuming those source tasks are observed either simultaneously or sequentially. In both scenarios, we obtain favorable performance against state-of-the-art methods in terms of smaller generalization error on new tasks and less catastrophic forgetting on source tasks (in the sequential setup).
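The unconditional von Neumann divergence underlying this line of work can be written, for symmetric positive-definite matrices, as the Bregman divergence of the negative von Neumann entropy, $D(A \| B) = \mathrm{tr}(A \log A - A \log B - A + B)$. A hedged NumPy/SciPy sketch of that base quantity (the paper's conditional variant builds further structure on top and is not reproduced here):

```python
import numpy as np
from scipy.linalg import logm

def von_neumann_divergence(A, B):
    """Von Neumann divergence between symmetric positive-definite A, B:
        D(A || B) = tr(A log A - A log B - A + B).
    As a Bregman divergence it is nonnegative, and zero iff A == B."""
    return np.trace(A @ logm(A) - A @ logm(B) - A + B).real

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 3))
A = X.T @ X / 50 + 0.1 * np.eye(3)   # a regularized sample covariance
B = np.eye(3)

print(von_neumann_divergence(A, A))       # 0 (self-divergence vanishes)
print(von_neumann_divergence(A, B) > 0)   # True (A differs from B)
```

Like the KL divergence it generalizes, it is asymmetric: $D(A \| B) \neq D(B \| A)$ in general.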
We discuss structured Schatten norms for tensor decomposition, which include two recently proposed norms (overlapped and latent) for convex-optimization-based tensor decomposition, and connect tensor decomposition with the wider literature on structured sparsity. Based on the properties of the structured Schatten norms, we mathematically analyze the performance of the latent approach to tensor decomposition, which was empirically found to perform better than the overlapped approach in some settings. We show theoretically that this is indeed the case. In particular, when the unknown true tensor is low-rank in a specific mode, this approach performs as well as if the mode with the smallest rank were known. Along the way, we prove a novel duality result for structured Schatten norms, establish consistency, and discuss the identifiability of this approach. We confirm through numerical simulations that our theoretical prediction can precisely predict the scaling behavior of the mean squared error.
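The overlapped Schatten 1-norm mentioned above sums the nuclear norms of a tensor's mode unfoldings. A small NumPy sketch under that standard definition (an illustration, not the paper's code):

```python
import numpy as np

def unfold(T, mode):
    """Mode-k unfolding: move axis `mode` to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def overlapped_schatten1(T):
    """Overlapped Schatten 1-norm: sum of nuclear norms of all mode unfoldings."""
    return sum(np.linalg.norm(unfold(T, k), 'nuc') for k in range(T.ndim))

# For a rank-1 tensor a (x) b (x) c, every unfolding has rank 1 with
# nuclear norm ||a|| * ||b|| * ||c||, so the overlapped norm is 3x that.
a, b, c = np.array([1.0, 2.0]), np.array([3.0, 0.0]), np.array([0.0, 4.0])
T = np.einsum('i,j,k->ijk', a, b, c)
expected = 3 * np.linalg.norm(a) * np.linalg.norm(b) * np.linalg.norm(c)
print(np.isclose(overlapped_schatten1(T), expected))  # True
```

The latent norm differs in that it infimizes the sum over decompositions of the tensor into mode-wise low-rank parts rather than penalizing all unfoldings of one tensor at once.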
The Kubo-Ando theory deals with connections for positive bounded operators. On the other hand, in various analyses related to von Neumann algebras it is impossible to avoid unbounded operators. In this article we extend the notion of connections to cover various classes of positive unbounded operators (or unbounded objects such as positive forms and weights) appearing naturally in the setting of von Neumann algebras, while keeping all the expected properties. This generalization is carried out for the following classes: (i) positive $\tau$-measurable operators (affiliated with a semi-finite von Neumann algebra equipped with a trace $\tau$), (ii) positive elements in Haagerup's $L^p$-spaces, (iii) semi-finite normal weights on a von Neumann algebra. Investigation of these generalizations requires some analysis (such as certain upper semi-continuity results) of decreasing sequences in various classes. Several results in this direction are proved, which may be of independent interest. Ando studied the Lebesgue decomposition of positive bounded operators by making use of parallel sums. Here, such a decomposition is obtained in the setting of non-commutative (Hilsum) $L^p$-spaces.
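The parallel sum mentioned at the end is the prototypical Kubo-Ando connection. For positive-definite matrices it can be sketched as follows (bounded, finite-dimensional case only; the paper's unbounded generalization is well beyond this toy):

```python
import numpy as np

def parallel_sum(A, B):
    """Parallel sum A : B = (A^{-1} + B^{-1})^{-1} of positive-definite
    matrices -- the prototypical Kubo-Ando connection (think resistors
    in parallel)."""
    return np.linalg.inv(np.linalg.inv(A) + np.linalg.inv(B))

# Scalar sanity check: 2 : 3 = 6/5, like 2-ohm and 3-ohm resistors in parallel.
a = np.array([[2.0]])
b = np.array([[3.0]])
print(parallel_sum(a, b))  # [[1.2]]

# A defining property of connections: A : B <= A in the Loewner order.
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])
B = np.eye(2)
diff = A - parallel_sum(A, B)
print(np.all(np.linalg.eigvalsh(diff) >= -1e-12))  # True
```

The unbounded theory in the article must recover exactly this behavior on the bounded part while extending it to $\tau$-measurable operators, forms, and weights.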
A. Vourdas (2015)
The properties of quantum probabilities are linked to the geometry of quantum mechanics, described by the Birkhoff-von Neumann lattice. Quantum probabilities violate the additivity property of Kolmogorov probabilities, and they are interpreted as Dempster-Shafer probabilities. Deviations from the additivity property are quantified with the Möbius (or non-additivity) operators, which are defined through Möbius transforms and which are shown to be intimately related to commutators. The lack of distributivity in the Birkhoff-von Neumann lattice $\Lambda$ causes deviations from the law of total probability (which is central in Kolmogorov's probability theory). Projectors which quantify the lack of distributivity in $\Lambda$, and also deviations from the law of total probability, are introduced. All these operators are observables and can be measured experimentally. Constraints on the Möbius operators, based on the properties of the Birkhoff-von Neumann lattice (which in the case of finite quantum systems is a modular lattice), are derived. Application of this formalism in the context of coherent states generalizes coherence to multi-dimensional structures.
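The violation of Kolmogorov additivity can be illustrated with two non-commuting projectors whose lattice meet and join are easy to write down by hand (a toy C^2 example, not the paper's formalism):

```python
import numpy as np

def projector(v):
    """Orthogonal projector onto the span of vector v."""
    v = v / np.linalg.norm(v)
    return np.outer(v, v.conj())

# Two non-commuting rank-1 projectors in C^2.
P = projector(np.array([1.0, 0.0]))
Q = projector(np.array([1.0, 1.0]))

# Lattice operations are simple here: the ranges of P and Q intersect
# only at {0}, so P ^ Q = 0; together they span C^2, so P v Q = I.
meet = np.zeros((2, 2))
join = np.eye(2)

rho = projector(np.array([1.0, 0.0]))        # a pure state
prob = lambda E: np.trace(rho @ E).real      # Pr(E) = tr(rho E)

# Kolmogorov additivity would demand Pr(P v Q) = Pr(P) + Pr(Q) - Pr(P ^ Q).
defect = prob(join) - (prob(P) + prob(Q) - prob(meet))
print(defect)  # -0.5: additivity fails for non-commuting projectors
```

This nonzero defect is exactly the kind of quantity the Möbius (non-additivity) operators are built to track, and it vanishes when the projectors commute.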
