
Tensor Analysis with n-Mode Generalized Difference Subspace

Posted by: Alessandro Lameiras Koerich
Published: 2019
Language: English

The increasing use of multiple sensors, which produce large amounts of multi-dimensional data, requires efficient representation and classification methods. In this paper, we present a new method for multi-dimensional data classification that relies on two premises: 1) multi-dimensional data are usually represented by tensors, since this brings benefits from multilinear algebra and established tensor factorization methods; and 2) multilinear data can be described by a subspace of a vector space. The subspace representation has been employed for pattern-set recognition, and its tensor counterpart is also available in the literature. However, traditional methods do not exploit the discriminative information of the tensors, which degrades classification accuracy. The generalized difference subspace (GDS) provides an enhanced subspace representation by reducing data redundancy and revealing discriminative structures, but it does not handle tensor data directly. We therefore propose a new projection, called the n-mode GDS, which efficiently handles tensor data. We also introduce the n-mode Fisher score as a class separability index and an improved metric based on the geodesic distance for tensor data similarity. Experimental results on gesture and action recognition show that the proposed method outperforms methods commonly used in the literature, without relying on pre-trained models or transfer learning.
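
For intuition about the construction the abstract describes, here is a minimal numpy sketch of a mode-n GDS, following the standard GDS recipe (sum the class projection matrices, eigendecompose, and drop the leading, class-common eigenvectors). The function names, the subspace dimension `k`, and `num_removed` are illustrative choices, not the paper's exact algorithm.

```python
import numpy as np

def mode_n_unfold(T, n):
    """Matricize tensor T along mode n (mode-n fibers become columns)."""
    return np.moveaxis(T, n, 0).reshape(T.shape[n], -1)

def class_subspace(tensors, n, k):
    """Orthonormal basis of one class's mode-n subspace, via SVD of the
    horizontally stacked mode-n unfoldings of its training tensors."""
    X = np.concatenate([mode_n_unfold(T, n) for T in tensors], axis=1)
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    return U[:, :k]

def mode_n_gds(classes, n, k, num_removed):
    """Sketch of a mode-n GDS: eigendecompose the sum of class projection
    matrices and discard the leading eigenvectors (the class-common part)."""
    G = sum(U @ U.T for U in (class_subspace(ts, n, k) for ts in classes))
    evals, evecs = np.linalg.eigh(G)      # eigenvalues in ascending order
    return evecs[:, :-num_removed]        # keep the difference directions

def project_mode_n(T, B, n):
    """n-mode product: project T onto the columns of B along mode n."""
    return np.moveaxis(np.tensordot(B.T, T, axes=(1, n)), 0, n)

# toy usage: two classes of third-order tensors, GDS along mode 0
rng = np.random.default_rng(0)
classes = [[rng.standard_normal((10, 8, 6)) for _ in range(5)] for _ in range(2)]
D = mode_n_gds(classes, n=0, k=3, num_removed=2)
projected = project_mode_n(classes[0][0], D, n=0)
print(projected.shape)   # mode-0 dimension reduced from 10 to D.shape[1]
```

Applying such a projection along each mode in turn would give the tensor-valued analogue; the paper's n-mode Fisher score and geodesic-distance metric are not sketched here.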


Read also

Efficient and interpretable spatial analysis is crucial in many fields such as geology, sports, and climate science. Tensor latent factor models can describe higher-order correlations for spatial data. However, they are computationally expensive to train and are sensitive to initialization, leading to spatially incoherent, uninterpretable results. We develop a novel Multiresolution Tensor Learning (MRTL) algorithm for efficiently learning interpretable spatial patterns. MRTL initializes the latent factors from an approximate full-rank tensor model for improved interpretability and progressively learns from a coarse resolution to the fine resolution to reduce computation. We also prove the theoretical convergence and computational complexity of MRTL. When applied to two real-world datasets, MRTL demonstrates a 4-5x speedup compared to a fixed-resolution approach while yielding accurate and interpretable latent factors.
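
The coarse-to-fine idea in this abstract can be illustrated with a toy numpy sketch (this is not MRTL itself: a single spatial weight map is fit by plain gradient descent, and all shapes, learning rates, and step counts are invented). The map is first learned on average-pooled inputs, then upsampled in a prediction-preserving way and refined at full resolution.

```python
import numpy as np

def pool(X, f):
    """Average-pool spatial inputs X of shape (N, H, W) by factor f."""
    N, H, W = X.shape
    return X.reshape(N, H // f, f, W // f, f).mean(axis=(2, 4))

def fit(X, y, W, lr, steps=500):
    """Plain gradient descent on the squared error of y ~ <W, X_i>."""
    for _ in range(steps):
        pred = (X * W).sum(axis=(1, 2))
        W -= lr * ((pred - y)[:, None, None] * X).mean(axis=0)
    return W

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 32, 32))
W_true = np.outer(np.sin(np.linspace(0, 3, 32)), np.cos(np.linspace(0, 3, 32)))
y = (X * W_true).sum(axis=(1, 2))

# coarse stage: fit an 8x8 map on 4x-pooled inputs (cheap, well-conditioned)
W = fit(pool(X, 4), y, np.zeros((8, 8)), lr=1.0)
# fine stage: upsample the coarse map and refine at full resolution
W = fit(X, y, np.kron(W, np.ones((4, 4))) / 16, lr=0.05)
print(np.abs(W - W_true).mean())
```

The division by 16 when upsampling keeps the fine-resolution predictions equal to the coarse ones at the hand-off, which is the point of a multiresolution warm start.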
We introduce the Subspace Power Method (SPM) for calculating the CP decomposition of low-rank even-order real symmetric tensors. This algorithm applies the tensor power method of Kolda-Mayo to a certain modified tensor, constructed from a matrix flattening of the original tensor, and then uses deflation steps. Numerical simulations indicate SPM is roughly one order of magnitude faster than state-of-the-art algorithms, while performing robustly for low-rank tensors subjected to additive noise. We obtain rigorous guarantees for SPM regarding convergence and global optima, for tensors of rank up to roughly the square root of the number of tensor entries, by drawing on results from classical algebraic geometry and dynamical systems. In a second contribution, we extend SPM to compute De Lathauwer's symmetric block term tensor decompositions. As an application of the latter decomposition, we provide a method of moments for generalized principal component analysis.
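
The two ingredients SPM builds on, symmetric tensor power iteration and deflation, can be sketched as follows. This is the basic greedy scheme on an order-4 orthogonally decomposable toy tensor, not SPM itself (which applies power iteration to a modified tensor constructed from a matrix flattening); the tensor and iteration counts below are made up.

```python
import numpy as np

def sym_outer4(v):
    """Symmetric rank-one order-4 tensor v (x) v (x) v (x) v."""
    return np.einsum('i,j,k,l->ijkl', v, v, v, v)

def power_iteration(T, iters=200, seed=0):
    """Kolda-Mayo-style symmetric power method for one eigenpair of an
    order-4 symmetric tensor: v <- T(I, v, v, v), normalized."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(T.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        w = np.einsum('ijkl,j,k,l->i', T, v, v, v)
        v = w / np.linalg.norm(w)
    lam = np.einsum('ijkl,i,j,k,l->', T, v, v, v, v)
    return lam, v

def greedy_cp(T, rank):
    """Greedy symmetric CP: power iteration followed by deflation steps."""
    comps = []
    for r in range(rank):
        lam, v = power_iteration(T, seed=r)
        comps.append((lam, v))
        T = T - lam * sym_outer4(v)   # deflate the recovered component
    return comps

# toy: orthogonally decomposable rank-2 symmetric tensor
a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 1.0]) / np.sqrt(2.0)
T = 3.0 * sym_outer4(a) + 1.5 * sym_outer4(b)
for lam, v in greedy_cp(T, 2):
    print(round(lam, 2), v.round(2))
```

For orthogonal components, deflation removes each recovered rank-one term exactly, so the loop recovers both components; SPM's contribution is making this style of scheme fast and provably correct well beyond the orthogonal case.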
Multi-view subspace learning (MSL) aims to find a low-dimensional subspace of data obtained from multiple views. Different from the single-view case, MSL should take both the common and the view-specific knowledge among different views into consideration. To enhance the robustness of the model, the complexity, non-consistency, and similarity of noise in multi-view data should be fully taken into account. Most current MSL methods assume only a simple Gaussian or Laplacian distribution for the noise, neglecting the complex noise configurations within each view and the noise correlations among different views of practical data. To address this issue, this work proposes an MSL method that encodes both multi-view-shared and single-view-specific noise knowledge in the data. Specifically, we model the data noise in each view as a separate Mixture of Gaussians (MoG), which can fit a wider range of complex noise types than the conventional Gaussian/Laplacian choices. Furthermore, we link all single-view noise models together by regularizing them with a common MoG component, encoding the noise knowledge shared among them. This regularization can be formulated as a concise KL-divergence term under a MAP framework, leading to a clean interpretation of our model and a simple EM-based solving strategy. Experimental results substantiate the superiority of our method.
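
The per-view MoG noise idea is easy to illustrate with scikit-learn. This is a sketch on invented data: each view's residual after a rank-5 SVD fit (a stand-in for the learned subspace) is modeled by its own three-component mixture; the paper's KL-divergence term tying the views to a shared MoG component is not shown.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
views = []
for _ in range(3):
    L = rng.standard_normal((500, 5)) @ rng.standard_normal((5, 20))  # low-rank signal
    noise = rng.normal(0.0, 0.1, L.shape)                             # dense small noise
    noise += (rng.random(L.shape) < 0.05) * rng.normal(0.0, 2.0, L.shape)  # sparse outliers
    views.append(L + noise)

for v, X in enumerate(views):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    X_hat = (U[:, :5] * s[:5]) @ Vt[:5]          # rank-5 reconstruction
    residual = (X - X_hat).reshape(-1, 1)
    # fit a 3-component MoG to this view's residual noise
    mog = GaussianMixture(n_components=3, random_state=0).fit(residual)
    print(f"view {v}: weights={mog.weights_.round(3)}")
```

A mixture with one wide and one narrow component captures exactly the dense-plus-outlier noise pattern that a single Gaussian or Laplacian assumption misses.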
Studies on acquiring appropriate continuous representations of discrete objects, such as graphs and knowledge base data, have been conducted by many researchers in the field of machine learning. In this study, we introduce the Nested SubSpace (NSS) arrangement, a comprehensive framework for representation learning. We show that existing embedding techniques can be regarded as special cases of the NSS arrangement. Based on the concept of the NSS arrangement, we implement a Disk-ANChor ARrangement (DANCAR), a representation learning method specialized to reproducing general graphs. Numerical experiments have shown that DANCAR successfully embeds WordNet in $\mathbb{R}^{20}$ with an F1 score of 0.993 on the reconstruction task. DANCAR is also suitable for visualization when studying the characteristics of graphs.
We consider the problem of recovering a low-rank tensor from its noisy observation. Previous work has shown a recovery guarantee with signal-to-noise ratio $O(n^{\lceil K/2 \rceil/2})$ for recovering a $K$th-order rank-one tensor of size $n \times \cdots \times n$ by recursive unfolding. In this paper, we first improve this bound to $O(n^{K/4})$ by a much simpler approach, but with a more careful analysis. Then we propose a new norm called the subspace norm, which is based on the Kronecker products of factors obtained by the proposed simple estimator. The imposed Kronecker structure allows us to show a nearly ideal $O(\sqrt{n}+\sqrt{H^{K-1}})$ bound, in which the parameter $H$ controls the blend from the non-convex estimator to mode-wise nuclear norm minimization. Furthermore, we empirically demonstrate that the subspace norm achieves nearly ideal denoising performance even with $H=O(1)$.
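
The mode-wise baseline that such estimators build on is easy to sketch: unfold the noisy tensor along each mode and project onto the top singular vectors, i.e. an HOSVD-style truncation. This is not the subspace-norm estimator itself; the rank-one toy, the noise level, and the chosen ranks below are invented.

```python
import numpy as np

def unfold(T, n):
    """Matricize T along mode n."""
    return np.moveaxis(T, n, 0).reshape(T.shape[n], -1)

def fold(M, n, shape):
    """Inverse of unfold: reshape M back into a tensor of the given shape."""
    full = M.reshape((shape[n],) + tuple(np.delete(shape, n)))
    return np.moveaxis(full, 0, n)

def truncated_svd_denoise(T, ranks):
    """HOSVD-style denoising: project each mode onto its top singular vectors."""
    X = T
    for n, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(X, n), full_matrices=False)
        Un = U[:, :r]
        X = fold(Un @ Un.T @ unfold(X, n), n, X.shape)
    return X

rng = np.random.default_rng(0)
# rank-one 20x20x20 signal plus additive Gaussian noise
u = rng.standard_normal(20)
signal = np.einsum('i,j,k->ijk', u, u, u)
noisy = signal + 0.5 * rng.standard_normal(signal.shape)
denoised = truncated_svd_denoise(noisy, ranks=(1, 1, 1))
print(np.linalg.norm(denoised - signal) / np.linalg.norm(signal))
```

The subspace norm of the abstract goes further by building a convex penalty from Kronecker products of the factors this kind of simple estimator produces.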
