
An Online Riemannian PCA for Stochastic Canonical Correlation Analysis

Published by: Dr. Rudrasis Chakraborty
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





We present an efficient stochastic algorithm (RSG+) for canonical correlation analysis (CCA) using a reparametrization of the projection matrices. We show how this reparametrization (into structured matrices), simple in hindsight, directly presents an opportunity to repurpose/adjust mature techniques for numerical optimization on Riemannian manifolds. Our developments nicely complement existing methods for this problem, which either require $O(d^3)$ time complexity per iteration with an $O(\frac{1}{\sqrt{t}})$ convergence rate (where $d$ is the dimensionality) or only extract the top $1$ component with an $O(\frac{1}{t})$ convergence rate. In contrast, our algorithm offers a strict improvement for this classical problem: it achieves $O(d^2k)$ runtime complexity per iteration for extracting the top $k$ canonical components with an $O(\frac{1}{t})$ convergence rate. While the paper primarily focuses on the formulation and technical analysis of its properties, our experiments show that the empirical behavior on common datasets is quite promising. We also explore a potential application in training fair models where the label of the protected attribute is missing or otherwise unavailable.
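To make the flavor of this approach concrete, the sketch below runs stochastic gradient ascent for the CCA objective $\mathrm{tr}(U^\top C_{xy} V)$ with both projection matrices constrained to Stiefel manifolds via a QR retraction. This is only an illustration of Riemannian stochastic optimization for CCA under the assumption of pre-whitened views; it is not the paper's RSG+ reparametrization, and the step size, batch size, and retraction are illustrative choices.

```python
import numpy as np

def qr_retraction(W):
    """Map an arbitrary d x k matrix back onto the Stiefel manifold via QR."""
    Q, R = np.linalg.qr(W)
    return Q * np.sign(np.sign(np.diag(R)) + 0.5)   # fix column signs for determinism

def stiefel_tangent(W, G):
    """Project a Euclidean gradient G onto the tangent space at W (W^T W = I)."""
    return G - W @ ((W.T @ G + G.T @ W) / 2)

def stochastic_riemannian_cca(X, Y, k, lr=0.1, batch=256, epochs=5, seed=0):
    """Toy stochastic CCA on pre-whitened views X (n x dx) and Y (n x dy).

    Ascends tr(U^T C_xy V) with U, V kept orthonormal; each mini-batch step
    forms a cross-covariance (O(b*dx*dy)) and two gradient products (O(d^2 k)).
    """
    rng = np.random.default_rng(seed)
    n, dx = X.shape
    dy = Y.shape[1]
    U = qr_retraction(rng.standard_normal((dx, k)))
    V = qr_retraction(rng.standard_normal((dy, k)))
    for _ in range(epochs):
        for idx in np.array_split(rng.permutation(n), max(1, n // batch)):
            Cxy = X[idx].T @ Y[idx] / len(idx)        # mini-batch cross-covariance
            gU, gV = Cxy @ V, Cxy.T @ U               # Euclidean gradients of tr(U^T Cxy V)
            U = qr_retraction(U + lr * stiefel_tangent(U, gU))
            V = qr_retraction(V + lr * stiefel_tangent(V, gV))
    return U, V
```

The QR retraction is used here only because it is cheap and simple; other retractions (polar, Cayley) would serve the same role of keeping the iterates orthonormal after each stochastic step.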


Read also

We propose novel first-order stochastic approximation algorithms for canonical correlation analysis (CCA). The algorithms presented are instances of inexact matrix stochastic gradient (MSG) and inexact matrix exponentiated gradient (MEG), and achieve $\epsilon$-suboptimality in the population objective in $\operatorname{poly}(\frac{1}{\epsilon})$ iterations. We also consider practical variants of the proposed algorithms and compare them with other methods for CCA both theoretically and empirically.
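As a rough, simplified sketch of the matrix-stochastic-gradient idea (not the authors' inexact MSG/MEG algorithms), the snippet below performs one MSG-style step: a gradient step using the mini-batch cross-covariance of pre-whitened views, followed by an SVD-based projection that clips the singular values to [0, 1] and caps their sum at k. The function names and the bisection-based projection are illustrative.

```python
import numpy as np

def project_singular_values(s, k, iters=50):
    """Euclidean projection of singular values onto {0 <= s_i <= 1, sum s_i <= k}."""
    if np.clip(s, 0.0, 1.0).sum() <= k:
        return np.clip(s, 0.0, 1.0)
    lo, hi = 0.0, float(s.max())
    for _ in range(iters):                       # bisection on the shift theta
        theta = (lo + hi) / 2.0
        if np.clip(s - theta, 0.0, 1.0).sum() > k:
            lo = theta
        else:
            hi = theta
    return np.clip(s - hi, 0.0, 1.0)

def msg_cca_step(M, xb, yb, k, lr=0.1):
    """One MSG-style step for CCA on pre-whitened mini-batches xb (b x dx), yb (b x dy)."""
    G = xb.T @ yb / len(xb)                      # stochastic gradient: cross-covariance
    U, s, Vt = np.linalg.svd(M + lr * G, full_matrices=False)
    return (U * project_singular_values(s, k)) @ Vt
```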
We present Deep Generalized Canonical Correlation Analysis (DGCCA) -- a method for learning nonlinear transformations of arbitrarily many views of data, such that the resulting transformations are maximally informative of each other. While methods for nonlinear two-view representation learning (Deep CCA; Andrew et al., 2013) and linear many-view representation learning (Generalized CCA; Horst, 1961) exist, DGCCA is the first CCA-style multiview representation learning technique that combines the flexibility of nonlinear (deep) representation learning with the statistical power of incorporating information from many independent sources, or views. We present the DGCCA formulation as well as an efficient stochastic optimization algorithm for solving it. We learn DGCCA representations on two distinct datasets for three downstream tasks: phonetic transcription from acoustic and articulatory measurements, and recommending hashtags and friends on a dataset of Twitter users. We find that DGCCA representations soundly beat existing methods at phonetic transcription and hashtag recommendation, and in general perform no worse than standard linear many-view techniques.
Benjamin Dutton (2020)
Canonical Correlation Analysis (CCA) is a statistical technique used to extract common information from multiple data sources or views. It has been used in various representation learning problems, such as dimensionality reduction, word embedding, and clustering. Recent work has given CCA probabilistic footing in a deep learning context and uses a variational lower bound for the data log likelihood to estimate model parameters. Alternatively, adversarial techniques have arisen in recent years as a powerful alternative to variational Bayesian methods in autoencoders. In this work, we explore straightforward adversarial alternatives to recent work in Deep Variational CCA (VCCA and VCCA-Private), which we call ACCA and ACCA-Private, and show how these approaches offer a stronger and more flexible way to match the approximate posteriors coming from encoders to much larger classes of priors than the VCCA and VCCA-Private models. This allows new priors for what constitutes a good representation, such as disentangling underlying factors of variation, to be more directly pursued. We offer further analysis of the multi-level disentangling properties of VCCA-Private and ACCA-Private through the use of a newly designed dataset we call Tangled MNIST. We also design a validation criterion for these models that is theoretically grounded, task-agnostic, and works well in practice. Lastly, we fill a minor research gap by deriving an additional variational lower bound for VCCA that allows the representation to use view-specific information from both input views.
Independent component analysis (ICA) has been a popular dimension reduction tool in statistical machine learning and signal processing. In this paper, we present a convergence analysis for an online tensorial ICA algorithm, by viewing the problem as a nonconvex stochastic approximation problem. For estimating one component, we provide a dynamics-based analysis to prove that our online tensorial ICA algorithm with a specific choice of stepsize achieves a sharp finite-sample error bound. In particular, under a mild assumption on the data-generating distribution and a scaling condition such that $d^4/T$ is sufficiently small up to a polylogarithmic factor of data dimension $d$ and sample size $T$, a sharp finite-sample error bound of $\tilde{O}(\sqrt{d/T})$ can be obtained.
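A minimal single-component version of an online tensorial ICA update might look as follows; it assumes the stream of samples is already whitened and uses a generic 1/sqrt(t) stepsize purely for illustration (the paper's guarantee depends on its own stepsize choice, which is not reproduced here).

```python
import numpy as np

def online_tensorial_ica_one_component(stream, d, lr=None, seed=0):
    """Estimate one independent component from a stream of whitened samples.

    The update direction (u.x)^3 x - 3u is a stochastic estimate of the
    4th-order cumulant (kurtosis) tensor applied to (u, u, u, .); subtracting
    3u removes the Gaussian part of the fourth moment. After each step the
    iterate is renormalized back onto the unit sphere.
    """
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)
    for t, x in enumerate(stream, start=1):
        eta = lr if lr is not None else 1.0 / np.sqrt(t)   # illustrative stepsize only
        u = u + eta * ((u @ x) ** 3 * x - 3.0 * u)
        u /= np.linalg.norm(u)
    return u
```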
Lei Gao, Lin Qi, Enqing Chen (2021)
In this paper, we propose the Discriminative Multiple Canonical Correlation Analysis (DMCCA) for multimodal information analysis and fusion. DMCCA is capable of extracting more discriminative characteristics from multimodal information representations. Specifically, it finds the projected directions which simultaneously maximize the within-class correlation and minimize the between-class correlation, leading to better utilization of the multimodal information. In the process, we analytically demonstrate that the optimal projected dimension for DMCCA can be quite accurately predicted, leading to both superior performance and a substantial reduction in computational cost. We further verify that Canonical Correlation Analysis (CCA), Multiple Canonical Correlation Analysis (MCCA) and Discriminative Canonical Correlation Analysis (DCCA) are special cases of DMCCA, thus establishing a unified framework for Canonical Correlation Analysis. We implement a prototype of DMCCA to demonstrate its performance in handwritten digit recognition and human emotion recognition. Extensive experiments show that DMCCA outperforms the traditional methods of serial fusion, CCA, MCCA and DCCA.
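For reference, the plain two-view CCA that DMCCA (like MCCA and DCCA) reduces to in its simplest special case can be computed in closed form from the whitened cross-covariance; a short batch sketch is below. It does not implement DMCCA's discriminative within-/between-class criterion, and the small ridge term `reg` is an illustrative addition for numerical stability.

```python
import numpy as np

def linear_cca(X, Y, k, reg=1e-6):
    """Classical two-view CCA via SVD of the whitened cross-covariance.

    Returns (Wx, Wy, rho): projection matrices whose columns are the top-k
    canonical directions, and the corresponding canonical correlations.
    """
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n
    Lx_inv = np.linalg.inv(np.linalg.cholesky(Cxx))   # whitening transforms
    Ly_inv = np.linalg.inv(np.linalg.cholesky(Cyy))
    U, rho, Vt = np.linalg.svd(Lx_inv @ Cxy @ Ly_inv.T)
    return Lx_inv.T @ U[:, :k], Ly_inv.T @ Vt[:k].T, rho[:k]
```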
