
Compressed Deep Networks: Goodbye SVD, Hello Robust Low-Rank Approximation

Added by Murad Tukan
Publication date: 2020
Language: English





A common technique for compressing a neural network is to compute the $k$-rank $\ell_2$ approximation $A_{k,2}$ of the matrix $A\in\mathbb{R}^{n\times d}$ that corresponds to a fully connected layer (or embedding layer). Here, $d$ is the number of neurons in the layer, $n$ is the number of neurons in the next one, and $A_{k,2}$ can be stored in $O((n+d)k)$ memory instead of $O(nd)$. This $\ell_2$-approximation minimizes the sum, over every entry of the matrix $A - A_{k,2}$, of that entry raised to the power $p=2$, among all matrices $A_{k,2}\in\mathbb{R}^{n\times d}$ whose rank is $k$. While it can be computed efficiently via SVD, the $\ell_2$-approximation is known to be very sensitive to outliers (far-away rows). Hence, machine learning often resorts to methods built on the $\ell_1$-norm, e.g., Lasso regression, $\ell_1$-regularization, and the $\ell_1$-SVM. This paper suggests replacing the $k$-rank $\ell_2$ approximation with an $\ell_p$ approximation, for $p\in[1,2]$. We then provide practical and provable approximation algorithms to compute it for any $p\geq 1$, based on modern techniques in computational geometry. Extensive experimental results on the GLUE benchmark for compressing BERT, DistilBERT, XLNet, and RoBERTa confirm this theoretical advantage. For example, our approach achieves $28\%$ compression of RoBERTa's embedding layer with only a $0.63\%$ additive drop in accuracy (without fine-tuning), on average over all GLUE tasks, compared to an $11\%$ drop using the existing $\ell_2$-approximation. Open code is provided for reproducing and extending our results.
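To make the contrast concrete, the sketch below compares the classical SVD-based $\ell_2$ compression with a simple iteratively reweighted least squares (IRLS) heuristic for the $\ell_p$ objective. This is a minimal NumPy illustration, not the paper's provable algorithm: the reweighting rule, iteration count, and clipping constant `eps` are our own assumptions.

```python
import numpy as np

def svd_rank_k(A, k):
    # Classic Eckart-Young solution: the truncated SVD is the l2-optimal
    # rank-k approximation. Storing the two factors costs O((n+d)k) memory.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

def lp_rank_k(A, k, p=1.0, iters=30, eps=1e-6):
    # IRLS heuristic (not the paper's algorithm): each row is reweighted by
    # its current residual raised to (p-2)/2, so for p < 2 outlier rows lose
    # influence. The row-weighted l2 subproblem is still solvable by SVD,
    # since B minimizes ||diag(w)(A - B)||_F over rank-k B exactly when
    # diag(w)B is the rank-k SVD truncation of diag(w)A.
    Ak = svd_rank_k(A, k)                          # warm start from SVD
    for _ in range(iters):
        r = np.linalg.norm(A - Ak, axis=1)         # per-row residual norms
        w = np.power(np.maximum(r, eps), (p - 2.0) / 2.0)
        Ak = svd_rank_k(A * w[:, None], k) / w[:, None]
    return Ak

# Toy usage: a layer matrix with a few injected outlier rows.
A = np.random.default_rng(0).standard_normal((768, 512))
A[::50] *= 20.0
A_l1 = lp_rank_k(A, k=64, p=1.0)                   # robust rank-64 compression
```

For $p=2$ the weights are all one and the heuristic reduces to plain truncated SVD, which is why the sketch recovers the classical baseline as a special case.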




Related research

What learning algorithms can be run directly on compressively-sensed data? In this work, we consider the question of accurately and efficiently computing low-rank matrix or tensor factorizations given data compressed via random projections. We examine the approach of first performing factorization in the compressed domain, and then reconstructing the original high-dimensional factors from the recovered (compressed) factors. In both the matrix and tensor settings, we establish conditions under which this natural approach will provably recover the original factors. While it is well-known that random projections preserve a number of geometric properties of a dataset, our work can be viewed as showing that they can also preserve certain solutions of non-convex, NP-Hard problems like non-negative matrix factorization. We support these theoretical results with experiments on synthetic data and demonstrate the practical applicability of compressed factorization on real-world gene expression and EEG time series datasets.
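A toy NumPy rendering of this pipeline, under strong simplifying assumptions: the right factor is sparse, compression uses a Gaussian projection, and, to sidestep factor identifiability (which the paper handles with identifiable factorizations such as NMF), the true left factor is reused when fitting the compressed right factor. The ISTA solver, dimensions, and sparsity level are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model: X ~= W @ H with a nonnegative left factor and sparse
# rows in the right factor H (k x d).
n, d, k, m = 200, 1000, 5, 120
W = np.abs(rng.standard_normal((n, k)))
H = rng.standard_normal((k, d)) * (rng.random((k, d)) < 0.05)
X = W @ H

# Compress columns with a Gaussian random projection Omega (d x m), m << d.
Omega = rng.standard_normal((d, m)) / np.sqrt(m)
Y = X @ Omega

# Factorize in the compressed domain. To keep the sketch short, we reuse the
# true W (the paper recovers it from Y itself) and fit only the compressed
# right factor by least squares: G ~= H @ Omega.
G, *_ = np.linalg.lstsq(W, Y, rcond=None)

def ista(g, Omega, lam=1e-3, iters=500):
    # Recover a sparse h from g ~= h @ Omega by iterative soft thresholding.
    h = np.zeros(Omega.shape[0])
    step = 1.0 / np.linalg.norm(Omega, 2) ** 2    # 1 / Lipschitz constant
    for _ in range(iters):
        h = h - step * (Omega @ (h @ Omega - g))  # gradient of the l2 term
        h = np.sign(h) * np.maximum(np.abs(h) - step * lam, 0.0)
    return h

# Reconstruct the high-dimensional right factor row by row.
H_rec = np.vstack([ista(G[i], Omega) for i in range(k)])
```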
Osama Elnahas, Zhi Quan (2020)
Clock synchronization has become a key design objective in wireless networks for its essential importance in many applications. However, as the wireless link is prone to random network delays due to unreliable channel conditions, it is in general difficult to achieve accurate clock synchronization among wireless nodes. This letter proposes robust clock synchronization algorithms based on low rank matrix approximation, which are able to correct timestamps in the presence of random network delays. We design a low rank approximation based maximum likelihood estimator (MLE) to jointly estimate the clock offset and clock skew under the two-way message exchange mechanism assuming Gaussian delay distribution. By formulating the timestamp correction problem into a low rank approximation problem, we can solve the problem in the singular value decomposition (SVD) domain and also via nuclear norm minimization. Numerical results show that the proposed schemes can correct noisy timestamps and thus achieve more robust synchronization performance than the MLE.
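As a rough illustration of the low-rank viewpoint only (not the letter's MLE or its nuclear norm program), the sketch below stacks repeated two-way exchange rounds into a matrix that has rank at most 2 in the delay-free case, denoises it by SVD truncation, and reads off offset and skew; the timing model and noise levels are invented for the example.

```python
import numpy as np

def lowrank_denoise(T, r):
    # Best rank-r approximation via truncated SVD; random delays are mostly
    # pushed into the discarded singular directions.
    U, s, Vt = np.linalg.svd(T, full_matrices=False)
    return U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]

# Hypothetical model: a node stamps t = theta + (1 + skew) * tau, plus a
# random (exponential) network delay, over repeated exchange rounds.
rng = np.random.default_rng(1)
tau = np.linspace(0.0, 10.0, 50)              # reference exchange times (s)
theta, skew = 2.5e-3, 40e-6                   # 2.5 ms offset, 40 ppm skew
T = np.vstack([theta + (1.0 + skew) * tau + 1e-4 * rng.exponential(size=tau.size)
               for _ in range(20)])           # 20 rounds; delay-free rank <= 2

T_clean = lowrank_denoise(T, r=2)
# Fit offset and skew on the corrected timestamps by least squares.
A = np.column_stack([np.ones_like(tau), tau])
(theta_hat, slope_hat), *_ = np.linalg.lstsq(A, T_clean.mean(axis=0), rcond=None)
skew_hat = slope_hat - 1.0                    # the slope estimates 1 + skew
```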
Modern deep neural networks are highly over-parameterized compared to the data on which they are trained, yet they often generalize remarkably well. A flurry of recent work has asked: why do deep networks not overfit to their training data? We investigate the hypothesis that deeper nets are implicitly biased to find lower rank solutions and that these are the solutions that generalize well. We prove for the asymptotic case that the percent volume of low effective-rank solutions increases monotonically as linear neural networks are made deeper. We then show empirically that our claim holds true on finite width models. We further empirically find that a similar result holds for non-linear networks: deeper non-linear networks learn a feature space whose kernel has a lower rank. We further demonstrate how linear over-parameterization of deep non-linear models can be used to induce low-rank bias, improving generalization performance without changing the effective model capacity. We evaluate on various model architectures and demonstrate that linearly over-parameterized models outperform existing baselines on image classification tasks, including ImageNet.
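One common way to make "effective rank" concrete (an assumption here; the paper may use a different measure) is the entropy-based definition of Roy and Vetterli, sketched below; applied to a layer's weight matrix or a feature kernel, lower values indicate a more concentrated spectrum.

```python
import numpy as np

def effective_rank(W, eps=1e-12):
    # Entropy-based effective rank (Roy & Vetterli, 2007): the exponential of
    # the Shannon entropy of the normalized singular value distribution. It
    # equals k for k equal nonzero singular values and shrinks as the
    # spectrum concentrates.
    s = np.linalg.svd(W, compute_uv=False)
    p = s / s.sum()
    p = p[p > eps]                # drop negligible mass to avoid log(0)
    return float(np.exp(-(p * np.log(p)).sum()))

# A random matrix scores near its full rank; by the paper's hypothesis, the
# end-to-end map of a deeper trained linear network tends to score far lower.
print(effective_rank(np.random.default_rng(0).standard_normal((256, 256))))
```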
Huyan Huang, Yipeng Liu, Ce Zhu (2019)
Low-rank tensor completion recovers missing entries based on different tensor decompositions. Due to its outstanding performance in exploiting higher-order data structure, the low-rank tensor ring has been applied to tensor completion. To further deal with its sensitivity to a sparse component, as in tensor principal component analysis, we propose robust tensor ring completion (RTRC), which separates the latent low-rank tensor component from the sparse component using a limited number of measurements. The low-rank tensor component is constrained by the weighted sum of nuclear norms of its balanced unfoldings, while the sparse component is regularized by its $\ell_1$ norm. We analyze the RTRC model and give an exact recovery guarantee. The alternating direction method of multipliers is used to divide the problem into several sub-problems with fast solutions. In numerical experiments, we verify the recovery condition of the proposed method on synthetic data, and show that the proposed method outperforms the state of the art in both accuracy and computational complexity on a number of real-world tasks, i.e., light-field image recovery, shadow removal in face images, and background extraction in color video.
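To convey the low-rank-plus-sparse split, here is a deliberately simplified NumPy sketch on a single matrix unfolding. RTRC instead penalizes a weighted sum of nuclear norms over balanced tensor-ring unfoldings and runs a full ADMM, so the update rule and parameters below are illustrative assumptions only.

```python
import numpy as np

def svt(X, tau):
    # Singular value thresholding: proximal operator of the nuclear norm.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(X, tau):
    # Soft thresholding: proximal operator of the l1 norm.
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def robust_complete(X, mask, lam=0.05, mu=1.0, iters=100):
    # Alternate between a low-rank estimate L and a sparse estimate S of the
    # observed entries (mask == 1); missing entries are imputed from L.
    L = np.zeros_like(X)
    S = np.zeros_like(X)
    for _ in range(iters):
        L = svt(mask * (X - S) + (1.0 - mask) * L, 1.0 / mu)
        S = mask * soft(X - L, lam / mu)
    return L, S
```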
