
Sketching with Kerdock's crayons: Fast sparsifying transforms for arbitrary linear maps

Posted by Dustin Mixon
Publication date: 2021
Research field: Information engineering
Paper language: English





Given an arbitrary matrix $A \in \mathbb{R}^{n \times n}$, we consider the fundamental problem of computing $Ax$ for any $x \in \mathbb{R}^n$ such that $Ax$ is $s$-sparse. While fast algorithms exist for particular choices of $A$, such as the discrete Fourier transform, there is currently no $o(n^2)$ algorithm that treats the unstructured case. In this paper, we devise a randomized approach to tackle the unstructured case. Our method relies on a representation of $A$ in terms of certain real-valued mutually unbiased bases derived from Kerdock sets. In the preprocessing phase of our algorithm, we compute this representation of $A$ in $O(n^3 \log n)$ operations. Next, given any unit vector $x \in \mathbb{R}^n$ such that $Ax$ is $s$-sparse, our randomized fast transform uses this representation of $A$ to compute the entrywise $\epsilon$-hard threshold of $Ax$ with high probability in only $O(sn + \epsilon^{-2}\|A\|_{2\to\infty}^2 n \log n)$ operations. In addition to a performance guarantee, we provide numerical results that demonstrate the plausibility of real-world implementation of our algorithm.
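As an illustration of the output the abstract describes, the following minimal numpy sketch applies an entrywise $\epsilon$-hard threshold to a dense product $Ax$ under the paper's sparsity model. The planted construction of $x$ and the name hard_threshold are ours, and the dense matrix-vector product is only the $O(n^2)$ baseline; this sketch does not implement the paper's fast transform.

```python
import numpy as np

def hard_threshold(y, eps):
    """Entrywise epsilon-hard threshold: zero every entry with |y_i| <= eps."""
    out = y.copy()
    out[np.abs(out) <= eps] = 0.0
    return out

# Toy instance of the sparsity model: A is unstructured, but Ax is s-sparse.
rng = np.random.default_rng(0)
n, s = 256, 5
A = rng.standard_normal((n, n))
y_target = np.zeros(n)
y_target[rng.choice(n, s, replace=False)] = 10.0 * rng.standard_normal(s)
x = np.linalg.solve(A, y_target)   # plant x so that Ax is exactly s-sparse
x /= np.linalg.norm(x)             # the abstract's guarantee assumes a unit vector

y = A @ x                          # O(n^2) baseline the paper's transform improves on
eps = 1e-3 * np.abs(y).max()
print(np.nonzero(hard_threshold(y, eps))[0])   # recovers the s planted positions
```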




Read also

Graph representation learning has many real-world applications, from super-resolution imaging and 3D computer vision to drug repurposing, protein classification, and social network analysis. An adequate representation of graph data is vital to the learning performance of a statistical or machine learning model for graph-structured data. In this paper, we propose a novel multiscale representation system for graph data, called decimated framelets, which form a localized tight frame on the graph. The decimated framelet system allows storage of the graph data representation on a coarse-grained chain and processes the graph data at multiple scales where, at each scale, the data is stored on a subgraph. Based on this, we then establish decimated G-framelet transforms for the decomposition and reconstruction of the graph data at multiple resolutions via a constructive data-driven filter bank. The graph framelets are built on a chain-based orthonormal basis that supports fast graph Fourier transforms. From this, we give a fast algorithm for the decimated G-framelet transforms, or FGT, that has linear computational complexity O(N) for a graph of size N. The theory of decimated framelets and FGT is verified with numerical examples for random graphs. The effectiveness is demonstrated by real-world applications, including multiresolution analysis for traffic networks and graph neural networks for graph classification tasks.
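For readers unfamiliar with tight frames on graphs, the sketch below builds a generic two-channel undecimated tight framelet system in the graph Fourier basis, with filters chosen so that g0^2 + g1^2 = 1 and synthesis reconstructs the signal exactly. This is only the standard spectral construction, not the paper's decimated, chain-based system, and all names in it are ours.

```python
import numpy as np

# Generic undecimated two-channel tight graph frame via the graph Fourier
# transform (the paper's decimated, chain-based framelets are more involved).
rng = np.random.default_rng(1)
N = 30
W = (rng.random((N, N)) < 0.15).astype(float)
W = np.triu(W, 1); W = W + W.T                 # random undirected graph
L = np.diag(W.sum(1)) - W                      # combinatorial Laplacian
lam, U = np.linalg.eigh(L)                     # graph Fourier basis

lmax = lam.max()
g0 = np.cos(0.5 * np.pi * lam / lmax)          # low-pass spectral filter
g1 = np.sin(0.5 * np.pi * lam / lmax)          # high-pass filter; g0^2 + g1^2 = 1

f = rng.standard_normal(N)                     # a graph signal
c0 = U @ (g0 * (U.T @ f))                      # low-pass framelet coefficients
c1 = U @ (g1 * (U.T @ f))                      # high-pass framelet coefficients
f_rec = U @ (g0 * (U.T @ c0)) + U @ (g1 * (U.T @ c1))  # tight-frame synthesis
print(np.allclose(f, f_rec))                   # exact reconstruction -> True
```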
Data is said to follow the transform (or analysis) sparsity model if it becomes sparse when acted on by a linear operator called a sparsifying transform. Several algorithms have been designed to learn such a transform directly from data, and data-adaptive sparsifying transforms have demonstrated excellent performance in signal restoration tasks. Sparsifying transforms are typically learned using small sub-regions of data called patches, but these algorithms often ignore redundant information shared between neighboring patches. We show that many existing transform and analysis sparse representations can be viewed as filter banks, thus linking the local properties of the patch-based model to the global properties of a convolutional model. We propose a new transform learning framework in which the sparsifying transform is an undecimated perfect-reconstruction filter bank. Unlike previous transform learning algorithms, the filter length can be chosen independently of the number of filter bank channels. Numerical results indicate that filter bank sparsifying transforms outperform existing patch-based transform learning for image denoising while benefiting from additional flexibility in the design process.
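The patch-to-filter-bank correspondence the abstract mentions can be checked numerically: applying one row of a patch transform to every vectorized patch coincides with correlating the image against that row reshaped as a 2-D filter. The sketch below verifies this with a random stand-in transform T (our name); it does not implement the paper's learning algorithm.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

# Patch view vs. filter-bank view: row k of a p^2 x p^2 transform applied to
# every vectorized p x p patch equals cross-correlation with a p x p filter.
rng = np.random.default_rng(2)
p, H, Wd = 4, 32, 32
img = rng.standard_normal((H, Wd))
T = rng.standard_normal((p * p, p * p))        # stand-in for a learned transform

patches = sliding_window_view(img, (p, p)).reshape(-1, p * p)  # all patches
coeffs_patch = patches @ T.T                   # transform applied patchwise

k = 3                                          # any channel of the "filter bank"
filt = T[k].reshape(p, p)
coeffs_conv = np.array([
    np.sum(win * filt)
    for win in sliding_window_view(img, (p, p)).reshape(-1, p, p)
])
print(np.allclose(coeffs_patch[:, k], coeffs_conv))  # True: same coefficients
```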
Projection-based iterative methods for solving large over-determined linear systems are well known for their simplicity and computational efficiency. It is also known that the correct choice of a sketching procedure (i.e., preprocessing steps that reduce the dimension of each iteration) can improve the performance of iterative methods in multiple ways, such as speeding up the convergence of the method by fighting inner correlations of the system, or reducing the variance incurred by the presence of noise. In the current work, we show that sketching can also help us to obtain better theoretical guarantees for projection-based methods. Specifically, we use good properties of Gaussian sketching to prove an accelerated convergence rate of the sketched relaxation (also known as Motzkin's) method. The new estimates hold for linear systems of arbitrary structure. We also provide numerical experiments in support of our theoretical analysis of the sketched relaxation method.
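A minimal sketch of the idea, under our own simplifying assumptions: each iteration draws a fresh Gaussian sketch of the consistent system (A, b) and performs a relaxation (Motzkin) step on the most-violated sketched row. Parameter names and iteration counts are ours, and the paper's exact scheme may differ.

```python
import numpy as np

def sketched_motzkin(A, b, num_iters=2000, sketch_dim=10, seed=0):
    """Motzkin-style relaxation on a fresh Gaussian sketch of (A, b) at every
    iteration; an illustrative variant, not the paper's exact scheme."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(num_iters):
        S = rng.standard_normal((sketch_dim, m)) / np.sqrt(sketch_dim)
        As, bs = S @ A, S @ b              # sketched system
        r = As @ x - bs
        i = np.argmax(np.abs(r))           # Motzkin rule: most-violated sketched row
        a = As[i]
        x -= (r[i] / (a @ a)) * a          # project x onto that hyperplane
    return x

rng = np.random.default_rng(3)
m, n = 300, 50
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
b = A @ x_true                             # consistent over-determined system
print(np.linalg.norm(sketched_motzkin(A, b) - x_true))  # small recovery error
```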
We describe a numerical scheme for evaluating the posterior moments of Bayesian linear regression models with partial pooling of the coefficients. The principal analytical tool of the evaluation is a change of basis from coefficient space to the space of singular vectors of the matrix of predictors. After this change of basis and an analytical integration, we reduce the problem of finding moments of a density over k + m dimensions to finding moments of an m-dimensional density, where k is the number of coefficients and k + m is the dimension of the posterior. Moments can then be computed using, for example, MCMC, the trapezoid rule, or adaptive Gaussian quadrature. An evaluation of the SVD of the matrix of predictors is the dominant computational cost and is performed once during the precomputation stage. We demonstrate numerical results of the algorithm. The scheme described in this paper generalizes naturally to multilevel and multi-group hierarchical regression models where normal-normal parameters appear.
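The SVD change of basis is easiest to see in the fully conjugate special case sketched below, where it diagonalizes the posterior moments outright; the paper's partially pooled setting instead leaves an m-dimensional density to integrate. The prior and noise scales tau and sigma are our illustrative assumptions.

```python
import numpy as np

# Conjugate special case of the SVD change of basis: with prior
# beta ~ N(0, tau^2 I) and likelihood y ~ N(X beta, sigma^2 I), rotating into
# the singular-vector basis of X makes the posterior moments diagonal.
rng = np.random.default_rng(4)
n, k = 200, 10
sigma, tau = 1.0, 2.0
X = rng.standard_normal((n, k))
y = X @ rng.standard_normal(k) + sigma * rng.standard_normal(n)

U, d, Vt = np.linalg.svd(X, full_matrices=False)   # one-time precomputation
shrink = d / (d**2 + sigma**2 / tau**2)            # diagonal in the new basis
post_mean = Vt.T @ (shrink * (U.T @ y))
post_cov = Vt.T @ np.diag(sigma**2 / (d**2 + sigma**2 / tau**2)) @ Vt

# Agreement with the direct normal-normal formulas:
P = np.linalg.inv(X.T @ X / sigma**2 + np.eye(k) / tau**2)
print(np.allclose(post_mean, P @ X.T @ y / sigma**2), np.allclose(post_cov, P))
```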
Yu Tong, Dong An, Nathan Wiebe (2020)
Preconditioning is the most widely used and effective way of treating ill-conditioned linear systems in the context of classical iterative linear system solvers. We introduce a quantum primitive called fast inversion, which can be used as a preconditioner for solving quantum linear systems. The key idea of fast inversion is to directly block-encode a matrix inverse through a quantum circuit implementing the inversion of eigenvalues via classical arithmetic. We demonstrate the application of preconditioned linear system solvers to computing single-particle Green's functions of quantum many-body systems, which are widely used in quantum physics, chemistry, and materials science. We analyze the complexities in three scenarios: the Hubbard model, the quantum many-body Hamiltonian in the planewave-dual basis, and the Schwinger model. We also provide a method for performing Green's function calculations in second quantization within a fixed-particle manifold, and note that this approach may be valuable for simulation more broadly. Besides solving linear systems, fast inversion also allows us to develop fast algorithms for computing matrix functions, such as the efficient preparation of Gibbs states. We introduce two efficient approaches for such a task, based on the contour integral formulation and the inverse transform, respectively.
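A classical analogue may help build intuition: when the system matrix splits as H = A + B with A cheap to invert (here, diagonal), supplying M = A^{-1} to a preconditioned iterative solver collapses the iteration count. The SPD construction and the hand-rolled conjugate gradient routine below are ours and only mimic the role fast inversion plays in the quantum setting.

```python
import numpy as np

# Classical analogue of "fast inversion" as a preconditioner: the SPD matrix
# splits as H = A + B with A diagonal, so applying A^{-1} is cheap.
rng = np.random.default_rng(5)
n = 500
a = np.linspace(1.0, 1e4, n)                  # ill-conditioned "fast-invertible" part
B = rng.standard_normal((n, n))
H = np.diag(a) + (B @ B.T) / n                # SPD, dominated by the diagonal part
b = rng.standard_normal(n)

def pcg(H, b, apply_Minv, tol=1e-10, maxiter=2000):
    """Preconditioned conjugate gradients; returns solution and iteration count."""
    x = np.zeros_like(b)
    r = b - H @ x
    z = apply_Minv(r)
    p = z.copy()
    for it in range(1, maxiter + 1):
        Hp = H @ p
        alpha = (r @ z) / (p @ Hp)
        x += alpha * p
        r_new = r - alpha * Hp
        if np.linalg.norm(r_new) < tol * np.linalg.norm(b):
            return x, it
        z_new = apply_Minv(r_new)
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    return x, maxiter

_, it_plain = pcg(H, b, lambda r: r)          # no preconditioner
_, it_fast = pcg(H, b, lambda r: r / a)       # "fast inversion" preconditioner
print(it_plain, it_fast)                      # far fewer iterations with M = A^{-1}
```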