Message passing algorithms have proved surprisingly successful in solving hard constraint satisfaction problems on sparse random graphs. In such applications, variables are fixed sequentially so as to satisfy the constraints: message passing is run after each step, and its outcome provides a heuristic for making the choice at the next step. This approach has been referred to as `decimation', with reference to analogous procedures in statistical physics. The behavior of decimation procedures is poorly understood. Here we consider a simple randomized decimation algorithm based on belief propagation (BP), and analyze its behavior on random $k$-satisfiability formulae. In particular, we propose a tree model for its analysis and we conjecture that it provides asymptotically exact predictions in the limit of large instances. This conjecture is confirmed by numerical simulations.
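As a concrete illustration, here is a minimal sketch of randomized BP-guided decimation for $k$-SAT: run BP for a fixed number of sweeps, pick a uniformly random free variable, set it by sampling from its BP marginal, and simplify the formula. This is a toy sketch, not the exact procedure analyzed in the paper; sweep counts and the demo instance are illustrative choices.

```python
import random
from collections import defaultdict

def bp_marginals(clauses, n_vars, iters=50):
    """Belief propagation on the SAT factor graph.
    clauses: tuples of signed literals (+v / -v, variables 1..n_vars).
    Returns, for each variable, the BP estimate of P(x_v = True)."""
    # m[(a, lit)]: prob. that all literals of clause a other than lit
    # take values violating a (so lit "receives a warning" from a)
    m = {(a, lit): 0.5 for a, cl in enumerate(clauses) for lit in cl}
    occ = defaultdict(list)                 # variable -> [(clause index, literal)]
    for a, cl in enumerate(clauses):
        for lit in cl:
            occ[abs(lit)].append((a, lit))

    def p_unsat(lit, a):
        # cavity prob. that var |lit| takes the value violating clause a
        pu = ps = 1.0
        for b, lit_b in occ[abs(lit)]:
            if b == a:
                continue
            if lit_b == lit:                # same sign: violated by that value
                pu *= 1.0 - m[(b, lit_b)]
            else:                           # opposite sign: satisfied by it
                ps *= 1.0 - m[(b, lit_b)]
        return pu / (pu + ps) if pu + ps > 0 else 0.5

    for _ in range(iters):
        for a, cl in enumerate(clauses):
            for lit in cl:
                prod = 1.0
                for lit2 in cl:
                    if lit2 != lit:
                        prod *= p_unsat(lit2, a)
                m[(a, lit)] = prod

    marg = {}
    for v in range(1, n_vars + 1):
        w_true = w_false = 1.0
        for b, lit_b in occ[v]:
            if lit_b < 0:                   # x_v = True violates literal -v
                w_true *= 1.0 - m[(b, lit_b)]
            else:                           # x_v = False violates literal +v
                w_false *= 1.0 - m[(b, lit_b)]
        z = w_true + w_false
        marg[v] = w_true / z if z > 0 else 0.5
    return marg

def bp_decimate(clauses, n_vars, rng=random):
    """Randomized decimation: fix a uniformly random free variable by
    sampling from its BP marginal, simplify, repeat."""
    assignment, free = {}, set(range(1, n_vars + 1))
    while free:
        marg = bp_marginals(clauses, n_vars)
        v = rng.choice(sorted(free))
        val = rng.random() < marg[v]
        assignment[v] = val
        free.discard(v)
        sat_lit, simplified = (v if val else -v), []
        for cl in clauses:
            if sat_lit in cl:
                continue                    # clause satisfied: drop it
            cl = tuple(l for l in cl if abs(l) != v)
            if not cl:
                return None                 # empty clause: contradiction
            simplified.append(cl)
        clauses = simplified
    return assignment

# tiny demo: random 3-SAT at clause density 3.0 (below the SAT threshold)
n = 50
clauses = [tuple(v if random.random() < 0.5 else -v
                 for v in random.sample(range(1, n + 1), 3)) for _ in range(150)]
print(bp_decimate(clauses, n) is not None)
```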
Random graph generation is an important tool for studying large complex networks. Despite an abundance of random graph models, constructing models with application-driven constraints is poorly understood. In order to advance the state of the art in this area, we focus on random graphs without short cycles as a stylized family of graphs, and propose the RandGraph algorithm for generating them uniformly at random. For any constant $k$, when $m = O(n^{1+1/[2k(k+3)]})$, RandGraph generates an asymptotically uniform random graph with $n$ vertices, $m$ edges, and no cycle of length at most $k$, using $O(n^2 m)$ operations. We also characterize the approximation error for finite values of $n$. To the best of our knowledge, this is the first polynomial-time algorithm for the problem. RandGraph works by sequentially adding $m$ edges to an empty graph on $n$ vertices. Recently, such sequential algorithms have proved successful for random sampling problems. Our main contributions to this line of research include a new approach for sequentially approximating edge-specific probabilities at each step of the algorithm, and a new method for analyzing such algorithms.
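A naive sketch of the sequential scheme follows: edges are added one at a time, each chosen uniformly among pairs that keep the girth above $k$ (adding $(u,v)$ closes a cycle of length at most $k$ exactly when the current distance between $u$ and $v$ is at most $k-1$). The actual RandGraph algorithm replaces this uniform choice with carefully approximated edge-specific probabilities; the function names and demo sizes here are illustrative.

```python
import random
from collections import deque

def bfs_dist_at_most(adj, src, dst, limit):
    """Is dist(src, dst) <= limit in the current graph? (bounded BFS)"""
    dist, q = {src: 0}, deque([src])
    while q:
        x = q.popleft()
        if dist[x] == limit:
            continue
        for y in adj[x]:
            if y not in dist:
                if y == dst:
                    return True
                dist[y] = dist[x] + 1
                q.append(y)
    return False

def short_cycle_free_graph(n, m, k, rng=random):
    """Sequentially add m edges, keeping all cycles longer than k.
    (m must be small enough that valid candidate edges keep existing.)"""
    adj, edges = [set() for _ in range(n)], set()
    while len(edges) < m:
        u, v = rng.sample(range(n), 2)
        if (min(u, v), max(u, v)) in edges:
            continue
        # edge (u,v) closes a cycle of length <= k iff dist(u,v) <= k-1
        if bfs_dist_at_most(adj, u, v, k - 1):
            continue
        edges.add((min(u, v), max(u, v)))
        adj[u].add(v); adj[v].add(u)
    return sorted(edges)

print(len(short_cycle_free_graph(100, 150, 4)))   # 150 edges, girth > 4
```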
Given a large data matrix $A \in \mathbb{R}^{n\times n}$, we consider the problem of determining whether its entries are i.i.d. with some known marginal distribution $A_{ij} \sim P_0$, or instead $A$ contains a principal submatrix $A_{\mathsf{Q},\mathsf{Q}}$ whose entries have marginal distribution $A_{ij} \sim P_1 \neq P_0$. As a special case, the hidden (or planted) clique problem requires finding a planted clique in an otherwise uniformly random graph. Assuming unbounded computational resources, this hypothesis testing problem is statistically solvable provided $|\mathsf{Q}| \ge C \log n$ for a suitable constant $C$. However, despite substantial effort, no polynomial-time algorithm is known that succeeds with high probability when $|\mathsf{Q}| = o(\sqrt{n})$. Recently, Meka and Wigderson \cite{meka2013association} proposed a method to establish lower bounds within the Sum of Squares (SOS) semidefinite hierarchy. Here we consider the degree-$4$ SOS relaxation, and study the construction of \cite{meka2013association} to prove that SOS fails unless $k \ge C\, n^{1/3}/\log n$. An argument presented by Barak implies that this lower bound cannot be substantially improved unless the witness construction is changed in the proof. Our proof uses the moment method to bound the spectrum of a certain random association scheme, i.e., a symmetric random matrix whose rows and columns are indexed by the edges of an Erdős–Rényi random graph.
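The moment method behind the proof can be illustrated numerically. The toy below builds a symmetric random matrix indexed by the edges of a $G(n,1/2)$ graph, with an entry rule that merely mimics the flavor of such an association scheme (it is emphatically not the Meka–Wigderson witness), and compares its spectral norm with the trace-moment estimate $(\mathrm{tr}\, M^{2q})^{1/(2q)} \ge \|M\|$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40
A = np.triu(rng.random((n, n)) < 0.5, 1)
A = A | A.T                                      # G(n, 1/2) adjacency
edges = [(i, j) for i in range(n) for j in range(i + 1, n) if A[i, j]]
s = 2 * A.astype(float) - 1                      # centered +/-1 edge variables

E = len(edges)
M = np.zeros((E, E))
for a, (i, j) in enumerate(edges):
    for b, (k, l) in enumerate(edges):
        if a != b and len({i, j, k, l}) == 4:    # vertex-disjoint edge pairs
            M[a, b] = s[i, k] * s[j, l]          # toy entry rule (made up here)

q = 4
moment_est = np.trace(np.linalg.matrix_power(M, 2 * q)) ** (1 / (2 * q))
spec_norm = np.abs(np.linalg.eigvalsh(M)).max()
print(spec_norm, moment_est)                     # tr(M^{2q})^{1/2q} >= ||M||
```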
Sparse Principal Component Analysis (PCA) is a dimensionality reduction technique wherein one seeks a low-rank representation of a data matrix with additional sparsity constraints on the obtained representation. We consider two probabilistic formulations of sparse PCA: the spiked Wigner and the spiked Wishart (or spiked covariance) models. We analyze an Approximate Message Passing (AMP) algorithm to estimate the underlying signal and show, in the high-dimensional limit, that the AMP estimates are information-theoretically optimal. As an immediate corollary, our results demonstrate that the posterior expectation of the underlying signal, which is often intractable to compute, can be obtained using a polynomial-time scheme. Our results also effectively provide a single-letter characterization of the sparse PCA problem.
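To make the algorithmic side concrete, here is a minimal AMP sketch for a spiked Wigner model with a Rademacher (rather than sparse) signal, using the tanh posterior-mean denoiser and spectral initialization; the sparse PCA analysis would replace tanh with the posterior-mean denoiser of a sparse prior. All scalings and parameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n, lam = 2000, 1.5                               # signal-to-noise above threshold
x_star = rng.choice([-1.0, 1.0], size=n)         # Rademacher signal
W = rng.normal(size=(n, n)); W = (W + W.T) / np.sqrt(2)
Y = (lam / n) * np.outer(x_star, x_star) + W / np.sqrt(n)

x = np.sqrt(n) * np.linalg.eigh(Y)[1][:, -1]     # spectral initialization
fx_old = np.zeros(n)
for t in range(12):
    fx = np.tanh(lam * x)                        # posterior-mean denoiser, +/-1 prior
    b = (lam / n) * np.sum(1.0 - fx**2)          # Onsager coefficient
    x = Y @ fx - b * fx_old                      # AMP update with memory term
    fx_old = fx
    print(t, abs(fx @ x_star) / n)               # overlap with the true signal
```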
In sparse principal component analysis we are given noisy observations of a low-rank matrix of dimension $n \times p$ and seek to reconstruct it under additional sparsity assumptions. In particular, we assume here that each of the principal components $\mathbf{v}_1,\dots,\mathbf{v}_r$ has at most $s_0$ non-zero entries. We are particularly interested in the high-dimensional regime wherein $p$ is comparable to, or even much larger than, $n$. In an influential paper, \cite{johnstone2004sparse} introduced a simple algorithm that estimates the support of the principal vectors $\mathbf{v}_1,\dots,\mathbf{v}_r$ from the largest entries in the diagonal of the empirical covariance. This method can be shown to identify the correct support with high probability if $s_0 \le K_1\sqrt{n/\log p}$, and to fail with high probability if $s_0 \ge K_2\sqrt{n/\log p}$, for two constants $0 < K_1, K_2 < \infty$. Despite a considerable amount of work over the last ten years, no practical algorithm exists with provably better support recovery guarantees. Here we analyze a covariance thresholding algorithm that was recently proposed by \cite{KrauthgamerSPCA}. On the basis of numerical simulations (for the rank-one case), these authors conjectured that covariance thresholding correctly recovers the support with high probability for $s_0 \le K\sqrt{n}$ (assuming $n$ of the same order as $p$). We prove this conjecture, and in fact establish a more general guarantee that includes higher rank as well as $n$ much smaller than $p$. Recent lower bounds \cite{berthet2013computational, ma2015sum} suggest that no polynomial-time algorithm can do significantly better. The key technical component of our analysis develops new bounds on the norm of kernel random matrices, in regimes that were not considered before.
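A hedged sketch of covariance thresholding in the rank-one case (all model constants and the threshold value are illustrative): form the centered empirical covariance, soft-threshold its entries at the $1/\sqrt{n}$ noise scale, and read the support off the leading eigenvector. The analyzed algorithm includes further cleanup steps omitted here.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, s0, beta = 2000, 2000, 50, 3.0             # s0 ~ sqrt(n) regime
v = np.zeros(p)
supp = rng.choice(p, s0, replace=False)
v[supp] = 1.0 / np.sqrt(s0)                      # sparse unit-norm component
u = rng.normal(size=(n, 1))
X = np.sqrt(beta) * u @ v[None, :] + rng.normal(size=(n, p))

G = X.T @ X / n - np.eye(p)                      # centered empirical covariance
tau = 2.0 / np.sqrt(n)                           # threshold at the noise scale
G_soft = np.sign(G) * np.maximum(np.abs(G) - tau, 0.0)  # entrywise soft threshold
lead = np.linalg.eigh(G_soft)[1][:, -1]          # leading eigenvector
est = np.argsort(np.abs(lead))[-s0:]             # largest entries -> support
print(len(set(est) & set(supp)) / s0)            # fraction of support recovered
```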
We consider a discriminative learning (regression) problem in which the regression function is a convex combination of $k$ linear classifiers. Existing approaches are based on the EM algorithm, or similar techniques, without provable guarantees. We develop a simple method, based on spectral techniques and a `mirroring' trick, that discovers the subspace spanned by the classifiers' parameter vectors. Under a probabilistic assumption on the feature vector distribution, we prove that this approach has nearly optimal statistical efficiency.
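The sketch below shows one plausible instantiation of the spectral-plus-mirroring idea for two linear classifiers; the exact construction in the paper differs in its details. The first moment $\mathbb{E}[y\,x]$ gives a direction $\hat{w}$ in the classifiers' span; mirroring the labels by $\mathrm{sign}\langle \hat{w}, x\rangle$ makes the effective link function even, so the matrix moment $\mathbb{E}[\tilde{y}\, x x^{\top}]$, minus its isotropic part, concentrates on that span.

```python
import numpy as np

rng = np.random.default_rng(3)
d, n = 30, 200000
B = np.linalg.qr(rng.normal(size=(d, 2)))[0]     # two orthonormal classifiers
X = rng.normal(size=(n, d))                      # Gaussian feature vectors
which = rng.integers(0, 2, size=n)               # mixture label per sample
y = np.sign(np.sum(X * B[:, which].T, axis=1))   # response of chosen classifier

w = X.T @ y / n                                  # first moment: lies in the span
w /= np.linalg.norm(w)
y_mir = y * np.sign(X @ w)                       # mirroring: even effective link
M = (X * y_mir[:, None]).T @ X / n               # matrix moment E[y~ x x^T]
M = (M + M.T) / 2 - np.mean(y_mir) * np.eye(d)   # symmetrize, drop isotropic part
vals, vecs = np.linalg.eigh(M)
U = vecs[:, np.argsort(-np.abs(vals))[:2]]       # estimated 2-dim span
print(np.linalg.svd(U.T @ B, compute_uv=False))  # singular values ~1 on success
```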
Consider an Erdős–Rényi random graph in which each edge is present independently with probability $1/2$, except for a subset $\mathcal{C}_N$ of the vertices that form a clique (a completely connected subgraph). We consider the problem of identifying the clique, given a realization of such a random graph. The best known algorithm provably finds the clique in linear time with high probability, provided $|\mathcal{C}_N| \ge 1.261\sqrt{N}$ \cite{dekel2011finding}. Spectral methods can be shown to fail on cliques smaller than $\sqrt{N}$. In this paper we describe a nearly linear time algorithm that succeeds with high probability for $|\mathcal{C}_N| \ge (1+\varepsilon)\sqrt{N/e}$ for any $\varepsilon > 0$. This is the first algorithm that provably improves over spectral methods. We further generalize the hidden clique problem to other background graphs (the standard case corresponding to the complete graph on $N$ vertices). For large-girth regular graphs of degree $(\Delta+1)$ we prove that `local' algorithms succeed if $|\mathcal{C}_N| \ge (1+\varepsilon)N/\sqrt{e\Delta}$ and fail if $|\mathcal{C}_N| \le (1-\varepsilon)N/\sqrt{e\Delta}$.
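For intuition, the sketch below solves a toy instance with a clique comfortably above the threshold, using a plain power iteration on the centered $\pm 1$ matrix followed by a degree-based cleanup. This is the spectral baseline that the message-passing algorithm of the paper provably improves upon, not the paper's algorithm itself; the constants are chosen generously so the demo succeeds.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 2000
K = int(3 * np.sqrt(N))                          # clique size, well above sqrt(N)
clique = rng.choice(N, K, replace=False)
A = np.triu(rng.random((N, N)) < 0.5, 1)
A = (A | A.T).astype(float)
A[np.ix_(clique, clique)] = 1.0                  # plant the clique
np.fill_diagonal(A, 0.0)
W = 2.0 * A - 1.0                                # +1 edges, -1 non-edges
np.fill_diagonal(W, 0.0)

v = rng.normal(size=N)
for _ in range(40):                              # power iteration: top eigenvector
    v = W @ v
    v /= np.linalg.norm(v)
cand = np.argsort(np.abs(v))[-K:]                # candidates: largest entries
score = A[:, cand].sum(axis=1)                   # cleanup: adjacency to candidates
found = np.argsort(score)[-K:]
print(len(set(found) & set(clique)) / K)         # fraction of clique recovered
```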
We consider a class of nonlinear mappings $\mathsf{F}_{A,N}$ in $\mathbb{R}^N$ indexed by symmetric random matrices $A \in \mathbb{R}^{N\times N}$ with independent entries. Within spin glass theory, special cases of these mappings correspond to iterating the TAP equations and were studied by Bolthausen [Comm. Math. Phys. 325 (2014) 333-366]. Within information theory, they are known as approximate message passing algorithms. We study the high-dimensional (large $N$) behavior of the iterates of $\mathsf{F}$ for polynomial functions $\mathsf{F}$, and prove that it is universal; that is, it depends only on the first two moments of the entries of $A$, under a sub-Gaussian tail condition. As an application, we prove the universality of a certain phase transition arising in polytope geometry and compressed sensing. This solves, for a broad class of random projections, a conjecture by David Donoho and Jared Tanner.
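The universality phenomenon is easy to probe numerically: iterate a polynomial map $x \mapsto A\, p(x)/\sqrt{N}$ (renormalized at each step; the Onsager memory terms of full AMP are omitted in this simplified sketch) for a Gaussian and a Rademacher matrix with matching first two moments, and compare an observable of the empirical law of the iterates. The polynomial and observable below are arbitrary illustrative choices.

```python
import numpy as np

def iterate(A, x0, steps=5):
    """Iterate x -> A p(x)/sqrt(N), renormalized; record an observable."""
    x, stats = x0.copy(), []
    for _ in range(steps):
        p = x**2 - 1.0                           # fixed polynomial nonlinearity
        x = A @ p / np.sqrt(len(x))
        x /= np.sqrt(np.mean(x**2))              # keep entries at unit scale
        stats.append(np.mean(x**4))              # observable of the empirical law
    return stats

rng = np.random.default_rng(5)
N = 2000
x0 = rng.normal(size=N)
G = rng.normal(size=(N, N)); G = (G + G.T) / np.sqrt(2)            # Gaussian
R = np.triu(rng.choice([-1.0, 1.0], size=(N, N)), 1); R = R + R.T  # Rademacher
print(iterate(G, x0))
print(iterate(R, x0))                            # the two lists agree as N grows
```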
We consider ferromagnetic Ising models on graphs that converge locally to trees. Examples include random regular graphs with bounded degree and uniformly random graphs with bounded average degree. We prove that the cavity prediction for the limiting free energy per spin is correct for any positive temperature and external field. Further, local marginals can be approximated by iterating a set of mean field (cavity) equations. Both results are achieved by proving the local convergence of the Boltzmann distribution on the original graph to the Boltzmann distribution on the appropriate infinite random tree.
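A minimal sketch of the cavity (belief propagation) iteration that this analysis justifies: cavity fields $h_{i\to j}$ on directed edges satisfy $h_{i\to j} = B + \sum_{k\in\partial i\setminus j} \mathrm{atanh}(\tanh(\beta)\tanh(h_{k\to i}))$, and single-site magnetizations follow from the full sums. The graph model, temperature, and field below are illustrative choices.

```python
import random
import math

rng = random.Random(6)
n, c, beta, B = 2000, 3.0, 0.8, 0.1              # size, avg degree, 1/T, field
edges = set()
for _ in range(int(c * n / 2)):                  # rough Erdos-Renyi edge sampling
    u, v = rng.sample(range(n), 2)
    edges.add((min(u, v), max(u, v)))
nbr = [[] for _ in range(n)]
for u, v in edges:
    nbr[u].append(v); nbr[v].append(u)

t = math.tanh(beta)
h = {(i, j): 0.0 for i in range(n) for j in nbr[i]}  # cavity field on i without j
for _ in range(100):                             # iterate the cavity equations
    for (i, j) in list(h):
        h[(i, j)] = B + sum(math.atanh(t * math.tanh(h[(k, i)]))
                            for k in nbr[i] if k != j)
m = [math.tanh(B + sum(math.atanh(t * math.tanh(h[(k, i)])) for k in nbr[i]))
     for i in range(n)]
print(sum(m) / n)                                # cavity estimate of magnetization
```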
We consider the problem of learning a coefficient vector $x_0 \in \mathbb{R}^N$ from noisy linear observations $y = Ax_0 + w \in \mathbb{R}^n$. In many contexts (ranging from model selection to image processing) it is desirable to construct a sparse estimator $\hat{x}$. In this case, a popular approach consists in solving an $\ell_1$-penalized least squares problem, known as the LASSO or Basis Pursuit DeNoising (BPDN). For sequences of matrices $A$ of increasing dimensions, with independent Gaussian entries, we prove that the normalized risk of the LASSO converges to a limit, and we obtain an explicit expression for this limit. Our result is the first rigorous derivation of an explicit formula for the asymptotic mean square error of the LASSO for random instances. The proof technique is based on the analysis of AMP, a recently developed efficient algorithm inspired by ideas from graphical models. Simulations on real data matrices suggest that our results can be relevant in a broad array of practical applications.
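A sketch of the AMP iteration underlying the analysis (soft thresholding plus the Onsager correction term), run on a synthetic sparse recovery instance; the threshold schedule and all parameters are illustrative choices, not the calibration used in the proof.

```python
import numpy as np

def soft(u, t):
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

rng = np.random.default_rng(7)
N, n, k, sigma = 2000, 1000, 100, 0.05
x0 = np.zeros(N)
x0[rng.choice(N, k, replace=False)] = rng.normal(size=k)   # sparse signal
A = rng.normal(size=(n, N)) / np.sqrt(n)         # Gaussian design, unit columns
y = A @ x0 + sigma * rng.normal(size=n)

alpha = 2.0                                      # threshold multiplier (tunable)
x, z = np.zeros(N), y.copy()
for t in range(30):
    tau = alpha * np.linalg.norm(z) / np.sqrt(n) # empirical noise-level estimate
    x_new = soft(x + A.T @ z, tau)
    z = y - A @ x_new + (z / n) * np.count_nonzero(x_new)  # Onsager correction
    x = x_new
print(np.linalg.norm(x - x0) / np.linalg.norm(x0))         # relative error
```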