Finite-sample properties of random covariance-type matrices have been the subject of much research. In this paper we focus on the lower tail of such a matrix, and prove that it is subgaussian under a simple fourth-moment assumption on the one-dimensional marginals of the random vectors. A similar result holds for more general sums of random positive semidefinite matrices, and the (relatively simple) proof uses a variant of the so-called PAC-Bayesian method for bounding empirical processes. We give two applications of the main result. In the first one we obtain a new finite-sample bound for the ordinary least squares estimator in linear regression with random design. Our result is model-free, requires fairly weak moment assumptions and is almost optimal. Our second application is to bounding restricted eigenvalue constants of certain random ensembles with heavy tails. These constants are important in the analysis of problems in Compressed Sensing and High-Dimensional Statistics, where one recovers a sparse vector from a small number of linear measurements. Our result implies that heavy tails still allow for the fast recovery rates found in efficient methods such as the LASSO and the Dantzig selector. Along the way we strengthen, with a fairly short argument, a recent result of Rudelson and Zhou on the restricted eigenvalue property.
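To make the lower-tail phenomenon concrete, here is a minimal Python sketch (not from the paper; the dimension, sample size, and Student-t design are illustrative assumptions): it tracks the smallest eigenvalue of the empirical covariance matrix built from heavy-tailed vectors whose one-dimensional marginals nevertheless have finite fourth moments.

import numpy as np

rng = np.random.default_rng(0)
d, n, trials = 20, 400, 200        # dimension, sample size, Monte Carlo repetitions
df = 5                             # t(5): heavy tails, but finite fourth moments

min_eigs = []
for _ in range(trials):
    X = rng.standard_t(df, size=(n, d))        # heavy-tailed design
    X /= np.sqrt(df / (df - 2))                # rescale each coordinate to unit variance
    S = X.T @ X / n                            # empirical covariance matrix
    min_eigs.append(np.linalg.eigvalsh(S)[0])  # smallest eigenvalue (eigvalsh is ascending)

print(f"smallest eigenvalue over {trials} trials: min={np.min(min_eigs):.3f}, "
      f"mean={np.mean(min_eigs):.3f} (population covariance is the identity)")

Under a subgaussian lower-tail bound, the smallest eigenvalue should concentrate not far below 1 despite the heavy tails, which is what the simulation exhibits.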
We study distributions of random vectors whose components are second-order polynomials in Gaussian random variables. Assuming that the law of such a vector is not absolutely continuous with respect to Lebesgue measure, we derive some interesting consequences …
We propose an efficient algorithm for solving group synchronization under high levels of corruption and noise, with a focus on rotation synchronization. We first describe our recent theoretically guaranteed message-passing algorithm that estimates …
Given a large data matrix $A \in \mathbb{R}^{n \times n}$, we consider the problem of determining whether its entries are i.i.d. with some known marginal distribution $A_{ij} \sim P_0$, or instead $A$ contains a principal submatrix $A_{\mathsf{Q},\mathsf{Q}}$ whose …
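The abstract is truncated above, so the alternative hypothesis in the following minimal Python sketch is an assumption (a mean shift of size mu on a principal submatrix indexed by a random set Q, probed with a crude spectral statistic); it only illustrates the planted-submatrix testing problem, not the paper's method.

import numpy as np

rng = np.random.default_rng(2)
n, k, mu = 200, 30, 1.0               # matrix size, planted set size, assumed mean shift

A_null = rng.normal(size=(n, n))      # H0: entries i.i.d. P_0 = N(0, 1)

A_planted = rng.normal(size=(n, n))   # H1: same noise ...
Q = rng.choice(n, k, replace=False)
A_planted[np.ix_(Q, Q)] += mu         # ... plus a planted principal submatrix on Q x Q

def top_eig(A):
    # largest eigenvalue of the symmetrized matrix, a simple detection statistic
    return np.linalg.eigvalsh((A + A.T) / 2)[-1]

print(f"H0 statistic: {top_eig(A_null):.1f}, H1 statistic: {top_eig(A_planted):.1f}")

The planted block contributes an eigenvalue of roughly k*mu, which separates from the noise bulk once k*mu exceeds the spectral norm of the noise.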
High-dimensional settings, where the data dimension ($d$) far exceeds the number of observations ($n$), are common in many statistical and machine learning applications. Methods based on $\ell_1$-relaxation, such as the Lasso, are very popular for sparse …
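Since the abstract is cut off, the following is only a generic sketch of the $\ell_1$-relaxation it refers to, using scikit-learn's Lasso (the problem sizes, noise level, and regularization strength are illustrative assumptions): recovering an $s$-sparse vector from $n \ll d$ noisy linear measurements.

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n, d, s = 100, 500, 5                     # n << d, s-sparse ground truth
beta = np.zeros(d)
beta[rng.choice(d, s, replace=False)] = rng.normal(size=s)

X = rng.normal(size=(n, d))               # Gaussian design matrix
y = X @ beta + 0.1 * rng.normal(size=n)   # noisy linear measurements

model = Lasso(alpha=0.1).fit(X, y)        # ell_1-penalized least squares
err = np.linalg.norm(model.coef_ - beta)
print(f"ell_2 estimation error: {err:.3f}; "
      f"nonzeros selected: {int(np.sum(model.coef_ != 0))}")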
This paper introduces the concept of random context representations for the transition probabilities of a finite-alphabet stochastic process. Processes with these representations generalize context tree processes (a.k.a. variable length Markov chains) …