
Distances to the Span of Sparse Random Matrices, with Applications to Gradient Coding

Published by: Margalit Glasgow
Publication date: 2021
Paper language: English





We study a natural question about sparse random matrices which arises from an application in distributed computing: what is the distance from a fixed vector to the column span of a sparse random matrix $A \in \mathbb{R}^{n \times m}$? We answer this question for several ensembles of sparse random matrices in which the average number of non-zero entries per column, $d$, is smaller than $\log(n)$. Key to our analysis is a new characterization of linear dependencies in sparse random matrices. We show that with high probability, in certain random matrices, including rectangular matrices with i.i.d. Bernoulli entries and $m \geq (1 + \epsilon)n$, and symmetric random matrices with Bernoulli entries, any linear dependency must be caused by one of three specific combinatorial structures. We show applications of our result to analyzing and designing \emph{gradient codes}, replication schemes used in distributed machine learning to mitigate the effect of slow machines, called \emph{stragglers}. We give the first known construction for a gradient code that achieves near-optimal error for both random and adversarial choices of stragglers.
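
For intuition only (not from the paper): the numpy sketch below samples a sparse Bernoulli matrix with roughly $d$ non-zero entries per column and measures the distance from a fixed vector to its column span via a least-squares projection. The concrete values of $n$, $m$, and $d$ are illustrative assumptions; the all-ones target is chosen because in gradient coding the goal is to approximate the sum of the workers' gradients.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m = 200, 240            # m = 1.2 * n, i.e. m >= (1 + eps) * n with eps = 0.2
d = 3                      # average non-zeros per column; here d < log(n) ~ 5.3

# Sparse Bernoulli ensemble: each entry equals 1 independently with probability d/n.
A = (rng.random((n, m)) < d / n).astype(float)

# Fixed target vector; the all-ones vector is the natural target in gradient coding.
v = np.ones(n)

# Distance from v to the column span of A: ||v - A x*||_2 for the least-squares x*.
# In this sparse regime some rows of A are typically all zero, so the column span
# misses those coordinates and the distance is typically non-zero.
x_star, *_ = np.linalg.lstsq(A, v, rcond=None)
print(f"distance from v to the column span of A: {np.linalg.norm(v - A @ x_star):.4f}")
```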




Read also

Let $Q_{n,d}$ denote the random combinatorial matrix whose rows are independent of one another and such that each row is sampled uniformly at random from the subset of vectors in $\{0,1\}^n$ having precisely $d$ entries equal to $1$. We present a short proof of the fact that $\Pr[\det(Q_{n,d})=0] = O\left(\frac{n^{1/2}\log^{3/2} n}{d}\right)=o(1)$, whenever $d=\omega(n^{1/2}\log^{3/2} n)$. In particular, our proof accommodates sparse random combinatorial matrices in the sense that $d = o(n)$ is allowed. We also consider the singularity of deterministic integer matrices $A$ randomly perturbed by a sparse combinatorial matrix. In particular, we prove that $\Pr[\det(A+Q_{n,d})=0]=O\left(\frac{n^{1/2}\log^{3/2} n}{d}\right)$, again, whenever $d=\omega(n^{1/2}\log^{3/2} n)$ and $A$ has the property that $(1,-d)$ is not an eigenpair of $A$.
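
As an illustrative numerical companion (not a proof and not from the paper), the following sketch draws $Q_{n,d}$ as defined above and estimates the singularity probability by Monte Carlo; the values of $n$, $d$, and the number of trials are arbitrary small-scale choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_Q(n, d):
    """One draw of Q_{n,d}: each row independently has exactly d ones in
    uniformly random positions."""
    Q = np.zeros((n, n), dtype=float)
    for i in range(n):
        Q[i, rng.choice(n, size=d, replace=False)] = 1.0
    return Q

def singular_fraction(n, d, trials=2000):
    """Monte Carlo estimate of Pr[det(Q_{n,d}) = 0], detected via the matrix rank."""
    return sum(np.linalg.matrix_rank(sample_Q(n, d)) < n for _ in range(trials)) / trials

for d in (2, 4, 8):
    print(f"n=12, d={d}: estimated singularity probability {singular_fraction(12, d):.3f}")
```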
We determine the rank of a random matrix over an arbitrary field with prescribed numbers of non-zero entries in each row and column. As an application we obtain a formula for the rate of low-density parity check codes. This formula vindicates a conjecture of Lelarge (2013). The proofs are based on coupling arguments and a novel random perturbation, applicable to any matrix, that diminishes the number of short linear relations.
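
To make the LDPC connection concrete, here is a hypothetical sketch, not the paper's model or formula: it builds a 0/1 matrix with (approximately) prescribed row and column degrees by stub matching, computes its rank over GF(2) by Gaussian elimination, and reports the resulting code rate. The degree values and sizes are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def ldpc_matrix(n_rows, n_cols, row_deg, col_deg):
    """Random 0/1 parity-check matrix with (approximately) prescribed row and column
    degrees, built by randomly matching row stubs to column stubs; repeated pairs
    cancel mod 2, so actual degrees can fall slightly below the prescribed ones."""
    assert n_rows * row_deg == n_cols * col_deg
    row_stubs = np.repeat(np.arange(n_rows), row_deg)
    col_stubs = np.repeat(np.arange(n_cols), col_deg)
    rng.shuffle(col_stubs)
    H = np.zeros((n_rows, n_cols), dtype=np.uint8)
    for i, j in zip(row_stubs, col_stubs):
        H[i, j] ^= 1
    return H

def gf2_rank(H):
    """Rank over GF(2) via Gaussian elimination with XOR row operations."""
    H = H.copy()
    n_rows, n_cols = H.shape
    rank, col = 0, 0
    while rank < n_rows and col < n_cols:
        pivots = np.nonzero(H[rank:, col])[0]
        if pivots.size:
            H[[rank, rank + pivots[0]]] = H[[rank + pivots[0], rank]]
            below = np.nonzero(H[rank + 1:, col])[0] + rank + 1
            H[below] ^= H[rank]
            rank += 1
        col += 1
    return rank

H = ldpc_matrix(n_rows=300, n_cols=600, row_deg=6, col_deg=3)
r = gf2_rank(H)
print(f"GF(2) rank: {r}, resulting code rate: {(600 - r) / 600:.3f}")
```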
Majority dynamics on a graph $G$ is a deterministic process such that every vertex updates its $\pm 1$-assignment according to the majority assignment on its neighbors simultaneously at each step. Benjamini, Chan, O'Donnell, Tamuz and Tan conjectured that, in the Erdős--Rényi random graph $G(n,p)$, the random initial $\pm 1$-assignment converges to a $99\%$-agreement with high probability whenever $p=\omega(1/n)$. This conjecture was first confirmed for $p\geq\lambda n^{-1/2}$ for a large constant $\lambda$ by Fountoulakis, Kang and Makai. Although this result has been reproved recently by Tran and Vu and by Berkowitz and Devlin, it was unknown whether the conjecture holds for $p< \lambda n^{-1/2}$. We break this $\Omega(n^{-1/2})$-barrier by proving the conjecture for sparser random graphs $G(n,p)$, where $\lambda n^{-3/5}\log n \leq p \leq \lambda n^{-1/2}$ with a large constant $\lambda>0$.
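
The sketch below simulates synchronous majority dynamics on one sample of $G(n,p)$ from a uniformly random $\pm 1$ start and reports the final level of agreement. The graph size, edge probability, number of steps, and tie-breaking convention are illustrative assumptions; the theorem above is an asymptotic statement and is not verified by a single small simulation.

```python
import numpy as np

rng = np.random.default_rng(3)

def majority_dynamics(n, p, steps=20):
    """Synchronous majority dynamics on one sample of G(n, p) from a uniformly
    random +/-1 start; returns the fraction of vertices on the larger side."""
    upper = np.triu(rng.random((n, n)) < p, k=1)   # Erdos-Renyi upper triangle
    adj = (upper | upper.T).astype(int)
    x = rng.choice([-1, 1], size=n)
    for _ in range(steps):
        field = adj @ x
        # One common tie-breaking convention: keep the previous value on a tie.
        x = np.where(field != 0, np.sign(field), x)
    return max(np.mean(x == 1), np.mean(x == -1))

# Illustrative sizes only; the theorem is an asymptotic statement.
n, p = 2000, 0.01
print(f"agreement after 20 steps on G({n}, {p}): {majority_dynamics(n, p):.3f}")
```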
R. Glebov, M. Krivelevich (2012)
We prove that the number of Hamilton cycles in the random graph $G(n,p)$ is $n!\,p^n(1+o(1))^n$ a.a.s., provided that $p\geq (\ln n+\ln\ln n+\omega(1))/n$. Furthermore, we prove the hitting-time version of this statement, showing that in the random graph process, the edge that creates a graph of minimum degree $2$ creates $(\ln n/e)^n(1+o(1))^n$ Hamilton cycles a.a.s.
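
For a small-scale sanity check of the counting formula (illustrative only), one can brute-force-count Hamilton cycles in a sample of $G(n,p)$ for tiny $n$ and compare with the first-moment prediction $(n-1)!/2 \cdot p^n$; the theorem's $n!\,p^n(1+o(1))^n$ form absorbs such polynomial factors into the $(1+o(1))^n$ term. The parameters below are arbitrary.

```python
import math
from itertools import permutations

import numpy as np

rng = np.random.default_rng(4)

def count_hamilton_cycles(adj):
    """Brute-force count of undirected Hamilton cycles: fix vertex 0, try every
    ordering of the remaining vertices, and divide by 2 for the two directions."""
    n = adj.shape[0]
    count = 0
    for perm in permutations(range(1, n)):
        cycle = (0,) + perm
        if all(adj[cycle[i], cycle[(i + 1) % n]] for i in range(n)):
            count += 1
    return count // 2

n, p = 9, 0.7                        # tiny and fairly dense so brute force is feasible
upper = np.triu(rng.random((n, n)) < p, k=1)
adj = (upper | upper.T).astype(int)

# First-moment prediction: E[#Hamilton cycles] = (n-1)!/2 * p^n.
print("Hamilton cycles in this sample:", count_hamilton_cycles(adj))
print("expected count (n-1)!/2 * p^n :", round(math.factorial(n - 1) / 2 * p ** n))
```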
Kernel methods are an extremely popular set of techniques used for many important machine learning and data analysis applications. In addition to having good practical performance, these methods are supported by a well-developed theory. Kernel methods use an implicit mapping of the input data into a high-dimensional feature space defined by a kernel function, i.e., a function returning the inner product between the images of two data points in the feature space. Central to any kernel method is the kernel matrix, which is built by evaluating the kernel function on a given sample dataset. In this paper, we initiate the study of non-asymptotic spectral theory of random kernel matrices. These are $n \times n$ random matrices whose $(i,j)$th entry is obtained by evaluating the kernel function on $x_i$ and $x_j$, where $x_1,\ldots,x_n$ are a set of $n$ independent random high-dimensional vectors. Our main contribution is to obtain tight upper bounds on the spectral norm (largest eigenvalue) of random kernel matrices constructed by commonly used kernel functions based on polynomials and Gaussian radial basis. As an application of these results, we provide lower bounds on the distortion needed for releasing the coefficients of kernel ridge regression under attribute privacy, a general privacy notion which captures a large class of privacy definitions. Kernel ridge regression is a standard method for performing non-parametric regression that regularly outperforms traditional regression approaches in various domains. Our privacy distortion lower bounds are the first for any kernel technique, and our analysis assumes realistic scenarios for the input, unlike all previous lower bounds for other release problems, which only hold under very restrictive input settings.
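
As a minimal illustration of the object being bounded (not the paper's analysis), the sketch below builds a Gaussian RBF kernel matrix from independent high-dimensional Gaussian vectors and computes its spectral norm; the sample size, dimension, and bandwidth are assumed illustrative values.

```python
import numpy as np

rng = np.random.default_rng(5)

n, dim = 500, 200                   # n sample points in `dim` dimensions (illustrative)
sigma = np.sqrt(dim)                # a common bandwidth scaling for the Gaussian kernel

X = rng.standard_normal((n, dim))   # independent high-dimensional random vectors

# Gaussian (RBF) kernel matrix: K[i, j] = exp(-||x_i - x_j||^2 / (2 * sigma^2)).
sq_norms = (X ** 2).sum(axis=1)
sq_dists = np.maximum(sq_norms[:, None] + sq_norms[None, :] - 2 * X @ X.T, 0.0)
K = np.exp(-sq_dists / (2 * sigma ** 2))

# The spectral norm of the (symmetric PSD) kernel matrix is its largest eigenvalue.
print(f"largest eigenvalue of the RBF kernel matrix: {np.linalg.eigvalsh(K)[-1]:.2f}")
```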
