
New Accumulative Score Function Based Bound For Sparsity Level of L1 Minimization

Posted by Sheng Han
Publication date: 2014
Research field: Mathematical Statistics
Paper language: English





This paper discusses a fundamental problem in compressed sensing: the sparse recoverability of L1 minimization with an arbitrary sensing matrix. We develop a new accumulative score function (ASF) that provides a lower bound for the recoverable sparsity level (SL) of a sensing matrix while preserving low computational complexity. We first define a score function for each row of the matrix; ASF then sums the largest scores until the total reaches 0.5. Interestingly, the number of rows involved in this summation is a reliable lower bound on SL. It is further proved that ASF provides a sharper bound for SL than coherence. We also investigate the underlying relationship between the new ASF and the classical restricted isometry constant (RIC) and obtain an RIC-based bound for SL.
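The accumulation step itself is simple to illustrate. Below is a minimal Python sketch of the procedure as the abstract describes it; the per-row score function `row_score` is a hypothetical placeholder, since its exact definition is given in the paper, not in this abstract:

```python
import numpy as np

def asf_sparsity_bound(A, row_score):
    """ASF procedure as described in the abstract: score each row of the
    sensing matrix A, add the largest scores until the running total
    reaches 0.5, and return the number of rows used. The paper reports
    this count as a lower bound on the recoverable sparsity level (SL).

    row_score is a placeholder; the actual score function is defined
    in the paper, not in this abstract.
    """
    scores = np.sort([row_score(a) for a in A])[::-1]  # largest scores first
    total, k = 0.0, 0
    while k < len(scores) and total < 0.5:
        total += scores[k]
        k += 1
    return k
```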


Read also

64 - Pierre Alquier 2011
We focus on the high dimensional linear regression $Y\sim\mathcal{N}(X\beta^{*},\sigma^{2}I_{n})$, where $\beta^{*}\in\mathds{R}^{p}$ is the parameter of interest. In this setting, several estimators such as the LASSO and the Dantzig Selector are known to satisfy interesting properties whenever the vector $\beta^{*}$ is sparse. Interestingly, both the LASSO and the Dantzig Selector can be seen as orthogonal projections of 0 onto $\mathcal{DC}(s)=\{\beta\in\mathds{R}^{p}:\|X^{T}(Y-X\beta)\|_{\infty}\leq s\}$, using an $\ell_{1}$ distance for the Dantzig Selector and $\ell_{2}$ for the LASSO. For a well-chosen $s>0$, this set is actually a confidence region for $\beta^{*}$. In this paper, we investigate the properties of estimators defined as projections onto $\mathcal{DC}(s)$ using general distances. We prove that the obtained estimators satisfy oracle properties close to those of the LASSO and the Dantzig Selector. On top of that, it turns out that these estimators can be tuned to exploit a different sparsity and/or slightly different estimation objectives.
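The projection view in this abstract translates directly into a small convex program. Here is a hedged sketch using cvxpy (the function name and the generic `norm` parameter are illustrative, not from the paper): projecting 0 onto $\mathcal{DC}(s)$ under the $\ell_2$ distance gives the LASSO-type estimator and under $\ell_1$ the Dantzig-type one, per the abstract.

```python
import cvxpy as cp
import numpy as np

def project_zero_onto_dc(X, Y, s, norm=2):
    """Project 0 onto DC(s) = {beta : ||X^T (Y - X beta)||_inf <= s}
    under a chosen norm, following the projection view in the abstract
    (norm=2 ~ LASSO, norm=1 ~ Dantzig Selector; other norms give the
    generalized estimators the paper studies)."""
    n, p = X.shape
    beta = cp.Variable(p)
    objective = cp.Minimize(cp.norm(beta, norm))              # distance from 0
    constraint = [cp.norm(X.T @ (Y - X @ beta), "inf") <= s]  # membership in DC(s)
    cp.Problem(objective, constraint).solve()
    return beta.value
```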
179 - Dennis Leung , Qi-Man Shao 2017
Let ${\bf R}$ be the Pearson correlation matrix of $m$ normal random variables. Rao's score test for the independence hypothesis $H_0:{\bf R}={\bf I}_m$, where ${\bf I}_m$ is the identity matrix of dimension $m$, was first considered by Schott (2005) in the high dimensional setting. In this paper, we study the asymptotic minimax power function of this test, under an asymptotic regime in which both $m$ and the sample size $n$ tend to infinity with the ratio $m/n$ upper bounded by a constant. In particular, our result implies that Rao's score test is rate-optimal for detecting the dependency signal $\|{\bf R}-{\bf I}_m\|_F$ of order $\sqrt{m/n}$, where $\|\cdot\|_F$ denotes the matrix Frobenius norm.
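For concreteness, the dependency signal referred to above has a direct empirical counterpart. The sketch below (a hypothetical helper, not from the paper) computes $\|\hat{\bf R}-{\bf I}_m\|_F$ from data; the exact form and normalization of Rao's score statistic are given in Schott (2005) and the paper, and are not reproduced here.

```python
import numpy as np

def dependency_signal(X):
    """Empirical version of the dependency signal ||R - I_m||_F:
    the Frobenius norm of the off-diagonal part of the sample
    Pearson correlation matrix of the columns of X
    (n samples x m variables)."""
    R_hat = np.corrcoef(X, rowvar=False)  # m x m sample correlation matrix
    m = R_hat.shape[0]
    return np.linalg.norm(R_hat - np.eye(m), ord="fro")
```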
173 - Xueying Tang , Ke Li , Malay Ghosh 2015
This paper considers Bayesian multiple testing under sparsity for polynomial-tailed distributions satisfying a monotone likelihood ratio property. Included in this class of distributions are the Student's t, the Pareto, and many others. We prove some general asymptotic optimality results under fixed and random thresholding. As examples of these general results, we establish the Bayesian asymptotic optimality of several multiple testing procedures from the literature for appropriately chosen false discovery rate levels. We also show by simulation that the Benjamini-Hochberg procedure with a false discovery rate level different from the asymptotically optimal one can lead to high Bayes risk.
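The Benjamini-Hochberg procedure mentioned above is the standard step-up rule, sketched here in Python for reference (the simulation design and the asymptotically optimal FDR level are specific to the paper and not reproduced):

```python
import numpy as np

def benjamini_hochberg(pvals, alpha):
    """Standard BH step-up procedure at FDR level alpha: with sorted
    p-values p_(1) <= ... <= p_(m), reject the hypotheses with the k
    smallest p-values, where k is the largest index satisfying
    p_(k) <= k * alpha / m. Returns a boolean rejection mask."""
    p = np.asarray(pvals)
    m = p.size
    order = np.argsort(p)
    below = p[order] <= alpha * np.arange(1, m + 1) / m
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()   # largest index meeting the bound
        reject[order[:k + 1]] = True
    return reject
```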
In this paper, we obtain and compare maximum likelihood estimates based on upper record values and on a random sample. We then prove some theorems about the asymptotic behavior of these estimates.
59 - Jianjun Xu , Wenquan Cui 2020
This article studies global testing of the slope function in the functional linear regression model in the framework of reproducing kernel Hilbert spaces. We propose a new test statistic based on smoothness regularization estimators. The asymptotic distribution of the test statistic is established under the null hypothesis. It is shown that the null asymptotic distribution is determined jointly by the reproducing kernel and the covariance function. Our theoretical analysis shows that the proposed test is consistent over a class of smooth local alternatives. Despite the generality of the method of regularization, we show the procedure is easily implementable. Numerical examples are provided to demonstrate the empirical advantages over competing methods.