
Estimation of low rank density matrices: bounds in Schatten norms and other distances

Posted by: Dong Xia
Publication date: 2016
Research field: Mathematical Statistics
Paper language: English





Let ${\mathcal S}_m$ be the set of all $m\times m$ density matrices (Hermitian positive semi-definite matrices of unit trace). Consider the problem of estimating an unknown density matrix $\rho\in {\mathcal S}_m$ based on outcomes of $n$ measurements of observables $X_1,\dots, X_n\in {\mathbb H}_m$ (${\mathbb H}_m$ being the space of $m\times m$ Hermitian matrices) for a quantum system identically prepared $n$ times in state $\rho.$ The outcomes $Y_1,\dots, Y_n$ of such measurements can be described by a trace regression model in which ${\mathbb E}_{\rho}(Y_j|X_j)={\rm tr}(\rho X_j),\ j=1,\dots, n.$ The design variables $X_1,\dots, X_n$ are often sampled at random from the uniform distribution in an orthonormal basis $\{E_1,\dots, E_{m^2}\}$ of ${\mathbb H}_m$ (such as the Pauli basis). The goal is to estimate the unknown density matrix $\rho$ based on the data $(X_1,Y_1), \dots, (X_n,Y_n).$ Let $$\hat Z:=\frac{m^2}{n}\sum_{j=1}^n Y_j X_j$$ and let $\check \rho$ be the projection of $\hat Z$ onto the convex set ${\mathcal S}_m$ of density matrices. It is shown that the estimator $\check \rho$ attains the minimax lower bounds in classes of low-rank density matrices (established earlier) up to logarithmic factors for all Schatten $p$-norm distances, $p\in [1,\infty],$ and for the Bures version of the quantum Hellinger distance. Moreover, for a slightly modified version of the estimator $\check \rho$, the same property also holds for the quantum relative entropy (Kullback-Leibler) distance between density matrices.
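The estimator described above is straightforward to reproduce numerically. The following Python sketch is illustrative only: it assumes Gaussian noise around ${\rm tr}(\rho X_j)$ and a normalized Pauli basis (the abstract specifies only the conditional mean of $Y_j$), forms $\hat Z=\frac{m^2}{n}\sum_{j=1}^n Y_j X_j$, and computes $\check\rho$ as the Frobenius-norm projection of $\hat Z$ onto ${\mathcal S}_m$, which reduces to projecting the spectrum of $\hat Z$ onto the probability simplex.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)

# Single-qubit Pauli matrices
paulis = [np.eye(2, dtype=complex),
          np.array([[0, 1], [1, 0]], dtype=complex),
          np.array([[0, -1j], [1j, 0]], dtype=complex),
          np.array([[1, 0], [0, -1]], dtype=complex)]

def pauli_basis(num_qubits):
    """Orthonormal Pauli basis of H_m (Hilbert-Schmidt inner product), m = 2**num_qubits."""
    m = 2 ** num_qubits
    basis = []
    for combo in itertools.product(paulis, repeat=num_qubits):
        E = combo[0]
        for P in combo[1:]:
            E = np.kron(E, P)
        basis.append(E / np.sqrt(m))          # tr(E_i E_j) = delta_ij
    return basis

def project_onto_density_matrices(Z):
    """Frobenius-norm projection of a Hermitian matrix onto S_m:
    eigendecompose and project the eigenvalues onto the probability simplex."""
    lam, U = np.linalg.eigh(Z)
    u = np.sort(lam)[::-1]                    # eigenvalues in decreasing order
    css = np.cumsum(u)
    k = np.arange(1, len(lam) + 1)
    j = np.nonzero(u - (css - 1) / k > 0)[0][-1]
    theta = (css[j] - 1) / (j + 1)
    lam_proj = np.maximum(lam - theta, 0)     # projected spectrum: nonnegative, sums to one
    return (U * lam_proj) @ U.conj().T

# Toy experiment (assumed noise model: Y_j = tr(rho X_j) + Gaussian noise)
num_qubits, r, n, sigma = 3, 2, 20000, 0.1
m = 2 ** num_qubits
basis = pauli_basis(num_qubits)

G = rng.normal(size=(m, r)) + 1j * rng.normal(size=(m, r))
rho = G @ G.conj().T                          # random rank-r density matrix
rho /= np.trace(rho).real

idx = rng.integers(len(basis), size=n)        # uniform design over the Pauli basis
Z_hat = np.zeros((m, m), dtype=complex)
for i in idx:
    X = basis[i]
    y = np.trace(rho @ X).real + sigma * rng.normal()
    Z_hat += y * X
Z_hat *= m ** 2 / n                           # hat Z = (m^2 / n) sum_j Y_j X_j

rho_check = project_onto_density_matrices(Z_hat)
print("Frobenius error:", np.linalg.norm(rho_check - rho))
```

Since ${\mathcal S}_m$ is unitarily invariant, the Euclidean projection acts only on the spectrum of $\hat Z$, which is why the simplex projection of the eigenvalues suffices.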




Read also

The density matrices are positive semi-definite Hermitian matrices of unit trace that describe the state of a quantum system. The goal of the paper is to develop minimax lower bounds on error rates of estimation of low-rank density matrices in trace regression models used in quantum state tomography (in particular, in the case of Pauli measurements) with explicit dependence of the bounds on the rank and other complexity parameters. Such bounds are established for several statistically relevant distances, including quantum...
Consider the problem of estimating a low-rank matrix when its entries are perturbed by Gaussian noise. If the empirical distribution of the entries of the spikes is known, optimal estimators that exploit this knowledge can substantially outperform simple spectral approaches. Recent work characterizes the asymptotic accuracy of Bayes-optimal estimators in the high-dimensional limit. In this paper we present a practical algorithm that can achieve Bayes-optimal accuracy above the spectral threshold. A bold conjecture from statistical physics posits that no polynomial-time algorithm achieves optimal error below the same threshold (unless the best estimator is trivial). Our approach uses Approximate Message Passing (AMP) in conjunction with a spectral initialization. AMP algorithms have proved successful in a variety of statistical estimation tasks, and are amenable to exact asymptotic analysis via state evolution. Unfortunately, state evolution is uninformative when the algorithm is initialized near an unstable fixed point, as often happens in low-rank matrix estimation. We develop a new analysis of AMP that allows for spectral initializations. Our main theorem is general and applies beyond matrix estimation. However, we use it to derive detailed predictions for the problem of estimating a rank-one matrix in noise. Special cases of this problem are closely related, via universality arguments, to the network community detection problem for two asymmetric communities. For general rank-one models, we show that AMP can be used to construct confidence intervals and control false discovery rate. We provide illustrations of the general methodology by considering the cases of sparse low-rank matrices and of block-constant low-rank matrices with symmetric blocks (we refer to the latter as the 'Gaussian Block Model').
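As a rough illustration of the approach sketched in this abstract, the toy Python example below runs a symmetric rank-one AMP iteration with a tanh denoiser and an Onsager correction term, initialized at the top eigenvector of the observed matrix. It is only a heuristic sketch under assumed choices (Rademacher spike, uncalibrated denoiser); the paper's algorithm tunes the denoiser via state evolution, which is omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)
n, snr = 2000, 2.0                        # dimension and signal strength (above the spectral threshold snr > 1)

# Spiked rank-one model: Y = (snr / n) * v v^T + symmetric Gaussian noise
v = rng.choice([-1.0, 1.0], size=n)       # assumed Rademacher spike
W = rng.normal(size=(n, n)) / np.sqrt(n)
W = (W + W.T) / np.sqrt(2)
Y = (snr / n) * np.outer(v, v) + W

# Spectral initialization: top eigenvector of Y, scaled to norm sqrt(n)
eigvals, eigvecs = np.linalg.eigh(Y)
x = eigvecs[:, -1] * np.sqrt(n)

# AMP iterations: x_{t+1} = Y f(x_t) - b_t f(x_{t-1}), b_t = average derivative of f
m_prev = np.zeros(n)
for _ in range(20):
    m_cur = np.tanh(x)                    # denoiser matched (heuristically) to the +/-1 prior
    b = np.mean(1.0 - m_cur ** 2)         # Onsager coefficient
    x = Y @ m_cur - b * m_prev
    m_prev = m_cur

overlap = abs(np.dot(np.sign(x), v)) / n
print(f"overlap with the planted spike: {overlap:.3f}")
```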
We propose a Bayesian methodology for estimating spiked covariance matrices with jointly sparse structure in high dimensions. The spiked covariance matrix is reparametrized in terms of the latent factor model, where the loading matrix is equipped with a novel matrix spike-and-slab LASSO prior, which is a continuous shrinkage prior for modeling jointly sparse matrices. We establish the rate-optimal posterior contraction for the covariance matrix with respect to the operator norm as well as that for the principal subspace with respect to the projection operator norm loss. We also study the posterior contraction rate of the principal subspace with respect to the two-to-infinity norm loss, a novel loss function measuring the distance between subspaces that is able to capture element-wise eigenvector perturbations. We show that the posterior contraction rate with respect to the two-to-infinity norm loss is tighter than that with respect to the routinely used projection operator norm loss under certain low-rank and bounded coherence conditions. In addition, a point estimator for the principal subspace is proposed with the rate-optimal risk bound with respect to the projection operator norm loss. These results are based on a collection of concentration and large deviation inequalities for the matrix spike-and-slab LASSO prior. The numerical performance of the proposed methodology is assessed through synthetic examples and the analysis of a real-world face data example.
Clifford Lam (2008)
This paper focuses on exploring the sparsity of the inverse covariance matrix $\boldsymbol{\Sigma}^{-1}$, or the precision matrix. We form blocks of parameters based on each off-diagonal band of the Cholesky factor from its modified Cholesky decomposition, and penalize each block of parameters using the $L_2$-norm instead of individual elements. We develop a one-step estimator, and prove an oracle property which consists of a notion of block sign-consistency and asymptotic normality. In particular, provided the initial estimator of the Cholesky factor is good enough and the true Cholesky factor has a finite number of non-zero off-diagonal bands, the oracle property holds for the one-step estimator even if $p_n \gg n$, and $p_n$ can even be so large that $\log p_n = o(n)$, where the data $y$ has mean zero and tail probability $P(|y_j| > x) \leq K\exp(-Cx^d)$, $d > 0$, and $p_n$ is the number of variables. We also prove an operator norm convergence result, showing the cost of dimensionality is just $\log p_n$. The advantage of this method over banding by Bickel and Levina (2008) or nested LASSO by Levina et al. (2007) is that it allows for elimination of weaker signals that precede stronger ones in the Cholesky factor. A method for obtaining an initial estimator for the Cholesky factor is discussed, and a gradient projection algorithm is developed for calculating the one-step estimate. Simulation results are in favor of the newly proposed method, and a set of real data is analyzed using the new procedure and the banding method.
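To make the idea of penalizing whole bands of the Cholesky factor concrete, the short Python sketch below computes the modified Cholesky decomposition $\Sigma^{-1} = T' D^{-1} T$ and evaluates an $L_2$ penalty per off-diagonal band of $T$. It illustrates only the grouping of parameters, not the one-step estimator or the gradient projection algorithm of the paper; the example covariance is an assumed AR(1) toy case.

```python
import numpy as np

def cholesky_band_group_penalty(Sigma, lam):
    """Blockwise L2 penalty over the off-diagonal bands of the Cholesky factor T
    in the modified Cholesky decomposition Sigma^{-1} = T' D^{-1} T."""
    p = Sigma.shape[0]
    C = np.linalg.cholesky(Sigma)              # Sigma = C C', C lower triangular
    d = np.diag(C) ** 2                        # innovation variances, D = diag(d)
    L = C / np.sqrt(d)[None, :]                # unit lower-triangular L with C = L D^{1/2}
    T = np.linalg.inv(L)                       # Sigma^{-1} = T' D^{-1} T
    penalty = lam * sum(np.linalg.norm(np.diag(T, -k)) for k in range(1, p))
    return T, d, penalty

# Toy AR(1)-type covariance: its Cholesky factor T has a single non-zero off-diagonal band
p, rho = 8, 0.6
Sigma = rho ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
T, d, pen = cholesky_band_group_penalty(Sigma, lam=0.1)
print(np.round(T, 3))                          # only the first sub-diagonal band is (essentially) non-zero
print("band penalty:", pen)
```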
We study the role of the constraint set in determining the solution to low-rank, positive semidefinite (PSD) matrix sensing problems. The setting we consider involves rank-one sensing matrices: In particular, given a set of rank-one projections of an approximately low-rank PSD matrix, we characterize the radius of the set of PSD matrices that satisfy the measurements. This result yields a sampling rate to guarantee singleton solution sets when the true matrix is exactly low-rank, such that the choice of the objective function or the algorithm to be used is inconsequential in its recovery. We discuss applications of this contribution and compare it to recent literature regarding implicit regularization for similar problems. We demonstrate practical implications of this result by applying conic projection methods for PSD matrix recovery without incorporating low-rank regularization.
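One simple instance of a conic projection scheme of the kind mentioned above is alternating projections between the PSD cone and the rank-one measurement hyperplanes, with no low-rank regularization. The Python sketch below is an assumed illustration of this idea, not necessarily the exact algorithm used in the paper.

```python
import numpy as np

def psd_projection(X):
    """Euclidean projection onto the PSD cone: clip negative eigenvalues."""
    lam, U = np.linalg.eigh((X + X.T) / 2)
    return (U * np.maximum(lam, 0)) @ U.T

def recover_psd(A, y, d, n_iter=500):
    """Alternating projections between the PSD cone and the hyperplanes
    <a_i a_i', X> = y_i defined by rank-one sensing vectors a_i."""
    X = np.zeros((d, d))
    for _ in range(n_iter):
        for a, yi in zip(A, y):               # Kaczmarz-style sweep over the measurements
            X += ((yi - a @ X @ a) / np.dot(a, a) ** 2) * np.outer(a, a)
        X = psd_projection(X)
    return X

# Toy example: exactly rank-one PSD ground truth, Gaussian rank-one measurements
rng = np.random.default_rng(0)
d, n = 20, 200
u = rng.normal(size=(d, 1))
X_true = u @ u.T
A = rng.normal(size=(n, d))
y = np.einsum('ij,jk,ik->i', A, X_true, A)    # y_i = a_i' X a_i
X_hat = recover_psd(A, y, d)
print("relative error:", np.linalg.norm(X_hat - X_true) / np.linalg.norm(X_true))
```

When the measurements pin down a singleton feasible set, such projection iterates converge to the true matrix without any explicit rank penalty, which is in line with the result described above.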