
A refined determinantal inequality for correlation matrices

Added by Niushan Gao
Publication date: 2019
Language: English





Olkin [3] obtained a neat upper bound for the determinant of a correlation matrix. In this note, we present an extension and improvement of his result.
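
Olkin's bound itself is stated in [3]; as a baseline for why such bounds exist at all, Hadamard's inequality already gives $\det(R) \le \prod_i r_{ii} = 1$ for any correlation matrix $R$. A minimal numerical sketch of that baseline (illustrative only; it does not reproduce the refined bound of this note):

```python
import numpy as np

def random_correlation(n, rng):
    """Sample a correlation matrix by normalizing a random Gram matrix."""
    A = rng.standard_normal((n, 2 * n))
    S = A @ A.T                      # positive definite with probability 1
    d = np.sqrt(np.diag(S))
    return S / np.outer(d, d)        # unit diagonal: a correlation matrix

rng = np.random.default_rng(0)
R = random_correlation(5, rng)
# Hadamard's inequality: det(R) <= product of diagonal entries = 1.
print(np.linalg.det(R), np.linalg.det(R) <= 1.0)
```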

Related research

In this paper, we provide a negative answer to a long-standing open problem on the compatibility of Spearman's rho matrices. While Spearman's rho matrices and linear correlation matrices are known to be equivalent for dimensions up to 9, we show that they are not equivalent for dimensions 12 or higher. In particular, we connect this problem to the existence of a random vector satisfying certain linear projection restrictions, via two characterization results.
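
One classical bridge between the two objects is Pearson's relation for bivariate Gaussian vectors, $\rho_S = \frac{6}{\pi}\arcsin(\rho/2)$. A minimal sketch assuming a Gaussian-copula viewpoint (an illustration, not the paper's construction): map a candidate Spearman's rho matrix to the implied linear correlation matrix and check positive semidefiniteness.

```python
import numpy as np

def implied_linear_correlation(rho_s):
    """Linear correlation matrix implied by a Gaussian copula whose
    pairwise Spearman's rho is rho_s: rho = 2*sin(pi*rho_s/6)."""
    R = 2.0 * np.sin(np.pi * np.asarray(rho_s) / 6.0)
    np.fill_diagonal(R, 1.0)         # numerical guard; the map fixes the diagonal
    return R

def is_psd(M, tol=1e-10):
    return float(np.linalg.eigvalsh(M).min()) >= -tol

rho_s = np.array([[1.0, 0.5, 0.3],
                  [0.5, 1.0, 0.4],
                  [0.3, 0.4, 1.0]])
print(is_psd(implied_linear_correlation(rho_s)))  # True in this 3x3 example
```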
Minghua Lin (2014)
Let $T=\begin{bmatrix} X & Y \\ 0 & Z \end{bmatrix}$ be an $n$-square matrix, where $X, Z$ are $r$-square and $(n-r)$-square, respectively. Among other determinantal inequalities, it is proved that $\det(I_n+T^*T)\ge \det(I_r+X^*X)\cdot \det(I_{n-r}+Z^*Z)$, with equality if and only if $Y=0$.
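
A quick numerical check of the displayed inequality (a verification sketch, not part of the paper; real matrices stand in for the complex case, so the ordinary transpose plays the role of $T^*$):

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 6, 2
X = rng.standard_normal((r, r))
Z = rng.standard_normal((n - r, n - r))
Y = rng.standard_normal((r, n - r))
T = np.block([[X, Y],
              [np.zeros((n - r, r)), Z]])

lhs = np.linalg.det(np.eye(n) + T.T @ T)
rhs = (np.linalg.det(np.eye(r) + X.T @ X)
       * np.linalg.det(np.eye(n - r) + Z.T @ Z))
print(lhs >= rhs)                    # True; equality requires Y = 0
```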
Song Xi Chen, Bin Guo, Yumou Qiu (2019)
We consider testing the equality of two high-dimensional covariance matrices by carrying out a multi-level thresholding procedure, which is designed to detect sparse and faint differences between the covariances. A novel U-statistic composition is developed to establish the asymptotic distribution of the thresholding statistics, in conjunction with matrix blocking and coupling techniques. We propose a multi-thresholding test that is shown to be powerful in detecting sparse and weak differences between two covariance matrices. The test is shown to have an attractive detection boundary and to attain the minimax optimal rate in signal strength under different regimes of high dimensionality and signal sparsity. Simulation studies are conducted to demonstrate the utility of the proposed test.
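
The authors' statistic is defined precisely in the paper; the following is only a rough sketch of the multi-level thresholding idea, with a deliberately crude entrywise standardization standing in for the paper's careful variance estimates:

```python
import numpy as np

def thresholding_scan(X1, X2, levels=(1.5, 2.0, 2.5)):
    """Illustrative multi-level thresholding of standardized entrywise
    covariance differences between two samples (rows = observations)."""
    n1, n2 = X1.shape[0], X2.shape[0]
    S1 = np.cov(X1, rowvar=False)
    S2 = np.cov(X2, rowvar=False)
    v1, v2 = np.var(X1, axis=0), np.var(X2, axis=0)
    # Crude plug-in standard error for each covariance difference; the
    # paper derives careful entrywise variance estimates instead.
    scale = np.sqrt(np.outer(v1, v1) / n1 + np.outer(v2, v2) / n2)
    Z = (S1 - S2) / scale
    # At each threshold level, aggregate the statistics that survive it.
    return {s: float(np.sum(Z[np.abs(Z) > s] ** 2)) for s in levels}

rng = np.random.default_rng(4)
print(thresholding_scan(rng.standard_normal((100, 20)),
                        rng.standard_normal((120, 20))))
```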
For some estimation and prediction problems, we solve minimization problems with asymmetric loss functions. Usually, the regression coefficients are estimated for such problems. In this paper, we do not perform such an estimation; instead, we give a solution by correcting any given predictions so that the prediction error follows a general normal distribution. With our method, we can not only minimize the expected value of the asymmetric loss but also lower the variance of the loss.
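
A concrete special case of correcting predictions under an asymmetric loss (an illustration, not the paper's general method): under the pinball loss $\rho_\tau(u) = u(\tau - \mathbf{1}\{u < 0\})$, the constant correction minimizing the expected loss of a $N(\mu, \sigma^2)$ prediction error is its $\tau$-quantile, $c = \mu + \sigma\Phi^{-1}(\tau)$.

```python
import numpy as np
from scipy.stats import norm

def optimal_shift(mu, sigma, tau):
    """Constant correction c minimizing E[pinball_tau(error - c)] when
    error ~ Normal(mu, sigma^2): the tau-quantile of the error."""
    return mu + sigma * norm.ppf(tau)

tau, mu, sigma = 0.8, 0.0, 2.0       # penalize under-prediction more
print(optimal_shift(mu, sigma, tau))

# Empirical check against a grid of candidate constant corrections.
rng = np.random.default_rng(2)
e = rng.normal(mu, sigma, 100_000)
def pinball(c):
    u = e - c
    return np.mean(np.maximum(tau * u, (tau - 1) * u))
grid = np.linspace(0.0, 3.0, 301)
print(grid[np.argmin([pinball(c) for c in grid])])  # close to the formula above
```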
Consider a standard white Wishart matrix with parameters $n$ and $p$. Motivated by applications in high-dimensional statistics and signal processing, we perform asymptotic analysis on the maxima and minima of the eigenvalues of all the $m \times m$ principal minors, under the asymptotic regime in which $n, p, m$ go to infinity. Asymptotic results concerning extreme eigenvalues of principal minors of real Wigner matrices are also obtained. In addition, we discuss an application of the theoretical results to the construction of compressed sensing matrices, which provides insights into compressed sensing in signal processing and high-dimensional linear regression in statistics.
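
A Monte Carlo sketch of the object under study, scanning random $m \times m$ principal minors of a white Wishart matrix for their extreme eigenvalues (enumerating all $\binom{p}{m}$ minors is infeasible, so random subsets are an illustrative shortcut; the paper's asymptotic regime is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, m = 200, 50, 5
X = rng.standard_normal((n, p))
W = X.T @ X / n                      # white Wishart (sample covariance) matrix

lam_max, lam_min = -np.inf, np.inf
for _ in range(2000):                # random minors; all C(p, m) is too many
    idx = rng.choice(p, size=m, replace=False)
    eig = np.linalg.eigvalsh(W[np.ix_(idx, idx)])
    lam_min = min(lam_min, eig[0])
    lam_max = max(lam_max, eig[-1])
print(lam_min, lam_max)              # these extremes relate to RIP constants
```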