
Developing Univariate Neurodegeneration Biomarkers with Low-Rank and Sparse Subspace Decomposition

Added by Jianfeng Wu
Publication date: 2020
Language: English





Cognitive decline due to Alzheimer's disease (AD) is closely associated with brain structure alterations captured by structural magnetic resonance imaging (sMRI). This supports the validity of developing sMRI-based univariate neurodegeneration biomarkers (UNB). However, existing UNB work either fails to model large group variances or does not capture the changes induced by AD dementia (ADD). We propose a novel low-rank and sparse subspace decomposition method capable of stably quantifying the morphological changes induced by ADD. Specifically, we propose a numerically efficient rank minimization mechanism to extract the group common structure and impose regularization constraints to encode the original 3D morphometry connectivity. Further, we generate regions of interest (ROIs) with a group difference study between the common subspaces of the $A\beta^+$ AD and $A\beta^-$ cognitively unimpaired (CU) groups. A univariate morphometry index (UMI) is constructed from these ROIs by summarizing individual morphological characteristics weighted by the normalized difference between the $A\beta^+$ AD and $A\beta^-$ CU groups. We use the hippocampal surface radial distance feature to compute the UMIs and validate our work on the Alzheimer's Disease Neuroimaging Initiative (ADNI) cohort. With hippocampal UMIs, the estimated minimum sample sizes needed to detect a 25% reduction in the mean annual change with 80% power and two-tailed $P = 0.05$ are 116, 279, and 387 for the longitudinal $A\beta^+$ AD, $A\beta^+$ mild cognitive impairment (MCI), and $A\beta^+$ CU groups, respectively. Additionally, for MCI patients, UMIs correlate well with the hazard ratio of conversion to AD (4.3, 95% CI = 2.3-8.2) within 18 months. Our experimental results outperform traditional hippocampal volume measures and suggest UMI as a potential UNB.
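The sample-size figures quoted above come from a standard power calculation for detecting a treatment effect on the mean annual change of a biomarker. The sketch below is not taken from the paper; it is a generic illustration of that textbook two-arm formula, with placeholder values for the mean and standard deviation of annual change.

```python
# Generic power calculation for the minimum sample size per arm needed to detect
# a given percentage reduction in the mean annual change of a biomarker.
# Textbook formula only, not code from the paper; mean_change and sd_change
# below are placeholder values, not ADNI estimates.
from scipy.stats import norm

def min_sample_size(mean_change, sd_change, reduction=0.25, alpha=0.05, power=0.80):
    z_alpha = norm.ppf(1 - alpha / 2)    # two-tailed significance threshold
    z_power = norm.ppf(power)            # power requirement
    effect = reduction * mean_change     # detectable difference in annual change
    return 2 * (sd_change * (z_alpha + z_power) / effect) ** 2  # subjects per arm

# Example with hypothetical numbers: annual change of -0.8 units with SD 1.5.
print(round(min_sample_size(mean_change=-0.8, sd_change=1.5)))
```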




Related research

We propose LSDAT, an image-agnostic decision-based black-box attack that exploits low-rank and sparse decomposition (LSD) to dramatically reduce the number of queries and achieve superior fooling rates compared to state-of-the-art decision-based methods under given imperceptibility constraints. LSDAT crafts perturbations in the low-dimensional subspace formed by the sparse component of the input sample and that of an adversarial sample to obtain query efficiency. The specific perturbation of interest is obtained by traversing the path between the input and adversarial sparse components. It is set forth that the proposed sparse perturbation is the sparse perturbation most aligned with the shortest path from the input sample to the decision boundary for some initial adversarial sample (the best sparse approximation of the shortest path, likely to fool the model). Theoretical analyses are provided to justify the functionality of LSDAT. Unlike other dimensionality-reduction-based techniques aimed at improving query efficiency (e.g., ones based on FFT), LSD works directly in the image pixel domain to guarantee that non-$\ell_2$ constraints, such as sparsity, are satisfied. LSD offers better control over the number of queries and provides computational efficiency, as it performs sparse decomposition of the input and adversarial images only once to generate all queries. We demonstrate $\ell_0$-, $\ell_2$-, and $\ell_\infty$-bounded attacks with LSDAT to evince its efficiency compared to baseline decision-based attacks in diverse low query-budget scenarios, as outlined in the experiments.
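As a concrete illustration of the query strategy described above, the following sketch walks along the path between the sparse components of the clean and adversarial images, issuing one model query per step. The helper names `lsd` (returning low-rank and sparse parts) and `is_adversarial` (one black-box query) are assumptions, and this is not the authors' implementation.

```python
import numpy as np

def sparse_path_attack(x, x_adv, lsd, is_adversarial, n_steps=20):
    """Hypothetical sketch: traverse the path between sparse components.

    x              : clean input image (numpy array)
    x_adv          : some initial adversarial image
    lsd            : callable returning (low_rank_part, sparse_part) of an image
    is_adversarial : callable issuing one model query, returns True/False
    """
    _, s_x = lsd(x)           # sparse component of the clean input (computed once)
    _, s_adv = lsd(x_adv)     # sparse component of the adversarial sample (computed once)
    direction = s_adv - s_x   # all queries move only along this sparse direction
    for t in np.linspace(0.0, 1.0, n_steps):
        candidate = x + t * direction     # candidate perturbation in the sparse subspace
        if is_adversarial(candidate):     # one query per candidate
            return candidate, t           # smallest step that crosses the decision boundary
    return x_adv, 1.0                     # fall back to the initial adversarial sample
```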
We consider the problem of estimating high-dimensional covariance matrices of a particular structure, namely the sum of a low-rank matrix and a sparse matrix. This covariance structure has a wide range of applications, including factor analysis and random effects models. We propose a Bayesian method for estimating the covariance matrices by representing the covariance model as a factor model with an unknown number of latent factors. We introduce binary indicators for factor selection and rank estimation of the low-rank component, combined with a Bayesian lasso method for estimating the sparse component. Simulation studies show that our method can recover both the rank and the sparsity of the two components. We further extend our method to a graphical factor model in which the graphical model of the residuals, as well as the number of factors, is of interest. We employ a hyper-inverse Wishart prior for modeling decomposable graphs of the residuals, and a Bayesian graphical lasso selection method for unrestricted graphs. We show through simulations that the extended models can successfully recover both the number of latent factors and the graphical model of the residuals when the sample size is sufficient relative to the dimension.
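To make the covariance structure concrete, here is a small, self-contained simulation of data whose covariance is the sum of a rank-k factor part and a sparse part. It only illustrates the model being estimated; the Bayesian machinery (factor-selection indicators, Bayesian lasso, hyper-inverse Wishart prior) is not reproduced, and all sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
p, k, n = 30, 3, 200                        # dimension, latent factors, sample size (arbitrary)

Lambda = rng.normal(size=(p, k))            # factor loadings -> rank-k component Lambda @ Lambda.T

S_off = np.zeros((p, p))
iu = np.triu_indices(p, k=1)
idx = rng.choice(iu[0].size, size=15, replace=False)
S_off[iu[0][idx], iu[1][idx]] = rng.normal(scale=0.3, size=15)
S_off = S_off + S_off.T                     # a few symmetric off-diagonal entries -> sparse part
S = S_off + np.diag(np.abs(S_off).sum(axis=1) + 1.0)  # diagonally dominant, hence positive definite

Sigma = Lambda @ Lambda.T + S               # low-rank + sparse covariance
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)  # data the Bayesian model would be fit to
```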
Tensor completion refers to the task of estimating missing data from an incomplete measurement or observation, a core problem arising frequently in big data analysis, computer vision, and network engineering. Due to the multidimensional nature of high-order tensors, matrix approaches, e.g., matrix factorization and direct matricization of tensors, are often not ideal for tensor completion and recovery. In this paper, we introduce a unified low-rank and sparse enhanced Tucker decomposition model for tensor completion. Our model possesses a sparse regularization term to promote a sparse core tensor in the Tucker decomposition, which is beneficial for tensor data compression. Moreover, we enforce low-rank regularization terms on the factor matrices of the Tucker decomposition to induce low-rankness of the tensor at a cheap computational cost. Numerically, we propose a customized ADMM with easy subproblems to solve the underlying model. Remarkably, our model can deal with different types of real-world data sets, since it exploits the potential periodicity and inherent correlation properties appearing in tensors. A series of computational experiments on real-world data sets, including internet traffic data, color images, and face recognition, demonstrates that our model achieves higher recovery accuracy than many existing state-of-the-art matricization and tensorization approaches.
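The sketch below gives a plain-numpy baseline for Tucker-based completion: fill the missing entries, fit a truncated HOSVD, refill from the reconstruction, and repeat. It omits the paper's sparse-core and low-rank factor regularizers and its ADMM solver; it is only meant to show where a Tucker model enters a completion loop.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding of a tensor into a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_multiply(T, M, mode):
    """Multiply tensor T by matrix M along the given mode."""
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1), 0, mode)

def hosvd(T, ranks):
    """Truncated higher-order SVD: factor matrices from each unfolding, then the core."""
    factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
               for m, r in enumerate(ranks)]
    core = T
    for mode, U in enumerate(factors):
        core = mode_multiply(core, U.T, mode)
    return core, factors

def tucker_complete(T_obs, mask, ranks, n_iters=50):
    """EM-style imputation: alternate a truncated Tucker fit with refilling missing entries."""
    T = np.where(mask, T_obs, T_obs[mask].mean())   # initialize missing entries with the observed mean
    for _ in range(n_iters):
        core, factors = hosvd(T, ranks)
        T_hat = core
        for mode, U in enumerate(factors):
            T_hat = mode_multiply(T_hat, U, mode)   # reconstruct from the Tucker model
        T = np.where(mask, T_obs, T_hat)            # keep observed values, impute the rest
    return T
```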
We consider the problem of direction-of-arrival (DOA) estimation in unknown partially correlated noise environments where the noise covariance matrix is sparse. A sparse noise covariance matrix is a common model for a sparse array of sensors consisting of several widely separated subarrays. Since the interelement spacing among sensors in a subarray is small, the noise within a subarray is in general spatially correlated, while, due to the large distances between subarrays, the noise between them is uncorrelated. Consequently, the noise covariance matrix of such an array has a block-diagonal structure, which is indeed sparse. Moreover, in an ordinary nonsparse array, because of the small distance between adjacent sensors, there is noise coupling between neighboring sensors, whereas one can assume that nonadjacent sensors have spatially uncorrelated noise, which again makes the array noise covariance matrix sparse. Utilizing recently available tools in low-rank/sparse matrix decomposition, matrix completion, and sparse representation, we propose a novel method that can resolve possibly correlated or even coherent sources in the aforementioned partly correlated noise. In particular, when the sources are uncorrelated, our approach involves solving a second-order cone program (SOCP), and if they are correlated or coherent, one needs to solve a computationally harder convex program. We demonstrate the effectiveness of the proposed algorithm by numerical simulations and comparison to the Cramér-Rao bound (CRB).
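A minimal convex sketch of the covariance-splitting idea implied above: decompose a sample covariance into a PSD low-rank (signal) part and a sparse (noise) part, using the trace as the low-rank surrogate and an entrywise l1 penalty for sparsity. This is a real-valued, simplified stand-in written with CVXPY, not the authors' SOCP formulation, and the weight `lam` is an assumption.

```python
import numpy as np
import cvxpy as cp

def split_covariance(R_hat, lam=0.1):
    """Split a sample covariance R_hat into a PSD low-rank part L and a sparse part N."""
    n = R_hat.shape[0]
    L = cp.Variable((n, n), symmetric=True)   # signal covariance (low rank encouraged via trace)
    N = cp.Variable((n, n), symmetric=True)   # noise covariance (sparsity encouraged via l1)
    objective = cp.Minimize(cp.trace(L) + lam * cp.sum(cp.abs(N)))
    constraints = [L >> 0, L + N == R_hat]    # PSD signal part, exact additive split
    cp.Problem(objective, constraints).solve()
    return L.value, N.value

# R_hat could, for instance, be the sample covariance of array snapshots: np.cov(snapshots).
```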
Matrix sensing is the problem of reconstructing a low-rank matrix from a few linear measurements. In many applications, such as collaborative filtering, the famous Netflix prize problem, and seismic data interpolation, there exists some prior information about the column and row spaces of the ground-truth low-rank matrix. In this paper, we exploit this prior information by proposing a weighted optimization problem whose objective function promotes both low rank and prior subspace information. Using recent results in conic integral geometry, we obtain the unique optimal weights that minimize the required number of measurements. As simulation results confirm, the proposed convex program with optimal weights requires substantially fewer measurements than regular nuclear norm minimization.
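For context, the sketch below solves the unweighted baseline that the weighted formulation improves on: recovering a low-rank matrix from random linear measurements by nuclear-norm minimization with CVXPY. The weighted objective with prior-subspace information is not reproduced, and the problem sizes are made up.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n1, n2, r, m = 20, 20, 2, 300                                  # made-up matrix size, rank, measurements
X_true = rng.normal(size=(n1, r)) @ rng.normal(size=(r, n2))   # ground-truth low-rank matrix
A = rng.normal(size=(m, n1, n2))                               # random Gaussian sensing matrices
b = np.array([float(np.sum(Ai * X_true)) for Ai in A])         # linear measurements <A_i, X>

X = cp.Variable((n1, n2))
constraints = [cp.sum(cp.multiply(Ai, X)) == bi for Ai, bi in zip(A, b)]
cp.Problem(cp.Minimize(cp.normNuc(X)), constraints).solve()    # nuclear norm promotes low rank

print("relative recovery error:",
      np.linalg.norm(X.value - X_true) / np.linalg.norm(X_true))
```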