
Steerable Principal Components for Space-Frequency Localized Images

Added by Boris Landa
Publication date: 2016
Language: English





This paper describes a fast and accurate method for obtaining steerable principal components from a large dataset of images, assuming the images are well localized in space and frequency. The obtained steerable principal components are optimal for expanding the images in the dataset together with all of their rotations. The method relies on first expanding the images in a series of two-dimensional Prolate Spheroidal Wave Functions (PSWFs), where the expansion coefficients are evaluated using a specially designed numerical integration scheme. The expansion coefficients are then used to construct a rotationally-invariant covariance matrix which admits a block-diagonal structure, and the eigendecomposition of its blocks provides the desired steerable principal components. The proposed method is shown to be faster than existing methods, while providing error bounds that guarantee its accuracy.
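To make the block-diagonal step concrete, here is a minimal Python sketch (not the paper's implementation) of the per-frequency eigendecomposition. It assumes the PSWF expansion coefficients have already been computed and grouped by angular frequency m; the dictionary `coeffs`, the array shapes, and the component count are all illustrative.

```python
# Minimal sketch: eigendecompose each angular-frequency block of the covariance.
# coeffs[m] is assumed to be an (n_images, n_radial) complex array of PSWF
# expansion coefficients for angular frequency m; all names are illustrative.
import numpy as np

def steerable_pca_blocks(coeffs, n_components=10):
    components = {}
    for m, a_m in coeffs.items():
        # Averaging over rotations makes the covariance block-diagonal in m,
        # so each block can be diagonalized independently.
        c_m = (a_m.conj().T @ a_m) / a_m.shape[0]   # (n_radial, n_radial) block
        eigvals, eigvecs = np.linalg.eigh(c_m)      # Hermitian eigendecomposition
        order = np.argsort(eigvals)[::-1]           # sort eigenvalues descending
        components[m] = (eigvals[order][:n_components],
                         eigvecs[:, order][:, :n_components])
    return components

# Example with random stand-in coefficients for angular frequencies 0..3:
rng = np.random.default_rng(0)
coeffs = {m: rng.standard_normal((500, 40)) + 1j * rng.standard_normal((500, 40))
          for m in range(4)}
pcs = steerable_pca_blocks(coeffs)
```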



Related Research

Zhizhen Zhao, Yoel Shkolnisky, 2014
Cryo-electron microscopy nowadays often requires the analysis of hundreds of thousands of 2D images as large as a few hundred pixels in each direction. Here we introduce an algorithm that efficiently and accurately performs principal component analysis (PCA) for a large set of two-dimensional images, and, for each image, the set of its uniform rotations in the plane and their reflections. For a dataset consisting of $n$ images of size $L \times L$ pixels, the computational complexity of our algorithm is $O(nL^3 + L^4)$, while existing algorithms take $O(nL^4)$. The new algorithm computes the expansion coefficients of the images in a Fourier-Bessel basis efficiently using the non-uniform fast Fourier transform. We compare the accuracy and efficiency of the new algorithm with traditional PCA and existing algorithms for steerable PCA.
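As a rough illustration of the NUFFT step, the sketch below samples an image's Fourier transform on a polar grid with the `finufft` package and then separates angular frequencies with an FFT along the angular direction. The grid sizes and function names are illustrative assumptions, not the authors' reference code.

```python
# Sketch: evaluate an image's Fourier transform on a polar grid via a type-2
# NUFFT, then FFT over the angular axis to split samples by angular frequency.
import numpy as np
import finufft

def polar_fourier_samples(img, n_radii=32, n_theta=64):
    radii = (np.arange(1, n_radii + 1) / n_radii) * np.pi   # radial frequencies
    theta = 2 * np.pi * np.arange(n_theta) / n_theta
    # Nonuniform target points (flattened polar grid) in [-pi, pi)^2.
    kx = (radii[:, None] * np.cos(theta)[None, :]).ravel()
    ky = (radii[:, None] * np.sin(theta)[None, :]).ravel()
    # Type-2 NUFFT: evaluate the image's Fourier series at the polar points.
    f = finufft.nufft2d2(kx, ky, img.astype(np.complex128))
    f = f.reshape(n_radii, n_theta)
    # FFT along the angular axis separates the angular frequencies m.
    return np.fft.fft(f, axis=1)

img = np.random.default_rng(1).standard_normal((64, 64))
coeffs_by_m = polar_fourier_samples(img)   # column m ~ angular frequency m
```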
Jochen Einbeck, 2007
Often the relation between the variables constituting a multivariate data space might be characterized by one or more of the terms: "nonlinear," "branched," "disconnected," "bended," "curved," "heterogeneous," or, more generally, "complex." In these cases, simple principal component analysis (PCA) as a tool for dimension reduction can fail badly. Of the many alternative approaches proposed so far, local approximations of PCA are among the most promising. This paper will give a short review of localized PCA.
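As a generic example of one standard localized-PCA strategy from this literature (not Einbeck's specific method), the sketch below partitions the data with k-means and fits a separate PCA in each local region; cluster and component counts are illustrative.

```python
# Sketch: cluster-wise ("localized") PCA — partition the data, then fit a
# PCA per cluster so each model captures the locally linear structure.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

def local_pca(X, n_clusters=3, n_components=1):
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)
    models = []
    for c in range(n_clusters):
        models.append(PCA(n_components=n_components).fit(X[labels == c]))
    return labels, models

# A curved ("banana"-shaped) dataset where a single global PCA fails:
t = np.linspace(0, np.pi, 300)
X = np.column_stack([np.cos(t), np.sin(t)])
X += 0.05 * np.random.default_rng(2).standard_normal((300, 2))
labels, models = local_pca(X)
```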
The two-dimensional principal component analysis (2DPCA) has become one of the most powerful tools among artificial intelligence algorithms. In this paper, we review 2DPCA and its variations, and propose a general ridge regression model to extract features from both row and column directions. To enhance the generalization ability of the extracted features, a novel relaxed 2DPCA (R2DPCA) is proposed with a new ridge regression model. R2DPCA generates a weighting vector by utilizing the label information, and maximizes a relaxed criterion by applying an optimization algorithm to obtain the essential features. R2DPCA-based approaches for face recognition and image reconstruction are also proposed, and the selected principal components are weighted to enhance the role of the main components. Numerical experiments on well-known standard databases indicate that R2DPCA has high generalization ability and can achieve a higher recognition rate than state-of-the-art methods, including deep learning methods such as CNNs, DBNs, and DNNs.
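For context, here is a minimal numpy sketch of plain 2DPCA, the baseline that R2DPCA relaxes: each image matrix is projected with the top eigenvectors of the column-direction image covariance, without vectorizing the image. Shapes, names, and component counts are illustrative, and the relaxation/weighting of R2DPCA is not reproduced here.

```python
# Sketch: plain 2DPCA on a stack of image matrices.
import numpy as np

def twod_pca(images, n_components=5):
    """images: (n, h, w) array. Returns projection matrix X of shape (w, k)."""
    mean = images.mean(axis=0)
    centered = images - mean
    # Image covariance G = (1/n) * sum_i (A_i - mean)^T (A_i - mean), size (w, w).
    G = np.einsum('nij,nik->jk', centered, centered) / images.shape[0]
    eigvals, eigvecs = np.linalg.eigh(G)
    X = eigvecs[:, ::-1][:, :n_components]   # top eigenvectors of G
    features = centered @ X                  # each image -> (h, k) feature matrix
    return X, features

imgs = np.random.default_rng(3).standard_normal((100, 32, 32))
X, feats = twod_pca(imgs)
```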
In this paper we argue that (lexical) meaning in science can be represented in a 13-dimensional Meaning Space. This space is constructed by applying principal component analysis (singular value decomposition) to the matrix of word-category relative information gains, where the categories are those used by the Web of Science, and the words are taken from a reduced word set drawn from texts in the Web of Science. We show that this reduced word set plausibly represents all texts in the corpus, so that the principal component analysis has some objective meaning with respect to the corpus. We argue that 13 dimensions are adequate to describe the meaning of scientific texts, and hypothesise about the qualitative meaning of the principal components.
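As a hedged illustration of this construction, the sketch below applies an SVD to a words-by-categories information-gain matrix and keeps 13 components. The matrix here is random stand-in data; in the paper it would be built from Web of Science categories and the reduced word set.

```python
# Sketch: PCA via SVD on a (words x categories) relative-information-gain
# matrix, keeping a 13-dimensional "meaning space" for the words.
import numpy as np

rng = np.random.default_rng(4)
gains = rng.random((2000, 250))              # stand-in words x categories matrix
gains -= gains.mean(axis=0)                  # center columns before PCA
U, S, Vt = np.linalg.svd(gains, full_matrices=False)
meaning_space = U[:, :13] * S[:13]           # 13-dimensional word coordinates
explained = (S[:13] ** 2).sum() / (S ** 2).sum()   # variance captured by 13 dims
```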
Recent works show that deep neural networks trained on image classification datasets are biased towards textures. These models are easily fooled by applying small high-frequency perturbations to clean images. In this paper, we learn robust image classification models by removing high-frequency components. Specifically, we develop a differentiable high-frequency suppression module based on the discrete Fourier transform (DFT). Combined with adversarial training, this approach won us 5th place in the IJCAI-2019 Alibaba Adversarial AI Challenge. Our code is available online.
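A minimal numpy sketch of DFT-based high-frequency suppression is given below: Fourier components beyond a radial cutoff are zeroed and the image is inverted back. The cutoff and sizes are illustrative assumptions; a trainable, differentiable version, as the paper describes, would implement the same masking in a framework such as PyTorch with torch.fft.

```python
# Sketch: low-pass an image in the DFT domain by zeroing high frequencies.
import numpy as np

def suppress_high_freq(img, cutoff=0.25):
    """Zero out Fourier components beyond `cutoff` (fraction of Nyquist)."""
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    radius = np.sqrt((yy / (h / 2)) ** 2 + (xx / (w / 2)) ** 2)
    F[radius > cutoff] = 0                    # low-pass mask in the DFT domain
    return np.real(np.fft.ifft2(np.fft.ifftshift(F)))

clean = suppress_high_freq(np.random.default_rng(5).standard_normal((64, 64)))
```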
