Current orthogonal matching pursuit (OMP) algorithms calculate the correlation between two vectors using the inner product and minimize the mean square error, both of which are suboptimal when the observation data contain non-Gaussian noise or outliers. To overcome these problems, a new OMP algorithm is developed based on information-theoretic learning (ITL) and built on the following new techniques: (1) an ITL-based correlation (ITL-Correlation) is developed as a new similarity measure that better exploits higher-order statistics of the data and is robust against many different types of noise and outliers in a sparse representation framework; (2) a non-second-order statistical measure and minimization method is developed to improve the robustness of OMP by overcoming the Gaussianity assumption inherent in cost functions based on second-order moments. Experimental results on both simulated and real-world data consistently demonstrate the superiority of the proposed OMP algorithm in data recovery, image reconstruction, and classification.
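As an illustration of the idea behind an ITL-based correlation (the abstract does not spell out its exact form), the sketch below replaces OMP's inner-product atom selection with a Gaussian-kernel correntropy score, a standard ITL similarity measure; the function names and the kernel width sigma are illustrative assumptions only.

import numpy as np

def correntropy(x, y, sigma=1.0):
    # Gaussian-kernel correntropy: a robust similarity measure from ITL that
    # implicitly involves higher-order moments of the error x - y.
    # (Illustrative stand-in for the paper's ITL-Correlation.)
    return np.mean(np.exp(-(x - y) ** 2 / (2 * sigma ** 2)))

def itl_omp_select(D, r, support, sigma=1.0):
    # One OMP-style selection step: choose the atom whose least-squares-scaled
    # copy is most similar to the residual r under correntropy, instead of the
    # usual inner-product correlation.
    scores = np.full(D.shape[1], -np.inf)
    for j in range(D.shape[1]):
        if j in support:
            continue                         # do not reselect chosen atoms
        d = D[:, j]
        alpha = (d @ r) / (d @ d)            # least-squares scale of atom j
        scores[j] = correntropy(alpha * d, r, sigma)
    return int(np.argmax(scores))

In the spirit of point (2) above, the coefficient update could likewise use a non-second-order (e.g. correntropy-based) criterion rather than plain least squares; that step is omitted here for brevity.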
Deep neural networks (DNNs) have become the de facto standard for today's biometric recognition solutions. A serious but still overlooked problem in these DNN-based recognition systems is their vulnerability to adversarial attacks, which can cause the output of a DNN to change drastically under only tiny perturbations of its input. Such distortions can lead to an unexpected match between a valid biometric and a synthetic one constructed by a strategic attacker, raising a serious security concern. In this work, we show how this issue can be addressed by learning robust biometric features through a deep, information-theoretic framework, which builds upon the recent deep variational information bottleneck method but is carefully adapted to biometric recognition tasks. Empirical evaluation demonstrates that our method not only offers stronger robustness against adversarial attacks but also achieves better recognition performance than state-of-the-art approaches.
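For reference, the deep variational information bottleneck that this framework builds on, in its standard formulation (the paper's biometric-specific adaptation is not detailed in the abstract), trains a stochastic encoder $p_\theta(z \mid x)$ and classifier $q_\phi(y \mid z)$ by minimizing $$\mathbb{E}_{(x,y)}\,\mathbb{E}_{z \sim p_\theta(z \mid x)}\!\left[-\log q_\phi(y \mid z)\right] + \beta\,\mathbb{E}_{x}\!\left[\mathrm{KL}\!\left(p_\theta(z \mid x)\,\|\,r(z)\right)\right],$$ a variational bound on $-I(Z;Y) + \beta I(Z;X)$. The bottleneck weight $\beta$ controls how much input information the features $Z$ retain, which is the mechanism by which compression is expected to limit the leverage of small adversarial perturbations.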
Sparse approximation using highly over-complete dictionaries is a state-of-the-art tool for many imaging applications, including denoising, super-resolution, compressive sensing, light-field analysis, and object recognition. Unfortunately, the applicability of such methods is severely hampered by the computational burden of sparse approximation: these algorithms are linear or super-linear in both the data dimensionality and the size of the dictionary. We propose a framework for learning the hierarchical structure of over-complete dictionaries that enables fast computation of sparse representations. Our method builds on tree-based strategies for nearest neighbor matching and presents domain-specific enhancements that are highly efficient for the analysis of image patches. Contrary to most popular methods for building spatial data structures, our method relies on shallow, balanced trees with relatively few layers. We present an extensive array of experiments on several applications, such as image denoising/super-resolution and compressive video/light-field sensing, where we achieve 100-1000x speedups in practice (with less than 1 dB loss in accuracy).
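The abstract does not describe the tree construction in detail; the following is a minimal sketch, assuming plain k-means splits over dictionary atoms, of how a shallow hierarchy can replace an exhaustive scan when selecting the atom most correlated with a residual (the branching factor, depth, and function names are illustrative, and the paper's balanced, patch-specific construction is not reproduced here).

import numpy as np
from sklearn.cluster import KMeans

def build_atom_tree(D, branching=8, depth=2):
    # Shallow hierarchy over the columns (atoms) of D: each internal node
    # clusters its atoms with k-means; leaves store atom indices.
    def split(indices, level):
        if level == depth or len(indices) <= branching:
            return {"leaf": indices}
        km = KMeans(n_clusters=branching, n_init=10).fit(D[:, indices].T)
        children = []
        for c in range(branching):
            sub = indices[km.labels_ == c]
            if len(sub) > 0:
                children.append((km.cluster_centers_[c], split(sub, level + 1)))
        return {"children": children}
    return split(np.arange(D.shape[1]), 0)

def candidate_atoms(tree, r):
    # Greedy descent: follow the child whose centroid is most correlated with
    # the residual r, then return the small set of atoms in that leaf to be
    # scanned exhaustively, instead of scanning the whole dictionary.
    node = tree
    while "children" in node:
        node = max(node["children"], key=lambda cn: abs(cn[0] @ r))[1]
    return node["leaf"]

The per-query cost then scales with the tree depth times the branching factor plus one small leaf scan rather than with the full dictionary size, which is where speedups of this kind come from.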
In this paper, we put forth a new joint sparse recovery algorithm called signal space matching pursuit (SSMP). The key idea of the proposed SSMP algorithm is to sequentially investigate the support of jointly sparse vectors to minimize the subspace distance to the residual space. Our performance guarantee analysis indicates that SSMP accurately reconstructs any row $K$-sparse matrix of rank $r$ in the full row rank scenario if the sampling matrix $\mathbf{A}$ satisfies $\operatorname{krank}(\mathbf{A}) \ge K+1$, which meets the fundamental minimum requirement on $\mathbf{A}$ to ensure exact recovery. We also show that SSMP guarantees exact reconstruction in at most $K-r+\lceil \frac{r}{L} \rceil$ iterations, provided that $\mathbf{A}$ satisfies the restricted isometry property (RIP) of order $L(K-r)+r+1$ with $$\delta_{L(K-r)+r+1} < \max \left\{ \frac{\sqrt{r}}{\sqrt{K+\frac{r}{4}}+\sqrt{\frac{r}{4}}}, \; \frac{\sqrt{L}}{\sqrt{K}+1.15 \sqrt{L}} \right\},$$ where $L$ is the number of indices chosen in each iteration. This implies that the requirement on the RIP constant becomes less restrictive as $r$ increases. Such behavior seems natural but has not been reported for most conventional methods. We further show that if $r=1$, then by running more than $K$ iterations, the performance guarantee of SSMP can be improved to $\delta_{\lfloor 7.8K \rfloor} \le 0.155$. In addition, we show that under a suitable RIP condition, the reconstruction error of SSMP is upper bounded by a constant multiple of the noise power, which demonstrates the stability of SSMP under measurement noise. Finally, through extensive numerical experiments, we show that SSMP outperforms conventional joint sparse recovery algorithms in both noiseless and noisy scenarios.
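As a concrete reference for the greedy structure of the algorithm: the exact SSMP rule selects the $L$ indices that minimize a subspace distance between the residual space and the signal space; the proxy below simply ranks columns by their correlation with the residual matrix (SOMP-style) and is an illustrative simplification only.

import numpy as np

def ssmp_sketch(A, Y, K, L=1, max_iter=None):
    # Simplified joint sparse recovery loop in the spirit of SSMP:
    # pick L new indices per iteration, refit the selected rows by least
    # squares, and update the residual matrix.
    m, n = A.shape
    S = []
    R = Y.copy()
    max_iter = max_iter or K
    for _ in range(max_iter):
        scores = np.linalg.norm(A.T @ R, axis=1)   # proxy selection criterion
        if S:
            scores[S] = -np.inf                    # exclude chosen indices
        S += list(np.argsort(scores)[::-1][:L])
        X_S, *_ = np.linalg.lstsq(A[:, S], Y, rcond=None)
        R = Y - A[:, S] @ X_S
        if np.linalg.norm(R) < 1e-10:
            break
    X = np.zeros((n, Y.shape[1]))
    X[S] = X_S
    return X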
Grover discovered a quantum algorithm for identifying a target element in an unstructured search universe of $N$ items in approximately $\sqrt{N}$ queries to a quantum oracle, thus achieving a square-root speed-up over classical algorithms. We present an information-theoretic analysis of Grover's algorithm and show that the square-root speed-up is the best attainable result using Grover's oracle.
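For context on the query count (standard Grover analysis; this is background rather than part of the paper's information-theoretic argument): with a single marked item among $N$, the initial uniform superposition makes an angle $\theta$ with the unmarked subspace, where $\sin\theta = 1/\sqrt{N}$, and each Grover iteration rotates the state by $2\theta$, so the target is reached after about $$k \approx \frac{\pi}{4\theta} \approx \frac{\pi}{4}\sqrt{N}$$ iterations (one oracle query each). This is the square-root scaling whose optimality the paper establishes.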
Point set matching is important in computer vision, feature extraction, fingerprint matching, motion estimation, and related tasks. This paper proposes a robust point set matching method based on an iterative algorithm that is robust to noise. First, we compute the transformations implied by all candidate correspondences between the two point sets. A similarity matrix is then computed to measure the likelihood that two candidate transformations are both true. We iteratively update the matching score matrix using this similarity matrix, and the final correspondence is obtained by running a graph matching algorithm. Experimental results show that our approach is robust to outliers and jitter.
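The exact update rule is not specified in the abstract; the sketch below follows the described pipeline under the assumption of a power-iteration (spectral-style) refinement of the matching scores, with translations as the per-pair transformations and the Hungarian algorithm as the final graph matching step (all of these specific choices are illustrative).

import numpy as np
from scipy.optimize import linear_sum_assignment

def match_point_sets(P, Q, sigma=1.0, iters=50):
    # P: (n, d) and Q: (m, d) point sets.
    # 1) Candidate correspondences are all pairs (i, j).
    # 2) Each candidate implies a transformation (here: a translation).
    # 3) A similarity matrix scores how likely two candidates are both true
    #    (here: agreement of their implied translations).
    # 4) Matching scores are refined by power-iteration-style updates.
    # 5) The final assignment is extracted with the Hungarian algorithm.
    n, m = len(P), len(Q)
    cand = [(i, j) for i in range(n) for j in range(m)]
    T = np.array([Q[j] - P[i] for i, j in cand])           # implied transforms
    diff = np.linalg.norm(T[:, None, :] - T[None, :, :], axis=2)
    S = np.exp(-diff ** 2 / (2 * sigma ** 2))               # similarity matrix
    x = np.ones(len(cand)) / len(cand)                      # matching scores
    for _ in range(iters):
        x = S @ x
        x /= np.linalg.norm(x)
    rows, cols = linear_sum_assignment(-x.reshape(n, m))    # maximize score
    return list(zip(rows.tolist(), cols.tolist()))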