
Fingerprint Classification Based on Depth Neural Network

Published by: Ruxin Wang
Publication date: 2014
Research field: Informatics Engineering
Paper language: English





Fingerprint classification is an effective technique for reducing the number of candidate fingerprints at the matching stage of an automatic fingerprint identification system (AFIS). In recent years, deep learning has emerged as a technology that has achieved great success in many fields, such as image processing and natural language processing. In this paper, we use only the orientation field as the input feature and adopt a new method based on a deep neural network, stacked sparse autoencoders, for fingerprint classification. For the four-class problem, we achieve a classification accuracy of 93.1% on the NIST-DB4 database using a deep network with three hidden layers (with 1.8% rejection). We then propose a novel fuzzy classification method that uses the two highest classification probabilities and effectively enhances accuracy. By adjusting only the probability threshold, the classification accuracy reaches 96.1% (threshold 0.85), 97.2% (threshold 0.90), and 98.0% (threshold 0.95). With this fuzzy method, we obtain higher accuracy than other methods.
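
A minimal sketch of the two ideas above, assuming Python/PyTorch: a plain feed-forward network with three hidden layers standing in for the stacked sparse autoencoders (the layer-wise sparse pre-training is omitted), plus a hypothetical fuzzy_predict helper implementing the two-probability threshold rule. The input dimension, layer sizes, and names are illustrative, not the authors' exact configuration.

    import torch
    import torch.nn as nn

    class StackedAEClassifier(nn.Module):
        # Three hidden layers, as in the depth network described above; in the
        # paper each hidden layer would first be pre-trained as a sparse autoencoder.
        def __init__(self, in_dim=1680, hidden=(400, 200, 100), n_classes=4):
            super().__init__()
            dims = (in_dim,) + tuple(hidden)
            layers = []
            for d_in, d_out in zip(dims[:-1], dims[1:]):
                layers += [nn.Linear(d_in, d_out), nn.Sigmoid()]
            self.encoder = nn.Sequential(*layers)
            self.head = nn.Linear(hidden[-1], n_classes)

        def forward(self, x):          # x: flattened orientation-field features
            return self.head(self.encoder(x))

    def fuzzy_predict(logits, threshold=0.90):
        # Keep the top class when its probability clears the threshold; otherwise
        # return the two most probable classes as a fuzzy (two-class) decision.
        probs = torch.softmax(logits, dim=-1)
        top = probs.topk(2, dim=-1)
        out = []
        for p, idx in zip(top.values, top.indices):
            out.append([idx[0].item()] if p[0] >= threshold else idx.tolist())
        return out

Raising the threshold sends more samples into the fuzzy two-class case, which is how the reported 96.1%/97.2%/98.0% trade-off is obtained.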




Read also

Digital holography enables us to reconstruct objects in three-dimensional space from holograms captured by an imaging device. For the reconstruction, we need to know the depth position of the recorded object in advance. In this study, we propose depth prediction using convolutional neural network (CNN)-based regression. In previous research, the depth of an object was estimated from images reconstructed at different depth positions from a hologram, using a metric that indicates the most focused depth position; however, such a depth search is time-consuming. The CNN of the proposed method can directly predict the depth position with millimeter precision from holograms.
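
As a rough illustration of the regression formulation, the sketch below (Python/PyTorch assumed; the architecture and names are illustrative, not the paper's network) maps a single-channel hologram directly to one scalar depth value, replacing the per-depth reconstruction search.

    import torch
    import torch.nn as nn

    class DepthRegressor(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.head = nn.Linear(32, 1)   # single scalar: depth position (e.g. in mm)

        def forward(self, hologram):       # hologram: (batch, 1, H, W)
            return self.head(self.features(hologram).flatten(1))

Training would minimise an L1 or L2 loss against the known recording depth of each hologram.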
Defect detection and classification technology has shifted from traditional manual visual inspection to intelligent automated inspection, but most current defect detection methods train detection models in a data-driven way, which is difficult given how hard it is to collect some sample data in industrial settings. We apply zero-shot learning technology to the industrial field. In the existing Latent Feature Guide Attribute Attention (LFGAA) zero-shot image classification network, the output latent attributes and the artificially defined attributes differ in the semantic space, which degrades model performance. To address this, we propose an LFGAA network based on semantic feedback and improve performance by constructing semantic embedding modules and feedback mechanisms. At the same time, for the common domain-shift problem in zero-shot learning, and building on the co-training idea of letting different views of the data learn from each other through their differing information, we propose an Ensemble Co-training algorithm that adaptively reduces the prediction error in image tag embedding from multiple angles. Experiments on a zero-shot dataset and on a cylinder-liner dataset from the industrial field provide competitive results.
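
The ensemble co-training step can be pictured as a weighted fusion of the label scores produced from different views of the data. The sketch below (Python/NumPy assumed) uses fixed view weights for simplicity; the function name and the weighting rule are illustrative, whereas the paper's scheme adapts the combination to reduce tag-embedding error.

    import numpy as np

    def ensemble_tag_embedding(view_scores, view_weights):
        # view_scores: list of (n_samples, n_classes) score matrices, one per data view.
        # view_weights: per-view reliability weights (fixed here; adaptive in the paper).
        w = np.asarray(view_weights, dtype=float)
        w = w / w.sum()
        fused = sum(wi * s for wi, s in zip(w, view_scores))
        return fused.argmax(axis=1)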
Convolutional Neural Networks (CNN) are state-of-the-art models for many image classification tasks. However, to recognize cancer subtypes automatically, training a CNN on gigapixel resolution Whole Slide Tissue Images (WSI) is currently computationally impossible. The differentiation of cancer subtypes is based on cellular-level visual features observed on image patch scale. Therefore, we argue that in this situation, training a patch-level classifier on image patches will perform better than or similar to an image-level classifier. The challenge becomes how to intelligently combine patch-level classification results and model the fact that not all patches will be discriminative. We propose to train a decision fusion model to aggregate patch-level predictions given by patch-level CNNs, which to the best of our knowledge has not been shown before. Furthermore, we formulate a novel Expectation-Maximization (EM) based method that automatically locates discriminative patches robustly by utilizing the spatial relationships of patches. We apply our method to the classification of glioma and non-small-cell lung carcinoma cases into subtypes. The classification accuracy of our method is similar to the inter-observer agreement between pathologists. Although it is impossible to train CNNs on WSIs, we experimentally demonstrate using a comparable non-cancer dataset of smaller images that a patch-based CNN can outperform an image-based CNN.
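
A minimal sketch of the fusion step, assuming Python/NumPy: patch-level class probabilities are combined into one slide-level label, optionally weighted by how discriminative each patch is (e.g. the EM posteriors described above). The function and the simple weighted average are illustrative; the paper trains a dedicated decision-fusion model.

    import numpy as np

    def fuse_patch_predictions(patch_probs, patch_weights=None):
        # patch_probs: (n_patches, n_subtypes) softmax outputs of the patch-level CNN.
        # patch_weights: optional per-patch discriminativeness; None means plain averaging.
        patch_probs = np.asarray(patch_probs, dtype=float)
        if patch_weights is None:
            patch_weights = np.ones(len(patch_probs))
        w = np.asarray(patch_weights, dtype=float)
        slide_probs = (w[:, None] * patch_probs).sum(axis=0) / w.sum()
        return int(slide_probs.argmax())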
Weiya Fan, 2020
Fingerprint image denoising is a very important step in fingerprint identification. To improve the denoising of fingerprint images, we design a fingerprint denoising algorithm based on a deep encoder-decoder network, in which the encoder subnet learns the fingerprint features of noisy images and the decoder subnet reconstructs the original fingerprint image from those features to achieve denoising. Dilated convolutions in the network enlarge the receptive field without increasing complexity and improve inference speed. In addition, feature fusion across different levels of the network is achieved by introducing residual learning, which further restores the detailed features of the fingerprint and improves the denoising effect. Experimental results show that the algorithm recovers edge, line, and curve features in fingerprint images better than other methods, with better visual quality and a higher peak signal-to-noise ratio (PSNR).
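
A minimal sketch of the architecture described above, assuming Python/PyTorch: an encoder-decoder pair with dilated convolutions for a larger receptive field and a residual connection from the noisy input to the output. Channel counts and depth are illustrative, not the paper's exact network.

    import torch
    import torch.nn as nn

    class FingerprintDenoiser(nn.Module):
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 32, 3, padding=2, dilation=2), nn.ReLU(),  # dilated: wider receptive field
            )
            self.decoder = nn.Sequential(
                nn.Conv2d(32, 32, 3, padding=2, dilation=2), nn.ReLU(),
                nn.Conv2d(32, 1, 3, padding=1),
            )

        def forward(self, noisy):
            # residual learning: predict a correction and add it back to the noisy input
            return noisy + self.decoder(self.encoder(noisy))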
Extreme learning machine (ELM), which can be viewed as a variant of Random Vector Functional Link (RVFL) network without the input-output direct connections, has been extensively used to create multi-layer (deep) neural networks. Such networks employ randomization based autoencoders (AE) for unsupervised feature extraction followed by an ELM classifier for final decision making. Each randomization based AE acts as an independent feature extractor and a deep network is obtained by stacking several such AEs. Inspired by the better performance of RVFL over ELM, in this paper, we propose several deep RVFL variants by utilizing the framework of stacked autoencoders. Specifically, we introduce direct connections (feature reuse) from preceding layers to the fore layers of the network as in the original RVFL network. Such connections help to regularize the randomization and also reduce the model complexity. Furthermore, we also introduce denoising criterion, recovering clean inputs from their corrupt
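
A single RVFL readout with the direct connections mentioned above can be sketched as follows (Python/NumPy assumed; names, sizes, and the ridge regularisation are illustrative, and the deep variants in the paper stack several randomized autoencoders before this final readout).

    import numpy as np

    def rvfl_fit(X, Y, n_hidden=256, reg=1e-2, seed=0):
        # X: (n_samples, n_features); Y: (n_samples, n_classes) one-hot targets.
        rng = np.random.default_rng(seed)
        W = rng.standard_normal((X.shape[1], n_hidden))   # random hidden weights, never trained
        b = rng.standard_normal(n_hidden)
        H = np.tanh(X @ W + b)                            # randomized hidden features
        D = np.hstack([X, H])                             # direct input-to-output connections
        beta = np.linalg.solve(D.T @ D + reg * np.eye(D.shape[1]), D.T @ Y)  # ridge readout
        return W, b, beta

    def rvfl_predict(X, W, b, beta):
        D = np.hstack([X, np.tanh(X @ W + b)])
        return (D @ beta).argmax(axis=1)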