
A convolutional neural network reaches optimal sensitivity for detecting some, but not all, patterns

Posted by Brian Wandell
Publication date: 2019
Research field: Informatics Engineering
Paper language: English

We investigate the performance of modern convolutional neural networks (CNNs) and a linear support vector machine (SVM) with respect to spatial contrast sensitivity. Specifically, we compare CNN sensitivity to that of a Bayesian ideal observer (IO) with the signal known exactly and the noise known statistically. A ResNet-18 reaches optimal performance for harmonic patterns, as well as for several classes of real-world signals including faces. For these stimuli the CNN substantially outperforms the SVM. We further analyzed the case in which the signal might appear in one of multiple locations and found that CNN spatial sensitivity continues to match the IO. However, the CNN sensitivity was far below optimal at detecting certain complex texture patterns. These measurements show that CNNs can have very large performance differences when detecting the presence of spatial patterns. These differences may have a significant impact on the performance of an imaging system designed to detect low-contrast spatial patterns.
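For the signal-known-exactly case described above, the Bayesian ideal observer in white Gaussian noise reduces to a matched filter: correlate the observation with the known template and compare to a threshold. The following minimal sketch (not the paper's code) illustrates that benchmark for a low-contrast harmonic pattern; the image size, contrast, and noise level are illustrative assumptions.

```python
"""Ideal-observer (matched-filter) detection of a signal-known-exactly
harmonic pattern in white Gaussian noise.  All parameters are assumptions."""
import numpy as np

rng = np.random.default_rng(0)

# Known harmonic (Gabor-like) signal template at a given contrast.
N, contrast, sigma = 64, 0.02, 0.1
x = np.linspace(-1, 1, N)
X, Y = np.meshgrid(x, x)
template = contrast * np.cos(2 * np.pi * 4 * X) * np.exp(-(X**2 + Y**2) / 0.3)

# Theoretical sensitivity of the ideal observer for this signal and noise level.
d_prime = np.linalg.norm(template) / sigma
print(f"ideal-observer d' = {d_prime:.2f}")

# Simulate yes/no trials: some contain signal + noise, the rest noise only.
def trial(signal_present):
    img = template * signal_present + rng.normal(0.0, sigma, (N, N))
    score = np.sum(img * template)                 # project onto the known template
    return score > 0.5 * np.sum(template**2)       # optimal criterion, equal priors

labels = rng.integers(0, 2, 2000)
correct = np.mean([trial(s) == bool(s) for s in labels])
print(f"simulated ideal-observer percent correct: {100 * correct:.1f}%")
```

A CNN or SVM detector trained on the same stimuli can then be scored against this matched-filter ceiling at each contrast level.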


Read also

This work investigates the problem of detecting gravitational wave (GW) events based on simulated damped sinusoid signals contaminated with white Gaussian noise. It is treated as a classification problem with one class for the interesting events. The proposed scheme consists of two successive steps: decomposing the data using a wavelet packet and representing the GW signal and noise using the derived decomposition coefficients; and determining the existence of any GW event using a convolutional neural network (CNN) with a logistic regression output layer. A distinguishing characteristic of this work is its comprehensive investigation of the CNN structure, detection window width, data resolution, wavelet packet decomposition, and detection window overlap scheme. Extensive simulation experiments show excellent performance for reliable detection of signals across a range of GW model parameters and signal-to-noise ratios. While we use a simple waveform model in this study, we expect the method to be particularly valuable when the potential GW shapes are too complex to be characterized with a template bank.
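A minimal sketch of that two-step pipeline (not the authors' code) is shown below: wavelet-packet decomposition of a windowed segment, followed by a small 1-D CNN with a logistic output over the coefficient map. The waveform parameters, decomposition level, and network shape are assumptions.

```python
"""Toy GW-like detection pipeline: wavelet-packet features + CNN with logistic output."""
import numpy as np
import pywt
import torch
import torch.nn as nn

# Simulated damped sinusoid (toy event) in white Gaussian noise.
fs, T = 4096, 0.25
t = np.arange(0, T, 1.0 / fs)
signal = 0.5 * np.exp(-t / 0.05) * np.sin(2 * np.pi * 250 * t)
data = signal + np.random.normal(0.0, 1.0, t.size)

# Step 1: wavelet-packet decomposition; stack the level-4 node coefficients.
wp = pywt.WaveletPacket(data, wavelet="db4", mode="symmetric", maxlevel=4)
nodes = wp.get_level(4, order="freq")
coeffs = np.stack([n.data for n in nodes])           # (n_bands, n_coeffs)

# Step 2: a small CNN over the coefficient map with a logistic-regression output.
class Detector(nn.Module):
    def __init__(self, n_bands):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_bands, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1))
        self.head = nn.Linear(16, 1)

    def forward(self, x):
        return torch.sigmoid(self.head(self.features(x).flatten(1)))

net = Detector(coeffs.shape[0])
prob = net(torch.tensor(coeffs, dtype=torch.float32).unsqueeze(0))
print(f"P(event present) from untrained net: {prob.item():.3f}")
```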
In convolutional neural networks (CNNs), dropout does not work well because the dropped information is not entirely obscured in convolutional layers, where features are spatially correlated. Apart from randomly discarding regions or channels, many approaches try to overcome this defect by dropping influential units. In this paper, we propose a non-random dropout method named FocusedDropout, which aims to make the network focus more on the target. In FocusedDropout, we use a simple but effective way to search for the target-related features, retain these features, and discard the others, which is contrary to the existing methods. We found that this novel method can improve network performance by making the network more target-focused. In addition, increasing the weight decay while using FocusedDropout avoids overfitting and increases accuracy. Experimental results show that even at a slight cost, with only 10% of batches employing FocusedDropout, the method produces a clear performance boost over the baselines on multiple classification datasets, including CIFAR10, CIFAR100, and Tiny ImageNet, and shows good versatility across different CNN models.
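The abstract does not give the exact selection rule, so the sketch below is only a rough illustration of the idea of non-random, target-focused dropout: keep the channels with the highest global-average activation, assuming those are the most target-related, and zero out the rest. The keep ratio and the scoring rule are assumptions, not the authors' method.

```python
"""Illustrative target-focused channel dropout (not the published FocusedDropout)."""
import torch
import torch.nn as nn

class TargetFocusedDropout(nn.Module):
    def __init__(self, keep_ratio=0.5):
        super().__init__()
        self.keep_ratio = keep_ratio

    def forward(self, x):                        # x: (N, C, H, W) feature map
        if not self.training:
            return x
        scores = x.mean(dim=(2, 3))              # score channels by average activation
        k = max(1, int(self.keep_ratio * x.size(1)))
        topk = scores.topk(k, dim=1).indices     # indices of retained channels
        mask = torch.zeros_like(scores).scatter_(1, topk, 1.0)
        return x * mask[:, :, None, None]        # discard all other channels

# Usage: place it after a convolutional block during training.
feat = torch.relu(torch.randn(8, 64, 14, 14))
print(TargetFocusedDropout(0.5).train()(feat).count_nonzero())
```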
The parallelism of optics and the miniaturization of optical components using nanophotonic structures, such as metasurfaces, present a compelling alternative to electronic implementations of convolutional neural networks. The lack of a low-power optical nonlinearity, however, requires slow and energy-inefficient
In this work, we present a novel background subtraction system that uses a deep convolutional neural network (CNN) to perform the segmentation. With this approach, feature engineering and parameter tuning become unnecessary, since the network parameters can be learned from data by training a single CNN that can handle various video scenes. Additionally, we propose a new approach to estimate the background model from video. For training the CNN, we randomly employed 5 percent of the video frames and their ground-truth segmentations taken from the Change Detection challenge 2014 (CDnet 2014). We also applied spatial-median filtering as post-processing of the network outputs. Our method is evaluated on different datasets, and the network outperforms the existing algorithms with respect to the average ranking over different evaluation metrics. Furthermore, due to the network architecture, our CNN is capable of real-time processing.
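The spatial-median post-processing step mentioned above amounts to median filtering the per-pixel foreground probability map before thresholding it into a binary mask. A minimal sketch follows; the kernel size and threshold are illustrative assumptions, not values from the paper.

```python
"""Spatial-median post-processing of a CNN foreground probability map."""
import numpy as np
from scipy.ndimage import median_filter

def postprocess(prob_map, kernel=9, threshold=0.5):
    """prob_map: (H, W) CNN output in [0, 1]; returns a binary foreground mask."""
    smoothed = median_filter(prob_map, size=kernel)   # removes salt-and-pepper noise
    return (smoothed > threshold).astype(np.uint8)

# Example: a noisy probability map with a rectangular foreground region.
prob = np.random.uniform(0.0, 0.3, (240, 320))
prob[80:160, 120:220] = np.random.uniform(0.7, 1.0, (80, 100))
mask = postprocess(prob)
print("foreground pixels:", int(mask.sum()))
```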
The Capsule Network (CapsNet) is widely believed to be more robust than convolutional networks. However, there are no comprehensive comparisons between these two networks, and it is also unknown which components in the CapsNet affect its robustness. In this paper, we first carefully examine the special designs in CapsNet that differ from those of a ConvNet commonly used for image classification. The examination reveals five major new or different components in CapsNet: a transformation process, a dynamic routing layer, a squashing function, a margin loss in place of the cross-entropy loss, and an additional class-conditional reconstruction loss for regularization. Along with these major differences, we conduct comprehensive ablation studies on three kinds of robustness: affine transformation, overlapping digits, and semantic representation. The study reveals that some designs thought to be critical to CapsNet can actually harm its robustness, namely the dynamic routing layer and the transformation process, while others are beneficial. Based on these findings, we propose enhanced ConvNets simply by introducing the essential components behind the CapsNet's success. The proposed simple ConvNets can achieve better robustness than the CapsNet.
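One of the components named above, the squashing function, follows the standard capsule formulation: it shrinks short capsule vectors toward zero and long ones toward unit length, squash(s) = (||s||² / (1 + ||s||²)) · (s / ||s||). A minimal sketch (not code from the paper):

```python
"""Standard capsule squashing nonlinearity."""
import torch

def squash(s, dim=-1, eps=1e-8):
    sq_norm = (s ** 2).sum(dim=dim, keepdim=True)
    scale = sq_norm / (1.0 + sq_norm)              # in (0, 1), grows with vector length
    return scale * s / torch.sqrt(sq_norm + eps)

capsules = torch.randn(32, 10, 16)                 # (batch, capsules, capsule dim)
out = squash(capsules)
print(out.norm(dim=-1).max().item())               # all output lengths are below 1
```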