Over the past decade, ImageNet has become the most notable and powerful benchmark database in the computer vision and machine learning communities. As ImageNet has emerged as a representative benchmark for evaluating the performance of novel deep learning models, its evaluation tends to include only quantitative measures such as error rate, rather than qualitative analysis. Thus, there are few studies that analyze the failure cases of deep learning models on ImageNet, even though there are numerous works analyzing the networks themselves and visualizing them. In this abstract, we qualitatively analyze the failure cases of ImageNet classification results from a recent deep learning model and categorize these cases according to characteristic image patterns. Through this failure analysis, we believe we can discover the remaining challenges in the ImageNet database to which current deep learning models are still vulnerable.
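To make the described failure-case collection concrete, the following is a minimal sketch of how misclassified validation images could be gathered for qualitative review. The pretrained ResNet-50, the torchvision API usage, and the "imagenet/val" path are illustrative assumptions, not the abstract's actual pipeline.

```python
# Minimal sketch: collect misclassified ImageNet validation images with a
# pretrained classifier so they can be inspected and grouped by image pattern.
import torch
from torchvision import datasets, models, transforms

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Any pretrained ImageNet classifier works here; ResNet-50 is an arbitrary choice.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1).eval()

# Hypothetical ImageNet validation folder arranged as one subdirectory per class.
val_set = datasets.ImageFolder("imagenet/val", transform=preprocess)
loader = torch.utils.data.DataLoader(val_set, batch_size=64, shuffle=False)

failures = []  # (image path, true label index, predicted label index)
with torch.no_grad():
    for batch_idx, (images, labels) in enumerate(loader):
        preds = model(images).argmax(dim=1)
        for i in torch.nonzero(preds != labels).flatten().tolist():
            path, _ = val_set.samples[batch_idx * loader.batch_size + i]
            failures.append((path, labels[i].item(), preds[i].item()))

print(f"collected {len(failures)} misclassified images for qualitative categorization")
```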
Deep neural networks (DNNs) have shown state-of-the-art performance in a wide range of complicated tasks. In recent years, studies have been actively conducted to analyze the black-box characteristics of DNNs and to grasp the learning
Explaining a deep learning model can help users understand its behavior and allow researchers to discern its shortcomings. Recent work has primarily focused on explaining models for tasks like image classification or visual question answering. In thi
We build new test sets for the CIFAR-10 and ImageNet datasets. Both benchmarks have been the focus of intense research for almost a decade, raising the danger of overfitting to excessively re-used test sets. By closely following the original dataset
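As a rough illustration of how such a reproduced test set might be used, the sketch below compares a model's accuracy on the original CIFAR-10 test set against a newly collected one. The .npy file names and the stand-in linear model are hypothetical assumptions, not the paper's release format or evaluation code.

```python
# Minimal sketch: estimate the accuracy drop between the original CIFAR-10 test
# set and a reproduced test set, as a proxy for overfitting to the re-used set.
import numpy as np
import torch
import torch.nn as nn
from torchvision import datasets, transforms

def accuracy(model, x, y, batch_size=256):
    """Top-1 accuracy of `model` over image tensor x and integer labels y."""
    model.eval()
    correct = 0
    with torch.no_grad():
        for s in range(0, len(x), batch_size):
            correct += (model(x[s:s + batch_size]).argmax(1) == y[s:s + batch_size]).sum().item()
    return correct / len(x)

orig = datasets.CIFAR10("data", train=False, download=True, transform=transforms.ToTensor())
orig_x = torch.stack([img for img, _ in orig])
orig_y = torch.tensor(orig.targets)

# Hypothetical .npy files holding the reproduced test set as (N, 32, 32, 3) uint8 images.
new_x = torch.from_numpy(np.load("new_test_images.npy")).permute(0, 3, 1, 2).float() / 255.0
new_y = torch.from_numpy(np.load("new_test_labels.npy")).long()

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # stand-in for a trained classifier
print("original test accuracy:", accuracy(model, orig_x, orig_y))
print("new test accuracy:     ", accuracy(model, new_x, new_y))
```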
Recently, it was shown in several papers that backpropagation is able to find the global minimum of the empirical risk on the training data using over-parametrized deep neural networks. In this paper, a similar result is shown for deep neural networks
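For context, the "empirical risk" here is the standard training-set average loss. Under common notation (training pairs (x_i, y_i), network f_θ, loss ℓ, parameters θ^(t) after t steps of gradient-based training; this notation is assumed, not taken from the paper), the type of result being referenced states that

```latex
% Empirical risk on n training examples and the global-convergence claim for
% over-parametrized networks (notation assumed for illustration):
\[
  \widehat{R}_n(\theta) = \frac{1}{n} \sum_{i=1}^{n} \ell\bigl(f_\theta(x_i),\, y_i\bigr),
  \qquad
  \widehat{R}_n\bigl(\theta^{(t)}\bigr) \xrightarrow[t \to \infty]{} \min_{\theta} \widehat{R}_n(\theta).
\]
```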
The goal of this paper is to analyze the geometric properties of deep neural network classifiers in the input space. We specifically study the topology of classification regions created by deep networks, as well as their associated decision boundary.
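The objects under study can be stated precisely. Assuming a classifier with per-class score functions f_1, ..., f_K on inputs in R^d (standard definitions, not the paper's own notation), the classification regions and decision boundaries are

```latex
% Classification region of class k and decision boundary between classes j and k,
% for per-class scores f_1, ..., f_K on inputs x in R^d:
\[
  R_k = \bigl\{ x \in \mathbb{R}^d : f_k(x) \ge f_j(x) \ \text{for all } j \neq k \bigr\},
  \qquad
  B_{j,k} = \bigl\{ x \in \mathbb{R}^d : f_j(x) = f_k(x) = \max_i f_i(x) \bigr\}.
\]
```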