The development of deep convolutional neural network architectures is critical to improving performance on image classification tasks. Many studies of image classification based on deep convolutional neural networks focus on the network structure to improve classification performance. In contrast to these studies, we focus on the loss function. Cross-entropy Loss (CEL) is widely used for training multi-class classification deep convolutional neural networks. Although CEL has been successfully applied to image classification, when the labels of training images are one-hot it considers only the posterior probability of the correct class and cannot directly discriminate against the classes that do not belong to the correct class (the wrong classes). To address this limitation of CEL, we propose the Competing Ratio Loss (CRL), which computes the ratio between the posterior probability of the correct class and those of the competing wrong classes to better discriminate the correct class from its competitors. CRL increases the difference between the negative log likelihood of the correct class and that of the competing wrong classes, thereby widening the gap between the probability of the correct class and the probabilities of the wrong classes. To demonstrate the effectiveness of our loss function, we perform several sets of experiments on different types of image classification datasets, including the CIFAR, SVHN, CUB-200-2011, Adience and ImageNet datasets. The experimental results show the effectiveness and robustness of our loss function across different deep convolutional neural network architectures and different image classification tasks, such as fine-grained image classification, challenging face age estimation and large-scale image classification.
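To make the idea concrete, the following is a minimal PyTorch sketch of a ratio-style loss in the spirit of the description above. The function name `competing_ratio_loss`, the balancing weight `beta`, and the particular form `-log p_y + beta * log(sum_{j!=y} p_j)` are illustrative assumptions, not the exact formulation given in the paper body.

```python
import torch
import torch.nn.functional as F

def competing_ratio_loss(logits: torch.Tensor,
                         targets: torch.Tensor,
                         beta: float = 0.5) -> torch.Tensor:
    """Illustrative ratio-style loss: reward the correct class and
    explicitly penalize probability mass on the competing wrong classes."""
    log_probs = F.log_softmax(logits, dim=1)          # log p_j for every class
    probs = log_probs.exp()
    idx = torch.arange(logits.size(0), device=logits.device)
    log_p_correct = log_probs[idx, targets]           # log p_y (correct class)
    # sum of probabilities of the competing wrong classes, clamped for stability
    p_wrong = (1.0 - probs[idx, targets]).clamp_min(1e-12)
    # -log p_y + beta * log(sum_{j != y} p_j): widens the gap between the
    # negative log likelihood of the correct class and that of the wrong classes
    return (-log_p_correct + beta * torch.log(p_wrong)).mean()

# Usage sketch: drop-in replacement for F.cross_entropy during training.
logits = torch.randn(8, 10, requires_grad=True)
targets = torch.randint(0, 10, (8,))
loss = competing_ratio_loss(logits, targets)
loss.backward()
```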