Image classification has achieved unprecedented advances with the rapid development of deep learning. However, the classification of tiny-object images remains under-investigated. In this paper, we first briefly review the development of Convolutional Neural Networks and Vision Transformers in deep learning, and introduce the origins and development of conventional noise and adversarial attacks. Then we conduct a series of experiments on an image dataset of tiny objects (sperms and impurities) using various Convolutional Neural Network and Vision Transformer models, and compare evaluation metrics across the experimental results to identify a model with stable performance. Finally, we discuss the open problems in tiny-object classification and outline prospects for future work.
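A minimal sketch of the model-comparison pipeline this abstract describes, assuming a two-class (sperm vs. impurity) test loader; the model names and metric set here are illustrative assumptions, not the paper's exact protocol:

    # Compare CNN and Vision Transformer classifiers on one test set
    # using several evaluation metrics (all choices are assumptions).
    import torch
    import timm
    from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

    def evaluate(model, loader, device="cpu"):
        model.eval().to(device)
        preds, labels = [], []
        with torch.no_grad():
            for x, y in loader:
                out = model(x.to(device))
                preds += out.argmax(dim=1).cpu().tolist()
                labels += y.tolist()
        return {
            "accuracy": accuracy_score(labels, preds),
            "precision": precision_score(labels, preds, average="macro"),
            "recall": recall_score(labels, preds, average="macro"),
            "f1": f1_score(labels, preds, average="macro"),
        }

    # test_loader is assumed to yield (image_tensor, label) batches of
    # the tiny-object patches.
    for name in ["resnet50", "vit_base_patch16_224"]:
        model = timm.create_model(name, pretrained=True, num_classes=2)
        print(name, evaluate(model, test_loader))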
In recent years, deep learning has made brilliant achievements in image classification. However, image classification on small datasets has not yet yielded good research results. This article first briefly explains the applications and characteristics of convolutional neural networks and vision transformers, and introduces the influence of small datasets on classification together with possible solutions. Then a series of experiments is carried out on small datasets using various models, and the problems some models exhibit in these experiments are discussed. By comparing the experimental results, we recommend deep learning models according to the intended application environment. Finally, we give directions for future work.
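The abstract does not name its solution for small datasets; one common remedy is transfer learning, sketched below under that assumption, with the backbone, head, and hyperparameters as illustrative choices:

    # Fine-tune only the classification head of a pretrained network,
    # a standard mitigation for small training sets (assumed setup).
    import torch
    import torch.nn as nn
    from torchvision import models

    num_classes = 2  # assumed number of classes
    model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    for p in model.parameters():        # freeze the pretrained backbone
        p.requires_grad = False
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # new trainable head
    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)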
Nowadays, the analysis of Transparent Environmental Microorganism images (T-EM images) has gradually become a new research hotspot in computer vision. This paper compares the classification performance of different deep learning models on T-EM images, which are challenging to analyze. We crop the T-EM images into 8 × 8 and 224 × 224 pixel patches in the same proportion and then divide the patches of both sizes into foreground and background according to the ground truth. We use four convolutional neural networks and a novel ViT network model to compare foreground and background classification performance. We conclude that ViT performs the worst in classifying 8 × 8 pixel patches, but it outperforms most convolutional neural networks in classifying 224 × 224 pixel patches.
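A minimal sketch of the patch-cropping step described above: slide a non-overlapping window over a T-EM image and label each patch foreground or background from the ground-truth mask. The 0.5 coverage threshold and binary {0, 1} mask are assumptions:

    import numpy as np

    def crop_patches(image, mask, size):
        # image: (H, W[, C]) array; mask: binary (H, W) ground truth (assumed)
        h, w = image.shape[:2]
        fg, bg = [], []
        for y in range(0, h - size + 1, size):
            for x in range(0, w - size + 1, size):
                patch = image[y:y+size, x:x+size]
                coverage = mask[y:y+size, x:x+size].mean()
                (fg if coverage > 0.5 else bg).append(patch)
        return fg, bg

    # e.g. fg8, bg8 = crop_patches(img, gt_mask, 8)
    #      fg224, bg224 = crop_patches(img, gt_mask, 224)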
Cervical cancer is a very common and fatal cancer in women, and cytopathology images are often used to screen for it. Since manual screening is prone to a large number of errors, computer-aided diagnosis systems based on deep learning have been developed. Deep learning methods require a fixed input image size, but the sizes of clinical medical images are inconsistent, and the internal cell ratios of the images are distorted when they are resized directly. Clinically, the ratios of cells inside cytopathological images provide important information for doctors to diagnose cancer, so resizing directly seems illogical. However, many existing studies resized images directly and still obtained very robust classification results. To find a reasonable interpretation, we conducted a series of comparative experiments. First, the raw data of the SIPaKMeD dataset are preprocessed to obtain standard and scaled datasets. Then, the datasets are resized to 224 × 224 pixels. Finally, twenty-two deep learning models are used to classify the standard and scaled datasets. We conclude that deep learning models are robust to changes in the internal cell ratio of cervical cytopathological images. This conclusion is also validated on the Herlev dataset.
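A minimal sketch of the two preprocessing routes the abstract contrasts: a direct resize to 224 × 224, which distorts internal cell ratios, versus an aspect-preserving resize with padding. The black padding color is an assumption:

    from PIL import Image, ImageOps

    def direct_resize(img):
        # Stretches to 224 x 224; internal cell ratios change.
        return img.resize((224, 224))

    def padded_resize(img):
        # Scales to fit inside 224 x 224 and pads; ratios are preserved.
        return ImageOps.pad(img, (224, 224), color=(0, 0, 0))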
In this paper, we present a novel approach that uses deep learning techniques to colorize grayscale images. By utilizing a pre-trained convolutional neural network originally designed for image classification, we are able to separate the content and style of different images and recombine them into a single image. We then propose a method that adds colors to a grayscale image by combining its content with the style of a color image that is semantically similar to the grayscale one. As an application, to our knowledge the first of its kind, we use the proposed method to colorize images of ukiyo-e, a genre of Japanese painting, and obtain interesting results, showing the potential of this method in the growing field of computer-assisted art.
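A minimal sketch of the content/style separation underlying this colorization approach, in the spirit of Gatys et al.: style is captured by Gram matrices of pretrained VGG feature maps. The layer index (conv4_2) is an illustrative assumption:

    import torch
    from torchvision import models

    vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features.eval()

    def gram_matrix(feat):              # feat: (1, C, H, W)
        _, c, h, w = feat.shape
        f = feat.view(c, h * w)
        return f @ f.t() / (c * h * w)  # channel correlations encode style

    def features(x, layer=21):          # index 21 = conv4_2 in VGG-19 (assumed choice)
        for i, m in enumerate(vgg):
            x = m(x)
            if i == layer:
                return x

    # content loss: ||features(output) - features(grayscale)||^2
    # style loss:   ||gram_matrix(features(output)) - gram_matrix(features(color))||^2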
This paper proposes a novel model, named Continuity-Discrimination Convolutional Neural Network (CD-CNN), for visual object tracking. Existing state-of-the-art tracking methods do not deal with temporal relationships in video sequences, which leads to imperfect feature representations. To address this problem, CD-CNN models temporal appearance continuity based on the idea of temporal slowness. Mathematically, we prove that, by introducing temporal appearance continuity into tracking, the upper bound of the target appearance representation error can be made sufficiently small with high probability. Further, to alleviate inaccurate target localization and drifting, we propose a novel notion, object-centroid, to characterize not only objectness but also the relative position of the target within a given patch. Both temporal appearance continuity and object-centroid are jointly learned during offline training and then transferred to online tracking. We evaluate our tracker through extensive experiments on two challenging benchmarks and show its competitive tracking performance compared with state-of-the-art trackers.
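A minimal sketch of the temporal-slowness idea the abstract invokes: penalize large changes between feature embeddings of the target in consecutive frames. This is a generic illustration, not CD-CNN's exact loss:

    import torch

    def temporal_continuity_loss(feat_t, feat_t1):
        # feat_t, feat_t1: embeddings of the target patch in frames t and t+1;
        # a small value means the appearance representation changes slowly.
        return torch.mean((feat_t - feat_t1) ** 2)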