Cervical cancer is a common and deadly cancer in women, and cytopathology images are often used to screen for it. Because manual screening is error-prone, computer-aided diagnosis systems based on deep learning have been developed. Deep learning methods require a fixed input image size, but the sizes of clinical medical images are inconsistent, and the internal cell ratios of the images are distorted when they are resized directly. Clinically, the ratios of cells inside cytopathological images provide important information for doctors to diagnose cancer, so resizing images directly seems illogical. However, many existing studies resized images directly and still obtained very robust classification results. To find a reasonable interpretation, we conducted a series of comparative experiments. First, the raw data of the SIPaKMeD dataset are preprocessed to obtain standard and scaled datasets. Then, the datasets are resized to 224 × 224 pixels. Finally, twenty-two deep learning models are used to classify the standard and scaled datasets. The conclusion is that deep learning models are robust to changes in the internal cell ratio of cervical cytopathological images. This conclusion is also validated on the Herlev dataset.
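The distortion that direct resizing introduces can be quantified from the source and target dimensions alone. A minimal sketch (the function names and the example 640 × 480 image size are illustrative assumptions, not taken from the paper):

```python
def scale_factors(width: int, height: int, target: int = 224):
    """Per-axis scale factors when stretching an image to target x target."""
    return target / width, target / height

def ratio_distortion(width: int, height: int, target: int = 224) -> float:
    """How much direct resizing changes the aspect ratio.

    1.0 means the aspect ratio (and hence the apparent proportions of
    cells) is preserved; any other value means the cells are stretched.
    """
    sx, sy = scale_factors(width, height, target)
    return sx / sy

# A hypothetical 640x480 clinical image: after direct resizing to
# 224x224, cells appear vertically stretched by a factor of 1/0.75.
print(ratio_distortion(640, 480))  # 0.75
print(ratio_distortion(512, 512))  # 1.0 (square inputs are unaffected)
```

The paper's finding is that classification accuracy holds up even when this ratio departs from 1.0.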
In recent years, deep learning has made brilliant achievements in image classification. However, image classification on small datasets has still not achieved good results. This article first briefly explains the applications and characteristics of convolutional neural networks and visual transformers, and introduces the influence of small datasets on classification along with possible solutions. Then, a series of experiments is carried out on small datasets using various models, and the problems some models exhibit in these experiments are discussed. By comparing the experimental results, we recommend deep learning models according to the intended application environment. Finally, we give directions for future work.
Nowadays, the analysis of Transparent Environmental Microorganism Images (T-EM images) has gradually become a new and interesting topic in computer vision. Because T-EM images are challenging to analyze, this paper compares the classification performance of different deep learning models on them. We crop the T-EM images into 8 × 8 and 224 × 224 pixel patches in the same proportion and then divide the two kinds of patches into foreground and background according to the ground truth. We use four convolutional neural networks and a novel ViT network model to compare foreground and background classification performance. We conclude that ViT performs the worst in classifying 8 × 8 pixel patches, but it outperforms most convolutional neural networks in classifying 224 × 224 pixel patches.
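Splitting an image into fixed-size, non-overlapping patches, as described above, is straightforward with array slicing. A minimal sketch (the function name and the zero-filled demo image are illustrative assumptions; the paper's actual cropping and foreground/background labeling pipeline is not specified here):

```python
import numpy as np

def crop_patches(img: np.ndarray, patch: int) -> list:
    """Split an H x W image into non-overlapping patch x patch tiles,
    dropping any remainder at the right and bottom edges."""
    h, w = img.shape[:2]
    return [img[r:r + patch, c:c + patch]
            for r in range(0, h - patch + 1, patch)
            for c in range(0, w - patch + 1, patch)]

# A 224x224 image yields 28 * 28 = 784 patches of size 8x8.
img = np.zeros((224, 224), dtype=np.uint8)
patches = crop_patches(img, 8)
print(len(patches))  # 784
```

Each patch can then be labeled foreground or background by checking the corresponding region of the ground-truth mask.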
Image classification has achieved unprecedented advances with the rapid development of deep learning. However, the classification of tiny-object images is still not well investigated. In this paper, we first briefly review the development of Convolutional Neural Networks and Visual Transformers in deep learning, and introduce the sources and development of conventional noise and adversarial attacks. Then we use various Convolutional Neural Network and Visual Transformer models to conduct a series of experiments on an image dataset of tiny objects (sperms and impurities), and compare various evaluation metrics in the experimental results to obtain a model with stable performance. Finally, we discuss the problems in tiny-object classification and offer a prospect for its future development.
Microorganisms play a vital role in human life, so microorganism detection is of great significance to human beings. However, traditional manual microscopic detection methods suffer from long detection cycles, low accuracy in large-scale detection, and great difficulty in detecting uncommon microorganisms. It is therefore meaningful to apply computer image analysis technology to the field of microorganism detection, as it can realize high-precision and high-efficiency detection of microorganisms. In this review, first, we analyze the existing microorganism detection methods in chronological order, from traditional image processing and traditional machine learning to deep learning methods. Then, we analyze and summarize these existing methods and introduce some potential methods, including visual transformers. Finally, the future development directions and challenges of microorganism detection are discussed. In total, we have summarized 137 related technical papers from 1985 to the present. This review will help researchers gain a more comprehensive understanding of the development process, research status, and future trends in the field of microorganism detection, and provide a reference for researchers in other fields.
Image annotation aims to annotate a given image with a variable number of class labels corresponding to diverse visual concepts. In this paper, we address two main issues in large-scale image annotation: 1) how to learn a rich feature representation suitable for predicting a diverse set of visual concepts ranging from objects and scenes to abstract concepts; 2) how to annotate an image with the optimal number of class labels. To address the first issue, we propose a novel multi-scale deep model for extracting rich and discriminative features capable of representing a wide range of visual concepts. Specifically, we propose a novel two-branch deep neural network architecture comprising a very deep main network branch and a companion feature-fusion network branch designed to fuse the multi-scale features computed from the main branch. The deep model is also made multi-modal by taking noisy user-provided tags as model input to complement the image input. To tackle the second issue, we introduce a label quantity prediction auxiliary task alongside the main label prediction task to explicitly estimate the optimal label number for a given image. Extensive experiments are carried out on two large-scale image annotation benchmark datasets, and the results show that our method significantly outperforms the state of the art.
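At inference time, a predicted label quantity can be combined with per-label scores by keeping the top-k labels. A minimal sketch of this selection step (the function name, rounding rule, and example scores are illustrative assumptions; the paper's network produces the scores and the quantity estimate):

```python
import numpy as np

def annotate(label_scores: np.ndarray, predicted_count: float) -> list:
    """Select the top-k labels, where k comes from the label-quantity head.

    The (assumed) rule rounds the predicted count to the nearest integer,
    clamped to at least one label, then keeps the highest-scoring labels.
    """
    k = max(1, int(round(predicted_count)))
    order = np.argsort(label_scores)[::-1]  # indices by descending score
    return sorted(order[:k].tolist())

# Hypothetical scores for four labels with a predicted count of ~2.
scores = np.array([0.1, 0.9, 0.4, 0.8])
print(annotate(scores, 2.3))  # [1, 3]
```

This replaces the usual fixed-k or fixed-threshold heuristics with a per-image label count.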