In this paper, we present a new algorithm to automate the detection and extraction of buildings from satellite images. The algorithm is distinguished in that it overcomes several obstacles that limit detection in other methods, such as differences in the shape, color, and height of buildings, and it does not require multi-spectral or other complex, high-cost imagery.
In recent years, interest in classifying objects in images with deep learning has grown, driven by industrial-sector requirements. Despite the many algorithms applied in this field, such as Deep Neural Networks (DNNs) and Convolutional Neural Networks (CNNs), the proposed systems lack a comprehensive solution to the difficulties of long training time, high memory consumption during training, and low classification accuracy. Convolutional Neural Networks (CNNs), the most widely used algorithms for this task, provide a mathematical model for analyzing image data. A new convolutional network architecture is proposed to address these problems. The aim of this research is to improve the performance of a CNN-based recognition system with respect to available memory and training time by tuning appropriate parameters of the convolutional network. The database used in this research is CIFAR-10, which consists of 60,000 color images belonging to ten categories, with 6,000 images per class; 50,000 images are used for training and 10,000 for testing. When tested on a sample of selected images from the CIFAR-10 database, the model achieved a classification accuracy of 98.87%.
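The abstract does not specify the network's layers, so as a minimal illustration of the core CNN operations it relies on, the sketch below implements a single valid convolution, ReLU, and max-pooling pass in plain NumPy on a CIFAR-10-sized input (32×32×3). The filter count (8) and kernel size (3×3) are assumptions for illustration, not the paper's architecture.

```python
import numpy as np

def conv2d(image, kernels, stride=1):
    """Valid 2D convolution: image (H, W, C), kernels (K, kh, kw, C)."""
    H, W, C = image.shape
    K, kh, kw, _ = kernels.shape
    oh = (H - kh) // stride + 1
    ow = (W - kw) // stride + 1
    out = np.zeros((oh, ow, K))
    for k in range(K):
        for i in range(oh):
            for j in range(ow):
                patch = image[i*stride:i*stride+kh, j*stride:j*stride+kw, :]
                out[i, j, k] = np.sum(patch * kernels[k])
    return out

def relu(x):
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Non-overlapping max pooling over the spatial dims of (H, W, K)."""
    H, W, K = x.shape
    oh, ow = H // size, W // size
    out = np.zeros((oh, ow, K))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = x[i*size:(i+1)*size, j*size:(j+1)*size, :].max(axis=(0, 1))
    return out

rng = np.random.default_rng(0)
img = rng.standard_normal((32, 32, 3))       # one CIFAR-10-sized image (hypothetical data)
kernels = rng.standard_normal((8, 3, 3, 3))  # 8 assumed filters of size 3x3
features = max_pool(relu(conv2d(img, kernels)))
print(features.shape)  # (15, 15, 8): 32->30 after valid 3x3 conv, 30->15 after 2x2 pool
```

Stacking several such conv/pool stages and a final dense layer yields the kind of classifier the paper trains; parameters such as filter count and kernel size are exactly the tuning knobs the abstract refers to.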
In this paper, one hundred chest Computed Tomography (CT) images of COVID-19 patients were used to build and test a Gaussian Naïve Bayes classifier for discriminating normal from abnormal tissue. Infected areas in these images were manually segmented by an expert radiologist. Pixel grey value, local entropy, and Histograms of Oriented Gradients (HOG) were extracted as features for tissue classification. In five-fold cross-validation experiments, the accuracy of the classifier reached around 79.94%. Classification was more precise in recognizing normal tissue (85%) than abnormal tissue (63%), and the ability to identify positive labels was likewise higher for normal tissue than for abnormal tissue.
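The classifier described above can be sketched from first principles: Gaussian Naïve Bayes fits a per-class mean and variance for each feature and predicts by maximum log-posterior. The sketch below uses synthetic two-class data as a stand-in for the paper's actual feature vectors (grey value, local entropy, HOG); the class separations and sample sizes are assumptions for illustration only.

```python
import numpy as np

class GaussianNB:
    """Minimal Gaussian Naive Bayes: per-class feature means and variances,
    prediction by maximum log-posterior under feature independence."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.mean_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        self.var_ = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes_])
        self.prior_ = np.array([(y == c).mean() for c in self.classes_])
        return self

    def predict(self, X):
        # log N(x; mu, var) summed over (conditionally independent) features
        ll = -0.5 * (np.log(2 * np.pi * self.var_[:, None, :])
                     + (X[None] - self.mean_[:, None, :]) ** 2
                     / self.var_[:, None, :]).sum(axis=2)
        return self.classes_[np.argmax(ll + np.log(self.prior_)[:, None], axis=0)]

# Synthetic stand-in for the paper's 3 features: class 0 ("normal") and
# class 1 ("abnormal") drawn from shifted Gaussians (hypothetical data).
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0.0, 1.0, size=(200, 3)),
               rng.normal(2.0, 1.0, size=(200, 3))])
y = np.array([0] * 200 + [1] * 200)

clf = GaussianNB().fit(X, y)
acc = (clf.predict(X) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

In practice the paper's setup would replace the synthetic matrix with per-pixel feature vectors extracted from the segmented CT images and evaluate with five-fold cross-validation rather than training accuracy.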