Brain–Computer Interfaces (BCIs), in particular systems that recognize brain signals recorded as EEG (electroencephalography) using deep learning, are currently an important research topic attracting many researchers. The Convolutional Neural Network (CNN) is one of the most widely used deep learning classifiers for this recognition task, but its parameters have not yet been determined precisely enough to yield the highest recognition rate together with the lowest possible training and recognition time. This research proposes a system for recognizing EEG signals using a CNN and studies the effect of varying the network's parameters on the recognition rate, training time, and recognition time of brain signals. The proposed system achieved a recognition rate of 76.38%; the classifier's training time was reduced to 3 seconds by applying the Common Spatial Pattern (CSP) method during preprocessing of the IV2b dataset; and a recognition rate of 76.533% was reached by adding a layer to the proposed classifier.
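The CSP preprocessing mentioned above finds spatial filters that maximize the variance of one class while minimizing that of the other. As a minimal sketch of the idea (not the authors' implementation), the two-channel case reduces to an eigendecomposition of a 2x2 matrix built from the two class covariance matrices; the pure-Python function below is illustrative only, and all names are hypothetical.

```python
import math

def csp_filters_2ch(c1, c2):
    """Toy CSP for two EEG channels: given 2x2 class covariance
    matrices c1 and c2 (lists of lists), return two unit spatial
    filters as the eigenvectors of inv(c2) @ c1, ordered so the
    first filter maximises the class-1/class-2 variance ratio."""
    # Invert the 2x2 matrix c2.
    det2 = c2[0][0] * c2[1][1] - c2[0][1] * c2[1][0]
    inv2 = [[ c2[1][1] / det2, -c2[0][1] / det2],
            [-c2[1][0] / det2,  c2[0][0] / det2]]
    # m = inv(c2) @ c1
    m = [[inv2[i][0] * c1[0][j] + inv2[i][1] * c1[1][j] for j in range(2)]
         for i in range(2)]
    # Eigenvalues of a 2x2 matrix via trace and determinant.
    tr = m[0][0] + m[1][1]
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    disc = math.sqrt(tr * tr - 4 * det)
    eigvals = [(tr + disc) / 2, (tr - disc) / 2]
    filters = []
    for lam in eigvals:
        if abs(m[0][1]) > 1e-12:
            v = (m[0][1], lam - m[0][0])
        elif abs(m[1][0]) > 1e-12:
            v = (lam - m[1][1], m[1][0])
        else:  # m is already diagonal: pick the matching axis vector
            v = (1.0, 0.0) if abs(m[0][0] - lam) < abs(m[1][1] - lam) else (0.0, 1.0)
        n = math.hypot(v[0], v[1])
        filters.append((v[0] / n, v[1] / n))
    return filters
```

Projecting each trial onto these filters and taking the log-variance of the filtered signals yields the compact features that make the downstream classifier faster to train, which matches the training-time reduction reported in the abstract.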
This project aims primarily to exploit the benefits of artificial intelligence, specifically neural networks, which learn from error during training and use that error to reach optimal results. Convolutional Neural Networks (CNNs) in particular are among the most important neural networks for addressing classification problems. The project therefore designs a convolutional neural network that classifies vehicles into several types: we design the network and train it on a database containing pictures of several types of vehicles, and the network then assigns each image to its type. Before training, the images are adjusted with the appropriate changes: they are converted to grayscale, and their edges and lines are detected. Once the images are ready, the training process begins; when it finishes, classification results are produced, and the network is then tested on a new set of images. One of the most important applications of this project is enforcing parking rules for cars, trucks, and vehicles in general: if an image of a truck, for example, is submitted for a space designated for cars, the network will detect this by examining the image and classifying it as a truck, revealing a parking violation.
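The preprocessing steps described above (grayscale conversion followed by edge detection) can be sketched in a few lines of pure Python. This is a minimal illustration, not the project's code: it assumes images as nested lists of RGB tuples and uses the standard luminance weights and the Sobel operator.

```python
def to_gray(img):
    """Convert an RGB image (rows of (r, g, b) tuples) to grayscale
    using the standard luminance weights."""
    return [[0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in row]
            for row in img]

def sobel_magnitude(gray):
    """Approximate edge strength with the 3x3 Sobel operator,
    keeping only the 'valid' region (no padding)."""
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal gradient
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical gradient
    h, w = len(gray), len(gray[0])
    out = []
    for y in range(h - 2):
        row = []
        for x in range(w - 2):
            gx = sum(kx[i][j] * gray[y + i][x + j]
                     for i in range(3) for j in range(3))
            gy = sum(ky[i][j] * gray[y + i][x + j]
                     for i in range(3) for j in range(3))
            row.append((gx * gx + gy * gy) ** 0.5)
        out.append(row)
    return out
```

A vertical boundary between dark and bright pixels produces a strong response in `gx` and near-zero `gy`, which is why edges and lines survive this step while flat regions are suppressed.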
Image Classification with a Deep Convolutional Neural Network Using TensorFlow and Transfer Learning
Deep learning algorithms have recently achieved great success, especially in the field of computer vision. This research describes a classification method applied to a dataset containing multiple types of images: Synthetic Aperture Radar (SAR) images and non-SAR images. Transfer learning was used, followed by fine-tuning, and architectures pre-trained on the well-known ImageNet database were employed. The VGG16 model was used as a feature extractor, and a new classifier was trained on the extracted features. The input dataset consists of five classes: one SAR class (houses) and four non-SAR classes (cats, dogs, horses, and humans). A Convolutional Neural Network (CNN) was chosen for the training process because it produces high accuracy. The final accuracy reached 91.18% across the five classes. The results are discussed in terms of per-class classification accuracy: the cats class reached 99.6%, the houses class reached 100%, and the other classes averaged 90% or above.
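The core transfer-learning pattern in this abstract, a frozen pretrained feature extractor with a newly trained classifier on top, can be illustrated with a deliberately tiny pure-Python toy. The "backbone" below is a hypothetical stand-in, not VGG16, and the head is a simple perceptron; the point is only that the extractor's weights are never updated while the head learns.

```python
# Toy illustration of the transfer-learning pattern: a frozen
# "pretrained" extractor plus a small trainable head.

def frozen_extractor(x):
    """Stand-in for a pretrained backbone: maps a raw (a, b) input
    to a fixed feature vector. Its 'weights' are never updated."""
    a, b = x
    return [a + b, a - b, a * b]

def train_head(samples, labels, epochs=20, lr=0.1):
    """Train only a linear head (perceptron) on the frozen features."""
    w = [0.0, 0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):          # y in {-1, +1}
            f = frozen_extractor(x)
            score = sum(wi * fi for wi, fi in zip(w, f)) + bias
            if y * score <= 0:                     # misclassified: update head only
                w = [wi + lr * y * fi for wi, fi in zip(w, f)]
                bias += lr * y
    return w, bias

def predict(head, x):
    w, bias = head
    f = frozen_extractor(x)
    return 1 if sum(wi * fi for wi, fi in zip(w, f)) + bias > 0 else -1
```

Fine-tuning, which the abstract applies after this stage, corresponds to additionally unfreezing some backbone weights and continuing training at a small learning rate; in this sketch that would mean allowing `frozen_extractor` itself to change.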
In recent years, the problem of classifying objects in images using deep learning has grown as a result of industrial-sector requirements. Despite the many algorithms applied in this field, such as Deep Neural Networks (DNNs) and Convolutional Neural Networks (CNNs), the proposed systems lack a comprehensive solution to the difficulties of long training time, large memory consumption during training, and low classification accuracy. CNNs, the algorithms most commonly used for this task, are a mathematical pattern for analyzing image data. A new deep convolutional network pattern is proposed to solve these problems. The aim of the research is to demonstrate the performance of a CNN-based recognition system within the available memory and training time by adapting the network's variables appropriately. The database used in this research is CIFAR-10, which consists of 60,000 color images in ten categories, with 6,000 images per class: 50,000 training images and 10,000 test images. When tested on a sample of images selected from CIFAR-10, the model achieved a classification accuracy of 98.87%.
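Since this abstract's concern is memory and training time, one relevant CNN building block is pooling, which shrinks feature maps between layers. The sketch below is a generic illustration of non-overlapping max pooling in pure Python, not the paper's architecture.

```python
def max_pool2d(fmap, size=2):
    """Non-overlapping max pooling over a 2-D feature map (lists of
    lists): each size x size window is reduced to its maximum, so a
    2x2 pool cuts activation memory for the next layer by ~4x."""
    h, w = len(fmap), len(fmap[0])
    return [[max(fmap[y + i][x + j] for i in range(size) for j in range(size))
             for x in range(0, w - size + 1, size)]
            for y in range(0, h - size + 1, size)]
```

Choosing where and how aggressively to pool is exactly the kind of network variable the abstract says was adapted to trade accuracy against memory and training time.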
The study of geological structures exposed at the Earth's surface is of great importance in general, and especially in engineering design and construction. In this research, we used 2,206 images with 12 labels to recognize geological structures based on the Inception-v3 model. Both grayscale and color images were adopted in the model. A Convolutional Neural Network (CNN) model was also built, and the K-Nearest Neighbour (KNN) algorithm, an Artificial Neural Network (ANN), and Extreme Gradient Boosting (XGBoost) were applied to classify geological structures based on features extracted with the open-source computer vision library OpenCV. Finally, the performance of the five methods was compared: the results showed that KNN, ANN, and XGBoost performed poorly, with accuracy below 40.0%, while the CNN suffered from overfitting. The model trained with transfer learning performed markedly better on the small dataset of geological-structure images; the two best models reached accuracies of 83.3% and 90.0%, respectively. This indicates that texture is the key feature in this research. Transfer learning based on a deep learning model can effectively extract the features of small geological-structure datasets and is powerful in classifying geological-structure images.
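One of the baselines compared above is KNN applied to extracted feature vectors. As a minimal sketch of that baseline (with toy features rather than OpenCV descriptors), k-nearest-neighbour classification is a majority vote among the closest training vectors:

```python
from collections import Counter

def knn_predict(train_feats, train_labels, query, k=3):
    """Classify a feature vector by majority vote among its k nearest
    training vectors under squared Euclidean distance."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(feat, query)), label)
        for feat, label in zip(train_feats, train_labels))
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]
```

Because KNN has no learned representation, its accuracy is bounded by how well the hand-extracted features separate the classes, which is consistent with its weak performance here relative to transfer learning.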
Relation extraction systems have made extensive use of features generated by linguistic analysis modules, but errors in these features lead to errors in relation detection and classification. In this work, we depart from these traditional approaches and their complicated feature engineering by introducing a convolutional neural network for relation extraction that automatically learns features from sentences and minimizes the dependence on external toolkits and resources. Our model takes advantage of multiple filter window sizes and of pre-trained word embeddings used as initializers in a non-static architecture to improve performance.
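The multi-window convolution described above slides filters of several widths over the sequence of word embeddings and max-pools each filter's responses over time. The pure-Python sketch below illustrates that pattern on toy vectors; it is a simplification (no bias terms or nonlinearity), not the paper's model.

```python
def conv_max_over_time(embeddings, filt):
    """Apply one convolution filter over a sentence (a list of d-dim
    word vectors) and max-pool over time. filt is a list of w weight
    vectors, one per position in its window of size w."""
    w = len(filt)
    scores = []
    for start in range(len(embeddings) - w + 1):
        s = sum(fw[i] * embeddings[start + p][i]
                for p, fw in enumerate(filt)
                for i in range(len(fw)))
        scores.append(s)
    return max(scores)

def multi_window_features(embeddings, filters_by_size):
    """Concatenate the max-pooled outputs of filters with several
    window sizes (e.g. 2, 3, 4) into one sentence feature vector."""
    return [conv_max_over_time(embeddings, f)
            for filts in filters_by_size for f in filts]
```

Each window size captures n-gram patterns of a different length, and max-over-time pooling keeps the strongest match per filter regardless of sentence length, so the concatenated vector has a fixed size suitable for a final classification layer.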