
Genetic Algorithm based hyper-parameters optimization for transfer Convolutional Neural Network

Published by: Chen Li
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





Hyperparameter optimization is a challenging problem in developing deep neural networks. Deciding which layers to transfer and which to keep trainable is a major task in the design of transfer convolutional neural networks (CNNs). Conventional transfer CNN models are usually designed manually, based on intuition. In this paper, a genetic algorithm is applied to select the trainable layers of the transfer model. The filter criterion is constructed from accuracy and the count of trainable layers. The results show that the method is competent for this task: the system converges with a precision of 97% on the Cats and Dogs classification dataset in no more than 15 generations. Moreover, backward inference according to the results of the genetic algorithm shows that our method can capture the gradient features in network layers, which contributes to the understanding of transfer AI models.
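
The abstract does not give the encoding or operators in detail; the following is a minimal sketch of the idea, assuming a binary chromosome that marks each backbone layer as frozen (0) or trainable (1) and a fitness that rewards accuracy while penalizing the count of trainable layers. The `evaluate_accuracy` proxy, the penalty weight, and the GA operators are illustrative stand-ins, not the paper's implementation.

```python
import random

N_LAYERS = 16        # depth of the pre-trained backbone (assumed)
POP_SIZE = 20
GENERATIONS = 15     # the abstract reports convergence within 15 generations
MUT_RATE = 0.05
LAYER_PENALTY = 0.1  # weight on the trainable-layer count (assumed)

def evaluate_accuracy(mask):
    """Placeholder for actual fine-tuning and validation. This toy proxy
    simply favors unfreezing deeper layers so the sketch runs on its own."""
    depth = [i / N_LAYERS for i in range(N_LAYERS)]
    return 0.7 + 0.3 * sum(w * m for w, m in zip(depth, mask)) / sum(depth)

def fitness(mask):
    # Accuracy rewarded, fraction of trainable layers penalized,
    # mirroring the accuracy-plus-layer-count filter criterion.
    return evaluate_accuracy(mask) - LAYER_PENALTY * sum(mask) / N_LAYERS

def tournament(pop, scores, k=3):
    picks = random.sample(range(len(pop)), k)
    return pop[max(picks, key=lambda i: scores[i])]

def crossover(a, b):
    cut = random.randint(1, N_LAYERS - 1)  # one-point crossover
    return a[:cut] + b[cut:]

def mutate(mask):
    return [bit ^ 1 if random.random() < MUT_RATE else bit for bit in mask]

population = [[random.randint(0, 1) for _ in range(N_LAYERS)]
              for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    scores = [fitness(ind) for ind in population]
    population = [mutate(crossover(tournament(population, scores),
                                   tournament(population, scores)))
                  for _ in range(POP_SIZE)]

best = max(population, key=fitness)
print("trainable-layer mask:", best)
```

In a real run the proxy would be replaced by fine-tuning the transfer CNN with the masked layers unfrozen and returning validation accuracy.
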




Read also

T. Serizawa, H. Fujita (2020)
Convolutional neural networks (CNNs) are among the most frequently used deep learning techniques. Various model architectures have been proposed and improved for CNN learning. When training a CNN, it is necessary to determine the optimal hyperparameters. However, the number of hyperparameters is so large that doing this manually is difficult, so much research has been devoted to automating it. Methods using metaheuristic algorithms are attracting attention in research on hyperparameter optimization. Metaheuristic algorithms are nature-inspired and include evolution strategies, genetic algorithms, ant colony optimization, and particle swarm optimization. In particular, particle swarm optimization converges faster than genetic algorithms, and various models have been proposed. In this paper, we propose CNN hyperparameter optimization with linearly decreasing weight particle swarm optimization (LDWPSO). In the experiments, the MNIST and CIFAR-10 datasets, which are often used as benchmarks, are employed. By optimizing CNN hyperparameters with LDWPSO and learning the MNIST and CIFAR-10 datasets, we compare the accuracy with a standard CNN based on LeNet-5. As a result, on the MNIST dataset the baseline CNN reaches 94.02% at the 5th epoch, compared to 98.95% for the LDWPSO CNN, an improvement in accuracy. On the CIFAR-10 dataset, the baseline CNN reaches 28.07% at the 10th epoch, compared to 69.37% for the LDWPSO CNN, a large improvement in accuracy.
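
The defining ingredient of LDWPSO is an inertia weight that decays linearly over the iterations. A minimal, self-contained sketch of that update follows; the objective is a toy stand-in for CNN validation error, and the bounds (0.9 down to 0.4) and coefficients (c1 = c2 = 2.0) are common PSO defaults rather than values from the paper.

```python
import random

DIM, SWARM, ITERS = 4, 20, 50
W_MAX, W_MIN = 0.9, 0.4   # typical inertia-weight bounds (assumed)
C1 = C2 = 2.0             # cognitive / social coefficients (assumed)

def objective(x):
    # Stand-in for CNN validation error over decoded hyperparameters;
    # a simple sphere function keeps the sketch self-contained.
    return sum(v * v for v in x)

pos = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(SWARM)]
vel = [[0.0] * DIM for _ in range(SWARM)]
pbest = [p[:] for p in pos]
gbest = min(pbest, key=objective)

for t in range(ITERS):
    w = W_MAX - (W_MAX - W_MIN) * t / ITERS   # linearly decreasing weight
    for i in range(SWARM):
        for d in range(DIM):
            r1, r2 = random.random(), random.random()
            vel[i][d] = (w * vel[i][d]
                         + C1 * r1 * (pbest[i][d] - pos[i][d])
                         + C2 * r2 * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if objective(pos[i]) < objective(pbest[i]):
            pbest[i] = pos[i][:]
    gbest = min(pbest, key=objective)

print("best position found:", gbest)
```

The high early weight favors exploration of the hyperparameter space, while the low final weight favors exploitation around the swarm's best position.
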
Neural networks have shown great potential in many applications, such as speech recognition, drug discovery, image classification, and object detection. Neural network models are inspired by biological neural networks, but they are optimized to perform machine learning tasks on digital computers. The proposed work explores the possibility of using living neural networks in vitro as basic computational elements for machine learning applications. A new supervised STDP-based learning algorithm is proposed in this work, which takes neuron engineering constraints into account. An accuracy of 74.7% is achieved on the MNIST benchmark for handwritten digit recognition.
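
The supervised rule itself is not spelled out in this abstract; for orientation, the textbook pair-based STDP update that such algorithms build on can be sketched as follows, with illustrative amplitudes and time constant.

```python
import math

A_PLUS, A_MINUS = 0.01, 0.012   # potentiation / depression amplitudes (illustrative)
TAU = 20.0                      # STDP time constant in ms (illustrative)

def stdp_delta_w(t_pre, t_post):
    """Pair-based STDP: potentiate when the presynaptic spike precedes
    the postsynaptic one (LTP), depress otherwise (LTD)."""
    dt = t_post - t_pre
    if dt > 0:
        return A_PLUS * math.exp(-dt / TAU)
    return -A_MINUS * math.exp(dt / TAU)

print(stdp_delta_w(10.0, 15.0))   # pre before post -> positive change (LTP)
print(stdp_delta_w(15.0, 10.0))   # post before pre -> negative change (LTD)
```
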
Federated learning (FL) is a distributed model for deep learning that integrates client-server architecture, edge computing, and real-time intelligence. FL has the capability to revolutionize machine learning (ML) but lacks practicality of implementation due to technological limitations, communication overhead, non-IID (independent and identically distributed) data, and privacy concerns. Training an ML model over heterogeneous non-IID data severely degrades the convergence rate and performance. The existing traditional and clustered FL algorithms exhibit two main limitations: inefficient client training and static hyperparameter utilization. To overcome these limitations, we propose a novel hybrid algorithm, genetic clustered FL (Genetic CFL), that clusters edge devices based on their training hyperparameters and genetically modifies the parameters cluster-wise. We then introduce an algorithm that drastically increases individual cluster accuracy by integrating density-based clustering and genetic hyperparameter optimization. The results are benchmarked using the MNIST handwritten digit dataset and the CIFAR-10 dataset. The proposed Genetic CFL shows significant improvements and works well in realistic cases of non-IID and ambiguous data.
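
As a rough illustration of the two-stage idea, the sketch below clusters toy device hyperparameters with DBSCAN and then applies a simple crossover-plus-mutation step per cluster; the clustering parameters, the operators, and the data here are all illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)

# Each edge device is summarized by its training hyperparameters
# (learning rate, batch size) -- toy values, not data from the paper.
devices = np.column_stack([
    10.0 ** rng.uniform(-4, -1, 30),   # learning rates
    2.0 ** rng.integers(4, 8, 30),     # batch sizes
])

# Density-based clustering of the devices in log-hyperparameter space.
features = np.log10(devices)
labels = DBSCAN(eps=0.5, min_samples=3).fit_predict(features)

def evolve_cluster(members, mut_scale=0.1):
    """Cluster-wise genetic step: uniform crossover of two random
    members followed by Gaussian mutation, all in log space."""
    a, b = members[rng.integers(len(members), size=2)]
    child = np.where(rng.random(members.shape[1]) < 0.5, a, b)
    return child + rng.normal(0.0, mut_scale, size=child.shape)

for c in sorted(set(labels) - {-1}):   # -1 marks DBSCAN noise points
    child = evolve_cluster(features[labels == c])
    print(f"cluster {c}: next-round lr={10**child[0]:.2e}, "
          f"batch={int(round(10**child[1]))}")
```

Clustering in log space reflects that learning rates and batch sizes vary over orders of magnitude, so nearby devices train under genuinely similar regimes.
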
Transfer Optimization is an incipient research area dedicated to solving multiple optimization tasks simultaneously. Among the different approaches that can address this problem effectively, Evolutionary Multitasking resorts to concepts from Evolutionary Computation to solve multiple problems within a single search process. In this paper we introduce a novel adaptive metaheuristic algorithm for Evolutionary Multitasking environments, coined the Adaptive Transfer-guided Multifactorial Cellular Genetic Algorithm (AT-MFCGA). AT-MFCGA relies on cellular automata to implement mechanisms for exchanging knowledge among the optimization problems under consideration. Furthermore, our approach is able to explain by itself the synergies among tasks that were encountered and exploited during the search, which helps us understand interactions between related optimization tasks. A comprehensive experimental setup is designed to assess and compare the performance of AT-MFCGA with that of other renowned evolutionary multitasking alternatives (MFEA and MFEA-II). The experiments comprise 11 multitasking scenarios composed of 20 instances of 4 combinatorial optimization problems, yielding the largest discrete multitasking environment solved to date. The results are conclusive regarding the superior quality of solutions provided by AT-MFCGA with respect to the other methods, and are complemented by a quantitative examination of genetic transferability among tasks throughout the search process.
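
AT-MFCGA's cellular-automata neighborhoods are beyond a few lines, but the multifactorial bookkeeping it shares with the MFEA family (factorial ranks per task and a skill factor per individual over a unified encoding) can be sketched on toy bit-matching tasks:

```python
import random

# Two toy tasks sharing one binary encoding: minimize Hamming
# distance to two different target patterns (not the paper's problems).
L = 16
TARGETS = [[1, 0] * (L // 2), [0, 1] * (L // 2)]

def task_cost(ind, t):
    return sum(a != b for a, b in zip(ind, TARGETS[t]))

pop = [[random.randint(0, 1) for _ in range(L)] for _ in range(40)]

# Factorial rank of each individual on each task, then skill factor =
# the task on which it ranks best -- the bookkeeping shared by
# MFEA-style multitasking methods.
ranks = [[0] * len(pop) for _ in TARGETS]
for t in range(len(TARGETS)):
    order = sorted(range(len(pop)), key=lambda i: task_cost(pop[i], t))
    for r, i in enumerate(order):
        ranks[t][i] = r
skill = [min(range(len(TARGETS)), key=lambda t: ranks[t][i])
         for i in range(len(pop))]
print("individuals assigned to task 0:", skill.count(0))
```
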
Mohsen Moradi (2017)
In this study we determined neural network weights and biases with the Imperialist Competitive Algorithm (ICA) in order to train a network for predicting earthquake intensity on the Richter scale. For this purpose, we used dependent parameters such as earthquake occurrence time, epicenter latitude and longitude in degrees, focal depth in kilometers, and the distances in kilometers from the seismological center to the epicenter and to the earthquake focal center, provided by the Berkeley database. The studied neural network has two hidden layers: the first has 16 neurons and the second has 24 neurons. Using the ICA algorithm, the average error on the test data is 0.0007 with a variance of 0.318. The earthquake prediction error on the Richter scale, by the MSE criterion, is 0.101 for the ICA algorithm, while with GA the MSE value is 0.115.
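
For reference, the core of ICA is an assimilation step that moves each colony a random fraction of the way toward its imperialist. The heavily simplified sketch below applies this to a generic weight vector with a toy cost function; empire power modeling, revolution, and inter-empire competition from the full algorithm are omitted, and the empire assignment is a crude round-robin stand-in.

```python
import random

DIM = 5                         # toy weight-vector size, not the paper's network
N_COUNTRIES, N_IMPERIALISTS = 30, 3
BETA = 2.0                      # assimilation coefficient (common ICA default)

def cost(w):
    # Stand-in for the network's MSE on the earthquake data.
    return sum(v * v for v in w)

countries = [[random.uniform(-1, 1) for _ in range(DIM)]
             for _ in range(N_COUNTRIES)]
countries.sort(key=cost)
imperialists = countries[:N_IMPERIALISTS]
colonies = countries[N_IMPERIALISTS:]

for _ in range(100):
    for i, col in enumerate(colonies):
        k = i % N_IMPERIALISTS          # crude round-robin empire assignment
        imp = imperialists[k]
        # Assimilation: move the colony a random fraction of the way
        # toward its imperialist.
        colonies[i] = [c + BETA * random.random() * (p - c)
                       for c, p in zip(col, imp)]
        # Promote a colony that becomes better than its imperialist.
        if cost(colonies[i]) < cost(imp):
            imperialists[k], colonies[i] = colonies[i], imp

print("best cost reached:", cost(min(imperialists, key=cost)))
```
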
