The choice of parameters and the design of the network architecture are important factors affecting the performance of deep neural networks. However, there has been little work on establishing a systematic way of building the structure and choosing the parameters of a neural network, and this task still depends heavily on trial and error and empirical results. Considering the many design and parameter choices, such as the number of neurons in each layer, the type of activation function, and whether to use dropout, it is very hard to cover every configuration and find the optimal structure. In this paper, we propose a novel and systematic method that autonomously and simultaneously optimizes multiple parameters of any given deep neural network by using a generative adversarial network (GAN). In the proposed approach, two different models compete and improve each other progressively with a GAN-based strategy. The approach can be used to autonomously refine the parameters and improve the accuracy of different deep neural network architectures. Without loss of generality, the proposed method has been tested with three different neural network architectures on three very different datasets and applications. The results show that the presented approach can simultaneously and successfully optimize multiple neural network parameters and achieve increased accuracy in all three scenarios.
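The abstract does not spell out how the two competing models are realized, so, purely as an illustration, here is a minimal sketch of the general idea: a "generator" distribution proposes hyperparameter configurations, and a small "discriminator" is trained to separate good proposals from bad ones, each improving the other round by round. All names (`evaluate`, `train_discriminator`, the two-dimensional encoding) are hypothetical placeholders, and the scoring function is a toy stand-in for training and validating a real network.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 2  # e.g. (normalized layer width, dropout rate); purely illustrative

def evaluate(x):
    """Toy stand-in for the validation accuracy of the network encoded by x."""
    return -np.sum((x - np.array([0.7, 0.25])) ** 2, axis=-1)

def train_discriminator(X, y):
    """Tiny logistic 'discriminator' separating above-median from
    below-median configurations, fit by plain gradient ascent."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w = np.zeros(Xb.shape[1])
    for _ in range(200):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w += 0.5 * Xb.T @ (y - p) / len(X)
    return lambda Z: 1.0 / (1.0 + np.exp(-(np.hstack([Z, np.ones((len(Z), 1))]) @ w)))

# 'Generator': a Gaussian over configurations; its mean and spread are
# pulled toward the region the discriminator currently rates as good,
# so sampler and classifier improve each other progressively.
mean, std = np.zeros(DIM), 1.0
for _ in range(30):
    X = mean + std * rng.normal(size=(64, DIM))          # propose configs
    labels = (evaluate(X) > np.median(evaluate(X))).astype(float)
    disc = train_discriminator(X, labels)                # judge configs
    elite = X[np.argsort(disc(X))[-16:]]                 # best-rated proposals
    mean, std = elite.mean(axis=0), float(max(elite.std(), 0.1))

print("refined configuration:", mean, "score:", float(evaluate(mean)))
```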
The choice of parameters and the design of the network architecture are important factors affecting the performance of deep neural networks. Genetic Algorithms (GAs) have been used before to determine the parameters of a network. Yet, GAs perform a finite…
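For contrast with the GAN-based strategy above, the following is a minimal sketch of the classical GA baseline this abstract refers to: a population of hyperparameter configurations evolved through selection, crossover, and mutation. The gene pool and fitness function are illustrative assumptions, not the cited paper's setup.

```python
import random

# Hypothetical discrete gene pool for the tuned parameters.
GENES = {
    "neurons":    [32, 64, 128, 256],
    "activation": ["relu", "tanh", "sigmoid"],
    "dropout":    [0.0, 0.25, 0.5],
}
KEYS = list(GENES)

def fitness(ind):
    """Toy stand-in for the trained network's validation accuracy."""
    return (ind["neurons"] / 256 + (ind["activation"] == "relu")
            - abs(ind["dropout"] - 0.25))

def crossover(p1, p2):
    """Uniform crossover: each gene comes from either parent."""
    return {k: random.choice([p1[k], p2[k]]) for k in KEYS}

def mutate(ind, rate=0.2):
    """Resample each gene with a small probability."""
    return {k: (random.choice(GENES[k]) if random.random() < rate else v)
            for k, v in ind.items()}

pop = [{k: random.choice(v) for k, v in GENES.items()} for _ in range(12)]
for _ in range(20):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:4]                      # truncation selection
    pop = parents + [mutate(crossover(*random.sample(parents, 2)))
                     for _ in range(8)]

print("best individual:", max(pop, key=fitness))
```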
Recently, a growing number of works have proposed driving evolutionary algorithms with machine learning models. Usually, the performance of such model-based evolutionary algorithms is highly dependent on the training quality of the adopted models. Since…
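As one concrete reading of "model-based evolutionary algorithms", the sketch below shows a surrogate-assisted loop: a cheap regression model, fitted to previously evaluated individuals, pre-screens offspring before the expensive true evaluation, so the quality of that surrogate directly governs selection quality. The quadratic objective and least-squares surrogate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, POP = 5, 20

def expensive_eval(x):
    """Stand-in for the costly true objective (e.g. training a network)."""
    return np.sum(x ** 2, axis=-1)

def fit_surrogate(X, y):
    """Least-squares quadratic surrogate; per the abstract, how well this
    model is trained largely decides how well the whole loop performs."""
    feats = lambda Z: np.hstack([Z, Z ** 2, np.ones((len(Z), 1))])
    coef, *_ = np.linalg.lstsq(feats(X), y, rcond=None)
    return lambda Z: feats(Z) @ coef

pop = rng.normal(size=(POP, DIM))
fit = expensive_eval(pop)
arch_X, arch_y = pop.copy(), fit.copy()

for _ in range(30):
    surrogate = fit_surrogate(arch_X, arch_y)
    # Generate many offspring cheaply, let the surrogate pre-screen them,
    # and spend real evaluations only on the most promising ones.
    cand = pop[rng.integers(POP, size=100)] + 0.3 * rng.normal(size=(100, DIM))
    chosen = cand[np.argsort(surrogate(cand))[:POP]]
    new_fit = expensive_eval(chosen)
    arch_X, arch_y = np.vstack([arch_X, chosen]), np.concatenate([arch_y, new_fit])
    merged, merged_fit = np.vstack([pop, chosen]), np.concatenate([fit, new_fit])
    keep = np.argsort(merged_fit)[:POP]    # keep the best (minimization)
    pop, fit = merged[keep], merged_fit[keep]

print("best fitness found:", fit.min())
```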
This work addresses the scarcity of annotated hyperspectral data required to train deep neural networks. In particular, we investigate generative adversarial networks and their application to the synthesis of consistent labeled spectra. By training such…
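The abstract is truncated here, but the stated setting (synthesizing labeled spectra with a GAN) matches a conditional GAN, sketched below in PyTorch. The band count, class count, and layer sizes are placeholders rather than the paper's architecture.

```python
import torch
import torch.nn as nn

# Illustrative sizes only; not taken from the paper.
N_BANDS, N_CLASSES, LATENT = 128, 8, 32

class Generator(nn.Module):
    """Maps (noise, class label) to a synthetic spectrum, so every
    generated sample carries a consistent label by construction."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(N_CLASSES, N_CLASSES)
        self.net = nn.Sequential(
            nn.Linear(LATENT + N_CLASSES, 256), nn.ReLU(),
            nn.Linear(256, N_BANDS), nn.Sigmoid())  # reflectances in [0, 1]

    def forward(self, z, labels):
        return self.net(torch.cat([z, self.embed(labels)], dim=1))

class Discriminator(nn.Module):
    """Scores whether a (spectrum, label) pair looks real."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(N_CLASSES, N_CLASSES)
        self.net = nn.Sequential(
            nn.Linear(N_BANDS + N_CLASSES, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1))

    def forward(self, x, labels):
        return self.net(torch.cat([x, self.embed(labels)], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_spectra, real_labels):
    """One alternating update. real_spectra: (B, N_BANDS), real_labels: (B,)."""
    B = real_spectra.size(0)
    fake_labels = torch.randint(0, N_CLASSES, (B,))
    fake = G(torch.randn(B, LATENT), fake_labels)

    # Discriminator: push real pairs toward 1, generated pairs toward 0.
    opt_d.zero_grad()
    d_loss = (bce(D(real_spectra, real_labels), torch.ones(B, 1))
              + bce(D(fake.detach(), fake_labels), torch.zeros(B, 1)))
    d_loss.backward()
    opt_d.step()

    # Generator: make its labeled fakes pass for real.
    opt_g.zero_grad()
    g_loss = bce(D(fake, fake_labels), torch.ones(B, 1))
    g_loss.backward()
    opt_g.step()
```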
Optimization for deep networks is currently a very active area of research. As neural networks become deeper, manually optimizing the network becomes harder. Mini-batch normalization, identification of effective receptive fields, momentum…
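Of the techniques listed, momentum is the easiest to pin down; the following is a minimal sketch of the classical momentum update (v ← μv − η∇L(θ); θ ← θ + v), with an illustrative gradient function and step sizes.

```python
import numpy as np

def sgd_momentum(grad_fn, theta, lr=0.1, mu=0.9, steps=100):
    """Classical momentum: the velocity v accumulates past gradients,
    smoothing the trajectory through narrow valleys of the loss."""
    v = np.zeros_like(theta)
    for _ in range(steps):
        v = mu * v - lr * grad_fn(theta)
        theta = theta + v
    return theta

# Toy usage: minimize f(x) = x^2 starting from x = 5; converges toward 0.
print(sgd_momentum(lambda x: 2 * x, np.array([5.0])))
```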