On Optimizing Deep Convolutional Neural Networks by Evolutionary Computing

Published by: Dedimuni De Silva
Publication date: 2018
Research field: Informatics Engineering
Paper language: English


Optimization of deep networks is currently a very active area of research. As neural networks become deeper, manually optimizing a network becomes harder. Mini-batch normalization, identification of effective receptive fields, momentum updates, the introduction of residual blocks, learning-rate adaptation, etc. have been proposed to speed up the rate of convergence in the manual training process while maintaining a high accuracy level. However, finding the optimal topological structure for a given problem is becoming a challenging task that needs to be addressed immediately. A few researchers have attempted to optimize the network structure using evolutionary computing approaches. Among them, some have successfully evolved networks with reinforcement learning and long short-term memory. Very few have applied evolutionary programming to deep convolutional neural networks. These attempts mainly evolved the network structure and subsequently optimized the hyper-parameters of the network. However, a mechanism to evolve the deep network structure using the techniques currently practiced in the manual process is still absent. Incorporating such techniques at the chromosome level of evolutionary computing can certainly lead to better topological deep structures. The paper concludes by identifying the gap between evolution-based deep neural networks and deep neural networks. Further, it proposes some insights for optimizing deep neural networks using evolutionary computing techniques.
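To make the chromosome-level idea concrete, here is a minimal, hypothetical Python sketch of how manual-training techniques (batch normalization, residual links, learning rate) could sit alongside layer genes in a single chromosome and be evolved together. The encoding, mutation rates, and toy fitness function are all assumptions for illustration, not the paper's method.

```python
import random

# Hypothetical chromosome: layer genes and manual-training-technique
# genes (batch norm, residual links, learning rate) evolve together.
LAYER_CHOICES = [("conv", 32), ("conv", 64), ("pool", None)]

def random_chromosome(max_layers=6):
    layers = [random.choice(LAYER_CHOICES)
              for _ in range(random.randint(2, max_layers))]
    return {
        "layers": layers,
        "batch_norm": random.random() < 0.5,  # manual technique as a gene
        "residual": random.random() < 0.5,
        "lr": 10 ** random.uniform(-4, -1),
    }

def mutate(chrom):
    child = dict(chrom, layers=list(chrom["layers"]))
    if random.random() < 0.3 and len(child["layers"]) < 8:
        child["layers"].insert(random.randrange(len(child["layers"]) + 1),
                               random.choice(LAYER_CHOICES))
    if random.random() < 0.3:
        child["batch_norm"] = not child["batch_norm"]
    if random.random() < 0.3:
        child["lr"] *= 10 ** random.uniform(-0.5, 0.5)
    return child

def fitness(chrom):
    # Placeholder: in practice, decode the chromosome into a network,
    # train briefly, and return validation accuracy. Here we just
    # reward moderate depth and the use of batch normalization.
    return -abs(len(chrom["layers"]) - 4) + chrom["batch_norm"]

population = [random_chromosome() for _ in range(10)]
for generation in range(20):
    population.sort(key=fitness, reverse=True)
    survivors = population[:5]                       # truncation selection
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(5)]

print("best chromosome:", max(population, key=fitness))
```

Because the technique genes mutate alongside the layer genes, selection can discover combinations (e.g. residual links plus a particular depth) that a structure-only search with fixed hyper-parameters would miss.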


Read also

Due to the nonlinearity of artificial neural networks, designing topologies for deep convolutional neural networks (CNNs) is a challenging task, and often only a heuristic approach, such as trial and error, can be applied. An evolutionary algorithm can solve optimization problems where the fitness landscape is unknown. However, evolutionary algorithms are computing-resource intensive, which makes them difficult to apply to problems involving deep CNNs. In this paper, we propose an evolutionary strategy to find better topologies for deep CNNs. By incorporating the concepts of knowledge inheritance and knowledge learning, our evolutionary algorithm can be executed with limited computing resources. We applied the proposed algorithm to finding effective topologies of deep CNNs for the image classification task using the CIFAR-10 dataset. After the evolution, we analyzed the topologies that performed well for this task. Our studies verify the techniques that have been commonly used in human-designed deep CNNs. We also discovered that some graph properties greatly affect system performance. We applied the guidelines learned from the evolution and designed new network topologies that outperform Residual Net with fewer layers on the CIFAR-10, CIFAR-100, and SVHN datasets.
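As a rough illustration of the knowledge-inheritance idea (not the authors' implementation), the sketch below copies trained weights from a parent topology into the layers an offspring shares with it, so only new or changed layers need training from scratch. The layer encoding and names are invented for the example.

```python
# Hypothetical "knowledge inheritance": after a topology mutation,
# layers that survive unchanged keep their trained weights, so each
# offspring only needs a short fine-tuning run rather than full training.

def inherit_weights(parent_weights, parent_layers, child_layers):
    """Copy weights for layers shared between parent and child."""
    child_weights = {}
    for i, layer in enumerate(child_layers):
        if i < len(parent_layers) and parent_layers[i] == layer:
            child_weights[i] = parent_weights[i]  # inherited knowledge
        # new or changed layers are left for fresh initialisation
    return child_weights

parent_layers = ["conv3x3-32", "conv3x3-64", "pool"]
child_layers  = ["conv3x3-32", "conv5x5-64", "pool"]   # middle layer mutated
parent_weights = {0: "W0", 1: "W1", 2: "W2"}           # stand-ins for tensors
print(inherit_weights(parent_weights, parent_layers, child_layers))
# {0: 'W0', 2: 'W2'} -- only the unchanged layers keep their weights
```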
Koopman operator theory, a powerful framework for discovering the underlying dynamics of nonlinear dynamical systems, was recently shown to be intimately connected with neural network training. In this work, we take the first steps in making use of this connection. As Koopman operator theory is a linear theory, a successful implementation of it in evolving network weights and biases offers the promise of accelerated training, especially in the context of deep networks, where optimization is inherently a non-convex problem. We show that Koopman operator theoretic methods allow for accurate predictions of the weights and biases of feedforward, fully connected deep networks over a non-trivial range of training time. During this window, we find that our approach is >10x faster than various gradient descent based methods (e.g. Adam, Adadelta, Adagrad), in line with our complexity analysis. We end by highlighting open questions in this exciting intersection between dynamical systems and neural network theory. We highlight additional methods by which our results could be expanded to broader classes of networks and larger training intervals, which shall be the focus of future work.
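A minimal sketch of the linear-evolution idea behind this approach: treat flattened weight snapshots at successive training steps as states, fit a linear (Koopman-style) operator by least squares, then iterate it to predict future weights. The synthetic trajectory below stands in for real training snapshots; this is an assumption-laden illustration, not the paper's algorithm.

```python
import numpy as np

# Treat weight vectors at successive training steps as states w_t and
# fit a linear operator K such that w_{t+1} ~= K w_t, then iterate K
# to predict weights several steps ahead without further training.

rng = np.random.default_rng(0)
true_K = np.eye(5) * 0.95 + 0.01 * rng.standard_normal((5, 5))
snapshots = [rng.standard_normal(5)]
for _ in range(30):                        # fake "training trajectory"
    snapshots.append(true_K @ snapshots[-1])
W = np.column_stack(snapshots)             # states as columns, shape (5, 31)

X, Y = W[:, :-1], W[:, 1:]                 # paired states (w_t, w_{t+1})
K = Y @ np.linalg.pinv(X)                  # least-squares operator estimate

w_pred = np.linalg.matrix_power(K, 10) @ W[:, 0]   # predict 10 steps ahead
print("prediction error:", np.linalg.norm(w_pred - W[:, 10]))
```

In practice the snapshots would come from a short burst of gradient descent, and the fitted operator would be used to leap forward in training time, which is where the reported speed-up comes from.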
Automated machine learning (AutoML) has seen a resurgence in interest with the boom of deep learning over the past decade. In particular, Neural Architecture Search (NAS) has seen significant attention throughout the AutoML research community and has pushed forward the state of the art in a number of neural models to address grid-like data such as text and images. However, very little work has been done on Graph Neural Networks (GNNs) learning on unstructured network data. Given the huge number of choices and combinations of components such as aggregators and activation functions, determining the suitable GNN structure for a specific problem normally necessitates tremendous expert knowledge and laborious trials. In addition, slight variations of hyper-parameters such as the learning rate and dropout rate can dramatically hurt the learning capacity of a GNN. In this paper, we propose a novel AutoML framework through the evolution of individual models in a large GNN architecture space involving both neural structures and learning parameters. Instead of optimizing only the model structures with fixed parameter settings, as in existing work, an alternating evolution process is performed between GNN structures and learning parameters to dynamically find the best fit for each other. To the best of our knowledge, this is the first work to introduce and evaluate evolutionary architecture search for GNN models. Experiments and validations demonstrate that evolutionary NAS is capable of matching existing state-of-the-art reinforcement learning approaches for both semi-supervised transductive and inductive node representation learning and classification.
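The alternating evolution scheme might look roughly like the following sketch, which switches between selecting a structure with parameters fixed and mutating parameters with the structure fixed. The structure strings, mutation ranges, and placeholder fitness function are all invented for illustration.

```python
import random

# Alternating evolution: structures and learning parameters are evolved
# in turn, each evaluated against the current best of the other.
STRUCTS = ["gcn-mean-relu", "gcn-max-tanh", "gat-sum-relu", "sage-mean-elu"]

def fitness(struct, params):
    # Placeholder for training a GNN and returning validation accuracy.
    return random.random() - abs(params["lr"] - 0.01) - abs(params["dropout"] - 0.5)

best_struct = random.choice(STRUCTS)
best_params = {"lr": 0.1, "dropout": 0.2}

for round_ in range(5):
    # Evolve structures with the learning parameters held fixed...
    candidates = random.sample(STRUCTS, 3)
    best_struct = max(candidates, key=lambda s: fitness(s, best_params))
    # ...then evolve learning parameters with the structure held fixed.
    trials = [{"lr": best_params["lr"] * 10 ** random.uniform(-0.3, 0.3),
               "dropout": min(0.9, max(0.0,
                   best_params["dropout"] + random.uniform(-0.1, 0.1)))}
              for _ in range(3)]
    best_params = max(trials, key=lambda p: fitness(best_struct, p))

print(best_struct, best_params)
```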
We introduce a convolutional recurrent neural network (CRNN) for music tagging. CRNNs take advantage of convolutional neural networks (CNNs) for local feature extraction and recurrent neural networks for temporal summarisation of the extracted features. We compare the CRNN with three CNN structures that have been used for music tagging while controlling the number of parameters, with respect to their performance and training time per sample. Overall, we found that CRNNs show strong performance with respect to the number of parameters and training time, indicating the effectiveness of their hybrid structure in music feature extraction and feature summarisation.
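For readers who want the general shape of such a model, here is a hedged PyTorch sketch of the CRNN pattern: convolutional blocks for local feature extraction followed by a GRU for temporal summarisation. The layer sizes and input shape are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

# Illustrative CRNN: input is assumed to be a log-mel spectrogram of
# shape (batch, 1, n_mels, time); output is one logit per tag.
class CRNN(nn.Module):
    def __init__(self, n_mels=96, n_tags=50):
        super().__init__()
        self.conv = nn.Sequential(                  # local feature extraction
            nn.Conv2d(1, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.rnn = nn.GRU(64 * (n_mels // 4), 64, batch_first=True)
        self.head = nn.Linear(64, n_tags)           # multi-label tag logits

    def forward(self, x):
        f = self.conv(x)                            # (B, C, mels', T')
        f = f.permute(0, 3, 1, 2).flatten(2)        # (B, T', C * mels')
        _, h = self.rnn(f)                          # temporal summarisation
        return self.head(h[-1])

logits = CRNN()(torch.randn(2, 1, 96, 128))
print(logits.shape)   # torch.Size([2, 50])
```

The design choice being compared in the paper is visible here: convolutions compress each short time window into a feature vector, and the recurrent layer replaces the fully connected summarisation a plain CNN would use.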
We create a novel optimisation technique inspired by natural ecosystems, where the optimisation works at two levels: a first optimisation, the migration of genes distributed in a peer-to-peer network, operating continuously in time; this process feeds a second optimisation based on evolutionary computing that operates locally on single peers and is aimed at finding solutions that satisfy locally relevant constraints. We consider distributed evolutionary computing from the domain of computer science, together with the relevant theory from theoretical biology, including the fields of evolutionary and ecological theory, the topological structure of ecosystems, and evolutionary processes within distributed environments. We then define ecosystem-oriented distributed evolutionary computing, imbued with the properties of self-organisation, scalability and sustainability from natural ecosystems, including a novel form of distributed evolutionary computing. Finally, we conclude with a discussion of the apparent compromises resulting from the hybrid model created, such as the network topology.
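As a toy illustration of the two-level idea (not the authors' system), the island-model sketch below runs a local evolutionary loop per peer against its own constraint while genes migrate around a ring of peers. The targets, topology, and mutation scheme are invented for the example.

```python
import random

# Two-level optimisation sketch: local evolution per peer plus gene
# migration over a ring-shaped peer-to-peer network.
PEERS = 4
targets = [10, 20, 30, 40]                 # locally relevant constraints
pops = [[random.randint(0, 50) for _ in range(8)] for _ in range(PEERS)]

for step in range(100):
    for p in range(PEERS):
        # Local evolution: keep genes closest to this peer's target,
        # then refill the population with mutated copies of the best.
        pops[p].sort(key=lambda g: abs(g - targets[p]))
        pops[p] = pops[p][:4] + [g + random.randint(-2, 2)
                                 for g in pops[p][:4]]
    # Migration: each peer sends one gene to its ring neighbour.
    for p in range(PEERS):
        pops[(p + 1) % PEERS][-1] = random.choice(pops[p])

print([min(pop, key=lambda g, t=targets[i]: abs(g - t))
       for i, pop in enumerate(pops)])
```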