
Unified Framework for the Adaptive Operator Selection of Discrete Parameters

Published by: Mudita Sharma Afhea
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





We conduct an exhaustive survey of adaptive operator selection (AOS) in Evolutionary Algorithms (EAs). We simplify the AOS structure by adding further components to the framework, building upon the existing categorisation of AOS methods. Beyond simplifying, we examine the commonalities among AOS methods from the literature in order to generalise them. Each component is presented with a number of alternative choices, each represented by a formula. We make three sets of comparisons. First, the methods from the literature are tested on the BBOB test bed with their default hyperparameters. Second, the hyperparameters of these methods are tuned using the offline configurator irace. Third, for a given set of problems, we use irace to select the best combination of components and tune their hyperparameters.
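
To make the surveyed structure concrete, here is a minimal sketch of one AOS loop in Python, assuming a probability-matching selection strategy with recency-weighted credit assignment; the two mutation operators, the hyperparameter values, and the toy sphere objective are illustrative stand-ins, not the actual components of the paper's framework.

```python
import random

def sphere(x):
    """Toy objective: minimise the sphere function."""
    return sum(xi * xi for xi in x)

# Two hypothetical variation operators the AOS chooses between.
def mutate_small(x):
    return [xi + random.gauss(0, 0.1) for xi in x]

def mutate_large(x):
    return [xi + random.gauss(0, 1.0) for xi in x]

operators = [mutate_small, mutate_large]
quality = [1.0, 1.0]      # running quality estimate per operator
p_min, alpha = 0.1, 0.3   # exploration floor and adaptation rate (assumed values)

x = [random.uniform(-5, 5) for _ in range(5)]
fx = sphere(x)
for _ in range(1000):
    # Probability matching: selection probability proportional to quality,
    # with floor p_min so every operator stays selectable.
    total = sum(quality)
    probs = [p_min / len(operators) + (1 - p_min) * q / total for q in quality]
    k = random.choices(range(len(operators)), weights=probs)[0]
    y = operators[k](x)
    fy = sphere(y)
    reward = max(0.0, fx - fy)                   # credit: raw fitness improvement
    quality[k] += alpha * (reward - quality[k])  # recency-weighted quality update
    if fy < fx:
        x, fx = y, fy

print(f"best f = {fx:.4g}, operator qualities = {quality}")
```

In a full AOS framework, the reward definition, the quality update, and the selection rule would each be one pluggable component with several alternative formulas.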




Read also

Shai Bagon, Meirav Galun (2012)
Discrete energy minimization is a ubiquitous task in computer vision, yet it is NP-hard in most cases. In this work we propose a multiscale framework for coping with the NP-hardness of discrete optimization. Our approach utilizes algebraic multiscale principles to efficiently explore the discrete solution space, yielding improved results on challenging, non-submodular energies for which current methods provide unsatisfactory approximations. In contrast to popular multiscale methods in computer vision, which build an image pyramid, our framework acts directly on the energy to construct an energy pyramid. Deriving a multiscale scheme from the energy itself makes our framework application-independent and widely applicable. Our framework gives rise to two complementary energy-coarsening strategies: one in which coarser scales involve fewer variables, and a more revolutionary one in which the coarser scales involve fewer discrete labels. We empirically evaluated our unified framework on a variety of both non-submodular and submodular energies, including energies from the Middlebury benchmark.
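
As a rough illustration of the "fewer labels at coarser scales" strategy, the sketch below runs iterated conditional modes (ICM) over a coarse-to-fine label pyramid on a 1D chain energy. The energy terms and the stride-based label coarsening are assumptions chosen for brevity, not the paper's algebraic multiscale construction.

```python
import random

L, n = 64, 20                                   # fine label count and chain length
data = [random.randrange(L) for _ in range(n)]  # noisy per-site observations

def energy(labels, lam=2.0):
    # Unary term: squared distance to data; pairwise: truncated linear smoothness.
    e = sum((l - d) ** 2 for l, d in zip(labels, data))
    e += lam * sum(min(abs(a - b), 10) for a, b in zip(labels, labels[1:]))
    return e

def local_energy(labels, i, c, lam):
    # Energy terms that depend on assigning label c to site i.
    e = (c - data[i]) ** 2
    if i > 0:
        e += lam * min(abs(c - labels[i - 1]), 10)
    if i < n - 1:
        e += lam * min(abs(c - labels[i + 1]), 10)
    return e

def icm(labels, candidates, sweeps=5, lam=2.0):
    # Iterated conditional modes restricted to the given candidate label set.
    for _ in range(sweeps):
        for i in range(n):
            labels[i] = min(candidates, key=lambda c: local_energy(labels, i, c, lam))
    return labels

labels = [0] * n
for stride in (8, 2, 1):                        # coarse-to-fine label pyramid
    labels = icm(labels, candidates=list(range(0, L, stride)))
    print(f"label stride {stride}: energy = {energy(labels)}")
```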
Batched network coding is a variation of random linear network coding with low computational and storage costs. To adapt to random fluctuations in the number of erasures in individual batches, it is not optimal to recode and transmit the same number of packets for all batches. Different distributed optimization models, called adaptive recoding schemes, have been formulated for this purpose. The key component of these optimization problems is the expected value of the rank distribution of a batch at the next network node, also known as the expected rank. In this paper, we put forth a unified adaptive recoding framework with an arbitrary recoding field size. We show that the expected rank functions are concave when the packet loss pattern is a stationary stochastic process, which covers, but is not limited to, independent packet loss and the Gilbert-Elliott packet loss model. Under this concavity assumption, we show that there always exists a solution that can not only minimize the randomness in the number of recoded packets but also tolerate rank-distribution errors due to inaccurate measurements or the limited precision of the machine. We provide an algorithm to obtain such an optimal solution, and propose tuning schemes that can turn any feasible solution into a desired optimal solution.
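
One way to see the role of concavity is the following sketch: with a separable concave objective, a total recoding budget can be split across batches by greedily taking the largest marginal gain in expected rank. The binomial expected-rank model and the independent loss probability p here are simplifying assumptions, not the paper's general formulation.

```python
import heapq
from math import comb

def expected_rank(r, t, p):
    """Expected rank at the next node when a batch of rank r sends t recoded
    packets, each lost independently with probability p (arrivals capped at r)."""
    return sum(min(k, r) * comb(t, k) * (1 - p) ** k * p ** (t - k)
               for k in range(t + 1))

def allocate(ranks, budget, p=0.2):
    t = [0] * len(ranks)  # number of recoded packets assigned to each batch
    # Max-heap of marginal expected-rank gains (negated for heapq's min-heap).
    heap = [(-(expected_rank(r, 1, p) - expected_rank(r, 0, p)), i)
            for i, r in enumerate(ranks)]
    heapq.heapify(heap)
    for _ in range(budget):
        _, i = heapq.heappop(heap)  # batch with the largest marginal gain
        t[i] += 1
        gain = expected_rank(ranks[i], t[i] + 1, p) - expected_rank(ranks[i], t[i], p)
        heapq.heappush(heap, (-gain, i))
    return t

# Higher-rank batches receive more of the shared transmission budget.
print(allocate(ranks=[4, 2, 1], budget=12))
```

Because each batch's expected rank is concave in the number of packets sent, the marginal gains are non-increasing, which is what makes this greedy allocation optimal for the separable objective.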
In the evolutionary computation research community, the performance of most evolutionary algorithms (EAs) depends strongly on the implemented coordinate system. However, the commonly used coordinate system is fixed and not well suited for different function landscapes, so EAs might not search efficiently. To overcome this shortcoming, in this paper we propose a framework, named ACoS, to adaptively tune the coordinate system in EAs. In ACoS, an Eigen coordinate system is established by making use of cumulative population distribution information, which can be obtained through a covariance matrix adaptation strategy and an additional archiving mechanism. Since the population distribution information reflects the features of the function landscape to some extent, EAs in the Eigen coordinate system have the capability to identify the modality of the function landscape. In addition, the Eigen coordinate system is coupled with the original coordinate system, and the two are selected according to a probability vector. The probability vector determines the selection ratio of each coordinate system for each individual, and is adaptively updated based on the information collected from the offspring. ACoS has been applied to two of the most popular EA paradigms, namely particle swarm optimization (PSO) and differential evolution (DE), for solving 30 test functions with 30 and 50 dimensions from the 2014 IEEE Congress on Evolutionary Computation benchmark set. The experimental studies demonstrate its effectiveness.
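
A minimal sketch of the Eigen coordinate idea, assuming NumPy: estimate a covariance matrix from the current population, take its eigenvectors B, and perform DE crossover in the rotated coordinates B^T x instead of the original ones. The archive mechanism and the adaptive probability vector of ACoS are omitted here for brevity, and the DE details are simplified (index collisions with the parent are ignored).

```python
import numpy as np

rng = np.random.default_rng(0)

def rosenbrock(x):
    return np.sum(100 * (x[1:] - x[:-1] ** 2) ** 2 + (1 - x[:-1]) ** 2)

dim, pop_size, F, CR = 10, 40, 0.5, 0.9
pop = rng.uniform(-2, 2, (pop_size, dim))
fit = np.array([rosenbrock(x) for x in pop])

for gen in range(200):
    # Eigen coordinate system from the population covariance matrix.
    C = np.cov(pop, rowvar=False)
    _, B = np.linalg.eigh(C)              # columns of B are eigenvectors
    for i in range(pop_size):
        r1, r2, r3 = rng.choice(pop_size, 3, replace=False)
        mutant = pop[r1] + F * (pop[r2] - pop[r3])   # DE/rand/1 mutation
        # Binomial crossover performed in the Eigen coordinate system.
        xe, ve = B.T @ pop[i], B.T @ mutant
        mask = rng.random(dim) < CR
        mask[rng.integers(dim)] = True    # guarantee at least one mutant component
        trial = B @ np.where(mask, ve, xe)
        f = rosenbrock(trial)
        if f <= fit[i]:
            pop[i], fit[i] = trial, f

print(f"best fitness after 200 generations: {fit.min():.4g}")
```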
Many model compression techniques for Deep Neural Networks (DNNs) have been investigated, including weight pruning, weight clustering, and quantization. Weight pruning leverages the redundancy in the number of weights in DNNs, while weight clustering/quantization leverages the redundancy in the number of bit representations of weights. They can be effectively combined to exploit the maximum degree of redundancy. However, the literature lacks a systematic investigation in this direction. In this paper, we fill this void and develop a unified, systematic framework for DNN weight pruning and clustering/quantization using the Alternating Direction Method of Multipliers (ADMM), a powerful technique in optimization theory for dealing with non-convex optimization problems. Both DNN weight pruning and clustering/quantization, as well as their combinations, can be solved in a unified manner. For further performance improvement within this framework, we adopt multiple techniques, including iterative weight quantization and retraining, joint weight clustering training and centroid updating, and weight clustering retraining. The proposed framework achieves significant improvements in individual weight pruning and clustering/quantization problems, as well as in their combinations. For weight pruning alone, we achieve 167x weight reduction in LeNet-5, 24.7x in AlexNet, and 23.4x in VGGNet, without any accuracy loss. For the combination of DNN weight pruning and clustering/quantization, we achieve 1,910x and 210x storage reduction of weight data on LeNet-5 and AlexNet, respectively, without accuracy loss. Our code and models are released at http://bit.ly/2D3F0np
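
The ADMM splitting used for pruning can be sketched on a toy regression problem, assuming NumPy: the Z-update is a Euclidean projection of W + U onto the constraint set (here, "at most k nonzeros"), and U accumulates the running residual. The gradient step stands in for regular DNN training, and all names and constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def project_sparse(M, k):
    """Keep the k largest-magnitude entries of M and zero the rest
    (the Euclidean projection onto the sparsity constraint set)."""
    out = np.zeros_like(M)
    idx = np.unravel_index(np.argsort(np.abs(M), axis=None)[-k:], M.shape)
    out[idx] = M[idx]
    return out

# Toy problem: fit W to data while constraining W to k nonzeros via ADMM.
X = rng.normal(size=(200, 20))
W_true = np.zeros((20, 1))
W_true[:3] = rng.normal(size=(3, 1))
y = X @ W_true

W = rng.normal(size=(20, 1)) * 0.1
Z, U = W.copy(), np.zeros_like(W)
rho, lr, k = 1.0, 1e-3, 3
for it in range(300):
    # W-update: gradient step on loss + (rho/2)||W - Z + U||^2
    grad = X.T @ (X @ W - y) / len(y) + rho * (W - Z + U)
    W -= lr * grad
    Z = project_sparse(W + U, k)    # Z-update: projection onto constraint set
    U += W - Z                      # dual variable update

print("nonzeros in Z:", int(np.count_nonzero(Z)),
      "| residual ||W - Z||:", float(np.linalg.norm(W - Z)))
```

In the combined framework, the same splitting would handle clustering/quantization by swapping the projection: instead of keeping the k largest entries, each weight is mapped to its nearest centroid or quantization level.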
Ultra-Reliable Low-Latency Communications (URLLC) is a novel feature of 5G cellular systems. To satisfy strict URLLC requirements for uplink data transmission, the 5G specifications introduce the grant-free channel access method. With this method, a User Equipment (UE) transmits packets without requesting channel resources from the base station (gNB). Under grant-free channel access, the gNB configures the uplink transmission parameters on a long-term time scale. Since channel quality can change significantly in the time and frequency domains, the gNB should select robust transmission parameters to satisfy the URLLC requirements. Many existing studies consider fixed, robust uplink transmission parameter selection that satisfies the requirements even for UEs with poor channel conditions. However, the more robust the selected transmission parameters, the lower the network capacity. In this paper, we propose an adaptive algorithm that selects the transmission parameters depending on channel quality, based on an analysis of signal-to-noise ratio statistics at the gNB. Simulation results obtained with NS-3 show that the algorithm meets the URLLC latency and reliability requirements while reducing channel resource consumption by more than a factor of two compared with fixed transmission parameter selection.
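
As a rough sketch of the SNR-statistics idea: rather than a fixed worst-case choice, pick the fastest modulation-and-coding scheme (MCS) whose threshold lies below a low percentile of the recently observed SNR, so the outage target is met with margin. The MCS table, thresholds, and class names below are illustrative assumptions, not values from the paper or the 5G NR specification.

```python
from collections import deque
import random

# (name, SNR threshold in dB, relative spectral efficiency) -- illustrative
MCS_TABLE = [("QPSK 1/4", -1.0, 0.5), ("QPSK 1/2", 2.0, 1.0),
             ("16QAM 1/2", 8.0, 2.0), ("64QAM 2/3", 14.0, 4.0)]

class AdaptiveMcsSelector:
    def __init__(self, window=200, outage_target=1e-2):
        self.snr_history = deque(maxlen=window)  # sliding window of SNR samples
        self.outage_target = outage_target

    def observe(self, snr_db):
        self.snr_history.append(snr_db)

    def select(self):
        # Use the outage_target-quantile of observed SNR as the design point,
        # so roughly that fraction of transmissions may fall below threshold.
        snrs = sorted(self.snr_history)
        q = snrs[max(0, int(self.outage_target * len(snrs)) - 1)] if snrs else -1e9
        feasible = [m for m in MCS_TABLE if m[1] <= q]
        # Fastest feasible MCS, or the most robust one if nothing qualifies.
        return max(feasible, key=lambda m: m[2]) if feasible else MCS_TABLE[0]

sel = AdaptiveMcsSelector()
for _ in range(500):
    sel.observe(random.gauss(10.0, 3.0))  # fluctuating channel SNR samples
print("selected MCS:", sel.select()[0])
```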