
Differential Evolution with Reversible Linear Transformations

Published by Jakub Tomczak
Publication date: 2020
Research field: Informatics Engineering
Language: English





Differential evolution (DE) is a well-known type of evolutionary algorithm (EA). Like other EA variants, it can suffer from small populations and lose diversity too quickly. This paper presents a new approach to mitigate this issue: we propose to generate new candidate solutions by applying a reversible linear transformation to a triplet of solutions from the population. In other words, the population is enlarged with newly generated individuals without evaluating their fitness. We assess our method on three problems: (i) benchmark function optimization, (ii) discovering parameter values of the gene repressilator system, and (iii) learning neural networks. The empirical results indicate that the proposed approach outperforms vanilla DE and a version of DE that applies differential mutation three times on all testbeds.
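The abstract does not spell out the exact form of the reversible transformation. As a rough sketch of the idea in Python, the code below contrasts classic DE/rand/1 mutation with a hypothetical invertible linear map acting on a triplet of solutions, producing three extra candidates per triplet without any fitness evaluations; the specific map is an assumption for illustration, not necessarily the one used in the paper.

import numpy as np

def de_rand_1(x1, x2, x3, F=0.5):
    """Classic DE/rand/1 differential mutation: one new candidate per triplet."""
    return x1 + F * (x2 - x3)

def reversible_triplet_transform(x1, x2, x3, F=0.5):
    """Illustrative reversible linear map acting on a triplet of solutions.

    It is a composition of three invertible shear steps, so the original triplet
    (x1, x2, x3) can always be recovered from (y1, y2, y3).  This is an assumed
    example, not necessarily the transformation proposed in the paper.
    """
    y1 = x1 + F * (x2 - x3)
    y2 = x2 + F * (x3 - y1)
    y3 = x3 + F * (y1 - y2)
    return y1, y2, y3

# Enlarge a small population with three extra candidates (no fitness calls yet).
rng = np.random.default_rng(0)
population = rng.normal(size=(6, 10))                  # 6 individuals, 10-dimensional
i, j, k = rng.choice(len(population), size=3, replace=False)
new_candidates = reversible_triplet_transform(population[i], population[j], population[k])
enlarged = np.vstack([population, *new_candidates])    # 9 individuals total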




Read also

We conduct a first fundamental analysis of the working principles of binary differential evolution (BDE), an optimization heuristic for binary decision variables that was derived by Gong and Tuson (2007) from the very successful classic differential evolution (DE) for continuous optimization. We show that unlike most other optimization paradigms, it is stable in the sense that neutral bit values are sampled with probability close to $1/2$ for a long time. This is generally a desirable property; however, it makes it harder to find the optima for decision variables with small influence on the objective function. This can result in an optimization time exponential in the dimension when optimizing simple symmetric functions like OneMax. On the positive side, BDE quickly detects and optimizes the most important decision variables. For example, dominant bits converge to the optimal value in time logarithmic in the population size. This enables BDE to optimize the most important bits very fast. Overall, our results indicate that BDE is an interesting optimization paradigm with characteristics significantly different from classic evolutionary algorithms or estimation-of-distribution algorithms (EDAs). On the technical side, we observe that the strong stochastic dependencies in the random experiment describing a run of BDE prevent us from proving all desired results with the mathematical rigor that was successfully used in the analysis of other evolutionary algorithms. Inspired by mean-field approaches in statistical physics, we propose a more independent variant of BDE, show experimentally its similarity to BDE, and prove some statements rigorously only for the independent variant. Such a semi-rigorous approach might be interesting for other problems in evolutionary computation where purely mathematical methods have failed so far.
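For orientation, here is a minimal sketch of one common way the DE/rand/1 rule is binarised, paired with binomial crossover: wherever the two difference parents disagree, the corresponding bit of the base parent is flipped with probability F. This formulation is an assumption made for illustration; the exact operator of Gong and Tuson (2007) may differ in its details.

import random

def bde_offspring(target, x1, x2, x3, F=0.5, CR=0.3):
    """Binary differential mutation plus binomial crossover (illustrative form).

    Mutation: bit i of the base parent x1 is flipped with probability F wherever
    the difference parents x2 and x3 disagree.  Crossover: each mutant bit
    replaces the target bit with probability CR; one random position is forced
    to come from the mutant (standard binomial crossover convention).
    """
    n = len(target)
    mutant = [
        1 - x1[i] if (x2[i] != x3[i] and random.random() < F) else x1[i]
        for i in range(n)
    ]
    forced = random.randrange(n)
    return [
        mutant[i] if (i == forced or random.random() < CR) else target[i]
        for i in range(n)
    ]

# Example: one offspring on a 10-bit problem with random parents.
bits = lambda n: [random.randint(0, 1) for _ in range(n)]
print(bde_offspring(bits(10), bits(10), bits(10), bits(10)))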
In this work we introduce a new system of partial differential equations as a simplified model for the evolution of reversible martensitic transformations under thermal cycling in low hysteresis alloys. The model is developed in the context of nonlinear continuum mechanics, where the theory developed so far is mostly static and cannot capture the influence of dynamics on martensitic microstructures. First, we prove existence of weak solutions; second, we study the physically relevant limit in which the interface energy density vanishes and the elastic constants tend to infinity. The limit problem provides a framework for the moving mask approximation recently introduced by the author. In the last section we study the limit equations in a one-dimensional setting. After closing the equations with a constitutive relation between the phase interface velocity and the temperature of the one-dimensional sample, the equations become a two-phase Stefan problem with a kinetic condition at the free boundary. Under some further assumptions, we show that the phase interface reaches the domain boundary in finite time.
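For readers unfamiliar with the terminology, a generic one-dimensional two-phase Stefan problem with a kinetic condition at the free boundary $x = s(t)$ reads as follows (up to sign and normalisation conventions, and with boundary and initial data omitted); this standard form is given for orientation only and need not coincide with the exact limit system derived in the paper.

\begin{aligned}
&\partial_t u = \partial_{xx} u
  && \text{in } \{0 < x < s(t)\} \text{ and } \{s(t) < x < 1\},\\
&L\,\dot{s}(t) = \partial_x u\big(s(t)^-,t\big) - \partial_x u\big(s(t)^+,t\big)
  && \text{(Stefan condition: latent heat balance at the interface)},\\
&\dot{s}(t) = V\!\big(u(s(t),t)\big)
  && \text{(kinetic condition: interface speed as a function of temperature)}.
\end{aligned}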
We consider the problem of computing a binary linear transformation using unreliable components when all circuit components are unreliable. Two noise models of unreliable components are considered: probabilistic errors and permanent errors. We introduce the ENCODED technique, which ensures that the error probability of computing the linear transformation stays bounded below a small constant independent of the size of the linear transformation, even when all logic gates in the computation are noisy. Further, we show that the scheme requires fewer operations (in an order sense) than its uncoded counterpart. By deriving a lower bound, we show that in some cases the scheme is order-optimal. Using these results, we examine the gain in energy efficiency from the use of a voltage-scaling scheme in which gate energy is reduced by lowering the supply voltage. We use a gate energy-reliability model to show that tuning gate energy appropriately at different stages of the computation (dynamic voltage scaling), in conjunction with ENCODED, can lead to order-sense energy savings over the classical uncoded approach. Finally, we also examine the problem of computing a linear transformation when noiseless decoders can be used, providing upper and lower bounds for the problem.
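The coding-theoretic construction of ENCODED is not reproduced here; as a stand-in illustration of how redundancy can protect a linear computation, the sketch below uses the classical algorithm-based fault tolerance checksum trick over GF(2): append the XOR of all rows of the matrix as an extra row, then check the extra output bit against the XOR of the regular outputs. This is a deliberately simpler technique than the paper's scheme.

import numpy as np

def checksum_matvec(A, x, flip_prob=0.0, rng=None):
    """Matrix-vector product over GF(2) with a checksum row for error detection.

    The appended row of A_ext is the mod-2 sum of all rows of A, so in an
    error-free computation the last output bit equals the XOR of the others;
    a mismatch flags that at least one fault occurred.  ABFT-style illustration,
    not the ENCODED construction from the paper.
    """
    rng = rng or np.random.default_rng()
    A_ext = np.vstack([A, A.sum(axis=0) % 2])            # append checksum row
    y_ext = (A_ext @ x) % 2                              # ideal (noiseless) product
    if flip_prob > 0:                                    # inject random bit flips
        y_ext = (y_ext + (rng.random(y_ext.shape) < flip_prob)) % 2
    y, check = y_ext[:-1], y_ext[-1]
    return y, bool((y.sum() % 2) != check)               # (result, error_detected)

A = np.array([[1, 0, 1], [0, 1, 1]])
x = np.array([1, 1, 0])
print(checksum_matvec(A, x))                  # clean run: error flag is False
print(checksum_matvec(A, x, flip_prob=0.5))   # noisy run: flag may be True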
Alexandre Chotard, 2014
This paper analyses a $(1,\lambda)$-Evolution Strategy, a randomised comparison-based adaptive search algorithm, on a simple constraint optimisation problem. The algorithm uses resampling to handle the constraint and optimizes a linear function with a linear constraint. Two cases are investigated: first the case where the step-size is constant, and second the case where the step-size is adapted using path length control. We exhibit for each case a Markov chain whose stability analysis would allow us to deduce the divergence of the algorithm depending on its internal parameters. We show divergence at a constant rate when the step-size is constant. We sketch that with step-size adaptation geometric divergence takes place. Our results complement previous studies where stability was assumed.
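To make the setting concrete, the following is a minimal sketch of the constant step-size case: a (1,lambda)-ES minimising a linear function under a linear constraint, where infeasible offspring are handled purely by resampling. The objective and constraint used below are placeholders, and step-size adaptation via path length control is not implemented.

import numpy as np

def one_comma_lambda_step(parent, sigma, f, feasible, lam=10, rng=None):
    """One iteration of a (1,lambda)-ES with resampling for constraint handling.

    Each of the lam offspring is drawn as parent + sigma * N(0, I); draws that
    violate the constraint are rejected and resampled until feasible.  Comma
    selection then keeps only the best offspring (the parent is discarded).
    The step size sigma is held constant here.
    """
    rng = rng or np.random.default_rng()
    offspring = []
    for _ in range(lam):
        candidate = parent + sigma * rng.standard_normal(parent.shape)
        while not feasible(candidate):                   # resampling step
            candidate = parent + sigma * rng.standard_normal(parent.shape)
        offspring.append(candidate)
    return min(offspring, key=f)                         # minimisation

# Placeholder problem: minimise f(x) = x[0] subject to x[1] <= 1.
f = lambda x: x[0]
feasible = lambda x: x[1] <= 1.0
x = np.zeros(5)
for _ in range(50):
    x = one_comma_lambda_step(x, sigma=1.0, f=f, feasible=feasible)
print(f(x))   # decreases by a roughly constant amount per iteration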
This paper addresses the development of a covariance matrix self-adaptation evolution strategy (CMSA-ES) for solving optimization problems with linear constraints. The proposed algorithm is referred to as Linear Constraint CMSA-ES (lcCMSA-ES). It uses a specially built mutation operator together with repair by projection to satisfy the constraints. The lcCMSA-ES evolves itself on a linear manifold defined by the constraints. The objective function is only evaluated at feasible search points (interior point method). This is a property often required in application domains such as simulation optimization and finite element methods. The algorithm is tested on a variety of different test problems, revealing considerable results.
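As a simplified illustration of the repair-by-projection idea for linear equality constraints $Ax = b$, the sketch below orthogonally projects an infeasible mutant back onto the constraint manifold; the lcCMSA-ES itself combines such a repair with a specially built mutation operator and covariance self-adaptation, and its treatment of general linear constraints may differ from this reduced example.

import numpy as np

def project_onto_affine(x, A, b):
    """Orthogonally project x onto the affine subspace {z : A z = b}.

    Uses the closed form x - A^T (A A^T)^{-1} (A x - b), valid when A has full
    row rank.  Infeasible mutants repaired this way land on the linear manifold
    defined by the equality constraints.  Simplified illustration only.
    """
    residual = A @ x - b
    correction = A.T @ np.linalg.solve(A @ A.T, residual)
    return x - correction

# Example: keep a mutated search point on the plane x0 + x1 + x2 = 1.
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([1.0])
mutant = np.array([0.7, 0.9, -0.2])        # violates the constraint (sum = 1.4)
repaired = project_onto_affine(mutant, A, b)
print(repaired, A @ repaired)              # A @ repaired is approximately [1.0]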