
Evolution Strategies Converges to Finite Differences

Posted by John Raisbeck
Publication date: 2019
Research field: Informatics Engineering
Paper language: English
By John C. Raisbeck





Since the debut of Evolution Strategies (ES) as a tool for Reinforcement Learning by Salimans et al. (2017), there has been interest in determining the exact relationship between the Evolution Strategies gradient and the gradient of a similar class of algorithms, Finite Differences (FD) (Zhang et al. 2017; Lehman et al. 2018). Several investigations of the subject have been performed, examining the formal motivational differences between ES and FD (Lehman et al. 2018) as well as their differences on a standard Machine Learning benchmark, the MNIST classification problem (Zhang et al. 2017). This paper proves that while the gradients are different, they converge as the dimension of the vector under optimization increases.
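The two estimators in question are easy to compare numerically. The sketch below is not code from the paper; the test function and the values of sigma, h, and the sample count are illustrative assumptions. It computes the Monte Carlo ES gradient estimate of Salimans et al. (2017) and a forward finite-difference gradient, then prints the cosine similarity of their directions as the dimension d grows, so the claimed convergence can be probed empirically.

```python
import numpy as np

def f(x):
    # Smooth test objective (an assumption for this sketch).
    return np.sum(np.sin(x) + 0.1 * x**2)

def es_gradient(x, sigma=0.1, n_samples=2000, rng=None):
    # Monte Carlo ES estimate: E[f(x + sigma*eps) eps] / sigma, eps ~ N(0, I).
    rng = rng if rng is not None else np.random.default_rng(0)
    eps = rng.standard_normal((n_samples, x.size))
    fvals = np.array([f(x + sigma * e) for e in eps])
    return (fvals[:, None] * eps).mean(axis=0) / sigma

def fd_gradient(x, h=0.1):
    # Forward finite differences along each coordinate axis.
    f0, g = f(x), np.empty_like(x)
    for j in range(x.size):
        xh = x.copy()
        xh[j] += h
        g[j] = (f(xh) - f0) / h
    return g

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

for d in [2, 10, 100, 1000]:
    x = np.linspace(-1.0, 1.0, d)
    print(d, round(cosine(es_gradient(x), fd_gradient(x)), 3))
```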




Read also

This paper introduces a multi-level (m-lev) mechanism into Evolution Strategies (ESs) in order to address a class of global optimization problems that could benefit from fine discretization of their decision variables. Such problems arise in engineering and scientific applications, which possess a multi-resolution control nature, and thus may be formulated either by means of low-resolution variants (providing coarser approximations with presumably lower accuracy for the general problem) or by high-resolution controls. A particular scientific application concerns practical Quantum Control (QC) problems, whose targeted optimal controls may be discretized to increasingly higher resolution, which in turn carries the potential to obtain better control yields. However, state-of-the-art derivative-free optimization heuristics for high-resolution formulations nominally call for an impractically large number of objective function calls. Therefore, an effective algorithmic treatment for such problems is needed. We introduce a framework with an automated scheme to facilitate guided-search over increasingly finer levels of control resolution for the optimization problem, whose on-the-fly learned parameters require careful adaptation. We instantiate the proposed m-lev self-adaptive ES framework by two specific strategies, namely the classical elitist single-child (1+1)-ES and the non-elitist multi-child derandomized $(\mu_W,\lambda)$-sep-CMA-ES. We first show that the approach is suitable by simulation-based optimization of QC systems which were heretofore viewed as too complex to address. We also present a laboratory proof-of-concept for the proposed approach on a basic experimental QC system objective.
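As a loose illustration of the multi-level idea only (the paper's self-adaptive m-lev scheme with on-the-fly learned parameters is considerably more elaborate, and the objective below is a placeholder rather than a quantum-control simulator), one can run a classical elitist (1+1)-ES at a coarse control resolution and upsample the control vector to the next finer level between runs:

```python
import numpy as np

def yield_fn(u):
    # Placeholder objective over a discretized control u(t) on [0, 1];
    # stands in for a simulated quantum-control yield.
    t = np.linspace(0.0, 1.0, u.size)
    return -np.sum((u - np.sin(2 * np.pi * t))**2) / u.size

def one_plus_one_es(u, sigma, evals, rng):
    # Classical elitist (1+1)-ES with 1/5th-success-rule step-size control.
    fu = yield_fn(u)
    for _ in range(evals):
        v = u + sigma * rng.standard_normal(u.size)
        fv = yield_fn(v)
        if fv > fu:
            u, fu, sigma = v, fv, sigma * 1.5   # expand step size on success
        else:
            sigma *= 0.9                        # shrink on failure (~1.5**-0.25)
    return u, fu, sigma

rng = np.random.default_rng(1)
u, sigma = np.zeros(4), 0.3                     # start at the coarsest level
for level in range(5):                          # 4 -> 8 -> 16 -> 32 -> 64 points
    u, fu, sigma = one_plus_one_es(u, sigma, evals=500, rng=rng)
    print(f"resolution {u.size:3d}: yield {fu:.4f}")
    u = np.repeat(u, 2)                         # refine: split each segment in two
```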
Many applications in machine learning require optimizing a function whose true gradient is unknown, but where surrogate gradient information (directions that may be correlated with, but not necessarily identical to, the true gradient) is available instead. This arises when an approximate gradient is easier to compute than the full gradient (e.g. in meta-learning or unrolled optimization), or when a true gradient is intractable and is replaced with a surrogate (e.g. in certain reinforcement learning applications, or when using synthetic gradients). We propose Guided Evolutionary Strategies, a method for optimally using surrogate gradient directions along with random search. We define a search distribution for evolutionary strategies that is elongated along a guiding subspace spanned by the surrogate gradients. This allows us to estimate a descent direction which can then be passed to a first-order optimizer. We analytically and numerically characterize the tradeoffs that result from tuning how strongly the search distribution is stretched along the guiding subspace, and we use this to derive a setting of the hyperparameters that works well across problems. Finally, we apply our method to example problems, demonstrating an improvement over both standard evolutionary strategies and first-order methods (that directly follow the surrogate gradient). We provide a demo of Guided ES at https://github.com/brain-research/guided-evolutionary-strategies
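A minimal sketch of the elongated search distribution described here, assuming a covariance of the form alpha/n * I + (1-alpha)/k * U U^T over an orthonormal basis U of the surrogate-gradient subspace; the hyperparameter values and the toy usage example are illustrative, not the paper's tuned setting:

```python
import numpy as np

def guided_es_step(x, f, surrogate_grads, alpha=0.5, sigma=0.1, pairs=10, rng=None):
    # Search covariance: sigma^2 * (alpha/n * I + (1 - alpha)/k * U U^T),
    # elongated along the guiding subspace spanned by the surrogate gradients.
    rng = rng if rng is not None else np.random.default_rng(0)
    n = x.size
    U, _ = np.linalg.qr(np.stack(surrogate_grads, axis=1))  # orthonormal basis, n x k
    k = U.shape[1]
    g = np.zeros(n)
    for _ in range(pairs):
        eps = sigma * (np.sqrt(alpha / n) * rng.standard_normal(n)
                       + np.sqrt((1.0 - alpha) / k) * U @ rng.standard_normal(k))
        g += (f(x + eps) - f(x - eps)) * eps    # antithetic pair
    return g / (2.0 * sigma**2 * pairs)         # descent direction for a first-order optimizer

# Toy usage: quadratic loss with a noisy surrogate gradient.
loss = lambda x: 0.5 * x @ x
x0 = np.ones(50)
surrogate = x0 + 0.5 * np.random.default_rng(2).standard_normal(50)
g = guided_es_step(x0, loss, [surrogate])
print(float(g @ x0) > 0)   # estimated direction correlates with the true gradient x0
```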
Alexandre Chotard (2014)
This paper analyses a $(1,\lambda)$-Evolution Strategy, a randomised comparison-based adaptive search algorithm, on a simple constraint optimisation problem. The algorithm uses resampling to handle the constraint and optimizes a linear function with a linear constraint. Two cases are investigated: first the case where the step-size is constant, and second the case where the step-size is adapted using path length control. We exhibit for each case a Markov chain whose stability analysis would allow us to deduce the divergence of the algorithm depending on its internal parameters. We show divergence at a constant rate when the step-size is constant. We sketch that with step-size adaptation geometric divergence takes place. Our results complement previous studies where stability was assumed.
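A short sketch of the constant step-size case as described in this abstract (the particular objective, constraint, and parameter values are illustrative assumptions): a $(1,\lambda)$-ES maximizes a linear function, resamples any offspring that violates the linear constraint, and keeps the best of the $\lambda$ feasible offspring.

```python
import numpy as np

rng = np.random.default_rng(0)
n, lam, sigma = 10, 6, 1.0                    # constant step size
f = lambda x: x[0]                            # linear objective
feasible = lambda x: x[1] <= 1.0              # linear constraint

x = np.zeros(n)
for t in range(100):
    offspring = []
    for _ in range(lam):
        y = x + sigma * rng.standard_normal(n)
        while not feasible(y):                # resampling handles the constraint
            y = x + sigma * rng.standard_normal(n)
        offspring.append(y)
    x = max(offspring, key=f)                 # comma selection: best of lambda
print(f(x))   # grows roughly linearly in t, i.e. divergence at a constant rate
```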
Stochastic gradient descent (SGD) is widely believed to perform implicit regularization when used to train deep neural networks, but the precise manner in which this occurs has thus far been elusive. We prove that SGD minimizes an average potential over the posterior distribution of weights along with an entropic regularization term. This potential is however not the original loss function in general. So SGD does perform variational inference, but for a different loss than the one used to compute the gradients. Even more surprisingly, SGD does not even converge in the classical sense: we show that the most likely trajectories of SGD for deep networks do not behave like Brownian motion around critical points. Instead, they resemble closed loops with deterministic components. We prove that such out-of-equilibrium behavior is a consequence of highly non-isotropic gradient noise in SGD; the covariance matrix of mini-batch gradients for deep networks has a rank as small as 1% of its dimension. We provide extensive empirical validation of these claims, proven in the appendix.
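For orientation, the variational claim in this abstract can be written compactly. The notation below is assumed (a paraphrase of the abstract's sentence, not the paper's exact statement): $\rho^{\mathrm{ss}}$ is the stationary distribution of the weights under SGD, $H$ is the entropy, and $\beta^{-1}$ is an effective temperature set by the learning rate and batch size.

```latex
\rho^{\mathrm{ss}}
  = \operatorname*{arg\,min}_{\rho}\;
    \underbrace{\mathbb{E}_{x\sim\rho}\!\left[\Phi(x)\right]}_{\text{average potential}}
    \;-\; \beta^{-1}\,\underbrace{H(\rho)}_{\text{entropic regularizer}},
  \qquad \Phi \neq f \ \text{in general},
```

where $f$ is the training loss whose mini-batch gradients drive the updates; the abstract's point is that $\Phi$ coincides with $f$ only when the gradient noise is isotropic.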
Traditional optimization algorithms search for a single global optimum that maximizes (or minimizes) the objective function. Multimodal optimization algorithms search for the highest peaks in the search space, of which there can be more than one. Quality-Diversity algorithms are a recent addition to the evolutionary computation toolbox that do not merely search for a single set of local optima, but instead try to illuminate the search space. In effect, they provide a holistic view of how high-performing solutions are distributed throughout a search space. The main differences with multimodal optimization algorithms are that (1) Quality-Diversity typically works in the behavioral space (or feature space), and not in the genotypic (or parameter) space, and (2) Quality-Diversity attempts to fill the whole behavior space, even if the niche is not a peak in the fitness landscape. In this chapter, we provide a gentle introduction to Quality-Diversity optimization, discuss the main representative algorithms, and the main current topics under consideration in the community. Throughout the chapter, we also discuss several successful applications of Quality-Diversity algorithms, including deep learning, robotics, and reinforcement learning.
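As a concrete instance of the ideas above, here is a minimal sketch of MAP-Elites, one of the representative Quality-Diversity algorithms such a chapter covers. The toy fitness function, behavior descriptor, and grid bounds are placeholder assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
fitness = lambda x: -float(np.sum(x**2))      # solution quality
behavior = lambda x: x[:2]                    # 2-D behavior descriptor

def cell(b):
    # Discretize the descriptor into a 10 x 10 grid over [-2, 2]^2.
    return tuple(np.clip(((b + 2.0) / 4.0 * 10).astype(int), 0, 9))

archive = {}                                  # grid cell -> (fitness, solution)
for _ in range(5000):
    if archive:
        keys = list(archive)
        parent = archive[keys[rng.integers(len(keys))]][1]
        x = parent + 0.1 * rng.standard_normal(parent.size)  # mutate a random elite
    else:
        x = rng.uniform(-2.0, 2.0, size=5)    # random bootstrap
    c, fx = cell(behavior(x)), fitness(x)
    # Insert if the niche is empty or the newcomer is fitter: the archive
    # fills the behavior space, not just the global peak.
    if c not in archive or fx > archive[c][0]:
        archive[c] = (fx, x)
print(len(archive), "of 100 cells illuminated")
```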
