One of the first and easiest-to-use techniques for proving run time bounds for evolutionary algorithms is the so-called method of fitness levels by Wegener. It uses a partition of the search space into a sequence of levels which are traversed by the algorithm in increasing order, possibly skipping levels. An easy, but often strong upper bound for the run time can then be derived by adding the reciprocals of the probabilities to leave the levels (or upper bounds for these). Unfortunately, a similarly effective method for proving lower bounds has not yet been established. The strongest such method, proposed by Sudholt (2013), requires a careful choice of the viscosity parameters $\gamma_{i,j}$, $0 \le i < j \le n$. In this paper we present two new variants of the method, one for upper and one for lower bounds. Besides the level leaving probabilities, they rely only on the probabilities that the levels are visited at all. We show that these can be computed or estimated without much difficulty and apply our method to reprove the following known results in an easy and natural way. (i) The precise run time of the (1+1) EA on \textsc{LeadingOnes}. (ii) A lower bound for the run time of the (1+1) EA on \textsc{OneMax}, tight apart from an $O(n)$ term. (iii) A lower bound for the run time of the (1+1) EA on long $k$-paths. We also prove a tighter lower bound for the run time of the (1+1) EA on jump functions by showing that, regardless of the jump size, the algorithm can avoid jumping over the valley of low fitness only with probability $O(2^{-n})$.
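For orientation, the classical fitness level upper bound alluded to above can be stated as follows (generic notation for illustration, not necessarily that of this paper): if $A_1, \dots, A_m$ is a partition of the search space into levels of increasing fitness, with $A_m$ containing only optimal solutions, and in each iteration the algorithm leaves level $A_i$ towards a higher level with probability at least $p_i$, then the expected run time satisfies $E[T] \le \sum_{i=1}^{m-1} \frac{1}{p_i}$.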
Theoreticians have studied distributed algorithms in the radio network model for close to three decades. A significant fraction of this work focuses on lower bounds for basic communication problems such as wake-up (symmetry breaking among an unknown
Linial's seminal result shows that any deterministic distributed algorithm that finds a $3$-colouring of an $n$-cycle requires at least $\log^*(n)/2 - 1$ communication rounds. We give a new, simpler proof of this theorem.
We present skweak, a versatile, Python-based software toolkit enabling NLP developers to apply weak supervision to a wide range of NLP tasks. Weak supervision is an emerging machine learning paradigm based on a simple idea: instead of labelling data
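As a minimal sketch of the weak supervision idea described here (several noisy heuristics vote on each example and their votes are aggregated into training labels), consider the following toy example. It does not use the skweak API; the labelling functions and label names are hypothetical illustrations only.

```python
# Generic weak supervision sketch: noisy heuristic "labelling functions"
# each propose a label (or abstain), and the votes are aggregated -- here
# by simple majority -- into a single training label per example.
from collections import Counter

def lf_company_suffix(text):
    # Heuristic 1: texts ending in a corporate suffix look like organisations.
    return "ORG" if " Inc" in text or " Ltd" in text else None

def lf_titlecase_pair(text):
    # Heuristic 2: exactly two capitalised tokens look like a person name.
    caps = [t for t in text.split() if t[:1].isupper()]
    return "PERSON" if len(caps) == 2 else None

LABELLING_FUNCTIONS = [lf_company_suffix, lf_titlecase_pair]

def aggregate(text):
    """Majority vote over the non-abstaining labelling functions."""
    votes = [lf(text) for lf in LABELLING_FUNCTIONS if lf(text) is not None]
    return Counter(votes).most_common(1)[0][0] if votes else None

if __name__ == "__main__":
    print(aggregate("Jane Doe"))                 # -> PERSON
    print(aggregate("Globex Corporation Ltd"))   # -> ORG
```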
This paper describes how fitness inheritance can be used to estimate fitness for a proportion of newly sampled candidate solutions in the Bayesian optimization algorithm (BOA). The goal of estimating fitness for some candidate solutions is to reduce
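The following is a hedged sketch of the general fitness-inheritance idea (an offspring inherits an estimate derived from its parents' fitness instead of being evaluated), not the BOA-specific, model-based estimate the paper describes; the OneMax fitness function and the `inherit_prob` parameter are illustrative assumptions.

```python
import random

def onemax(x):
    """True (expensive) fitness: number of ones in the bit string."""
    return sum(x)

def evaluate_offspring(child, parent_fitnesses, inherit_prob=0.5):
    """Generic fitness inheritance: with probability inherit_prob the child
    inherits the average fitness of its parents instead of being evaluated,
    which reduces the number of true fitness evaluations."""
    if random.random() < inherit_prob:
        return sum(parent_fitnesses) / len(parent_fitnesses)  # estimated fitness
    return onemax(child)                                      # true evaluation

if __name__ == "__main__":
    random.seed(0)
    parents = [[random.randint(0, 1) for _ in range(20)] for _ in range(2)]
    parent_fits = [onemax(p) for p in parents]
    child = [random.choice(bits) for bits in zip(*parents)]   # uniform crossover
    print(evaluate_offspring(child, parent_fits))
```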
We develop taggers for multi-pronged jets that are simple functions of jet substructure (so-called 'subjettiness') variables. These taggers can be approximately decorrelated from the jet mass in quite a simple way. Specifically, we use a Logistic Regr
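As a hedged illustration of a logistic-regression tagger built from subjettiness variables (not the paper's trained tagger: the feature names tau21 and tau32 and the synthetic data are assumptions for demonstration only):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
# Toy "signal" (multi-pronged) jets tend to have lower subjettiness ratios
# than toy "background" jets; columns are (tau21, tau32).
signal = rng.normal(loc=[0.3, 0.5], scale=0.1, size=(n, 2))
background = rng.normal(loc=[0.6, 0.8], scale=0.1, size=(n, 2))
X = np.vstack([signal, background])
y = np.concatenate([np.ones(n), np.zeros(n)])

# Logistic regression on the two subjettiness ratios.
tagger = LogisticRegression().fit(X, y)
print(tagger.predict_proba([[0.35, 0.55]])[:, 1])  # signal-like probability
```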