
On the Minimal Set of Inputs Required for Efficient Neuro-Evolved Foraging

Published by: Abhinav Aggarwal
Publication date: 2019
Research field: Informatics Engineering
Paper language: English

In this paper, we perform an ablation study of NeatFA, a neuro-evolved foraging algorithm that has recently been shown to forage efficiently under different resource distributions. Through selective disabling of input signals, we identify a sufficiently minimal set of input features that contribute the most towards determining search trajectories which favor high resource collection rates. Our experiments reveal that, independent of how the resources are distributed in the arena, the signals that give the controller the ability to switch from searching for resources to transporting them back to the nest are the most critical. Additionally, we find that pheromones play a key role in boosting the controller's performance by providing signals for informed locomotion in the search for unforaged resources.
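A minimal, self-contained sketch of the ablation protocol described in the abstract: disable one input signal at a time and re-measure foraging performance against an unablated baseline. The signal names, weights, and the toy scoring function below are illustrative placeholders, not the paper's controller or simulator.

```python
# Toy ablation-study sketch: each input signal is disabled in turn and the
# (placeholder) foraging score is compared to the unablated baseline.
import random

INPUT_SIGNALS = ["carry_switch", "nest_direction", "pheromone", "resource_sense"]

def toy_foraging_score(disabled, rng):
    """Placeholder score: pretend each active signal contributes noisily."""
    weights = {"carry_switch": 0.5, "nest_direction": 0.2,
               "pheromone": 0.2, "resource_sense": 0.1}
    score = sum(w for s, w in weights.items() if s not in disabled)
    return score + rng.gauss(0, 0.02)

def evaluate(disabled, trials=50, seed=0):
    rng = random.Random(seed)
    return sum(toy_foraging_score(disabled, rng) for _ in range(trials)) / trials

def ablation_study():
    baseline = evaluate(disabled=set())
    report = {}
    for signal in INPUT_SIGNALS:
        # Fraction of baseline performance retained when this signal is disabled.
        report[signal] = evaluate(disabled={signal}) / baseline
    return report

if __name__ == "__main__":
    for signal, retained in sorted(ablation_study().items(), key=lambda kv: kv[1]):
        print(f"disable {signal:15s} -> {retained:.2%} of baseline")
```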


Read also

We present a foraging algorithm, GoldenFA, in which search direction is chosen based on the Golden Ratio. We show both theoretically and empirically that GoldenFA is more efficient for a single searcher than a comparable algorithm where search direction is chosen uniformly at random. Moreover, we give a variant of our algorithm that parallelizes linearly with the number of searchers.
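A hedged sketch of the core idea above: headings generated from the golden ratio spread more evenly around the circle than uniformly random headings. It does not reproduce GoldenFA's actual spiral or step-length schedule; the coverage metric below is only an illustrative comparison.

```python
# Compare angular coverage of golden-ratio headings vs. uniformly random ones.
import math
import random

PHI = (1 + math.sqrt(5)) / 2  # golden ratio

def golden_directions(k):
    """k headings (radians): the i-th is 2*pi * frac(i / PHI)."""
    return [2 * math.pi * ((i / PHI) % 1.0) for i in range(k)]

def random_directions(k, seed=0):
    rng = random.Random(seed)
    return [rng.uniform(0, 2 * math.pi) for _ in range(k)]

def largest_gap(angles):
    """Largest angular gap left uncovered -- smaller means better coverage."""
    a = sorted(angles)
    gaps = [b - c for b, c in zip(a[1:], a[:-1])] + [2 * math.pi - a[-1] + a[0]]
    return max(gaps)

if __name__ == "__main__":
    for k in (5, 10, 20):
        print(k,
              round(largest_gap(golden_directions(k)), 3),
              round(largest_gap(random_directions(k)), 3))
```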
We study randomized test-and-set (TAS) implementations from registers in the asynchronous shared memory model with $n$ processes. We introduce the problem of group election, a natural variant of leader election, and propose a framework for the implementation of TAS objects from group election objects. We then present two group election algorithms, each yielding an efficient TAS implementation. The first implementation has expected max-step complexity $O(\log^\ast k)$ in the location-oblivious adversary model, and the second has expected max-step complexity $O(\log\log k)$ against any read/write-oblivious adversary, where $k \leq n$ is the contention. These algorithms improve the previous upper bound by Alistarh and Aspnes [2] of $O(\log\log n)$ expected max-step complexity in the oblivious adversary model. We also propose a modification to a TAS algorithm by Alistarh, Attiya, Gilbert, Giurgiu, and Guerraoui [5] for the strong adaptive adversary, which improves its space complexity from super-linear to linear, while maintaining its $O(\log n)$ expected max-step complexity. We then describe how this algorithm can be combined with any randomized TAS algorithm that has expected max-step complexity $T(n)$ in a weaker adversary model, so that the resulting algorithm has $O(\log n)$ expected max-step complexity against any strong adaptive adversary and $O(T(n))$ in the weaker adversary model. Finally, we prove that for any randomized 2-process TAS algorithm, there exists a schedule determined by an oblivious adversary such that with probability at least $(1/4)^t$ one of the processes needs at least $t$ steps to finish its TAS operation. This complements a lower bound by Attiya and Censor-Hillel [7] on a similar problem for $n \geq 3$ processes.
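For readers unfamiliar with the object being implemented, here is a minimal sketch of the test-and-set specification only: exactly one caller wins. The paper's contribution is building this from plain read/write registers against various adversaries, which is not reproduced here; the Python lock merely emulates atomicity.

```python
# Sequential specification of a test-and-set (TAS) object, with a lock
# standing in for atomicity. Not the register-based construction from the paper.
import threading

class TestAndSet:
    def __init__(self):
        self._set = False
        self._lock = threading.Lock()

    def test_and_set(self):
        """Atomically set the bit; return True to exactly one (winning) caller."""
        with self._lock:
            won = not self._set
            self._set = True
            return won

def demo(n=8):
    tas = TestAndSet()
    winners = []
    def worker(pid):
        if tas.test_and_set():
            winners.append(pid)  # only the single winner reaches this line
    threads = [threading.Thread(target=worker, args=(i,)) for i in range(n)]
    for t in threads: t.start()
    for t in threads: t.join()
    assert len(winners) == 1
    return winners[0]

if __name__ == "__main__":
    print("winner:", demo())
```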
Humans have the ability to rapidly understand rich combinatorial concepts from limited data. Here we investigate this ability in the context of auditory signals, which have been evolved in a cultural transmission experiment to study the emergence of combinatorial structure in language. We propose a neuro-symbolic generative model which combines the strengths of previous approaches to concept learning. Our model performs fast inference drawing on neural network methods, while still retaining the interpretability and generalization from limited data seen in structured generative approaches. This model outperforms a purely neural network-based approach on classification as evaluated against both ground truth and human experimental classification preferences, and produces superior reproductions of observed signals as well. Our results demonstrate the power of flexible combined neural-symbolic architectures for human-like generalization in raw perceptual domains and offer a step towards developing precise computational models of inductive biases in language evolution.
In most online problems with delay, clairvoyance (i.e. knowing the future delay of a request upon its arrival) is required for polylogarithmic competitiveness. In this paper, we show that this is not the case for set cover with delay (SCD) -- specifically, we present the first non-clairvoyant algorithm, which is $O(\log n \log m)$-competitive, where $n$ is the number of elements and $m$ is the number of sets. This matches the best known result for the classic online set cover (a special case of non-clairvoyant SCD). Moreover, clairvoyance does not allow for significant improvement -- we present lower bounds of $\Omega(\sqrt{\log n})$ and $\Omega(\sqrt{\log m})$ for SCD which apply for the clairvoyant case. In addition, the competitiveness of our algorithm does not depend on the number of requests. Such a guarantee on the size of the universe alone was not previously known even for the clairvoyant case -- the only previously-known algorithm (due to Carrasco et al.) is clairvoyant, with competitiveness that grows with the number of requests. For the special case of vertex cover with delay, we show a simpler, deterministic algorithm which is $3$-competitive (and also non-clairvoyant).
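For context on the problem family and the meaning of $n$ elements and $m$ sets, here is the classic offline greedy set cover, which achieves an $O(\log n)$ approximation. This is emphatically not the paper's non-clairvoyant online algorithm for set cover with delay; it is only a baseline sketch of the underlying covering problem.

```python
# Classic offline greedy set cover: repeatedly pick the set covering the most
# still-uncovered elements. Illustrative baseline only.
def greedy_set_cover(universe, sets):
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(range(len(sets)), key=lambda i: len(sets[i] & uncovered))
        if not sets[best] & uncovered:
            raise ValueError("universe not coverable by the given sets")
        chosen.append(best)
        uncovered -= sets[best]
    return chosen

if __name__ == "__main__":
    U = range(1, 8)
    S = [{1, 2, 3}, {2, 4}, {3, 4, 5}, {5, 6, 7}, {1, 6}]
    print(greedy_set_cover(U, S))  # indices of a small cover, e.g. [0, 3, 1]
```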
We show that there is no deterministic local algorithm (constant-time distributed graph algorithm) that finds a $(7-\epsilon)$-approximation of a minimum dominating set on planar graphs, for any positive constant $\epsilon$. In prior work, the best lower bound on the approximation ratio has been $5-\epsilon$; there is also an upper bound of $52$.