In this paper, we present a novel neuroevolutionary method to identify the architecture and hyperparameters of convolutional autoencoders. To the best of our knowledge, this is the first use of a hypervolume indicator in the context of neural architecture search for autoencoders. Results show that images were compressed by a factor of more than 10, while still retaining enough information to achieve image classification for the majority of the tasks. Thus, this new approach can be used to speed up the AutoML pipeline for image compression.
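A minimal sketch of the kind of hypervolume computation such a search could rely on: the 2D dominated area of a Pareto front of candidate autoencoders under two minimized objectives, relative to a reference point. The function name, the example objectives (reconstruction error and compressed size), and the numbers are illustrative assumptions, not the paper's actual implementation.

```python
# Sketch: 2D hypervolume of a Pareto front, both objectives minimized.
# Objective names and values below are hypothetical.

def hypervolume_2d(front, reference):
    """Area dominated by `front` and bounded above by `reference`."""
    # Keep only points that strictly dominate the reference point.
    pts = [(f1, f2) for f1, f2 in front if f1 < reference[0] and f2 < reference[1]]
    pts.sort()  # ascending in the first objective
    hv, prev_f2 = 0.0, reference[1]
    for f1, f2 in pts:
        if f2 < prev_f2:  # non-dominated along the sweep
            hv += (reference[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

# Example: candidate autoencoders scored as (reconstruction error, compressed size).
front = [(0.10, 0.35), (0.05, 0.60), (0.20, 0.20)]
print(hypervolume_2d(front, reference=(1.0, 1.0)))  # 0.725
```

A larger hypervolume indicates a front that trades off the two objectives more favorably, which is what makes it a natural single scalar to guide a multi-objective architecture search.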
Mixed-precision quantization is a powerful tool to enable memory and compute savings in neural network workloads by assigning different bit-width precisions to separate compute operations. Recent research has shown significant progress in app
Previous theoretical work on multi-objective evolutionary algorithms mostly considers easy problems composed of unimodal objectives. This paper takes a first step towards a deeper understanding of how evolutionary algorithms solve multi-modal mul
Cooperative autonomous approaches to avoiding collisions among small Unmanned Aerial Vehicles (UAVs) are central to the safe integration of UAVs within civilian airspace. One potential online cooperative approach is the concept of reciprocal actions,
The structure and performance of neural networks are intimately connected, and evolutionary algorithms can be used to explore network structures optimally adapted to a given task. Guiding such neuroevolution with additional objectives rela
Music auto-tagging is often handled in a similar manner to image classification by regarding the 2D audio spectrogram as image data. However, music auto-tagging is distinguished from image classification in that the tags are highly diverse and have d