
YAHPO Gym -- Design Criteria and a new Multifidelity Benchmark for Hyperparameter Optimization

Published by: Lennart Schneider
Publication date: 2021
Language: English





When developing and analyzing new hyperparameter optimization (HPO) methods, it is vital to empirically evaluate and compare them on well-curated benchmark suites. In this work, we list desirable properties and requirements for such benchmarks and propose a new set of challenging and relevant multifidelity HPO benchmark problems motivated by these requirements. For this, we revisit the concept of surrogate-based benchmarks and empirically compare them to more widely-used tabular benchmarks, showing that the latter ones may induce bias in performance estimation and ranking of HPO methods. We present a new surrogate-based benchmark suite for multifidelity HPO methods consisting of 9 benchmark collections that constitute over 700 multifidelity HPO problems in total. All our benchmarks also allow for querying of multiple optimization targets, enabling the benchmarking of multi-objective HPO. We examine and compare our benchmark suite with respect to the defined requirements and show that our benchmarks provide viable additions to existing suites.
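The abstract contrasts surrogate-based benchmarks, where a regression model trained on recorded runs predicts performance at any configuration and fidelity, with tabular benchmarks that only allow lookup of pre-evaluated grid points. The following is a minimal, illustrative sketch of that surrogate idea; it does not reproduce the actual YAHPO Gym interface, and the synthetic data, search space, and random-forest model are assumptions made for the example.

# Minimal sketch of a surrogate-based multifidelity benchmark (illustrative only;
# the actual YAHPO Gym package interface may differ -- consult its documentation).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical offline data: (learning_rate, weight_decay, epoch_fraction) -> validation error.
X_observed = rng.uniform(size=(500, 3))
y_observed = (X_observed[:, 0] - 0.3) ** 2 + 0.1 * (1.0 - X_observed[:, 2]) \
             + 0.05 * rng.normal(size=500)  # synthetic stand-in for real HPO runs

# Fit the surrogate once, offline.
surrogate = RandomForestRegressor(n_estimators=100, random_state=0)
surrogate.fit(X_observed, y_observed)

def objective(config, fidelity):
    """Query the surrogate at ANY configuration/fidelity, not only tabulated grid points."""
    x = np.array([[config["lr"], config["wd"], fidelity]])
    return float(surrogate.predict(x)[0])

# An HPO method under benchmark can now evaluate arbitrary points cheaply:
print(objective({"lr": 0.31, "wd": 0.02}, fidelity=0.5))

Because the surrogate interpolates between observed runs, an optimizer is not confined to the discretized grid of a table, which is exactly the source of bias the abstract attributes to tabular benchmarks.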




Read also

This paper presents a novel and lightweight hyperparameter optimization (HPO) method, MOdular FActorial Design (MOFA). MOFA pursues several rounds of HPO, where each round alternates between exploration of hyperparameter space by factorial design and exploitation of evaluation results by factorial analysis. Each round first explores the configuration space by constructing a low-discrepancy set of hyperparameters that cover this space well while de-correlating hyperparameters, and then exploits evaluation results through factorial analysis that determines which hyperparameters should be further explored and which should become fixed in the next round. We prove that the inference of MOFA achieves higher confidence than other sampling schemes. Each individual round is highly parallelizable and hence offers major improvements of efficiency compared to model-based methods. Empirical results show that MOFA achieves better effectiveness and efficiency compared with state-of-the-art methods.
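A simplified sketch of one MOFA-style round follows, alternating low-discrepancy exploration with a factorial analysis of the results. The concrete design construction and statistical test used in the paper may differ; Sobol sampling and a two-level main-effect comparison are assumptions made here to keep the example short.

# Simplified sketch of one exploration/exploitation round in the spirit of MOFA.
# Sobol sampling and a two-level main-effect analysis are illustrative assumptions.
import numpy as np
from scipy.stats import qmc

def one_round(objective, n_samples=16, names=("lr", "momentum", "dropout")):
    d = len(names)
    design = qmc.Sobol(d=d, scramble=True, seed=0).random(n_samples)  # points in [0, 1]^d
    scores = np.array([objective(dict(zip(names, row))) for row in design])

    decisions = {}
    for j, name in enumerate(names):
        low = scores[design[:, j] < 0.5].mean()    # mean score with the factor at its "low" level
        high = scores[design[:, j] >= 0.5].mean()  # mean score with the factor at its "high" level
        effect = abs(high - low)
        # Large main effect -> fix the hyperparameter at its better half for the next round;
        # small effect -> keep exploring it.
        decisions[name] = ("fix_low" if low < high else "fix_high") if effect > 0.05 else "explore"
    return decisions

# Toy objective (lower is better) just to make the sketch runnable.
print(one_round(lambda c: (c["lr"] - 0.2) ** 2 + 0.01 * c["dropout"]))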
This paper presents a new conditional GAN (named convex relaxing CGAN or crCGAN) to replicate the conventional constrained topology optimization algorithms in an extremely effective and efficient process. The proposed crCGAN consists of a generator and a discriminator, both of which are deep convolutional neural networks (CNN), and the topology design constraint can be conditionally set for both the generator and the discriminator. In order to improve the training efficiency and accuracy given the dependency between the training images and the condition, a variety of crCGAN formulations are introduced to relax the non-convex design space. These new formulations were evaluated and validated via a series of comprehensive experiments. Moreover, a minibatch discrimination technique was introduced into the crCGAN training process to stabilize convergence and avoid mode collapse. Additional verifications were conducted using the state-of-the-art MNIST digits and CIFAR-10 images conditioned by class labels. The experimental evaluations clearly reveal that the new objective formulation with minibatch discrimination training provides not only accuracy but also consistency of the designs.
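The architectural point carried by this abstract is that the condition (a class label or topology constraint) is supplied to both the generator and the discriminator. Below is a minimal sketch of that conditioning pattern; small MLPs stand in for the paper's CNNs, and the convex-relaxation formulations and minibatch discrimination are deliberately not reproduced.

# Minimal conditional-GAN wiring: the condition is concatenated to the inputs of BOTH networks.
# MLPs are placeholders for the paper's CNN architectures.
import torch
import torch.nn as nn

LATENT, COND, DATA = 64, 10, 784  # noise dim, condition dim (e.g. one-hot label), data dim

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(LATENT + COND, 256), nn.ReLU(),
                                 nn.Linear(256, DATA), nn.Tanh())
    def forward(self, z, c):
        return self.net(torch.cat([z, c], dim=1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(DATA + COND, 256), nn.LeakyReLU(0.2),
                                 nn.Linear(256, 1))
    def forward(self, x, c):
        return self.net(torch.cat([x, c], dim=1))

G, D = Generator(), Discriminator()
z = torch.randn(8, LATENT)
c = torch.eye(COND)[torch.randint(0, COND, (8,))]  # random one-hot conditions
fake = G(z, c)
print(D(fake, c).shape)  # (8, 1) real/fake logits, conditioned on the same labels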
Performance of machine learning algorithms depends critically on identifying a good set of hyperparameters. While recent approaches use Bayesian optimization to adaptively select configurations, we focus on speeding up random search through adaptive resource allocation and early-stopping. We formulate hyperparameter optimization as a pure-exploration non-stochastic infinite-armed bandit problem where a predefined resource like iterations, data samples, or features is allocated to randomly sampled configurations. We introduce a novel algorithm, Hyperband, for this framework and analyze its theoretical properties, providing several desirable guarantees. Furthermore, we compare Hyperband with popular Bayesian optimization methods on a suite of hyperparameter optimization problems. We observe that Hyperband can provide over an order-of-magnitude speedup over our competitor set on a variety of deep-learning and kernel-based learning problems.
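Hyperband's bracket structure (successive halving run at several levels of aggressiveness) is compact enough to sketch directly; the loop below follows the published pseudocode, with a toy objective standing in for real training and a placeholder random sampler.

# Hyperband's bracket loop, following the published pseudocode.
# run(config, resource) should return a validation loss after training with that budget.
import math, random

def hyperband(sample_config, run, R=81, eta=3):
    s_max = int(math.log(R) / math.log(eta) + 1e-9)
    B = (s_max + 1) * R
    best = (float("inf"), None)
    for s in range(s_max, -1, -1):                       # one bracket per value of s
        n = int(math.ceil(B / R * eta ** s / (s + 1)))   # initial number of configurations
        r = R * eta ** (-s)                               # initial resource per configuration
        configs = [sample_config() for _ in range(n)]
        for i in range(s + 1):                            # successive halving within the bracket
            n_i = int(n * eta ** (-i))
            r_i = r * eta ** i
            losses = [run(c, r_i) for c in configs]
            best = min(best, min(zip(losses, configs)))
            keep = max(1, int(n_i / eta))                 # advance only the top 1/eta fraction
            configs = [c for _, c in sorted(zip(losses, configs))[:keep]]
    return best

# Toy usage: minimize (x - 0.7)^2, where more "resource" only reduces evaluation noise.
loss, cfg = hyperband(lambda: random.random(),
                      lambda x, r: (x - 0.7) ** 2 + random.gauss(0, 1.0 / r))
print(loss, cfg)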
We present TaskSet, a dataset of tasks for use in training and evaluating optimizers. TaskSet is unique in its size and diversity, containing over a thousand tasks ranging from image classification with fully connected or convolutional neural networks, to variational autoencoders, to non-volume preserving flows on a variety of datasets. As an example application of such a dataset we explore meta-learning an ordered list of hyperparameters to try sequentially. By learning this hyperparameter list from data generated using TaskSet we achieve large speedups in sample efficiency over random search. Next we use the diversity of TaskSet and our method for learning hyperparameter lists to empirically explore the generalization of these lists to new optimization tasks in a variety of settings including ImageNet classification with Resnet50 and LM1B language modeling with transformers. As part of this work we have open-sourced code for all tasks, as well as ~29 million training curves for these problems and the corresponding hyperparameters.
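The "ordered list of hyperparameters to try sequentially" described above can be built greedily from an offline task-by-configuration results matrix: each step appends the configuration that most improves the best-so-far loss averaged across tasks. This is a simplified reading of that application; the normalization details and the actual TaskSet data are not reproduced, and the random matrix below is a stand-in.

# Greedy construction of an ordered hyperparameter list from offline results,
# in the spirit of the TaskSet application. The random matrix stands in for
# (normalized) losses of each candidate configuration on each task.
import numpy as np

rng = np.random.default_rng(0)
losses = rng.uniform(size=(50, 200))                  # tasks x candidate configurations

def greedy_list(losses, k=5):
    chosen = []
    best_so_far = np.full(losses.shape[0], np.inf)    # per-task best loss using the list so far
    for _ in range(k):
        # Score each candidate by the mean per-task loss if it were appended to the list.
        scores = np.minimum(best_so_far[:, None], losses).mean(axis=0)
        for c in chosen:
            scores[c] = np.inf                        # never pick the same configuration twice
        nxt = int(np.argmin(scores))
        chosen.append(nxt)
        best_so_far = np.minimum(best_so_far, losses[:, nxt])
    return chosen

print(greedy_list(losses))  # indices of configurations, in the order they should be tried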
Can we reduce the search cost of Neural Architecture Search (NAS) from days down to only a few hours? NAS methods automate the design of Convolutional Networks (ConvNets) under hardware constraints and they have emerged as key components of AutoML frameworks. However, the NAS problem remains challenging due to the combinatorially large design space and the significant search time (at least 200 GPU-hours). In this work, we reduce the NAS search cost down to less than 3 hours, while achieving state-of-the-art image classification results under mobile latency constraints. We propose a novel differentiable NAS formulation, namely Single-Path NAS, that uses one single-path over-parameterized ConvNet to encode all architectural decisions based on shared convolutional kernel parameters, hence drastically decreasing the search overhead. Single-Path NAS achieves state-of-the-art top-1 ImageNet accuracy (75.62%), hence outperforming existing mobile NAS methods in similar latency settings (~80ms). In particular, we enhance the accuracy-runtime trade-off in differentiable NAS by treating the Squeeze-and-Excitation path as a fully searchable operation with our novel single-path encoding. Our method has an overall cost of only 8 epochs (24 TPU-hours), which is up to 5,000x faster compared to prior work. Moreover, we study how different NAS formulation choices affect the performance of the designed ConvNets. Furthermore, we exploit the efficiency of our method to answer an interesting question: instead of empirically tuning the hyperparameters of the NAS solver (as in prior work), can we automatically find the hyperparameter values that yield the desired accuracy-runtime trade-off? We open-source our entire codebase at: https://github.com/dstamoulis/single-path-nas.
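The single-path encoding builds one over-parameterized "superkernel" and decides, via a learned threshold on the norm of a nested subset of its weights, whether a layer effectively uses a 3x3 or a 5x5 kernel (analogous gating handles expansion ratios). The sketch below illustrates only the kernel-size decision; the sigmoid relaxation, initialization, and threshold handling are simplifications and assumptions, not the paper's exact formulation.

# Sketch of the single-path "superkernel" idea: one 5x5 depthwise kernel whose outer
# ring is gated by a learned threshold on its norm, so the layer can act as 3x3 or 5x5.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SuperKernelDW(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(channels, 1, 5, 5) * 0.1)  # depthwise 5x5 weights
        self.threshold = nn.Parameter(torch.tensor(0.1))                  # learned decision threshold
        mask = torch.zeros(1, 1, 5, 5)
        mask[..., 1:4, 1:4] = 1.0
        self.register_buffer("inner_mask", mask)                          # 3x3 core of the 5x5 kernel

    def forward(self, x):
        inner = self.weight * self.inner_mask          # 3x3 part, always active
        outer = self.weight * (1 - self.inner_mask)    # outer ring that upgrades the kernel to 5x5
        # Soft indicator: use the outer ring only if its squared norm exceeds the threshold.
        gate = torch.sigmoid(outer.pow(2).sum() - self.threshold)
        w = inner + gate * outer
        return F.conv2d(x, w, padding=2, groups=x.shape[1])

layer = SuperKernelDW(channels=16)
print(layer(torch.randn(2, 16, 32, 32)).shape)   # torch.Size([2, 16, 32, 32])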


