
Refined bounds for algorithm configuration: The knife-edge of dual class approximability

Published by Ellen Vitercik
Publication date: 2020
Research field: Informatics Engineering
Language: English





Automating algorithm configuration is growing increasingly necessary as algorithms come with more and more tunable parameters. It is common to tune parameters using machine learning, optimizing performance metrics such as runtime and solution quality. The training set consists of problem instances from the specific domain at hand. We investigate a fundamental question about these techniques: how large should the training set be to ensure that a parameter's average empirical performance over the training set is close to its expected, future performance? We answer this question for algorithm configuration problems that exhibit a widely applicable structure: the algorithm's performance as a function of its parameters can be approximated by a simple function. We show that if this approximation holds under the L-infinity norm, we can provide strong sample complexity bounds. On the flip side, if the approximation holds only under the L-p norm for p smaller than infinity, it is not possible to provide meaningful sample complexity bounds in the worst case. We empirically evaluate our bounds in the context of integer programming, one of the most powerful tools in computer science. Via experiments, we obtain sample complexity bounds that are up to 700 times smaller than the previously best-known bounds.
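To make the question concrete, here is a minimal sketch of the guarantee the abstract asks for and of the two approximation conditions it contrasts. The notation ($u_\rho$, $f_\rho$, $\gamma$, $N$) is introduced here only for illustration and may differ from the paper's own symbols: $u_\rho(x)$ is the algorithm's performance with parameter setting $\rho$ on an instance $x$ drawn from an unknown distribution $\mathcal{D}$, and $x_1,\dots,x_N$ is the training set.

```latex
% Uniform convergence: how large must N be so that, with probability at
% least 1 - \delta, empirical performance tracks expected performance
% simultaneously for every parameter setting \rho?
\left| \frac{1}{N}\sum_{i=1}^{N} u_\rho(x_i)
       \;-\; \mathbb{E}_{x \sim \mathcal{D}}\!\left[ u_\rho(x) \right] \right|
\;\le\; \epsilon
\qquad \text{for all } \rho .

% Structural assumption: each u_\rho is close to a "simple" function f_\rho.
% L_\infty-approximability (yields strong sample complexity bounds):
\sup_{x} \left| u_\rho(x) - f_\rho(x) \right| \;\le\; \gamma .

% L_p-approximability for p < \infty (not enough in the worst case):
\Bigl( \mathbb{E}_{x \sim \mathcal{D}}
       \bigl[ \left| u_\rho(x) - f_\rho(x) \right|^{p} \bigr] \Bigr)^{1/p}
\;\le\; \gamma .
```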




Read also

In the $d$-Scattered Set problem we are asked to select at least $k$ vertices of a given graph, so that the distance between any pair is at least $d$. We study the problem's (in-)approximability and offer improvements and extensions of known results for Independent Set, of which the problem is a generalization. Specifically, we show:
- A lower bound of $\Delta^{\lfloor d/2\rfloor-\epsilon}$ on the approximation ratio of any polynomial-time algorithm for graphs of maximum degree $\Delta$, and an improved upper bound of $O(\Delta^{\lfloor d/2\rfloor})$ on the approximation ratio of any greedy scheme for this problem.
- A polynomial-time $2\sqrt{n}$-approximation for bipartite graphs and even values of $d$, which matches the known lower bound by considering the only remaining case.
- A lower bound on the complexity of any $\rho$-approximation algorithm of (roughly) $2^{\frac{n^{1-\epsilon}}{\rho d}}$ for even $d$ and $2^{\frac{n^{1-\epsilon}}{\rho(d+\rho)}}$ for odd $d$ (under the randomized ETH), complemented by $\rho$-approximation algorithms with running times that (almost) match these bounds.
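As a concrete illustration of the kind of greedy scheme the first bullet analyzes, here is a minimal sketch (the adjacency-list representation, the arbitrary pick order, and the function name are assumptions made for this example, not the paper's construction): repeatedly select any remaining vertex and discard everything within distance $d-1$ of it, so that any two selected vertices end up at distance at least $d$.

```python
from collections import deque

def greedy_d_scattered_set(adj, d):
    """Greedy heuristic sketch for d-Scattered Set.

    adj: dict mapping each vertex to an iterable of its neighbours.
    d:   required minimum pairwise distance between selected vertices.
    """
    available = set(adj)
    selected = []
    while available:
        v = next(iter(available))        # arbitrary tie-breaking
        selected.append(v)
        # BFS to depth d - 1: these vertices are too close to v to keep.
        dist = {v: 0}
        queue = deque([v])
        while queue:
            u = queue.popleft()
            if dist[u] == d - 1:
                continue
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    queue.append(w)
        available -= dist.keys()
    return selected

# For d = 2 this is exactly the classic greedy Independent Set heuristic:
print(greedy_d_scattered_set({0: [1], 1: [0, 2], 2: [1]}, 2))  # e.g. [0, 2]
```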
The Southern Robotic Adaptive Optics (SRAO) instrument will bring the proven high-efficiency capabilities of Robo-AO to the Southern Hemisphere, providing the unique capability to image with high angular resolution thousands of targets per year across the entire sky. Deployed on the modern 4.1m SOAR telescope located on Cerro Tololo, the NGS-AO system will use an innovative dual-knife-edge wavefront sensor, similar to a pyramid sensor, to enable guiding on targets down to V=16 with diffraction-limited resolution in the NIR. The dual-knife-edge wavefront sensor can be up to two orders of magnitude less costly than custom glass pyramids, with similar wavefront error sensitivity and minimal chromatic aberrations. SRAO is capable of observing hundreds of targets a night through automation, allowing confirmation and characterization of the large number of exoplanets produced by current and future missions.
Dynamic Algorithm Configuration (DAC) aims to dynamically control a target algorithm's hyperparameters in order to improve its performance. Several theoretical and empirical results have demonstrated the benefits of dynamically controlling hyperparameters in domains like evolutionary computation, AI Planning or deep learning. Replicating these results, as well as studying new methods for DAC, however, is difficult since existing benchmarks are often specialized and do not share common interfaces. To facilitate benchmarking and thus research on DAC, we propose DACBench, a benchmark library that seeks to collect and standardize existing DAC benchmarks from different AI domains, as well as provide a template for new ones. For the design of DACBench, we focused on important desiderata, such as (i) flexibility, (ii) reproducibility, (iii) extensibility and (iv) automatic documentation and visualization. To show the potential, broad applicability and challenges of DAC, we explore how a set of six initial benchmarks compare in several dimensions of difficulty.
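To illustrate what "dynamically controlling a hyperparameter" means in practice, here is a minimal, self-contained sketch of a DAC control loop with a gym-style reset/step interface. The ToyDACEnv class and random_dac_policy function are hypothetical stand-ins invented for this example; they are not DACBench's actual API.

```python
import random

class ToyDACEnv:
    """Hypothetical stand-in for a DAC benchmark environment (not DACBench's API)."""

    def __init__(self, horizon=10):
        self.horizon = horizon
        self.t = 0

    def reset(self):
        self.t = 0
        return [0.0]  # observation of the target algorithm's state

    def step(self, action):
        # Toy dynamics: pretend a smaller hyperparameter value is better
        # the further the target algorithm has progressed.
        self.t += 1
        reward = -abs(action - 1.0 / self.t)
        done = self.t >= self.horizon
        return [float(self.t)], reward, done

def random_dac_policy(observation):
    """Placeholder policy: maps the observed state to a hyperparameter value."""
    return random.uniform(0.0, 1.0)

env = ToyDACEnv()
obs, done, total_reward = env.reset(), False, 0.0
while not done:
    obs, reward, done = env.step(random_dac_policy(obs))
    total_reward += reward
print(f"episode return: {total_reward:.3f}")
```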
The k-means objective is arguably the most widely-used cost function for modeling clustering tasks in a metric space. In practice and historically, k-means is thought of in a continuous setting, namely where the centers can be located anywhere in the metric space. For example, the popular Lloyd's heuristic locates a center at the mean of each cluster. Despite persistent efforts on understanding the approximability of k-means, and other classic clustering problems such as k-median and k-minsum, our knowledge of the hardness of approximation factors of these problems remains quite poor. In this paper, we significantly improve upon the hardness of approximation factors known in the literature for these objectives. We show that if the input lies in a general metric space, it is NP-hard to approximate:
- Continuous k-median to a factor of $2-o(1)$; this improves upon the previous inapproximability factor of 1.36 shown by Guha and Khuller (J. Algorithms '99).
- Continuous k-means to a factor of $4-o(1)$; this improves upon the previous inapproximability factor of 2.10 shown by Guha and Khuller (J. Algorithms '99).
- k-minsum to a factor of $1.415$; this improves upon the APX-hardness shown by Guruswami and Indyk (SODA '03).
Our results shed new and perhaps counter-intuitive light on the differences between clustering problems in the continuous setting versus the discrete setting (where the candidate centers are given as part of the input).
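For reference, a sketch of the objectives being compared, in notation introduced here only for illustration: given a point set $X$ in a metric space $(M,\mathrm{dist})$, the continuous versions let the $k$ centers range over all of $M$, while the discrete versions restrict them to a given candidate set $C \subseteq M$.

```latex
% Continuous k-means: centers may lie anywhere in the metric space M.
\min_{c_1,\dots,c_k \in M} \; \sum_{x \in X} \; \min_{1 \le j \le k} \mathrm{dist}(x, c_j)^2

% Continuous k-median: the same, but without squaring the distances.
\min_{c_1,\dots,c_k \in M} \; \sum_{x \in X} \; \min_{1 \le j \le k} \mathrm{dist}(x, c_j)

% k-minsum: partition X into k clusters, minimizing intra-cluster pairwise distances.
\min_{X = S_1 \cup \dots \cup S_k} \; \sum_{j=1}^{k} \; \sum_{x, y \in S_j} \mathrm{dist}(x, y)

% Discrete variants replace "c_j \in M" with "c_j \in C" for a given C \subseteq M.
```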
Hu Qin, Zizhen Zhang, Yubin Xie (2014)
This paper introduces a multi-period inspector scheduling problem (MPISP), which is a new variant of the multi-trip vehicle routing problem with time windows (VRPTW). In the MPISP, each inspector is scheduled to perform a route in a given multi-period planning horizon. At the end of each period, each inspector is not required to return to the depot but has to stay at one of the vertices for recuperation. If the remaining time of the current period is insufficient for an inspector to travel from his/her current vertex $A$ to a certain vertex $B$, he/she can choose either waiting at vertex $A$ until the start of the next period or traveling to a vertex $C$ that is closer to vertex $B$. Therefore, the shortest transit time between any vertex pair is affected by the length of the period and the departure time. We first describe an approach for computing the shortest transit time between any pair of vertices with an arbitrary departure time. To solve the MPISP, we then propose several local search operators adapted from classical operators for the VRPTW and integrate them into a tabu search framework. In addition, we present a constrained knapsack model that is able to produce an upper bound for the problem. Finally, we evaluate the effectiveness of our algorithm with extensive experiments based on a set of test instances. Our computational results indicate that our approach generates high-quality solutions.
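Below is a minimal sketch of the first step described above, computing period-aware earliest arrival times, under simplifying assumptions (a single fixed period length, every edge's travel time at most one period, and the "wait at the current vertex until the next period" rule); the paper's actual procedure may differ.

```python
import heapq

def earliest_arrival(adj, period_len, source, depart_time):
    """Period-aware earliest arrival times (illustrative sketch).

    adj: dict u -> list of (v, travel_time) pairs, with travel_time <= period_len.
    An edge is traversed only if it fits entirely in the current period;
    otherwise the inspector waits at u until the next period begins.
    """
    best = {source: depart_time}
    heap = [(depart_time, source)]
    while heap:
        t, u = heapq.heappop(heap)
        if t > best.get(u, float("inf")):
            continue
        for v, w in adj.get(u, []):
            remaining = period_len - (t % period_len)
            start = t if w <= remaining else t + remaining  # wait out the period
            arrive = start + w
            if arrive < best.get(v, float("inf")):
                best[v] = arrive
                heapq.heappush(heap, (arrive, v))
    return best

# Departing A mid-period, going via the intermediate vertex C beats waiting:
graph = {"A": [("B", 3), ("C", 2)], "C": [("B", 2)], "B": []}
print(earliest_arrival(graph, period_len=4, source="A", depart_time=2))
# -> {'A': 2, 'B': 6, 'C': 4}  (direct A->B would have to wait and arrive at 7)
```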
