We introduce and demonstrate a semi-empirical procedure for determining approximate objective functions suitable for optimizing arbitrarily parameterized proposal distributions in MCMC methods. Our proposed Ab Initio objective functions are weighted combinations of functions chosen to satisfy constraints on their global optima and on coordinate invariance, properties we argue should be upheld by any general measure of MCMC efficiency used for proposal optimization. The coefficients of Ab Initio objective functions are determined so as to recover the optimal MCMC behavior prescribed by established theoretical analysis on chosen reference problems. Our experiments demonstrate that, when optimizing highly expressive proposal distributions, Ab Initio objective functions maintain favorable performance and more desirable optimization behavior than existing objective functions for MCMC optimization. We argue that Ab Initio objective functions are sufficiently robust to enable the confident optimization of MCMC proposal distributions parameterized by deep generative networks, extending beyond the traditional limitations of individual MCMC schemes.
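To make the construction concrete, the sketch below is a minimal, hypothetical illustration of the recipe the abstract describes, not the authors' implementation: two candidate efficiency measures (expected squared jump distance and acceptance rate, both assumed choices) are combined with weights, and the weights are calibrated on a reference problem so that the combined objective's maximizer reproduces the ~0.234 acceptance rate that classical theory (Roberts, Gelman & Gilks, 1997) prescribes for random-walk Metropolis. All function names and the grid-search calibration are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def rwm_chain(sigma, n_steps=4000, dim=10):
    """Random-walk Metropolis targeting a standard Gaussian (the reference problem)."""
    x = np.zeros(dim)
    logp = -0.5 * x @ x
    samples, accepts = [], 0
    for _ in range(n_steps):
        prop = x + sigma * rng.standard_normal(dim)
        logp_prop = -0.5 * prop @ prop
        if np.log(rng.random()) < logp_prop - logp:
            x, logp = prop, logp_prop
            accepts += 1
        samples.append(x.copy())
    return np.array(samples), accepts / n_steps

def candidate_measures(sigma):
    """Two candidate efficiency measures (illustrative choices, not the paper's)."""
    samples, acc_rate = rwm_chain(sigma)
    esjd = np.mean(np.sum(np.diff(samples, axis=0) ** 2, axis=1))
    return np.array([esjd, acc_rate])

sigmas = np.linspace(0.1, 3.0, 15)
raw = np.array([candidate_measures(s) for s in sigmas])
acc = raw[:, 1]            # raw acceptance rates, kept for calibration
F = raw / raw.max(axis=0)  # normalize measures to comparable scales

# Calibrate the weights on the reference problem: choose the convex combination
# whose maximizing step size reproduces the ~0.234 acceptance rate that theory
# prescribes for random-walk Metropolis in high dimensions.
best_w, best_err = None, np.inf
for w0 in np.linspace(0.0, 1.0, 21):
    w = np.array([w0, 1.0 - w0])
    err = abs(acc[int(np.argmax(F @ w))] - 0.234)
    if err < best_err:
        best_w, best_err = w, err

print("calibrated weights:", best_w)
print("selected step size:", sigmas[int(np.argmax(F @ best_w))])
```

Once calibrated on the reference problem, such a weighted objective could in principle be reused to tune richer proposal parameterizations where no closed-form optimal behavior is known.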
We develop parallel predictive entropy search (PPES), a novel algorithm for Bayesian optimization of expensive black-box objective functions. At each iteration, PPES aims to select a batch of points that will maximize the information gain about the global maximizer of the objective. …
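PPES itself relies on expectation-propagation approximations that do not fit in a short sketch, so the toy code below instead uses parallel Thompson sampling, a simpler batch strategy with the same flavor (plainly a stand-in, not the paper's algorithm): each batch point is the argmax of one function drawn from the Gaussian-process posterior, so the batch spreads over plausible locations of the global maximizer. The toy objective, kernel settings, and grid are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def rbf_kernel(a, b, ell=0.2):
    """Squared-exponential kernel on scalar inputs."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

# A few observed evaluations of the expensive black-box objective on [0, 1].
X_obs = np.array([0.1, 0.4, 0.9])
y_obs = np.sin(6 * X_obs)  # toy stand-in for the true objective

# Gaussian-process posterior over a dense grid of candidate inputs.
X_grid = np.linspace(0.0, 1.0, 200)
K = rbf_kernel(X_obs, X_obs) + 1e-6 * np.eye(len(X_obs))
K_s = rbf_kernel(X_grid, X_obs)
K_ss = rbf_kernel(X_grid, X_grid)
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_obs))
mu = K_s @ alpha
V = np.linalg.solve(L, K_s.T)
cov = K_ss - V.T @ V + 1e-6 * np.eye(len(X_grid))

# Batch of 4 points: each is the argmax of one posterior function draw.
L_cov = np.linalg.cholesky(cov)
batch = [float(X_grid[np.argmax(mu + L_cov @ rng.standard_normal(len(X_grid)))])
         for _ in range(4)]
print("next batch to evaluate:", batch)
```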
Multi-objective optimization (MOO) is a prevalent challenge for Deep Learning; however, there exists no scalable MOO solution for truly deep neural networks. Prior work either demands optimizing a new network for every point on the Pareto front, or in…
Computational design problems arise in a number of settings, from synthetic biology to computer architectures. In this paper, we aim to solve data-driven model-based optimization (MBO) problems, where the goal is to find a design input that maximizes an unknown objective function. …
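The snippet is cut short, but the MBO setup it describes is standard: given only a static dataset of designs and scores, propose a new design that scores highly under the unknown objective. The sketch below illustrates that setup via the naive baseline such papers improve on (not this paper's method): fit a surrogate to offline data, then gradient-ascend the design against it. The dataset, feature map, and step size are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Static dataset of designs x in R^4 with noisy scores from a hidden objective.
X = rng.normal(size=(256, 4))
y = -np.sum((X - 0.5) ** 2, axis=1) + 0.1 * rng.normal(size=256)

def features(X):
    """Quadratic feature map for the surrogate (an assumed model class)."""
    return np.hstack([X, X ** 2, np.ones((len(X), 1))])

# Fit a ridge-regression surrogate to the offline data.
Phi = features(X)
w = np.linalg.solve(Phi.T @ Phi + 1e-3 * np.eye(Phi.shape[1]), Phi.T @ y)

def surrogate_grad(x):
    """Gradient of w . [x, x^2, 1] with respect to the design x."""
    return w[:4] + 2.0 * w[4:8] * x

# Gradient-ascend the design against the surrogate, starting from the best
# observed design. Naive ascent like this exploits surrogate errors far from
# the data, which is the failure mode conservative MBO methods counteract.
x = X[np.argmax(y)].copy()
for _ in range(100):
    x += 0.05 * surrogate_grad(x)
print("proposed design:", x)
```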
Multi-objective optimization is key to solving many Engineering Design problems, where design parameters are optimized for several performance indicators. However, optimization results are highly dependent on how the designs are parameterized. Researchers…
Federated learning has emerged as a promising, massively distributed way to train a joint deep model over large numbers of edge devices while keeping private user data strictly on device. In this work, motivated by ensuring fairness among users and…
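For readers unfamiliar with the training loop the abstract alludes to, the sketch below is a minimal federated-averaging (FedAvg, McMahan et al.) simulation, not this paper's fairness-aware method: each simulated device fits a model on its private shard via local gradient steps, and only model weights, never raw data, are sent to the server for averaging. The synthetic shards and hyperparameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Private per-device data: y = x @ w_true + noise, sharded across 5 devices.
w_true = np.array([2.0, -1.0])
shards = []
for _ in range(5):
    X = rng.normal(size=(50, 2))
    shards.append((X, X @ w_true + 0.1 * rng.normal(size=50)))

def local_update(w, X, y, lr=0.05, epochs=5):
    """A few steps of local gradient descent on one device's private shard."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # MSE gradient
        w -= lr * grad
    return w

# Server loop: broadcast the global model, collect locally updated weights,
# and average them (equal weights here since all shards are the same size).
w_global = np.zeros(2)
for _ in range(20):
    updates = [local_update(w_global, X, y) for X, y in shards]
    w_global = np.mean(updates, axis=0)

print("federated estimate:", w_global, "true:", w_true)
```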