Adjoint-based optimization methods are attractive for aerodynamic shape design primarily because their computational cost is independent of the dimensionality of the input space and because they generate high-fidelity gradients that can then be used in a gradient-based optimizer. This makes them well suited for high-fidelity, simulation-based aerodynamic shape optimization of highly parametrized geometries such as aircraft wings. However, the development of adjoint-based solvers involves careful mathematical treatment, and their implementation requires detailed software development. Furthermore, they can become prohibitively expensive when multiple optimization problems are being solved, each requiring multiple restarts to circumvent local optima. In this work, we propose a machine-learning-enabled, surrogate-based framework that replaces the expensive adjoint solver without compromising predictive accuracy. Specifically, we first train a deep neural network (DNN) on data generated by evaluating the high-fidelity simulation model at points from a model-agnostic design of experiments over the geometric shape parameters. The optimum shape may then be computed using a gradient-based optimizer coupled with the trained DNN. Subsequently, we also perform a gradient-free Bayesian optimization in which the trained DNN serves as the prior mean. We observe that the latter framework (DNN-BO) improves upon the DNN-only optimization strategy at the same computational cost. Overall, this framework predicts the true optimum with very high accuracy while requiring far fewer high-fidelity function calls than the adjoint-based method. Furthermore, we show that multiple optimization problems can be solved with high accuracy using the same machine learning model, amortizing the offline cost of constructing it.
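To make the two-stage workflow concrete, the following is a minimal, self-contained sketch, with a cheap analytic function standing in for the high-fidelity solver. The DOE size, network architecture, and optimizer settings are illustrative assumptions, not the paper's actual configuration, and PyTorch/SciPy are assumed here although the paper does not specify a software stack. The key point of stage 3 is that automatic differentiation through the surrogate supplies the gradient that the adjoint solver would otherwise provide.

```python
import numpy as np
import torch
import torch.nn as nn
from scipy.stats import qmc

torch.manual_seed(0)

def high_fidelity(x):
    """Stand-in for the expensive simulation (e.g., a drag coefficient)."""
    return np.sum((x - 0.3) ** 2, axis=-1) + 0.1 * np.sin(8 * x).sum(axis=-1)

# 1) Model-agnostic design of experiments over the shape parameters.
dim, n_train = 2, 64
sampler = qmc.LatinHypercube(d=dim, seed=0)
X = sampler.random(n_train)                       # samples in [0, 1]^dim
y = high_fidelity(X)

# 2) Train a DNN surrogate on the DOE data.
Xt = torch.tensor(X, dtype=torch.float32)
yt = torch.tensor(y, dtype=torch.float32).unsqueeze(-1)
dnn = nn.Sequential(nn.Linear(dim, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 1))
opt = torch.optim.Adam(dnn.parameters(), lr=1e-3)
for _ in range(2000):
    opt.zero_grad()
    nn.functional.mse_loss(dnn(Xt), yt).backward()
    opt.step()

# 3) Gradient-based optimization on the trained surrogate: autodiff through
#    the DNN replaces the adjoint gradient.
for p in dnn.parameters():
    p.requires_grad_(False)                       # freeze the surrogate
x = torch.full((dim,), 0.5, requires_grad=True)
shape_opt = torch.optim.Adam([x], lr=1e-2)
for _ in range(500):
    shape_opt.zero_grad()
    dnn(x).squeeze().backward()
    shape_opt.step()
    with torch.no_grad():
        x.clamp_(0.0, 1.0)                        # stay inside the design box

x_star = x.detach().numpy()
print("DNN-only optimum:", x_star, "f =", high_fidelity(x_star))
```

For the DNN-BO stage, one standard way to realize "DNN as prior mean" (again an assumption about the implementation, not a confirmed detail of the paper) is to fit a GP to the residuals between the high-fidelity data and the DNN, so the GP posterior mean becomes the DNN prediction plus a residual correction, and to select new designs by expected improvement over a random candidate pool:

```python
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def dnn_mean(Xq):
    with torch.no_grad():
        return dnn(torch.tensor(Xq, dtype=torch.float32)).numpy().ravel()

X_bo, y_bo = X.copy(), y.copy()
for _ in range(10):                               # BO iterations (illustrative)
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X_bo, y_bo - dnn_mean(X_bo))           # GP on residuals => DNN prior mean
    cand = sampler.random(1024)                   # candidate pool in [0, 1]^dim
    mu_r, sd = gp.predict(cand, return_std=True)
    mu = mu_r + dnn_mean(cand)                    # posterior mean with DNN prior
    best = y_bo.min()
    z = (best - mu) / np.maximum(sd, 1e-9)
    ei = (best - mu) * norm.cdf(z) + sd * norm.pdf(z)  # expected improvement
    x_new = cand[np.argmax(ei)]                   # next high-fidelity evaluation
    X_bo = np.vstack([X_bo, x_new])
    y_bo = np.append(y_bo, high_fidelity(x_new))

print("DNN-BO best:", X_bo[np.argmin(y_bo)], "f =", y_bo.min())
```

In this sketch, each BO iteration spends exactly one high-fidelity call, which is how the residual-GP correction can improve on the DNN-only optimum at the same simulation budget.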