
Enhanced data efficiency using deep neural networks and Gaussian processes for aerodynamic design optimization

Published by: S. Ashwin Renganathan
Publication date: 2020
Research field: Physics
Paper language: English





Adjoint-based optimization methods are attractive for aerodynamic shape design, primarily because their computational cost is independent of the dimensionality of the input space and because they generate high-fidelity gradients that can be used directly in a gradient-based optimizer. This makes them well suited to high-fidelity, simulation-based aerodynamic shape optimization of highly parametrized geometries such as aircraft wings. However, developing adjoint solvers involves careful mathematical treatment, and implementing them requires substantial software development. Furthermore, they can become prohibitively expensive when multiple optimization problems are being solved, each requiring multiple restarts to circumvent local optima. In this work, we propose a machine-learning-enabled, surrogate-based framework that replaces the expensive adjoint solver without compromising predictive accuracy. Specifically, we first train a deep neural network (DNN) on data generated by evaluating the high-fidelity simulation model over a model-agnostic design of experiments on the geometry shape parameters. The optimal shape can then be computed with a gradient-based optimizer coupled to the trained DNN. Subsequently, we also perform gradient-free Bayesian optimization, in which the trained DNN serves as the prior mean. We observe that the latter framework (DNN-BO) improves upon the DNN-only optimization strategy at the same computational cost. Overall, the framework predicts the true optimum with very high accuracy while requiring far fewer high-fidelity function calls than the adjoint-based method. Furthermore, we show that multiple optimization problems can be solved with the same machine learning model to high accuracy, thereby amortizing the offline cost of constructing our models.
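To make the DNN-as-prior-mean idea concrete, below is a minimal sketch of a Gaussian process whose prior mean is a trained surrogate, so the GP only has to model the DNN's residual. It assumes an RBF kernel with a fixed length scale, and `dnn_mean` stands in for the trained network; the class name and all numerical choices are illustrative, not taken from the paper.

```python
import numpy as np

def rbf(A, B, ls=0.2):
    """Squared-exponential kernel between row-stacked design points."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

class DNNPriorGP:
    """GP regression with a trained DNN surrogate as the prior mean (DNN-BO idea)."""
    def __init__(self, dnn_mean, noise=1e-8):
        self.m = dnn_mean                    # callable: (n, d) -> (n,); hypothetical stand-in
        self.noise = noise

    def fit(self, X, y):
        self.X = X
        r = y - self.m(X)                    # GP models the DNN's residual
        K = rbf(X, X) + self.noise * np.eye(len(X))
        self.L = np.linalg.cholesky(K)
        self.alpha = np.linalg.solve(self.L.T, np.linalg.solve(self.L, r))
        return self

    def predict(self, Xs):
        Ks = rbf(Xs, self.X)
        mu = self.m(Xs) + Ks @ self.alpha    # prior mean + data-driven correction
        V = np.linalg.solve(self.L, Ks.T)
        var = np.clip(np.diag(rbf(Xs, Xs)) - (V**2).sum(0), 1e-12, None)
        return mu, np.sqrt(var)
```

In a DNN-BO loop, `predict` would feed an acquisition function that selects the next high-fidelity simulation to run, and `fit` would be re-run as samples accumulate.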




Read also

Bilevel optimization has arisen as a powerful tool for many machine learning problems such as meta-learning, hyperparameter optimization, and reinforcement learning. In this paper, we investigate the nonconvex-strongly-convex bilevel optimization problem. For deterministic bilevel optimization, we provide a comprehensive convergence rate analysis for two popular algorithms based respectively on approximate implicit differentiation (AID) and iterative differentiation (ITD). For the AID-based method, we improve the previous convergence rate analysis order-wise, owing to a more practical parameter selection as well as a warm-start strategy, and for the ITD-based method we establish the first theoretical convergence rate. Our analysis also provides a quantitative comparison between ITD- and AID-based approaches. For stochastic bilevel optimization, we propose a novel algorithm named stocBiO, which features a sample-efficient hypergradient estimator using efficient Jacobian- and Hessian-vector product computations. We provide a convergence rate guarantee for stocBiO and show that stocBiO outperforms the best known computational complexities order-wise with respect to the condition number $\kappa$ and the target accuracy $\epsilon$. We further validate our theoretical results and demonstrate the efficiency of bilevel optimization algorithms through experiments on meta-learning and hyperparameter optimization.
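As a concrete illustration of the AID recipe discussed above, the sketch below computes the hypergradient of a toy bilevel problem with a quadratic, strongly convex inner objective, using conjugate gradient so that only Hessian-vector products are needed. The toy objectives and all names are ours; this is the generic implicit-differentiation estimator, not the stocBiO algorithm itself.

```python
import numpy as np

# Toy bilevel problem:
#   inner: g(x, y) = 0.5 * y'Ay - x'y   (strongly convex in y; y*(x) = A^{-1} x)
#   outer: f(x, y) = 0.5 * ||y - b||^2
# AID hypergradient: grad = d_x f - (d_xy g) [d_yy g]^{-1} d_y f
rng = np.random.default_rng(0)
d = 5
A = rng.standard_normal((d, d))
A = A @ A.T + d * np.eye(d)              # SPD Hessian of the inner problem
b = rng.standard_normal(d)

def conjugate_gradient(H, r, iters=50):
    """Solve H v = r using only Hessian-vector products (no explicit inverse)."""
    v = np.zeros_like(r)
    p, res = r.copy(), r.copy()
    for _ in range(iters):
        Hp = H @ p
        a = (res @ res) / (p @ Hp)
        v += a * p
        res_new = res - a * Hp
        if np.linalg.norm(res_new) < 1e-10:
            break
        p = res_new + ((res_new @ res_new) / (res @ res)) * p
        res = res_new
    return v

def hypergradient(x):
    y_star = np.linalg.solve(A, x)         # inner solve (iterative, warm-started in practice)
    v = conjugate_gradient(A, y_star - b)  # v = [d_yy g]^{-1} d_y f
    return v                               # here d_x f = 0 and d_xy g = -I, so grad = v

x = rng.standard_normal(d)
# Sanity check against the closed form A^{-1}(y* - b):
assert np.allclose(hypergradient(x), np.linalg.solve(A, np.linalg.solve(A, x) - b))
```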
It has long been known that a single-layer fully-connected neural network with an i.i.d. prior over its parameters is equivalent to a Gaussian process (GP), in the limit of infinite network width. This correspondence enables exact Bayesian inference for infinite width neural networks on regression tasks by means of evaluating the corresponding GP. Recently, kernel functions which mimic multi-layer random neural networks have been developed, but only outside of a Bayesian framework. As such, previous work has not identified that these kernels can be used as covariance functions for GPs and allow fully Bayesian prediction with a deep neural network. In this work, we derive the exact equivalence between infinitely wide deep networks and GPs. We further develop a computationally efficient pipeline to compute the covariance function for these GPs. We then use the resulting GPs to perform Bayesian inference for wide deep neural networks on MNIST and CIFAR-10. We observe that trained neural network accuracy approaches that of the corresponding GP with increasing layer width, and that the GP uncertainty is strongly correlated with trained network prediction error. We further find that test performance increases as finite-width trained networks are made wider and more similar to a GP, and thus that GP predictions typically outperform those of finite-width networks. Finally we connect the performance of these GPs to the recent theory of signal propagation in random neural networks.
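The deep-network/GP correspondence above comes with a concrete kernel recursion: for ReLU nonlinearities, each layer maps the previous covariance through the arc-cosine formula. Below is a small sketch following the standard NNGP construction; the weight and bias variances are chosen arbitrarily, and nothing here is code from the paper.

```python
import numpy as np

def nngp_kernel(X1, X2, depth=3, sw2=1.6, sb2=0.1):
    """NNGP covariance for an infinitely wide ReLU network of `depth` hidden layers."""
    d = X1.shape[1]
    K12 = sb2 + sw2 * (X1 @ X2.T) / d          # input-layer covariance
    K11 = sb2 + sw2 * (X1 * X1).sum(1) / d     # diagonal terms for each input set
    K22 = sb2 + sw2 * (X2 * X2).sum(1) / d
    for _ in range(depth):
        norm = np.sqrt(np.outer(K11, K22))
        theta = np.arccos(np.clip(K12 / norm, -1.0, 1.0))
        # Arc-cosine (ReLU) map: E[relu(u) relu(v)] under the current covariance
        K12 = sb2 + sw2 / (2 * np.pi) * norm * (np.sin(theta) + (np.pi - theta) * np.cos(theta))
        K11 = sb2 + sw2 * K11 / 2              # E[relu(u)^2] = K11 / 2
        K22 = sb2 + sw2 * K22 / 2
    return K12

# Exact GP posterior mean for the infinitely wide network (Xtr, ytr, Xte assumed given):
#   K = nngp_kernel(Xtr, Xtr)
#   mu = nngp_kernel(Xte, Xtr) @ np.linalg.solve(K + 1e-6 * np.eye(len(Xtr)), ytr)
```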
Recent advances in acquisition equipment are providing experiments with growing numbers of precise yet affordable sensors. At the same time, improved computational power from new hardware resources (GPU, FPGA, ACAP) has become available at relatively low cost. This led us to explore completely renewing the acquisition chain of a fusion experiment, where many high-rate data sources from different diagnostics can be combined in a broad framework of algorithms. While adding data sources from new diagnostics enriches our knowledge of the physics, it also grows the dimensionality of the overall model, making relations among variables increasingly opaque. A new approach to integrating such heterogeneous diagnostics, based on the composition of deep variational autoencoders, can ease this problem by acting as a structural sparse regularizer. We applied it to RFX-mod experiment data, integrating the soft X-ray linear images of plasma temperature with the magnetic state. To ensure real-time signal analysis, however, these algorithmic techniques must be adapted to run on well-suited hardware. In particular, we show that by quantizing the neurons' transfer functions, such models can be converted into embedded firmware. This firmware, which approximates the deep inference model with a set of simple operations, fits well with the simple logic units that are abundant in FPGAs. This is the key factor that permits running complex deep neural topologies on affordable hardware in real time.
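The FPGA point above hinges on replacing smooth transfer functions with table lookups. Here is a minimal sketch of that quantization step, assuming an 8-bit fixed-point table over a clipped input range; the function names and ranges are ours, not the RFX-mod firmware's.

```python
import numpy as np

def build_lut(fn, n_bits=8, x_range=(-4.0, 4.0)):
    """Tabulate a transfer function (e.g. tanh) as fixed-point LUT entries."""
    levels = 2 ** n_bits
    xs = np.linspace(x_range[0], x_range[1], levels)
    table = fn(xs)
    scale = np.abs(table).max() / (levels // 2 - 1)   # map outputs to the int range
    return np.round(table / scale).astype(np.int16), scale

def apply_lut(x, table, scale, x_range=(-4.0, 4.0)):
    """Evaluate the quantized activation: clip, index into the table, rescale."""
    lo, hi = x_range
    idx = np.clip(((x - lo) / (hi - lo) * (len(table) - 1)).astype(int),
                  0, len(table) - 1)
    return table[idx] * scale

table, scale = build_lut(np.tanh)
xs = np.linspace(-4, 4, 1000)
max_err = np.max(np.abs(apply_lut(xs, table, scale) - np.tanh(xs)))  # quantization error
```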
Deep Gaussian processes (DGPs) have struggled for relevance in applications due to the challenges and cost associated with Bayesian inference. In this paper we propose a sparse variational approximation for DGPs in which the approximate posterior mean has the same mathematical structure as a deep neural network (DNN). We make the forward pass through a DGP equivalent to a ReLU DNN by finding an interdomain transformation that represents the GP posterior mean as a sum of ReLU basis functions. This unification enables the DGP to be initialised and trained as a neural network, leveraging well-established practice in the deep learning community and so greatly aiding the inference task. Experiments demonstrate improved accuracy and faster training compared to current DGP methods, while retaining favourable predictive uncertainties.
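The structural claim above, that the posterior mean is a sum of ReLU basis functions, is easy to state in code: evaluating the mean is literally a one-hidden-layer ReLU forward pass. A minimal sketch with made-up weights; in the paper these would come from the interdomain inducing features and the variational posterior.

```python
import numpy as np

def relu_basis_mean(x, W, b, alpha):
    """m(x) = sum_i alpha_i * relu(w_i . x + b_i) -- identical in form to a
    one-hidden-layer ReLU network's forward pass."""
    return np.maximum(0.0, x @ W.T + b) @ alpha

rng = np.random.default_rng(0)
W, b = rng.standard_normal((16, 3)), rng.standard_normal(16)  # basis directions (illustrative)
alpha = rng.standard_normal(16)                               # posterior weights (illustrative)
x = rng.standard_normal((5, 3))
m = relu_basis_mean(x, W, b, alpha)  # could equally be computed by any DNN library
```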
Numerical simulation of complex optical structures enables their optimization with respect to specific objectives. Optimization is often done by multiple successive parameter scans, which are time consuming and computationally expensive. Here we employ Bayesian optimization with Gaussian processes to automate and speed up the optimization process. As a toy example, we demonstrate optimizing the shape of a free-form reflective metasurface so that it diffracts light into a specific diffraction order. For this example, we compare the performance of six different Bayesian optimization approaches with various acquisition functions and various kernels of the Gaussian process.
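A compact sketch of the kind of loop being compared above: a GP surrogate, a choice of kernel, and a choice of acquisition function driving where to evaluate next. It uses scikit-learn's GP and a toy 1-D objective in place of the meta-surface simulation; all names and settings are illustrative.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def ei(mu, sd, f_best):
    """Expected improvement for minimization (larger is better)."""
    z = (f_best - mu) / (sd + 1e-12)
    return (f_best - mu) * norm.cdf(z) + sd * norm.pdf(z)

def lcb(mu, sd, f_best=None, beta=2.0):
    """Lower confidence bound, negated so larger is better."""
    return -(mu - beta * sd)

def bayes_opt(f, bounds, kernel, acq=ei, n_init=4, n_iter=20, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(bounds[0], bounds[1], size=(n_init, 1))
    y = f(X).ravel()
    for _ in range(n_iter):
        gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)
        cand = rng.uniform(bounds[0], bounds[1], size=(256, 1))  # random candidate set
        mu, sd = gp.predict(cand, return_std=True)
        x_next = cand[np.argmax(acq(mu, sd, y.min()))][None, :]  # maximize acquisition
        X, y = np.vstack([X, x_next]), np.append(y, f(x_next))
    return X[np.argmin(y)], y.min()

# Toy stand-in for the diffraction-efficiency objective:
f = lambda X: np.sin(3 * X) + 0.2 * X ** 2
x_best, f_best = bayes_opt(f, (-2.0, 2.0), kernel=Matern(nu=2.5))
```

Swapping `acq=lcb` or `kernel=RBF()` reproduces the kind of acquisition/kernel comparison the abstract describes.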