
Algorithms of Data Development For Deep Learning and Feedback Design

Published by: Wei Kang
Publication date: 2019
Language: English





Recent research reveals that deep learning is an effective way of solving high-dimensional Hamilton-Jacobi-Bellman equations. The resulting feedback control law, in the form of a neural network, is computationally efficient for real-time applications of optimal control. A critical part of this design method is to generate data for training the neural network and validating its accuracy. In this paper, we provide a survey of existing algorithms that can be used to generate the data. All the algorithms surveyed in this paper are causality-free, i.e., the solution at a point is computed without using the value of the function at any other point. At the end of the paper, an illustrative example of optimal feedback design using deep learning is given.
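To make the causality-free idea concrete, here is a minimal sketch, assuming a scalar LQR problem whose value function is known exactly from a Riccati equation: value samples are generated independently at each state (no time-marching over a grid), and a small network is fitted to them. The dynamics, cost weights, network size, and training schedule are all illustrative assumptions, not taken from the paper.

```python
# Minimal sketch: causality-free data generation for a scalar LQR problem,
# then supervised training of a small network on (state, value) pairs.
# All problem data below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Scalar linear dynamics x' = a*x + b*u with running cost q*x^2 + r*u^2.
a, b, q, r = 1.0, 1.0, 1.0, 1.0

# For this toy problem the HJB equation reduces to a scalar algebraic
# Riccati equation, so the exact value function V(x) = p*x^2 is known.
p = r * (a + np.sqrt(a**2 + b**2 * q / r)) / b**2

def value(x):
    # Causality-free oracle: V at each sample is computed independently
    # of the value of the function at any other point.
    return p * x**2

# Training data from independent queries of the oracle.
X = rng.uniform(-2.0, 2.0, size=(512, 1))
Y = value(X)

# One-hidden-layer network trained with plain gradient descent.
W1 = rng.normal(0.0, 0.5, (1, 32)); b1 = np.zeros(32)
W2 = rng.normal(0.0, 0.5, (32, 1)); b2 = np.zeros(1)
lr = 1e-2
for step in range(20000):
    H = np.tanh(X @ W1 + b1)            # hidden activations
    pred = H @ W2 + b2                  # predicted value
    err = pred - Y                      # residual for the MSE loss
    gW2 = H.T @ err / len(X); gb2 = err.mean(0)
    dH = (err @ W2.T) * (1.0 - H**2)    # backprop through tanh
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

# The feedback law follows from the value gradient: u = -(b/(2r)) dV/dx.
def V_nn(x):
    return float(np.tanh(np.array([[x]]) @ W1 + b1) @ W2 + b2)

x0, eps = 0.7, 1e-4
dV = (V_nn(x0 + eps) - V_nn(x0 - eps)) / (2 * eps)
print("NN feedback    u(0.7) ~", -(b / (2 * r)) * dV)
print("exact feedback u(0.7) =", -(b * p / r) * x0)
```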


Read also

The understanding of nonlinear, high-dimensional flows, e.g., atmospheric and ocean flows, is critical to address the impacts of global climate change. Data Assimilation techniques combine physical models and observational data, often in a Bayesian framework, to predict the future state of the model and the uncertainty in this prediction. Inherent in these systems are noise (Gaussian and non-Gaussian), nonlinearity, and high dimensionality, which pose challenges to making accurate predictions. To address these issues, we investigate the use of both model and data dimension reduction based on techniques including Assimilation in Unstable Subspaces, Proper Orthogonal Decomposition, and Dynamic Mode Decomposition. Algorithms that take advantage of projected physical and data models may be combined with Data Assimilation techniques such as Ensemble Kalman Filter and Particle Filter variants. The projected Data Assimilation techniques are developed for the optimal proposal particle filter and applied to the Lorenz96 and Shallow Water Equations to test the efficacy of our techniques in high-dimensional, nonlinear systems.
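As a small illustration of the model-reduction ingredient, the sketch below builds a Proper Orthogonal Decomposition basis from a synthetic snapshot matrix and projects a state into the reduced subspace; in the projected assimilation methods above, the filter update would act on the reduced coordinates rather than the full state. The snapshot data and energy threshold are illustrative assumptions.

```python
# Hedged sketch: POD basis from a snapshot matrix, the kind of model-order
# reduction the passage combines with Kalman/particle filtering.
import numpy as np

rng = np.random.default_rng(1)
n, m = 200, 60                       # state dimension, number of snapshots

# Synthetic snapshots: a few smooth spatial modes plus small noise.
x = np.linspace(0.0, 1.0, n)
modes = np.stack([np.sin((k + 1) * np.pi * x) for k in range(3)])  # (3, n)
coeffs = rng.normal(size=(m, 3))
snapshots = coeffs @ modes + 0.01 * rng.normal(size=(m, n))        # (m, n)

# POD basis = leading left singular vectors of the snapshot matrix.
U, s, _ = np.linalg.svd(snapshots.T, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
k = int(np.searchsorted(energy, 0.99)) + 1   # modes capturing 99% energy
P = U[:, :k]                                 # (n, k) projection basis

# Project a full state to k coefficients and back; an assimilation update
# would operate on the k coefficients instead of the n-dimensional state.
state = snapshots[0]
reduced = P.T @ state
reconstructed = P @ reduced
print("retained modes:", k)
print("relative reconstruction error:",
      np.linalg.norm(state - reconstructed) / np.linalg.norm(state))
```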
Adjoint-based optimization methods are attractive for aerodynamic shape design primarily because their computational cost is independent of the dimensionality of the input space and they generate high-fidelity gradients that can then be used in a gradient-based optimizer. This makes them well suited for high-fidelity, simulation-based aerodynamic shape optimization of highly parametrized geometries such as aircraft wings. However, the development of adjoint-based solvers involves careful mathematical treatment, and their implementation requires detailed software development. Furthermore, they can become prohibitively expensive when multiple optimization problems are being solved, each requiring multiple restarts to circumvent local optima. In this work, we propose a machine-learning-enabled, surrogate-based framework that replaces the expensive adjoint solver without compromising predictive accuracy. Specifically, we first train a deep neural network (DNN) on training data generated by evaluating the high-fidelity simulation model over a model-agnostic design of experiments on the geometry shape parameters. The optimum shape may then be computed by using a gradient-based optimizer coupled with the trained DNN. Subsequently, we also perform a gradient-free Bayesian optimization, where the trained DNN is used as the prior mean. We observe that the latter framework (DNN-BO) improves upon the DNN-only optimization strategy at the same computational cost. Overall, this framework predicts the true optimum with very high accuracy while requiring far fewer high-fidelity function calls than the adjoint-based method. Furthermore, we show that multiple optimization problems can be solved with the same machine learning model with high accuracy, amortizing the offline costs associated with constructing our models.
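To illustrate the surrogate-then-optimize loop (not the authors' implementation), the sketch below fits a cheap surrogate on a design of experiments over the inputs and runs a multi-start gradient-based optimizer on it. An RBF interpolant stands in for the DNN to keep the code short, and the "expensive" model is just a test function; both are assumptions for illustration.

```python
# Hedged sketch: sample a design of experiments, fit a surrogate to the
# expensive model, then optimize the surrogate with a gradient method.
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import minimize

rng = np.random.default_rng(2)

def expensive_model(x):
    # Placeholder for a high-fidelity simulation (e.g., a CFD solve).
    return (1 - x[..., 0])**2 + 100.0 * (x[..., 1] - x[..., 0]**2)**2

# Design of experiments over the (here, 2D) shape parameters.
X = rng.uniform(-2.0, 2.0, size=(200, 2))
y = expensive_model(X)

surrogate = RBFInterpolator(X, y, smoothing=1e-8)  # DNN stand-in

# Multi-start gradient-based optimization on the cheap surrogate
# (BFGS estimates gradients by finite differences here).
best = None
for x0 in rng.uniform(-2.0, 2.0, size=(5, 2)):
    res = minimize(lambda z: surrogate(z[None])[0], x0, method="BFGS")
    if best is None or res.fun < best.fun:
        best = res
print("surrogate optimum:", best.x)
print("true value there :", expensive_model(best.x))
```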
We consider a variation on the classical finance problem of optimal portfolio design. In our setting, a large population of consumers is drawn from some distribution over risk tolerances, and each consumer must be assigned to a portfolio of lower risk than her tolerance. The consumers may also belong to underlying groups (for instance, of demographic properties or wealth), and the goal is to design a small number of portfolios that are fair across groups in a particular and natural technical sense. Our main results are algorithms for optimal and near-optimal portfolio design for both social welfare and fairness objectives, both with and without assumptions on the underlying group structure. We describe an efficient algorithm based on an internal two-player zero-sum game that learns near-optimal fair portfolios ex ante and show experimentally that it can be used to obtain a small set of fair portfolios ex post as well. For the special but natural case in which group structure coincides with risk tolerances (which models the reality that wealthy consumers generally tolerate greater risk), we give an efficient and optimal fair algorithm. We also provide generalization guarantees for the underlying risk distribution that have no dependence on the number of portfolios, and we illustrate the theory with simulation results.
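The core assignment rule is easy to state: offered a set of risk levels, each consumer takes the highest-risk portfolio below her tolerance. The sketch below is a toy stand-in for the paper's game-theoretic algorithm: it searches exhaustively over small sets of candidate risk levels to maximize a simple social-welfare proxy. The utility model and candidate grid are illustrative assumptions.

```python
# Hedged sketch: choose k portfolio risk levels to maximize welfare when
# each consumer takes the highest-risk portfolio below her tolerance.
import itertools
import numpy as np

rng = np.random.default_rng(3)
tolerances = rng.uniform(0.0, 1.0, size=300)   # consumer risk tolerances
candidates = np.linspace(0.05, 0.95, 10)       # candidate portfolio risks
k = 3                                          # portfolios to offer

def welfare(levels, tol):
    # Assumed utility: a consumer served at risk level l <= tolerance
    # gets utility l (risk proxies expected return); unserved gets 0.
    levels = np.sort(levels)
    idx = np.searchsorted(levels, tol, side="right") - 1
    util = np.where(idx >= 0, levels[np.clip(idx, 0, None)], 0.0)
    return util.sum()

best = max(itertools.combinations(candidates, k),
           key=lambda L: welfare(np.array(L), tolerances))
print("chosen risk levels:", np.round(best, 2))
print("social welfare    :", round(welfare(np.array(best), tolerances), 2))
```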
Motivated by the high-frequency data streams continuously being generated, real-time learning is becoming increasingly important. These data streams should be processed sequentially, with the property that the stream may change over time. In this streaming setting, we propose techniques for minimizing a convex objective through unbiased estimates of its gradients, commonly referred to as stochastic approximation problems. Our methods rely on stochastic approximation algorithms because of their computational advantage: they use only the previous iterate as a parameter estimate. They also include iterate averaging, which guarantees optimal statistical efficiency under classical conditions. Our non-asymptotic analysis shows accelerated convergence when the learning rate is selected according to the expected data streams. We show that the averaged estimate converges optimally and robustly for any data stream rate. In addition, noise reduction can be achieved by processing the data in a specific pattern, which is advantageous for large-scale machine learning. These theoretical results are illustrated for various data streams, showing the effectiveness of the proposed algorithms.
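A minimal sketch of the central primitive, stochastic gradient descent on a stream with Polyak-Ruppert iterate averaging: the regression model, noise level, and step-size schedule below are illustrative assumptions, and the averaged iterate typically tracks the target more stably than the last iterate.

```python
# Hedged sketch: streaming SGD with Polyak-Ruppert iterate averaging
# on a least-squares objective observed one sample at a time.
import numpy as np

rng = np.random.default_rng(4)
d = 10
theta_true = rng.normal(size=d)

theta = np.zeros(d)        # SGD iterate
theta_bar = np.zeros(d)    # running average of the iterates

for t in range(1, 100001):
    # One observation from the stream: y = <x, theta_true> + noise.
    x = rng.normal(size=d)
    y = x @ theta_true + 0.1 * rng.normal()
    # Unbiased gradient of the squared loss at the current iterate.
    grad = (x @ theta - y) * x
    lr = 0.5 / t**0.5      # slowly decaying step size
    theta -= lr * grad
    # Online update of the Polyak-Ruppert average.
    theta_bar += (theta - theta_bar) / t

print("last-iterate error:", np.linalg.norm(theta - theta_true))
print("averaged error    :", np.linalg.norm(theta_bar - theta_true))
```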
Chao Ning, Fengqi You (2019)
This paper reviews recent advances in the field of optimization under uncertainty through a modern data lens, highlights key research challenges and the promise of data-driven optimization that organically integrates machine learning and mathematical programming for decision-making under uncertainty, and identifies potential research opportunities. A brief review of classical mathematical programming techniques for hedging against uncertainty is first presented, along with their wide spectrum of applications in Process Systems Engineering. A comprehensive review and classification of the relevant publications on data-driven distributionally robust optimization, data-driven chance-constrained programming, data-driven robust optimization, and data-driven scenario-based optimization is then presented. This paper also identifies fertile avenues for future research focused on a closed-loop data-driven optimization framework, which allows feedback from mathematical programming to machine learning, as well as scenario-based optimization leveraging the power of deep learning techniques. Perspectives on online learning-based data-driven multistage optimization with a learning-while-optimizing scheme are presented.
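As a small illustration of the scenario-based flavor reviewed above, the sketch below replaces an uncertain constraint with sampled data scenarios and requires feasibility on every sample. The toy linear program and scenario distribution are illustrative assumptions, not drawn from the review.

```python
# Hedged sketch: data-driven scenario-based optimization, where an
# uncertain constraint a(w) @ x <= 1 is enforced on sampled scenarios.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(5)

# Maximize x1 + x2 (linprog minimizes, so negate the objective).
c = np.array([-1.0, -1.0])

# 100 data scenarios for the uncertain constraint coefficients.
scenarios = rng.normal(loc=[1.0, 0.5], scale=0.1, size=(100, 2))
A_ub = scenarios               # one row (= one constraint) per scenario
b_ub = np.ones(len(scenarios))

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print("scenario-robust decision:", np.round(res.x, 3))
```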