Scenario optimization is by now a well-established technique for performing designs in the presence of uncertainty. It relies on domain knowledge integrated with first-hand information coming from data, and it generates solutions that are accompanied by precise statements of reliability. In this paper, following recent developments in (Garatti and Campi, 2019), we venture beyond the traditional set-up of scenario optimization by analyzing the concept of constraints relaxation. Supported by a solid theoretical underpinning, this new paradigm furnishes fundamental tools to perform designs that achieve a suitable compromise between robustness and performance. After suitably expanding the scope of constraints relaxation as proposed in (Garatti and Campi, 2019), we focus on various classical Support Vector methods in machine learning, including SVM (Support Vector Machine), SVR (Support Vector Regression), and SVDD (Support Vector Data Description), and derive new results on the ability of these methods to generalize.
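As a concrete illustration of constraints relaxation in a Support Vector context, consider the standard soft-margin SVM (a textbook formulation given here for orientation only, not a restatement of the specific scheme analyzed in the paper): each data point $(x_i, y_i)$, $i = 1, \dots, N$, contributes one scenario constraint that may be relaxed through a slack variable $\xi_i \ge 0$,

$$\min_{w,\, b,\, \xi \ge 0} \; \frac{1}{2}\|w\|^2 + \rho \sum_{i=1}^{N} \xi_i \quad \text{subject to} \quad y_i \bigl(w^\top x_i + b\bigr) \ge 1 - \xi_i, \quad i = 1, \dots, N.$$

The parameter $\rho$ trades performance (a large margin) against robustness (few relaxed constraints), and the number of data points whose constraint ends up relaxed ($\xi_i > 0$) is closely tied to the generalization guarantees studied in this line of work.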
Machine learning (ML) based smart meter data analytics is very promising for energy management and demand-response applications in the advanced metering infrastructure (AMI). A key challenge in developing distributed ML applications for AMI is to pres
Bilevel optimization has become a powerful framework in various machine learning applications including meta-learning, hyperparameter optimization, and network architecture search. There are generally two classes of bilevel optimization formulations
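For orientation, the generic bilevel problem underlying these applications can be written in a standard form (a template only, not the specific formulation classes referred to above):

$$\min_{x} \; F\bigl(x, y^*(x)\bigr) \quad \text{subject to} \quad y^*(x) \in \arg\min_{y} \; G(x, y),$$

where the upper-level variable $x$ plays the role of meta-parameters, hyperparameters, or architecture choices, and the lower-level problem in $y$ models the task-specific training that depends on $x$.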
Stochastic model predictive control (SMPC) has been a promising solution to complex control problems under uncertain disturbances. However, traditional SMPC approaches either require exact knowledge of probabilistic distributions, or rely on massive
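A minimal chance-constrained SMPC template (a standard formulation written here only to make the data requirements mentioned above concrete) reads

$$\min_{u_0, \dots, u_{N-1}} \; \mathbb{E}\!\left[\sum_{t=0}^{N-1} \ell(x_t, u_t)\right] \quad \text{subject to} \quad x_{t+1} = f(x_t, u_t, w_t), \quad \Pr\{x_t \in \mathcal{X}\} \ge 1 - \epsilon,$$

where $w_t$ is the uncertain disturbance. Evaluating the chance constraint exactly requires knowing the distribution of $w_t$, while sampling-based approximations replace it with constraints at many drawn disturbance realizations, which is the source of the two requirements contrasted above.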
In this paper, we propose a relaxation to the stochastic ruler method originally described by Yan and Mukai in 1992 for asymptotically determining the global optima of discrete simulation optimization problems. We show that our proposed variant of th
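To fix ideas, a bare-bones sketch of the original stochastic ruler acceptance test is given below (an illustrative Python rendering of the Yan and Mukai idea under simplifying assumptions; the relaxed variant proposed in the paper is not reproduced here). The names noisy_obj and neighbors are placeholders for a user-supplied simulation oracle and neighborhood structure.

import random

def stochastic_ruler_step(x, neighbors, noisy_obj, a, b, m):
    # One iteration: propose a neighbor of x and test it against
    # m independent draws of the uniform "stochastic ruler" on [a, b].
    z = random.choice(neighbors(x))      # candidate solution
    for _ in range(m):
        theta = random.uniform(a, b)     # one ruler draw
        if noisy_obj(z) > theta:         # a single failure rejects z
            return x                     # keep the current solution
    return z                             # z passed all m tests: move to it

In the original method the number of tests m is increased as iterations proceed, which is what yields asymptotic convergence to the set of global optima.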
This paper presents competitive algorithms for a novel class of online optimization problems with memory. We consider a setting where the learner seeks to minimize the sum of a hitting cost and a switching cost that depends on the previous $p$ decisi
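A generic template for this setting (an illustrative form only, not necessarily the exact formulation of the paper) is

$$\min_{x_1, \dots, x_T} \; \sum_{t=1}^{T} \Bigl( f_t(x_t) + c(x_t, x_{t-1}, \dots, x_{t-p}) \Bigr),$$

where $f_t$ is the hitting cost revealed online at time $t$ and $c$ is a switching cost coupling the current decision to the previous $p$ decisions; a competitive algorithm bounds the ratio of the cost it incurs to that of the optimal offline sequence of decisions.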