The goal of this paper is to make Optimal Experimental Design (OED) computationally feasible for problems involving significant computational expense. We focus exclusively on the Mean Objective Cost of Uncertainty (MOCU), a specific methodology for OED, and propose extensions to MOCU that leverage surrogates and adaptive sampling. In particular, we aim to reduce the computational expense of evaluating a large set of control policies across a large set of uncertain variables. We propose approximating the intermediate calculations associated with each parameter/control pair with a surrogate. This surrogate is constructed from sparse sampling and (possibly) refined adaptively through a combination of sensitivity estimation and probabilistic knowledge gained directly from the experimental measurements prescribed by MOCU. We demonstrate our methods on example problems and compare their performance with that of surrogate-approximated MOCU without adaptive sampling and with that of full MOCU. We find evidence that adaptive sampling does improve performance, but whether to use surrogate-approximated MOCU or full MOCU depends on the relative expense of computation versus experimentation: if computation is more expensive than experimentation, our approach should be considered.
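As a toy illustration of the quantities in the abstract above, the following minimal sketch computes MOCU as the expected excess cost of the prior-robust policy over the per-parameter optimal policy, and then repeats the calculation with a surrogate built by interpolating sparse cost evaluations. The scalar parameter, scalar control, and quadratic cost are hypothetical stand-ins, not the paper's setup:

```python
import numpy as np

# Hypothetical toy setup: scalar uncertain parameter theta, scalar
# control psi, and quadratic cost C(theta, psi) = (psi - theta)^2.
thetas = np.linspace(0.0, 1.0, 101)          # uniform prior over theta
psis = np.linspace(0.0, 1.0, 101)            # candidate control policies
C = (psis[None, :] - thetas[:, None]) ** 2   # full cost table C[theta, psi]

# Robust policy: minimizes the expected cost over the prior on theta.
psi_robust = np.argmin(C.mean(axis=0))

# MOCU: expected excess cost of the robust policy over the
# theta-optimal policy.
mocu = (C[:, psi_robust] - C.min(axis=1)).mean()

# Surrogate variant: evaluate the cost only at sparse theta samples
# (a stand-in for expensive simulations) and interpolate each column.
sparse = thetas[::10]
C_sparse = (psis[None, :] - sparse[:, None]) ** 2
C_hat = np.stack([np.interp(thetas, sparse, C_sparse[:, j])
                  for j in range(len(psis))], axis=1)
psi_robust_hat = np.argmin(C_hat.mean(axis=0))
mocu_hat = (C_hat[:, psi_robust_hat] - C_hat.min(axis=1)).mean()
```

In a realistic setting the cost table rows would come from expensive simulations, and adaptive sampling would choose which sparse theta values to refine.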
Multistage risk-averse optimal control problems with nested conditional risk mappings are gaining popularity in various application domains. Risk-averse formulations interpolate between classical expectation-based stochastic optimal control and minimax optimal control. In this way, risk-averse problems aim to hedge against extreme low-probability events without being overly conservative. At the same time, risk-based constraints may be employed either as surrogates for chance (probabilistic) constraints or as a robustification of expectation-based constraints. Such multistage problems, however, have been identified as particularly hard to solve. We propose a decomposition method for these nested problems that allows us to solve them via efficient numerical optimization methods. In addition, we propose a new form of risk constraints that accounts for the propagation of uncertainty in time.
Bayesian optimal experimental design (BOED) is a principled framework for making efficient use of limited experimental resources. Unfortunately, its applicability is hampered by the difficulty of obtaining accurate estimates of the expected information gain (EIG) of an experiment. To address this, we introduce several classes of fast EIG estimators by building on ideas from amortized variational inference. We show theoretically and empirically that these estimators can provide significant gains in speed and accuracy over previous approaches. We further demonstrate the practicality of our approach on a number of end-to-end experiments.
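The abstract above targets faster EIG estimation; for context, a minimal sketch of the standard nested Monte Carlo baseline (which amortized variational estimators are designed to improve upon in cost and accuracy) on a hypothetical linear-Gaussian model, where the EIG has the closed form 0.5 * log(1 + d^2 / sigma^2):

```python
import numpy as np

rng = np.random.default_rng(0)

def nested_mc_eig(design, sigma=1.0, n_outer=2000, n_inner=2000):
    """Nested Monte Carlo EIG estimate for a toy model:
    theta ~ N(0, 1), y | theta ~ N(design * theta, sigma^2)."""
    theta = rng.normal(size=n_outer)
    y = design * theta + sigma * rng.normal(size=n_outer)
    # log-likelihood of each simulated y under its own theta
    log_lik = (-0.5 * ((y - design * theta) / sigma) ** 2
               - np.log(sigma * np.sqrt(2 * np.pi)))
    # marginal log p(y | design) estimated with fresh inner prior samples
    theta_in = rng.normal(size=n_inner)
    log_p = (-0.5 * ((y[:, None] - design * theta_in[None, :]) / sigma) ** 2
             - np.log(sigma * np.sqrt(2 * np.pi)))
    log_marg = np.logaddexp.reduce(log_p, axis=1) - np.log(n_inner)
    # EIG = E[log p(y | theta) - log p(y)]
    return np.mean(log_lik - log_marg)

# Analytic EIG for this model: 0.5 * np.log(1 + design**2 / sigma**2).
```

The nested structure (inner samples per outer sample) is exactly the expense that motivates faster estimators: cost grows as n_outer * n_inner, and the estimator is biased for finite n_inner.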
Experimentation has become an increasingly prevalent tool for guiding decision-making and policy choices. A common hurdle in designing experiments is the lack of statistical power. In this paper, we study optimal multi-period experimental design under the constraint that the treatment cannot be easily removed once implemented; for example, a government might implement a public health intervention in different geographies at different times, where the treatment cannot be easily removed due to practical constraints. The treatment design problem is to select which geographies (referred to as units) to treat at which time, with the aim of testing hypotheses about the effect of the treatment. When the potential outcome is a linear function of unit and time effects and discrete observed/latent covariates, we provide an analytically feasible solution whose treatment-effect estimator has variance at most 1 + O(1/N^2) times that of the optimal treatment design, where N is the number of units. This solution assigns units in a staggered treatment adoption pattern: if the treatment only affects one period, the optimal fraction of treated units in each period increases linearly in time; if the treatment affects multiple periods, the optimal fraction increases non-linearly in time, smaller at the beginning and larger at the end. In the general setting where outcomes depend on latent covariates, we show that historical data can be utilized in designing experiments. We propose a data-driven local search algorithm to assign units to treatment times. We demonstrate that our approach improves upon benchmark experimental designs via synthetic interventions on the influenza occurrence rate and synthetic experiments on interventions for in-home medical services and grocery expenditure.
In this paper, we consider robust control using randomized algorithms. We extend the existing distribution theory of order statistics to the general case in which the population distribution is not assumed to be continuous and the order statistics are subject to certain constraints. In particular, we derive a distributional inequality for the related order statistics. Moreover, we propose two different approaches to searching for reliable solutions to the robust analysis and optimal synthesis problems under constraints. Furthermore, the minimum computational effort is investigated and bounds on the sample size are derived.
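For orientation, one classical sample-size bound from the randomized-algorithms literature for worst-case performance analysis (the unconstrained, continuous-distribution case, not the constrained bounds derived in the paper) can be sketched as:

```python
import math

def sample_size(epsilon, delta):
    """Minimum number of i.i.d. samples N such that, with confidence
    at least 1 - delta, the empirical maximum over the N samples
    exceeds the (1 - epsilon)-quantile of the performance measure.
    Classical bound: N >= ln(1/delta) / ln(1/(1 - epsilon))."""
    return math.ceil(math.log(1.0 / delta) / math.log(1.0 / (1.0 - epsilon)))

# e.g. epsilon = 0.01, delta = 0.001 gives N = 688
```

The bound follows from requiring (1 - epsilon)^N <= delta, i.e., that all N samples land below the (1 - epsilon)-quantile with probability at most delta; it is distribution-free, which is what makes randomized robustness analysis tractable.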
An approach to optimal actuator design based on shape and topology optimisation techniques is presented. For linear diffusion equations, two scenarios are considered. In the first, the best actuators are determined for a given initial condition. In the second, optimal actuators are determined with respect to all initial conditions not exceeding a chosen norm. Shape and topological sensitivities of these cost functionals are derived. A numerical algorithm for optimal actuator design based on the sensitivities and a level-set method is presented. Numerical results support the proposed methodology.