In this paper, we briefly review the development of ranking-and-selection (R&S) over the past 70 years, focusing on the theoretical achievements and practical applications of the last 20 years. Different from the frequentist and Bayesian classifications adopted by Kim and Nelson (2006b) and Chick (2006) in their review articles, we categorize existing R&S procedures into fixed-precision and fixed-budget procedures, as in Hunter and Nelson (2017). We show that these two categories of procedures differ essentially in their underlying methodological formulations: they are built on hypothesis testing and dynamic programming, respectively. In light of this distinction, we review in detail some well-known procedures in the literature and show how they fit into these two formulations. In addition, we discuss the use of R&S procedures in solving various practical problems and propose what we believe are the important research questions in the field.
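To make the distinction concrete, the toy sketch below (assuming independent normal simulation output; it is not any specific procedure reviewed in the paper) contrasts a fixed-budget rule, which spends a prescribed number of replications and then picks the best sample mean, with a fixed-precision rule, which keeps sampling until a crude stopping condition with indifference-zone parameter delta is met.

```python
# A toy illustration (not any specific procedure reviewed in the paper) of the
# two formulations: a fixed-budget rule spends a prescribed simulation budget
# and picks the best sample mean, while a fixed-precision rule keeps sampling
# until a crude stopping condition with indifference-zone parameter delta holds.
import numpy as np

rng = np.random.default_rng(0)
true_means = [1.0, 1.2, 0.8, 1.5]                  # unknown in practice
sample = lambda i: rng.normal(true_means[i], 1.0)  # one simulation replication

def fixed_budget(budget=400):
    k = len(true_means)
    obs = [[sample(i) for _ in range(budget // k)] for i in range(k)]
    return int(np.argmax([np.mean(o) for o in obs]))

def fixed_precision(delta=0.2, n0=10, n_max=10_000):
    k = len(true_means)
    obs = [[sample(i) for _ in range(n0)] for i in range(k)]
    while sum(len(o) for o in obs) < n_max:
        means = [np.mean(o) for o in obs]
        hw = [1.96 * np.std(o, ddof=1) / np.sqrt(len(o)) for o in obs]  # rough 95% half-widths
        best = int(np.argmax(means))
        # stop when the leader beats every rival by more than the combined
        # estimation error, up to the indifference zone delta
        if all(means[best] - means[j] > hw[best] + hw[j] - delta
               for j in range(k) if j != best):
            return best
        for i in range(k):                         # otherwise take one more sample from each
            obs[i].append(sample(i))
    return int(np.argmax([np.mean(o) for o in obs]))

print(fixed_budget(), fixed_precision())
```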
The use of persistently exciting data has recently been popularized in the context of data-driven analysis and control. Such data have been used to assess system theoretic properties and to construct control laws, without using a system model. Persistency of excitation is a strong condition that also allows unique identification of the underlying dynamical system from the data within a given model class. In this paper, we develop a new framework in order to work with data that are not necessarily persistently exciting. Within this framework, we investigate necessary and sufficient conditions on the informativity of data for several data-driven analysis and control problems. For certain analysis and design problems, our results reveal that persistency of excitation is not necessary. In fact, in these cases data-driven analysis/control is possible while the combination of (unique) system identification and model-based control is not. For certain other control problems, our results justify the use of persistently exciting data as data-driven control is possible only with data that are informative for system identification.
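For readers unfamiliar with the condition, the minimal sketch below checks persistency of excitation of a scalar input sequence via the usual Hankel-matrix rank test; this is standard textbook material and not the informativity framework developed in the paper.

```python
# A minimal sketch of the standard rank test for persistency of excitation
# (textbook material, not the informativity framework developed in the paper):
# a scalar input u_0, ..., u_{T-1} is persistently exciting of order L if the
# Hankel matrix with L rows built from it has full row rank L.
import numpy as np

def hankel(u, L):
    u = np.asarray(u, dtype=float)
    T = len(u)
    return np.column_stack([u[j:j + L] for j in range(T - L + 1)])

def is_persistently_exciting(u, L):
    return np.linalg.matrix_rank(hankel(u, L)) == L

rng = np.random.default_rng(1)
u_rich = rng.standard_normal(50)   # generic random input: PE of fairly high order
u_poor = np.ones(50)               # constant input: PE of order 1 only
print(is_persistently_exciting(u_rich, 10))   # True
print(is_persistently_exciting(u_poor, 2))    # False
```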
Ranking is a multi-billion-dollar problem. In this paper we present an overview of several production-quality ranking systems. We show that, due to the conflicting goals of employing the most effective machine learning models and responding to users in real time, ranking systems have evolved into a system of systems, where each subsystem can be viewed as a component layer. We view these layers as data processing, representation learning, candidate selection, and online inference. Each layer employs different algorithms and tools, with every end-to-end ranking system spanning multiple architectures. Our goal is to familiarize the general audience with a working knowledge of ranking at scale, the tools and algorithms employed, and the challenges introduced by adopting a layered approach.
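As a schematic illustration only, the sketch below shows how such layers can compose: learned embeddings from a representation layer feed a cheap candidate-selection step, whose output is re-scored by a heavier online-inference step. All names, dimensions, and scoring functions here are made up and do not describe any particular production system.

```python
# A schematic, hypothetical sketch of the layered structure described above:
# an offline representation-learning layer produces item embeddings, a cheap
# candidate-selection layer retrieves a small set of items, and an
# online-inference layer re-scores that set with a heavier model. All names,
# dimensions and scoring functions here are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
item_embeddings = rng.standard_normal((10_000, 64))   # stand-in for the representation layer

def candidate_selection(query_vec, k=100):
    # cheap retrieval: approximate relevance by a dot product, keep the top-k items
    scores = item_embeddings @ query_vec
    return np.argpartition(-scores, k)[:k]

def online_inference(query_vec, candidates):
    # heavier re-ranking model stub; here just a nonlinear re-scoring of the candidates
    feats = item_embeddings[candidates] * query_vec
    scores = np.tanh(feats).sum(axis=1)
    return candidates[np.argsort(-scores)]

query = rng.standard_normal(64)        # stand-in for the data-processing layer's output
ranked = online_inference(query, candidate_selection(query))
print(ranked[:10])                     # final ranked list that would be served
```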
We present a dynamical-system framework for understanding Nesterov's accelerated gradient method. In contrast to earlier work, our derivation does not rely on a vanishing step-size argument. We show that Nesterov acceleration arises from discretizing an ordinary differential equation with a semi-implicit Euler integration scheme. We analyze both the underlying differential equation and the discretization to obtain insights into the phenomenon of acceleration. The analysis suggests that a curvature-dependent damping term lies at the heart of the phenomenon. We further establish connections between the discretized and the continuous-time dynamics.
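As an illustration of the kind of discretization argument referred to above, the sketch below compares the classical Nesterov iteration with a semi-implicit Euler step applied to a damped second-order ODE. The particular ODE, the 3/t damping term, the quadratic objective, and the step sizes are illustrative assumptions and do not reproduce the paper's own derivation.

```python
# A minimal sketch contrasting the classical Nesterov iteration with a
# semi-implicit Euler discretization of a damped second-order ODE
#     x'' + d(t) x' + grad f(x) = 0.
# The damping d(t) = 3/t used below is the well-known Su-Boyd-Candes form and
# is purely illustrative; the paper's own ODE and damping term may differ.
import numpy as np

A = np.diag([1.0, 10.0])                # toy quadratic objective f(x) = 0.5 x^T A x

def grad_f(x):
    return A @ x

def nesterov(x0, steps=200, h=0.09):
    x, x_prev = x0.copy(), x0.copy()
    for k in range(1, steps + 1):
        y = x + (k - 1) / (k + 2) * (x - x_prev)   # momentum / look-ahead step
        x_prev, x = x, y - h * grad_f(y)           # gradient step at the look-ahead point
    return x

def semi_implicit_euler(x0, steps=2000, dt=0.05):
    x, v = x0.copy(), np.zeros_like(x0)
    for k in range(1, steps + 1):
        t = k * dt
        v = v - dt * ((3.0 / t) * v + grad_f(x))   # velocity update uses the old position
        x = x + dt * v                             # position update uses the new velocity
    return x

x0 = np.array([3.0, 1.0])
print(nesterov(x0))             # both iterates approach the minimizer at the origin
print(semi_implicit_euler(x0))
```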
A formal approach to nonlinear filtering of stochastic differential equations is the Kushner setting of applied mathematics and dynamical systems. Via Carleman linearization, a nonlinear stochastic differential equation can be equivalently expressed as a finite system of bilinear stochastic differential equations in an augmented state under a finite closure. The novelty of this paper is to embed the Carleman linearization into the stochastic evolution of the Markov process. To illustrate this, we apply the Carleman linearization to a nonlinear stochastic swing equation and develop filtering of the swing equation in the Carleman setting. Filtering in the Carleman setting has a simplified algorithmic procedure, and the augmented state accounts for both the nonlinearity and the stochasticity. We show that filtering of the nonlinear stochastic swing equation in the Carleman framework is more refined and sharper than the benchmark nonlinear extended Kalman filter (EKF). These results suggest the usefulness of embedding Carleman linearization into stochastic differential equations for filtering nonlinear stochastic systems, and should be of interest to researchers in nonlinear stochastic dynamics exploring linearization embedding techniques.
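To give a flavour of the Carleman construction, the toy sketch below builds the truncated Carleman matrix for a scalar ODE with cubic drift and integrates the resulting linear system in the augmented state of monomials. It treats the deterministic drift only, omits the Itô correction terms that arise in the stochastic case, and is not the swing-equation filter developed in the paper.

```python
# A toy sketch of Carleman linearization for a scalar ODE x' = a*x + b*x**3
# (deterministic drift only; Ito correction terms that arise in the stochastic
# case are omitted, and this is not the paper's swing-equation filter).
# The monomials z_k = x**k satisfy z_k' = k*(a*z_k + b*z_{k+2}); truncating at
# order N and dropping terms beyond z_N ("finite closure") yields a linear
# system z' = A z in the augmented state z = (x, x^2, ..., x^N).
import numpy as np

def carleman_matrix(a, b, N):
    A = np.zeros((N, N))
    for k in range(1, N + 1):
        A[k - 1, k - 1] = k * a                 # coefficient of z_k
        if k + 2 <= N:
            A[k - 1, k + 1] = k * b             # coefficient of z_{k+2}
    return A

def simulate(x0, a=-1.0, b=-0.1, N=8, dt=1e-3, steps=5000):
    z = np.array([x0 ** k for k in range(1, N + 1)], dtype=float)
    A = carleman_matrix(a, b, N)
    for _ in range(steps):
        z = z + dt * (A @ z)                    # Euler step of the truncated linear system
    return z[0]                                 # first component approximates x(t)

print(simulate(0.5))                            # compare with the exact solution of x' = -x - 0.1*x**3
```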
Bell suggested that a new perspective on quantum mechanics was needed. We propose a solution of the measurement problem based on a reconsideration of the nature of particles. The solution is presented with an idealized model involving non-locality or non-separability, identified in 1927 by Einstein and implicit in the standard interpretation of single-slit (or hole) diffraction. Considering particles as localizable entities leads to an 'induced collapse' model, a parameter-free alternative to spontaneous collapse models, which affords a new perspective on, inter alia, nuclear decay.