
Safely and Quickly Deploying New Features with a Staged Rollout Framework Using Sequential Test and Adaptive Experimental Design

Added by Zhenyu Zhao
Publication date: 2019
Language: English





During the rapid development cycle for Internet products (websites and mobile apps), new features are developed and rolled out to users constantly. Features with code defects or design flaws can cause outages and significant degradation of the user experience. The traditional process of code review and change management can be time-consuming and error-prone. To make feature rollout both safe and fast, this paper proposes a methodology for rolling out features in an automated way using an adaptive experimental design. Under this framework, a feature is gradually ramped up from a small proportion of users to a larger population based on real-time evaluation of important performance metrics. If any regression is detected during a ramp-up step, the ramp-up process stops and the feature developer is alerted. Two main algorithmic components power this framework: 1) a continuous monitoring algorithm, which uses a variant of the sequential probability ratio test (SPRT) to monitor the feature's performance metrics and alert developers when a metric degradation is detected; and 2) an automated ramp-up algorithm, which decides when and how to ramp up to the next stage with a larger sample size. The paper presents one monitoring algorithm and three ramp-up algorithms: time-based, power-based, and risk-based (a Bayesian approach) schedules. These algorithms are evaluated and compared on both simulated and real data. The framework offers three benefits for feature rollout: 1) for defective features, it detects regressions early and reduces their negative effect; 2) for healthy features, it rolls the feature out quickly; 3) it reduces the need for manual intervention by automating the rollout process.
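To make the two components concrete, here is a minimal Python sketch, not the paper's production algorithm: `sprt_monitor` runs a Wald-style SPRT on a stream of per-user metric differences, assuming for simplicity that they are Gaussian with known variance, and `staged_rollout` implements the simplest, time-based flavor of schedule, advancing to the next traffic stage only while the monitor stays clear. The `metric_stream_at` callback and the stage percentages are hypothetical.

```python
import math

def sprt_monitor(stream, delta, sigma, alpha=0.05, beta=0.2):
    """Wald SPRT for H0: mean = 0 (healthy) vs H1: mean = delta (degraded),
    applied to a stream of per-user metric differences (treatment - control).
    Assumes Gaussian observations with known standard deviation sigma."""
    upper = math.log((1 - beta) / alpha)   # cross above: accept H1 (regression)
    lower = math.log(beta / (1 - alpha))   # cross below: accept H0 (healthy)
    llr = 0.0
    for x in stream:
        llr += (delta * x - delta**2 / 2) / sigma**2  # Gaussian log-LR increment
        if llr >= upper:
            return "regression"
        if llr <= lower:
            return "healthy"
    return "inconclusive"  # stream ended before either boundary was crossed

def staged_rollout(metric_stream_at, stages=(0.01, 0.05, 0.25, 1.0),
                   delta=0.1, sigma=1.0):
    """Time-based ramp-up schedule: expose the feature to each traffic stage
    in turn, and halt (alerting the developer) on a detected regression.
    `metric_stream_at(pct)` is a hypothetical callback yielding the metric
    stream observed while pct of users see the feature."""
    for pct in stages:
        if sprt_monitor(metric_stream_at(pct), delta, sigma) == "regression":
            return f"halted at {pct:.0%}: regression detected"
    return "fully rolled out"
```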



Related research

We introduce Deep Adaptive Design (DAD), a method for amortizing the cost of adaptive Bayesian experimental design that allows experiments to be run in real-time. Traditional sequential Bayesian optimal experimental design approaches require substantial computation at each stage of the experiment. This makes them unsuitable for most real-world applications, where decisions must typically be made quickly. DAD addresses this restriction by learning an amortized design network upfront and then using this to rapidly run (multiple) adaptive experiments at deployment time. This network represents a design policy which takes as input the data from previous steps, and outputs the next design using a single forward pass; these design decisions can be made in milliseconds during the live experiment. To train the network, we introduce contrastive information bounds that are suitable objectives for the sequential setting, and propose a customized network architecture that exploits key symmetries. We demonstrate that DAD successfully amortizes the process of experimental design, outperforming alternative strategies on a number of problems.
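As a rough illustration of what such a design policy looks like, the sketch below is a hypothetical architecture, not DAD's actual network, and training with the paper's contrastive information bounds is omitted: it encodes each past (design, outcome) pair, pools them with a permutation-invariant sum (the kind of symmetry the authors exploit), and emits the next design in a single forward pass.

```python
import torch
import torch.nn as nn

class DesignPolicy(nn.Module):
    """Hypothetical amortized design network: history in, next design out."""

    def __init__(self, design_dim, outcome_dim, hidden=64):
        super().__init__()
        # Per-step encoder for (design, outcome) pairs.
        self.encoder = nn.Sequential(
            nn.Linear(design_dim + outcome_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden),
        )
        # Maps the pooled history representation to the next design.
        self.head = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, design_dim),
        )

    def forward(self, designs, outcomes):
        # designs: (steps, design_dim); outcomes: (steps, outcome_dim).
        if designs.shape[0] == 0:
            pooled = designs.new_zeros(self.head[0].in_features)  # empty history
        else:
            # Sum pooling makes the policy invariant to the order of past steps.
            pooled = self.encoder(torch.cat([designs, outcomes], dim=-1)).sum(dim=0)
        return self.head(pooled)

policy = DesignPolicy(design_dim=2, outcome_dim=1)
first_design = policy(torch.empty(0, 2), torch.empty(0, 1))  # one forward pass
```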
In the case of a linear state space model, we implement an MCMC sampler with two phases. In the learning phase, a self-tuning sampler is used to learn the parameter mean and covariance structure. In the estimation phase, the parameter mean and covariance structure informs the proposal mechanism and is also used in a delayed-acceptance algorithm. Information on the resulting state of the system is given by a Gaussian mixture. In on-line mode, the algorithm is adaptive and uses a sliding-window approach to accelerate sampling speed and to maintain appropriate acceptance rates. We apply the algorithm to joint state and parameter estimation in the case of irregularly sampled GPS time-series data.
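The delayed-acceptance step can be sketched compactly. The toy Python version below is a generic delayed-acceptance Metropolis-Hastings, not the authors' exact sampler: a cheap surrogate posterior screens each proposal so the expensive posterior is evaluated only for promising candidates, and the proposal covariance `cov` stands in for what the self-tuning learning phase would produce.

```python
import numpy as np

def delayed_acceptance_mh(theta0, cheap_logpost, full_logpost, cov,
                          n_steps=1000, rng=None):
    """Generic delayed-acceptance Metropolis-Hastings with a symmetric
    Gaussian random-walk proposal. The second-stage ratio corrects for the
    first-stage screen, so the chain still targets the full posterior."""
    rng = rng or np.random.default_rng(0)
    theta = np.asarray(theta0, dtype=float)
    lp_cheap, lp_full = cheap_logpost(theta), full_logpost(theta)
    chain = [theta.copy()]
    for _ in range(n_steps):
        prop = rng.multivariate_normal(theta, cov)
        lp_cheap_prop = cheap_logpost(prop)
        # Stage 1: screen the proposal with the cheap surrogate.
        if np.log(rng.uniform()) < lp_cheap_prop - lp_cheap:
            lp_full_prop = full_logpost(prop)
            # Stage 2: evaluate the expensive posterior and correct the
            # acceptance ratio for the stage-1 screen.
            if np.log(rng.uniform()) < ((lp_full_prop - lp_full)
                                        - (lp_cheap_prop - lp_cheap)):
                theta, lp_cheap, lp_full = prop, lp_cheap_prop, lp_full_prop
        chain.append(theta.copy())
    return np.array(chain)
```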
In this paper, we aim at solving a class of multiple testing problems under the Bayesian sequential decision framework. Our motivating application comes from binary labeling tasks in crowdsourcing, where the requestor needs to simultaneously decide which worker to choose to provide the label and when to stop collecting labels under a certain budget constraint. We start with the binary hypothesis testing problem to determine the true label of a single object, and provide an optimal solution by casting it under the adaptive sequential probability ratio test (Ada-SPRT) framework. We characterize the structure of the optimal solution, i.e., the optimal adaptive sequential design, which minimizes the Bayes risk through the log-likelihood ratio statistic. We also develop a dynamic programming algorithm that can efficiently approximate the optimal solution. For the multiple testing problem, we further propose to adopt an empirical Bayes approach for estimating class priors and show that our method has an average loss that converges to the minimal Bayes risk under the true model. The experiments on both simulated and real data show the robustness of our method and its superiority in labeling accuracy as compared to several other recently proposed approaches.
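The single-object stopping rule has a compact form. The sketch below is an illustrative simplification, not the paper's optimal design: it queries the most accurate remaining workers first (a crude stand-in for the adaptive worker choice), updates the log-likelihood ratio of label 1 versus label 0 under a uniform prior, and stops at a Wald boundary or when the budget runs out. `request_label` and `worker_acc` are hypothetical interfaces.

```python
import math

def ada_sprt_label(request_label, worker_acc, budget, alpha=0.05, beta=0.05):
    """SPRT-style crowd labeling of one object. `worker_acc[w]` is worker w's
    probability of voting the true label; `request_label(w)` is assumed to
    return that worker's binary vote."""
    upper = math.log((1 - beta) / alpha)
    lower = math.log(beta / (1 - alpha))
    llr = 0.0  # log P(votes | label=1) - log P(votes | label=0), uniform prior
    # Most accurate workers carry the largest per-vote evidence, so query
    # them first (the paper's design adapts this choice optimally).
    queue = sorted(range(len(worker_acc)), key=lambda w: -worker_acc[w])
    for w in queue[:budget]:
        p = worker_acc[w]
        vote = request_label(w)
        llr += math.log(p / (1 - p)) if vote == 1 else math.log((1 - p) / p)
        if llr >= upper:
            return 1
        if llr <= lower:
            return 0
    return int(llr > 0)  # budget exhausted: fall back to the sign of the LLR
```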
Jing Yu, Mihai Anitescu (2019)
We propose an optimization algorithm to compute the optimal sensor locations in experimental design in the formulation of Bayesian inverse problems, where the parameter-to-observable mapping is described through an integral equation and its discretization results in a continuously indexed matrix whose size depends on the mesh size n. By approximating the gradient and Hessian of the objective design criterion from Chebyshev interpolation, we solve a sequence of quadratic programs and achieve the complexity $\mathcal{O}(n \log^2 n)$. An error analysis guarantees that the integrality gap shrinks to zero as $n \to \infty$, and we apply the algorithm to a two-dimensional advection-diffusion equation to determine the LIDAR's optimal sensing directions for data collection.
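The interpolate-then-differentiate trick at the heart of the complexity result can be shown in one dimension. The toy sketch below is a 1-D stand-in; the paper's construction is multi-dimensional and also approximates the Hessian. It fits a Chebyshev interpolant to an expensive design criterion on [-1, 1] and differentiates the interpolant instead of the criterion itself.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def cheb_gradient(criterion, degree=32):
    """Fit a degree-`degree` Chebyshev interpolant to `criterion` at the
    Chebyshev nodes of the first kind, then return the derivative of the
    interpolant as a cheap surrogate for the criterion's gradient."""
    k = np.arange(degree + 1)
    nodes = np.cos(np.pi * (k + 0.5) / (degree + 1))  # Chebyshev nodes in (-1, 1)
    coeffs = C.chebfit(nodes, criterion(nodes), degree)
    dcoeffs = C.chebder(coeffs)
    return lambda x: C.chebval(x, dcoeffs)

# Stand-in for an expensive design criterion and its interpolated gradient.
crit = lambda s: np.exp(-s**2) * np.sin(3.0 * s)
grad = cheb_gradient(crit)
print(grad(0.3))  # close to the true derivative at s = 0.3
```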
The current work is motivated by the need for robust statistical methods for precision medicine; as such, we address the need for statistical methods that provide actionable inference for a single unit at any point in time. We aim to learn the optimal, unknown choice of the controlled components of the design in order to optimize the expected outcome; accordingly, we adapt the randomization mechanism for future time-point experiments based on the data collected on the individual over time. Our results demonstrate that one can learn the optimal rule from a single sample, and thereby adjust the design at any time point t with valid inference for the mean target parameter. This work provides several contributions to the field of statistical precision medicine. First, we define a general class of averages of conditional causal parameters defined by the current context for single-unit time-series data. We define a nonparametric model for the probability distribution of the time series under few assumptions, and aim to fully exploit the sequential randomization in the estimation procedure via the double-robust structure of the efficient influence curve of the proposed target parameter. We present multiple exploration-exploitation strategies for assigning treatment, and methods for estimating the optimal rule. Lastly, we present the study of data-adaptive inference on the mean under the optimal treatment rule, where the target parameter adapts over time in response to the observed context of the individual. Our target parameter is pathwise differentiable with an efficient influence function that is doubly robust, which makes it easier to estimate than previously proposed variations. We characterize the limit distribution of our estimator under a Donsker condition expressed in terms of a notion of bracketing entropy adapted to martingale settings.
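One exploration-exploitation strategy can be sketched simply. The Python sketch below uses epsilon-greedy assignment as an illustrative stand-in for the strategies studied in the paper: the treatment probability is always known and bounded away from 0 and 1, which is what keeps the mean under the learned rule estimable with valid inference. `draw_outcome` is a hypothetical interface to the single unit's response at each time point.

```python
import numpy as np

def single_unit_design(draw_outcome, n_steps=200, eps=0.2, rng=None):
    """Epsilon-greedy sequential randomization for one unit over time.
    `draw_outcome(a)` is assumed to return the outcome of binary treatment a."""
    rng = rng or np.random.default_rng(1)
    sums, counts = np.zeros(2), np.zeros(2)
    history = []
    for t in range(n_steps):
        means = sums / np.maximum(counts, 1)   # running mean outcome per arm
        greedy = int(means[1] >= means[0])     # current estimate of the rule
        # The randomization probability is known at every t and bounded in
        # [eps/2, 1 - eps/2], enabling IPW-style estimation of the mean
        # outcome under the learned treatment rule.
        p_treat = (1 - eps) * greedy + eps / 2
        a = int(rng.uniform() < p_treat)
        y = draw_outcome(a)
        sums[a] += y
        counts[a] += 1
        history.append((a, y, p_treat))
    return history
```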
