This paper deals with the optimization of industrial asset management strategies, whose profitability is characterized by the Net Present Value (NPV) indicator, assessed by a Monte Carlo simulator. The proposed method builds a metamodel of this stochastic simulator, which yields, for a given model input, the NPV probability distribution without running the simulator. The present work concentrates on emulating the quantile function of the stochastic simulator by interpolating well-chosen basis functions and metamodeling their coefficients (using a Gaussian process metamodel). This quantile function metamodel is then used to treat a maintenance strategy optimization problem (four systems installed on different plants), in order to optimize an NPV quantile. Within the Gaussian process framework, an adaptive design method (called QFEI) is defined by extending the well-known EGO algorithm to this setting. This yields an optimal solution using a small number of simulator runs.
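A minimal sketch of the two-stage construction described in this abstract, under illustrative assumptions: empirical quantile curves are computed from repeated simulator runs at each design point, a PCA basis is used for the curves, and one Gaussian process is fitted per basis coefficient. The simulator stand-in, design, and parameter values are made up for illustration; the QFEI adaptive design step is not reproduced here.

```python
# Hypothetical sketch: quantile-function metamodeling via basis coefficients.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)
probs = np.linspace(0.01, 0.99, 99)          # grid of probability levels

def run_simulator(x, n_rep=500):
    """Stand-in for the stochastic NPV simulator: returns n_rep NPV draws."""
    return x.sum() + rng.normal(scale=1.0 + x[0] ** 2, size=n_rep)

# 1. Training data: one empirical quantile curve per design point.
X = rng.uniform(-1, 1, size=(30, 2))
Q = np.array([np.quantile(run_simulator(x), probs) for x in X])

# 2. Decompose the curves on a small basis (here: PCA components).
pca = PCA(n_components=3)
coeffs = pca.fit_transform(Q)                 # coefficient vector per input

# 3. One GP metamodel per basis coefficient.
gps = [GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
       .fit(X, coeffs[:, j]) for j in range(coeffs.shape[1])]

def predict_quantile_curve(x_new):
    """Reconstruct the predicted NPV quantile function at a new input."""
    c = np.array([gp.predict(x_new[None, :])[0] for gp in gps])
    return pca.inverse_transform(c[None, :])[0]

q_hat = predict_quantile_curve(np.array([0.2, -0.5]))
print("predicted 95% NPV quantile:", q_hat[probs.searchsorted(0.95)])
```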
Gaussian processes (GP) are widely used as metamodels for emulating time-consuming computer codes. We focus on problems involving categorical inputs, with a potentially large number L of levels (typically several tens), partitioned into G << L groups of various sizes. Parsimonious covariance functions, or kernels, can then be defined by block covariance matrices T with constant covariances between pairs of blocks and within blocks. We study the positive definiteness of such matrices to encourage their practical use. The hierarchical group/level structure, equivalent to a nested Bayesian linear model, provides a parameterization of valid block matrices T. The same model can then be used when the assumption within blocks is relaxed, giving a flexible parametric family of valid covariance matrices with constant covariances between pairs of blocks. The positive definiteness of T is equivalent to the positive definiteness of a smaller matrix of size G, obtained by averaging each block. The model is applied to a problem in nuclear waste analysis, where one of the categorical inputs is the atomic number, which has more than 90 levels.
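A numerical illustration of the block construction described above, under the simplifying assumption that each diagonal block is compound symmetric (constant variance v_g and constant within-block covariance c_g with v_g > c_g) and each off-diagonal block is constant. All parameter values below are made up; the code only compares the spectrum of T with that of the smaller G x G matrix obtained by block averaging, consistent with the equivalence stated in the abstract.

```python
# Illustrative check: positive definiteness of T vs. its block-averaged version.
import numpy as np

sizes = [3, 5, 2]                    # group sizes: L = 10 levels, G = 3 groups
v     = [1.0, 1.2, 0.9]             # within-block variances
c_in  = [0.6, 0.5, 0.4]             # within-block covariances (v_g > c_g)
c_btw = np.array([[0.0, 0.3, 0.2],  # constant between-block covariances
                  [0.3, 0.0, 0.1],
                  [0.2, 0.1, 0.0]])

starts = np.cumsum([0] + sizes)
L = starts[-1]
T = np.zeros((L, L))
for g in range(len(sizes)):
    a, b = starts[g], starts[g + 1]
    T[a:b, a:b] = c_in[g]
    np.fill_diagonal(T[a:b, a:b], v[g])
    for h in range(len(sizes)):
        if h != g:
            T[a:b, starts[h]:starts[h + 1]] = c_btw[g, h]

# Smaller G x G matrix obtained by averaging each block of T.
G = len(sizes)
T_avg = np.array([[T[starts[g]:starts[g + 1], starts[h]:starts[h + 1]].mean()
                   for h in range(G)] for g in range(G)])

print("min eigenvalue of T     :", np.linalg.eigvalsh(T).min())
print("min eigenvalue of T_avg :", np.linalg.eigvalsh(T_avg).min())
```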
In this paper, we introduce a single acceptance sampling inspection plan (SASIP) for the transmuted Rayleigh (TR) distribution when the lifetime experiment is truncated at a prefixed time. We establish the proposed plan for different choices of confidence level, acceptance number, and ratio of true mean lifetime to specified mean lifetime. The minimum sample size necessary to ensure a certain specified mean lifetime is obtained. Operating characteristic (OC) values and the producer's risk of the proposed plan are presented. Two real-life examples are presented to show the applicability of the proposed SASIP.
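A hedged sketch of the minimum-sample-size computation behind such a plan, assuming the usual transmuted family F(t) = (1 + lam) G(t) - lam G(t)^2 with Rayleigh baseline G(t) = 1 - exp(-t^2 / (2 sigma^2)) and the standard single-sampling acceptance logic (accept if at most c failures are observed among n items by the truncation time). The numerical values in the example are illustrative only.

```python
# Illustrative minimum sample size for a truncated life test under the TR law.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq
from scipy.stats import binom

def tr_cdf(t, sigma, lam):
    g = 1.0 - np.exp(-t**2 / (2.0 * sigma**2))
    return (1.0 + lam) * g - lam * g**2

def tr_mean(sigma, lam):
    # E[T] = integral of the survival function; 20*sigma covers the support.
    return quad(lambda t: 1.0 - tr_cdf(t, sigma, lam), 0.0, 20.0 * sigma)[0]

def min_sample_size(p_star, c, t0, mu0, lam):
    """Smallest n such that P(accept | mean lifetime = mu0) <= 1 - P*."""
    # Scale sigma0 matching the specified mean lifetime mu0.
    sigma0 = brentq(lambda s: tr_mean(s, lam) - mu0, 1e-3, 1e3)
    p = tr_cdf(t0, sigma0, lam)       # P(failure before t0) at the boundary
    n = c + 1
    while binom.cdf(c, n, p) > 1.0 - p_star:
        n += 1
    return n

# Example: 95% confidence, acceptance number c = 2, truncation time t0 = 1.0,
# specified mean lifetime mu0 = 1.0, transmutation parameter lam = 0.5.
print(min_sample_size(p_star=0.95, c=2, t0=1.0, mu0=1.0, lam=0.5))
```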
In this work, we study the event occurrences of user activities on online social network platforms. To characterize the social activity interactions among network users, we propose a network group Hawkes (NGH) process model. In particular, the observed network structure information is employed to model the users' dynamic posting behaviors. Furthermore, the users are clustered into latent groups according to their dynamic behavior patterns. To estimate the model, a constrained maximum likelihood approach is proposed. Theoretically, we establish the consistency and asymptotic normality of the estimators. In addition, we show that the group memberships can be identified consistently. To conduct estimation, a branching representation structure is first introduced, and a stochastic EM (StEM) algorithm is developed to tackle the computational problem. Lastly, we apply the proposed method to a social network dataset collected from Sina Weibo and identify the influential network users as an interesting application.
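As a minimal building block for Hawkes-type models such as the NGH process, the sketch below evaluates the exact log-likelihood of a univariate Hawkes process with exponential kernel on [0, T], using the standard recursive computation of the self-excitation sums. The full NGH ingredients (network structure, latent groups, branching representation, stochastic EM) are not reproduced; parameter names and the toy event sequence are made up.

```python
# Univariate Hawkes log-likelihood with exponential kernel (illustrative).
import numpy as np

def hawkes_loglik(times, mu, alpha, beta, T):
    """log L = sum_i log lambda(t_i) - integral_0^T lambda(t) dt, where
    lambda(t) = mu + alpha * sum_{t_j < t} exp(-beta * (t - t_j))."""
    times = np.asarray(times)
    # Recursion: A_i = sum_{j < i} exp(-beta (t_i - t_j)).
    A = np.zeros(len(times))
    for i in range(1, len(times)):
        A[i] = np.exp(-beta * (times[i] - times[i - 1])) * (1.0 + A[i - 1])
    log_intensity = np.log(mu + alpha * A).sum()
    compensator = mu * T + (alpha / beta) * (1.0 - np.exp(-beta * (T - times))).sum()
    return log_intensity - compensator

# Example: evaluate the likelihood on a toy event sequence.
print(hawkes_loglik([0.5, 1.2, 1.3, 2.7], mu=0.8, alpha=0.5, beta=1.5, T=3.0))
```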
This work is motivated by learning the individualized minimal clinically important difference, a vital concept for assessing clinical importance in various biomedical studies. We formulate the scientific question as a high-dimensional statistical problem where the parameter of interest lies in an individualized linear threshold. The goal of this paper is to develop a hypothesis testing procedure for the significance of a single element of this high-dimensional parameter, as well as for the significance of a linear combination of this parameter. The difficulty is due to the high dimensionality of the nuisance component in developing such a testing procedure, and also stems from the fact that this high-dimensional threshold model is nonregular and the limiting distribution of the corresponding estimator is nonstandard. To deal with these challenges, we construct a test statistic via a new bias-corrected smoothed decorrelated score approach, and establish its asymptotic distributions under both the null and local alternative hypotheses. In addition, we propose a double-smoothing approach to select the optimal bandwidth parameter in our test statistic and provide theoretical guarantees for the selected bandwidth. We conduct comprehensive simulation studies to demonstrate how our proposed procedure can be applied in empirical studies. Finally, we apply the proposed method to a clinical trial where the scientific goal is to assess the clinical importance of a surgical procedure.
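To make the smoothing idea concrete, the sketch below shows the generic step that underlies smoothed score approaches for threshold models: the nonsmooth indicator 1{u >= 0} is replaced by a kernel-CDF surrogate K(u / h) with bandwidth h, so that score equations become differentiable. This is a generic illustration of kernel smoothing, not the paper's full bias-corrected decorrelated score statistic or its double-smoothing bandwidth selector.

```python
# Smooth surrogate for the hard threshold indicator (illustrative).
import numpy as np
from scipy.stats import norm

def smoothed_indicator(u, h):
    """Gaussian-CDF surrogate for 1{u >= 0} with bandwidth h."""
    return norm.cdf(u / h)

# As h -> 0 the surrogate approaches the hard indicator.
u = np.array([-0.3, -0.05, 0.0, 0.05, 0.3])
for h in (0.5, 0.1, 0.01):
    print(h, np.round(smoothed_indicator(u, h), 3))
print("hard:", (u >= 0).astype(float))
```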
This paper introduces and analyzes a stochastic search method for parameter estimation in linear regression models in the spirit of Beran and Millar (1987). The idea is to generate a random finite subset of the parameter space which automatically contains points very close to the unknown true parameter. The motivation for this procedure comes from recent work of Duembgen, Samworth and Schuhmacher (2011) on regression models with log-concave error distributions.
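A hedged sketch of a generic stochastic search of this flavor: draw a finite random set of candidate parameter vectors around a pilot estimate and keep the candidate minimizing a fit criterion. The sampling scheme and criterion below are illustrative choices, not the paper's specific construction.

```python
# Generic stochastic search over a random finite candidate set (illustrative).
import numpy as np

rng = np.random.default_rng(1)

# Toy data: y = X beta_true + error.
n, d = 200, 3
X = rng.normal(size=(n, d))
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + rng.laplace(size=n)

# Pilot estimate (least squares) and a random candidate set around it.
beta_pilot = np.linalg.lstsq(X, y, rcond=None)[0]
candidates = beta_pilot + rng.normal(scale=n ** -0.5, size=(5000, d))

# Keep the candidate minimizing, e.g., an absolute-deviation criterion.
losses = np.abs(y[None, :] - candidates @ X.T).sum(axis=1)
beta_hat = candidates[losses.argmin()]
print("pilot:", np.round(beta_pilot, 3), "search:", np.round(beta_hat, 3))
```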