
Unions of Orthogonal Arrays and their aberrations via Hilbert bases

Added by Fabio Rapallo
Publication date: 2018
Language: English





We generate all the Orthogonal Arrays (OAs) of a given size n and strength t as the union of a collection of OAs which belong to an inclusion-minimal set of OAs. We derive a formula for computing the (Generalized) Word Length Pattern of a union of OAs that makes use of their polynomial counting functions. In this way the best OAs according to the Generalized Minimum Aberration criterion can be found by simply exploring a relatively small set of counting functions. The classes of OAs with 5 binary factors, strength 2, and sizes 16 and 20 are fully described.
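The paper works through polynomial counting functions of the fractions; as a minimal, self-contained illustration of the quantity being optimized, the sketch below computes the Generalized Word Length Pattern of a small binary design directly from its character-sum definition (not via the paper's counting-function formula), and shows that a union of two complementary half-fractions recovers the full factorial's GWLP. All names here are illustrative, not the authors'.

```python
from itertools import product, combinations

def gwlp(design):
    """Generalized Word Length Pattern (A_0, ..., A_m) of a binary design:
    A_j = (1/N^2) * sum over |I| = j of (sum_{x in F} prod_{i in I} (-1)^{x_i})^2."""
    N, m = len(design), len(design[0])
    A = []
    for j in range(m + 1):
        total = 0
        for I in combinations(range(m), j):
            s = sum((-1) ** sum(x[i] for i in I) for x in design)
            total += s * s
        A.append(total / N ** 2)
    return A

# A regular half-fraction of the 2^3 factorial, defined by x2 = x0 XOR x1.
# It is an OA of strength 2, so A_1 = A_2 = 0, while A_3 = 1.
frac = [x + (x[0] ^ x[1],) for x in product((0, 1), repeat=2)]
print(gwlp(frac))  # [1.0, 0.0, 0.0, 1.0]

# The union with the complementary half-fraction is the full 2^3 factorial,
# whose GWLP vanishes for every j >= 1.
other = [x + (1 - (x[0] ^ x[1]),) for x in product((0, 1), repeat=2)]
print(gwlp(frac + other))  # [1.0, 0.0, 0.0, 0.0]
```

Under the Generalized Minimum Aberration criterion, designs are ranked by sequentially minimizing $A_1, A_2, \dots$; the paper's contribution is computing these values for unions of OAs from the counting functions alone.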




Read More

Given an Orthogonal Array we analyze the aberrations of the sub-fractions which are obtained by the deletion of some of its points. We provide formulae to compute the Generalized Word-Length Pattern of any sub-fraction. In the case of the deletion of one single point, we provide a simple methodology to find which sub-fractions are best according to the Generalized Minimum Aberration criterion. We also study the effect of the deletion of 1, 2 or 3 points on some examples. The methodology does not put any restriction on the number of levels of each factor, so any mixed-level Orthogonal Array can be considered.
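The abstract's deletion effect is easy to observe numerically. This sketch (my own illustration, not the paper's formulae) recomputes the GWLP from its character-sum definition after deleting one point from the binary 2^3 full factorial; by the symmetry of the full factorial every single-point deletion gives the same pattern, whereas for a general OA the deletions rank differently and the paper's methodology identifies the best one.

```python
from itertools import product, combinations

def gwlp(design):
    """A_j = (1/N^2) * sum_{|I|=j} (sum_{x in F} prod_{i in I} (-1)^{x_i})^2."""
    N, m = len(design), len(design[0])
    return [sum(sum((-1) ** sum(x[i] for i in I) for x in design) ** 2
                for I in combinations(range(m), j)) / N ** 2
            for j in range(m + 1)]

full = list(product((0, 1), repeat=3))  # 2^3 full factorial (strength 3)
sub = full[1:]                          # delete the point (0, 0, 0)

print(gwlp(full))  # [1.0, 0.0, 0.0, 0.0]: no aberration
print(gwlp(sub))   # aberration appears at every word length: [1, 3/49, 3/49, 1/49]
```

Deleting one point turns every character sum from 0 into $\pm 1$, so each $A_j$ ($j \ge 1$) picks up a $\binom{3}{j}/49$ contribution, matching the printed values.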
Over the last two decades, many exciting variable selection methods have been developed for finding a small group of covariates that are associated with the response from a large pool. Can the discoveries from these data mining approaches be spurious due to high dimensionality and limited sample size? Can our fundamental assumptions about the exogeneity of the covariates needed for such variable selection be validated with the data? To answer these questions, we need to derive the distributions of the maximum spurious correlations given a certain number of predictors, namely, the distribution of the correlation of a response variable $Y$ with the best $s$ linear combinations of $p$ covariates $\mathbf{X}$, even when $\mathbf{X}$ and $Y$ are independent. When the covariance matrix of $\mathbf{X}$ possesses the restricted eigenvalue property, we derive such distributions for both a finite $s$ and a diverging $s$, using Gaussian approximation and empirical process techniques. However, such a distribution depends on the unknown covariance matrix of $\mathbf{X}$. Hence, we use the multiplier bootstrap procedure to approximate the unknown distributions and establish the consistency of such a simple bootstrap approach. The results are further extended to the situation where the residuals are from regularized fits. Our approach is then used to construct the upper confidence limit for the maximum spurious correlation and to test the exogeneity of the covariates. The former provides a baseline for guarding against false discoveries and the latter tests whether our fundamental assumptions for high-dimensional model selection are statistically valid. Our techniques and results are illustrated with both numerical examples and real data analysis.
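The spurious-correlation phenomenon is simple to reproduce for the special case $s = 1$ (the single best covariate): with many more covariates than observations, the maximum absolute sample correlation is far from zero even when $Y$ is independent of every $X_j$. A hedged stdlib-only sketch (my own demonstration, not the paper's bootstrap procedure):

```python
import math
import random

random.seed(0)
n, p = 50, 1000  # small sample, many covariates

# Y and all p covariates are drawn independently: every true correlation is 0.
Y = [random.gauss(0, 1) for _ in range(n)]
X = [[random.gauss(0, 1) for _ in range(n)] for _ in range(p)]

def corr(a, b):
    """Sample Pearson correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    sa = math.sqrt(sum((v - ma) ** 2 for v in a))
    sb = math.sqrt(sum((v - mb) ** 2 for v in b))
    return sum((u - ma) * (v - mb) for u, v in zip(a, b)) / (sa * sb)

# Maximum spurious correlation over the p independent covariates (s = 1 case).
max_spurious = max(abs(corr(Y, x)) for x in X)
print(max_spurious)  # roughly sqrt(2 log p / n) in magnitude, far from 0
```

This is exactly why the paper's upper confidence limit for the maximum spurious correlation is useful: a selected covariate should only be trusted if its correlation exceeds this pure-noise baseline.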
We propose two families of scale-free exponentiality tests based on the recent characterization of exponentiality by Arnold and Villasenor. The test statistics are based on suitable functionals of U-empirical distribution functions. The family of integral statistics can be reduced to V- or U-statistics with relatively simple non-degenerate kernels. They are asymptotically normal and have reasonably high local Bahadur efficiency under common alternatives. This efficiency is compared with the simulated powers of the new tests. On the other hand, the Kolmogorov-type tests demonstrate very low local Bahadur efficiency and rather moderate power for common alternatives, and can hardly be recommended to practitioners. We also explore the conditions of local asymptotic optimality of the new tests and describe, for both families, the special most favorable alternatives for which the tests are fully efficient.
Xiongtao Dai (2021)
The infinite-dimensional Hilbert sphere $S^\infty$ has been widely employed to model density functions and shapes, extending the finite-dimensional counterpart. We consider the Fréchet mean as an intrinsic summary of the central tendency of data lying on $S^\infty$. To pave the way for sound statistical inference, we derive properties of the Fréchet mean on $S^\infty$ by establishing its existence and uniqueness as well as a root-$n$ central limit theorem (CLT) for the sample version, overcoming obstructions from infinite-dimensionality and lack of compactness on $S^\infty$. Intrinsic CLTs for the estimated tangent vectors and covariance operator are also obtained. Asymptotic and bootstrap hypothesis tests for the Fréchet mean based on projection and norm are then proposed and are shown to be consistent. The proposed two-sample tests are applied to make inference for daily taxi demand patterns over Manhattan modeled as densities, of which the square roots are analyzed on the Hilbert sphere. Numerical properties of the proposed hypothesis tests which utilize the spherical geometry are studied in the real data application and simulations, where we demonstrate that the tests based on the intrinsic geometry compare favorably to those based on an extrinsic or flat geometry.
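The Fréchet mean minimizes the sum of squared geodesic distances to the data. On a sphere it can be computed by gradient descent through the logarithm and exponential maps; the following is a minimal sketch on the finite-dimensional sphere $S^2$ as a stand-in for $S^\infty$ (function names and the gradient-descent scheme are my illustration, not the paper's estimator):

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def log_map(p, q):
    """Tangent vector at p pointing to q along the geodesic (spherical log map)."""
    c = max(-1.0, min(1.0, sum(a * b for a, b in zip(p, q))))  # cos(distance)
    u = [b - c * a for a, b in zip(p, q)]  # component of q orthogonal to p
    nu = math.sqrt(sum(x * x for x in u))
    if nu < 1e-12:
        return [0.0] * len(p)
    return [math.acos(c) * x / nu for x in u]

def exp_map(p, v):
    """Follow the geodesic from p in tangent direction v for length |v|."""
    nv = math.sqrt(sum(x * x for x in v))
    if nv < 1e-12:
        return list(p)
    return [math.cos(nv) * a + math.sin(nv) * b / nv for a, b in zip(p, v)]

def frechet_mean(points, iters=100):
    """Intrinsic mean by averaging log-map images and stepping via exp_map."""
    mu = normalize([sum(c) for c in zip(*points)])  # extrinsic mean as start
    for _ in range(iters):
        g = [sum(c) / len(points) for c in zip(*(log_map(mu, q) for q in points))]
        mu = exp_map(mu, g)
    return mu

pts = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
print(frechet_mean(pts))  # geodesic midpoint: (1, 1, 0) / sqrt(2)
```

For two points the Fréchet mean is the geodesic midpoint, which gives a quick sanity check; the paper's contribution is establishing existence, uniqueness, and a root-$n$ CLT for this object in the infinite-dimensional, non-compact setting.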
Kan Chen, Zhiqi Bu, Shiyun Xu (2021)
Sparse Group LASSO (SGL) is a regularized model for high-dimensional linear regression problems with grouped covariates. SGL applies $\ell_1$ and $\ell_2$ penalties on the individual predictors and group predictors, respectively, to guarantee sparse effects both on the inter-group and within-group levels. In this paper, we apply the approximate message passing (AMP) algorithm to efficiently solve the SGL problem under Gaussian random designs. We further use the recently developed state evolution analysis of AMP to derive an asymptotically exact characterization of the SGL solution. This allows us to conduct multiple fine-grained statistical analyses of SGL, through which we investigate the effects of the group information and $\gamma$ (the proportion of the $\ell_1$ penalty). Through the lens of various performance measures, we show that SGL with small $\gamma$ benefits significantly from the group information and can outperform other SGL variants (including LASSO) or regularized models that do not exploit the group information, in terms of the signal recovery rate, false discovery rate and mean squared error.
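The abstract's AMP analysis is beyond a short snippet, but the SGL penalty itself has a well-known closed-form proximal operator that makes the mixed $\ell_1$/$\ell_2$ structure concrete: soft-threshold each coordinate by the $\ell_1$ part, then shrink each group's norm by the $\ell_2$ part. A hedged stdlib-only sketch (parameterization with $\gamma$ as the $\ell_1$ proportion follows the abstract; the function itself is my illustration):

```python
import math

def prox_sgl(v, groups, lam, gamma):
    """Proximal operator of gamma*lam*||x||_1 + (1-gamma)*lam*sum_g ||x_g||_2.
    groups is an ordered partition of the indices of v."""
    # Step 1: elementwise soft-thresholding by the l1 part, gamma*lam.
    z = [math.copysign(max(abs(x) - gamma * lam, 0.0), x) for x in v]
    out = []
    for g in groups:
        zg = [z[i] for i in g]
        ng = math.sqrt(sum(x * x for x in zg))
        # Step 2: groupwise shrinkage by the l2 part, (1-gamma)*lam;
        # a whole group is zeroed when its norm falls below the threshold.
        scale = max(1 - (1 - gamma) * lam / ng, 0.0) if ng > 0 else 0.0
        out.extend(x * scale for x in zg)
    return out

v = [3.0, -0.5, 0.1, 2.0]
print(prox_sgl(v, [[0, 1], [2, 3]], lam=1.0, gamma=0.5))
# shrinks within each group and across coordinates; small entries vanish
```

Setting $\gamma = 1$ recovers the plain LASSO soft-thresholding and $\gamma = 0$ the group LASSO, matching the abstract's reading of $\gamma$ as the proportion of the $\ell_1$ penalty.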
