
Sampling with Costs

Published by Joseph Skufca
Publication date: 2015
Research field: Mathematical Statistics
Paper language: English

We consider the problem of choosing the best of $n$ samples, out of a large random pool, when the sampling of each member is associated with a certain cost. The quality (worth) of the best sample clearly increases with $n$, but so do the sampling costs, and one important question is how many to sample for optimal gain (worth minus costs). If, in addition, the assessment of worth for each sample is associated with some measurement error, the perceived best out of $n$ might not be the actual best, complicating the issue. Situations like this are typical in mate selection, job hiring, and food foraging, to name just a few. We tackle the problem using standard order statistics, yielding suggestions for optimal strategies, as well as some unexpected insights.
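The trade-off described above is easy to probe numerically. The following sketch is a minimal illustration, not the paper's model: it assumes i.i.d. standard-normal worths, a linear cost per sample, and optional Gaussian measurement error, and it estimates the expected gain (true worth of the perceived best minus total sampling cost) as a function of $n$.

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_gain(n, cost=0.05, noise_sd=0.0, trials=20_000):
    """Monte Carlo estimate of E[worth of perceived best of n] - cost * n.

    Worths are i.i.d. standard normal (an illustrative choice). With
    noise_sd > 0, worths are observed with additive Gaussian error, so
    the perceived best need not be the actual best.
    """
    worth = rng.standard_normal((trials, n))
    observed = worth + noise_sd * rng.standard_normal((trials, n))
    picked = worth[np.arange(trials), observed.argmax(axis=1)]
    return picked.mean() - cost * n

# Sweep n to locate the optimal sample size under a linear cost.
for noise in (0.0, 1.0):
    gains = {n: expected_gain(n, noise_sd=noise) for n in range(1, 41)}
    best = max(gains, key=gains.get)
    print(f"noise_sd={noise}: optimal n = {best}, gain = {gains[best]:.3f}")
```

In this toy model, measurement error reduces the benefit of additional samples, since the true worth of the perceived best grows more slowly with $n$ once selection is based on noisy observations.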


Read also

We consider the problem of optimal transportation with general cost between an empirical measure and a general target probability on $\mathbb{R}^d$, with $d \ge 1$. We extend results in [19] and prove asymptotic stability of both optimal transport maps and potentials for a large class of costs in $\mathbb{R}^d$. We derive a central limit theorem (CLT) towards a Gaussian distribution for the empirical transportation cost under minimal assumptions, with a new proof based on the Efron-Stein inequality and on the sequential compactness of the closed unit ball in $L^2(P)$ for the weak topology. We also provide CLTs for empirical Wasserstein distances in the special case of potential costs $|\cdot|^p$, $p > 1$.
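For $d = 1$ and cost $|x - y|^p$, the optimal coupling is the monotone (quantile) coupling, so the empirical transportation cost reduces to pairing order statistics. The sketch below is a minimal illustration of that special case, with a second i.i.d. sample standing in for the target measure; it looks at the $\sqrt{n}$-scale fluctuations of the empirical cost that a CLT of this kind describes.

```python
import numpy as np

rng = np.random.default_rng(1)

def empirical_cost_1d(x, y, p=2):
    """Optimal transport cost between two equal-size empirical measures
    on the line with cost |x - y|^p: sort both samples and pair order
    statistics (the monotone coupling is optimal in one dimension)."""
    return np.mean(np.abs(np.sort(x) - np.sort(y)) ** p)

# Samples from N(0,1) and N(1,1); the population W_2^2 cost is exactly 1
# (a pure mean shift), so the empirical costs should center near 1.
n, reps = 2_000, 500
costs = np.array([
    empirical_cost_1d(rng.standard_normal(n), 1.0 + rng.standard_normal(n))
    for _ in range(reps)
])
rescaled = np.sqrt(n) * (costs - costs.mean())  # CLT-scale fluctuations
print("mean cost:", costs.mean().round(3), " rescaled sd:", rescaled.std().round(3))
```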
In this paper, we consider the information content of the maximum ranked set sampling procedure with unequal samples (MRSSU) in terms of Tsallis entropy, which is a nonadditive generalization of Shannon entropy. We obtain several results on Tsallis entropy, including bounds, monotonicity properties, stochastic orders, and sharp bounds under some assumptions. We also compare the uncertainty and information content of MRSSU with its counterpart under simple random sampling (SRS). Finally, we develop some characterization results in terms of the cumulative Tsallis entropy and residual Tsallis entropy of MRSSU and SRS data.
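For a discrete distribution $p = (p_1, \dots, p_k)$, the Tsallis entropy is $S_q(p) = (1 - \sum_i p_i^q)/(q - 1)$, which recovers Shannon entropy as $q \to 1$. A minimal sketch of the definition (the discrete case only, chosen for illustration; the MRSSU results above concern the continuous analogue):

```python
import numpy as np

def tsallis_entropy(p, q):
    """Tsallis entropy S_q(p) = (1 - sum_i p_i^q) / (q - 1) of a discrete
    distribution; converges to Shannon entropy (in nats) as q -> 1."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                                 # 0 * log 0 = 0 convention
    if np.isclose(q, 1.0):
        return float(-np.sum(p * np.log(p)))     # Shannon limit
    return float((1.0 - np.sum(p ** q)) / (q - 1.0))

dist = [0.5, 0.25, 0.25]
for q in (0.5, 1.0, 2.0):
    print(f"S_{q}(p) = {tsallis_entropy(dist, q):.4f}")
```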
We consider the problem of selective inference after solving a (randomized) convex statistical learning program in the form of a penalized or constrained loss function. Our first main result is a change-of-measure formula that describes many conditional sampling problems of interest in selective inference. Our approach is model-agnostic in the sense that users may provide their own statistical model for inference; we simply provide the modification of each distribution in the model after the selection. Our second main result describes the geometric structure in the Jacobian appearing in the change of measure, drawing connections to curvature measures appearing in Weyl-Steiner volume-of-tubes formulae. This Jacobian is necessary for problems in which the convex penalty is not polyhedral, with the prototypical examples being the group LASSO and the nuclear norm. We derive explicit formulae for the Jacobian of the group LASSO. To illustrate the generality of our method, we consider many examples throughout, varying both the penalty or constraint in the statistical learning problem as well as the loss function, and also considering selective inference after solving multiple statistical learning programs. Penalties considered include the LASSO, forward stepwise, stagewise algorithms, marginal screening, and the generalized LASSO. Loss functions considered include squared-error, logistic, and log-det for covariance matrix estimation. Having described, through our first two results, the appropriate distribution we wish to sample from, we outline a framework for sampling using a projected Langevin sampler in the (commonly occurring) case that the distribution is log-concave.
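The projected Langevin step mentioned at the end combines an unadjusted Langevin move on the log-density with a Euclidean projection onto the constraint set. The sketch below is a generic stand-alone version for a toy log-concave target (a standard Gaussian restricted to the nonnegative orthant), not the selective-inference distribution constructed in the paper; `grad_log_density` and `project` are placeholders the user would supply.

```python
import numpy as np

rng = np.random.default_rng(2)

def projected_langevin(grad_log_density, project, x0, step=1e-3, n_steps=50_000):
    """Projected Langevin sampler: Euler-Maruyama step on the log-density,
    then Euclidean projection onto the (convex) constraint set."""
    x = np.asarray(x0, dtype=float)
    draws = np.empty((n_steps, x.size))
    for k in range(n_steps):
        noise = rng.standard_normal(x.shape)
        x = project(x + step * grad_log_density(x) + np.sqrt(2.0 * step) * noise)
        draws[k] = x
    return draws

# Toy target: N(0, I) on R^2 restricted to the nonnegative orthant.
draws = projected_langevin(
    grad_log_density=lambda x: -x,         # grad log density of N(0, I)
    project=lambda x: np.maximum(x, 0.0),  # projection onto {x >= 0}
    x0=np.ones(2),
)
print("mean after burn-in:", draws[10_000:].mean(axis=0))  # approx. 0.798 each
```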
Qiyang Han, Jon A. Wellner (2019)
In this paper, we develop a general approach to proving global and local uniform limit theorems for the Horvitz-Thompson empirical process arising from complex sampling designs. Global theorems such as Glivenko-Cantelli and Donsker theorems, and local theorems such as local asymptotic modulus and related ratio-type limit theorems, are proved for both the Horvitz-Thompson empirical process and its calibrated version. Limit theorems of other variants and their condition…
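The Horvitz-Thompson empirical distribution function weights each sampled unit by the inverse of its first-order inclusion probability: $\hat F_N(t) = N^{-1} \sum_{i=1}^N (\xi_i/\pi_i)\,\mathbf{1}\{X_i \le t\}$, where $\xi_i$ indicates inclusion in the sample. A minimal sketch under a Poisson sampling design (chosen for simplicity; the results above cover more general complex designs):

```python
import numpy as np

rng = np.random.default_rng(3)

def ht_cdf(x, pi, included, grid):
    """Horvitz-Thompson empirical CDF: sampled units are weighted by the
    inverse of their first-order inclusion probabilities pi_i."""
    w = included / pi                            # xi_i / pi_i
    return np.array([(w * (x <= t)).mean() for t in grid])

# Poisson design: unit i enters the sample independently with prob. pi_i.
N = 10_000
x = rng.standard_normal(N)
pi = rng.uniform(0.2, 0.8, size=N)               # unequal inclusion probabilities
included = rng.random(N) < pi
grid = np.linspace(-2, 2, 5)
print(np.round(ht_cdf(x, pi, included, grid), 3))  # approx. Phi(grid)
```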
The growing availability of network data and of scientific interest in distributed systems has led to the rapid development of statistical models of network structure. Typically, however, these are models for the entire network, while the data consists only of a sampled sub-network. Parameters for the whole network, which is what is of interest, are estimated by applying the model to the sub-network. This assumes that the model is consistent under sampling or, in terms of the theory of stochastic processes, that it defines a projective family. Focusing on the popular class of exponential random graph models (ERGMs), we show that this apparently trivial condition is in fact violated by many popular and scientifically appealing models, and that satisfying it drastically limits ERGMs' expressive power. These results are actually special cases of more general results about exponential families of dependent random variables, which we also prove. Using such results, we offer easily checked conditions for the consistency of maximum likelihood estimation in ERGMs, and discuss some possible constructive responses.