
Cumulative Tsallis Entropy for Maximum Ranked Set Sampling with Unequal Samples

Publication date: 2020
Language: English





In this paper, we consider the information content of the maximum ranked set sampling procedure with unequal samples (MRSSU) in terms of Tsallis entropy, a nonadditive generalization of Shannon entropy. We obtain several results for the Tsallis entropy of MRSSU, including bounds, monotonicity properties, stochastic orders, and sharp bounds under certain assumptions. We also compare the uncertainty and information content of MRSSU with that of its counterpart under simple random sampling (SRS). Finally, we develop some characterization results in terms of the cumulative Tsallis entropy and the residual Tsallis entropy of MRSSU and SRS data.
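For orientation (these standard forms are not spelled out in the abstract, and the paper's exact conventions may differ): for a nonnegative random variable $X$ with density $f$, the Tsallis entropy of order $q \neq 1$ is
$S_q(X) = \frac{1}{q-1}\left(1 - \int_0^\infty f^q(x)\,dx\right),$
which recovers the Shannon entropy as $q \to 1$. The cumulative variant is commonly built by replacing the density with the distribution function $F$ (or the survival function $\bar{F} = 1 - F$ in cumulative residual versions), while the residual Tsallis entropy applies the same functional to the residual lifetime $X - t \mid X > t$; the precise definitions used in the results above should be taken from the paper itself.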



Related research

Guillaume Chauvet (2016)
We prove that any implementation of pivotal sampling is more efficient than multinomial sampling. This property entails the weak consistency of the Horvitz-Thompson estimator and the existence of a conservative variance estimator. A small simulation study supports our findings.
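For background (not restated in the abstract): with first-order inclusion probabilities $\pi_k$, the Horvitz-Thompson estimator of a population total $t_y = \sum_{k \in U} y_k$ based on a sample $S$ is
$\hat{t}_{y,HT} = \sum_{k \in S} y_k / \pi_k,$
and the consistency and variance-estimation claims above are statements about this estimator under pivotal sampling.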
We determine a general link between two different solutions of the MaxEnt variational problem, namely those that correspond to using either the Shannon or the Tsallis entropy in the concomitant variational problem. It is shown that the two variations lead to equivalent solutions that take different appearances but contain the same information. These solutions are linked by our transformation.
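As a sketch of the two variational problems being linked, in their usual textbook form (the constraints treated in the paper may be more general): one maximizes either the Shannon entropy $S[p] = -\int p(x) \ln p(x)\,dx$ or the Tsallis entropy $S_q[p] = \frac{1}{q-1}\left(1 - \int p^q(x)\,dx\right)$ subject to normalization $\int p(x)\,dx = 1$ and a mean constraint $\int p(x) A(x)\,dx = \langle A \rangle$. The Shannon problem yields the exponential (Gibbs) form $p(x) \propto e^{-\lambda A(x)}$, while the Tsallis problem yields a $q$-exponential; the transformation referred to above maps one family of solutions onto the other.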
We consider the problem of choosing the best of $n$ samples, out of a large random pool, when the sampling of each member is associated with a certain cost. The quality (worth) of the best sample clearly increases with $n$, but so do the sampling costs, and one important question is how many to sample for optimal gain (worth minus costs). If, in addition, the assessment of worth for each sample is associated with some measurement error, the perceived best out of $n$ might not be the actual best, complicating the issue. Situations like this are typical in mate selection, job hiring, and food foraging, to name just a few. We tackle the problem using standard order statistics, yielding suggestions for optimal strategies as well as some unexpected insights.
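A minimal numerical illustration of this trade-off, with hypothetical ingredients not taken from the paper: if worths are i.i.d. uniform on $[0,1]$ and each draw costs $c$, the expected gain from sampling $n$ candidates is
$G(n) = E[X_{(n)}] - cn = \frac{n}{n+1} - cn,$
which is maximized at a finite sample size (roughly $n^* \approx 1/\sqrt{c} - 1$), so additional sampling eventually stops paying off. The paper carries out the general order-statistics version of this calculation, including the effect of measurement error on which sample is perceived as best.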
Historically, to bound the mean for small sample sizes, practitioners have had to choose between using methods with unrealistic assumptions about the unknown distribution (e.g., Gaussianity) and methods like Hoeffding's inequality that use weaker assumptions but produce much looser (wider) intervals. In 1969, Anderson proposed a mean confidence interval strictly better than or equal to Hoeffding's whose only assumption is that the distribution's support is contained in an interval $[a,b]$. For the first time since then, we present a new family of bounds that compares favorably to Anderson's. We prove that each bound in the family has guaranteed coverage, i.e., it holds with probability at least $1-\alpha$ for all distributions on an interval $[a,b]$. Furthermore, one of the bounds is tighter than or equal to Anderson's for all samples. In simulations, we show that for many distributions, the gain over Anderson's bound is substantial.
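For scale, the Hoeffding-style bound referred to above takes, in its usual one-sided form for $n$ i.i.d. samples supported on $[a,b]$, the shape
$\mu \le \bar{X}_n + (b-a)\sqrt{\ln(1/\alpha)/(2n)}$ with probability at least $1-\alpha$,
so its width shrinks like $(b-a)/\sqrt{n}$ regardless of the shape of the distribution; Anderson's construction instead works with the empirical distribution function, which is where the extra tightness can come from.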
Estimating the matrix of connection probabilities is one of the key questions when studying sparse networks. In this work, we consider networks generated under the sparse graphon model and the inhomogeneous random graph model with missing observations. Using the Stochastic Block Model as a parametric proxy, we bound the risk of the maximum likelihood estimator of the network connection probabilities, and show that it is minimax optimal. When the risk is measured in Frobenius norm, no estimator running in polynomial time has been shown to attain the minimax optimal rate of convergence for this problem. Thus, maximum likelihood estimation is of particular interest, as computationally efficient approximations to it have been proposed in the literature and are often used in practice.
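In symbols, and under one common normalization for this setting (the paper's exact conventions may differ): with $\Theta_0 \in [0,1]^{n \times n}$ the matrix of connection probabilities and $\hat{\Theta}$ an estimator, the Frobenius risk in question is $n^{-2}\,E\|\hat{\Theta} - \Theta_0\|_F^2$, and the claim above is that the SBM-based maximum likelihood estimator attains the minimax rate for this risk, which no known polynomial-time estimator has been shown to do.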
