
Cytometry inference through adaptive atomic deconvolution

Added by: Manon Costa
Publication date: 2017
Language: English
Authors: Manon Costa





In this paper we consider a statistical estimation problem known as atomic deconvolution. Introduced in reliability theory, this model has a direct application to biological data produced by flow cytometers. In these experiments, biologists measure the fluorescence emission of treated cells and compare it with their natural emission to study the presence of specific molecules on the cell surface. They observe a signal composed of noise (the natural fluorescence) plus, when the molecule is present, an additional signal related to the quantity of molecule on the surface. From a statistical point of view, we aim at inferring the percentage of cells expressing the selected molecule and the probability distribution function associated with its fluorescence emission. We propose an adaptive estimation procedure based on a deconvolution procedure previously introduced by [vEGS08, GvES11]. To estimate both the mixing parameter and the mixing density automatically, we use the Lepskii method, based on the optimal choice of a bandwidth via a bias-variance decomposition. We then derive concentration inequalities for our estimators and obtain convergence rates that are shown to be minimax optimal (up to logarithmic terms) over Sobolev classes. Finally, we apply our algorithm to simulated and real biological data.
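
As a concrete illustration of the model described above, here is a minimal simulation sketch assuming Gaussian natural fluorescence with known variance and an exponential fluorescence signal. The estimator of the mixing proportion follows the characteristic-function idea behind [vEGS08, GvES11]; the frequency grid and the crude error threshold are illustrative stand-ins for the Lepskii bandwidth selection analysed in the paper, not the authors' actual procedure.

import numpy as np

rng = np.random.default_rng(0)

# Simulate the atomic mixture: Y = eps with probability 1 - p, Y = X + eps with probability p.
n, p_true, sigma = 100_000, 0.3, 1.0
expressed = rng.random(n) < p_true                  # cells carrying the target molecule
X = rng.exponential(scale=2.0, size=n) * expressed  # extra fluorescence, zero if not expressed
eps = rng.normal(0.0, sigma, size=n)                # natural (auto-)fluorescence, distribution assumed known
Y = X + eps                                         # observed signal

def mixing_ratio(t):
    """Re( empirical CF of Y / CF of the noise ), roughly (1 - p) + p * Re(phi_X(t))."""
    phi_Y = np.mean(np.exp(1j * t * Y))             # empirical characteristic function of Y
    phi_eps = np.exp(-0.5 * (sigma * t) ** 2)       # characteristic function of the Gaussian noise
    return np.real(phi_Y / phi_eps)

# Scan frequencies: the ratio approaches 1 - p once phi_X(t) has decayed, but its statistical
# error blows up as the noise CF shrinks; keeping the largest frequency with a small error
# bound is a crude stand-in for the bias-variance trade-off the Lepskii method automates.
ts = np.linspace(0.5, 4.0, 15)
errs = np.exp(0.5 * (sigma * ts) ** 2) / np.sqrt(n)  # rough stochastic error of the ratio
t_star = ts[errs < 0.02][-1]
p_hat = 1.0 - mixing_ratio(t_star)
print(f"selected frequency t* = {t_star:.2f}, estimated expressed proportion ~ {p_hat:.3f}")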



Related research

In this work, we focus on variational Bayesian inference for sparse Deep Neural Networks (DNNs) modeled under a class of spike-and-slab priors. Given a pre-specified sparse DNN structure, we characterize the corresponding variational posterior contraction rate, which reveals a trade-off between the variational error and the approximation error, both determined by the network's structural complexity (i.e., depth, width and sparsity). However, the optimal network structure, which strikes the balance in this trade-off and yields the best rate, is generally unknown in practice. Our work therefore develops an adaptive variational inference procedure that automatically selects a reasonably good (data-dependent) network structure achieving the best contraction rate, without knowledge of the optimal structure. In particular, when the true function is Hölder smooth, the adaptive variational inference attains the (near-)optimal rate without knowledge of the smoothness level. This rate still suffers from the curse of dimensionality, which motivates the teacher-student setup, i.e., the true function is itself a sparse DNN, under which the rate depends only logarithmically on the input dimension.
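
The spike-and-slab prior mentioned above can be illustrated in a few lines: each weight is exactly zero with probability 1 - theta (the spike) and Gaussian otherwise (the slab), which is the mechanism that induces sparsity in the DNN. The inclusion probability, slab scale and layer size below are illustrative assumptions, not values from the paper.

import numpy as np

rng = np.random.default_rng(3)

def sample_spike_and_slab(shape, theta=0.1, slab_sd=1.0):
    """Draw a sparse weight tensor from a spike-and-slab prior."""
    included = rng.random(shape) < theta          # Bernoulli(theta) inclusion indicators
    slab = rng.normal(0.0, slab_sd, size=shape)   # Gaussian slab values
    return np.where(included, slab, 0.0)

W = sample_spike_and_slab((256, 128))
print(f"fraction of non-zero weights: {np.mean(W != 0):.3f}")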
Rafał Kulik (2008)
We consider the nonparametric estimation of the density function of weakly and strongly dependent processes with noisy observations. We show that in the ordinary smooth case the optimal bandwidth choice can be influenced by long-range dependence, as opposed to the standard case where no noise is present. In particular, if the dependence is moderate, the bandwidth, the rates of mean-square convergence and the central limit theorem are the same as in the i.i.d. case. If the dependence is strong enough, the bandwidth choice is influenced by the strength of dependence, in contrast to the non-noisy case, and the central limit theorem is also affected. On the other hand, if the density is supersmooth, then long-range dependence has no effect at all on the optimal bandwidth choice.
A. J. van Es and H.-W. Uh (2001)
We derive asymptotic normality of kernel-type deconvolution estimators of the density, of the distribution function at a fixed point, and of the probability of an interval. We consider the so-called supersmooth case, where the characteristic function of the known error distribution decreases exponentially. It turns out that the limit behavior of the pointwise estimators of the density and distribution function is relatively straightforward, while the asymptotics of the estimator of the probability of an interval depend in a complicated way on the sequence of bandwidths.
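
For readers unfamiliar with kernel-type deconvolution estimators, the following sketch implements the standard Fourier-inversion construction with a sinc kernel and Gaussian (supersmooth) measurement error. The bandwidth, sample sizes and test distributions are illustrative assumptions; the asymptotic analysis of the paper is not reproduced here.

import numpy as np

def deconv_kde(y, x_grid, h, sigma, n_t=801):
    """Deconvolution density estimate of X from y = x + noise, with noise ~ N(0, sigma^2)."""
    t = np.linspace(-1.0 / h, 1.0 / h, n_t)            # sinc kernel: hard frequency cut-off at 1/h
    phi_Y = np.exp(1j * np.outer(t, y)).mean(axis=1)   # empirical characteristic function of Y
    phi_eps = np.exp(-0.5 * (sigma * t) ** 2)          # characteristic function of the Gaussian error
    dt = t[1] - t[0]
    # Fourier inversion of the estimated characteristic function of X on the evaluation grid
    return np.real(np.exp(-1j * np.outer(x_grid, t)) @ (phi_Y / phi_eps)) * dt / (2 * np.pi)

rng = np.random.default_rng(1)
x = rng.normal(2.0, 0.7, size=3_000)                   # unobserved variable of interest
y = x + rng.normal(0.0, 0.5, size=x.size)              # contaminated observations
grid = np.linspace(-1.0, 5.0, 200)
f_hat = deconv_kde(y, grid, h=0.3, sigma=0.5)
print(grid[np.argmax(f_hat)])                          # estimated mode, roughly 2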
Jiexiang Li (2014)
The paper discusses the estimation of a continuous density function of a target random field $X_{\mathbf{i}}$, $\mathbf{i} \in \mathbb{Z}^N$, which is contaminated by measurement errors. In particular, the observed random field $Y_{\mathbf{i}}$, $\mathbf{i} \in \mathbb{Z}^N$, satisfies $Y_{\mathbf{i}} = X_{\mathbf{i}} + \epsilon_{\mathbf{i}}$, where the random error $\epsilon_{\mathbf{i}}$ has a known distribution and is independent of the target random field. Compared to existing results, the paper improves in two directions. First, random vectors rather than univariate random variables are investigated. Second, a random field with certain spatial interactions, instead of i.i.d. random variables, is studied. Asymptotic normality of the proposed estimator is established under appropriate conditions.
Jérôme Dedecker (2014)
This paper deals with the estimation of a probability measure on the real line from data observed with additive noise. We are interested in rates of convergence for the Wasserstein metric of order $p \geq 1$. The distribution of the errors is assumed to be known and to belong to a class of supersmooth or ordinary smooth distributions. In the univariate situation we obtain an improved upper bound in the ordinary smooth case and less restrictive conditions for the existing bound in the supersmooth one. In the ordinary smooth case, a lower bound is also provided, and numerical experiments illustrating the rates of convergence are presented.
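
On the real line, the Wasserstein metric of order p has a convenient closed form as the L^p distance between quantile functions, so for two equal-size samples it can be computed from sorted observations. The short helper below illustrates this; the sample sizes and the Laplace (ordinary smooth) error distribution are illustrative assumptions, not the paper's setting.

import numpy as np

def wasserstein_p(sample_a, sample_b, p=1.0):
    """W_p distance between the empirical measures of two equal-size 1-d samples."""
    a, b = np.sort(sample_a), np.sort(sample_b)
    return np.mean(np.abs(a - b) ** p) ** (1.0 / p)

rng = np.random.default_rng(2)
clean = rng.normal(0.0, 1.0, size=10_000)
noisy = clean + rng.laplace(0.0, 0.3, size=clean.size)  # ordinary smooth (Laplace) errors
print(wasserstein_p(clean, noisy, p=2))                 # distortion introduced by the noise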
