
Histogram Transform Ensembles for Density Estimation

Added by: Hanyuan Hang
Publication date: 2019
Language: English
Authors: Hanyuan Hang

We investigate a density estimation algorithm called histogram transform ensembles (HTE), whose effectiveness is supported by both solid theoretical analysis and strong experimental performance. On the theoretical side, by decomposing the error term into approximation error and estimation error, we conduct the following analysis. First, we establish universal consistency under the $L_1(\mu)$-norm. Second, under the assumption that the underlying density function resides in the Hölder space $C^{0,\alpha}$, we prove almost optimal convergence rates for both single and ensemble density estimators under the $L_1(\mu)$-norm and the $L_{\infty}(\mu)$-norm for different tail distributions. In contrast, for the subspace $C^{1,\alpha}$ of smoother functions, almost optimal convergence rates can only be established for the ensembles, and a lower bound for the single estimators illustrates the benefit of ensembles over single density estimators. In the experiments, we first carry out simulations showing that histogram transform ensembles surpass single histogram transforms, which offers strong evidence for the theoretical results in the space $C^{1,\alpha}$. Moreover, to further improve experimental performance, we propose an adaptive version of HTE and study its parameters on several synthetic datasets with diverse dimensions and distributions. Last but not least, real-data comparisons with other state-of-the-art density estimators demonstrate the accuracy of the adaptive HTE algorithm.
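
Since the abstract describes the construction but the page carries no code, the following is a minimal sketch of the idea: each ensemble member bins the data under a random rotation, stretching, and translation, and the ensemble averages the resulting histogram density estimates. The stretch range, bin width, and ensemble size are illustrative assumptions, not the paper's tuned choices.

```python
import numpy as np

def random_rotation(d, rng):
    # QR of a Gaussian matrix, with column signs fixed, yields a Haar-random rotation
    q, r = np.linalg.qr(rng.standard_normal((d, d)))
    return q * np.sign(np.diag(r))

def hte_density(X, Xq, n_transforms=50, bin_width=0.5, seed=0):
    """Toy HTE density estimate at query points Xq, given a sample X (n, d)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    est = np.zeros(len(Xq))
    for _ in range(n_transforms):
        R = random_rotation(d, rng)
        s = rng.uniform(0.5, 2.0, size=d) / bin_width        # random stretching
        b = rng.uniform(0.0, 1.0, size=d)                    # random translation
        H = lambda A: np.floor(A @ R.T * s + b).astype(int)  # bin index per point
        cell_vol = 1.0 / np.prod(s)                          # cell volume in input space
        counts = {}
        for k in map(tuple, H(X)):
            counts[k] = counts.get(k, 0) + 1
        for i, k in enumerate(map(tuple, H(Xq))):
            est[i] += counts.get(k, 0) / (n * cell_vol)
    return est / n_transforms
```

On, say, a 2-D Gaussian sample, hte_density(X, Xq) tends to produce smoother estimates than any single member, which is the effect the simulations above quantify.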



Related Research

In this paper, we propose a density estimation algorithm called Gradient Boosting Histogram Transform (GBHT), which adopts the negative log-likelihood as the loss function to make the boosting procedure available for unsupervised tasks. From a learning theory viewpoint, we first prove fast convergence rates for GBHT under the smoothness assumption that the underlying density function lies in the space $C^{0,\alpha}$. Then, when the target density function lies in the space $C^{1,\alpha}$, we present an upper bound for GBHT which is smaller than the lower bound of its corresponding base learner, in the sense of convergence rates. To the best of our knowledge, this is the first attempt to theoretically explain why boosting can enhance the performance of its base learners for density estimation problems. In experiments, we not only compare performance with the widely used KDE, but also apply GBHT to anomaly detection to showcase a further application.
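
As a rough illustration of the boosting idea, the toy 1-D sketch below stagewise adds piecewise-constant corrections to a log-density under a negative log-likelihood objective, with each stage drawing a random bin width and offset as a 1-D stand-in for a histogram transform. This is an assumption-laden rendering, not the authors' GBHT procedure; the learning rate, stage count, and grid are all illustrative.

```python
import numpy as np

def gbht_sketch(X, n_stages=20, lr=0.5, n_grid=512, seed=0):
    """Toy 1-D NLL-driven boosting of piecewise-constant log-density corrections."""
    rng = np.random.default_rng(seed)
    lo, hi = X.min() - 1.0, X.max() + 1.0
    grid = np.linspace(lo, hi, n_grid + 1)             # fine evaluation grid
    mid = 0.5 * (grid[:-1] + grid[1:])
    dx = (hi - lo) / n_grid
    logp = np.zeros(n_grid)                            # log-density on the fine grid
    for _ in range(n_stages):
        # random 1-D "histogram transform": random coarse bin width and offset
        w = rng.uniform(0.2, 1.0) * (hi - lo) / 8
        off = rng.uniform(0, w)
        bins_X = np.floor((X - lo + off) / w).astype(int)
        bins_g = np.floor((mid - lo + off) / w).astype(int)
        # empirical vs. current-model mass in each coarse bin
        p = np.exp(logp); p /= p.sum() * dx            # normalized current density
        emp = np.bincount(bins_X, minlength=bins_g.max() + 1) / len(X)
        mod = np.bincount(bins_g, weights=p * dx, minlength=bins_g.max() + 1)
        # gradient-style step: push log-density toward log(empirical / model)
        logp += lr * np.log((emp[bins_g] + 1e-12) / (mod[bins_g] + 1e-12))
    p = np.exp(logp); p /= p.sum() * dx
    return mid, p                                      # density values on the grid
```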
We propose a novel algorithm for large-scale regression problems named histogram transform ensembles (HTE), composed of random rotations, stretchings, and translations. First, we investigate the theoretical properties of HTE when the regression function lies in the Hölder space $C^{k,\alpha}$, $k \in \mathbb{N}_0$, $\alpha \in (0,1]$. In the case $k = 0, 1$, we adopt constant regressors and develop the naïve histogram transforms (NHT). Within the space $C^{0,\alpha}$, although almost optimal convergence rates can be derived for both single and ensemble NHT, we fail to show a theoretical benefit of ensembles over single estimators. In contrast, in the subspace $C^{1,\alpha}$, we prove that if $d \geq 2(1+\alpha)/\alpha$, the lower bound of the convergence rates for single NHT turns out to be worse than the upper bound of the convergence rates for ensemble NHT. In the other case, when $k \geq 2$, NHT may no longer be appropriate for predicting smoother regression functions. Instead, we apply kernel histogram transforms (KHT) equipped with smoother regressors such as support vector machines (SVMs), and it turns out that both single and ensemble KHT enjoy almost optimal convergence rates. We then validate these theoretical results with numerical experiments. On the one hand, simulations elucidate that ensemble NHT outperforms single NHT. On the other hand, the effects of bin sizes on the accuracy of both NHT and KHT also accord with the theoretical analysis. Last but not least, in real-data experiments, comparisons between ensemble KHT, equipped with adaptive histogram transforms, and other state-of-the-art large-scale regression estimators verify the effectiveness and accuracy of our algorithm.
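
A minimal sketch of the naïve histogram transform (NHT) ensemble for regression, under illustrative assumptions: each member predicts the cell-wise mean of $y$ on a randomly rotated, stretched, and translated partition, and the ensemble averages the predictions. The stretch range and the global-mean fallback for empty cells are choices made for this sketch, not the paper's.

```python
import numpy as np

def nht_regression(X, y, Xq, n_transforms=50, bin_width=0.5, seed=0):
    """Toy NHT regression ensemble: average of cell-wise-mean predictors."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    pred = np.zeros(len(Xq))
    for _ in range(n_transforms):
        q, r = np.linalg.qr(rng.standard_normal((d, d)))
        R = q * np.sign(np.diag(r))                      # Haar-random rotation
        s = rng.uniform(0.5, 2.0, size=d) / bin_width    # random stretching
        b = rng.uniform(0.0, 1.0, size=d)                # random translation
        key = lambda A: map(tuple, np.floor(A @ R.T * s + b).astype(int))
        sums, cnts = {}, {}
        for k, yi in zip(key(X), y):
            sums[k] = sums.get(k, 0.0) + yi
            cnts[k] = cnts.get(k, 0) + 1
        ybar = y.mean()                                  # fallback for empty cells
        pred += [sums[k] / cnts[k] if k in cnts else ybar for k in key(Xq)]
    return pred / n_transforms
```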
A new approach to $L_2$-consistent estimation of a general density functional using $k$-nearest-neighbor distances is proposed, where the functional under consideration is in the form of the expectation of some function $f$ of the densities at each point. The estimator is designed to be asymptotically unbiased, using the convergence of the normalized volume of a $k$-nearest-neighbor ball to a Gamma distribution in the large-sample limit, and naturally involves the inverse Laplace transform of a scaled version of the function $f$. Some instantiations of the proposed estimator recover existing $k$-nearest-neighbor based estimators of Shannon and Rényi entropies and Kullback-Leibler and Rényi divergences, and discover new consistent estimators for many other functionals such as logarithmic entropies and divergences. The $L_2$-consistency of the proposed estimator is established for a broad class of densities for general functionals, and the convergence rate in mean squared error is established as a function of the sample size for smooth, bounded densities.
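
One classical instance this family recovers is the Kozachenko-Leonenko $k$-nearest-neighbor estimator of Shannon entropy, sketched below; the choice $k = 3$ is illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma, gammaln

def kl_entropy(X, k=3):
    """Kozachenko-Leonenko k-NN estimator of differential entropy (in nats).
    X: (n, d) sample array."""
    n, d = X.shape
    tree = cKDTree(X)
    # distance to the k-th neighbor; query returns the point itself at index 0
    dist, _ = tree.query(X, k=k + 1)
    eps = dist[:, k]
    log_vd = (d / 2) * np.log(np.pi) - gammaln(d / 2 + 1)  # log unit-ball volume
    return digamma(n) - digamma(k) + log_vd + d * np.mean(np.log(eps))
```

As a sanity check, for a standard normal sample in $d$ dimensions the estimate should approach the true value $\frac{d}{2}\log(2\pi e)$ as $n$ grows.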
This paper builds an estimate of an unknown density from data with measurement error as a linear combination of functions from a dictionary. Inspired by the penalization approach, we propose a weighted elastic-net penalized minimal $\ell_2$-distance method for sparse coefficient estimation, where the adaptive weights come from sharp concentration inequalities. The optimal weighted tuning parameters are obtained from first-order conditions that hold with high probability. Under local coherence or minimal eigenvalue assumptions, non-asymptotic oracle inequalities are derived, and these theoretical results are then used to obtain support recovery with high probability. Numerical experiments for discrete and continuous distributions confirm the significant improvement obtained by our procedure over other conventional approaches. Finally, an application to a meteorology data set shows that our method is superior at detecting the shape of multi-modal densities compared with conventional approaches.
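
A sketch of the overall scheme under simplifying assumptions: a Gaussian dictionary (whose Gram matrix has a closed form), fixed rather than data-driven penalty weights, and a plain proximal-gradient (ISTA) solver for the elastic-net penalized minimal $\ell_2$-distance objective. The paper's concentration-based adaptive weights are replaced here by constants.

```python
import numpy as np

def elastic_net_density(X, mus, sigma=0.5, lam1=0.01, lam2=0.01,
                        weights=None, n_iter=500):
    """Sparse coefficients a for the density sum_j a[j] * N(x; mus[j], sigma^2).
    X: 1-D sample; mus: (m,) array of dictionary centers."""
    X, mus = np.asarray(X, float), np.asarray(mus, float)
    m = len(mus)
    w = np.ones(m) if weights is None else weights
    # Gram matrix G[j, k] = integral of phi_j * phi_k, in closed form for Gaussians
    D = mus[:, None] - mus[None, :]
    G = np.exp(-D**2 / (4 * sigma**2)) / np.sqrt(4 * np.pi * sigma**2)
    # b[j] = empirical mean of phi_j(X_i), the data-dependent linear term
    Z = np.exp(-(X[:, None] - mus[None, :])**2 / (2 * sigma**2))
    b = (Z / np.sqrt(2 * np.pi * sigma**2)).mean(axis=0)
    a = np.zeros(m)
    step = 1.0 / (np.linalg.norm(G, 2) + lam2)      # safe step for the smooth part
    for _ in range(n_iter):
        grad = G @ a - b + lam2 * a                 # gradient of smooth objective
        z = a - step * grad
        a = np.sign(z) * np.maximum(np.abs(z) - step * lam1 * w, 0.0)  # soft-threshold
    return a
```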
Consider a setting with $N$ independent individuals, each with an unknown parameter $p_i \in [0, 1]$ drawn from some unknown distribution $P^\star$. After observing the outcomes of $t$ independent Bernoulli trials per individual, i.e., $X_i \sim \text{Binomial}(t, p_i)$, our objective is to accurately estimate $P^\star$. This problem arises in numerous domains, including the social sciences, psychology, healthcare, and biology, where the size of the population under study is usually large while the number of observations per individual is often limited. Our main result shows that, in the regime where $t \ll N$, the maximum likelihood estimator (MLE) is both statistically minimax optimal and efficiently computable. Precisely, for sufficiently large $N$, the MLE achieves the information-theoretically optimal error bound of $\mathcal{O}(\frac{1}{t})$ for $t < c\log N$, with respect to the earth mover's distance between the estimated and true distributions. More generally, on an exponentially large interval of $t$ beyond $c\log N$, the MLE achieves the minimax error bound of $\mathcal{O}(\frac{1}{\sqrt{t\log N}})$. In contrast, regardless of how large $N$ is, the naive plug-in estimator only achieves the sub-optimal error of $\Theta(\frac{1}{\sqrt{t}})$.
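
A common computational stand-in for this MLE discretizes the support of $P^\star$ to a fixed grid and runs EM on the mixture weights; the sketch below takes that route, with the grid size and iteration count as illustrative choices rather than the paper's algorithmic details.

```python
import numpy as np
from scipy.stats import binom

def binomial_npmle(x, t, n_grid=100, n_iter=500):
    """Grid-based EM approximation to the MLE of the mixing distribution P*.
    x: observed success counts, one per individual; t: trials per individual."""
    p_grid = np.linspace(0.0, 1.0, n_grid)
    # likelihood matrix: L[i, j] = P(X = x_i | p = p_grid[j])
    L = binom.pmf(np.asarray(x)[:, None], t, p_grid[None, :])
    w = np.full(n_grid, 1.0 / n_grid)                 # uniform initial weights
    for _ in range(n_iter):
        post = L * w                                  # E-step: responsibilities
        post /= post.sum(axis=1, keepdims=True)
        w = post.mean(axis=0)                         # M-step: average responsibility
    return p_grid, w                                  # estimated support and weights
```

Each EM pass is monotone in the likelihood, so the returned weights approximate the grid-restricted MLE whose error bounds are quoted above.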
