We study minimax density estimation on the product space $\mathbb{R}^{d_1}\times\mathbb{R}^{d_2}$. We consider the $L^p$-risk for probability density functions lying in regularity spaces that allow for different levels of smoothness in each of the two variables. Precisely, we study probability densities in Sobolev spaces with dominating mixed smoothness. We provide the rate of convergence, which is optimal even for the classical Sobolev spaces.
This paper studies the minimax rate of nonparametric conditional density estimation under a weighted absolute-value loss function in a multivariate setting. We first demonstrate that conditional density estimation is impossible if one only requires that $p_{X|Z}$ be smooth in $x$ for all values of $z$. This motivates us to consider a sub-class of absolutely continuous distributions, restricting the conditional density $p_{X|Z}(x|z)$ not only to be Hölder smooth in $x$, but also to be total-variation smooth in $z$. We propose a corresponding kernel-based estimator and prove that it achieves the minimax rate. We give some simple examples of densities satisfying our assumptions, which show that our results are not vacuous. Finally, we propose an estimator which achieves the minimax optimal rate adaptively, i.e., without the need to know the smoothness parameter values in advance. Crucially, both of our estimators (the adaptive and the non-adaptive one) impose no assumptions on the marginal density $p_Z$, and neither is obtained as a ratio of two kernel smoothing estimators, which might seem like the go-to approach to this problem.
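The flavor of such a kernel-based construction can be sketched as follows. This is a generic illustration, not the authors' estimator: the $k$-nearest-neighbor localization in $z$ and all names are our own choices, picked so that no estimate of the marginal $p_Z$ (and no ratio of two density estimates) is involved.

```python
import math

def cond_density_estimate(data, x, z, h, k):
    """Generic sketch of a kernel-based estimate of p(x|z).

    data: list of (X_i, Z_i) pairs (scalars, for simplicity)
    h:    bandwidth of the Gaussian kernel in x
    k:    number of nearest neighbors in z used for localization
    """
    # localize in z via k nearest neighbors, avoiding any estimate of p_Z
    neighbors = sorted(data, key=lambda p: abs(p[1] - z))[:k]
    # kernel smooth in x over the localized sample
    norm = k * h * math.sqrt(2.0 * math.pi)
    return sum(math.exp(-0.5 * ((x - xi) / h) ** 2) for xi, _ in neighbors) / norm
```

In this sketch the localization level $k$ plays the role of a smoothing parameter in $z$, so an adaptive version would have to select both $h$ and $k$ from the data.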
We prove that a variant of the classical Sobolev space of first-order dominating mixed smoothness is equivalent (under a certain condition) to the unanchored ANOVA space on $\mathbb{R}^d$, for $d \geq 1$. Both spaces are Hilbert spaces involving weight functions, which determine the behaviour as different variables tend to $\pm\infty$, and weight parameters, which represent the influence of different subsets of variables. The unanchored ANOVA space on $\mathbb{R}^d$ was initially introduced by Nichols & Kuo in 2014 to analyse the error of quasi-Monte Carlo (QMC) approximations of integrals on unbounded domains, whereas the classical Sobolev space of dominating mixed smoothness was used as the setting in a series of papers by Griebel, Kuo & Sloan on the smoothing effect of integration, in an effort to develop a rigorous theory of why QMC methods work so well for certain non-smooth integrands with kinks or jumps arising from option pricing problems. In this same setting, Griewank, Kuo, Leovey & Sloan in 2018 subsequently extended these ideas by developing a practical smoothing-by-preintegration technique to approximate integrals of such functions with kinks or jumps. We first prove the equivalence in one dimension (itself a non-trivial task), before following a similar, but more complicated, strategy to prove the equivalence for general dimensions. As a consequence of this equivalence, we analyse applying QMC combined with a preintegration step to approximate the fair price of an Asian option, and prove that the error of such an approximation using $N$ points converges at a rate close to $1/N$.
We continue our investigations of pointwise multipliers for Besov spaces of dominating mixed smoothness. This time we study the algebra property of the classes $S^r_{p,q}B(\mathbb{R}^d)$ with respect to pointwise multiplication. In addition, if $p \leq q$, we are able to describe the space of all pointwise multipliers for $S^r_{p,q}B(\mathbb{R}^d)$.
We address the problem of adaptive minimax density estimation on $\mathbb{R}^d$ with $\mathbb{L}_p$-loss on the anisotropic Nikolskii classes. We fully characterize the behavior of the minimax risk for different relationships between the regularity parameters and the norm indices in the definitions of the functional class and of the risk. In particular, we show that there are four different regimes with respect to the behavior of the minimax risk. We develop a single estimator which is (nearly) optimal in order over the complete scale of the anisotropic Nikolskii classes. Our estimation procedure is based on a data-driven selection of an estimator from a fixed family of kernel estimators.
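A data-driven selection from a fixed family of kernel estimators can be sketched, at a single point and in one dimension, by a Lepski-type comparison rule. This is an illustrative simplification with our own constants and threshold, not the paper's procedure:

```python
import math

def kde(sample, x, h):
    # Gaussian kernel density estimate at x with bandwidth h
    n = len(sample)
    s = sum(math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in sample)
    return s / (n * h * math.sqrt(2.0 * math.pi))

def lepski_select(sample, x, bandwidths, kappa=0.5):
    """Pick the largest bandwidth whose estimate stays within a
    stochastic-error margin of every smaller-bandwidth estimate."""
    n = len(sample)
    hs = sorted(bandwidths)
    for h in reversed(hs):
        consistent = all(
            abs(kde(sample, x, h) - kde(sample, x, g))
            <= kappa * math.sqrt(math.log(n) / (n * g))
            for g in hs if g < h
        )
        if consistent:
            return h
    return hs[0]
```

The idea is that a bandwidth is rejected as soon as its estimate differs from a finer-scale estimate by more than the noise level of the finer scale, which mimics balancing bias against stochastic error without knowing the smoothness.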
This paper presents minimax rates for density estimation when the data dimension $d$ is allowed to grow with the number of observations $n$, rather than remaining fixed as in previous analyses. We prove a non-asymptotic lower bound which gives the worst-case rate over standard classes of smooth densities, and we show that kernel density estimators achieve this rate. We also give oracle choices for the bandwidth and derive the fastest rate at which $d$ can grow with $n$ while maintaining estimation consistency.
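A minimal sketch of the estimator and bandwidth scaling at issue: the Hölder exponent $\beta$ and the textbook rate $h \asymp n^{-1/(2\beta+d)}$ below are the standard choices for $\beta$-smooth densities, assumed for illustration rather than taken from the paper.

```python
import math

def kde_rd(sample, x, h):
    # product Gaussian kernel density estimator on R^d
    n, d = len(sample), len(x)
    const = (h * math.sqrt(2.0 * math.pi)) ** d
    total = 0.0
    for xi in sample:
        sq = sum(((a - b) / h) ** 2 for a, b in zip(x, xi))
        total += math.exp(-0.5 * sq)
    return total / (n * const)

def oracle_bandwidth(n, d, beta):
    # bandwidth balancing bias (h^beta) against stochastic error (1/sqrt(n h^d))
    return n ** (-1.0 / (2.0 * beta + d))
```

The curse of dimensionality is visible directly in `oracle_bandwidth`: for fixed $n$, the exponent $-1/(2\beta+d)$ shrinks as $d$ grows, which is what limits how fast $d$ may grow with $n$ before consistency fails.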