
SURE shrinkage of Gaussian paths and signal identification

Added by Nicolas Privault
Publication date: 2009
Field: Financial
Language: English





Using integration by parts on Gaussian space we construct a Stein Unbiased Risk Estimator (SURE) for the drift of Gaussian processes using their local and occupation times. By almost-sure minimization of the SURE risk of shrinkage estimators we derive an estimation and de-noising procedure for an input signal perturbed by a continuous-time Gaussian noise.
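The paper's construction works in continuous time, via local and occupation times of the Gaussian path, but the finite-dimensional analogue of SURE minimization is easy to sketch: for observations y_i ~ N(mu_i, sigma^2), Stein's identity yields an unbiased estimate of the risk of the soft-threshold shrinkage estimator, which can then be minimized over the threshold. A minimal illustrative sketch (function names are ours, not from the paper):

```python
import numpy as np

def sure_soft_threshold(y, t, sigma=1.0):
    """SURE for soft thresholding of y_i ~ N(mu_i, sigma^2) at level t.

    Risk estimate: n*sigma^2 + sum_i min(y_i^2, t^2)
                   - 2*sigma^2 * #{i : |y_i| <= t}.
    """
    y = np.asarray(y, dtype=float)
    n = y.size
    return (n * sigma**2
            + np.sum(np.minimum(y**2, t**2))
            - 2 * sigma**2 * np.sum(np.abs(y) <= t))

def best_threshold(y, sigma=1.0):
    """Minimize SURE over the candidate thresholds 0 and |y_i|
    (the piecewise-linear risk estimate attains its minimum there)."""
    candidates = np.concatenate(([0.0], np.abs(np.asarray(y, dtype=float))))
    risks = [sure_soft_threshold(y, t, sigma) for t in candidates]
    return candidates[int(np.argmin(risks))]
```

Denoising then amounts to soft-thresholding the observations at the SURE-optimal level; the paper replaces this discrete minimization by an almost-sure minimization over shrinkage estimators of the drift of a continuous-time Gaussian path.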

Related research

Statistical inference for sparse covariance matrices is crucial for revealing the dependence structure of large multivariate data sets, but it lacks scalable and theoretically supported Bayesian methods. In this paper, we propose a beta-mixture shrinkage prior for sparse covariance matrices, computationally more efficient than the spike-and-slab prior, and establish its minimax optimality in high-dimensional settings. The proposed prior consists of beta-mixture shrinkage and gamma priors for the off-diagonal and diagonal entries, respectively. To ensure positive definiteness of the resulting covariance matrix, we further restrict the support of the prior to a subspace of positive definite matrices. We obtain the convergence rate of the induced posterior under the Frobenius norm and establish a minimax lower bound for sparse covariance matrices. The class of sparse covariance matrices considered for the minimax lower bound is controlled by the number of nonzero off-diagonal elements and has more intuitive appeal than those that have appeared in the literature. The obtained posterior convergence rate coincides with the minimax lower bound unless the true covariance matrix is extremely sparse. In a simulation study, we show that the proposed method is computationally more efficient than its competitors while achieving comparable performance. Advantages of the shrinkage prior are further demonstrated on two real data sets.
We propose the novel augmented Gaussian random field (AGRF), a universal framework incorporating data on an observable and its derivatives of any order. Rigorous theory is established: we prove that, under certain conditions, the observable and its derivatives of any order are governed by a single Gaussian random field, namely the AGRF. As a corollary, the statement "the derivative of a Gaussian process remains a Gaussian process" is validated, since the derivative is represented by a part of the AGRF. Moreover, we construct a computational method corresponding to the universal AGRF framework. Both noiseless and noisy scenarios are considered, and formulas for the posterior distributions are deduced in closed form. A significant advantage of our computational method is that the AGRF framework provides a natural way to incorporate derivatives of arbitrary order and to handle missing data. We use four numerical examples to demonstrate the effectiveness of the method: a composite function, a damped harmonic oscillator, the Korteweg-de Vries equation, and the Burgers equation.
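The closed-form joint law underlying the "derivative of a Gaussian process is a Gaussian process" statement can be illustrated concretely: for the squared-exponential kernel k(x, y) = exp(-(x-y)^2/2), the cross-covariance Cov(f(x), f'(y)) is the partial derivative of k in y, and Cov(f'(x), f'(y)) is the mixed second derivative, so the stacked vector (f, f') at a set of points has an explicit Gaussian covariance. A small sketch of this standard construction (illustrative only, not the AGRF implementation from the paper):

```python
import numpy as np

def rbf(x, y):
    """Squared-exponential kernel k(x, y) = exp(-(x - y)^2 / 2)."""
    return np.exp(-0.5 * (x - y)**2)

def d_rbf_dy(x, y):
    """Cross-covariance Cov(f(x), f'(y)) = dk/dy = (x - y) k(x, y)."""
    return (x - y) * rbf(x, y)

def d2_rbf(x, y):
    """Cov(f'(x), f'(y)) = d^2 k / dx dy = (1 - (x - y)^2) k(x, y)."""
    return (1.0 - (x - y)**2) * rbf(x, y)

def joint_cov(xs):
    """Joint covariance of (f(x_1..n), f'(x_1..n)) under the RBF kernel."""
    X, Y = np.meshgrid(xs, xs, indexing="ij")
    K   = rbf(X, Y)
    Kd  = d_rbf_dy(X, Y)       # cross-covariance block
    Kdd = d2_rbf(X, Y)
    return np.block([[K, Kd], [Kd.T, Kdd]])
```

Because the joint covariance is positive semidefinite, one can sample the process and its derivative simultaneously, or condition one on observations of the other, exactly as the AGRF framework does for derivatives of arbitrary order.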
Gaussian double Markovian models consist of covariance matrices constrained by a pair of graphs specifying zeros simultaneously in the covariance matrix and its inverse. We study the semi-algebraic geometry of these models, in particular their dimension, smoothness and connectedness. Results on their vanishing ideals and conditional independence ideals are also included, and we put them into the general framework of conditional independence models. We end with several open questions and conjectures.
JaeHoan Kim, Jaeyong Lee (2021)
The Gaussian process regression (GPR) model is a popular nonparametric regression model. In GPR, features of the regression function such as varying degrees of smoothness and periodicity are modeled by combining various covariance kernels, each intended to capture a particular effect. The covariance kernels have unknown parameters, which are estimated by the EM algorithm or Markov chain Monte Carlo. The estimated parameters are key to inference about the features of the regression function, but the identifiability of these parameters has not been investigated. In this paper, we prove identifiability of the covariance kernel parameters in GPR with a mixture of two radial basis kernels and in GPR with mixed radial basis and periodic kernels. We also provide examples of non-identifiable cases in such mixed-kernel GPRs.
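The identifiability issue is easy to see in miniature: if two radial basis components share a lengthscale, only the sum of their amplitudes enters the covariance, so the individual amplitudes cannot be recovered from data, whereas components with distinct lengthscales (or a periodic component) contribute distinguishable structure. A hedged sketch (the kernel parameterizations below are ours, chosen for illustration, not the paper's notation):

```python
import numpy as np

def rbf(x, y, amp, ls):
    """Radial basis kernel: amp * exp(-(x - y)^2 / (2 ls^2))."""
    return amp * np.exp(-0.5 * ((x - y) / ls)**2)

def periodic(x, y, amp, ls, period):
    """Periodic kernel: amp * exp(-2 sin^2(pi (x - y) / period) / ls^2)."""
    return amp * np.exp(-2.0 * np.sin(np.pi * (x - y) / period)**2 / ls**2)

def mixed(x, y):
    """A mixed kernel: two RBF components with distinct lengthscales
    plus a periodic component, as studied in the paper."""
    return rbf(x, y, 1.0, 0.5) + rbf(x, y, 0.3, 2.0) + periodic(x, y, 0.8, 1.0, 3.0)

# Non-identifiability when lengthscales coincide: only the amplitude
# sum a1 + a2 affects the covariance, so (0.4, 0.6) and (1.0,) are
# indistinguishable from data.
x, y = 0.7, 1.9
lhs = rbf(x, y, 0.4, 1.0) + rbf(x, y, 0.6, 1.0)
rhs = rbf(x, y, 1.0, 1.0)
```

The paper's contribution is to show that, away from such degenerate configurations, the parameters of these two mixed-kernel families are in fact identifiable.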
Unmeasured confounding is a threat to causal inference and individualized decision making. Similar to Cui and Tchetgen Tchetgen (2020); Qiu et al. (2020); Han (2020a), we consider the problem of identification of optimal individualized treatment regimes with a valid instrumental variable. Han (2020a) provided an alternative identifying condition of optimal treatment regimes using the conditional Wald estimand of Cui and Tchetgen Tchetgen (2020); Qiu et al. (2020) when treatment assignment is subject to endogeneity and a valid binary instrumental variable is available. In this note, we provide a necessary and sufficient condition for identification of optimal treatment regimes using the conditional Wald estimand. Our novel condition is necessarily implied by those of Cui and Tchetgen Tchetgen (2020); Qiu et al. (2020); Han (2020a) and may continue to hold in a variety of potential settings not covered by prior results.