
Bayesian testing of linear versus nonlinear effects using Gaussian process priors

Added by: Joris Mulder
Publication date: 2021
Language: English
Authors: Joris Mulder





A Bayes factor is proposed for testing whether the effect of a key predictor variable on the dependent variable is linear or nonlinear, possibly while controlling for certain covariates. The test can be used (i) to quantify the relative evidence in the data for a linear versus a nonlinear relationship and (ii) to quantify the evidence in the data in favor of a linear relationship (useful when building linear models based on transformed variables). Under the nonlinear model, a Gaussian process prior is employed using a parameterization similar to Zellner's $g$ prior, resulting in a scale-invariant test. Moreover, a Bayes factor is proposed for one-sided testing of whether the nonlinear effect is consistently positive, consistently negative, or neither. Applications are provided from various fields, including social network research and education.
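The core computation behind such a test can be illustrated with Gaussian process marginal likelihoods: under conjugate normal priors, both the linear model and the GP model admit closed-form marginal likelihoods, and their ratio is the Bayes factor. The sketch below is a minimal illustration, not the paper's exact test; the squared-exponential kernel, the fixed noise variance, and the unit-information ($g$-type) scaling of the linear prior covariance are assumptions made here.

```python
import numpy as np

def log_marginal(y, K, noise_var):
    """log N(y | 0, K + noise_var * I) via a Cholesky factorisation."""
    n = len(y)
    L = np.linalg.cholesky(K + noise_var * np.eye(n))
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return -0.5 * y @ alpha - np.log(np.diag(L)).sum() - 0.5 * n * np.log(2 * np.pi)

rng = np.random.default_rng(0)
x = np.linspace(-2.0, 2.0, 60)
y = np.sin(2 * x) + 0.3 * rng.standard_normal(x.size)  # clearly nonlinear truth

noise_var = 0.09
g = x.size  # unit-information (Zellner-style) scaling -- an assumption here
K_linear = g * noise_var * np.outer(x, x) / (x @ x)   # implied prior covariance of the linear effect
K_gp = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2)  # squared-exponential GP prior

# Log Bayes factor of nonlinear vs linear; a positive value favours the GP model.
log_bf = log_marginal(y, K_gp, noise_var) - log_marginal(y, K_linear, noise_var)
```

For data generated from a sine curve, the GP marginal likelihood dominates and the log Bayes factor comes out large and positive; for truly linear data the sign flips.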




Related research

Traditionally, Hawkes processes are used to model time-continuous point processes with history dependence. Here we propose an extended model where the self-effects are of both excitatory and inhibitory type and follow a Gaussian process. Whereas previous work either relies on a less flexible parameterization of the model or requires a large amount of data, our formulation allows for both a flexible model and learning when data are scarce. We continue the line of work on Bayesian inference for Hawkes processes, and our approach dispenses with the need to estimate a branching structure for the posterior, as we perform inference on an aggregated sum of Gaussian processes. Efficient approximate Bayesian inference is achieved via data augmentation, and we describe a mean-field variational inference approach to learn the model parameters. To demonstrate the flexibility of the model, we apply our methodology to data from three different domains and compare it to previously reported results.
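The self-excitation that motivates this model family is easy to demonstrate by simulation. The abstract's model places a Gaussian process on the self-effects; the sketch below instead uses the classic exponential kernel as a simple stand-in, simulated with Ogata's thinning algorithm. The parameter values are illustrative assumptions.

```python
import numpy as np

def simulate_hawkes(mu, alpha, beta, T, rng):
    """Ogata thinning for a self-exciting Hawkes process with intensity
    lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i))."""
    t, events = 0.0, []
    while True:
        # The current intensity bounds the (decaying) intensity until the next event.
        lam_bar = mu + alpha * np.exp(-beta * (t - np.asarray(events))).sum()
        t += rng.exponential(1.0 / lam_bar)
        if t >= T:
            break
        lam_t = mu + alpha * np.exp(-beta * (t - np.asarray(events))).sum()
        if rng.uniform() <= lam_t / lam_bar:  # accept with prob lambda(t)/lam_bar
            events.append(t)
    return events

rng = np.random.default_rng(1)
# Branching ratio alpha/beta = 0.8 < 1 keeps the process stationary.
events = simulate_hawkes(mu=0.5, alpha=0.8, beta=1.0, T=200.0, rng=rng)
```

With branching ratio 0.8, the expected event count is mu*T/(1 - alpha/beta), roughly five times what the baseline rate mu alone would produce, which is the history dependence the extended model generalises with excitatory and inhibitory GP self-effects.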
Zichen Ma, Ernest Fokoue (2015)
In this paper, we introduce a new methodology for Bayesian variable selection in linear regression that is independent of the traditional indicator method. A diagonal matrix $\mathbf{G}$ is introduced into the prior of the coefficient vector $\boldsymbol{\beta}$, with each of the $g_j$'s on the diagonal, bounded between $0$ and $1$, serving as a stabilizer of the corresponding $\beta_j$. Mathematically, a promising variable has a $g_j$ value close to $0$, whereas the $g_j$ of an unpromising variable is close to $1$. This property is proven in this paper under orthogonality, together with other asymptotic properties. Computationally, the sample path of each $g_j$ is obtained via a Metropolis-within-Gibbs sampling method. We also present two simulations to verify the capability of this methodology for variable selection.
Since the seminal work of Venkatakrishnan et al. (2013), Plug & Play (PnP) methods have become ubiquitous in Bayesian imaging. These methods derive Minimum Mean Square Error (MMSE) or Maximum A Posteriori (MAP) estimators for inverse problems in imaging by combining an explicit likelihood function with a prior that is implicitly defined by an image denoising algorithm. The PnP algorithms proposed in the literature mainly differ in the iterative schemes they use for optimisation or for sampling. In the case of optimisation schemes, some recent works guarantee the convergence to a fixed point, albeit not necessarily a MAP estimate. In the case of sampling schemes, to the best of our knowledge, there is no known proof of convergence. There also remain important open questions regarding whether the underlying Bayesian models and estimators are well defined, well-posed, and have the basic regularity properties required to support these numerical schemes. To address these limitations, this paper develops theory, methods, and provably convergent algorithms for performing Bayesian inference with PnP priors. We introduce two algorithms: 1) PnP-ULA (Unadjusted Langevin Algorithm) for Monte Carlo sampling and MMSE inference; and 2) PnP-SGD (Stochastic Gradient Descent) for MAP inference. Using recent results on the quantitative convergence of Markov chains, we establish detailed convergence guarantees for these two algorithms under realistic assumptions on the denoising operators used, with special attention to denoisers based on deep neural networks. We also show that these algorithms approximately target a decision-theoretically optimal Bayesian model that is well-posed. The proposed algorithms are demonstrated on several canonical problems such as image deblurring, inpainting, and denoising, where they are used for point estimation as well as for uncertainty visualisation and quantification.
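The PnP-ULA update named in the abstract combines a likelihood gradient step, a denoiser-based prior step, and injected Gaussian noise. The toy sketch below applies it to 1D denoising with the identity forward operator; the simple local-averaging "denoiser", the step size delta, and the regularisation parameter eps are illustrative assumptions standing in for the deep denoisers and tuning in the paper.

```python
import numpy as np

def pnp_ula(y, sigma2, denoise, eps, delta, n_iter, rng):
    """Unadjusted Langevin with a Plug & Play prior term:
    x <- x + delta * grad log p(y|x) + (delta/eps) * (D(x) - x) + sqrt(2*delta) * z.
    The forward operator here is the identity (pure denoising)."""
    x = y.copy()
    samples = []
    for _ in range(n_iter):
        grad_loglik = (y - x) / sigma2               # gradient of the Gaussian log-likelihood
        x = (x + delta * grad_loglik
             + (delta / eps) * (denoise(x) - x)      # prior term defined by the denoiser
             + np.sqrt(2 * delta) * rng.standard_normal(x.shape))
        samples.append(x.copy())
    return np.array(samples)

def smooth(x):
    # Toy "denoiser": local averaging with a [1/4, 1/2, 1/4] kernel.
    padded = np.pad(x, 1, mode="edge")
    return 0.25 * padded[:-2] + 0.5 * padded[1:-1] + 0.25 * padded[2:]

rng = np.random.default_rng(0)
truth = np.sin(np.linspace(0.0, 3.0, 50))
y = truth + 0.3 * rng.standard_normal(50)

samples = pnp_ula(y, sigma2=0.09, denoise=smooth, eps=0.09,
                  delta=0.01, n_iter=3000, rng=rng)
mmse = samples[500:].mean(axis=0)  # posterior-mean (MMSE) estimate after burn-in
```

Averaging the chain after burn-in gives the MMSE estimate; the same chain also yields pixelwise posterior variances, which is how the paper obtains uncertainty visualisation alongside point estimates.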
Two schemes are proposed to compute the nonlinear electro-optic (EO) tensor for the first time. In the first scheme, we compute the linear EO tensor of the structure under a finite electric field, while in the second we compute the refractive index of the structure under a finite electric field. These schemes are applied to the Pb(Zr,Ti)O$_{3}$ and BaTiO$_{3}$ ferroelectric oxides. They reproduce a recently observed feature, namely that Pb(Zr$_{0.52}$Ti$_{0.48}$)O$_{3}$ adopts a mostly linear EO response while BaTiO$_{3}$ exhibits a strongly nonlinear conversion between electric and optical properties. Furthermore, the atomistic insight provided by the proposed ab initio schemes reveals the origin of these qualitatively different responses in terms of the field-induced behavior of the frequencies of some phonon modes and of some force constants.
This paper develops Bayesian sample size formulae for experiments comparing two groups. We assume the experimental data will be analysed in the Bayesian framework, where pre-experimental information from multiple sources can be represented by robust priors. In particular, such robust priors account for preliminary belief about the pairwise commensurability between parameters that underpin the historical and new experiments, to permit flexible borrowing of information. Averaged over the probability space of the new experimental data, appropriate sample sizes are found according to criteria that control certain aspects of the posterior distribution, such as the coverage probability or length of a defined density region. Our Bayesian methodology can be applied to circumstances where the common variance in the new experiment is known or unknown. Exact solutions are available based on most of the criteria considered for Bayesian sample size determination, while a search procedure is described in cases for which there are no closed-form expressions. We illustrate the application of our Bayesian sample size formulae in the setting of designing a clinical trial. Hypothetical data examples, motivated by a rare-disease trial with elicitation of expert prior opinion, and a comprehensive performance evaluation of the proposed methodology are presented.
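A closed-form instance of the length criterion mentioned above is the textbook known-variance case: choose the smallest per-group n such that the posterior credible interval for the two-group mean difference is no longer than a target length. The sketch below assumes a conjugate normal prior on the difference worth n0 observations; it illustrates the flavour of such formulae, not the paper's robust-prior commensurability method.

```python
import math
import statistics

def sample_size_length(sigma, l, n0, alpha=0.05):
    """Smallest per-group n so that the posterior 100(1-alpha)% credible interval
    for the two-group mean difference has length at most l.
    Known common variance sigma^2; normal prior worth n0 observations.
    Posterior sd is sigma / sqrt(n0 + n/2), so the interval length is
    2 * z * sigma / sqrt(n0 + n/2) <= l, solved for n below."""
    z = statistics.NormalDist().inv_cdf(1 - alpha / 2)
    n = 2 * ((2 * z * sigma / l) ** 2 - n0)
    return max(1, math.ceil(n))

n_plain = sample_size_length(sigma=1.0, l=0.5, n0=0)      # vague prior
n_informed = sample_size_length(sigma=1.0, l=0.5, n0=10)  # borrowed information
```

Borrowing historical information (n0 > 0) directly reduces the required new sample size, which is the practical payoff of the flexible borrowing the paper formalises.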
