
Foreground modelling via Gaussian process regression: an application to HERA data

Published by Abhik Ghosh
Publication date: 2020
Research field: Physics
Paper language: English





The key challenge in the observation of the redshifted 21-cm signal from cosmic reionization is its separation from the much brighter foreground emission. Such separation relies on the different spectral properties of the two components, although, in practice, the intrinsic foreground spectrum is often corrupted by the instrumental response, inducing systematic effects that can further jeopardize the measurement of the 21-cm signal. In this paper, we use Gaussian Process Regression to model both foreground emission and instrumental systematics in $\sim 2$ hours of data from the Hydrogen Epoch of Reionization Array. We find that a simple covariance model with three components matches the data well, giving a residual power spectrum with white-noise properties. These components consist of an intrinsic and an instrumentally corrupted component with coherence scales of 20 MHz and 2.4 MHz, respectively (dominating the line-of-sight power spectrum over scales $k_{\parallel} \le 0.2\,h$ cMpc$^{-1}$), and a baseline-dependent periodic signal with a period of $\sim 1$ MHz (dominating over $k_{\parallel} \sim 0.4$-$0.8\,h$ cMpc$^{-1}$), which should be distinguishable from the 21-cm EoR signal, whose typical coherence scale is $\sim 0.8$ MHz.
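As a rough illustration of the kernel structure described above, the sketch below fits a sum of three covariance components plus white noise to a single mock frequency spectrum using scikit-learn. The channel grid, mock data, amplitudes, and length-scales are assumptions chosen for illustration; the authors' pipeline works on complex HERA visibilities per baseline and is not reproduced here.

```python
# Minimal sketch (not the authors' pipeline): model a single mock spectrum along
# frequency as a sum of three GP components, as described in the abstract.
# Kernel classes and hyper-parameters below are illustrative assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ExpSineSquared, WhiteKernel

rng = np.random.default_rng(0)
freqs = np.linspace(100.0, 200.0, 256)[:, None]                 # MHz, illustrative band
mock = 50.0 * np.exp(-((freqs.ravel() - 150.0) / 40.0) ** 2)    # smooth mock "foreground"
data = mock + rng.normal(scale=1.0, size=256)                   # plus white noise

# Three covariance components: intrinsic (~20 MHz), instrumentally corrupted
# (~2.4 MHz), and a ~1 MHz periodic term, plus a white-noise kernel.
kernel = (RBF(length_scale=20.0)
          + RBF(length_scale=2.4)
          + 0.1 * ExpSineSquared(length_scale=5.0, periodicity=1.0)
          + WhiteKernel(noise_level=1.0))

gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(freqs, data)                          # hyper-parameters set by marginal likelihood
foreground_model = gp.predict(freqs)         # foreground + systematics estimate
residual = data - foreground_model           # should be consistent with white noise
print(gp.kernel_)                            # inspect the fitted coherence scales
```

The fitted kernel hyper-parameters play the role of the 20 MHz, 2.4 MHz, and $\sim 1$ MHz coherence scales quoted above, and `residual` is what would be checked against the white-noise expectation before forming the $k_{\parallel}$ power spectrum.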




Read also

Aaron Ho (2021)
This paper outlines an approach towards improved rigour in tokamak turbulence transport model validation within integrated modelling. Gaussian process regression (GPR) techniques were applied for profile fitting during the preparation of integrated modelling simulations, allowing for rigorous sensitivity tests of prescribed initial and boundary conditions, as both fit and derivative uncertainties are provided. This was demonstrated by a JETTO integrated modelling simulation of the JET ITER-like-wall H-mode baseline discharge #92436 with the QuaLiKiz quasilinear turbulent transport model, which is the subject of extrapolation towards a deuterium-tritium plasma. The simulation simultaneously evaluates the time evolution of heat, particle, and momentum fluxes over $\sim 10$ confinement times, with a simulation boundary condition at $\rho_{tor} = 0.85$. Routine inclusion of momentum transport prediction in multi-channel flux-driven transport modelling is not standard and is facilitated here by recent developments within the QuaLiKiz model. Excellent agreement was achieved between the fitted and simulated profiles for $n_e$, $T_e$, $T_i$, and $\Omega_{tor}$ within $2\sigma$, but the simulation underpredicts the mid-radius $T_i$ and overpredicts the core $n_e$ and $T_e$ profiles for this discharge. Despite this, it was shown that this approach is capable of deriving reasonable inputs, including derivative quantities, to tokamak models from experimental data. Furthermore, multiple figures-of-merit were defined to quantitatively assess the agreement of integrated modelling predictions to experimental data within the GPR profile fitting framework.
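To make the role of fit and derivative uncertainties concrete, here is a minimal sketch of GPR profile fitting on a mock electron-temperature profile. The mock data, kernel, and the sample-then-differentiate shortcut are assumptions for illustration and do not reproduce the authors' profile-fitting tool.

```python
# Hedged sketch: fit a GP to a mock T_e(rho) profile and obtain both fit and
# derivative uncertainties by differentiating posterior samples.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
rho = np.linspace(0.0, 0.85, 25)[:, None]                         # normalised toroidal radius
te = 5.0 * (1 - rho.ravel() ** 2) + 0.1 * rng.normal(size=25)     # keV, mock T_e measurements

gp = GaussianProcessRegressor(RBF(0.2) + WhiteKernel(0.01), normalize_y=True).fit(rho, te)

grid = np.linspace(0.0, 0.85, 200)[:, None]
samples = gp.sample_y(grid, n_samples=200, random_state=0)        # posterior profile samples
dsamples = np.gradient(samples, grid.ravel(), axis=0)             # finite-difference each sample

fit_mean, fit_std = samples.mean(axis=1), samples.std(axis=1)
grad_mean, grad_std = dsamples.mean(axis=1), dsamples.std(axis=1)
# fit_std and grad_std are the fit and derivative uncertainties that would feed
# sensitivity tests of initial and boundary conditions in integrated modelling.
```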
We introduce Latent Gaussian Process Regression, a latent variable extension allowing modelling of non-stationary multi-modal processes using GPs. The approach is built on extending the input space of a regression problem with a latent variable that is used to modulate the covariance function over the training data. We show how our approach can be used to model multi-modal and non-stationary processes. We exemplify the approach on a set of synthetic data and provide results on real data from motion capture and geostatistics.
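A toy sketch of the input-augmentation idea follows: a latent coordinate is appended to the input so the covariance is modulated across modes. Here the latent labels come from a crude clustering step rather than being inferred jointly with the GP, so this only illustrates the extended input space, not the paper's inference scheme; all names and values are assumptions.

```python
# Toy illustration of the augmented-input idea: append a latent coordinate z to x
# so that k([x, z], [x', z']) separates the two modes of a bimodal data set.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0, 1, 80))
mode = rng.integers(0, 2, 80)                            # hidden mode per observation
y = np.sin(6 * x) + 2.0 * mode + 0.05 * rng.normal(size=80)

# Crude stand-in for latent inference: cluster the outputs to get latent labels.
z = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(y.reshape(-1, 1)).astype(float)
X_aug = np.column_stack([x, z])                          # extended input space [x, z]

# A separate length-scale for the latent dimension lets the covariance switch
# between modes, which a stationary GP on x alone cannot do.
gp = GaussianProcessRegressor(RBF(length_scale=[0.2, 0.5])).fit(X_aug, y)
print(gp.kernel_, gp.log_marginal_likelihood_value_)
```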
We apply Gaussian process (GP) regression, which provides a powerful non-parametric probabilistic method of relating inputs to outputs, to survival data consisting of time-to-event and covariate measurements. In this context, the covariates are regarded as the 'inputs' and the event times are the 'outputs'. This allows for highly flexible inference of non-linear relationships between covariates and event times. Many existing methods, such as the ubiquitous Cox proportional hazards model, focus primarily on the hazard rate, which is typically assumed to take some parametric or semi-parametric form. Our proposed model belongs to the class of accelerated failure time models, where we focus on directly characterising the relationship between covariates and event times without any explicit assumptions on what form the hazard rates take. It is straightforward to include various types and combinations of censored and truncated observations. We apply our approach to both simulated and experimental data. We then apply multiple output GP regression, which can handle multiple potentially correlated outputs for each input, to competing risks survival data where multiple event types can occur. By tuning one of the model parameters we can control the extent to which the multiple outputs (the time-to-event for each risk) are dependent, thus allowing the specification of correlated risks. Simulation studies suggest that in some cases assuming dependence can lead to more accurate predictions.
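A minimal sketch of the accelerated-failure-time viewpoint: log event times are regressed on covariates with a GP. Handling censoring and truncation requires a bespoke likelihood, so this toy uses uncensored events only; the data and kernel choices are illustrative assumptions.

```python
# Hedged sketch: GP regression from covariates ('inputs') to log event times
# ('outputs'), the simplest accelerated-failure-time style setup.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(3)
covariates = rng.normal(size=(100, 3))                                  # the GP inputs
log_times = 1.5 - covariates[:, 0] ** 2 + 0.3 * rng.normal(size=100)    # the GP outputs

gp = GaussianProcessRegressor(RBF(1.0) + WhiteKernel(0.1), normalize_y=True)
gp.fit(covariates, log_times)

new_subject = np.array([[0.5, -1.0, 0.2]])
mean, std = gp.predict(new_subject, return_std=True)
print(np.exp(mean), std)   # predicted event time and its log-scale uncertainty
```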
In this paper we introduce a novel model for Gaussian process (GP) regression in the fully Bayesian setting. Motivated by the ideas of sparsification, localization and Bayesian additive modeling, our model is built around a recursive partitioning (RP) scheme. Within each RP partition, a sparse GP (SGP) regression model is fitted. A Bayesian additive framework then combines multiple layers of partitioned SGPs, capturing both global trends and local refinements with efficient computations. The model addresses both the problem of efficiency in fitting a full Gaussian process regression model and the problem of prediction performance associated with a single SGP. Our approach mitigates the issue of pseudo-input selection and avoids the need for complex inter-block correlations in existing methods. The crucial trade-off becomes choosing between many simpler local model components or fewer complex global model components, which the practitioner can sensibly tune. Implementation is via a Metropolis-Hastings Markov chain Monte Carlo algorithm with Bayesian back-fitting. We compare our model against popular alternatives on simulated and real datasets, and find the performance is competitive, while the fully Bayesian procedure enables the quantification of model uncertainties.
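The layered construction can be caricatured as follows: a coarse global GP (standing in for a sparse GP on inducing points) captures the trend, and short-length-scale GPs fitted to residuals within partitions add local refinements. This simplification omits the recursive partitioning prior, the MCMC sampler, and Bayesian back-fitting of the actual model; all choices below are assumptions.

```python
# Simplified, non-Bayesian caricature of the layered partitioned-GP idea.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(4)
x = np.sort(rng.uniform(0, 4, 400))[:, None]
y = np.sin(x.ravel()) + 0.3 * np.sin(7 * x.ravel()) + 0.05 * rng.normal(size=400)

# Layer 1: global trend from a long-length-scale GP on a subsample (a cheap
# stand-in for a sparse GP with inducing points).
sub = rng.choice(400, 60, replace=False)
global_gp = GaussianProcessRegressor(RBF(2.0) + WhiteKernel(0.05)).fit(x[sub], y[sub])
residual = y - global_gp.predict(x)

# Layer 2: local refinements, one short-length-scale GP per partition of the input.
edges = np.linspace(0, 4, 5)
local_gps = []
for lo, hi in zip(edges[:-1], edges[1:]):
    mask = (x.ravel() >= lo) & (x.ravel() < hi)
    gp_local = GaussianProcessRegressor(RBF(0.3) + WhiteKernel(0.05)).fit(x[mask], residual[mask])
    local_gps.append(((lo, hi), gp_local))

def predict(x_new):
    """Additive prediction: global trend plus the local refinement of each partition."""
    out = global_gp.predict(x_new)
    for (lo, hi), gp_local in local_gps:
        mask = (x_new.ravel() >= lo) & (x_new.ravel() < hi)
        if mask.any():
            out[mask] += gp_local.predict(x_new[mask])
    return out

print(np.mean((predict(x) - y) ** 2))   # in-sample fit of the two-layer model
```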
Gecheng Chen, Rui Tuo (2020)
A primary goal of computer experiments is to reconstruct the function given by the computer code via scattered evaluations. Traditional isotropic Gaussian process models suffer from the curse of dimensionality when the input dimension is high. Gaussian process models with additive correlation functions are scalable to dimensionality, but they are very restrictive as they only work for additive functions. In this work, we consider a projection pursuit model in which the nonparametric part is driven by an additive Gaussian process regression. The dimension of the additive function is chosen to be higher than the original input dimension. We show that this dimension expansion can help approximate more complex functions. A gradient descent algorithm is proposed to maximize the likelihood function. Simulation studies show that the proposed method outperforms the traditional Gaussian process models.
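A hedged sketch of the projection-pursuit construction: the input is projected onto more directions than its original dimension and an additive model of one-dimensional GPs over the projected coordinates is fitted by backfitting. Fixed random projections stand in for the paper's gradient-descent maximisation of the likelihood; the data and settings are illustrative assumptions.

```python
# Hedged sketch: project d-dimensional inputs to m > d directions and fit an
# additive model of 1-D GPs over the projected coordinates by backfitting.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(5)
d, m, n = 3, 6, 300                         # input dim, expanded dim, sample size
X = rng.normal(size=(n, d))
y = np.sin(X[:, 0] + X[:, 1]) + (X[:, 2] - X[:, 0]) ** 2 / 4 + 0.05 * rng.normal(size=n)

W = rng.normal(size=(d, m))                 # projection directions (fixed here, learned in the paper)
Z = X @ W                                   # n x m projected coordinates

components = [np.zeros(n) for _ in range(m)]
gps = [GaussianProcessRegressor(RBF(1.0) + WhiteKernel(0.05)) for _ in range(m)]
for _ in range(3):                          # a few backfitting sweeps
    for j in range(m):
        partial_residual = y - sum(components[k] for k in range(m) if k != j)
        gps[j].fit(Z[:, [j]], partial_residual)
        components[j] = gps[j].predict(Z[:, [j]])

y_hat = sum(components)
print(np.mean((y_hat - y) ** 2))            # in-sample error of the additive GP fit
```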