
A maximum likelihood approach to the destriping technique

Published by: Hannu Kurki-Suonio
Publication date: 2003
Research field: Physics
Paper language: English





The destriping technique is a viable tool for removing different kinds of systematic effects in CMB-related experiments. It has already been proven to work for gain instabilities that produce the so-called 1/f noise, and for periodic fluctuations due to, e.g., thermal instability. Both effects, when coupled with the observing strategy, result in stripes on the observed sky region. Here we present a maximum-likelihood approach to this type of technique and also provide a useful generalization. As a working case we consider a data set similar to what the Planck satellite will produce with its Low Frequency Instrument (LFI). We compare our method to those presented in the literature and find some improvement in performance. Our approach is also more general and allows different base functions to be used when fitting the systematic effect under consideration. We study the effect of increasing the number of these base functions on the quality of signal cleaning and reconstruction. This study is related to Planck LFI activities.
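The abstract does not include an implementation; the sketch below is a minimal illustration of maximum-likelihood destriping under simplifying assumptions (uniform white noise, one constant offset per chunk of samples as the only base function). The names (`destripe`, `tod`, `pix`, `chunk_len`) and the chunking scheme are hypothetical, not the authors' code. With white noise of equal variance, the ML baseline amplitudes solve F^T Z F a = F^T Z d, where Z projects the sky estimate out of the time stream.

```python
# Minimal maximum-likelihood destriping sketch (illustrative only).
# Assumptions: white noise of equal variance, one constant offset per
# chunk of `chunk_len` samples, `pix` gives the sky pixel of each sample.
import numpy as np

def destripe(tod, pix, npix, chunk_len):
    nchunk = len(tod) // chunk_len
    n = nchunk * chunk_len
    tod, pix = tod[:n], pix[:n]

    hits = np.bincount(pix, minlength=npix)

    def bin_map(y):
        # Naive map-making: average the samples falling in each pixel
        m = np.bincount(pix, weights=y, minlength=npix)
        return np.where(hits > 0, m / np.maximum(hits, 1), 0.0)

    def project_out_sky(y):
        # Z y = y - P (P^T P)^{-1} P^T y : subtract the sky estimate read out along the scan
        return y - bin_map(y)[pix]

    # Base-function matrix F: one constant offset per chunk
    F = np.zeros((n, nchunk))
    for k in range(nchunk):
        F[k * chunk_len:(k + 1) * chunk_len, k] = 1.0

    # Normal equations F^T Z F a = F^T Z d for the ML offset amplitudes
    ZF = np.column_stack([project_out_sky(F[:, k]) for k in range(nchunk)])
    A = F.T @ ZF
    b = F.T @ project_out_sky(tod)
    a, *_ = np.linalg.lstsq(A, b, rcond=None)  # lstsq absorbs the global-offset degeneracy

    cleaned = tod - F @ a      # destriped time-ordered data
    return bin_map(cleaned)    # destriped sky map
```

Replacing the constant offsets in F with, e.g., low-order polynomials per chunk gives the kind of generalization to other base functions that the abstract mentions.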


Read also

We present a novel technique for the determination of the topological susceptibility (related to the variance of the distribution of global topological charge) from lattice gauge theory simulations, based on maximum-likelihood analysis of the Markov-chain Monte Carlo time series. This technique is expected to be particularly useful in situations where relatively few tunneling events are observed. Restriction to a lattice subvolume on which topological charge is not quantized is explored, and may lead to further improvement when the global topology is poorly sampled. We test our proposed method on a set of lattice data, and compare it to traditional methods.
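For orientation only, here is a hedged toy version of one ingredient: a maximum-likelihood fit of a discretized Gaussian charge distribution with variance chi*V to a set of measured integer topological charges. It treats the samples as independent, whereas the paper analyzes the full Markov-chain time series; the function name, the truncation `qmax`, and the use of scipy are assumptions made for illustration.

```python
# Toy ML estimate of the topological susceptibility chi from integer charges Q,
# assuming p(Q) proportional to exp(-Q^2 / (2 chi V)) on a truncated integer support
# and treating the measurements as independent (an assumption not made in the paper).
import numpy as np
from scipy.optimize import minimize_scalar

def fit_susceptibility(charges, volume, qmax=50):
    charges = np.asarray(charges, dtype=float)
    support = np.arange(-qmax, qmax + 1)

    def neg_log_like(chi):
        log_norm = np.log(np.exp(-support**2 / (2.0 * chi * volume)).sum())
        logp = -charges**2 / (2.0 * chi * volume) - log_norm
        return -logp.sum()

    res = minimize_scalar(neg_log_like, bounds=(1e-6, 10.0), method="bounded")
    return res.x  # maximum-likelihood estimate of chi
```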
Alisha Chromey, 2021
Gamma-ray observations ranging from hundreds of MeV to tens of TeV are a valuable tool for studying particle acceleration and diffusion within our galaxy. Supernova remnants, pulsar wind nebulae, and star-forming regions are the main particle accelerators in our local Galaxy. Constructing a coherent physical picture of these astrophysical objects requires the ability to distinguish extended regions of gamma-ray emission, the ability to analyze small-scale spatial variation within these regions, and methods to synthesize data from multiple observatories across multiple wavebands. Imaging Atmospheric Cherenkov Telescopes (IACTs) provide fine angular resolution (<0.1 degree) for gamma-rays above 100 GeV. Typical data reduction methods rely on source-free regions in the field of view to estimate the cosmic-ray background. This presents difficulties for sources with unknown extent or those which encompass a large portion of the IACT field of view (3.5 degrees for VERITAS). Maximum-likelihood-based techniques are well suited to the analysis of fields with multiple overlapping sources and diffuse background components, and to combining data from multiple observatories. Such methods also offer an alternative approach to estimating the IACT cosmic-ray background and consequently an enhanced sensitivity to largely extended sources. In this proceeding, we report on the current status and performance of a maximum likelihood technique for the IACT VERITAS. In particular, we focus on how our framework incorporates gamma-hadron separation parameters as an additional dimension in order to improve sensitivity to extended sources.
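As a generic, hedged illustration of the binned Poisson likelihood on which such IACT analyses build (this is not the VERITAS implementation, and the gamma-hadron separation dimension is omitted), the following fits a source normalization and a background normalization to binned counts; all names and templates are placeholders.

```python
# Toy binned Poisson likelihood fit: expected counts = s * source_template + b * background_template.
# Assumes background_template is strictly positive so the expected counts stay positive.
import numpy as np
from scipy.optimize import minimize

def fit_normalizations(counts, source_template, background_template):
    counts = np.asarray(counts, dtype=float)

    def neg_log_like(theta):
        s, b = theta
        if s < 0.0 or b <= 0.0:
            return np.inf
        mu = s * source_template + b * background_template
        # Poisson negative log-likelihood up to a data-dependent constant
        return np.sum(mu - counts * np.log(mu))

    res = minimize(neg_log_like, x0=[1.0, 1.0], method="Nelder-Mead")
    return res.x  # fitted source and background normalizations
```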
We present a novel method for identifying a biochemical reaction network based on multiple sets of estimated reaction rates in the corresponding reaction rate equations arriving from various (possibly different) experiments. The current method, unlike some of the graphical approaches proposed in the literature, uses the values of the experimental measurements only relative to the geometry of the biochemical reactions, under the assumption that the underlying reaction network is the same for all the experiments. The proposed approach utilizes algebraic statistical methods in order to parametrize the set of possible reactions so as to identify the most likely network structure, and is easily scalable to very complicated biochemical systems involving a large number of species and reactions. The method is illustrated with a numerical example of a hypothetical network arising from a mass transfer-type model.
Suppose an online platform wants to compare a treatment and control policy, e.g., two different matching algorithms in a ridesharing system, or two different inventory management algorithms in an online retail site. Standard randomized controlled trials are typically not feasible, since the goal is to estimate policy performance on the entire system. Instead, the typical current practice involves dynamically alternating between the two policies for fixed lengths of time, and comparing the average performance of each over the intervals in which they were run as an estimate of the treatment effect. However, this approach suffers from *temporal interference*: one algorithm alters the state of the system as seen by the second algorithm, biasing estimates of the treatment effect. Further, the simple non-adaptive nature of such designs implies they are not sample efficient. We develop a benchmark theoretical model in which to study optimal experimental design for this setting. We view testing the two policies as the problem of estimating the steady-state difference in reward between two unknown Markov chains (i.e., policies). We assume estimation of the steady-state reward for each chain proceeds via nonparametric maximum likelihood, and search for consistent (i.e., asymptotically unbiased) experimental designs that are efficient (i.e., asymptotically minimum variance). Characterizing such designs is equivalent to a Markov decision problem with a minimum variance objective; such problems generally do not admit tractable solutions. Remarkably, in our setting, using a novel application of classical martingale analysis of Markov chains via Poisson's equation, we characterize efficient designs via a succinct convex optimization problem. We use this characterization to propose a consistent, efficient online experimental design that adaptively samples the two Markov chains.
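A small, hedged sketch of the nonparametric maximum-likelihood step mentioned in this abstract, for a single chain: the empirical transition frequencies are the MLE of the transition matrix, and the steady-state reward is the per-state reward averaged under its stationary distribution. The function name and inputs are illustrative, the chain is assumed irreducible, and the paper's actual contribution, the efficient adaptive design for sampling the two chains, is not reproduced here.

```python
# Nonparametric MLE of a Markov chain's steady-state reward from one observed trajectory.
# Assumes states are labeled 0..n_states-1 and the chain is irreducible.
import numpy as np

def steady_state_reward(states, rewards_by_state, n_states):
    states = np.asarray(states, dtype=int)

    # Empirical transition counts give the MLE of the transition matrix
    counts = np.zeros((n_states, n_states))
    np.add.at(counts, (states[:-1], states[1:]), 1.0)
    row_sums = counts.sum(axis=1, keepdims=True)
    P = np.where(row_sums > 0, counts / np.maximum(row_sums, 1.0), 1.0 / n_states)

    # Stationary distribution: left eigenvector of P for eigenvalue 1
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmax(np.real(vals))])
    pi = np.abs(pi) / np.abs(pi).sum()

    return float(pi @ np.asarray(rewards_by_state, dtype=float))
```

The treatment-effect estimate would then be the difference of this quantity between the two policies' trajectories.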
We describe a simple but effective iterative procedure specifically designed to destripe Q and U Stokes parameter data such as those collected by the SPOrt experiment onboard the International Space Station (ISS). The method is general enough to be useful for other experiments, both in polarization and in total intensity. The only requirement for the algorithm to work properly is that the receiver knee frequency must be lower than the signal modulation frequency, corresponding in our case to the ISS orbit period. The detailed performance of the technique is presented in the context of the SPOrt experiment, both in terms of added rms noise and residual correlated noise.
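The abstract describes the procedure only at a high level; below is a hedged sketch of a generic iterative destriper of this kind for a single (e.g. Q or U) time stream, alternating between binning a map from offset-corrected data and re-estimating one constant offset per orbit-length chunk. The chunking, names, and fixed iteration count are assumptions, not the SPOrt algorithm itself.

```python
# Generic iterative destriping sketch: alternate map binning and per-chunk offset estimation.
# Assumes one constant offset per chunk of `chunk_len` samples (e.g. one ISS orbit).
import numpy as np

def iterative_destripe(tod, pix, npix, chunk_len, n_iter=20):
    n = (len(tod) // chunk_len) * chunk_len
    tod, pix = tod[:n], pix[:n]
    hits = np.bincount(pix, minlength=npix).astype(float)
    offsets = np.zeros(n // chunk_len)

    for _ in range(n_iter):
        # Bin a sky map from the offset-corrected data
        corrected = tod - np.repeat(offsets, chunk_len)
        sky = np.bincount(pix, weights=corrected, minlength=npix)
        sky = np.where(hits > 0, sky / np.maximum(hits, 1.0), 0.0)

        # Re-estimate each chunk's offset from the sky-subtracted residuals
        resid = tod - sky[pix]
        offsets = resid.reshape(-1, chunk_len).mean(axis=1)
        offsets -= offsets.mean()  # fix the unconstrained global offset

    return sky
```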