
A maximum likelihood method for bidimensional experimental distributions, and its application to the galaxy merger fraction

Posted by Carlos López-Sanjuan
Publication date: 2008
Research field: Physics
Paper language: English





The determination of the galaxy merger fraction of field galaxies using automatic morphological indices and photometric redshifts is affected by several biases if observational errors are not treated properly. Here we correct these biases using maximum likelihood techniques. The method takes the observational errors into account to statistically recover the real shape of the bidimensional distribution of galaxies in redshift-asymmetry space, which is needed to infer the redshift evolution of the galaxy merger fraction. We test the method with synthetic catalogs and show its applicability limits. The accuracy of the method depends on catalog characteristics such as the number of sources and the size of the experimental errors. We show that the maximum likelihood method recovers the real distribution of galaxies in redshift-asymmetry space even when the bin sizes approach the size of the observational errors. We provide a step-by-step guide to applying maximum likelihood techniques to recover any one- or bidimensional distribution subject to observational errors.
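As a rough illustration of the recovery step described above (a minimal sketch, not the authors' code), the example below models the true one-dimensional distribution as bin fractions at fixed bin centers and maximizes the error-convolved likelihood with EM iterations; the catalog, bin layout, and error size are all invented for the example.

```python
# Sketch: maximum likelihood recovery of a binned 1D distribution from
# measurements blurred by known Gaussian observational errors. The true
# distribution is modeled as fractions p_k at fixed bin centers; the
# likelihood of each observed value is the error-convolved mixture, and the
# EM update below is the standard ML step for mixture weights.
import numpy as np

rng = np.random.default_rng(0)

# synthetic catalog: true values drawn from bins, then blurred by errors
edges = np.linspace(0.0, 1.0, 11)            # 10 bins in, say, asymmetry A
centers = 0.5 * (edges[:-1] + edges[1:])
true_p = np.array([0.05, 0.10, 0.25, 0.25, 0.15,
                   0.08, 0.05, 0.04, 0.02, 0.01])
n = 5000
k_true = rng.choice(len(centers), size=n, p=true_p)
sigma = np.full(n, 0.08)                     # per-source observational error
x_obs = centers[k_true] + sigma * rng.normal(size=n)

def gauss(x, mu, s):
    return np.exp(-0.5 * ((x - mu) / s) ** 2) / (np.sqrt(2 * np.pi) * s)

# EM iterations for the bin fractions (fixed Gaussian error kernels)
p = np.full(len(centers), 1.0 / len(centers))   # start from a flat guess
for _ in range(500):
    # responsibilities: probability that source i truly lies in bin k
    dens = p[None, :] * gauss(x_obs[:, None], centers[None, :], sigma[:, None])
    resp = dens / dens.sum(axis=1, keepdims=True)
    p = resp.mean(axis=0)                       # ML update of bin fractions

print("recovered fractions:", np.round(p, 3))
print("naive histogram:    ",
      np.round(np.histogram(x_obs, bins=edges)[0] / n, 3))
```

The naive histogram is visibly smeared toward neighboring bins when the error size is comparable to the bin width, while the ML estimate recovers the input fractions.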


Read also

C. Lopez-Sanjuan 2009
Aims: We study the major merger fraction in a SPITZER/IRAC-selected catalogue in the GOODS-S field up to z ~ 1 for luminosity- and mass-limited samples. Methods: We select disc-disc merger remnants on the basis of morphological asymmetries, and address three main sources of systematic error: (i) we explicitly apply morphological K-corrections, (ii) we measure asymmetries in galaxies artificially redshifted to z_d = 1.0 to deal with the loss of morphological information with redshift, and (iii) we take into account, through the use of maximum likelihood techniques, the observational errors in z and A, which otherwise tend to overestimate the merger fraction. Results: We obtain morphological merger fractions (f_m) below 0.06 up to z ~ 1. Parameterizing the merger fraction evolution with redshift as f_m(z) = f_m(0) (1+z)^m, we find m = 1.8 +/- 0.5 for M_B <= -20 galaxies, while m = 5.4 +/- 0.4 for M_star >= 10^10 M_Sun galaxies. When we translate our merger fractions to merger rates (R_m), their evolution, parameterized as R_m(z) = R_m(0) (1+z)^n, is quite similar in both cases: n = 3.3 +/- 0.8 and n = 3.5 +/- 0.4, respectively. Conclusions: Our results imply that only ~8% of today's M_star >= 10^10 M_Sun galaxies have undergone a disc-disc major merger since z ~ 1. In addition, ~21% of galaxies of this mass at z ~ 1 had undergone one of these mergers since z ~ 1.5. This suggests that disc-disc major mergers are not the dominant process in the evolution of M_star >= 10^10 M_Sun galaxies since z ~ 1, but may be an important process at z > 1.
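To make the bookkeeping behind the "mergers since z ~ 1" numbers concrete, here is a hedged sketch that integrates a merger rate of the quoted form R_m(z) = R_m(0) (1+z)^n over cosmic time in flat LambdaCDM. Only the exponent n = 3.5 comes from the abstract; the local rate R_m(0) and the cosmological parameters are illustrative placeholders.

```python
# Sketch: number of mergers per galaxy since z1 is the time integral of the
# merger rate, N = int_0^{z1} R_m(0) (1+z)^n * (dt/dz) dz, with
# dt/dz = 1 / ((1+z) H(z)) in a flat LambdaCDM cosmology.
import numpy as np
from scipy.integrate import quad

H0 = 70.0 / 3.086e19          # Hubble constant: km/s/Mpc -> 1/s (assumed)
Om, OL = 0.3, 0.7             # assumed density parameters
GYR = 3.156e16                # seconds per Gyr

def dt_dz(z):
    """Cosmic time per unit redshift, in Gyr."""
    Hz = H0 * np.sqrt(Om * (1 + z) ** 3 + OL)
    return 1.0 / ((1 + z) * Hz) / GYR

n_exp = 3.5                   # from the abstract (mass-selected sample)
Rm0 = 0.01                    # mergers/galaxy/Gyr at z=0 -- placeholder only

mergers_since_z1, _ = quad(lambda z: Rm0 * (1 + z) ** n_exp * dt_dz(z), 0, 1)
print(f"mergers per galaxy since z=1: {mergers_since_z1:.3f}")
```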
The asymptotic variance of the maximum likelihood estimate is proved to decrease when the maximization is restricted to a subspace that contains the true parameter value. Maximum likelihood estimation allows a systematic fitting of covariance models to the sample, which is important in data assimilation. The hierarchical maximum likelihood approach is applied to the spectral diagonal covariance model with different parameterizations of eigenvalue decay, and to the sparse inverse covariance model with specified parameter values on different sets of nonzero entries. It is shown computationally that using smaller sets of parameters can substantially decrease the sampling noise in high dimensions.
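A minimal numerical illustration of this point (the details are assumptions, not the paper's models): fit a spectral diagonal covariance from a few samples, once with one free eigenvalue per mode and once restricted to a two-parameter power-law decay lambda_k = c * k^(-alpha), and compare the estimation error.

```python
# Sketch: restricted vs. unrestricted ML for a Gaussian with a diagonal
# covariance in spectral space. For mean-zero data the negative log-likelihood
# per mode is 0.5 * (N log lambda_k + s_k / lambda_k) with s_k = sum_i x_ik^2.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
dim, nsamp = 200, 10                     # high dimension, few samples
k = np.arange(1, dim + 1)
lam_true = 5.0 * k ** -1.5               # true spectral eigenvalues

X = rng.normal(size=(nsamp, dim)) * np.sqrt(lam_true)   # spectral samples
s = (X ** 2).sum(axis=0)                                 # sufficient statistics

lam_full = s / nsamp                     # unrestricted diagonal ML estimate

def negloglik(theta):
    c, alpha = np.exp(theta)             # log-parameterize to stay positive
    lam = c * k ** -alpha
    return 0.5 * (nsamp * np.log(lam) + s / lam).sum()

res = minimize(negloglik, x0=np.log([1.0, 1.0]))
c_hat, a_hat = np.exp(res.x)
lam_restr = c_hat * k ** -a_hat

err = lambda est: np.mean((est - lam_true) ** 2)
print(f"MSE, full diagonal fit: {err(lam_full):.4f}")
print(f"MSE, power-law fit:     {err(lam_restr):.4f}   (fewer parameters)")
```

With ten samples in two hundred dimensions the two-parameter fit is far less noisy than the per-mode fit, which is the effect the abstract describes.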
Yuehong Xie 2009
This paper presents a statistical method to subtract background in a maximum likelihood fit without relying on any separate sideband or simulation for background modeling. The method, called sFit, is an extension of the sPlot technique originally developed to reconstruct the true distribution of each data component. The sWeights defined for the sPlot technique allow one to construct a modified likelihood function using only the signal probability density function and events in the signal region. The contribution of background events in the signal region to the likelihood function cancels out on a statistical basis. Maximizing this likelihood function leads to unbiased estimates of the fit parameters in the signal probability density function.
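The mechanics can be sketched as follows (a toy after the published sPlot formalism; the PDFs, yields, and variables below are invented, and the yields are taken as known rather than fitted): compute sWeights from the discriminating-variable fit, then maximize the sWeighted log-likelihood of the control variable using the signal PDF alone.

```python
# Sketch: sWeights from a two-species fit in a discriminating variable m,
# followed by a weighted ML fit of a control variable t with only the
# signal PDF (the sFit idea). Background contributions cancel statistically.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm, expon

rng = np.random.default_rng(2)

# toy sample: Gaussian signal + flat background in m; control variable t is
# exponential for signal (lifetime tau) and uniform for background
Ns, Nb = 2000, 3000
tau_true = 1.5
m = np.concatenate([rng.normal(5.28, 0.02, Ns), rng.uniform(5.0, 5.5, Nb)])
t = np.concatenate([rng.exponential(tau_true, Ns), rng.uniform(0, 10, Nb)])

fs = norm.pdf(m, 5.28, 0.02)             # signal PDF in m
fb = np.full_like(m, 1.0 / 0.5)          # flat background PDF on [5.0, 5.5]

# sWeights: w_s(i) = sum_b V_sb f_b(m_i) / sum_c N_c f_c(m_i)
denom = Ns * fs + Nb * fb
Vinv = np.array([[np.sum(fs * fs / denom**2), np.sum(fs * fb / denom**2)],
                 [np.sum(fb * fs / denom**2), np.sum(fb * fb / denom**2)]])
V = np.linalg.inv(Vinv)
w_sig = (V[0, 0] * fs + V[0, 1] * fb) / denom

# sFit: weighted likelihood in t using the signal PDF only
def nll(tau):
    return -np.sum(w_sig * expon.logpdf(t, scale=tau))

res = minimize_scalar(nll, bounds=(0.1, 10.0), method="bounded")
print(f"fitted lifetime: {res.x:.3f}  (true {tau_true})")
```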
Suppose an online platform wants to compare a treatment and control policy, e.g., two different matching algorithms in a ridesharing system, or two different inventory management algorithms in an online retail site. Standard randomized controlled trials are typically not feasible, since the goal is to estimate policy performance on the entire system. Instead, the typical current practice involves dynamically alternating between the two policies for fixed lengths of time, and comparing the average performance of each over the intervals in which they were run as an estimate of the treatment effect. However, this approach suffers from *temporal interference*: one algorithm alters the state of the system as seen by the second algorithm, biasing estimates of the treatment effect. Further, the simple non-adaptive nature of such designs implies they are not sample efficient. We develop a benchmark theoretical model in which to study optimal experimental design for this setting. We view testing the two policies as the problem of estimating the steady state difference in reward between two unknown Markov chains (i.e., policies). We assume estimation of the steady state reward for each chain proceeds via nonparametric maximum likelihood, and search for consistent (i.e., asymptotically unbiased) experimental designs that are efficient (i.e., asymptotically minimum variance). Characterizing such designs is equivalent to a Markov decision problem with a minimum variance objective; such problems generally do not admit tractable solutions. Remarkably, in our setting, using a novel application of classical martingale analysis of Markov chains via Poisson's equation, we characterize efficient designs via a succinct convex optimization problem. We use this characterization to propose a consistent, efficient online experimental design that adaptively samples the two Markov chains.
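A minimal sketch of the estimation step only (the adaptive design itself is not reproduced; the chains and rewards below are toy assumptions): the nonparametric MLE of each policy's transition matrix is the matrix of empirical transition frequencies, from which the stationary distribution and steady-state reward follow.

```python
# Sketch: estimate each policy's steady-state reward from a sample path via
# the empirical (nonparametric ML) transition matrix, then difference them.
import numpy as np

rng = np.random.default_rng(3)

def run_chain(P, steps, s0=0):
    states = [s0]
    for _ in range(steps):
        states.append(rng.choice(len(P), p=P[states[-1]]))
    return np.array(states)

def npmle_steady_reward(states, n_states, reward):
    counts = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1
    P_hat = counts / counts.sum(axis=1, keepdims=True)   # empirical MLE
    # stationary distribution: left eigenvector of P_hat for eigenvalue 1
    vals, vecs = np.linalg.eig(P_hat.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
    pi /= pi.sum()
    return pi @ reward

# two "policies" = two toy chains on the same 3 states, same reward function
P_treat = np.array([[0.7, 0.2, 0.1], [0.3, 0.5, 0.2], [0.2, 0.3, 0.5]])
P_ctrl  = np.array([[0.5, 0.3, 0.2], [0.2, 0.6, 0.2], [0.1, 0.4, 0.5]])
reward = np.array([1.0, 0.0, -1.0])

r_t = npmle_steady_reward(run_chain(P_treat, 20000), 3, reward)
r_c = npmle_steady_reward(run_chain(P_ctrl, 20000), 3, reward)
print(f"estimated treatment effect: {r_t - r_c:.4f}")
```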
The destriping technique is a viable tool for removing different kinds of systematic effects in CMB-related experiments. It has already been proven to work for gain instabilities that produce the so-called 1/f noise and for periodic fluctuations due to, e.g., thermal instability. Both effects, when coupled with the observing strategy, result in stripes on the observed sky region. Here we present a maximum-likelihood approach to this type of technique and also provide a useful generalization. As a working case we consider a data set similar to what the Planck satellite will produce in its Low Frequency Instrument (LFI). We compare our method to those presented in the literature and find some improvement in performance. Our approach is also more general and allows for different base functions to be used when fitting the systematic effect under consideration. We study the effect of increasing the number of these base functions on the quality of signal cleaning and reconstruction. This study is related to Planck LFI activities.
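As a toy version of the underlying ML problem (not the Planck LFI pipeline; the scanning, baseline length, and noise levels are invented, and the base functions are reduced to constant offsets), one can model the time-ordered data as d = P m + F a + n, where P points samples to sky pixels and F assigns one offset per baseline chunk, and solve for the map and offsets jointly.

```python
# Sketch: destriping as a joint ML (least squares) solve for the sky map m
# and per-baseline offsets a, given white noise on top of the drifts.
import numpy as np

rng = np.random.default_rng(4)
npix, nsamp, base_len = 50, 5000, 100
sky = rng.normal(size=npix)                      # toy sky map
pix = rng.integers(0, npix, size=nsamp)          # toy scanning strategy

nbase = nsamp // base_len
base_off = rng.normal(scale=2.0, size=nbase)     # slow "1/f"-like drifts
d = sky[pix] + np.repeat(base_off, base_len) + 0.1 * rng.normal(size=nsamp)

# pointing matrix P and baseline matrix F (dense is fine at toy scale)
P = np.zeros((nsamp, npix)); P[np.arange(nsamp), pix] = 1.0
F = np.kron(np.eye(nbase), np.ones((base_len, 1)))

# joint solve; lstsq picks the minimum-norm solution of the overall
# map/offset degeneracy, and np.std below ignores any constant shift
x, *_ = np.linalg.lstsq(np.hstack([P, F]), d, rcond=None)
m_hat = x[:npix]

naive = (np.bincount(pix, weights=d, minlength=npix)
         / np.bincount(pix, minlength=npix))
print(f"map residual rms, destriped: {np.std(m_hat - sky):.3f}")
print(f"map residual rms, naive bin: {np.std(naive - sky):.3f}")
```

Generalizing F to hold other base functions (e.g., slopes or periodic templates per chunk) gives the kind of extension the abstract mentions.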