
Robust, data-driven inference in non-linear cosmostatistics

Posted by Benjamin D. Wandelt
Publication date: 2012
Research field: Physics
Paper language: English





We discuss two projects in non-linear cosmostatistics applicable to very large surveys of galaxies. The first is a Bayesian reconstruction of galaxy redshifts and their number density distribution from approximate, photometric redshift data. The second focuses on cosmic voids and uses them to construct cosmic spheres that allow reconstructing the expansion history of the Universe using the Alcock-Paczynski test. In both cases we find that non-linearities enable the methods or enhance the results: non-linear gravitational evolution creates voids and our photo-z reconstruction works best in the highest density (and hence most non-linear) portions of our simulations.
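As a rough illustration of the Alcock-Paczynski test mentioned above: for a structure that is statistically spherical (such as a stacked cosmic void), the ratio of its redshift extent to its angular extent depends only on the expansion history, F_AP(z) = (1 + z) D_A(z) H(z) / c. The minimal Python sketch below computes this prediction for an assumed flat LambdaCDM cosmology with astropy; it illustrates the test itself, not the authors' void pipeline, and the cosmological parameters are placeholders.

# A minimal sketch (not the authors' pipeline): the Alcock-Paczynski observable
# for a statistically spherical structure (e.g. a stacked void) at redshift z.
# The trial cosmology (FlatLambdaCDM, H0 = 70, Om0 = 0.3) is an assumption.
import numpy as np
from astropy.cosmology import FlatLambdaCDM
from astropy import units as u
from astropy.constants import c

cosmo = FlatLambdaCDM(H0=70 * u.km / u.s / u.Mpc, Om0=0.3)

def ap_ratio(z):
    """Predicted ratio of redshift extent to angular extent for a sphere:
    F_AP(z) = (1 + z) * D_A(z) * H(z) / c."""
    d_a = cosmo.angular_diameter_distance(z)
    return ((1 + z) * d_a * cosmo.H(z) / c).to(u.dimensionless_unscaled).value

# A wrong expansion history predicts a different ratio, so comparing the observed
# Delta z / Delta theta of stacked voids with ap_ratio(z) tests the cosmology.
for z in (0.3, 0.6, 1.0):
    print(f"z = {z:.1f}: F_AP = {ap_ratio(z):.3f}")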




Read also

We study the statistical inference of the cosmological dark matter density field from non-Gaussian, non-linear and non-Poisson biased distributed tracers. We have implemented a Bayesian posterior sampling computer-code solving this problem and tested it with mock data based on N-body simulations.
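As a toy illustration of the kind of posterior sampling involved (not the paper's code or likelihood), the sketch below runs a random-walk Metropolis chain for the overdensity in a single cell, assuming a lognormal prior and a negative-binomial (hence non-Poisson) tracer likelihood; all parameter values are placeholders.

# A toy illustration: Metropolis-Hastings sampling of the posterior for the matter
# overdensity delta in one cell, with an assumed lognormal prior and an assumed
# negative-binomial (non-Poisson) likelihood for the tracer count N.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

N_obs = 12              # observed tracer count in the cell (mock value)
nbar, bias = 5.0, 1.5   # assumed mean count and linear bias
sigma_ln = 0.8          # assumed lognormal prior width
r_disp = 3.0            # negative-binomial dispersion (r -> inf recovers Poisson)

def log_post(delta):
    if delta <= -1.0:
        return -np.inf
    # Lognormal prior on 1 + delta.
    lp = stats.norm.logpdf(np.log1p(delta), loc=-0.5 * sigma_ln**2, scale=sigma_ln)
    # Negative-binomial likelihood with mean nbar * (1 + bias * delta).
    mu = max(nbar * (1.0 + bias * delta), 1e-6)
    p = r_disp / (r_disp + mu)
    return lp + stats.nbinom.logpmf(N_obs, r_disp, p)

# Random-walk Metropolis over delta.
delta, chain = 0.0, []
for _ in range(20000):
    prop = delta + 0.2 * rng.normal()
    if np.log(rng.uniform()) < log_post(prop) - log_post(delta):
        delta = prop
    chain.append(delta)

print("posterior mean of delta:", np.mean(chain[5000:]))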
We study safe, data-driven control of (Markov) jump linear systems with unknown transition probabilities, where both the discrete mode and the continuous state are to be inferred from output measurements. To this end, we develop a receding horizon estimator which uniquely identifies a sub-sequence of past mode transitions and the corresponding continuous state, allowing for arbitrary switching behavior. Unlike traditional approaches to mode estimation, we do not require an offline exhaustive search over mode sequences to determine the size of the observation window, but rather select it online. If the system is weakly mode observable, the window size will be upper bounded, leading to a finite-memory observer. We integrate the estimation procedure with a simple distributionally robust controller, which hedges against misestimations of the transition probabilities due to finite sample sizes. As additional mode transitions are observed, the used ambiguity sets are updated, resulting in continual improvements of the control performance. The practical applicability of the approach is illustrated on small numerical examples.
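One ingredient described above is updating the ambiguity sets for the unknown transition probabilities as more transitions are observed. The sketch below shows a minimal version of that bookkeeping, using an empirical transition matrix together with a standard L1 concentration radius; the confidence level and radius formula are assumptions, and the paper's estimator and controller are not reproduced.

# A minimal sketch: empirical transition probabilities of a Markov jump system
# plus an L1 ambiguity radius that shrinks as more transitions are observed.
import numpy as np

class TransitionAmbiguity:
    def __init__(self, n_modes: int, confidence: float = 0.95):
        self.counts = np.zeros((n_modes, n_modes))
        self.delta = 1.0 - confidence

    def observe(self, prev_mode: int, next_mode: int) -> None:
        """Record one observed transition prev_mode -> next_mode."""
        self.counts[prev_mode, next_mode] += 1

    def estimate(self, mode: int):
        """Empirical transition row and L1 ambiguity radius for this mode."""
        n = self.counts[mode].sum()
        n_modes = self.counts.shape[1]
        if n == 0:
            # No data yet: uniform estimate, worst-case radius.
            return np.full(n_modes, 1.0 / n_modes), 2.0
        p_hat = self.counts[mode] / n
        # L1 deviation bound: ||p_hat - p||_1 <= sqrt(2 (ln(2^m - 2) - ln delta) / n).
        radius = np.sqrt(2.0 * (np.log(2.0**n_modes - 2.0) - np.log(self.delta)) / n)
        return p_hat, min(radius, 2.0)

# Usage: feed in a mode sequence and watch the ambiguity set shrink.
amb = TransitionAmbiguity(n_modes=2)
for prev, nxt in [(0, 0), (0, 1), (1, 0), (0, 0), (0, 0), (1, 1)]:
    amb.observe(prev, nxt)
print(amb.estimate(0))  # (estimated probabilities, ambiguity radius)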
We present a model-free data-driven inference method that enables inferences on system outcomes to be derived directly from empirical data without the need for intervening modeling of any type, be it modeling of a material law or modeling of a prior distribution of material states. We specifically consider physical systems with states characterized by points in a phase space determined by the governing field equations. We assume that the system is characterized by two likelihood measures: one $\mu_D$ measuring the likelihood of observing a material state in phase space; and another $\mu_E$ measuring the likelihood of states satisfying the field equations, possibly under random actuation. We introduce a notion of intersection between measures which can be interpreted to quantify the likelihood of system outcomes. We provide conditions under which the intersection can be characterized as the athermal limit $\mu_\infty$ of entropic regularizations $\mu_\beta$, or thermalizations, of the product measure $\mu = \mu_D \times \mu_E$ as $\beta \to +\infty$. We also supply conditions under which $\mu_\infty$ can be obtained as the athermal limit of carefully thermalized $(\mu_{h,\beta_h})$ sequences of empirical data sets $(\mu_h)$ approximating weakly an unknown likelihood function $\mu$. In particular, we find that the cooling sequence $\beta_h \to +\infty$ must be slow enough, corresponding to quenching, in order for the proper limit $\mu_\infty$ to be delivered. Finally, we derive explicit analytic expressions for expectations $\mathbb{E}[f]$ of outcomes $f$ that are explicit in the data, thus demonstrating the feasibility of the model-free data-driven paradigm as regards making convergent inferences directly from the data without recourse to intermediate modeling steps.
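The following toy sketch illustrates the thermalization idea in one dimension: empirical material states are weighted by exp(-beta * dist^2) to the set satisfying the field equations, expectations are computed as weighted averages, and beta is raised slowly (quenching). The data, distance function and cooling schedule are assumptions for illustration, not the paper's scheme.

# A toy illustration of entropic regularization ("thermalization") of an empirical
# data set against a field-equation constraint, followed by slow cooling.
import numpy as np

rng = np.random.default_rng(1)

# Empirical data set mu_h: noisy samples of a material state z = (strain, stress).
strain = rng.uniform(-1.0, 1.0, size=200)
stress = 2.0 * strain + 0.3 * rng.normal(size=200)   # hidden material law + noise
Z = np.column_stack([strain, stress])

# "Field equation" set E: states whose stress equals an applied load sigma_0.
sigma_0 = 1.0
def dist_to_E(z):
    return np.abs(z[:, 1] - sigma_0)

def expectation(f, beta):
    """Thermalized expectation E_beta[f] over the empirical measure."""
    w = np.exp(-beta * dist_to_E(Z) ** 2)
    w /= w.sum()
    return np.sum(w * f(Z))

# Quenching: raise beta slowly and watch E_beta[strain] converge to the strain
# compatible with both the data and the field equation (about sigma_0 / 2 here).
f_strain = lambda z: z[:, 0]
for beta in (1.0, 10.0, 100.0, 1000.0):
    print(f"beta = {beta:7.1f}: E[strain] ~ {expectation(f_strain, beta):.3f}")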
We use identification robust tests to show that difference, level and non-linear moment conditions, as proposed by Arellano and Bond (1991), Arellano and Bover (1995), Blundell and Bond (1998) and Ahn and Schmidt (1995) for the linear dynamic panel data model, do not separately identify the autoregressive parameter when its true value is close to one and the variance of the initial observations is large. We prove that combinations of these moment conditions, however, do so when there are more than three time series observations. This identification then solely results from a set of so-called robust moment conditions. These robust moments are spanned by the combined difference, level and non-linear moment conditions and only depend on differenced data. We show that, when only the robust moments contain identifying information on the autoregressive parameter, the discriminatory power of the Kleibergen (2005) LM test using the combined moments is identical to the largest rejection frequencies that can be obtained from solely using the robust moments. This shows that the KLM test implicitly uses the robust moments when only they contain information on the autoregressive parameter.
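To make the moment conditions concrete, the sketch below builds Arellano-Bond difference moments, Blundell-Bond level moments and an Ahn-Schmidt non-linear moment for a simulated panel AR(1) with T = 4 and evaluates their sample averages at trial values of the autoregressive parameter; the simulation design is an assumption, and the identification-robust (KLM) testing of the paper is not reproduced.

# A simulated-data sketch of difference, level and non-linear moment conditions for
# the panel AR(1) model y_it = rho*y_{i,t-1} + eta_i + eps_it.
import numpy as np

rng = np.random.default_rng(2)
N, T, rho_true = 500, 4, 0.9

eta = rng.normal(size=N)
y = np.empty((N, T))
y[:, 0] = eta / (1 - rho_true) + rng.normal(size=N)   # near-stationary start
for t in range(1, T):
    y[:, t] = rho_true * y[:, t - 1] + eta + rng.normal(size=N)

def moments(rho):
    dy = np.diff(y, axis=1)                 # first differences, shape (N, T-1)
    d_eps = dy[:, 1:] - rho * dy[:, :-1]    # differenced errors for t = 2, 3
    g = []
    # Difference (Arellano-Bond) moments: E[y_{i,t-s} * d_eps_it] = 0, s >= 2.
    g += [np.mean(y[:, 0] * d_eps[:, 0]),
          np.mean(y[:, 0] * d_eps[:, 1]),
          np.mean(y[:, 1] * d_eps[:, 1])]
    # Level (Blundell-Bond) moments: E[dy_{i,t-1} * (y_it - rho*y_{i,t-1})] = 0.
    u = y[:, 1:] - rho * y[:, :-1]
    g += [np.mean(dy[:, 0] * u[:, 1]), np.mean(dy[:, 1] * u[:, 2])]
    # Non-linear (Ahn-Schmidt) moment: E[u_{iT} * d_eps_it] = 0 for t < T.
    g += [np.mean(u[:, 2] * d_eps[:, 0])]
    return np.array(g)

# The sample moments should be closest to zero near the true value of rho.
for rho in (0.5, 0.9, 0.99):
    print(f"rho = {rho}: ||g|| = {np.linalg.norm(moments(rho)):.4f}")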
Classical semiparametric inference with missing outcome data is not robust to contamination of the observed data, and a single observation can have arbitrarily large influence on estimation of a parameter of interest. This sensitivity is exacerbated when inverse probability weighting methods are used, which may overweight contaminated observations. We introduce inverse probability weighted, double robust and outcome regression estimators of location and scale parameters, which are robust to contamination in the sense that their influence function is bounded. We give asymptotic properties and study finite sample behaviour. Our simulated experiments show that contamination can be a more serious threat to the quality of inference than model misspecification. An interesting aspect of our results is that the auxiliary outcome model used by some of the estimators to adjust for ignorable missingness is also useful to protect against contamination. We also illustrate through a case study how both adjustment to ignorable missingness and protection against contamination are achieved through weighting schemes, which can be contrasted to gain further insights.
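As a minimal illustration of the bounded-influence idea, the sketch below contrasts a classical inverse probability weighted mean with a Huber-type weighted location estimator whose influence function is bounded; the missingness model, tuning constant and scale estimate are assumptions, and the paper's double robust and outcome regression estimators are not shown.

# A sketch: inverse probability weighted location estimation with a bounded (Huber)
# psi-function, so contaminated observations have capped influence.
import numpy as np
from scipy import optimize

rng = np.random.default_rng(3)

# Simulated data: outcome Y missing at random given covariate X, plus a few
# contaminated (outlying) observed outcomes.
n = 1000
x = rng.normal(size=n)
y = 1.0 + 0.5 * x + rng.normal(size=n)
p_obs = 1.0 / (1.0 + np.exp(-(0.5 + x)))      # true observation (missingness) model
r = rng.uniform(size=n) < p_obs               # r = 1 if Y observed
y[r & (rng.uniform(size=n) < 0.02)] += 50.0   # contamination among observed Y

def huber_psi(u, c=1.345):
    return np.clip(u, -c, c)

def ipw_huber_location(y, r, pi):
    """Solve sum_i (r_i / pi_i) * psi((y_i - mu) / s) = 0 for mu."""
    s = 1.4826 * np.median(np.abs(y[r] - np.median(y[r])))  # robust scale (MAD)
    ee = lambda mu: np.sum((r / pi) * huber_psi((y - mu) / s))
    return optimize.brentq(ee, np.min(y[r]), np.max(y[r]))

def ipw_mean(y, r, pi):
    """Classical (non-robust) IPW mean for comparison."""
    return np.sum((r / pi) * y) / np.sum(r / pi)

print("robust IPW location:", ipw_huber_location(y, r, p_obs))
print("classical IPW mean: ", ipw_mean(y, r, p_obs))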