
Two-stage approaches to the analysis of occupancy data I: The homogeneous case

Posted by: Natalie Karavarsamis
Publication date: 2018
Research field: Mathematical statistics
Paper language: English





Occupancy models are used in statistical ecology to estimate species dispersion. The two components of an occupancy model are the detection and occupancy probabilities, with the main interest being in the occupancy probabilities. We show that for the homogeneous occupancy model there is an orthogonal transformation of the parameters that gives a natural two-stage inference procedure based on a conditional likelihood. We then extend this to a partial likelihood that gives explicit estimators of the model parameters. By allowing the separate modelling of the detection and occupancy probabilities, the extension of the two-stage approach to more general models has the potential to simplify the computational routines used there.
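To make the two-stage idea concrete, here is a minimal numerical sketch for the homogeneous model, assuming each of N sites is surveyed K times with constant detection probability p and occupancy probability psi. Stage one estimates p from the conditional likelihood of the sites with at least one detection (a zero-truncated binomial), and stage two plugs that estimate into a closed-form estimator of psi. This is an illustrative version of a two-stage scheme, not necessarily the exact partial-likelihood estimators derived in the paper; the function and variable names below are our own.

```python
import numpy as np
from scipy.optimize import brentq

def two_stage_occupancy(detections, K):
    """Two-stage estimation for the homogeneous occupancy model.

    detections : per-site detection counts out of K survey occasions.
    Stage 1: estimate the detection probability p from the conditional
    (zero-truncated binomial) likelihood of sites with >= 1 detection.
    Stage 2: plug p_hat into an estimator of the occupancy probability psi.
    """
    detections = np.asarray(detections)
    N = detections.size
    detected = detections[detections > 0]
    d_bar = detected.mean()

    # Stage 1: the conditional MLE of p solves
    #   K * p / (1 - (1 - p)^K) = mean detection count among detected sites.
    score = lambda p: K * p / (1.0 - (1.0 - p) ** K) - d_bar
    p_hat = brentq(score, 1e-8, 1.0 - 1e-8)

    # Stage 2: correct the naive proportion of detected sites for the
    # probability of missing an occupied site on all K occasions.
    psi_hat = (detected.size / N) / (1.0 - (1.0 - p_hat) ** K)
    return psi_hat, p_hat

# Example: 100 sites, 5 visits each, true psi = 0.6 and p = 0.3.
rng = np.random.default_rng(0)
z = rng.binomial(1, 0.6, size=100)          # latent occupancy indicators
y = rng.binomial(5, 0.3, size=100) * z      # detection counts per site
print(two_stage_occupancy(y, K=5))
```

Because the two stages separate cleanly, the detection model can be changed without touching the occupancy step, which is the computational simplification the abstract points to for more general models.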




Read also

This paper considers the modeling of zero-inflated circular measurements arising in real case studies from the medical sciences. Circular-circular regression models have been discussed in the statistical literature and illustrated with various real-life applications; however, there are no models that handle a zero-inflated response together with a covariate. A Möbius-transformation-based two-stage circular-circular regression model is proposed, and Bayesian estimation of the model parameters via an MCMC algorithm is suggested. Simulation results show that the proposed method outperforms existing competitors. The method is applied to analyse real datasets on astigmatism due to cataract surgery and abnormal gait related to orthopaedic impairment. The proposed methodology can assist in efficient decision making during treatment or post-operative care.
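For readers unfamiliar with the link involved, the sketch below shows how a Möbius transformation maps a predictor angle to a mean direction on the unit circle. It illustrates the deterministic link only; the zero-inflation component, the error distribution, and the Bayesian MCMC fitting described in the abstract are not shown, and the parameter names are hypothetical.

```python
import numpy as np

def mobius_link(x, beta, a):
    """Map a predictor angle x (radians) to a mean direction on the circle.

    mu(x) = arg( beta * (e^{ix} + a) / (1 + conj(a) * e^{ix}) ),
    where |a| < 1 and |beta| = 1, so the unit circle is mapped
    bijectively onto itself.
    """
    z = np.exp(1j * np.asarray(x))
    w = beta * (z + a) / (1.0 + np.conj(a) * z)
    return np.angle(w)

# Example: a rotation beta = e^{i*pi/4} with a mild distortion a = 0.3 + 0.2i.
x = np.linspace(-np.pi, np.pi, 5)
print(mobius_link(x, beta=np.exp(1j * np.pi / 4), a=0.3 + 0.2j))
```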
The two-stage process of propensity score analysis (PSA) includes a design stage where propensity scores are estimated and implemented to approximate a randomized experiment and an analysis stage where treatment effects are estimated conditional upon the design. This paper considers how uncertainty associated with the design stage impacts estimation of causal effects in the analysis stage. Such design uncertainty can derive from the fact that the propensity score itself is an estimated quantity, but also from other features of the design stage tied to choice of propensity score implementation. This paper offers a procedure for obtaining the posterior distribution of causal effects after marginalizing over a distribution of design-stage outputs, lending a degree of formality to Bayesian methods for PSA (BPSA) that have gained attention in recent literature. Formulation of a probability distribution for the design-stage output depends on how the propensity score is implemented in the design stage, and propagation of uncertainty into causal estimates depends on how the treatment effect is estimated in the analysis stage. We explore these differences within a sample of commonly-used propensity score implementations (quantile stratification, nearest-neighbor matching, caliper matching, inverse probability of treatment weighting, and doubly robust estimation) and investigate in a simulation study the impact of statistician choice in PS model and implementation on the degree of between- and within-design variability in the estimated treatment effect. The methods are then deployed in an investigation of the association between levels of fine particulate air pollution and elevated exposure to emissions from coal-fired power plants.
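A rough sketch of propagating design-stage uncertainty into the analysis stage, using inverse probability of treatment weighting as the propensity-score implementation. Posterior draws of the propensity-score coefficients are approximated here by bootstrap refits of a logistic propensity model; this is a simplified stand-in for the formal Bayesian procedure described in the abstract, and all function and variable names are our own.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def bpsa_iptw(X, t, y, n_draws=200, seed=0):
    """Propagate design-stage (propensity-score) uncertainty into an
    IPTW estimate of the average treatment effect.

    Each design draw refits the propensity model on a resampled data set
    (a bootstrap stand-in for posterior draws of the PS coefficients) and
    recomputes the weighted effect on the original sample; the spread of
    the draws reflects between-design variability.
    """
    rng = np.random.default_rng(seed)
    n = len(y)
    effects = []
    for _ in range(n_draws):
        idx = rng.integers(0, n, n)                       # design-stage draw
        ps_model = LogisticRegression(max_iter=1000).fit(X[idx], t[idx])
        e = ps_model.predict_proba(X)[:, 1].clip(0.01, 0.99)
        w = t / e + (1 - t) / (1 - e)                     # IPT weights
        ate = (np.sum(w * t * y) / np.sum(w * t)
               - np.sum(w * (1 - t) * y) / np.sum(w * (1 - t)))
        effects.append(ate)
    return np.mean(effects), np.std(effects)

# Simulated example with a single confounder x and true effect 2.0.
rng = np.random.default_rng(1)
x = rng.normal(size=(500, 1))
t = rng.binomial(1, 1 / (1 + np.exp(-x[:, 0])))
y = 2.0 * t + x[:, 0] + rng.normal(size=500)
print(bpsa_iptw(x, t, y))
```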
This paper discusses the challenges presented by tall-data problems in Bayesian classification (specifically binary classification) and the existing methods for handling them, including parallelizing the likelihood, subsampling, and consensus Monte Carlo. A new method based on the two-stage Metropolis-Hastings algorithm is also proposed, with the aim of reducing the cost of exact likelihood computation in the tall-data setting. In the first stage, a new proposal is tested against a model based on an approximate likelihood; the full-likelihood-based posterior computation is carried out only if the proposal passes this first-stage screening. Furthermore, the method can be adopted within the consensus Monte Carlo framework. The two-stage method is applied to logistic regression, hierarchical logistic regression, and Bayesian multivariate adaptive regression splines.
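The screening logic can be written down compactly. The sketch below is a generic delayed-acceptance (two-stage) Metropolis-Hastings step with a symmetric random-walk proposal: a cheap approximate posterior (for example, one built from a subsample of the data) screens each proposal, the full posterior is evaluated only for proposals that survive the screen, and a second acceptance ratio corrects for the approximation so the chain still targets the exact posterior. The function names and toy targets are illustrative, not the paper's implementation.

```python
import numpy as np

def two_stage_mh(log_post_approx, log_post_full, theta0, n_iter=1000,
                 step=0.1, seed=0):
    """Two-stage (delayed-acceptance) Metropolis-Hastings with a symmetric
    random-walk proposal."""
    rng = np.random.default_rng(seed)
    theta = np.atleast_1d(np.asarray(theta0, dtype=float))
    la, lf = log_post_approx(theta), log_post_full(theta)
    samples = []
    for _ in range(n_iter):
        prop = theta + step * rng.standard_normal(theta.shape)
        la_prop = log_post_approx(prop)
        # Stage 1: cheap screen using the approximate posterior.
        if np.log(rng.uniform()) < la_prop - la:
            lf_prop = log_post_full(prop)
            # Stage 2: expensive evaluation; the ratio of ratios corrects
            # for the approximation, so the exact posterior is the target.
            if np.log(rng.uniform()) < (lf_prop - lf) - (la_prop - la):
                theta, la, lf = prop, la_prop, lf_prop
        samples.append(theta.copy())
    return np.array(samples)

# Toy check: full target N(0, 1), deliberately biased approximation N(0.2, 1.2^2).
full = lambda th: -0.5 * np.sum(th ** 2)
approx = lambda th: -0.5 * np.sum(((th - 0.2) / 1.2) ** 2)
draws = two_stage_mh(approx, full, theta0=[0.0], n_iter=5000, step=1.0)
print(draws.mean(), draws.std())   # should be close to 0 and 1
```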
Non-homogeneous Poisson processes are used in a wide range of scientific disciplines, ranging from the environmental sciences to the health sciences. Often, the central object of interest in a point process is the underlying intensity function. Here, we present a general model for the intensity function of a non-homogeneous Poisson process using measure transport. The model is built from a flexible bijective mapping that maps from the underlying intensity function of interest to a simpler reference intensity function. We enforce bijectivity by modeling the map as a composition of multiple simple bijective maps, and show that the model exhibits an important approximation property. Estimation of the flexible mapping is accomplished within an optimization framework, wherein computations are efficiently done using recent technological advances in deep learning and a graphics processing unit. Although we find that intensity function estimates obtained with our method are not necessarily superior to those obtained using conventional methods, the modeling representation brings with it other advantages such as facilitated point process simulation and uncertainty quantification. Modeling point processes in higher dimensions is also facilitated using our approach. We illustrate the use of our model on both simulated data, and a real data set containing the locations of seismic events near Fiji since 1964.
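A one-dimensional sketch of the measure-transport representation: a bijective increasing map T pushes a constant reference intensity forward, giving lambda(s) = lambda_ref * T'(s), so the integral term of the Poisson-process log-likelihood is available in closed form. The single tanh-based map and the thinning-based simulator below are deliberate simplifications; the paper composes multiple learned bijections and fits them with deep-learning tooling, and none of the parameter values here come from it.

```python
import numpy as np

lam_ref = 50.0
a, b, c = 0.8, 6.0, 0.5          # hypothetical map parameters (a, b > 0)

# Increasing bijection T of [0, 1] and the induced intensity on [0, 1].
T = lambda s: s + a * np.tanh(b * (s - c))
T_prime = lambda s: 1.0 + a * b / np.cosh(b * (s - c)) ** 2
intensity = lambda s: lam_ref * T_prime(s)

def log_lik(x):
    """Poisson-process log-likelihood: sum_i log lambda(x_i) - integral of
    lambda over [0, 1], where the integral equals lam_ref * (T(1) - T(0))."""
    return np.sum(np.log(intensity(x))) - lam_ref * (T(1.0) - T(0.0))

# Simulate from the model by thinning a homogeneous process at the maximum
# intensity (a generic NHPP simulator, not specific to the transport model).
rng = np.random.default_rng(0)
lam_max = intensity(c)
cand = rng.uniform(0, 1, rng.poisson(lam_max))
points = cand[rng.uniform(0, lam_max, cand.size) < intensity(cand)]
print(points.size, log_lik(points))
```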
We give an expository review of applications of computational algebraic statistics to the design and analysis of fractional factorial experiments, based on our recent works. For design, the techniques of Gröbner bases and indicator functions allow us to treat fractional factorial designs without distinction between regular and non-regular designs. For the analysis of data from fractional factorial designs, the techniques of Markov bases allow us to handle discrete observations. The approach of computational algebraic statistics thus greatly enlarges the scope of fractional factorial designs.
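As a small illustration of the indicator-function idea (a standard textbook example, not taken from the review): the regular 2^(3-1) half fraction with defining relation x1*x2*x3 = +1 has indicator polynomial f(x) = (1 + x1*x2*x3) / 2 on the full 2^3 design, so f equals 1 exactly on the runs belonging to the fraction and 0 elsewhere. The snippet below verifies this.

```python
from itertools import product

# Indicator polynomial of the 2^(3-1) fraction defined by x1*x2*x3 = +1.
indicator = lambda x1, x2, x3: (1 + x1 * x2 * x3) / 2

full_design = list(product([-1, 1], repeat=3))
fraction = [run for run in full_design if indicator(*run) == 1]
print(fraction)
# [(-1, -1, 1), (-1, 1, -1), (1, -1, -1), (1, 1, 1)]
```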
