
A bootstrap method for estimating bias and variance in statistical multispecies models using highly disparate data sets

Posted by: Gunnar Stefansson
Date published: 2012
Research field: Mathematical statistics
Paper language: English





Statistical multispecies models of multiarea marine ecosystems use a variety of data sources to estimate parameters through composite or weighted likelihood functions, which raises questions about how the components should be weighted and how variance estimates can be obtained. Regardless of the method used to obtain point estimates, a method is needed for variance estimation. A bootstrap technique is introduced for evaluating uncertainty in such models. By taking into account the inherent spatial and temporal correlations in the data sets, it avoids many model-specification issues that are commonly carried over as assumptions from a likelihood estimation procedure into Hessian-based variance estimation procedures. The technique is demonstrated on a real data set and used to look for estimation bias and the effects of different aggregation levels in population dynamics models.
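The abstract does not give implementation details, so the following is only a minimal Python sketch of the general idea: a moving-block bootstrap that resamples contiguous blocks of years, preserving temporal correlation within each block, and uses the replicates to estimate bias and standard error. The `fit_model` stand-in, the AR(1) toy series, and the block length of 5 years are illustrative assumptions, not the paper's actual multispecies estimator or data; in the paper's setting each replicate would refit the full composite-likelihood model to jointly resampled data sources.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_model(data):
    """Stand-in for the multispecies point estimator; here just a mean.
    In the paper's setting this would be a full composite-likelihood fit."""
    return data.mean()

def block_bootstrap(data, block_len, n_boot=999):
    """Resample contiguous blocks of years with replacement so that
    temporal correlation within a block is preserved in each replicate."""
    n = len(data)
    n_blocks = int(np.ceil(n / block_len))
    estimates = np.empty(n_boot)
    for b in range(n_boot):
        starts = rng.integers(0, n - block_len + 1, size=n_blocks)
        sample = np.concatenate([data[s:s + block_len] for s in starts])[:n]
        estimates[b] = fit_model(sample)
    return estimates

# Synthetic AR(1) survey index standing in for one correlated data source.
n_years = 40
noise = rng.normal(size=n_years)
index = np.empty(n_years)
index[0] = noise[0]
for t in range(1, n_years):
    index[t] = 0.6 * index[t - 1] + noise[t]

theta_hat = fit_model(index)
boot = block_bootstrap(index, block_len=5)
print("point estimate:", theta_hat)
print("bootstrap bias:", boot.mean() - theta_hat)
print("bootstrap s.e.:", boot.std(ddof=1))
```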




Read also

Anonymized smartphone-based mobility data has been widely adopted in devising and evaluating COVID-19 response strategies such as the targeting of public health resources. Yet little attention has been paid to measurement validity and demographic bias, due in part to the lack of documentation about which users are represented, as well as the challenge of obtaining ground-truth data on unique visits and demographics. We illustrate how linking large-scale administrative data can enable auditing mobility data for bias in the absence of demographic information and ground-truth labels. More precisely, we show that linking voter roll data, which contains individual-level voter turnout for specific voting locations along with race and age, can facilitate the construction of rigorous bias and reliability tests. These tests illuminate a sampling bias that is particularly noteworthy in the pandemic context: older and non-white voters are less likely to be captured by mobility data. We show that allocating public health resources based on such mobility data could disproportionately harm high-risk elderly and minority groups.
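As a hedged illustration of such an audit, the sketch below compares mobility-derived visit counts against ground-truth turnout per demographic group; a capture rate well below that of other groups signals the kind of sampling bias the study reports. The table, column names, and numbers are hypothetical, not the study's linked data.

```python
import pandas as pd

# Hypothetical linked table: one row per voting location x demographic group,
# with ground-truth turnout from voter rolls and visits inferred from mobility data.
linked = pd.DataFrame({
    "location": ["A", "A", "B", "B"],
    "group":    ["white_under_65", "nonwhite_65plus"] * 2,
    "turnout":  [500, 200, 800, 350],          # ground truth from voter rolls
    "mobility_visits": [430, 120, 690, 200],   # inferred from smartphone pings
})

# Capture rate per group: systematically lower rates for a group
# indicate that its members are under-represented in the mobility data.
rates = (linked.groupby("group")[["mobility_visits", "turnout"]].sum()
               .assign(capture_rate=lambda d: d.mobility_visits / d.turnout))
print(rates)
```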
Environmental processes resolved at a sufficiently small scale in space and time will inevitably display non-stationary behavior. Such processes are both challenging to model and computationally expensive when the data size is large. Instead of modeling the global non-stationarity explicitly, local models can be applied to disjoint regions of the domain. The choice of the size of these regions is dictated by a bias-variance trade-off; large regions will have smaller variance and larger bias, whereas small regions will have higher variance and smaller bias. From both the modeling and computational points of view, small regions are preferable to better accommodate the non-stationarity. However, in practice, large regions are necessary to control the variance. We propose a novel Bayesian three-step approach that allows for smaller regions without the increase in variance that would otherwise follow. We are able to propagate the uncertainty from one step to the next without issues caused by reusing the data. The improvement in inference also results in improved prediction, as our simulated example shows. We illustrate this new approach on a data set of simulated high-resolution wind speed data over Saudi Arabia.
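The three-step Bayesian method itself is not reproduced here; the sketch below only illustrates, on synthetic data, the underlying bias-variance trade-off of fitting independent local models to disjoint regions, with the region count as the tuning knob. The signal, noise level, and local linear fits are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Non-stationary toy process: a trend whose slope drifts across the domain.
x = np.linspace(0, 10, 500)
truth = np.sin(x) * x
y = truth + rng.normal(scale=0.5, size=x.size)

def local_fits(x, y, n_regions):
    """Fit an independent linear model in each disjoint region.
    Fewer regions -> lower variance, more bias; more regions -> the reverse."""
    edges = np.linspace(x.min(), x.max(), n_regions + 1)
    preds = np.empty_like(y)
    for i in range(n_regions):
        m = (x >= edges[i]) & (x <= edges[i + 1])
        coef = np.polyfit(x[m], y[m], deg=1)
        preds[m] = np.polyval(coef, x[m])
    return preds

for k in (2, 10, 50):
    mse = np.mean((local_fits(x, y, k) - truth) ** 2)
    print(f"{k:>3} regions: MSE vs truth = {mse:.3f}")
```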
Jianfeng Wang, Jun Yu (2021)
This study investigated the effect of harsh winter climate on the performance of high-speed passenger trains in northern Sweden. Novel approaches based on heterogeneous statistical models were introduced to analyse train performance in order to take the time-varying risks of train delays into consideration. Specifically, a stratified Cox model and a heterogeneous Markov chain model were used for modelling primary delays and arrival delays, respectively. Our results showed that weather variables including temperature, humidity, snow depth, and ice/snow precipitation have a significant impact on train performance.
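As a rough sketch of the first of those models, the code below fits a stratified Cox proportional hazards model with the lifelines Python library, giving each route section its own baseline hazard while sharing weather effects across strata. The synthetic data, the covariates kept, and the choice of stratum are assumptions for illustration, not the study's actual specification.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(2)
n = 200

# Synthetic train runs: colder temperatures and deeper snow raise delay risk.
temperature = rng.normal(-5, 8, n)
snow_depth = rng.gamma(2.0, 10.0, n)
risk = 0.02 * snow_depth - 0.03 * temperature
latent_time = rng.exponential(60.0, n) / np.exp(risk - risk.mean())

runs = pd.DataFrame({
    "time": np.minimum(latent_time, 120.0),        # censor at 120 minutes
    "delayed": (latent_time < 120.0).astype(int),  # event indicator
    "temperature": temperature,
    "snow_depth": snow_depth,
    "route_section": rng.choice(["north", "south"], n),
})

# Stratified Cox model: each route section keeps its own baseline hazard,
# while the weather coefficients are shared across strata.
cph = CoxPHFitter()
cph.fit(runs, duration_col="time", event_col="delayed", strata=["route_section"])
cph.print_summary()
```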
Segmented regression is a standard statistical procedure used to estimate the effect of a policy intervention on time series outcomes. This statistical method assumes normality of the outcome variable, a large sample size, no autocorrelation in the observations, and a linear trend over time. Segmented regression is also very sensitive to outliers. In a small-sample study, if the outcome variable does not follow a Gaussian distribution, then using segmented regression to estimate the intervention effect leads to incorrect inferences. To address the small-sample problem and non-normality in the outcome variable, including outliers, we describe and develop a robust statistical method to estimate the policy intervention effect in a series of longitudinal data. A simulation study is conducted to demonstrate the effect of outliers and non-normality in the outcomes by calculating the power of the test statistics under the segmented regression and the proposed robust statistical methods. Moreover, since finding the sampling distribution of the proposed robust statistic is analytically difficult, we use a nonparametric bootstrap technique to study the properties of the sampling distribution and make statistical inferences. Simulation studies show that the proposed method has more power than the standard t-test used in segmented regression analysis under a non-normal error distribution. Finally, we use the developed technique to estimate the intervention effect of the Istanbul Declaration on illegal organ activities. The robust method detected more significant effects than the standard method and provided shorter confidence intervals.
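A minimal sketch of the bootstrap step, under stated assumptions: the robust statistic is stood in for by a difference of medians between pre- and post-intervention segments (the paper's own statistic differs), and its sampling distribution is approximated by resampling each segment with replacement.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic monthly counts with heavy-tailed noise and a post-intervention drop.
pre = 50 + rng.standard_t(df=2, size=24) * 5
post = 40 + rng.standard_t(df=2, size=24) * 5

def robust_effect(pre, post):
    """Median difference: a stand-in robust statistic; the bootstrap
    machinery is the same for the paper's own statistic."""
    return np.median(post) - np.median(pre)

obs = robust_effect(pre, post)

# Nonparametric bootstrap: resample each segment with replacement,
# since the sampling distribution of the statistic is analytically hard.
boot = np.array([
    robust_effect(rng.choice(pre, pre.size, replace=True),
                  rng.choice(post, post.size, replace=True))
    for _ in range(2000)
])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"effect = {obs:.2f}, 95% bootstrap CI = ({lo:.2f}, {hi:.2f})")
```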
Data competitions rely on real-time leaderboards to rank competitor entries and stimulate algorithm improvement. While such competitions have become quite popular and prevalent, particularly in supervised learning formats, their implementations by the host are highly variable. Without careful planning, a supervised learning competition is vulnerable to overfitting, where the winning solutions are so closely tuned to the particular set of provided data that they cannot generalize to the underlying problem of interest to the host. This paper outlines some important considerations for strategically designing relevant and informative data sets to maximize the learning outcome from hosting a competition, based on our experience. It also describes a post-competition analysis that enables robust and efficient assessment of the strengths and weaknesses of solutions from different competitors, as well as greater understanding of the regions of the input space that are well solved. The post-competition analysis, which complements the leaderboard, uses exploratory data analysis and generalized linear models (GLMs). The GLMs not only expand the range of results we can explore, they also provide more detailed analysis of individual sub-questions, including similarities and differences between algorithms across different types of scenarios, universally easy or hard regions of the input space, and different learning objectives. When coupled with a strategically planned data generation approach, the methods provide richer and more informative summaries to enhance the interpretation of results beyond just the rankings on the leaderboard. The methods are illustrated with a recently completed competition to evaluate algorithms capable of detecting, identifying, and locating radioactive materials in an urban environment.
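A hedged sketch of the post-competition GLM idea, assuming the statsmodels library: a logistic GLM relates per-scenario success to algorithm identity and a scenario descriptor, exposing which conditions are easy or hard beyond a single leaderboard score. The data frame, the `shielding` descriptor, and the formula are hypothetical, not the competition's actual variables.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical per-entry results: whether each competitor's algorithm
# solved each generated scenario, plus a scenario descriptor.
results = pd.DataFrame({
    "solved":    [1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0],
    "algorithm": ["A", "A", "A", "B", "B", "B"] * 2,
    "shielding": ["low", "high"] * 6,
})

# Logistic GLM: which algorithms succeed on which scenario types.
model = smf.glm("solved ~ algorithm + shielding", data=results,
                family=sm.families.Binomial()).fit()
print(model.summary())
```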