
A Bayesian approach for the analysis of error rate studies in forensic science

Added by Jessie Hendricks
Publication date: 2019
Language: English





Over the past decade, the field of forensic science has received recommendations from the National Research Council of the U.S. National Academy of Sciences, the U.S. National Institute of Standards and Technology, and the U.S. President's Council of Advisors on Science and Technology to study the validity and reliability of forensic analyses. More specifically, these committees recommend estimating the rates of occurrence of erroneous conclusions drawn from forensic analyses. Black box studies of the various subjective feature-based comparison methods are intended for this purpose. In general, black box studies have unbalanced designs, comparisons that are not independent, and missing data. These aspects complicate the analysis of the results and are often ignored. Instead, interpretation of the data relies on methods that assume independence between observations and a balanced experiment. Furthermore, all of these studies are interpreted within the frequentist framework and result in point estimates associated with confidence intervals that are confusing to communicate and understand. We propose to use an existing likelihood-free Bayesian inference method, called Approximate Bayesian Computation (ABC), that is capable of handling unbalanced designs, dependencies among the observations, and missing data. ABC allows for studying the parameters of interest without recourse to incoherent and misleading measures of uncertainty such as confidence intervals. By taking into account information from all decision categories for a given examiner, as well as information from the population of examiners, our method also allows the risk of error to be quantified for a given examiner even when no error has been recorded for that examiner. We illustrate the proposed method by reanalyzing the results of the Noblis Black Box study by Ulery et al. (2011).
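
For readers less familiar with ABC, here is a minimal sketch of its basic rejection flavor in Python. The toy binomial error model, the Beta prior, the tolerance, and all names are our own illustrative assumptions, not the authors' implementation: draw an error rate from the prior, simulate a study of the same size, and keep the draws whose simulated error count falls close to the observed one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting: an examiner performs n comparisons and errs with unknown
# rate theta. Hypothetical observed data: 2 errors in 100 comparisons.
n_obs, errors_obs = 100, 2

def simulate(theta, n):
    """Simulate the number of errors under a simple binomial model."""
    return rng.binomial(n, theta)

# ABC rejection sampling: draw theta from the prior, simulate data, and
# accept the draw if the simulated summary is close to the observed one.
n_draws, tolerance = 200_000, 1
accepted = []
for _ in range(n_draws):
    theta = rng.beta(1, 10)  # prior favoring small error rates
    if abs(simulate(theta, n_obs) - errors_obs) <= tolerance:
        accepted.append(theta)

accepted = np.array(accepted)
print(f"accepted draws: {accepted.size}")
print(f"posterior mean error rate: {accepted.mean():.4f}")
print(f"95% credible interval: ({np.quantile(accepted, 0.025):.4f}, "
      f"{np.quantile(accepted, 0.975):.4f})")
```

The accepted draws approximate the posterior distribution of the error rate, from which a credible interval, rather than a confidence interval, can be read off directly.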



Related Research

When a latent shoeprint is discovered at a crime scene, forensic analysts inspect it for distinctive patterns of wear, such as scratches and holes (known as accidentals), on the source shoe's sole. If its accidentals correspond to those of a suspect's shoe, the print can be used as forensic evidence to place the suspect at the crime scene. The strength of this evidence depends on the random match probability: the chance that a shoe chosen at random would match the crime scene print's accidentals. Evaluating random match probabilities requires an accurate model for the spatial distribution of accidentals on shoe soles. A recent report by the President's Council of Advisors on Science and Technology criticized existing models in the literature, calling for new empirically validated techniques. We respond to this request with a new spatial point process model for accidental locations, developed within a hierarchical Bayesian framework. We treat the tread pattern of each shoe as a covariate, allowing us to pool information across large heterogeneous databases of shoes. Existing models ignore this information; our results show that including it leads to significantly better model fit. We demonstrate this by fitting our model to one such database.
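
The core modeling idea, a spatial point process whose intensity varies over the shoe sole, can be illustrated with a short simulation sketch in Python. The intensity surface below is a toy assumption of ours (the paper's fitted model instead drives the intensity with tread-pattern covariates); the sketch uses Lewis-Shedler thinning to draw accidental locations from an inhomogeneous Poisson process on the unit square.

```python
import numpy as np

rng = np.random.default_rng(1)

def intensity(x, y):
    """Toy intensity surface: accidentals concentrate near the top of
    the unit square, standing in for a fitted, covariate-driven model."""
    return 50.0 * np.exp(-3.0 * (1.0 - y))

# Lewis-Shedler thinning: simulate a homogeneous Poisson process at the
# maximum intensity, then keep each point with probability
# intensity / lambda_max.
lambda_max = 50.0
n_prop = rng.poisson(lambda_max)  # proposal points on the unit square
xs, ys = rng.uniform(size=n_prop), rng.uniform(size=n_prop)
keep = rng.uniform(size=n_prop) < intensity(xs, ys) / lambda_max
print(f"kept {keep.sum()} accidental locations out of {n_prop} proposals")
```
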
With a majority of Yes votes in the Constitutional Referendum of 2017, Turkey continues its transition from democracy to autocracy. By the will of the Turkish people, this referendum transferred practically all executive power to President Erdogan. However, the referendum was confronted with a substantial number of allegations of electoral misconduct and irregularities, ranging from state coercion of No supporters to the controversial validity of unstamped ballots. In this note we report the results of an election forensic analysis of the 2017 referendum to clarify to what extent these voting irregularities were present and whether they were able to influence the outcome of the referendum. We apply novel statistical forensics tests to identify the specific nature of the electoral malpractices. In particular, we test whether the data contain fingerprints of ballot-stuffing (the submission of multiple ballots per person during the vote) and voter rigging (the coercion and intimidation of voters). Additionally, we perform tests to identify numerical anomalies in the election results. We find systematic and highly significant support for the presence of both ballot-stuffing and voter rigging. In 6% of stations we find signs of ballot-stuffing, with an error (the probability of ballot-stuffing not having occurred) of 0.15% (a 3-sigma event). The influence of these vote distortions was large enough to tip the overall balance from No to a majority of Yes votes.
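
As a quick back-of-the-envelope check on the quoted significance level (our own computation, not part of the study): under a Gaussian model, the one-sided tail probability of a 3-sigma event is about 0.13%, consistent with the 0.15% error probability reported above.

```python
from scipy.stats import norm

# One-sided tail probability of a 3-sigma event under a Gaussian model.
print(f"P(Z > 3) = {norm.sf(3.0):.5f}")  # ~0.00135, i.e. about 0.135%
```
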
Evergreens in science are papers that display a continual rise in annual citations without decline, at least within a sufficiently long time period. Aiming to better understand evergreens in particular, and patterns of citation trajectories in general, this paper develops a functional data analysis method to cluster the citation trajectories of a sample of 1699 research papers published in 1980 in the American Physical Society (APS) journals. We propose a functional Poisson regression model for individual papers' citation trajectories, and fit the model to the observed 30-year citation histories of individual papers by functional principal component analysis and maximum likelihood estimation. Based on the estimated paper-specific coefficients, we apply the K-means clustering algorithm to cluster papers into groups, in order to uncover general types of citation trajectories. The result demonstrates the existence of an evergreen cluster of papers that do not exhibit any decline in annual citations over 30 years.
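
A minimal sketch of the clustering step in Python follows. The synthetic trajectories, the crude log-linear per-paper fit (a stand-in for the paper's functional Poisson regression and principal component step), and all parameter choices are our own illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
years = np.arange(30)

# Synthetic annual citation counts for 300 papers drawn from three rough
# trajectory shapes: early peak, flat, and 'evergreen' (steady rise).
shapes = [
    5.0 * np.exp(-years / 5.0),  # early peak then decay
    np.full(30, 2.0),            # flat
    0.5 + 0.15 * years,          # evergreen: continual rise
]
counts = np.vstack([rng.poisson(shapes[i % 3]) for i in range(300)])

# Crude per-paper coefficients: slope and intercept of a log-linear fit,
# standing in for the estimated functional-Poisson coefficients.
coefs = np.array([np.polyfit(years, np.log(c + 0.5), deg=1) for c in counts])

# K-means on the paper-specific coefficients uncovers trajectory types.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(coefs)
for k in range(3):
    mean_traj = counts[labels == k].mean(axis=0)
    trend = "rising" if mean_traj[-5:].mean() > mean_traj[:5].mean() else "flat/declining"
    print(f"cluster {k}: {np.sum(labels == k)} papers, late-vs-early trend: {trend}")
```
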
We propose a hierarchical Bayesian model to estimate the proportional contribution of source populations to a newly founded colony. Samples are derived from the first-generation offspring in the colony, but mating may occur preferentially among migrants from the same source population. Genotypes of the newly founded colony and the source populations are used to estimate the mixture proportions, and the mixture proportions are related to environmental and demographic factors that might affect the colonizing process. We estimate an assortative mating coefficient, the mixture proportions, and regression relationships between environmental factors and the mixture proportions in a single hierarchical model. The first-stage likelihood for genotypes in the newly founded colony is a mixture multinomial distribution reflecting the colonizing process. The environmental and demographic data are incorporated into the model through a hierarchical prior structure. A simulation study is conducted to investigate the performance of the model under different levels of population divergence and different numbers of genetic markers included in the analysis. We use Markov chain Monte Carlo (MCMC) simulation to conduct inference for the posterior distributions of the model parameters. We apply the model to a data set derived from grey seals in the Orkney Islands, Scotland, and compare our model with a similar model previously used to analyze these data. The results from both the simulation and the application to real data indicate that our model provides better estimates of the covariate effects.
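
A heavily simplified sketch of the inference idea in Python: one biallelic locus, two source populations with known allele frequencies, and random-walk Metropolis sampling for the mixture proportion. All numbers and the pooled-allele likelihood are our own toy assumptions; the actual model handles full genotypes, assortative mating, and covariates.

```python
import numpy as np

rng = np.random.default_rng(3)

# Colony sample at one biallelic locus: k copies of the allele out of n.
# Sources A and B have known allele frequencies; we infer the mixture
# proportion p of source A by random-walk Metropolis with a flat prior.
freq_a, freq_b = 0.8, 0.2
n, k = 200, 110

def log_lik(p):
    mix = p * freq_a + (1 - p) * freq_b  # mixture allele frequency
    return k * np.log(mix) + (n - k) * np.log(1 - mix)

p, samples = 0.5, []
for _ in range(20_000):
    prop = p + rng.normal(scale=0.05)  # random-walk proposal
    if 0 < prop < 1 and np.log(rng.uniform()) < log_lik(prop) - log_lik(p):
        p = prop
    samples.append(p)

post = np.array(samples[5_000:])  # discard burn-in
print(f"posterior mean mixture proportion: {post.mean():.3f}")
```
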
Studying the determinants of adverse pregnancy outcomes such as stillbirth and preterm birth is of considerable interest in epidemiology. Understanding the role of both individual and community risk factors for these outcomes is crucial for planning appropriate clinical and public health interventions. With this goal, we develop geospatial mixed effects logistic regression models for adverse pregnancy outcomes. Our models account for both spatial autocorrelation and heterogeneity between neighborhoods. To mitigate the low incidence of stillbirth and preterm birth in our data, we explore class rebalancing techniques to improve predictive power. To assess the informative value of the covariates in our models, we use the posterior distributions of their coefficients to gauge how well they can be distinguished from zero. As a case study, we model stillbirth and preterm birth in the city of Philadelphia, incorporating both patient-level data from electronic health records (EHRs) and publicly available neighborhood data at the census tract level. We find that patient-level features such as self-identified race and ethnicity are highly informative for both outcomes. Neighborhood-level factors are also informative, with poverty important for stillbirth and crime important for preterm birth. Finally, we identify the neighborhoods in Philadelphia at highest risk of stillbirth and preterm birth.
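
To make the class-rebalancing idea concrete, here is a minimal sketch in Python that randomly oversamples the rare class before fitting a plain logistic regression. The synthetic data, the roughly 1-2% positive rate, and the choice of oversampling are our own illustrative assumptions; the models in the study additionally include spatial random effects.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)

# Synthetic stand-in for a rare outcome with one patient-level and one
# neighborhood-level covariate.
n = 10_000
X = rng.normal(size=(n, 2))
logits = -4.5 + 1.0 * X[:, 0] + 0.5 * X[:, 1]
y = rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logits))

# Rebalance by random oversampling of the minority (positive) class.
pos, neg = np.where(y)[0], np.where(~y)[0]
pos_up = rng.choice(pos, size=neg.size, replace=True)
idx = np.concatenate([neg, pos_up])

model = LogisticRegression().fit(X[idx], y[idx])
print(f"positives in raw data: {y.mean():.3%}")
print(f"coefficients fit on rebalanced data: {model.coef_.round(2)}")
```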