
Bayesian Fusion: Scalable unification of distributed statistical analyses

Posted by Hongsheng Dai, PhD
Publication date: 2021
Research field: Mathematical Statistics
Language: English





There has recently been considerable interest in the problem of unifying distributed statistical analyses into a single coherent inference. This problem arises naturally in a number of situations, including in big-data settings, when working under privacy constraints, and in Bayesian model choice. The majority of existing approaches rely upon convenient approximations of the distributed analyses. Although typically computationally efficient and readily scalable in the number of analyses being unified, approximate approaches can have significant shortcomings: the quality of the inference can degrade rapidly with the number of analyses being unified, and can be substantially biased even when unifying a small number of analyses that do not concur. In contrast, the recent Fusion approach of Dai et al. (2019) is a rejection sampling scheme which is readily parallelisable and exact (avoiding any form of approximation other than Monte Carlo error), albeit limited in applicability to unifying a small number of low-dimensional analyses. In this paper we introduce a practical Bayesian Fusion approach. We extend the theory underpinning the Fusion methodology and, by embedding it within a sequential Monte Carlo algorithm, recover the correct target distribution. Supported by extensive guidance on the implementation of the approach, we demonstrate theoretically and empirically that Bayesian Fusion is robust to increasing numbers of analyses and coherently unifies analyses which do not concur, while remaining computationally competitive with approximate schemes.
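To make the fusion problem concrete, the following is a minimal sketch (with illustrative numbers) of the basic setting rather than the paper's SMC-based algorithm: C Gaussian sub-posteriors whose product is the unified target. For Gaussians the fused distribution has a closed form, which can be used to check a simple importance sampler that targets the product.

```python
import numpy as np
from scipy import stats

# Toy fusion setting: C analyses each return a Gaussian "sub-posterior"
# f_c(x) = N(x; m_c, s_c^2), and the unified target is f(x) ∝ prod_c f_c(x).
# Illustrative only; this is not the Bayesian Fusion SMC algorithm.
rng = np.random.default_rng(0)
C = 4
m = rng.normal(0.0, 1.0, size=C)      # sub-posterior means
s = rng.uniform(0.5, 1.5, size=C)     # sub-posterior standard deviations

# Exact fusion for Gaussians: precision-weighted combination.
prec = 1.0 / s**2
var_f = 1.0 / prec.sum()
mean_f = var_f * (prec * m).sum()

# Monte Carlo check: importance sampling with sub-posterior 0 as proposal
# and weights given by the product of the remaining sub-posterior densities.
x = rng.normal(m[0], s[0], size=200_000)
logw = np.zeros_like(x)
for c in range(1, C):
    logw += stats.norm.logpdf(x, m[c], s[c])
w = np.exp(logw - logw.max())
w /= w.sum()
is_mean = np.sum(w * x)
is_var = np.sum(w * (x - is_mean) ** 2)
print("fused mean:", mean_f, "IS estimate:", is_mean)
print("fused var :", var_f, "IS estimate:", is_var)
```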




Read also

Bayesian causal inference offers a principled approach to policy evaluation of proposed interventions on mediators or time-varying exposures. We outline a general approach to the estimation of causal quantities for settings with time-varying confounding, such as exposure-induced mediator-outcome confounders. We further extend this approach to propose two Bayesian data fusion (BDF) methods for unmeasured confounding. Using informative priors on quantities relating to the confounding bias parameters, our methods incorporate data from an external source where the confounder is measured in order to make inferences about causal estimands in the main study population. We present results from a simulation study comparing our data fusion methods to two common frequentist correction methods for unmeasured confounding bias in the mediation setting. We also demonstrate our method with an investigation of the role of stage at cancer diagnosis in contributing to Black-White colorectal cancer survival disparities.
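As a rough, hypothetical illustration of the general idea of informative priors on bias parameters (not the paper's BDF estimators), the sketch below performs a simple Monte Carlo bias analysis: a confounded effect estimate from the main study is corrected by an additive bias parameter whose prior is informed by an external source in which the confounder was measured. All numbers are invented.

```python
import numpy as np

# Hypothetical Monte Carlo bias analysis for an unmeasured confounder.
# Not the BDF estimators from the paper; the additive bias model and the
# numbers below are purely illustrative.
rng = np.random.default_rng(1)

beta_obs_hat, beta_obs_se = 0.40, 0.08   # confounded effect estimate (main study)
bias_mu, bias_sd = 0.15, 0.05            # informative prior on the bias (external data)

draws = 100_000
beta_obs = rng.normal(beta_obs_hat, beta_obs_se, draws)  # sampling uncertainty
bias = rng.normal(bias_mu, bias_sd, draws)               # prior uncertainty about the bias
beta_adj = beta_obs - bias                               # bias-corrected effect

lo, hi = np.percentile(beta_adj, [2.5, 97.5])
print(f"adjusted effect: {beta_adj.mean():.3f} (95% interval {lo:.3f}, {hi:.3f})")
```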
Modern audience measurement requires combining observations from disparate panel datasets. Connecting and relating such panel datasets is a process termed panel fusion. This paper formalizes the panel fusion problem and presents a novel approach to solve it. We cast panel fusion as a network flow problem, allowing the application of a rich body of research. In the context of digital audience measurement, where panel sizes can grow into the tens of millions, we propose an efficient algorithm to partition the network into sub-problems. While the algorithm solves a relaxed version of the original problem, we provide conditions under which it guarantees optimality. We demonstrate our approach by fusing two real-world panel datasets in a distributed computing environment.
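A small sketch of the matching step on made-up data: when the two panels have equal size and links are one-to-one, panel fusion reduces to an assignment problem, a balanced special case of the min-cost network flow formulation described above. The code uses scipy's linear_sum_assignment and is not the paper's partitioning algorithm.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Toy panel fusion as one-to-one matching: link each record in panel A to a
# record in panel B by minimising the total distance on shared variables.
rng = np.random.default_rng(2)
panel_a = rng.normal(size=(5, 2))                      # 5 panellists, 2 shared variables
panel_b = panel_a[rng.permutation(5)] + rng.normal(0, 0.05, size=(5, 2))

cost = np.linalg.norm(panel_a[:, None, :] - panel_b[None, :, :], axis=2)
rows, cols = linear_sum_assignment(cost)               # optimal one-to-one assignment
for i, j in zip(rows, cols):
    print(f"panel A record {i} -> panel B record {j} (distance {cost[i, j]:.3f})")
```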
Handling big data has largely been a major bottleneck in traditional statistical models. Consequently, when accurate point prediction is the primary target, machine learning models are often preferred over their statistical counterparts for bigger problems. But full probabilistic statistical models often outperform other models in quantifying uncertainties associated with model predictions. We develop a data-driven statistical modeling framework that combines the uncertainties from an ensemble of statistical models learned on smaller subsets of data carefully chosen to account for imbalances in the input space. We demonstrate this method on a photometric redshift estimation problem in cosmology, which seeks to infer a distribution of the redshift -- the stretching effect in observing the light of far-away galaxies -- given multivariate color information observed for an object in the sky. Our proposed method performs balanced partitioning, graph-based data subsampling across the partitions, and training of an ensemble of Gaussian process models.
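The sketch below illustrates only the final step on synthetic one-dimensional data: an ensemble of Gaussian process regressors fitted to disjoint subsets, with their predictive distributions pooled as an equally weighted mixture. The balanced partitioning and graph-based subsampling of the paper are replaced here by a naive random split.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Ensemble of GPs on disjoint subsets of a synthetic 1-d problem, pooled
# as an equally weighted mixture of the predictive distributions.
rng = np.random.default_rng(3)
X = rng.uniform(0, 10, size=(300, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.2, size=300)

subsets = np.array_split(rng.permutation(300), 3)      # naive random partition
x_test = np.linspace(0, 10, 50)[:, None]
means, variances = [], []
for idx in subsets:
    gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
    gp.fit(X[idx], y[idx])
    mu, sd = gp.predict(x_test, return_std=True)
    means.append(mu)
    variances.append(sd ** 2)

means, variances = np.array(means), np.array(variances)
mix_mean = means.mean(axis=0)
# Mixture variance = mean within-model variance + spread of the model means.
mix_var = variances.mean(axis=0) + ((means - mix_mean) ** 2).mean(axis=0)
print(mix_mean[:5], np.sqrt(mix_var[:5]))
```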
Qipeng Liu, Jiuhua Zhao (2014)
In this paper, we discuss a class of distributed detection algorithms which can be viewed as implementations of Bayes' law in distributed settings. Some of the algorithms have been proposed recently in the literature, and others are first developed in this paper. The common feature of these algorithms is that they all combine (i) certain kinds of consensus protocols with (ii) Bayesian updates. They differ mainly in the type of consensus protocol and the order of the two operations. After discussing their similarities and differences, we compare these distributed algorithms by numerical examples, focusing on the rate at which they detect the underlying true state of an object. We find that (a) algorithms that reach consensus via a geometric average are more efficient than those using an arithmetic average; (b) the order of consensus aggregation and Bayesian update does not appear to materially influence the performance of the algorithms; (c) the existence of communication delay dramatically slows down the rate of convergence; and (d) more communication between agents with different signal structures improves the rate of convergence.
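As a rough illustration of the comparison in finding (a), the sketch below runs agents on a complete communication graph that alternate a local Bayesian update with either arithmetic or geometric (log-linear) averaging of their beliefs. The signal model and parameters are invented and do not reproduce the paper's experiments.

```python
import numpy as np

# Toy distributed detection: agents hold beliefs over two hypotheses,
# observe private binary signals, update via Bayes' rule, then average
# beliefs across all agents arithmetically or geometrically (log-linear).
rng = np.random.default_rng(4)
n_agents, true_state = 10, 1
# P(signal = 1 | hypothesis) differs across agents and hypotheses.
p_signal = rng.uniform(0.3, 0.7, size=(n_agents, 2))
p_signal[:, true_state] += 0.1

def run(geometric, steps=50):
    beliefs = np.full((n_agents, 2), 0.5)
    for _ in range(steps):
        signals = rng.random(n_agents) < p_signal[np.arange(n_agents), true_state]
        lik = np.where(signals[:, None], p_signal, 1 - p_signal)
        beliefs = beliefs * lik                                 # Bayesian update
        beliefs /= beliefs.sum(axis=1, keepdims=True)
        if geometric:                                           # geometric (log-linear) consensus
            pooled = np.exp(np.log(np.clip(beliefs, 1e-300, None)).mean(axis=0, keepdims=True))
        else:                                                   # arithmetic consensus
            pooled = beliefs.mean(axis=0, keepdims=True)
        beliefs = np.repeat(pooled / pooled.sum(axis=1, keepdims=True), n_agents, axis=0)
    return beliefs[0, true_state]

print("belief in true state, geometric consensus :", run(geometric=True))
print("belief in true state, arithmetic consensus:", run(geometric=False))
```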
Verifying that a statistically significant result is scientifically meaningful is not only good scientific practice, it is a natural way to control the Type I error rate. Here we introduce a novel extension of the p-value - a second-generation p-value - that formally accounts for scientific relevance and leverages this natural Type I error control. The approach relies on a pre-specified interval null hypothesis that represents the collection of effect sizes that are scientifically uninteresting or are practically null. The second-generation p-value is the proportion of data-supported hypotheses that are also null hypotheses. As such, second-generation p-values indicate when the data are compatible with null hypotheses, or with alternative hypotheses, or when the data are inconclusive. Moreover, second-generation p-values provide a proper scientific adjustment for multiple comparisons and reduce false discovery rates. This is an advance for environments rich in data, where traditional p-value adjustments are needlessly punitive. Second-generation p-values promote transparency, rigor and reproducibility of scientific results by a priori specifying which candidate hypotheses are practically meaningful and by providing a more reliable statistical summary of when the data are compatible with alternative or null hypotheses.
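A minimal sketch of the overlap computation behind this idea, assuming an interval estimate [est_lo, est_hi] and a non-degenerate interval null [null_lo, null_hi]; the function name and the handling of very wide intervals reflect one reading of the published definition and should be checked against the original paper.

```python
def second_generation_p(est_lo, est_hi, null_lo, null_hi):
    """Fraction of the interval estimate that is consistent with the interval null.

    Assumes a non-degenerate interval null. The cap applied when the interval
    estimate is more than twice as wide as the null follows one reading of the
    published definition (inconclusive intervals are bounded away from 1).
    """
    interval_len = est_hi - est_lo
    null_len = null_hi - null_lo
    overlap = max(0.0, min(est_hi, null_hi) - max(est_lo, null_lo))
    if interval_len == 0:                              # degenerate point estimate
        return float(null_lo <= est_lo <= null_hi)
    if interval_len > 2 * null_len:                    # very wide, inconclusive interval
        return overlap / (2 * null_len)
    return overlap / interval_len

# Example: interval estimate (0.10, 0.50) against interval null (-0.20, 0.20).
print(second_generation_p(0.10, 0.50, -0.20, 0.20))    # 0.25
```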