
A Python Library For Empirical Calibration

Added by Xiaojing Wang
Publication date: 2019
Language: English





Dealing with biased data samples is a common task across many statistical fields. In survey sampling, bias often occurs due to unrepresentative samples. In causal studies with observational data, the treated versus untreated group assignment is often correlated with covariates, i.e., not random. Empirical calibration is a generic weighting method that presents a unified view on correcting or reducing the data biases for the tasks mentioned above. We provide a Python library EC to compute the empirical calibration weights. The problem is formulated as convex optimization and solved efficiently in the dual form. Compared to existing software, EC is both more efficient and robust. EC also accommodates different optimization objectives, supports weight clipping, and allows inexact calibration, which improves usability. We demonstrate its usage across various experiments with both simulated and real-world data.
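For concreteness, here is a minimal usage sketch on synthetic data. The entry point `maybe_exact_calibrate` and the `Objective.ENTROPY` option are taken from the library's public repository (github.com/google/empirical_calibration); names may differ between versions, so check the current documentation.

```python
import numpy as np
import empirical_calibration as ec  # pip install empirical-calibration (assumed package name)

rng = np.random.default_rng(0)

# A biased sample whose covariate means are shifted away from the target population.
sample_x = rng.normal(loc=0.5, scale=1.0, size=(1000, 3))
target_x = rng.normal(loc=0.0, scale=1.0, size=(10000, 3))

# Solve for calibration weights so the weighted covariate means of the sample
# match the target means; the function falls back to inexact calibration if
# exact balance is infeasible.
weights, success = ec.maybe_exact_calibrate(
    covariates=sample_x,
    target_covariates=target_x,
    objective=ec.Objective.ENTROPY,  # entropy balancing; a quadratic objective is also supported
)

# Weighted sample means should now be close to the target means.
print(success)
print(weights @ sample_x)
print(target_x.mean(axis=0))
```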



Related research

We introduce hyppo, a unified library for performing multivariate hypothesis testing, including independence, two-sample, and k-sample testing. While many multivariate independence tests have R packages available, the interfaces are inconsistent and most are not available in Python. hyppo includes many state-of-the-art multivariate testing procedures. The package is easy to use and is flexible enough to enable future extensions. The documentation and all releases are available at https://hyppo.neurodata.io.
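As a brief sketch of the interface, the snippet below runs hyppo's distance correlation (Dcorr) independence test on synthetic dependent data; the class and method names follow the project's documentation and may vary across versions.

```python
import numpy as np
from hyppo.independence import Dcorr  # pip install hyppo

rng = np.random.default_rng(1)

# x and y are dependent: y is a noisy nonlinear function of x.
x = rng.uniform(-1, 1, size=100)
y = x ** 2 + 0.1 * rng.normal(size=100)

# Distance correlation test of independence; the p-value is obtained
# by permutation by default.
stat, pvalue = Dcorr().test(x, y)
print(stat, pvalue)
```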
Han Fu (2021)
Health economic evaluations often require predictions of survival rates beyond the follow-up period. Parametric survival models can be more convenient for economic modelling than the Cox model. The generalized gamma (GG) and generalized F (GF) distributions are extensive families that contain almost all commonly used distributions with various hazard shapes and arbitrary complexity. In this study, we present a new SAS macro for implementing a wide variety of flexible parametric models, including the GG and GF distributions and their special cases, as well as the Gompertz distribution. Proper custom distributions are also supported. Unlike existing SAS procedures, this macro supports regression not only on the location parameter but also on ancillary parameters, which greatly increases model flexibility. In addition, the SAS macro supports weighted regression, stratified regression, and robust inference. This study demonstrates with several examples how the SAS macro can be used for flexible survival modeling and extrapolation.
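The macro itself is SAS, but the core idea, fitting a generalized gamma model and extrapolating survival beyond the follow-up window, can be illustrated with a Python analog using SciPy. This is a sketch on synthetic, uncensored data only; a real analysis would need to handle censoring, which is omitted here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Synthetic survival times from a Weibull-like process, observed over a
# roughly 5-year follow-up window (censoring ignored for brevity).
times = rng.weibull(1.5, size=500) * 3.0

# The generalized gamma nests the Weibull, gamma, and log-normal, so one
# fitted family can represent many hazard shapes.
a, c, loc, scale = stats.gengamma.fit(times, floc=0)

# Extrapolation: predicted survival probabilities beyond follow-up.
horizon = np.array([5.0, 10.0, 15.0])
print(stats.gengamma.sf(horizon, a, c, loc=loc, scale=scale))
```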
Many modern statistical applications involve inference for complicated stochastic models for which the likelihood function is difficult or even impossible to calculate, and hence conventional likelihood-based inferential techniques cannot be used. In such settings, Bayesian inference can be performed using Approximate Bayesian Computation (ABC). However, despite many recent developments in ABC methodology, in many applications the computational cost of ABC necessitates the choice of summary statistics and tolerances that can potentially severely bias the estimate of the posterior. We propose a new piecewise ABC approach suitable for discretely observed Markov models that involves writing the posterior density of the parameters as a product of factors, each a function of only a subset of the data, and then using ABC within each factor. The approach has the advantage of side-stepping the need to choose a summary statistic, and it enables a stringent tolerance to be set, making the posterior less approximate. We investigate two methods for estimating the posterior density based on ABC samples for each of the factors: the first is to use a Gaussian approximation for each factor, and the second is to use a kernel density estimate. Both methods have their merits. The Gaussian approximation is simple, fast, and probably adequate for many applications. On the other hand, using instead a kernel density estimate has the benefit of consistently estimating the true ABC posterior as the number of ABC samples tends to infinity. We illustrate the piecewise ABC approach for three examples; in each case, the approach enables exact matching between simulations and data and offers fast and accurate inference.
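The factor-wise idea can be sketched on a toy discretely observed Markov chain, using the Gaussian approximation per factor under a flat prior. The model and all names below are illustrative assumptions, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(3)
theta_true, sigma, T, eps, n_draws = 0.7, 0.5, 50, 0.05, 20000

# Simulate an observed AR(1)-type Markov chain: x_t = theta * x_{t-1} + noise.
x = np.zeros(T + 1)
for t in range(T):
    x[t + 1] = theta_true * x[t] + sigma * rng.normal()

# Piecewise ABC: one ABC run per transition factor p(x_{t+1} | x_t, theta).
# eps and n_draws are chosen so each factor retains plenty of draws.
means, variances = [], []
for t in range(T):
    theta = rng.uniform(-1.0, 1.0, size=n_draws)      # draws from a flat prior
    x_sim = theta * x[t] + sigma * rng.normal(size=n_draws)
    accepted = theta[np.abs(x_sim - x[t + 1]) < eps]  # match this factor's datum only
    means.append(accepted.mean())
    variances.append(accepted.var())

# Combine factors via the Gaussian approximation: under a flat prior, the
# product of the factor Gaussians approximates the full posterior.
prec = 1.0 / np.array(variances)
post_mean = np.sum(prec * np.array(means)) / prec.sum()
post_sd = prec.sum() ** -0.5
print(f"posterior ~ N({post_mean:.3f}, {post_sd:.3f}^2), truth = {theta_true}")
```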
The R package CVEK introduces a suite of flexible machine learning models and robust hypothesis tests for learning the joint nonlinear effects of multiple covariates in limited samples. It implements the Cross-validated Ensemble of Kernels (CVEK; Liu and Coull 2017), an ensemble-based kernel machine learning method that adaptively learns the joint nonlinear effect of multiple covariates from data, and provides powerful hypothesis tests for both main effects of features and interactions among features. The R package CVEK provides a flexible, easy-to-use implementation of CVEK, and offers a wide range of choices for the kernel family (for instance, polynomial, radial basis function, Matérn, neural network, and others), model selection criteria, ensembling method (averaging, exponential weighting, cross-validated stacking), and the type of hypothesis test (asymptotic or parametric bootstrap). Through extensive simulations we demonstrate the validity and robustness of this approach, and provide practical guidelines on how to design an estimation strategy for optimal performance in different data scenarios.
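The package is R, but the ensemble idea, fitting one kernel machine per candidate kernel and weighting them by cross-validated error, can be sketched in Python with scikit-learn. This is illustrative only and is not the CVEK package's API; the exponential-weighting temperature is an arbitrary choice.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)

# A nonlinear joint effect of two covariates in a small sample.
X = rng.normal(size=(100, 2))
y = np.sin(X[:, 0]) * X[:, 1] + 0.2 * rng.normal(size=100)

# Candidate kernel family: one kernel machine per kernel.
kernels = {
    "rbf": KernelRidge(kernel="rbf", alpha=1.0),
    "polynomial": KernelRidge(kernel="polynomial", degree=3, alpha=1.0),
    "laplacian": KernelRidge(kernel="laplacian", alpha=1.0),
}

# Exponential weighting of cross-validated errors (one of the ensembling
# strategies the abstract mentions).
cv_mse = {name: -cross_val_score(m, X, y, cv=5,
                                 scoring="neg_mean_squared_error").mean()
          for name, m in kernels.items()}
w = np.exp(-np.array(list(cv_mse.values())) / 0.1)
w /= w.sum()

# Ensemble prediction: CV-weighted average of the fitted kernel machines.
preds = np.column_stack([m.fit(X, y).predict(X) for m in kernels.values()])
print(dict(zip(kernels, np.round(w, 3))))
print("ensemble train MSE:", np.mean((preds @ w - y) ** 2))
```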
K. L. Mengersen (2012)
Approximate Bayesian computation (ABC) has become an essential tool for the analysis of complex stochastic models when the likelihood function is numerically unavailable. However, the well-established statistical method of empirical likelihood provides another route to such settings that bypasses simulations from the model and the choices of the ABC parameters (summary statistics, distance, tolerance), while being convergent in the number of observations. Furthermore, bypassing model simulations may lead to significant time savings in complex models, for instance those found in population genetics. The BCel algorithm we develop in this paper also provides an evaluation of its own performance through an associated effective sample size. The method is illustrated using several examples, including estimation of standard distributions, time series, and population genetics models.
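A toy sketch of the BCel idea for the simplest case, inferring a population mean, where the empirical likelihood has a one-dimensional dual problem. This is illustrative, not the paper's implementation: draws from the prior are weighted by their empirical likelihood, and the effective sample size of the weighted draws gives the self-diagnostic the abstract mentions.

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(5)
data = rng.normal(loc=2.0, scale=1.0, size=100)   # observed sample

def log_el(mu):
    """Log empirical likelihood ratio for the mean, via the dual problem:
    w_i = 1 / (n * (1 + lam * (x_i - mu))), with lam solving the score equation."""
    d = data - mu
    if d.min() >= 0 or d.max() <= 0:      # mu outside the data hull: EL is zero
        return -np.inf
    lo = -1.0 / d.max() + 1e-8            # keep 1 + lam * d_i > 0 for all i
    hi = -1.0 / d.min() - 1e-8
    lam = brentq(lambda l: np.sum(d / (1.0 + l * d)), lo, hi)
    return -np.sum(np.log1p(lam * d))

# BCel-style weighted sample: draw from the prior, weight by empirical likelihood.
thetas = rng.uniform(0.0, 4.0, size=5000)         # flat prior on the mean
logw = np.array([log_el(t) for t in thetas])
w = np.exp(logw - logw.max())
w /= w.sum()

# Posterior summary and effective sample size of the weighted draws.
post_mean = np.sum(w * thetas)
ess = 1.0 / np.sum(w ** 2)
print(f"posterior mean ~ {post_mean:.3f}, ESS ~ {ess:.0f}")
```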