
Generalized inverse xgamma distribution: A non-monotone hazard rate model

 Added by Sumit Kumar
 Publication date 2018
Language: English





In this article, a generalized inverse xgamma distribution (GIXGD) is introduced as a generalized version of the inverse xgamma distribution. The proposed model exhibits a non-monotone hazard rate and belongs to the family of positively skewed models. Explicit expressions for several distributional properties, such as moments, inverse moments, conditional moments, mean deviation, and the quantile function, are derived. Maximum likelihood estimation is used to estimate the unknown model parameters as well as the survival characteristics of the GIXGD. The practical applicability of the proposed model is illustrated with survival data on guinea pigs.
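The abstract does not reproduce the GIXGD density, so the following is an illustrative sketch only: it fits a standard inverse-gamma model (a different positively skewed distribution that also has a non-monotone, upside-down-bathtub hazard) by numerical maximum likelihood and traces its hazard h(t) = f(t)/S(t). The stand-in distribution and all variable names are assumptions, not the paper's model.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(42)

# Simulate positively skewed "survival times" from an inverse-gamma stand-in
# (NOT the GIXGD itself, whose density is not given in the abstract).
a_true, scale_true = 3.0, 2.0
t = stats.invgamma.rvs(a_true, scale=scale_true, size=2000, random_state=rng)

def nll(params):
    """Negative log-likelihood of the inverse-gamma stand-in."""
    a, scale = params
    if a <= 0 or scale <= 0:
        return np.inf
    return -np.sum(stats.invgamma.logpdf(t, a, scale=scale))

res = optimize.minimize(nll, x0=[1.0, 1.0], method="Nelder-Mead")
a_hat, scale_hat = res.x

# Hazard h(t) = f(t) / S(t): for this model it rises and then falls,
# i.e. it is non-monotone (upside-down bathtub shaped).
grid = np.linspace(0.05, 10.0, 400)
h = (stats.invgamma.pdf(grid, a_hat, scale=scale_hat)
     / stats.invgamma.sf(grid, a_hat, scale=scale_hat))
```

The same numerical-MLE recipe applies to the GIXGD once its log-density is coded in place of `invgamma.logpdf`.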



Related research


This paper proposes a new probability distribution, named the inverse xgamma distribution (IXGD). Different mathematical and statistical properties, viz., reliability characteristics, moments, inverse moments, stochastic ordering, and order statistics of the proposed distribution are derived and discussed. Estimation of the IXGD parameter is approached through several methods: maximum likelihood estimation (MLE), least squares estimation (LSE), weighted least squares estimation (WLSE), Cramér-von Mises estimation (CME), and maximum product spacing estimation (MPSE). An asymptotic confidence interval (ACI) for the parameter is also obtained. A simulation study compares the performance of the estimators and the corresponding ACIs in terms of average widths and coverage probabilities. Finally, two real data sets demonstrate the applicability of the IXGD in real-life situations.
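To make the contrast between two of the listed estimation methods concrete, here is a minimal sketch comparing MLE with CDF-based least squares estimation on an exponential model rather than the IXGD (whose density is not reproduced in this abstract); the exponential choice, the plotting positions i/(n+1), and all names are illustrative assumptions.

```python
import numpy as np
from scipy import optimize

rng = np.random.default_rng(0)
theta_true = 2.0
# Exponential sample with rate theta (numpy parameterizes by scale = 1/rate).
x = rng.exponential(1.0 / theta_true, size=1000)

# MLE: for the exponential rate the maximizer has a closed form, 1 / sample mean.
theta_mle = 1.0 / x.mean()

# LSE: choose theta so the model CDF tracks the empirical plotting positions
# i/(n+1) at the ordered observations.
xs = np.sort(x)
pp = np.arange(1, len(xs) + 1) / (len(xs) + 1)

def lse_loss(theta):
    return np.sum((1.0 - np.exp(-theta * xs) - pp) ** 2)

theta_lse = optimize.minimize_scalar(lse_loss, bounds=(1e-6, 50.0),
                                     method="bounded").x
```

WLSE, CME, and MPSE follow the same pattern with a different objective function in place of `lse_loss`.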
A population-averaged additive subdistribution hazard model is proposed to assess the marginal effects of covariates on the cumulative incidence function when analyzing correlated failure time data subject to competing risks. This approach extends the population-averaged additive hazard model by accommodating potentially dependent censoring due to competing events other than the event of interest. Assuming an independent working correlation structure, an estimating equations approach is considered to estimate the regression coefficients, and a sandwich variance estimator is proposed. The sandwich variance estimator accounts for both the correlations between failure times and those between censoring times, and is robust to misspecification of the unknown dependency structure within each cluster. We further develop goodness-of-fit tests to assess the adequacy of the additive structure of the subdistribution hazard for each covariate, as well as for the overall model. Simulation studies are carried out to investigate the performance of the proposed methods in finite samples, and we illustrate our methods by analyzing the STrategies to Reduce Injuries and Develop confidence in Elders (STRIDE) study.
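The core idea of an independence working model plus a cluster-robust sandwich variance can be sketched with a much simpler estimating equation than the paper's subdistribution hazard model. The sketch below uses ordinary least squares on clustered Gaussian data, not competing-risks data; the data-generating mechanism and all names are assumptions made only to show the A⁻¹BA⁻¹ sandwich construction.

```python
import numpy as np

rng = np.random.default_rng(1)
n_clusters, m = 200, 4
beta_true = np.array([1.0, 0.5])

# Clustered data: a shared random effect b induces within-cluster correlation.
X_list, y_list = [], []
for _ in range(n_clusters):
    X = np.column_stack([np.ones(m), rng.normal(size=m)])
    b = rng.normal(scale=0.7)
    y = X @ beta_true + b + rng.normal(scale=0.5, size=m)
    X_list.append(X)
    y_list.append(y)

X = np.vstack(X_list)
y = np.concatenate(y_list)

# Independence working model: solve the OLS estimating equation X'(y - Xb) = 0.
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)

# Sandwich variance A^{-1} B A^{-1}, with B built from per-cluster scores
# so that within-cluster correlation is accounted for.
A = X.T @ X
B = np.zeros((2, 2))
for Xi, yi in zip(X_list, y_list):
    ui = Xi.T @ (yi - Xi @ beta_hat)   # cluster-level score contribution
    B += np.outer(ui, ui)
A_inv = np.linalg.inv(A)
V = A_inv @ B @ A_inv
se = np.sqrt(np.diag(V))

# Naive model-based standard errors ignore the clustering.
sigma2 = np.sum((y - X @ beta_hat) ** 2) / (len(y) - 2)
naive_se = np.sqrt(sigma2 * np.diag(A_inv))
```

With a cluster random effect in play, the sandwich standard error for the intercept exceeds the naive one, which is exactly the robustness the paper's estimator provides in the competing-risks setting.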
Vivien Goepp, 2018
In epidemiological or demographic studies with variable age at onset, a typical quantity of interest is the incidence of a disease (for example, cancer incidence). In these studies, the individuals are usually highly heterogeneous in terms of dates of birth (the cohort) and with respect to calendar time (the period), and appropriate estimation methods are needed. In this article, a new estimation method is presented which extends classical age-period-cohort analysis by allowing interactions between age, period, and cohort effects. The paper introduces a bidimensional regularized estimate of the hazard rate, where a penalty is added to the likelihood of the model. This penalty can be designed either to smooth the hazard rate or to force consecutive values of the hazard to be equal, leading to a parsimonious representation of the hazard rate. In the latter case, an iterative penalized likelihood scheme is used to approximate the L0 norm, which makes the computation tractable. The method is evaluated on simulated data and applied to breast cancer survival data from the SEER program.
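The iterative scheme for approximating an L0 penalty can be sketched in one dimension as an adaptive ridge: repeatedly solve a weighted ridge problem on the consecutive differences, then set each weight to 1/((Δβ)² + ε²), which drives small differences to zero and fuses the estimate into constant pieces. This is a minimal sketch under assumed values of λ and ε, on a toy piecewise-constant signal rather than a bidimensional hazard surface.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy piecewise-constant "hazard" observed with noise: one jump at index 40.
truth = np.concatenate([np.full(40, 1.0), np.full(40, 3.0)])
y = truth + rng.normal(scale=0.3, size=80)
n = len(y)

# First-difference matrix: (D @ beta)[j] = beta[j+1] - beta[j].
D = np.diff(np.eye(n), axis=0)

lam, eps = 5.0, 1e-2          # assumed tuning constants
w = np.ones(n - 1)
beta = y.copy()
for _ in range(50):
    # Weighted ridge step: (I + lam * D' W D) beta = y.
    beta = np.linalg.solve(np.eye(n) + lam * D.T @ (w[:, None] * D), y)
    # Reweighting: small differences get huge weights and are fused,
    # large differences get small weights and survive -> approximate L0.
    w = 1.0 / (np.diff(beta) ** 2 + eps ** 2)
```

At convergence the estimate is essentially two constant segments with a single jump, which is the parsimonious representation the paper seeks on its age-period-cohort grid.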
In many applications there is interest in estimating the relation between a predictor and an outcome when the relation is known to be monotone or otherwise constrained due to the physical processes involved. We consider one such application: inferring time-resolved aerosol concentration from a low-cost differential pressure sensor. The objective is to estimate a monotone function and make inference on the scaled first derivative of the function. We propose Bayesian nonparametric monotone regression, which uses a Bernstein polynomial basis to construct the regression function and puts a Dirichlet process prior on the regression coefficients. The base measure of the Dirichlet process is a finite mixture of a mass point at zero and a truncated normal. This construction imposes monotonicity while clustering the basis functions. Clustering the basis functions reduces the parameter space and allows the estimated regression function to be linear. With the proposed approach we can make closed-form inference on the derivative of the estimated function, including full quantification of uncertainty. In a simulation study the proposed method performs similarly to other monotone regression approaches when the true function is wavy but performs better when the true function is linear. We apply the method to estimate time-resolved aerosol concentration with a newly developed portable aerosol monitor. The R package bnmr is made available to implement the method.
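The key device, a Bernstein basis with monotonicity imposed through the coefficients, can be illustrated without the Bayesian machinery: a Bernstein expansion is nondecreasing on [0, 1] whenever its coefficients are nondecreasing, so writing the coefficients as cumulative sums of nonnegative increments and solving a nonnegative least squares problem yields a monotone fit. This frequentist sketch (degree, data, and names all assumed) stands in for the paper's Dirichlet-process prior.

```python
import math
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(3)
x = np.sort(rng.uniform(size=200))
y = x ** 2 + rng.normal(scale=0.05, size=200)   # monotone truth plus noise

deg = 8
# Bernstein basis B_{k,deg}(x) = C(deg,k) x^k (1-x)^(deg-k).
B = np.column_stack([math.comb(deg, k) * x ** k * (1 - x) ** (deg - k)
                     for k in range(deg + 1)])

# theta = cumsum(gamma) with gamma >= 0 makes theta nondecreasing,
# hence the fitted Bernstein polynomial is nondecreasing on [0, 1].
L = np.tril(np.ones((deg + 1, deg + 1)))
gamma, _ = nnls(B @ L, y)
theta = L @ gamma
fit = B @ theta
```

The derivative of the fit is itself a Bernstein polynomial in the differences of `theta`, which is why the coefficient parameterization also gives direct access to the scaled first derivative the paper makes inference on.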
The Cox regression model and its associated hazard ratio (HR) are frequently used for summarizing the effect of treatments on time-to-event outcomes. However, the HR's interpretation strongly depends on the assumed underlying survival model. The challenge of interpreting the HR has been the focus of a number of recent works, and several alternative measures have been proposed to address these concerns. Marginal Cox regression models include an identifiable hazard ratio with a population-level, rather than individual-level, causal interpretation. In this work, we study the properties of one particular marginal Cox regression model and consider its estimation in the presence of an omitted confounder. We prove the large-sample consistency of an estimating score that allows non-binary treatments. Our Monte Carlo simulations suggest that the finite-sample behavior of the procedure is adequate. The studied estimator is more robust than its competitors for weak instruments, although it is slightly more biased for large treatment effects. The practical use of the presented techniques is illustrated with a real example using data from the Vascular Quality Initiative registry. The R code used is provided as Supplementary Material.
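For readers unfamiliar with how an HR is estimated at all, here is a minimal sketch of the ordinary Cox partial likelihood for a single binary treatment with no censoring and no ties; it is not the paper's marginal or instrumental-variable estimator, and the simulation setup is an assumption made for illustration.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(4)
n = 500
z = rng.integers(0, 2, size=n).astype(float)      # binary treatment indicator
beta_true = 0.7                                    # log hazard ratio
# Proportional hazards: hazard exp(beta*z), so scale = 1/exp(beta*z).
t = rng.exponential(1.0 / np.exp(beta_true * z))

order = np.argsort(t)
z_sorted = z[order]

def nll(beta):
    """Negative log partial likelihood (no ties, no censoring)."""
    eta = beta * z_sorted
    # Risk set at the i-th ordered event time = subjects i, i+1, ..., n-1,
    # so the denominators are reverse cumulative sums of exp(eta).
    denom = np.cumsum(np.exp(eta)[::-1])[::-1]
    return -np.sum(eta - np.log(denom))

beta_hat = minimize_scalar(nll, bounds=(-3.0, 3.0), method="bounded").x
hr = np.exp(beta_hat)
```

The paper's point is precisely that `hr` from such a fit inherits its meaning from the assumed survival model, and that with an omitted confounder a different (marginal) estimating score is needed.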
