
A New Non-Linear Conjugate Gradient Algorithm for Destructive Cure Rate Model and a Simulation Study: Illustration with Negative Binomial Competing Risks

 Added by Souvik Roy
 Publication date 2019
Language: English





In this paper, we propose a new estimation methodology based on a projected non-linear conjugate gradient (PNCG) algorithm with an efficient line search technique. We develop a general PNCG algorithm for a survival model incorporating a cured proportion under a competing risks setup, where the initial number of competing risks is exposed to elimination after an initial treatment (known as destruction). In the literature, the expectation maximization (EM) algorithm has been widely used to estimate the parameters of such models. Through an extensive Monte Carlo simulation study, we compare the performance of our proposed PNCG algorithm with that of the EM algorithm and demonstrate the advantages of our proposed method. Through simulation, we also show the advantages of our proposed methodology over other optimization algorithms (including other conjugate gradient type methods) readily available as R software packages. For these comparisons, we assume the initial number of competing risks to follow a negative binomial distribution, although our general algorithm allows one to work with any competing risks distribution. Finally, we apply our proposed algorithm to analyze a well-known melanoma dataset.
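The abstract does not spell out the algorithmic details, but the core idea of a projected NCG method, conjugate gradient search directions combined with a projection onto the feasible parameter set and a backtracking line search, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the Fletcher-Reeves coefficient, the Armijo-style sufficient-decrease test, and the steepest-descent restart are standard but illustrative choices, and `f`, `grad`, and `project` are user-supplied placeholders rather than the cure rate likelihood itself.

```python
import numpy as np

def pncg_minimize(f, grad, project, x0, tol=1e-6, max_iter=1000):
    """Projected nonlinear conjugate gradient (Fletcher-Reeves) with a
    backtracking line search and steepest-descent restarts.  Minimizes f
    over the set that `project` maps onto."""
    x = project(np.asarray(x0, dtype=float))
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        pg = x - project(x - g)            # projected-gradient residual
        if np.linalg.norm(pg) < tol:       # first-order stationarity
            break
        # Backtracking line search with a sufficient-decrease test.
        t, fx, dec = 1.0, f(x), 1e-4 * float(pg @ pg)
        while t > 1e-12 and f(project(x + t * d)) > fx - t * dec:
            t *= 0.5
        if t <= 1e-12:                     # line search failed: restart
            d = -g
            continue
        x_new = project(x + t * d)
        g_new = grad(x_new)
        beta = float(g_new @ g_new) / float(g @ g)   # Fletcher-Reeves
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x
```

For instance, minimizing a quadratic over the nonnegative orthant (with `project` as a clip to zero) drives the iterates to the constrained optimum; in the actual model, `x` would collect the cure rate and lifetime parameters and `project` would enforce their range restrictions.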



Related research

Suvra Pal, 2020
In this paper, a long-term survival model under competing risks is considered. The unobserved number of competing risks is assumed to follow a negative binomial distribution that can capture both over- and under-dispersion. Considering the latent competing risks as missing data, a variation of the well-known expectation maximization (EM) algorithm, called the stochastic EM algorithm (SEM), is developed. It is shown that the SEM algorithm avoids calculation of complicated expectations, which is a major advantage over the EM algorithm. The proposed procedure also allows the objective function to be split into two simpler functions, one corresponding to the parameters associated with the cure rate and the other corresponding to the parameters associated with the progression times. The advantage of this approach is that each simple function, with lower parameter dimension, can be maximized independently. An extensive Monte Carlo simulation study is carried out to compare the performances of the SEM and EM algorithms. Finally, a breast cancer survival dataset is analyzed and it is shown that the SEM algorithm performs better than the EM algorithm.
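The SEM idea, replacing the E-step expectation with a single random draw of the latent data followed by a complete-data M-step, is easiest to see on a toy model. The sketch below uses a two-component Gaussian mixture with known unit variances rather than the negative binomial cure rate model itself; the S-step samples the latent labels from their conditional distribution, and post-burn-in draws are averaged, as is customary for SEM.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: two-component Gaussian mixture, unit variances.
n, true_pi = 2000, 0.4
z_true = rng.random(n) < true_pi
y = np.where(z_true, rng.normal(4.0, 1.0, n), rng.normal(0.0, 1.0, n))

def sem_fit(y, n_iter=200, burn_in=100):
    """Stochastic EM: impute latent labels by sampling from their
    conditional distribution (S-step), then maximize the complete-data
    likelihood in closed form (M-step).  No expectations are computed."""
    pi, mu0, mu1 = 0.5, y.min(), y.max()
    trace = []
    for it in range(n_iter):
        # S-step: draw labels given the current parameters.
        p1 = pi * np.exp(-0.5 * (y - mu1) ** 2)
        p0 = (1.0 - pi) * np.exp(-0.5 * (y - mu0) ** 2)
        z = rng.random(y.size) < p1 / (p0 + p1)
        # M-step: closed-form complete-data maximizers.
        pi = z.mean()
        mu0, mu1 = y[~z].mean(), y[z].mean()
        if it >= burn_in:
            trace.append((pi, mu0, mu1))
    return np.mean(trace, axis=0)

pi_hat, mu0_hat, mu1_hat = sem_fit(y)
```

The same scheme carries over to the cure rate setting by sampling the latent competing-risk counts instead of mixture labels, at which point the complete-data objective factors into the two simpler functions the abstract describes.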
Suvra Pal, Souvik Roy, 2019
In this paper, we develop a new estimation procedure based on the non-linear conjugate gradient (NCG) algorithm for the Box-Cox transformation cure rate model. We compare the performance of the NCG algorithm with the well-known expectation maximization (EM) algorithm through a simulation study and show the advantages of the NCG algorithm over the EM algorithm. In particular, we show that the NCG algorithm allows simultaneous maximization of all model parameters when the likelihood surface is flat with respect to the Box-Cox model parameter. This is a big advantage over the EM algorithm, where a profile likelihood approach has been proposed in the literature that may not provide satisfactory results. Finally, we use the NCG algorithm to analyze a well-known melanoma dataset and show that it results in a better fit.
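For reference, the Box-Cox transformation that indexes this family of cure rate models is the standard one, with its continuous limit at λ = 0; in this family λ = 1 recovers the mixture cure model and λ → 0 the promotion time cure model. A minimal sketch of the transformation (the flatness issue arises because the likelihood varies slowly in λ near this limit):

```python
import numpy as np

def box_cox(s, lam):
    """Box-Cox transformation (s**lam - 1)/lam, with the usual
    continuous limit log(s) as lam -> 0."""
    s = np.asarray(s, dtype=float)
    if abs(lam) < 1e-12:
        return np.log(s)
    return (s ** lam - 1.0) / lam
```

The near-continuity at λ = 0 (e.g. `box_cox(2.0, 1e-6)` is within about 1e-6 of `log 2`) is exactly what makes profiling over λ delicate and simultaneous maximization attractive.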
We apply Gaussian process (GP) regression, which provides a powerful non-parametric probabilistic method of relating inputs to outputs, to survival data consisting of time-to-event and covariate measurements. In this context, the covariates are regarded as the 'inputs' and the event times as the 'outputs'. This allows for highly flexible inference of non-linear relationships between covariates and event times. Many existing methods, such as the ubiquitous Cox proportional hazards model, focus primarily on the hazard rate, which is typically assumed to take some parametric or semi-parametric form. Our proposed model belongs to the class of accelerated failure time models, where we focus on directly characterising the relationship between covariates and event times without any explicit assumptions on the form the hazard rates take. It is straightforward to include various types and combinations of censored and truncated observations. We apply our approach to both simulated and experimental data. We then apply multiple-output GP regression, which can handle multiple potentially correlated outputs for each input, to competing risks survival data where multiple event types can occur. By tuning one of the model parameters we can control the extent to which the multiple outputs (the time-to-event for each risk) are dependent, thus allowing the specification of correlated risks. Simulation studies suggest that in some cases assuming dependence can lead to more accurate predictions.
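The basic building block, the GP posterior mean mapping covariates ('inputs') to event times ('outputs'), can be sketched in a few lines of numpy. This assumes a one-dimensional covariate, a squared-exponential kernel with fixed hyperparameters, and no censoring; the paper's censoring, truncation, and multi-output machinery is omitted.

```python
import numpy as np

def rbf_kernel(a, b, length=1.0, signal=1.0):
    """Squared-exponential covariance between 1-D input arrays a and b."""
    sq = (a[:, None] - b[None, :]) ** 2
    return signal ** 2 * np.exp(-0.5 * sq / length ** 2)

def gp_posterior_mean(x_train, y_train, x_test, noise=1e-2):
    """GP regression posterior mean: covariates in, (log) event times out.
    noise is the observation-noise variance added to the diagonal."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(x_train.size)
    return rbf_kernel(x_test, x_train) @ np.linalg.solve(K, y_train)
```

Fitting this to noise-free samples of a smooth curve recovers the curve at interpolation points; in the survival setting, event times (often on the log scale, in the accelerated failure time spirit) play the role of `y_train`.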
Optimization algorithms for solving nonconvex inverse problems have attracted significant interest recently. However, existing methods require the nonconvex regularization to be smooth or simple to ensure convergence. In this paper, we propose a novel gradient descent type algorithm, by leveraging the idea of residual learning and Nesterov's smoothing technique, to solve inverse problems consisting of general nonconvex and nonsmooth regularization with provable convergence. Moreover, we develop a neural network architecture imitating this algorithm to learn the nonlinear sparsity transformation adaptively from training data; the network also inherits the convergence guarantee while accommodating the general nonconvex structure of the learned transformation. Numerical results demonstrate that the proposed network outperforms the state-of-the-art methods on a variety of different image reconstruction problems in terms of efficiency and accuracy.
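The smoothing ingredient can be illustrated on a toy problem: Nesterov (Huber-type) smoothing of the l1 penalty yields a differentiable objective that plain gradient descent can minimize. This is a sketch of the smoothing idea only, under a simple convex l1 regularizer; the paper's residual-learning network and general nonconvex regularizers are not reproduced here.

```python
import numpy as np

def smoothed_l1_grad(x, mu):
    """Gradient of the Nesterov (Huber-type) smoothing of the l1 norm:
    quadratic inside [-mu, mu], linear outside."""
    return np.clip(x / mu, -1.0, 1.0)

def smoothed_gd(A, b, lam=0.1, mu=1e-3, n_iter=5000):
    """Plain gradient descent on 0.5*||Ax - b||^2 + lam * smoothed ||x||_1,
    with step size 1/L from a Lipschitz bound on the gradient."""
    L = np.linalg.norm(A, 2) ** 2 + lam / mu
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - b) + lam * smoothed_l1_grad(x, mu)
        x -= g / L
    return x
```

With `A` the identity, the minimizer is essentially the soft-thresholding of `b`, which is a convenient sanity check; small entries of `b` are shrunk to (near) zero while large entries are shifted by about `lam`.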
Zhou Shao, Tong Lin, 2021
Adaptive gradient methods, especially Adam-type methods (such as Adam, AMSGrad, and AdaBound), have been proposed to speed up the training process with an element-wise scaling term on the learning rates. However, they often generalize poorly compared with stochastic gradient descent (SGD) and its accelerated schemes such as SGD with momentum (SGDM). In this paper, we propose a new adaptive method called DecGD, which simultaneously achieves good generalization like SGDM and obtains rapid convergence like Adam-type methods. In particular, DecGD decomposes the current gradient into the product of two terms: a surrogate gradient and a loss-based vector. Our method adjusts the learning rates adaptively according to the current loss-based vector instead of the squared gradients used in Adam-type methods. The intuition behind DecGD's adaptive learning rates is that a good optimizer, in general, needs to decrease the learning rates as the loss decreases, which is similar to learning rate decay scheduling. DecGD therefore converges rapidly in the early phases of training and controls the effective learning rates according to the loss-based vectors, which helps lead to better generalization. Convergence analysis is provided for both convex and non-convex settings. Finally, empirical results on widely used tasks and models demonstrate that DecGD shows better generalization performance than SGDM and rapid convergence like Adam-type methods.
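DecGD's exact loss-based update is specific to the paper, but the Adam-type element-wise scaling it is contrasted with is standard and can be sketched as follows (one bias-corrected Adam step, with a toy quadratic as the usage example):

```python
import numpy as np

def adam_step(x, g, m, v, t, lr=0.01, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: momentum plus element-wise scaling by the
    bias-corrected running second moment of the gradient."""
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g * g
    m_hat = m / (1 - b1 ** t)           # bias-corrected first moment
    v_hat = v / (1 - b2 ** t)           # bias-corrected second moment
    return x - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

# Toy check: minimize f(x) = ||x||^2, whose gradient is 2x.
x, m, v = np.array([1.0, -1.0]), np.zeros(2), np.zeros(2)
for t in range(1, 2001):
    x, m, v = adam_step(x, 2.0 * x, m, v, t)
```

The `np.sqrt(v_hat)` denominator is the squared-gradient scaling the abstract refers to; DecGD replaces this element-wise statistic with a quantity driven by the current loss value instead.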
