
Survival Analysis with Graph-Based Regularization for Predictors

Posted by: Liyan Xie
Publication date: 2021
Research field: Mathematical Statistics
Paper language: English





We study the variable selection problem in survival analysis: identifying the most important factors affecting the survival time when there is prior knowledge that the variables are mutually correlated through a graph structure. We consider the Cox proportional hazards model with a graph-based regularizer for variable selection. A computationally efficient algorithm is developed to solve the graph-regularized maximum likelihood problem by connecting it to the group lasso. We provide theoretical guarantees on the recovery error and the asymptotic distribution of the proposed estimators. The good performance of the proposed approach, and its benefits over existing methods, are demonstrated on both synthetic and real data examples.
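As a rough illustration of this setup (a minimal sketch under assumed notation, not the authors' algorithm), the penalized objective can be written as the Cox negative log partial likelihood plus a graph-Laplacian smoothness penalty that encourages graph-adjacent predictors to share coefficients:

```python
import numpy as np

def neg_log_partial_likelihood(beta, X, time, event):
    # Cox negative log partial likelihood (Breslow convention,
    # no special handling of ties, for simplicity).
    order = np.argsort(-time)            # descending time: risk sets are prefixes
    X, event = X[order], event[order]
    eta = X @ beta
    # log sum_{j in risk set of i} exp(eta_j), computed stably
    log_cumsum = np.logaddexp.accumulate(eta)
    return -np.sum(event * (eta - log_cumsum))

def graph_penalty(beta, L, lam):
    # Laplacian penalty lam * beta^T L beta: for an edge (i, j),
    # it contributes lam * (beta_i - beta_j)^2.
    return lam * beta @ L @ beta

def objective(beta, X, time, event, L, lam):
    # Graph-regularized maximum likelihood objective (illustrative).
    return neg_log_partial_likelihood(beta, X, time, event) + graph_penalty(beta, L, lam)
```

Here `L` is the graph Laplacian of the assumed predictor graph and `lam` the regularization strength; both names are placeholders for this sketch.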


Read also

In many biomedical applications, the outcome is measured as a "time-to-event" (e.g., disease progression or death). To assess the connection between features of a patient and this outcome, it is common to assume a proportional hazards model and fit a proportional hazards regression (or Cox regression). To fit this model, a log-concave objective function known as the "partial likelihood" is maximized. For moderate-sized datasets, an efficient Newton-Raphson algorithm that leverages the structure of the objective can be employed. However, in large datasets this approach has two issues: 1) the computational tricks that leverage structure can also lead to computational instability; 2) the objective does not naturally decouple: thus, if the dataset does not fit in memory, the model can be very computationally expensive to fit. This additionally means that the objective is not directly amenable to stochastic gradient-based optimization methods. To overcome these issues, we propose a simple, new framing of proportional hazards regression that results in an objective function amenable to stochastic gradient descent. We show that this simple modification allows us to efficiently fit survival models with very large datasets. This also facilitates training complex, e.g., neural-network-based, models with survival data.
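One way this decoupling can be sketched (illustrative code under assumed notation, not the authors' implementation) is to evaluate the Cox partial likelihood on small random subsamples ("strata"); averaging these terms over random strata yields an objective that plain SGD can minimize:

```python
import numpy as np

rng = np.random.default_rng(0)

def stratum_loss_grad(beta, X, time, event):
    # Negative log partial likelihood and its gradient on ONE small
    # subsample; each stratum's term depends only on its own rows.
    order = np.argsort(-time)            # descending time: risk sets are prefixes
    X, event = X[order], event[order]
    eta = X @ beta
    w = np.exp(eta)
    cum_w = np.cumsum(w)                           # sum of weights over risk set
    cum_wx = np.cumsum(w[:, None] * X, axis=0)     # weighted covariate sums
    loss = -np.sum(event * (eta - np.log(cum_w)))
    grad = -np.sum(event[:, None] * (X - cum_wx / cum_w[:, None]), axis=0)
    return loss, grad

def sgd_cox(X, time, event, strata_size=5, lr=0.1, epochs=50):
    # SGD over random disjoint strata: no pass ever needs the full
    # dataset's risk sets in memory at once.
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(epochs):
        idx = rng.permutation(n)
        for s in range(0, n - strata_size + 1, strata_size):
            b = idx[s:s + strata_size]
            _, g = stratum_loss_grad(beta, X[b], time[b], event[b])
            beta -= lr * g / strata_size
    return beta
```

The stratum size, learning rate, and epoch count here are arbitrary illustrative choices.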
We prove uniform consistency of Random Survival Forests (RSF), a newly introduced forest ensemble learner for analysis of right-censored survival data. Consistency is proven under general splitting rules, bootstrapping, and random selection of variables -- that is, under true implementation of the methodology. A key assumption made is that all variables are factors. Although this assumes that the feature space has finite cardinality, in practice the space can be extremely large -- indeed, current computational procedures do not properly deal with this setting. An indirect consequence of this work is the introduction of new computational methodology for dealing with factors with an unlimited number of labels.
In this paper, we prove almost sure consistency of a survival analysis model which puts a Gaussian process, mapped to the unit interval, as a prior on the so-called hazard function. We assume our data are given by survival lifetimes $T$ belonging to $\mathbb{R}^{+}$ and covariates on $[0,1]^d$, where $d$ is an arbitrary dimension. We define an appropriate metric for survival functions and prove posterior consistency with respect to this metric. Our proof is based on an extension of the theorem of Schwartz (1965), which gives general conditions for proving almost sure consistency in the setting of non-i.i.d. random variables. Due to the nature of our data, several results for Gaussian processes on $\mathbb{R}^{+}$ are proved which may be of independent interest.
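One concrete way such a prior can be instantiated (the notation below is illustrative, not quoted from the paper) is to squash a Gaussian process onto the unit interval with a sigmoidal link and scale it into a bounded hazard, from which the survival function follows:

```latex
\[
g \sim \mathcal{GP}(0, k), \qquad
\sigma(u) = \frac{1}{1 + e^{-u}} \in (0, 1), \qquad
\lambda(t) = \lambda_{\max}\, \sigma\!\big(g(t)\big),
\]
\[
S(t) = \exp\!\Big(-\int_0^t \lambda(s)\, ds\Big).
\]
```

Here $k$ is an assumed covariance kernel and $\lambda_{\max}$ a fixed upper bound on the hazard.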
Holland and Leinhardt (1981) proposed a directed random graph model, the p1 model, to describe dyadic interactions in a social network. In previous work (Petrovic et al., 2010), we studied the algebraic properties of the p1 model and showed that it i s a toric model specified by a multi-homogeneous ideal. We conducted an extensive study of the Markov bases for p1 that incorporate explicitly the constraint arising from multi-homogeneity. Here we consider the properties of the corresponding toric variety and relate them to the conditions for the existence of the maximum likelihood and extended maximum likelihood estimators or the model parameters. Our results are directly relevant to the estimation and conditional goodness-of-fit testing problems in p1 models.
In non-convex settings, it is established that the behavior of gradient-based algorithms differs in the vicinity of local structures of the objective function, such as strict and non-strict saddle points and local and global minima and maxima. It is therefore crucial to describe the landscape of non-convex problems, that is, to describe as well as possible the set of points in each of the above categories. In this work, we study the landscape of the empirical risk associated with deep linear neural networks and the square loss. It is known that, under weak assumptions, this objective function has no spurious local minima and no local maxima. We go a step further and characterize, among all critical points, which are global minimizers, strict saddle points, and non-strict saddle points. We enumerate all the associated critical values. The characterization is simple, involves conditions on the ranks of partial matrix products, and sheds some light on the global convergence and implicit regularization phenomena that have been proved or observed when optimizing a linear neural network. In passing, we also provide an explicit parameterization of the set of all global minimizers and exhibit large sets of strict and non-strict saddle points.
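For concreteness, the empirical risk in question for a depth-$L$ linear network (standard notation, assumed here rather than quoted from the paper) is the square loss of the end-to-end matrix product applied to the data:

```latex
\[
\mathcal{L}(W_1, \dots, W_L)
\;=\;
\tfrac{1}{2}\,\big\| W_L W_{L-1} \cdots W_1 X - Y \big\|_F^2 ,
\]
```

where $X$ and $Y$ collect the inputs and targets; the ranks of the partial products $W_j \cdots W_1$ are the quantities that enter the characterization of critical points.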