High-throughput microarray and sequencing technologies have been used to identify disease subtypes that cannot be observed from clinical variables alone. The classical unsupervised clustering strategy is concerned primarily with identifying subpopulations that share similar patterns in gene features. However, because features corresponding to irrelevant confounders (e.g. gender or age) may dominate the clustering process, the resulting clusters may fail to capture clinically meaningful disease subtypes. This raises a fundamental question: can we find a subtyping procedure guided by a pre-specified disease outcome? Existing methods, such as supervised clustering, apply a two-stage approach and depend on an arbitrarily chosen number of features associated with the outcome. In this paper, we propose a unified latent generative model for outcome-guided disease subtyping from omics data, which improves the relevance of the resulting subtypes to the disease of interest. Feature selection is embedded in a regularized regression, and a modified EM algorithm is applied for numerical computation and parameter estimation. The proposed method performs feature selection, latent subtype characterization and outcome prediction simultaneously. To account for possible outliers or violations of the Gaussian mixture assumption, we incorporate robust estimation using the adaptive Huber or median-truncated loss function. Extensive simulations and an application to complex lung diseases with transcriptomic and clinical data demonstrate the ability of the proposed method to identify clinically relevant disease subtypes and signature genes worth exploring toward precision medicine.
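The core mechanism above, an EM algorithm for a Gaussian mixture with feature selection embedded via regularization, can be sketched as follows. This is a minimal illustrative sketch under simplifying assumptions (two components, diagonal covariance, a lasso-style soft-threshold on the centered component means); the outcome guidance and the robust Huber/median-truncated losses of the paper are omitted, and all function names are ours, not the paper's.

```python
import numpy as np

def soft_threshold(x, lam):
    # Lasso proximal step: shrink toward zero, zeroing small entries.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def penalized_mixture_em(X, lam=0.2, n_iter=100):
    """EM for a two-component diagonal Gaussian mixture whose centred
    component means are soft-thresholded at each M-step, so features
    carrying no subtype signal are driven exactly to zero (feature
    selection embedded in the estimation)."""
    n, p = X.shape
    center = X.mean(axis=0)
    Xc = X - center
    # Farthest-pair initialisation of the two component means.
    i = int(np.argmax(np.linalg.norm(Xc, axis=1)))
    j = int(np.argmax(np.linalg.norm(Xc - Xc[i], axis=1)))
    mu = Xc[[i, j]].copy()
    pi, sigma2 = 0.5, np.ones(p)
    for _ in range(n_iter):
        # E-step: responsibilities for component 0.
        logd = -0.5 * (((Xc[:, None, :] - mu[None]) ** 2) / sigma2).sum(-1)
        logw = logd + np.log([pi, 1.0 - pi])
        logw -= logw.max(axis=1, keepdims=True)
        w = np.exp(logw)
        r = w[:, 0] / w.sum(axis=1)
        # M-step: mixing weight, thresholded means, pooled variances.
        pi = r.mean()
        for k, rk in enumerate((r, 1.0 - r)):
            mu[k] = soft_threshold((rk[:, None] * Xc).sum(0) / rk.sum(), lam)
        resid = r[:, None] * (Xc - mu[0]) ** 2 + (1 - r)[:, None] * (Xc - mu[1]) ** 2
        sigma2 = resid.mean(axis=0) + 1e-8
    return r, mu + center
```

On synthetic data where only a few features separate the subtypes, the thresholding zeroes the mean difference on noise features while the EM recovers the cluster labels.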
We propose a framework for Bayesian non-parametric estimation of the rate at which new infections occur, assuming that the epidemic is only partially observed. The methodology relies on modelling this rate as a function of time alone. Two types of prior distribution are proposed, based on step functions and on B-splines respectively. The methodology is illustrated on both simulated and real datasets, and we show that certain aspects of the epidemic, such as seasonality and super-spreading events, are picked up without having to incorporate them explicitly into a parametric model.
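The step-function representation of the infection rate can be made concrete with a short sketch: simulating infection times from a piecewise-constant rate via Lewis-Shedler thinning. This illustrates only the rate model underlying the prior, not the paper's Bayesian inference machinery; the knot positions and levels below are arbitrary examples.

```python
import numpy as np

def step_rate(t, knots, levels):
    """Piecewise-constant infection rate: levels[i] applies on
    the interval [knots[i], knots[i+1])."""
    idx = np.searchsorted(knots, t, side="right") - 1
    return levels[np.clip(idx, 0, len(levels) - 1)]

def simulate_infections(knots, levels, T, rng):
    """Thinning (Lewis-Shedler): draw candidate event times from a
    homogeneous Poisson process at the maximum rate, then accept each
    candidate with probability rate(t) / max rate."""
    knots, levels = np.asarray(knots), np.asarray(levels)
    lam_max = levels.max()
    n_cand = rng.poisson(lam_max * T)
    cand = np.sort(rng.uniform(0.0, T, n_cand))
    keep = rng.uniform(0.0, lam_max, n_cand) < step_rate(cand, knots, levels)
    return cand[keep]
```

A higher level on the second interval produces proportionally more infections there, which is exactly the kind of structure (e.g. seasonality) the step-function prior can capture.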
Task-based functional magnetic resonance imaging (task fMRI) is a non-invasive technique for identifying brain regions whose activity changes when individuals are asked to perform a given task, contributing to our understanding of how the human brain is organized into functionally distinct subdivisions. Task fMRI experiments from high-resolution scans provide hundreds of thousands of longitudinal signals per individual, corresponding to measurements of brain activity at each voxel over the duration of the experiment. In this context, we propose visualization techniques for high-dimensional functional data that rely on depth-based notions, allow computationally efficient two-dimensional representations of task fMRI data, and shed light on sample composition, outlier presence and individual variability. We believe this step is a crucial precursor to any inferential approach that seeks to identify neuroscientific patterns across individuals, tasks and brain regions. We illustrate the proposed techniques through a simulation study and demonstrate their application to a motor and language task fMRI experiment.
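A standard depth-based notion for functional data of this kind is the modified band depth (MBD), which ranks curves by centrality and flags outliers. The sketch below is a direct, unoptimized implementation of the J=2 variant (each curve's depth averages, over all pairs of curves, the fraction of time it lies inside the band spanned by the pair); it illustrates the depth idea only, not the paper's specific visualization pipeline.

```python
import numpy as np

def modified_band_depth(X):
    """Modified band depth (J=2) for n curves on a common time grid.
    X has shape (n_curves, n_timepoints). Returns one depth per curve:
    the average, over curve pairs, of the fraction of time points at
    which the curve lies inside the pointwise band of the pair."""
    n, _ = X.shape
    depth = np.zeros(n)
    for i in range(n):
        for k in range(i + 1, n):
            lo = np.minimum(X[i], X[k])
            hi = np.maximum(X[i], X[k])
            inside = (X >= lo) & (X <= hi)   # (n, T) booleans
            depth += inside.mean(axis=1)
    return depth / (n * (n - 1) / 2.0)
```

A curve shifted far from the rest of the sample is inside no band except those of pairs containing itself, so it receives the minimum depth and is immediately visible as an outlier.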
Predicting risks of chronic diseases has become increasingly important in clinical practice. When a prediction model is developed in a given source cohort, there is often great interest in applying the model to other cohorts. However, due to potential discrepancies in baseline disease incidence between cohorts and shifts in patient composition, the risk predicted by the original model often under- or over-estimates the risk in the new cohort. Remedying such poorly calibrated predictions is necessary for proper medical decision-making. In this article, we assume the relative risks of predictors are the same between the two cohorts, and propose a novel weighted estimating equation approach to re-calibrate the projected risk for the target population by updating the baseline risk. The recalibration leverages knowledge of the overall survival probabilities for the disease of interest and competing events, together with summary information on risk factors from the target population. The proposed re-calibrated risk estimators gain efficiency if the risk factor distributions are the same in the source and target cohorts, and are robust with little bias if they differ. We establish the consistency and asymptotic normality of the proposed estimators. Extensive simulation studies demonstrate that the proposed estimators perform very well in terms of robustness and efficiency in finite samples. A real data application to colorectal cancer risk prediction also illustrates that the proposed method can be used in practice for model recalibration.
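The idea of updating only the baseline risk while holding the relative effects fixed can be illustrated with a much-simplified logistic-scale analogue (the paper's actual method uses weighted estimating equations in a survival setting with competing events): find the intercept shift that makes the model's average predicted risk match the target cohort's overall incidence. The function name and the bisection solver are ours, for illustration only.

```python
import numpy as np

def recalibrate_intercept(lin_pred, target_mean_risk, lo=-10.0, hi=10.0, tol=1e-10):
    """Solve for the intercept shift a such that the average of
    expit(a + lin_pred) over the target sample equals the target
    cohort's overall risk, keeping the relative effects (lin_pred)
    from the source model fixed. Mean risk is monotone in a, so a
    simple bisection suffices."""
    def mean_risk(a):
        return np.mean(1.0 / (1.0 + np.exp(-(a + lin_pred))))
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_risk(mid) < target_mean_risk:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

After the shift, the recalibrated risks average exactly to the target incidence while individuals keep their original risk ordering.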
Causal inference has become increasingly reliant on observational studies with rich covariate information. To build tractable causal models, including propensity score models, it is imperative to first extract important features from high-dimensional data. Unlike the familiar task of variable selection for prediction modeling, our feature selection procedure aims to control for confounding while maintaining efficiency in the resulting causal effect estimate. Previous empirical studies suggest that one should aim to include all predictors of the outcome, rather than of the treatment, in the propensity score model. In this paper, we formalize this intuition through rigorous proofs, and propose causal ball screening for selecting these variables from modern ultra-high-dimensional data sets. A distinctive feature of our proposal is that we do not require any modeling of the outcome regression, thus providing robustness against misspecification of the functional form or violation of smoothness conditions. Our theoretical analyses show that the proposed procedure enjoys a number of oracle properties, including model selection consistency, asymptotic normality and efficiency. Synthetic and real data analyses show that our proposal compares favorably with existing methods in a range of realistic settings.
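The screening step, keeping covariates that predict the outcome before fitting the propensity score model, can be sketched in a few lines. Note this is a simplified stand-in: the paper's method uses ball covariance as the marginal dependence measure, whereas the sketch below uses plain absolute correlation (sure-independence-screening style), and the function name is ours.

```python
import numpy as np

def outcome_screening(X, y, keep=10):
    """Rank features by absolute marginal correlation with the outcome
    y (not the treatment) and keep the top `keep` indices. A simplified
    correlation-based stand-in for ball-covariance screening."""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    ys = (y - y.mean()) / y.std()
    score = np.abs(Xs.T @ ys) / len(y)   # per-feature |correlation|
    return np.argsort(score)[::-1][:keep]
```

With many noise covariates and a few true outcome predictors, the screened set retains the true predictors with high probability, which is exactly the sure-screening behaviour the procedure requires.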
Using copulas to model dependence among variables extends the multivariate Gaussian assumption. In this paper we first study, empirically, copula regression models with a continuous response, presenting both a simulation study and a real-data study. We then propose a novel copula regression model with a binary outcome, together with a score-gradient estimation algorithm to fit it. Both a simulation study and a real-data study are given for the proposed model and fitting algorithm.
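The basic mechanism behind copula regression, separating the dependence structure from the marginal distributions, can be sketched for the Gaussian copula: push correlated normals through the normal CDF to obtain uniforms with the desired dependence, then apply any inverse marginal CDF. This illustrates the copula construction only, not the paper's regression model or its score-gradient fitting algorithm; the exponential margin below is an arbitrary example.

```python
import numpy as np
from math import erf, sqrt

def gaussian_copula_sample(n, rho, rng):
    """Draw (U1, U2) from a bivariate Gaussian copula with correlation
    rho: correlated standard normals mapped through the normal CDF have
    uniform margins but retain the Gaussian dependence structure."""
    cov = np.array([[1.0, rho], [rho, 1.0]])
    z = rng.multivariate_normal(np.zeros(2), cov, size=n)
    phi = np.vectorize(lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0))))
    return phi(z)   # shape (n, 2), each column Uniform(0, 1)

# Example: an exponential response dependent on a uniform covariate,
# coupled through a Gaussian copula with rho = 0.7.
rng = np.random.default_rng(0)
u = gaussian_copula_sample(20000, 0.7, rng)
x = u[:, 0]                 # Uniform(0, 1) covariate
y = -np.log1p(-u[:, 1])     # Exponential(1) response via inverse CDF
```

The margins are non-Gaussian (here exponential) yet x and y remain strongly positively dependent, which is exactly what a joint Gaussian model cannot express.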