
Connecting the dots across time: Reconstruction of single cell signaling trajectories using time-stamped data

Posted by: Jayajit Das
Publication date: 2016
Research field: Biology, Physics
Paper language: English





Single-cell responses are shaped by the geometry of signaling kinetic trajectories carved in a multidimensional space spanned by signaling protein abundances. However, it is challenging to assay a large number (>3) of signaling species in live-cell imaging, which makes it difficult to probe single-cell signaling kinetic trajectories in large dimensions. Flow and mass cytometry techniques can measure a large number (4 to >40) of signaling species but are unable to track single cells. Thus, cytometry experiments provide detailed time-stamped snapshots of single-cell signaling kinetics. Is it possible to use time-stamped cytometry data to reconstruct single-cell signaling trajectories? Borrowing the concepts of conserved and slow variables from non-equilibrium statistical physics, we develop an approach to reconstruct signaling trajectories from snapshot data by creating new variables that remain invariant or vary slowly during the signaling kinetics. We apply this approach to reconstruct trajectories using snapshot data obtained from in silico simulations and live-cell imaging measurements. The use of invariants and slow variables to reconstruct trajectories provides a radically different way to track objects using snapshot data. The approach is likely to have implications for solving matching problems in a wide range of disciplines.
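The core matching idea can be illustrated with a toy sketch: if each cell conserves some quantity (here, hypothetically, the total abundance of a protein across its phosphorylated and unphosphorylated forms), unpaired snapshots taken at two times can be re-paired by matching on that invariant. All numbers and the conservation law below are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical invariant: each cell conserves x + y, the total abundance
# of phosphorylated (x) plus unphosphorylated (y) forms of a protein,
# even though x and y individually change over time.
n = 5
total = rng.uniform(50, 150, n)              # conserved quantity per cell
x_t1 = rng.uniform(0, 1, n) * total          # phosphorylated fraction at t1
snap_t1 = np.column_stack([x_t1, total - x_t1])

perm = rng.permutation(n)                    # cytometry destroys cell identity
x_t2 = rng.uniform(0, 1, n) * total[perm]    # same cells, reshuffled, at t2
snap_t2 = np.column_stack([x_t2, total[perm] - x_t2])

# Reconstruct the pairing by matching cells whose invariant x + y agrees.
inv_t1 = snap_t1.sum(axis=1)
inv_t2 = snap_t2.sum(axis=1)
match = np.argmin(np.abs(inv_t1[:, None] - inv_t2[None, :]), axis=1)

print(np.array_equal(perm[match], np.arange(n)))  # True: identities recovered
```

In real data the invariant is noisy and only approximately conserved, which is where the paper's slow variables and statistical treatment come in; this sketch only shows why an invariant makes the matching problem solvable at all.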




Read also

The development of single-cell technologies provides the opportunity to identify new cellular states and reconstruct novel cell-to-cell relationships. Applications range from understanding the transcriptional and epigenetic processes involved in metazoan development to characterizing distinct cell types in heterogeneous populations like cancers or immune cells. However, analysis of the data is impeded by its unknown intrinsic biological and technical variability together with its sparseness; these factors complicate the identification of true biological signals amidst artifact and noise. Here we show that, across technologies, roughly 95% of the eigenvalues derived from each single-cell data set can be described by universal distributions predicted by Random Matrix Theory. Interestingly, 5% of the spectrum shows deviations from these distributions and exhibits a phenomenon known as eigenvector localization, where information tightly concentrates in groups of cells. Some of the localized eigenvectors reflect underlying biological signal, and some are simply a consequence of the sparsity of single-cell data; roughly 3% is artifactual. Based on the universal distributions and a technique for detecting sparsity-induced localization, we present a strategy to identify the residual 2% of directions that encode biological information and thereby denoise single-cell data. We demonstrate the effectiveness of this approach by comparing with standard single-cell data analysis techniques in a variety of examples with marked cell populations.
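The Random Matrix Theory baseline described above can be sketched numerically: for a pure-noise matrix, the eigenvalues of the correlation matrix fall inside the Marchenko-Pastur bulk, so eigenvalues escaping the bulk flag candidate signal. Matrix sizes and the tolerance below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical noise-only "expression matrix": n cells x p genes.
n, p = 400, 100
X = rng.standard_normal((n, p))

# Eigenvalues of the gene-gene correlation matrix.
C = np.corrcoef(X, rowvar=False)
evals = np.linalg.eigvalsh(C)

# Marchenko-Pastur bulk edges for aspect ratio q = p / n (unit variance).
q = p / n
lam_minus, lam_plus = (1 - np.sqrt(q)) ** 2, (1 + np.sqrt(q)) ** 2

# For pure noise essentially all eigenvalues sit inside the MP bulk;
# eigenvalues well above lam_plus would flag candidate biological signal.
tol = 0.1  # illustrative tolerance for finite-size fluctuations
frac_in_bulk = np.mean((evals >= lam_minus - tol) & (evals <= lam_plus + tol))
print(frac_in_bulk)  # typically 1.0 for pure noise
```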
Mathematical methods of information theory constitute essential tools to describe how stimuli are encoded in the activities of signaling effectors. Exploring the information-theoretic perspective, however, remains conceptually, experimentally and computationally challenging. Specifically, existing computational tools enable efficient analysis of relatively simple systems, usually with one input and output only. Moreover, robust and readily applicable implementations of these tools are missing. Here, we propose a novel algorithm to analyze signaling data within the framework of information theory. Our approach enables robust as well as statistically and computationally efficient analysis of signaling systems with high-dimensional outputs and a large number of input values. Analysis of the NF-kB single-cell signaling responses to TNF-α uniquely reveals that NF-kB signaling dynamics improve discrimination of high concentrations of TNF-α with a modest impact on discrimination of low concentrations. Our readily applicable R package, SLEMI (statistical learning based estimation of mutual information), allows the approach to be used by computational biologists with only elementary knowledge of information theory.
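The quantity being estimated here, the mutual information between a stimulus and a noisy response, can be illustrated with a basic histogram plug-in estimator. This is not SLEMI's statistical-learning method (SLEMI is an R package); the channel, dose levels, and noise below are made-up assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical channel: four equiprobable input doses, noisy scalar response.
doses = np.repeat(np.arange(4), 2500)
response = doses + rng.normal(0.0, 0.5, doses.size)

# Plug-in mutual information estimate (in bits) from a joint histogram.
bins = np.linspace(response.min(), response.max(), 30)
y = np.digitize(response, bins)                  # discretize the output
joint = np.zeros((4, bins.size + 1))
for d, b in zip(doses, y):
    joint[d, b] += 1
p_xy = joint / joint.sum()                       # joint distribution
p_x = p_xy.sum(axis=1, keepdims=True)            # input marginal
p_y = p_xy.sum(axis=0, keepdims=True)            # output marginal
mask = p_xy > 0
mi_bits = np.sum(p_xy[mask] * np.log2(p_xy[mask] / (p_x @ p_y)[mask]))
print(round(mi_bits, 2))
```

Plug-in estimates like this one become biased and unwieldy for high-dimensional outputs, which is precisely the regime the abstract's classifier-based approach is designed for.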
We study the stochastic kinetics of a signaling module consisting of a two-state stochastic point process with negative feedback. In the active state, a product is synthesized which increases the active-to-inactive transition rate of the process. We analyze this simple autoregulatory module using a path-integral technique based on the temporal statistics of state flips of the process. We develop a systematic framework to calculate averages, autocorrelations, and response functions by treating the feedback as a weak perturbation. Explicit analytical results are obtained to first order in the feedback strength. Monte Carlo simulations are performed to test the analytical results in the weak feedback limit and to investigate the strong feedback regime. We conclude by relating some of our results to experimental observations in the olfactory and visual sensory systems.
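The module described above can be simulated directly with the Gillespie algorithm, which the Monte Carlo checks in the abstract presumably resemble. All rate constants and the feedback form below are illustrative, not taken from the paper:

```python
import numpy as np

# Two-state switch with negative feedback: product made in the active
# state raises the active -> inactive flip rate. Rates are hypothetical.
k_on, k_off0 = 1.0, 1.0   # base inactive->active / active->inactive rates
k_syn, k_deg = 5.0, 0.5   # product synthesis (active only) and degradation

def simulate(eps, t_end=500.0, seed=3):
    """Return the fraction of time spent active; eps is feedback strength."""
    rng = np.random.default_rng(seed)
    t, state, n, active_time = 0.0, 0, 0, 0.0
    while t < t_end:
        r_flip = k_on if state == 0 else k_off0 * (1.0 + eps * n)
        r_syn = k_syn if state == 1 else 0.0
        r_deg = k_deg * n
        total = r_flip + r_syn + r_deg
        dt = min(rng.exponential(1.0 / total), t_end - t)
        if state == 1:
            active_time += dt
        t += dt
        u = rng.uniform() * total        # pick which reaction fires
        if u < r_flip:
            state = 1 - state            # flip the switch
        elif u < r_flip + r_syn:
            n += 1                       # synthesize one product molecule
        else:
            n -= 1                       # degrade one molecule
    return active_time / t_end

# Negative feedback shortens active bouts, lowering the active fraction.
print(simulate(eps=0.0), simulate(eps=0.2))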
Continuous time Hamiltonian Monte Carlo is introduced as a powerful alternative to Markov chain Monte Carlo methods for continuous target distributions. The method is constructed in two steps: First, Hamiltonian dynamics are chosen as the deterministic dynamics in a continuous-time piecewise deterministic Markov process. Under very mild restrictions, such a process will have the desired target distribution as an invariant distribution. Secondly, the numerical implementation of such processes, based on adaptive numerical integration of second-order ordinary differential equations, is considered. The numerical implementation yields an approximate, yet highly robust algorithm that, unlike conventional Hamiltonian Monte Carlo, enables the exploitation of the complete Hamiltonian trajectories (hence the title). The proposed algorithm may yield large speedups and improvements in stability relative to relevant benchmarks, while incurring numerical errors that are negligible relative to the overall Monte Carlo errors.
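For contrast, conventional (discrete-step) HMC, the baseline the abstract improves on, looks as follows for a standard normal target; the continuous-time variant replaces the leapfrog-plus-accept step with adaptive ODE integration of the full trajectory. Step size and trajectory length below are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(4)

def grad_neg_logp(q):            # target: N(0, 1), so -log p = q^2/2 + const
    return q

def hmc_step(q, step=0.2, n_leap=10):
    p = rng.standard_normal()                    # resample momentum
    q_new, p_new = q, p
    p_new -= 0.5 * step * grad_neg_logp(q_new)   # leapfrog: initial half-kick
    for _ in range(n_leap):
        q_new += step * p_new                    # full drift
        p_new -= step * grad_neg_logp(q_new)     # full kick
    p_new += 0.5 * step * grad_neg_logp(q_new)   # undo the extra half-kick
    h_old = 0.5 * (q ** 2 + p ** 2)              # Hamiltonian before/after
    h_new = 0.5 * (q_new ** 2 + p_new ** 2)
    return q_new if rng.uniform() < np.exp(h_old - h_new) else q

q, samples = 0.0, []
for _ in range(5000):
    q = hmc_step(q)
    samples.append(q)
samples = np.array(samples)
print(round(samples.mean(), 2), round(samples.std(), 2))  # near 0 and 1
```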
Electronic Health Records (EHRs) are typically stored as time-stamped encounter records. Observing temporal relationships between medical records is an integral part of interpreting the information. Hence, statistical analysis of EHRs requires that clinically informed time-interdependent analysis variables (TIAVs) be created. Often, the formulation and creation of these variables are iterative and require custom code. We describe a technique of using sequences of time-referenced entities as the building blocks for TIAVs. These sequences represent different aspects of a patient's medical history in a contiguous fashion. To illustrate the principles and applications of the method, we provide examples using the Veterans Health Administration's research databases. In the first example, sequences representing medication exposure were used to assess patient selection criteria for a treatment comparative effectiveness study. In the second example, sequences of Charlson Comorbidity conditions and clinical settings of inpatient or outpatient were used to create variables with which data anomalies and trends were revealed. The third example demonstrated the creation of an analysis variable derived from the temporal dependency of medication exposure and comorbidity. Complex time-interdependent analysis variables can be created from the sequences with simple, reusable code, hence enabling unscripted or automated TIAV creation.
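The third example's pattern, deriving a variable from the temporal dependency between medication exposure and comorbidity, can be sketched as follows. The records, codes, and the specific derived variable are hypothetical, invented for illustration:

```python
from datetime import date

# Hypothetical time-stamped encounter records for one patient:
# (date, record kind, code).
records = [
    ("2019-01-05", "rx", "metformin"),
    ("2019-06-12", "dx", "chf"),        # Charlson comorbidity: CHF
    ("2020-02-01", "rx", "metformin"),
    ("2020-09-20", "rx", "insulin"),
]

def sequence(records, kind):
    """Contiguous, time-ordered sequence for one aspect of the history."""
    return sorted((date.fromisoformat(d), code)
                  for d, k, code in records if k == kind)

rx_seq = sequence(records, "rx")        # medication-exposure sequence
dx_seq = sequence(records, "dx")        # comorbidity sequence

# Derived TIAV: did all insulin exposure occur after the first CHF
# diagnosis? (a temporal dependency of medication and comorbidity)
first_chf = min(d for d, c in dx_seq if c == "chf")
insulin_after_chf = all(d > first_chf for d, c in rx_seq if c == "insulin")
print(insulin_after_chf)  # True for this patient
```

The reusable piece is the `sequence` builder; each new analysis variable is then a short expression over the pre-built sequences rather than a fresh custom extraction script.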