
Quantification and visualization of variation in anatomical trees

Added by Aasa Feragen
Publication date: 2014
Language: English





This paper presents two approaches to quantifying and visualizing variation in datasets of trees. The first approach localizes subtrees in which significant population differences are found through hypothesis testing and sparse classifiers on subtree features. The second approach visualizes the global metric structure of datasets through low-distortion embedding into hyperbolic planes in the style of multidimensional scaling. A case study is made on a dataset of airway trees in relation to Chronic Obstructive Pulmonary Disease.
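As an illustration of the second approach, below is a minimal sketch (not the authors' implementation) of an MDS-style embedding into the Poincaré disk: given a precomputed matrix D of pairwise tree distances, point positions are optimized to minimize the squared mismatch between hyperbolic and target distances. The function names, optimizer settings, and bound choices are assumptions for illustration.

```python
# Minimal sketch: low-distortion, MDS-style embedding of a tree-distance
# matrix D into the Poincare disk (hyperbolic plane). Illustrative only.
import numpy as np
from scipy.optimize import minimize

def poincare_dist(u, v):
    """Hyperbolic distance between two points in the open unit disk."""
    num = 2.0 * np.sum((u - v) ** 2)
    den = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
    return np.arccosh(1.0 + num / den)

def stress(flat_coords, D):
    """Sum of squared differences between embedded and target distances."""
    X = flat_coords.reshape(-1, 2)
    n = len(X)
    err = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            err += (poincare_dist(X[i], X[j]) - D[i, j]) ** 2
    return err

def embed_hyperbolic(D, seed=0):
    """Embed n objects with distance matrix D into the Poincare disk."""
    rng = np.random.default_rng(seed)
    n = D.shape[0]
    x0 = 0.1 * rng.standard_normal((n, 2))      # start near the disk center
    bounds = [(-0.7, 0.7)] * (2 * n)            # keeps points inside the disk
    res = minimize(stress, x0.ravel(), args=(D,),
                   method="L-BFGS-B", bounds=bounds)
    return res.x.reshape(n, 2)
```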



Related Research

The aim of this study was to evaluate the performance of a classical method of fractal analysis, Detrended Fluctuation Analysis (DFA), in the analysis of the dynamics of animal behavior time series. In order to correctly use DFA to assess the presence of long-range correlation, previous authors using statistical model systems have stated that several aspects should be taken into account: 1) the establishment by hypothesis testing of the absence of short-term correlation, 2) an accurate estimation of a straight line in the log-log plot of the fluctuation function, 3) the elimination of artificial crossovers in the fluctuation function, and 4) the length of the time series. Taking these factors into consideration, herein we evaluated the presence of long-range correlation in the temporal pattern of locomotor activity of Japanese quail (Coturnix coturnix) and mosquito larvae (Culex quinquefasciatus). Modeling the data with the general ARFIMA model, we rejected the hypothesis of short-range correlations (d=0) in all cases. We also observed that DFA was able to distinguish between the artificial crossover observed in the temporal pattern of locomotion of Japanese quail and the crossovers in the correlation behavior observed in mosquito larvae locomotion. Although the test duration can slightly influence the parameter estimation, no qualitative differences were observed between different test durations.
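For reference, the core DFA computation described above can be sketched in a few lines: integrate the mean-centered series, detrend it in windows of size n, and read the scaling exponent off the log-log slope of the fluctuation function. The scale range, series length, and white-noise test signal below are illustrative choices, not the study's settings.

```python
# Illustrative sketch of Detrended Fluctuation Analysis (DFA).
import numpy as np

def dfa(x, scales):
    """Return the fluctuation F(n) for each window size n in `scales`."""
    y = np.cumsum(x - np.mean(x))          # integrated, mean-centered profile
    F = []
    for n in scales:
        rms = []
        for k in range(len(y) // n):
            seg = y[k * n:(k + 1) * n]
            t = np.arange(n)
            coef = np.polyfit(t, seg, 1)   # linear detrend per window
            rms.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(rms)))
    return np.asarray(F)

# The exponent alpha is the slope of log F(n) vs log n; alpha > 0.5
# indicates long-range correlation in the original series.
scales = np.unique(np.logspace(2, 3, 10).astype(int))
x = np.random.default_rng(0).standard_normal(5000)  # white noise: alpha ~ 0.5
alpha = np.polyfit(np.log(scales), np.log(dfa(x, scales)), 1)[0]
print(f"estimated alpha = {alpha:.2f}")
```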
Normal mode analysis offers an efficient way of modeling the conformational flexibility of protein structures. Simple models defined by contact topology, known as elastic network models, have been used to model a variety of systems, but the validation is typically limited to individual modes for a single protein. We use anisotropic displacement parameters from crystallography to test the quality of prediction of both the magnitude and directionality of conformational variance. Normal modes from four simple elastic network model potentials and from the CHARMM forcefield are calculated for a data set of 83 diverse, ultrahigh resolution crystal structures. While all five potentials provide good predictions of the magnitude of flexibility, the methods that consider all atoms have a clear edge at prediction of directionality, and the CHARMM potential produces the best agreement. The low-frequency modes from different potentials are similar, but those computed from the CHARMM potential show the greatest difference from the elastic network models. This was illustrated by computing the dynamic correlation matrices from different potentials for a PDZ domain structure. Comparison of normal mode results with anisotropic temperature factors opens the possibility of using ultrahigh resolution crystallographic data as a quantitative measure of molecular flexibility. The comprehensive evaluation demonstrates the costs and benefits of using normal mode potentials of varying complexity. Comparison of the dynamic correlation matrices suggests that a combination of topological and chemical potentials may help identify residues in which chemical forces make large contributions to intramolecular coupling.
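The elastic network idea is compact enough to sketch. Below is a minimal anisotropic network model that builds a Hessian from C-alpha contacts and diagonalizes it for normal modes; the cutoff distance and uniform spring constant are illustrative defaults, not values from the paper.

```python
# Minimal anisotropic network model (ANM) sketch for normal mode analysis.
import numpy as np

def anm_modes(coords, cutoff=15.0, gamma=1.0):
    """coords: (N, 3) C-alpha positions. Returns (eigenvalues, eigenvectors)."""
    n = len(coords)
    H = np.zeros((3 * n, 3 * n))
    for i in range(n):
        for j in range(i + 1, n):
            d = coords[j] - coords[i]
            r2 = d @ d
            if r2 > cutoff ** 2:           # only contacting pairs interact
                continue
            block = -gamma * np.outer(d, d) / r2   # 3x3 super-element
            H[3*i:3*i+3, 3*j:3*j+3] = block
            H[3*j:3*j+3, 3*i:3*i+3] = block
            H[3*i:3*i+3, 3*i:3*i+3] -= block       # diagonal balances rows
            H[3*j:3*j+3, 3*j:3*j+3] -= block
    evals, evecs = np.linalg.eigh(H)
    return evals[6:], evecs[:, 6:]         # drop six rigid-body zero modes
```

The low-frequency columns of the returned eigenvector matrix are the collective motions compared against crystallographic anisotropic displacement parameters in the study above.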
Next-generation RNA sequencing (RNA-seq) technology has been widely used to assess full-length RNA isoform abundance in a high-throughput manner. RNA-seq data offer insight into gene expression levels and transcriptome structures, enabling us to better understand the regulation of gene expression and fundamental biological processes. Accurate isoform quantification from RNA-seq data is challenging due to the information loss in sequencing experiments. A recent accumulation of multiple RNA-seq data sets from the same tissue or cell type provides new opportunities to improve the accuracy of isoform quantification. However, existing statistical or computational methods for multiple RNA-seq samples either pool the samples into one sample or assign equal weights to the samples when estimating isoform abundance. These methods ignore the possible heterogeneity in the quality of different samples and could result in biased and non-robust estimates. In this article, we develop a method, which we call joint modeling of multiple RNA-seq samples for accurate isoform quantification (MSIQ), for more accurate and robust isoform quantification by integrating multiple RNA-seq samples under a Bayesian framework. Our method aims to (1) identify a consistent group of samples with homogeneous quality and (2) improve isoform quantification accuracy by jointly modeling multiple RNA-seq samples by allowing for higher weights on the consistent group. We show that MSIQ provides a consistent estimator of isoform abundance, and we demonstrate the accuracy and effectiveness of MSIQ compared with alternative methods through simulation studies on D. melanogaster genes. We justify MSIQ's advantages over existing approaches via application studies on real RNA-seq data from human embryonic stem cells, brain tissues, and the HepG2 immortalized cell line.
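The published MSIQ model is Bayesian; the toy sketch below only illustrates its central idea under strong simplifying assumptions: given per-sample isoform proportion estimates, flag a consistent majority group and up-weight it when averaging. The function name, weights, and consistency rule are all hypothetical, not the paper's model.

```python
# Toy sketch of the MSIQ idea: up-weight a consistent group of samples
# when combining per-sample isoform abundance estimates. Illustrative only.
import numpy as np

def weighted_isoform_estimate(theta, high_w=1.0, low_w=0.2):
    """theta: (n_samples, n_isoforms) per-sample proportion estimates."""
    center = np.median(theta, axis=0)             # robust reference profile
    dist = np.linalg.norm(theta - center, axis=1)
    consistent = dist <= np.median(dist)          # crude consistency call
    w = np.where(consistent, high_w, low_w)       # down-weight outliers
    est = (w[:, None] * theta).sum(axis=0) / w.sum()
    return est / est.sum()                        # renormalize to proportions
```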
Making binary decisions is a common data analytical task in scientific research and industrial applications. In data sciences, there are two related but distinct strategies: hypothesis testing and binary classification. In practice, how to choose between these two strategies can be unclear and rather confusing. Here we summarize key distinctions between these two strategies in three aspects and list five practical guidelines for data analysts to choose the appropriate strategy for specific analysis needs. We demonstrate the use of those guidelines in a cancer driver gene prediction example.
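A toy example makes the distinction concrete: on the same two-group data, a hypothesis test can detect a small mean shift with high confidence, while a classifier barely beats chance on individual cases. The data and models below are synthetic and purely illustrative.

```python
# Contrast hypothesis testing (population-level question) with binary
# classification (per-case prediction) on the same synthetic data.
import numpy as np
from scipy.stats import ttest_ind
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, 200)      # group 0
b = rng.normal(0.3, 1.0, 200)      # group 1, slightly shifted mean

t, p = ttest_ind(a, b)
print(f"t-test p-value: {p:.4f}")  # small p: the means differ

X = np.concatenate([a, b])[:, None]
y = np.repeat([0, 1], 200)
acc = cross_val_score(LogisticRegression(), X, y, cv=5).mean()
print(f"classification accuracy: {acc:.2f}")  # near 0.5: weak per-case signal
```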
Stefano Puliti, 2020
This study aimed at estimating total forest above-ground biomass net change (ΔAGB, Mt) over five years (2014-2019) based on model-assisted estimation utilizing freely available satellite imagery. The study was conducted for a boreal forest area (approx. 1.4 million hectares) in Norway where bi-temporal national forest inventory (NFI), Sentinel-2, and Landsat data were available. Biomass change was modelled based on a direct approach. The precision of estimates using only the NFI data in a basic expansion estimator was compared to four alternative model-assisted estimates using 1) Sentinel-2 or Landsat data, and 2) bi- or uni-temporal remotely sensed data. We found that the use of remotely sensed data improved the precision of the purely field-based estimates by a factor of up to three. The most precise estimates were found for model-assisted estimation using bi-temporal Sentinel-2 (standard error, SE = 1.7 Mt). However, the decrease in precision when using Landsat data was small (SE = 1.92 Mt). In addition, we found that ΔAGB could be precisely estimated even when remotely sensed data were available only at the end of the monitoring period. We conclude that satellite optical data can considerably improve ΔAGB estimates, even in cases where repeated and coincident NFI data are available. The free availability, global coverage, frequent updates, and long-term horizon make data from programs such as Sentinel-2 and Landsat a valuable source for consistent and durable monitoring of forest carbon dynamics.
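A generic model-assisted (difference) estimator of the kind used in such studies can be sketched as follows: wall-to-wall satellite predictors yield a synthetic model-based estimate, and the mean residual on the probability sample of field plots corrects its bias. For change estimation, y would be plot-level biomass change and X bi-temporal image features; the linear model and all names here are assumptions, not the study's setup.

```python
# Generic model-assisted (difference) estimator sketch.
import numpy as np
from sklearn.linear_model import LinearRegression

def model_assisted_mean(X_pop, X_sample, y_sample):
    """Population mean of y: model predictions plus a sample-based correction."""
    model = LinearRegression().fit(X_sample, y_sample)
    synthetic = model.predict(X_pop).mean()                 # prediction term
    bias_corr = (y_sample - model.predict(X_sample)).mean() # residual term
    return synthetic + bias_corr
```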
