
Large-scale machine learning-based phenotyping significantly improves genomic discovery for optic nerve head morphology

Published by Babak Alipanahi
Publication date: 2020
Research language: English





Genome-wide association studies (GWAS) require accurate cohort phenotyping, but expert labeling can be costly, time-intensive, and variable. Here we develop a machine learning (ML) model to predict glaucomatous optic nerve head features from color fundus photographs. We used the model to predict vertical cup-to-disc ratio (VCDR), a diagnostic parameter and cardinal endophenotype for glaucoma, in 65,680 Europeans in the UK Biobank (UKB). A GWAS of ML-based VCDR identified 299 independent genome-wide significant (GWS; $P \leq 5 \times 10^{-8}$) hits in 156 loci. The ML-based GWAS replicated 62 of 65 GWS loci from a recent VCDR GWAS in the UKB for which two ophthalmologists manually labeled images for 67,040 Europeans. The ML-based GWAS also identified 92 novel loci, significantly expanding our understanding of the genetic etiologies of glaucoma and VCDR. Pathway analyses support the biological significance of the novel hits to VCDR, with select loci near genes involved in neuronal and synaptic biology or known to cause severe Mendelian ophthalmic disease. Finally, the ML-based GWAS results significantly improve polygenic prediction of VCDR and primary open-angle glaucoma in the independent EPIC-Norfolk cohort.
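To make the phenotyping step concrete, below is a minimal, hypothetical sketch of a convolutional regressor that maps a color fundus photograph to a VCDR estimate. It is not the authors' model; the input resolution, layer sizes, and training configuration are illustrative assumptions.

```python
# Hypothetical sketch, NOT the paper's architecture: a small CNN regressor
# that predicts VCDR (a ratio in [0, 1]) from a color fundus photograph.
import tensorflow as tf
from tensorflow.keras import layers

def build_vcdr_regressor(input_shape=(256, 256, 3)):  # resolution is assumed
    model = tf.keras.Sequential([
        tf.keras.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.GlobalAveragePooling2D(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),  # sigmoid bounds output to [0, 1]
    ])
    model.compile(optimizer="adam", loss="mse", metrics=["mae"])
    return model
```

A model along these lines, trained on expert-labeled images, can score every fundus photograph in a cohort, and the resulting continuous phenotype feeds into the GWAS at the standard genome-wide significance threshold of $P \leq 5 \times 10^{-8}$.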



Read also

In this work we present a new physics-informed machine learning model that can be used to analyze kinematic data from an instrumented mouthguard and detect impacts to the head. Monitoring player impacts is vitally important to understanding and protecting against injuries like concussion. Typically, a combination of video analysis and sensor data is used to ascertain that the recorded events are true impacts and not false positives. In fact, due to the nature of using wearable devices in sports, false positives vastly outnumber true positives, yet manual video analysis is time-consuming. This imbalance leads traditional machine learning approaches to exhibit poor performance in both detecting true positives and preventing false negatives. Here, we show that by simulating head impacts numerically using a standard finite element head-neck model, a large dataset of synthetic impacts can be created to augment the gathered, verified impact data from mouthguards. This combined physics-informed machine learning impact detector reported improved performance on test datasets compared to traditional impact detectors, with a negative predictive value of 88% and a positive predictive value of 87%. Consequently, this model reported the best results to date for an impact detection algorithm in American Football, achieving an F1 score of 0.95. In addition, this physics-informed machine learning impact detector was able to detect true and false impacts from a test dataset at rates of 90% and 100%, respectively, relative to a purely manual video analysis workflow. Saving over 12 hours of manual video analysis for a modest dataset, at an overall accuracy of 92%, these results indicate that this model could be used in place of, or alongside, traditional video analysis to allow for larger-scale and more efficient impact detection in sports such as American Football.
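As a concrete illustration of the evaluation quoted above, here is a minimal, hypothetical Python sketch (not the authors' code) that computes the positive predictive value, negative predictive value, and F1 score of a binary impact detector:

```python
# Hypothetical sketch: evaluation metrics for a binary impact detector.
# y_true and y_pred are sequences of 0/1 labels (1 = true head impact).
def detection_metrics(y_true, y_pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    ppv = tp / (tp + fp) if (tp + fp) else 0.0  # positive predictive value
    npv = tn / (tn + fn) if (tn + fn) else 0.0  # negative predictive value
    f1 = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 0.0
    return ppv, npv, f1
```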
Purpose: To develop a deep learning approach to denoise optical coherence tomography (OCT) B-scans of the optic nerve head (ONH). Methods: Volume scans consisting of 97 horizontal B-scans were acquired through the center of the ONH using a commercial OCT device (Spectralis) for both eyes of 20 subjects. For each eye, single-frame (without signal averaging) and multi-frame (75x signal averaging) volume scans were obtained. A custom deep learning network was then designed and trained with 2,328 clean B-scans (multi-frame B-scans) and their corresponding noisy B-scans (clean B-scans + Gaussian noise) to denoise the single-frame B-scans. The performance of the denoising algorithm was assessed qualitatively, and quantitatively on 1,552 B-scans using the signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and mean structural similarity index metric (MSSIM). Results: The proposed algorithm successfully denoised unseen single-frame OCT B-scans. The denoised B-scans were qualitatively similar to their corresponding multi-frame B-scans, with enhanced visibility of the ONH tissues. The mean SNR increased from $4.02 \pm 0.68$ dB (single-frame) to $8.14 \pm 1.03$ dB (denoised). For all the ONH tissues, the mean CNR increased from $3.50 \pm 0.56$ (single-frame) to $7.63 \pm 1.81$ (denoised). The MSSIM increased from $0.13 \pm 0.02$ (single-frame) to $0.65 \pm 0.03$ (denoised) when compared with the corresponding multi-frame B-scans. Conclusions: Our deep learning algorithm can denoise a single-frame OCT B-scan of the ONH in under 20 ms, thus offering a framework to obtain superior-quality OCT B-scans with reduced scanning times and minimal patient discomfort.
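The training-pair construction described above (clean multi-frame B-scans corrupted with Gaussian noise) might be sketched as follows; the noise level sigma is an assumption, since the abstract does not state it:

```python
# Hypothetical sketch of the noisy/clean training-pair construction.
# Assumes B-scan intensities are scaled to [0, 1]; sigma is an assumed value.
import numpy as np

def make_training_pair(clean_bscan, sigma=0.1, seed=None):
    rng = np.random.default_rng(seed)
    noisy = clean_bscan + rng.normal(0.0, sigma, size=clean_bscan.shape)
    return np.clip(noisy, 0.0, 1.0), clean_bscan  # (network input, target)
```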
Purpose: To develop a deep learning approach to digitally stain optical coherence tomography (OCT) images of the optic nerve head (ONH). Methods: A horizontal B-scan was acquired through the center of the ONH using OCT (Spectralis) for 1 eye of each of 100 subjects (40 normal & 60 glaucoma). All images were enhanced using adaptive compensation. A custom deep learning network was then designed and trained with the compensated images to digitally stain (i.e. highlight) 6 tissue layers of the ONH. The accuracy of our algorithm was assessed (against manual segmentations) using the Dice coefficient, sensitivity, and specificity. We further studied how compensation and the number of training images affected the performance of our algorithm. Results: For images it had not yet assessed, our algorithm was able to digitally stain the retinal nerve fiber layer + prelamina, the retinal pigment epithelium (RPE), all other retinal layers, the choroid, and the peripapillary sclera and lamina cribrosa. For all tissues, the mean Dice coefficient was $0.84 \pm 0.03$, the mean sensitivity $0.92 \pm 0.03$, and the mean specificity $0.99 \pm 0.00$. Our algorithm performed significantly better when compensated images were used for training. Increasing the number of training images (from 10 to 40) did not significantly improve performance, except for the RPE. Conclusion: Our deep learning algorithm can simultaneously stain neural and connective tissues in ONH images. Our approach offers a framework to automatically measure multiple key structural parameters of the ONH that may be critical to improve glaucoma management.
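For reference, the Dice coefficient used above to score the digital staining against manual segmentations can be computed as follows (an illustrative sketch, not the authors' implementation):

```python
# Dice overlap between a predicted and a manual binary tissue mask.
import numpy as np

def dice_coefficient(pred_mask, true_mask, eps=1e-7):
    pred = np.asarray(pred_mask, dtype=bool)
    true = np.asarray(true_mask, dtype=bool)
    intersection = np.logical_and(pred, true).sum()
    # eps guards against empty masks; Dice = 2|A ∩ B| / (|A| + |B|)
    return (2.0 * intersection + eps) / (pred.sum() + true.sum() + eps)
```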
The optic nerve head (ONH) typically experiences complex neural- and connective-tissue structural changes with the development and progression of glaucoma, and monitoring these changes could be critical for improved diagnosis and prognosis in the glaucoma clinic. The gold-standard technique to assess structural changes of the ONH clinically is optical coherence tomography (OCT). However, OCT is limited to the measurement of a few hand-engineered parameters, such as the thickness of the retinal nerve fiber layer (RNFL), and has not yet been qualified as a stand-alone device for glaucoma diagnosis and prognosis applications. We argue this is because the vast amount of information available in a 3D OCT scan of the ONH has not been fully exploited. In this study we propose a deep learning approach that can: (1) fully exploit information from an OCT scan of the ONH; (2) describe the structural phenotype of the glaucomatous ONH; and (3) be used as a robust glaucoma diagnosis tool. Specifically, the structural features identified by our algorithm were found to be related to clinical observations of glaucoma. The diagnostic accuracy from these structural features was $92.0 \pm 2.3\%$ with a sensitivity of $90.0 \pm 2.4\%$ (at $95\%$ specificity). By changing their magnitudes in steps, we were able to reveal how the morphology of the ONH changes as one transitions from a 'non-glaucoma' to a 'glaucoma' condition. We believe our work may have strong clinical implications for our understanding of glaucoma pathogenesis, and could be improved in the future to also predict future loss of vision.
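The "changing their magnitudes in steps" procedure above amounts to a latent traversal: decode reconstructions while one learned structural feature is swept and all others are held fixed. A hypothetical sketch, where `decoder` stands in for the paper's trained network:

```python
# Hypothetical sketch of a latent traversal; `decoder` is a stand-in for a
# trained network mapping a latent code back to an ONH reconstruction.
import numpy as np

def latent_traversal(decoder, latent_code, feature_index, steps=5, span=3.0):
    frames = []
    for value in np.linspace(-span, span, steps):
        z = np.array(latent_code, dtype=float)
        z[feature_index] = value   # step a single structural feature
        frames.append(decoder(z))  # reconstruction at this magnitude
    return frames
```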
We present two algorithms designed to learn a pattern of correspondence between two data sets in situations where it is desirable to match elements that exhibit an affine relationship. In the motivating case study, the challenge is to better understand micro-RNA (miRNA) regulation in the striatum of Huntington's disease (HD) model mice. The two data sets contain miRNA and messenger-RNA (mRNA) data, respectively, each data point consisting of a multi-dimensional profile. The biological hypothesis is that if a miRNA induces the degradation of a target mRNA, blocks its translation into proteins, or both, then the profile of the former should be similar to minus the profile of the latter (a particular form of affine relationship). The algorithms unfold in two stages. During the first stage, an optimal transport plan P and an optimal affine transformation are learned, using the Sinkhorn-Knopp algorithm and a mini-batch gradient descent. During the second stage, P is exploited to derive either several co-clusters or several sets of matched elements. A simulation study illustrates how the algorithms work and perform. A brief summary of the real-data application in the motivating case study further illustrates the applicability and interest of the algorithms.
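The first-stage Sinkhorn-Knopp step named above can be sketched as follows for uniform marginals; the regularization strength eps and the iteration count are illustrative assumptions:

```python
# Illustrative sketch of Sinkhorn-Knopp: entropic-regularized optimal
# transport plan P for a cost matrix C between miRNA and mRNA profiles.
import numpy as np

def sinkhorn_plan(C, eps=0.1, n_iter=200):
    n, m = C.shape
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)  # uniform marginals
    K = np.exp(-C / eps)                             # Gibbs kernel
    u, v = np.ones(n), np.ones(m)
    for _ in range(n_iter):
        u = a / (K @ v)    # rescale rows toward marginal a
        v = b / (K.T @ u)  # rescale columns toward marginal b
    return u[:, None] * K * v[None, :]               # transport plan P
```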
