
3D Convolutional Sequence to Sequence Model for Vertebral Compression Fractures Identification in CT

Posted by: Amir Bar
Publication date: 2020
Research language: English





An osteoporosis-related fracture occurs every three seconds worldwide, affecting one in three women and one in five men aged over 50. The early detection of at-risk patients facilitates effective and well-evidenced preventative interventions, reducing the incidence of major osteoporotic fractures. In this study, we present an automatic system for the identification of vertebral compression fractures on Computed Tomography images, which are often an undiagnosed precursor to major osteoporosis-related fractures. The system integrates a compact 3D representation of the spine, utilizing a Convolutional Neural Network (CNN) for spinal cord detection and a novel end-to-end sequence-to-sequence 3D architecture. We evaluate several model variants that exploit different representation and classification approaches, and present a framework combining an ensemble of models that achieves state-of-the-art results, validated on a large data set, with a patient-level fracture identification of 0.955 Area Under the Curve (AUC). The proposed system has the potential to support osteoporosis clinical management, improve treatment pathways, and change the course of one of the most burdensome diseases of our generation.
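The paper itself does not include code; the sketch below is only a rough PyTorch illustration of what a 3D convolutional patch encoder followed by a convolutional sequence-to-sequence head over ordered spine patches could look like. All class names, layer sizes, patch shapes, and the max-pooling used for the patient-level score are assumptions for illustration, not the authors' architecture.

```python
# Minimal sketch (not the authors' code): a 3D CNN embeds each patch cut
# along the detected spinal cord, and a 1D convolutional sequence model
# maps the ordered patch embeddings to per-position fracture logits.
import torch
import torch.nn as nn

class PatchEncoder3D(nn.Module):
    """Embeds a single 3D spine patch (1 x D x H x W) into a feature vector."""
    def __init__(self, emb_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.AdaptiveAvgPool3d(1),
        )
        self.proj = nn.Linear(32, emb_dim)

    def forward(self, x):              # x: (B, 1, D, H, W)
        f = self.features(x).flatten(1)
        return self.proj(f)            # (B, emb_dim)

class ConvSeq2SeqClassifier(nn.Module):
    """Convolutional sequence-to-sequence head over the ordered patches."""
    def __init__(self, emb_dim=128, hidden=128, n_layers=3):
        super().__init__()
        self.encoder = PatchEncoder3D(emb_dim)
        layers, ch = [], emb_dim
        for _ in range(n_layers):
            layers += [nn.Conv1d(ch, hidden, kernel_size=3, padding=1), nn.ReLU()]
            ch = hidden
        self.seq = nn.Sequential(*layers)
        self.head = nn.Conv1d(hidden, 1, kernel_size=1)  # per-position fracture logit

    def forward(self, patches):        # patches: (B, T, 1, D, H, W)
        B, T = patches.shape[:2]
        emb = self.encoder(patches.flatten(0, 1)).view(B, T, -1)  # (B, T, E)
        per_patch = self.head(self.seq(emb.transpose(1, 2))).squeeze(1)  # (B, T)
        patient = per_patch.max(dim=1).values                     # patient-level score
        return per_patch, patient

# Example: 2 studies, 12 patches each, 32x64x64 voxels per patch (assumed sizes).
model = ConvSeq2SeqClassifier()
per_patch, patient = model(torch.randn(2, 12, 1, 32, 64, 64))
print(per_patch.shape, patient.shape)  # torch.Size([2, 12]) torch.Size([2])
```

In this sketch the per-patch logits play the role of the sequence output and the patient-level score is simply their maximum; the published system instead combines several model variants in an ensemble.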




Read also

The precise and accurate segmentation of the vertebral column is essential in the diagnosis and treatment of various orthopedic, neurological, and oncological traumas and pathologies. Segmentation is especially challenging in the presence of pathology such as vertebral compression fractures. In this paper, we propose a method to produce segmentations for osteoporotic compression fractured vertebrae by applying a multi-atlas joint label fusion technique for clinical CT images. A total of 170 thoracic and lumbar vertebrae were evaluated using atlases from five patients with varying degrees of spinal degeneration. In an osteoporotic cohort of bundled atlases, registration provided an average Dice coefficient and mean absolute surface distance of 2.7±4.5% and 0.32±0.13 mm for osteoporotic vertebrae, respectively, and 90.9±3.0% and 0.36±0.11 mm for compression fractured vertebrae.
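As a rough illustration of the label-fusion step described above, the sketch below performs locally weighted voting over atlases that are assumed to be already registered to the target CT; true joint label fusion also models pairwise atlas error correlations, which this simplification omits. The function name and weighting scheme are illustrative assumptions, not the paper's pipeline.

```python
# Minimal sketch: locally weighted label voting over co-registered atlases.
import numpy as np

def weighted_label_fusion(target, atlas_images, atlas_labels, beta=2.0, eps=1e-6):
    """target: (Z, Y, X) CT volume; atlas_images/labels: lists of co-registered volumes."""
    votes = np.zeros(target.shape, dtype=float)
    weight_sum = np.zeros(target.shape, dtype=float)
    for img, lab in zip(atlas_images, atlas_labels):
        # Voxel-wise weight: larger where the atlas intensity matches the target.
        w = 1.0 / (np.abs(img - target) ** beta + eps)
        votes += w * (lab > 0)          # binary vertebra label assumed
        weight_sum += w
    return (votes / weight_sum) > 0.5   # fused binary segmentation

# Toy example with random volumes standing in for registered atlases.
rng = np.random.default_rng(0)
target = rng.normal(size=(8, 16, 16))
atlases = [target + 0.1 * rng.normal(size=target.shape) for _ in range(5)]
labels = [(a > 0).astype(np.uint8) for a in atlases]
fused = weighted_label_fusion(target, atlases, labels)
print(fused.shape, fused.dtype)  # (8, 16, 16) bool
```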
The presence of a vertebral compression fracture is highly indicative of osteoporosis and represents the single most robust predictor for development of a second osteoporotic fracture in the spine or elsewhere. Less than one third of vertebral compression fractures are diagnosed clinically. We present an automated method for detecting spine compression fractures in Computed Tomography (CT) scans. The algorithm is composed of three processes. First, the spinal column is segmented and sagittal patches are extracted. The patches are then binary classified using a Convolutional Neural Network (CNN). Finally, a Recurrent Neural Network (RNN) is utilized to predict whether a vertebral fracture is present in the series of patches.
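A minimal sketch of that three-stage idea, with segmentation and patch extraction assumed to be done upstream, might pair a 2D patch CNN with an LSTM over the ordered sagittal patches, as below. Layer sizes, patch shape, and names are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: a 2D CNN scores each sagittal spine patch and an LSTM
# aggregates the ordered patch features into one fracture prediction.
import torch
import torch.nn as nn

class PatchCNN(nn.Module):
    def __init__(self, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, feat_dim),
        )

    def forward(self, x):              # (B, 1, H, W)
        return self.net(x)             # (B, feat_dim)

class FractureCNNRNN(nn.Module):
    def __init__(self, feat_dim=64, hidden=64):
        super().__init__()
        self.cnn = PatchCNN(feat_dim)
        self.rnn = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, patches):        # (B, T, 1, H, W) ordered along the spine
        B, T = patches.shape[:2]
        feats = self.cnn(patches.flatten(0, 1)).view(B, T, -1)
        _, (h, _) = self.rnn(feats)    # h: (1, B, hidden)
        return self.head(h[-1])        # series-level fracture logit, (B, 1)

model = FractureCNNRNN()
print(model(torch.randn(2, 20, 1, 64, 64)).shape)  # torch.Size([2, 1])
```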
Differentiating Vertebral Compression Fractures (VCFs) associated with trauma and osteoporosis (benign VCFs) from those caused by metastatic cancer (malignant VCFs) is critically important for treatment decisions. So far, automatic VCF diagnosis is solved in a two-step manner, i.e. first identify VCFs and then classify them as benign or malignant. In this paper, we explore modeling VCF diagnosis as a three-class classification problem, i.e. normal vertebrae, benign VCFs, and malignant VCFs. However, VCF recognition and classification require very different features, and both tasks are characterized by high intra-class variation and high inter-class similarity. Moreover, the dataset is extremely class-imbalanced. To address the above challenges, we propose a novel Two-Stream Compare and Contrast Network (TSCCN) for VCF diagnosis. This network consists of two streams: a recognition stream which learns to identify VCFs by comparing and contrasting adjacent vertebrae, and a classification stream which compares and contrasts intra-class and inter-class examples to learn features for fine-grained classification. The two streams are integrated via a learnable weight control module which adaptively sets their contribution. TSCCN is evaluated on a dataset of 239 VCF patients and achieves an average sensitivity and specificity of 92.56% and 96.29%, respectively.
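The sketch below only illustrates the gating idea: two classifier streams over per-vertebra features whose logits are blended by a learnable, per-sample weight. The compare-and-contrast losses that give TSCCN its name are omitted, and all dimensions and names are assumptions rather than the published design.

```python
# Minimal sketch: two streams combined by a learnable weight-control gate.
import torch
import torch.nn as nn

class TwoStreamClassifier(nn.Module):
    def __init__(self, in_dim=256, n_classes=3):   # normal / benign VCF / malignant VCF
        super().__init__()
        self.recognition = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, n_classes))
        self.classification = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, n_classes))
        # Weight-control module: predicts a per-sample mixing coefficient in [0, 1].
        self.gate = nn.Sequential(nn.Linear(in_dim, 1), nn.Sigmoid())

    def forward(self, feats):                      # feats: (B, in_dim) per-vertebra features
        a = self.gate(feats)                       # (B, 1)
        return a * self.recognition(feats) + (1 - a) * self.classification(feats)

model = TwoStreamClassifier()
print(model(torch.randn(4, 256)).shape)            # torch.Size([4, 3])
```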
The health and socioeconomic difficulties caused by the COVID-19 pandemic continue to cause enormous tensions around the world. In particular, this extraordinary surge in the number of cases has put considerable strain on health care systems. A critical step in the treatment and management of COVID-19 positive patients is severity assessment, which is challenging even for expert radiologists given the subtleties at different stages of lung disease severity. Motivated by this challenge, we introduce COVID-Net CT-S, a suite of deep convolutional neural networks for predicting lung disease severity due to COVID-19 infection. More specifically, a 3D residual architecture design is leveraged to learn volumetric visual indicators characterizing the degree of COVID-19 lung disease severity. Experimental results using the patient cohort collected by the China National Center for Bioinformation (CNCB) showed that the proposed COVID-Net CT-S networks, by leveraging volumetric features, can achieve significantly improved severity assessment performance when compared to traditional severity assessment networks that learn and leverage 2D visual features to characterize COVID-19 severity.
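As an illustration of the volumetric design, a basic 3D residual block of the sort such a network could stack is sketched below; it is not the COVID-Net CT-S architecture itself, and the channel count and input shape are assumed.

```python
# Minimal sketch: a basic 3D residual block for volumetric CT features.
import torch
import torch.nn as nn

class ResidualBlock3D(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv3d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm3d(channels)
        self.conv2 = nn.Conv3d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm3d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)      # identity skip connection

block = ResidualBlock3D(32)
print(block(torch.randn(1, 32, 16, 64, 64)).shape)  # torch.Size([1, 32, 16, 64, 64])
```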
Neural sequence-to-sequence models are currently the dominant approach in several natural language processing tasks, but require large parallel corpora. We present a sequence-to-sequence-to-sequence autoencoder (SEQ^3), consisting of two chained encoder-decoder pairs, with words used as a sequence of discrete latent variables. We apply the proposed model to unsupervised abstractive sentence compression, where the first and last sequences are the input and reconstructed sentences, respectively, while the middle sequence is the compressed sentence. Constraining the length of the latent word sequences forces the model to distill important information from the input. A pretrained language model, acting as a prior over the latent sequences, encourages the compressed sentences to be human-readable. Continuous relaxations enable us to sample from categorical distributions, allowing gradient-based optimization, unlike alternatives that rely on reinforcement learning. The proposed model does not require parallel text-summary pairs, achieving promising results in unsupervised sentence compression on benchmark datasets.
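A heavily simplified sketch of the chained encoder-decoder idea is given below: a compressor emits a short sequence of soft word choices via Gumbel-softmax, and a reconstructor decodes the input back from them. The language-model prior, length control, and all hyperparameters are omitted or assumed; this is not the SEQ^3 implementation.

```python
# Minimal sketch: two chained encoder-decoders with differentiable discrete
# latent "words" sampled via Gumbel-softmax.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Seq3Sketch(nn.Module):
    def __init__(self, vocab=1000, emb=64, hid=128, comp_len=5):
        super().__init__()
        self.comp_len = comp_len
        self.embed = nn.Embedding(vocab, emb)
        self.enc1 = nn.GRU(emb, hid, batch_first=True)        # compressor encoder
        self.dec1 = nn.GRUCell(emb, hid)                       # compressor decoder
        self.to_vocab = nn.Linear(hid, vocab)
        self.enc2 = nn.GRU(emb, hid, batch_first=True)         # reconstructor encoder
        self.dec2 = nn.GRU(emb, hid, batch_first=True)         # reconstructor decoder
        self.out = nn.Linear(hid, vocab)

    def forward(self, tokens, tau=1.0):                        # tokens: (B, T)
        B = tokens.size(0)
        x = self.embed(tokens)
        _, h = self.enc1(x)
        h = h.squeeze(0)
        inp = x.new_zeros(B, self.embed.embedding_dim)         # start embedding (assumed)
        latent_embs = []
        for _ in range(self.comp_len):                         # emit compressed "words"
            h = self.dec1(inp, h)
            probs = F.gumbel_softmax(self.to_vocab(h), tau=tau, hard=False)
            inp = probs @ self.embed.weight                    # soft word embedding
            latent_embs.append(inp)
        z = torch.stack(latent_embs, dim=1)                    # (B, comp_len, emb)
        _, h2 = self.enc2(z)
        dec_out, _ = self.dec2(x, h2)                          # teacher forcing on the input
        return self.out(dec_out)                               # reconstruction logits (B, T, vocab)

model = Seq3Sketch()
tokens = torch.randint(0, 1000, (2, 12))
logits = model(tokens)
loss = F.cross_entropy(logits.view(-1, 1000), tokens.view(-1))
print(logits.shape, float(loss))
```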