
Automated Knee X-ray Report Generation

Added by: Aydan Gasimova
Publication date: 2021
Language: English





Gathering manually annotated images for the purpose of training a predictive model is far more challenging in the medical domain than for natural images, as it requires the expertise of qualified radiologists. We therefore propose to take advantage of past radiological exams (specifically, knee X-ray examinations) and formulate a framework capable of learning the correspondence between images and reports, and hence of generating diagnostic reports for a given X-ray examination consisting of an arbitrary number of image views. We demonstrate how aggregating the image features of individual exams and using them as conditional inputs when training a language generation model results in auto-generated exam reports that correlate well with radiologist-generated reports.
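A minimal sketch of the conditioning idea the abstract describes: per-view image features are aggregated into a single exam-level feature, which then conditions a language generation model. The tiny CNN backbone, mean aggregation, and GRU decoder are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class ExamReportGenerator(nn.Module):
    def __init__(self, vocab_size, feat_dim=512, hidden_dim=512, embed_dim=256):
        super().__init__()
        # Per-view image encoder; a tiny CNN stands in for a real backbone.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.decoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.init_h = nn.Linear(feat_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, views, tokens):
        # views: (num_views, 1, H, W) -- an exam with an arbitrary number of views.
        feats = self.encoder(views)                  # (num_views, feat_dim)
        exam_feat = feats.mean(dim=0, keepdim=True)  # aggregate into one exam feature
        h0 = torch.tanh(self.init_h(exam_feat)).unsqueeze(0)  # condition the decoder
        emb = self.embed(tokens)                     # (1, seq_len, embed_dim)
        out, _ = self.decoder(emb, h0)
        return self.out(out)                         # per-step vocabulary logits

model = ExamReportGenerator(vocab_size=1000)
views = torch.randn(3, 1, 224, 224)       # e.g. three views from one knee exam
tokens = torch.randint(0, 1000, (1, 12))  # report prefix (teacher forcing)
logits = model(views, tokens)             # (1, 12, 1000)
```

Because the aggregation is a mean over the view axis, the model accepts any number of views per exam without architectural changes.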



Related research

Recently, chest X-ray report generation, which aims to automatically generate descriptions of given chest X-ray images, has received growing research interest. The key challenge of chest X-ray report generation is to accurately capture and describe the abnormal regions. In most cases, the normal regions dominate the entire chest X-ray image, and the corresponding descriptions of these normal regions dominate the final report. Due to such data bias, learning-based models may fail to attend to abnormal regions. In this work, to effectively capture and describe abnormal regions, we propose the Contrastive Attention (CA) model. Instead of solely focusing on the current input image, the CA model compares the current input image with normal images to distill contrastive information. The acquired contrastive information can better represent the visual features of abnormal regions. According to experiments on the public IU-X-ray and MIMIC-CXR datasets, incorporating our CA into several existing models boosts their performance across most metrics. In addition, the analysis shows that the CA model helps existing models better attend to abnormal regions and produce more accurate descriptions, which is crucial for an interpretable diagnosis. Notably, we achieve state-of-the-art results on the two public datasets.
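A small sketch of the contrastive idea: the current image's feature is compared, via attention, with a pool of normal-image features, and the residual is kept as the contrastive information. The scaled dot-product attention and residual form are assumptions for illustration; the CA model's exact aggregation may differ.

```python
import torch
import torch.nn.functional as F

def contrastive_attention(query_feat, normal_feats):
    """Distill contrastive information by comparing the current image
    with a pool of normal images (scaled dot-product attention assumed)."""
    # query_feat:   (d,)   feature of the current X-ray
    # normal_feats: (n, d) features of n known-normal X-rays
    scores = normal_feats @ query_feat / query_feat.shape[0] ** 0.5  # (n,)
    weights = F.softmax(scores, dim=0)
    common = weights @ normal_feats   # the part the image shares with normals
    # The residual emphasizes what normal anatomy does NOT explain --
    # a proxy for abnormal-region information.
    return query_feat - common

query = torch.randn(512)          # current image feature
normals = torch.randn(100, 512)   # pool of normal-image features
contrastive_info = contrastive_attention(query, normals)  # (512,)
```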
We present a fully automated learning-based approach for segmenting knee cartilage in the presence of osteoarthritis (OA). The algorithm employs a hierarchical set of two random forest classifiers. The first is a neighborhood approximation forest, whose output probability map is used as a feature set for the second random forest (RF) classifier. The output probabilities of the hierarchical approach are used as cost functions in a Layered Optimal Graph Image Segmentation of Multiple Objects and Surfaces (LOGISMOS). In this work, we highlight a novel post-processing interaction called just-enough interaction (JEI), which enables quick and accurate generation of a large set of training examples. Disjoint sets of 15 and 13 subjects were used for training, and the method was tested on another disjoint set of 53 knee datasets. All images were acquired using a double echo steady state (DESS) MRI sequence and come from the Osteoarthritis Initiative (OAI) database. Segmentation using the learning-based cost function showed a significant reduction in segmentation errors ($p < 0.05$) in comparison with conventional gradient-based cost functions.
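A toy sketch of the hierarchical two-stage design, with an ordinary random forest standing in for the neighborhood approximation forest: stage-1 class probabilities are appended to the feature set of stage 2, whose probabilities would in turn become LOGISMOS cost functions. The synthetic data and forest sizes are placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy per-voxel features and cartilage/background labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 10))                # 10 appearance features per voxel
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic labels

# Stage 1 (stands in for the neighborhood approximation forest).
rf1 = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
prob_map = rf1.predict_proba(X)[:, 1:]         # stage-1 probability map

# Stage 2 consumes the original features plus the stage-1 probabilities.
# (In practice the stage-1 probabilities should come from held-out
# predictions, to avoid leaking training labels into stage 2.)
X2 = np.hstack([X, prob_map])
rf2 = RandomForestClassifier(n_estimators=50, random_state=0).fit(X2, y)

# Stage-2 probabilities would then be turned into boundary cost
# functions for the LOGISMOS graph search (not reproduced here).
costs = 1.0 - rf2.predict_proba(X2)[:, 1]
```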
Our paper focuses on automating the generation of medical reports from chest X-ray image inputs, a critical yet time-consuming task for radiologists. Unlike existing medical report generation efforts that tend to produce human-readable reports, we aim to generate medical reports that are both fluent and clinically accurate. This is achieved by our fully differentiable and end-to-end paradigm containing three complementary modules: taking the chest X-ray images and clinical history document of patients as inputs, our classification module produces an internal checklist of disease-related topics, referred to as enriched disease embedding; the embedding representation is then passed to our transformer-based generator, giving rise to the medical reports; meanwhile, our generator also produces the weighted embedding representation, which is fed to our interpreter to ensure consistency with respect to disease-related topics. Our approach achieved promising results on commonly used metrics concerning language fluency and clinical accuracy. Moreover, noticeable performance gains are consistently observed when additional input information is available, such as the clinical document and extra scans of different views.
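A hypothetical sketch of the three-module loop (classifier, generator, interpreter); the layer sizes, the single-vector memory, and the MSE consistency term are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReportPipeline(nn.Module):
    def __init__(self, vocab=1000, topics=14, d=256):
        super().__init__()
        self.classifier = nn.Linear(512, topics)   # image features -> topic logits
        self.topic_embed = nn.Linear(topics, d)    # "enriched disease embedding"
        self.token_embed = nn.Embedding(vocab, d)
        layer = nn.TransformerDecoderLayer(d_model=d, nhead=4, batch_first=True)
        self.generator = nn.TransformerDecoder(layer, num_layers=2)
        self.out = nn.Linear(d, vocab)
        self.interpreter = nn.Linear(d, topics)    # re-predicts topics from the report

    def forward(self, img_feat, tokens):
        topic_logits = self.classifier(img_feat)              # disease checklist
        memory = self.topic_embed(topic_logits).unsqueeze(1)  # (B, 1, d)
        hidden = self.generator(self.token_embed(tokens), memory)
        report_logits = self.out(hidden)                      # word predictions
        # The interpreter checks that the generated states still encode
        # the same disease topics; the MSE term enforces consistency.
        recon = self.interpreter(hidden.mean(dim=1))
        consistency = F.mse_loss(recon, topic_logits.detach())
        return report_logits, consistency

model = ReportPipeline()
img_feat = torch.randn(2, 512)            # pooled chest X-ray features
tokens = torch.randint(0, 1000, (2, 16))  # report tokens (teacher forcing)
logits, consistency_loss = model(img_feat, tokens)
```

The interpreter term is what makes the pipeline end-to-end differentiable while still tying the generated text back to the disease checklist.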
Medical imaging plays a pivotal role in diagnosis and treatment in clinical practice. Inspired by the significant progress in automatic image captioning, various deep learning (DL)-based architectures have been proposed for generating radiology reports for medical images. However, model uncertainty (i.e., model reliability/confidence in report generation) is still an under-explored problem. In this paper, we propose a novel method to explicitly quantify both the visual uncertainty and the textual uncertainty for the task of radiology report generation. Such multi-modal uncertainties can capture the model's confidence scores at both the report level and the sentence level, and they are further leveraged to weight the losses for more comprehensive model optimization. Our experimental results demonstrate that the proposed uncertainty characterization and estimation provides more reliable confidence scores for radiology report generation, and that the proposed uncertainty-weighted losses result in state-of-the-art performance on a public radiology report dataset.
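The abstract does not give the loss form, so the sketch below uses the generic heteroscedastic-uncertainty weighting (a learned log-variance down-weights uncertain samples and is itself penalized) as a stand-in for the paper's uncertainty-weighted losses.

```python
import torch
import torch.nn as nn

class UncertaintyWeightedLoss(nn.Module):
    """Weight a per-sample loss by a learned log-variance: confident
    samples keep full weight, uncertain ones are down-weighted, and the
    additive log-variance term stops the model from claiming infinite
    uncertainty to zero out the loss."""

    def __init__(self):
        super().__init__()
        self.log_var = nn.Parameter(torch.zeros(1))  # learned uncertainty

    def forward(self, per_sample_loss):
        return (torch.exp(-self.log_var) * per_sample_loss + self.log_var).mean()

criterion = UncertaintyWeightedLoss()
raw_losses = torch.rand(8)       # e.g. per-sentence generation losses
loss = criterion(raw_losses)
loss.backward()                  # gradients flow to log_var as well
```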
Machine learning has been an emerging tool for various aspects of infectious diseases, including tuberculosis surveillance and detection. However, the WHO has provided no recommendations on using computer-aided tuberculosis detection software because of the small number of studies, methodological limitations, and limited generalizability of the findings. To quantify the generalizability of a machine-learning model, we developed a Deep Convolutional Neural Network (DCNN) model using a TB-specific CXR dataset from one population (National Library of Medicine Shenzhen No.3 Hospital) and tested it on a non-TB-specific CXR dataset from another population (National Institutes of Health Clinical Center). The findings suggest that a supervised deep learning model developed using a training dataset from one population may not have the same diagnostic performance in another population. Technical specifications of CXR images, disease severity distribution, overfitting, and overdiagnosis should be examined before implementation in other settings.
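A toy illustration of the evaluation protocol the study describes: fit on one population, then score on a second population with a shifted distribution. The logistic regression and synthetic features are stand-ins for the DCNN and CXR data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Two synthetic "populations" with shifted feature distributions.
rng = np.random.default_rng(0)
X_a = rng.normal(0.0, 1.0, size=(1000, 20))
y_a = (X_a[:, 0] > 0).astype(int)           # population A (development)
X_b = rng.normal(0.5, 1.3, size=(1000, 20))
y_b = (X_b[:, 0] > 0.5).astype(int)         # population B (external test)

model = LogisticRegression().fit(X_a, y_a)  # develop on population A only
auc_internal = roc_auc_score(y_a, model.predict_proba(X_a)[:, 1])
auc_external = roc_auc_score(y_b, model.predict_proba(X_b)[:, 1])
print(f"internal AUC {auc_internal:.3f}, external AUC {auc_external:.3f}")
```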
