
Time of arrival imaging: The proof of concept for a novel medical imaging modality

Added by: Tao Feng
Publication date: 2020
Language: English
Authors: Tao Feng

It has been shown that, with the use of ultra-wideband (UWB) electromagnetic signals and the time-of-arrival (ToA) principle, it is possible to locate medical implants given the permittivity distribution of the body. We propose a new imaging modality that uses the reverse process to acquire permittivity distributions as a surrogate for human anatomy. In the proposed system, the locations of the signal source and receiver and the signal shapes are assumed to be known exactly. The measured data are recorded as the time it takes for the signal to travel from the source to the receiver. The finite-difference time-domain (FDTD) method is used to model signal propagation within the phantom, for both simulation and image reconstruction. Image reconstruction is achieved using linear regression on training pairs, which consist of randomly generated images and their corresponding arrival times computed with the FDTD approach. The linear weights of the training images are chosen to minimize the difference between the arrival times of the reconstructed image and the measured arrival times. A simulation study using a UWB signal with a central frequency of 300 MHz and the Shepp-Logan phantom was carried out, with a 10 ps timing resolution used for both simulation and image reconstruction. The quantitative difference between the arrival times of the phantom and the reconstructed image decreased as the iteration number increased. The quantitative error of the reconstructed image fell below 10% after 900 iterations and reached 8.4% after 1200 iterations; with additional post-smoothing to suppress the noise pattern introduced by the reconstruction, a 6.5% error was achieved. In this paper, an approach that uses the ToA principle to achieve transmission imaging with radio waves is proposed and validated in a simulation study.
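The forward model can be made concrete with a short sketch. The following Python/NumPy fragment is a minimal illustration of the kind of 2D FDTD simulation the abstract describes, with a threshold-based first-arrival detector at the receiver; the grid spacing, pulse timing, detection threshold, and absence of absorbing boundaries are simplifying assumptions of this sketch, not details taken from the paper.

```python
import numpy as np

C0 = 3e8  # speed of light in vacuum (m/s)

def toa_fdtd_2d(eps_r, src, rec, dx=1e-2, n_steps=4000, f0=300e6, thresh=1e-3):
    """Propagate a pulsed 2D TMz wave through the relative-permittivity map
    `eps_r` (Yee grid, normalised field units) and return the time of arrival:
    the first instant |Ez| at the receiver index `rec` exceeds `thresh`
    (relative to the unit-amplitude source pulse)."""
    ny, nx = eps_r.shape
    S = 1.0 / np.sqrt(2.0)                  # Courant number for 2D stability
    dt = S * dx / C0                        # time step in seconds
    Ez = np.zeros((ny, nx))
    Hx = np.zeros((ny, nx - 1))             # H lives on staggered Yee edges
    Hy = np.zeros((ny - 1, nx))
    t0, spread = 3.0 / f0, 1.0 / f0         # Gaussian source pulse timing

    for n in range(n_steps):
        t = n * dt
        Hx -= S * (Ez[:, 1:] - Ez[:, :-1])  # update H from the curl of E
        Hy += S * (Ez[1:, :] - Ez[:-1, :])
        # update E from the curl of H, scaled by the local permittivity
        Ez[1:-1, 1:-1] += S / eps_r[1:-1, 1:-1] * (
            (Hy[1:, 1:-1] - Hy[:-1, 1:-1]) - (Hx[1:-1, 1:] - Hx[1:-1, :-1])
        )
        Ez[src] += np.exp(-((t - t0) / spread) ** 2)  # soft UWB-like source
        if abs(Ez[rec]) > thresh:           # threshold first-arrival detector
            return t
    return np.nan                           # no arrival within the window
```

As a sanity check, a homogeneous map such as `toa_fdtd_2d(np.ones((128, 128)), src=(10, 64), rec=(118, 64))` should return a time close to the straight-line source-receiver distance divided by the speed of light.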

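The reconstruction loop admits an equally short sketch. Here `forward` stands for a function that runs the FDTD model for every source-receiver pair and returns the full vector of arrival times; the permittivity range of the random training images is an arbitrary tissue-like assumption, and the weighted sum of training arrival times is used as a linear surrogate for the arrival time of the reconstructed image, as the abstract's linear-regression description suggests.

```python
import numpy as np

def reconstruct_from_toa(t_meas, forward, shape, n_iter=1200, seed=0):
    """Iterative ToA reconstruction sketch: draw random training images,
    simulate their arrival times with `forward`, and choose linear weights
    of the training images that minimise the mismatch between the weighted
    arrival times and the measured ones."""
    rng = np.random.default_rng(seed)
    imgs, toas = [], []
    recon = np.zeros(shape)
    for _ in range(n_iter):
        x = rng.uniform(1.0, 80.0, size=shape)   # random permittivity map
        imgs.append(x.ravel())
        toas.append(forward(x))                  # one ToA per src-rec pair
        T = np.column_stack(toas)                # (n_pairs, n_images)
        # least-squares weights: minimise || T w - t_meas ||_2
        w, *_ = np.linalg.lstsq(T, t_meas, rcond=None)
        recon = (np.column_stack(imgs) @ w).reshape(shape)
    return recon
```

Per the abstract, a post-smoothing pass over `recon` (for instance a Gaussian filter such as `scipy.ndimage.gaussian_filter`) suppresses the noise pattern that the random training images imprint on the reconstruction.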
Related research

Motion imaging phantoms are expensive, bulky, and difficult to transport and set up. The purpose of this paper is to demonstrate a simple approach to the design of multi-modality motion imaging phantoms that use mechanically stored energy to produce motion. We propose two phantom designs that use mainsprings and elastic bands to store energy. A rectangular piece was attached to an axle at the end of the transmission chain of each phantom and underwent a rotary motion upon release of the mechanical motor. The phantoms were imaged with MRI and ultrasound (US), the image sequences were embedded in a 1D nonlinear manifold (Laplacian eigenmap), and the spectrogram of the embedding was used to derive the angular velocity over time. The derived velocities were consistent and reproducible within a small error. The proposed motion phantom concept shows great potential for the construction of simple and affordable motion phantoms.
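As a rough illustration of that analysis pipeline (not the authors' code: the library choices and the symmetry assumption below are mine), the embedding and spectrogram steps could look as follows.

```python
import numpy as np
from scipy.signal import spectrogram
from sklearn.manifold import SpectralEmbedding

def angular_velocity(frames, frame_rate):
    """Embed an image sequence (n_frames, h, w) into a 1D Laplacian
    eigenmap and estimate the rotation rate from the spectrogram of the
    embedding, mirroring the analysis described in the abstract."""
    X = frames.reshape(len(frames), -1)          # one row per frame
    emb = SpectralEmbedding(n_components=1).fit_transform(X).ravel()
    f, t, Sxx = spectrogram(emb - emb.mean(), fs=frame_rate)
    peak = f[np.argmax(Sxx, axis=0)]             # dominant frequency (Hz)
    # A rigid rectangle looks the same after half a turn, so the dominant
    # spectrogram frequency is twice the rotation rate: omega = 2*pi*f / 2.
    return t, np.pi * peak                       # angular velocity (rad/s)
```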
Medical image processing is one of the most important topics in the field of the Internet of Medical Things (IoMT). Recently, deep learning methods have achieved state-of-the-art performance on medical image tasks. However, conventional deep learning has two main drawbacks: 1) insufficient training data and 2) the domain mismatch between the training data and the testing data. In this paper, we propose a distant domain transfer learning (DDTL) method for medical image classification and apply it to a timely problem, COVID-19 diagnosis. Several current studies indicate that lung computed tomography (CT) images can be used for fast and accurate COVID-19 diagnosis; however, well-labeled training data cannot be easily accessed due to the novelty of the disease and a number of privacy policies. The proposed method has two components: a reduced-size U-Net segmentation model and a Distant Feature Fusion (DFF) classification model. It addresses a not-well-investigated but important transfer learning problem, termed distant domain transfer learning, which aims to make efficient transfers even when the domains or the tasks are entirely different. In this study, we develop a DDTL model for COVID-19 diagnosis using the unlabeled Office-31, Caltech-256, and chest X-ray image data sets as the source data and a small set of COVID-19 lung CT images as the target data. The main contributions of this study are: 1) the proposed method benefits from unlabeled data collected from distant domains, which can be easily accessed; 2) it can effectively handle the distribution shift between the training data and the testing data; and 3) it achieved 96% classification accuracy, 13% higher than non-transfer algorithms and 8% higher than existing transfer and distant-transfer algorithms.
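As a toy illustration of fusing features drawn from several distant source domains (the layer sizes, domain count, and one-encoder-per-domain layout below are illustrative guesses, not the paper's DFF architecture, which pairs a reduced-size U-Net with the classifier), a PyTorch sketch might look like this.

```python
import torch
import torch.nn as nn

class DistantFeatureFusion(nn.Module):
    """Toy feature-fusion classifier: one small encoder per source domain
    (each could be pretrained on unlabeled distant-domain images), whose
    features are concatenated for the target-task prediction."""
    def __init__(self, n_domains=3, feat_dim=64, n_classes=2):
        super().__init__()
        self.encoders = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(32, feat_dim),
            )
            for _ in range(n_domains)
        ])
        # fused feature vector -> e.g. COVID / non-COVID on target CT slices
        self.classifier = nn.Linear(n_domains * feat_dim, n_classes)

    def forward(self, x):
        feats = [enc(x) for enc in self.encoders]   # same slice, all encoders
        return self.classifier(torch.cat(feats, dim=1))
```

For example, `DistantFeatureFusion()(torch.randn(4, 1, 64, 64))` yields a `(4, 2)` tensor of class logits.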
This article discusses how the language of causality can shed new light on two major challenges in machine learning for medical imaging: 1) data scarcity, i.e. the limited availability of high-quality annotations, and 2) data mismatch, whereby a trained algorithm may fail to generalize in clinical practice. Looking at these challenges through the lens of causality allows decisions about data collection, annotation procedures, and learning strategies to be made (and scrutinized) more transparently. We discuss how causal relationships between images and annotations can not only have profound effects on the performance of predictive models, but may even dictate which learning strategies should be considered in the first place. For example, we conclude that semi-supervision may be unsuitable for image segmentation, one of the possibly surprising insights from our causal analysis, which is illustrated with representative real-world examples from computer-aided diagnosis (skin lesion classification in dermatology) and radiotherapy (automated contouring of tumours). We highlight that being aware of and accounting for the causal relationships in medical imaging data is important for the safe development of machine learning and essential for regulation and responsible reporting. To facilitate this, we provide step-by-step recommendations for future studies.
Advances in computing power, deep learning architectures, and expert-labelled datasets have spurred the development of medical imaging artificial intelligence systems that rival clinical experts in a variety of scenarios. In 2018, the National Institutes of Health identified key focus areas for the future of artificial intelligence in medical imaging, creating a foundational roadmap for research in image acquisition, algorithms, data standardization, and translatable clinical decision support systems. Key issues raised in the report, such as data availability and the need for novel computing architectures and explainable AI algorithms, remain relevant despite the tremendous progress made over the past few years alone. Furthermore, translational goals of data sharing, validation of performance for regulatory approval, generalizability, and mitigation of unintended bias must be accounted for early in the development process. In this perspective paper, we explore challenges unique to high-dimensional clinical imaging data, in addition to highlighting some of the technical and ethical considerations in developing high-dimensional, multi-modality machine learning systems for clinical decision support.
Image denoising is of great importance for medical imaging systems, since it can improve image quality for disease diagnosis and downstream image analyses. In a variety of applications, dynamic imaging techniques are utilized to capture the time-varying features of the subject, where multiple images are acquired of the same subject at different time points. Although the signal-to-noise ratio of each time frame is usually limited by the short acquisition time, the correlation among different time frames can be exploited to improve denoising results with information shared across frames. Following the success of neural networks in computer vision, supervised deep learning methods show prominent performance in single-image denoising, but they rely on large datasets with clean-vs-noisy image pairs. Recently, several self-supervised deep denoising models have been proposed, achieving promising results without needing pairwise ground-truth clean images. In the field of multi-image denoising, however, very little work has been done on extracting correlated information from multiple slices using self-supervised deep learning methods. In this work, we propose Deformed2Self, an end-to-end self-supervised deep learning framework for dynamic imaging denoising. It combines single-image and multi-image denoising to improve image quality and uses a spatial transformer network to model motion between different slices. Further, it requires only a single noisy image with a few auxiliary observations at different time frames for training and inference. Evaluations on phantom and in vivo data with different noise statistics show that our method has comparable performance to other state-of-the-art unsupervised or self-supervised denoising methods and outperforms them at high noise levels.
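The motion-modelling step can be sketched with a generic spatial-transformer warp in PyTorch; the dense pixel-displacement convention and interpolation settings below are assumptions of this sketch rather than the Deformed2Self implementation.

```python
import torch
import torch.nn.functional as F

def warp(frame, flow):
    """Warp a frame (1, 1, H, W) with a dense displacement field `flow`
    (1, 2, H, W), the spatial-transformer step used to align neighbouring
    time frames before multi-image denoising."""
    _, _, h, w = frame.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys), dim=0).unsqueeze(0).float()  # (1, 2, H, W)
    coords = base + flow                     # displaced sampling positions
    # grid_sample expects (x, y) coordinates normalised to [-1, 1]
    grid = torch.stack(
        (2 * coords[:, 0] / (w - 1) - 1,
         2 * coords[:, 1] / (h - 1) - 1),
        dim=-1,
    )
    return F.grid_sample(frame, grid, align_corners=True)
```

In a full pipeline, a registration subnetwork would predict `flow` for each auxiliary frame, and the warped frames would be passed to the denoiser together with the noisy target frame.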
