
Clinical Micro-CT Empowered by Interior Tomography, Robotic Scanning, and Deep Learning

Added by Mengzhou Li
Publication date: 2020
Language: English





While micro-CT systems are instrumental in preclinical research, clinical micro-CT imaging has long been desired, with cochlear implantation as a primary example. The structural details of the cochlear implant and the temporal bone require a significantly higher image resolution than that (about 0.2 mm) provided by current medical CT scanners. In this paper, we propose a clinical micro-CT (CMCT) system design that integrates conventional spiral cone-beam CT, contemporary interior tomography, deep learning techniques, and micro-focus X-ray source, photon-counting detector (PCD), and robotic-arm technologies for ultrahigh-resolution localized tomography of a freely selected volume of interest (VOI) at a minimized radiation dose. The whole system consists of a standard CT scanner for the clinical CT exam and VOI specification, and a robotic-arm-based micro-CT scanner for a local scan at much higher spatial and spectral resolution and a much reduced radiation dose. The prior information from the global scan is fully utilized for background compensation to improve interior tomography from local data, yielding accurate and stable VOI reconstruction. Our results and analysis show that the proposed hybrid reconstruction algorithm delivers superior local reconstruction and is insensitive to misalignment of the isocenter position and the initial view angle in data/image registration, while the attenuation error caused by scale mismatch can be effectively addressed with bias correction. These findings demonstrate the feasibility of our system design. We envision that deep learning techniques can be further leveraged to optimize imaging performance. By synergistically combining high-resolution imaging, high dose efficiency, and low system cost, the proposed CMCT system has great potential in temporal bone imaging as well as various other clinical applications.
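As a rough illustration of the background-compensation idea described above, the following Python sketch reconstructs a VOI from truncated local projections after subtracting the contribution of the surrounding anatomy estimated from the global scan, and then applies a constant bias correction. It uses scikit-image's 2D radon/iradon as stand-ins for the actual cone-beam projector and backprojector; the function and variable names (reconstruct_voi, voi_mask), the assumption that the global and local data are registered onto a common grid, and the mean-matching form of the bias correction are illustrative choices, not the paper's exact algorithm.

```python
# Hedged 2D sketch of background-compensated interior tomography.
# Assumes `global_img` (coarse reconstruction from the standard CT scan),
# `local_sino` (truncated micro-CT sinogram of the VOI), and `voi_mask`
# are all registered onto one common image/detector grid.
from skimage.transform import radon, iradon

def reconstruct_voi(global_img, local_sino, voi_mask, theta):
    # 1) Estimate the background: forward-project the global reconstruction
    #    with the VOI zeroed out.
    background = global_img * (1.0 - voi_mask)
    bg_sino = radon(background, theta=theta, circle=True)

    # 2) Subtract the estimated background from the local sinogram, leaving
    #    (approximately) line integrals through the VOI only.
    voi_sino = local_sino - bg_sino

    # 3) Reconstruct the VOI with filtered backprojection.
    voi_img = iradon(voi_sino, theta=theta, filter_name="ramp", circle=True)
    voi_img *= voi_mask

    # 4) Bias correction for the attenuation-scale mismatch between scanners:
    #    shift the VOI reconstruction so its mean matches the global prior
    #    inside the VOI (one simple illustrative choice of correction).
    bias = global_img[voi_mask > 0].mean() - voi_img[voi_mask > 0].mean()
    return voi_img + bias * voi_mask
```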

Related research


In sparse-view computed tomography (CT), only a small number of projection images are taken around the object, and the sinogram interpolation method has a significant impact on the final image quality. When the amount of sparsity (the fraction of missing views in the sinogram data) is not high, conventional interpolation methods yield good results; when the sparsity is high, more advanced sinogram interpolation methods are needed. Recently, several deep learning (DL) based sinogram interpolation methods have been proposed. However, these DL-based methods have so far mostly been tested on computer-simulated sinogram data rather than on experimentally acquired sinogram data. In this study, we developed a sinogram interpolation method for sparse-view micro-CT based on the combination of a U-Net and residual learning. We applied the method to sinogram data obtained from sparse-view micro-CT experiments in which the sparsity reached 90%. The sinogram interpolated by the DL neural network was fed to the FBP algorithm for reconstruction. The results show that both the RMSE and SSIM of the CT images are greatly improved. The experimental results demonstrate that this sinogram interpolation method produces significantly better results than standard linear interpolation when the sinogram data are extremely sparse.
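A minimal PyTorch sketch of the residual U-Net idea is given below. The exact depth, channel widths, and training details of the network used in the study are not specified in the abstract, so the values here are assumptions; the point is the structure: the network receives a (for example, linearly pre-interpolated) sparse sinogram and learns the residual to the fully sampled sinogram, which would then be passed to FBP for reconstruction.

```python
# Hedged sketch of a residual U-Net for sparse-view sinogram interpolation.
# Layer sizes are illustrative; sinogram height/width assumed divisible by 4.
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
    )

class ResidualUNet(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.enc1 = conv_block(1, ch)
        self.enc2 = conv_block(ch, 2 * ch)
        self.bottleneck = conv_block(2 * ch, 4 * ch)
        self.up2 = nn.ConvTranspose2d(4 * ch, 2 * ch, 2, stride=2)
        self.dec2 = conv_block(4 * ch, 2 * ch)
        self.up1 = nn.ConvTranspose2d(2 * ch, ch, 2, stride=2)
        self.dec1 = conv_block(2 * ch, ch)
        self.out = nn.Conv2d(ch, 1, 1)
        self.pool = nn.MaxPool2d(2)

    def forward(self, x):
        # x: pre-interpolated sparse sinogram, shape (B, 1, views, detectors)
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return x + self.out(d1)  # residual learning: refine the interpolation

# Training sketch: minimize MSE against densely sampled sinograms, e.g.
#   loss = nn.MSELoss()(ResidualUNet()(sparse), dense)
```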
A fundamental problem in X-ray computed tomography (CT) is scatter due to the interaction of photons with the imaged object. Unless corrected, scatter manifests itself as degradations of the reconstructions in the form of various artifacts, so scatter correction is critical for reconstruction quality. Scatter correction methods can be divided into two categories: hardware-based and software-based. Despite success in specific settings, hardware-based methods require modifications to the hardware or an increase in scan time or dose, which makes software-based methods attractive. In this context, Monte Carlo scatter estimation, analytical-numerical, and kernel-based methods have been developed, and data-driven approaches to the problem were recently demonstrated. In this work, two novel physics-inspired deep-learning-based methods, PhILSCAT and OV-PhILSCAT, are proposed. The methods estimate and correct for the scatter in the acquired projection measurements, incorporating both an initial reconstruction of the object of interest and the scatter-corrupted measurements related to it. They use a common deep neural network architecture and cost function, both tailored to the problem. Numerical experiments with data obtained by Monte Carlo simulations of phantom imaging reveal a significant improvement over a recent purely projection-domain deep neural network scatter correction method.
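The abstract does not detail the PhILSCAT architecture, but the projection-domain data flow it describes can be sketched roughly as follows: a network takes each scatter-corrupted projection together with a reprojection of an initial reconstruction, predicts a scatter map, and the correction is obtained by subtraction. The layer configuration and the forward_project placeholder below are assumptions for illustration only, not the authors' design.

```python
# Hedged sketch of projection-domain, physics-informed scatter correction:
# a CNN conditioned on a reprojected initial reconstruction predicts the
# scatter field, which is subtracted from the measured projection.
import torch
import torch.nn as nn

class ScatterEstimator(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(                       # 2-channel input:
            nn.Conv2d(2, ch, 3, padding=1), nn.ReLU(),  # measured + reprojected
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 1, 3, padding=1),             # predicted scatter map
        )

    def forward(self, measured_proj, reprojected_initial):
        x = torch.cat([measured_proj, reprojected_initial], dim=1)
        return self.net(x)

def correct_projection(model, measured_proj, initial_recon, forward_project):
    # `forward_project` is a placeholder for the system's projector.
    reproj = forward_project(initial_recon)
    scatter = model(measured_proj, reproj)
    return measured_proj - scatter                      # scatter-corrected data
```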
Wei Zhao, Tianling Lv, Peng Gao (2019)
In a standard computed tomography (CT) image, pixels with the same Hounsfield units (HU) can correspond to different materials, and it is therefore challenging to differentiate and quantify materials. Dual-energy CT (DECT) is desirable for differentiating multiple materials, but DECT scanners are not as widely available as single-energy CT (SECT) scanners. Here we develop a deep learning approach to perform DECT imaging using standard SECT data. A deep learning model that maps the low-energy image to the high-energy image with a two-stage convolutional neural network (CNN) is developed. The model was evaluated on patients who received a contrast-enhanced abdominal DECT scan, using a popular DE application: virtual non-contrast (VNC) imaging and contrast quantification. The HU differences between the predicted and original high-energy CT images are 3.47, 2.95, 2.38, and 2.40 HU for ROIs on the spine, aorta, liver, and stomach, respectively. The HU differences between VNC images obtained from the original DECT and the deep learning DECT are 4.10, 3.75, 2.33, and 2.92 HU for the four ROIs, respectively. The aortic iodine quantification difference between iodine maps obtained from the original DECT and the deep learning DECT images is 0.9%, suggesting high consistency between the predicted and original high-energy CT images. This study demonstrates that highly accurate DECT imaging from single low-energy data is achievable using a deep learning approach. The proposed method can significantly simplify DECT system design, reducing the scanning dose and imaging cost.
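A hedged PyTorch sketch of the two-stage CNN mapping from a low-energy to a high-energy image is shown below. The stage depth, channel width, and residual formulation are illustrative assumptions rather than the authors' exact design; the sketch only conveys the cascaded "coarse estimate, then refinement" structure.

```python
# Hedged sketch of a two-stage CNN for SECT-to-DECT image mapping.
import torch
import torch.nn as nn

def stage(ch=64, layers=5):
    # A small plain CNN: conv+ReLU blocks ending in a single-channel output.
    blocks = [nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(inplace=True)]
    for _ in range(layers - 2):
        blocks += [nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True)]
    blocks += [nn.Conv2d(ch, 1, 3, padding=1)]
    return nn.Sequential(*blocks)

class TwoStageDECT(nn.Module):
    def __init__(self):
        super().__init__()
        self.stage1 = stage()   # coarse low-to-high-energy mapping
        self.stage2 = stage()   # refinement of the stage-1 estimate

    def forward(self, low_energy_img):
        coarse = low_energy_img + self.stage1(low_energy_img)  # residual stage 1
        return coarse + self.stage2(coarse)                    # residual stage 2

# With the predicted high-energy image and the measured low-energy image,
# standard DECT post-processing (e.g., VNC or iodine maps) can then follow.
```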
The realization of practical intelligent reflecting surface (IRS)-assisted multi-user communication (IRS-MUC) systems critically depends on proper beamforming design exploiting accurate channel state information (CSI). However, channel estimation (CE) in IRS-MUC systems requires a significantly large training overhead due to the numerous reflection elements of the IRS. In this paper, we adopt a deep learning approach to implicitly learn historical channel features and directly predict the IRS phase shifts for the next time slot so as to maximize the average achievable sum-rate of an IRS-MUC system while taking user mobility into account. In this way, only a low-dimensional multiple-input single-output (MISO) CE is needed for transmit beamforming design, which significantly reduces the CE overhead. To this end, a location-aware convolutional long short-term memory network (LA-CLNet) is first developed to facilitate predictive beamforming at the IRS, where convolutional and recurrent units are jointly adopted to exploit the spatial and temporal features of the channels simultaneously. Given the predicted IRS phase-shift beamforming, an instantaneous CSI (ICSI)-aware fully-connected neural network (IA-FNN) is then proposed to optimize the transmit beamforming matrix at the access point. Simulation results demonstrate that the sum-rate performance achieved by the proposed method approaches that of a genie-aided scheme with full perfect ICSI.
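The following PyTorch sketch illustrates the joint convolutional-recurrent idea behind LA-CLNet as described above: convolutional layers extract spatial features from a history of channel/location inputs, an LSTM models their temporal evolution, and a linear head outputs the next-slot phase shift for each IRS element. The tensor layout, layer sizes, and the direct phase parameterization are assumptions, not the authors' exact network.

```python
# Hedged sketch of a convolutional + recurrent predictor of IRS phase shifts.
import torch
import torch.nn as nn

class PhaseShiftPredictor(nn.Module):
    def __init__(self, n_elements, feat_ch=16, hidden=128):
        super().__init__()
        self.cnn = nn.Sequential(                  # per-time-step spatial features
            nn.Conv2d(2, feat_ch, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(8), nn.Flatten(),
        )
        self.lstm = nn.LSTM(feat_ch * 8 * 8, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_elements)  # one phase per IRS element

    def forward(self, history):
        # history: (batch, time, 2, H, W) stacked channel/location maps
        b, t = history.shape[:2]
        feats = self.cnn(history.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(feats)                  # temporal modeling
        # Predict next-slot phases, bounded to (-pi, pi).
        return torch.pi * torch.tanh(self.head(out[:, -1]))
```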
Electroencephalogram (EEG) monitoring and objective seizure identification is an essential clinical investigation for some patients with epilepsy. Accurate annotation is performed through a time-consuming process by EEG specialists. Computer-assisted systems for seizure detection currently lack extensive clinical utility due to retrospective, patient-specific, and/or irreproducible studies that result in low sensitivity or high false-positive rates in clinical tests. We aim to significantly reduce the time and resources required for data annotation by demonstrating continental generalization of seizure detection that balances sensitivity and specificity. This is a prospective inference test of artificial intelligence on nearly 14,590 hours of adult EEG data from patients with epilepsy, recorded between 2011 and 2019 in a hospital in Sydney, Australia. The inference set includes patients with different types and frequencies of seizures across a wide range of ages and EEG recording hours. We validated our inference model in an AI-assisted mode, with a human expert arbiter and a result review panel of expert neurologists and EEG specialists, on 66 sessions to demonstrate achievement of the same performance with over an order-of-magnitude reduction in time. Our inference on 1,006 EEG recording sessions of the Australian dataset achieved 76.68% with nearly 56 [0, 115] false alarms per 24 hours on average, against legacy ground-truth annotations by human experts conducted independently over nine years. Our pilot test of 66 sessions with a human arbiter, and ground truth reviewed by a panel of experts, confirmed identical human performance of 92.19% with the AI-assisted system, while the average time requirement was significantly reduced from 90 to 7.62 minutes.
