Quantitative imaging in MRI usually involves acquiring and reconstructing a series of images at multiple echo times, which typically requires more scan time and specialized reconstruction techniques compared to conventional qualitative imaging. In this work, we focus on optimizing the acquisition and reconstruction process of a multi-echo gradient echo pulse sequence for quantitative susceptibility mapping, an important quantitative imaging method in MRI. A multi-echo sampling pattern optimization block, extended from LOUPE-ST, is proposed to optimize the k-space sampling patterns across echoes. In addition, a recurrent temporal feature fusion block is proposed and inserted into a backbone deep ADMM network to capture the signal evolution along echo time during reconstruction. Experiments show that both blocks improve multi-echo image reconstruction performance.
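As a loose illustration of the sampling-pattern block, below is a minimal PyTorch sketch of a LOUPE-ST-style learnable mask with an independent pattern per echo. All names (`MultiEchoSampler`, `slope`) and the binarization details are our assumptions, not the paper's implementation; the actual LOUPE-ST block also renormalizes the mask toward a target sampling rate.

```python
import torch
import torch.nn as nn

class MultiEchoSampler(nn.Module):
    """Hypothetical LOUPE-ST-style learnable k-space sampling mask,
    with an independent pattern per echo (names are illustrative)."""
    def __init__(self, num_echoes, height, width, slope=5.0):
        super().__init__()
        # One learnable logit per k-space location per echo.
        self.logits = nn.Parameter(torch.zeros(num_echoes, height, width))
        self.slope = slope

    def forward(self, kspace):
        # kspace: (batch, num_echoes, height, width), complex-valued.
        prob = torch.sigmoid(self.slope * self.logits)  # soft mask in (0, 1)
        hard = (prob > 0.5).float()                     # binary undersampling mask
        # Straight-through estimator: binary mask in the forward pass,
        # sigmoid gradient in the backward pass.
        mask = hard + prob - prob.detach()
        return kspace * mask.unsqueeze(0), mask
```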
Learning-based methods have enabled the recovery of a video sequence from a single motion-blurred image or a single coded exposure image. Recovering video from a single motion-blurred image is severely ill-posed, and the recovered video usually exhibits many artifacts. Moreover, the direction of motion is lost, resulting in motion ambiguity. A motion-blurred image does, however, fully preserve the information in the static parts of the scene. The traditional coded exposure framework is better posed, but it samples only a fraction of the space-time volume, at best 50%. Here, we propose to use the complementary information in the fully exposed (blurred) image together with the coded exposure image to recover a high-fidelity video without motion ambiguity. Our framework consists of a shared encoder followed by an attention module that selectively combines the spatial information from the fully exposed image with the temporal information from the coded image; the combined representation is then super-resolved to recover a non-ambiguous high-quality video. The input to our algorithm is a fully exposed and coded image pair; such an acquisition system already exists in the form of a coded two-bucket (C2B) camera. We demonstrate that our deep learning approach using the blurred-coded image pair produces much better results than using just a blurred image or just a coded image.
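A minimal sketch of the fusion idea, assuming a PyTorch implementation with grayscale inputs; the module and layer choices (`AttentionFusion`, a per-pixel sigmoid gate) are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    """Illustrative fusion of blurred-image and coded-image features
    (a sketch of the idea, not the authors' exact network)."""
    def __init__(self, channels=64):
        super().__init__()
        # Shared encoder applied to both inputs.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
        )
        # Predicts a per-pixel, per-channel gate from concatenated features.
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 1), nn.Sigmoid(),
        )

    def forward(self, blurred, coded):
        f_blur = self.encoder(blurred)  # spatial detail from the fully exposed image
        f_code = self.encoder(coded)    # temporal cues from the coded image
        a = self.gate(torch.cat([f_blur, f_code], dim=1))
        # Convex per-pixel combination of the two feature maps.
        return a * f_blur + (1.0 - a) * f_code
```

The fused features would then feed the super-resolution stage described in the abstract.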
Spin-echo functional MRI (SE-fMRI) has the potential to improve spatial specificity compared to gradient-echo fMRI. However, high-spatiotemporal-resolution SE-fMRI with large slice coverage is challenging, as SE-fMRI requires a long echo time (TE) to generate blood oxygenation level-dependent (BOLD) contrast, leading to long repetition times (TR). The aim of this work is to develop an acquisition method that enhances the slice coverage of SE-fMRI at high spatiotemporal resolution. An acquisition scheme was developed, entitled Multisection Excitation by Simultaneous Spin-echo Interleaving (MESSI) with complex-encoded generalized SLIce Dithered Enhanced Resolution (cgSlider). MESSI utilizes the dead time during the long TE by interleaving the excitation and readout of two slices to enable 2x slice acceleration, while cgSlider utilizes the stable temporal background phase in SE-fMRI to encode and decode two adjacent slices simultaneously with a phase-constrained reconstruction method. The proposed cgSlider-MESSI was also combined with Simultaneous Multi-Slice (SMS) imaging to achieve further slice acceleration. This combined approach was used to achieve 1.5 mm isotropic whole-brain SE-fMRI with a temporal resolution of 1.5 s and was evaluated using sensory stimulation and breath-hold tasks at 3T. Compared to conventional SE-SMS, cgSlider-MESSI-SMS provides a four-fold increase in slice coverage for the same TR, with comparable temporal signal-to-noise ratio. The corresponding fMRI activations from cgSlider-MESSI-SMS for both fMRI tasks were consistent with those from conventional SE-SMS. Overall, cgSlider-MESSI-SMS achieved a 32x encoding acceleration by combining R_inplane × MB × cgSlider × MESSI = 4 × 2 × 2 × 2. High-quality, high-resolution whole-brain SE-fMRI was acquired at a short TR using cgSlider-MESSI-SMS.
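A quick arithmetic check of the quoted factors (variable names are ours):

```python
# Net encoding acceleration quoted in the abstract:
# in-plane x multiband (SMS) x cgSlider x MESSI.
r_inplane, multiband, cgslider, messi = 4, 2, 2, 2

total = r_inplane * multiband * cgslider * messi
print(total)  # 32, the stated 32x encoding acceleration

# The two slice-interleaving mechanisms alone account for the
# stated four-fold slice-coverage gain at a fixed TR.
print(cgslider * messi)  # 4
```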
Low-dose CT image reconstruction has been a popular research topic in recent years. A typical reconstruction method based on post-log measurements is penalized weighted least squares (PWLS). Due to the underlying limitations of the post-log statistical model, PWLS reconstruction quality is often degraded in low-dose scans. This paper investigates a likelihood function based on a shifted-Poisson (SP) model of the pre-log raw measurements, which better represents the measurement statistics, together with a data-driven regularizer exploiting a Union of Learned TRAnsforms; the resulting method is dubbed SPULTRA. Both the SP-induced data-fidelity term and the regularizer in the proposed framework are nonconvex. The SPULTRA algorithm handles the SP-induced data-fidelity term with quadratic surrogate functions. Each iteration then involves a quadratic subproblem for updating the image, and a sparse coding and clustering subproblem that has a closed-form solution. SPULTRA has a computational cost per iteration similar to its recent counterpart PWLS-ULTRA, which uses post-log measurements, and it provides better image reconstruction quality than PWLS-ULTRA, especially in low-dose scans.
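For context, the conventional shifted-Poisson formulation from the pre-log CT literature is sketched below; the paper's exact notation, weighting, and constraints may differ.

```latex
% Shifted-Poisson model for pre-log measurements y_i, with
% \sigma^2 the electronic-noise variance, I_0 the incident intensity,
% and [Ax]_i the i-th line integral of the image x:
%   y_i + \sigma^2 \sim \mathrm{Poisson}\big(\bar{y}_i(x) + \sigma^2\big),
%   \qquad \bar{y}_i(x) = I_0\, e^{-[Ax]_i}.
% The negative log-likelihood (up to constants) gives the nonconvex
% data-fidelity term, combined with a regularizer R(x):
\hat{x} = \arg\min_{x \ge 0}
  \sum_i \Big[ \big(\bar{y}_i(x) + \sigma^2\big)
  - (y_i + \sigma^2)\,\log\big(\bar{y}_i(x) + \sigma^2\big) \Big]
  + \beta\, R(x)
```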
We investigate the properties of a recently proposed Gradient Echo Memory (GEM) scheme for mapping information between optical and atomic systems. We show that GEM can be described by the dynamic formation of polaritons in k-space. This picture highlights the flexibility and robustness of the scheme with regard to external control of the storage process. Our results also show that, because GEM is a frequency-encoding memory, it can accurately preserve the shape of signals with large time-bandwidth products, even at moderate optical depths. At higher optical depths, we show that GEM is a high-fidelity multi-mode quantum memory.
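For orientation, a commonly used linearized form of the GEM equations of motion is sketched below (our notation; decay terms omitted, and details may differ from the paper).

```latex
% Linearized Maxwell--Bloch equations for GEM: two-level atoms with
% coherence \sigma(z,t), coupling g, effective density N, slowly varying
% field envelope \mathcal{E}(z,t), and a linear detuning gradient \eta z:
\partial_t \sigma(z,t) = -\, i\,\eta z\, \sigma(z,t) + i\, g\, \mathcal{E}(z,t),
\qquad
\partial_z \mathcal{E}(z,t) = i\, \frac{g N}{c}\, \sigma(z,t).
% Fourier transforming in z shows the stored excitation drifting through
% k-space at rate \eta, roughly k(t) \approx k(0) + \eta t; reversing the
% gradient (\eta \to -\eta) drives the polariton back toward k = 0, where
% it re-couples to the optical field and the echo is emitted.
```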
Modern one-stage video instance segmentation networks suffer from two limitations. First, convolutional features are aligned with neither anchor boxes nor ground-truth bounding boxes, reducing the mask sensitivity to spatial location. Second, a video is divided into individual frames for frame-level instance segmentation, ignoring the temporal correlation between adjacent frames. To address these issues, we propose a simple yet effective one-stage video instance segmentation framework based on spatial calibration and temporal fusion, named STMask. To ensure spatial feature calibration with ground-truth bounding boxes, we first predict regressed bounding boxes around the ground-truth bounding boxes and extract features from them for frame-level instance segmentation. To further exploit the temporal correlation among video frames, we add a temporal fusion module that propagates instance masks from each frame to its adjacent frames, which helps our framework handle challenging videos with motion blur, partial occlusion, and unusual object-to-camera poses. Experiments on the YouTube-VIS validation set show that the proposed STMask with a ResNet-50/-101 backbone obtains 33.5%/36.8% mask AP while achieving 28.6/23.4 FPS on video instance segmentation. The code is released at https://github.com/MinghanLi/STMask.
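A minimal PyTorch sketch of one plausible temporal fusion block is shown below; the names and layer choices are our assumptions, not the released STMask code, which should be consulted for the actual design.

```python
import torch
import torch.nn as nn

class TemporalFusion(nn.Module):
    """Illustrative temporal fusion: enrich the current frame's features
    with those of an adjacent frame before mask prediction
    (a sketch of the idea, not the released STMask implementation)."""
    def __init__(self, channels=256):
        super().__init__()
        self.reduce = nn.Conv2d(2 * channels, channels, 1)
        self.refine = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, feat_t, feat_adj):
        # feat_t, feat_adj: (batch, channels, H, W) backbone features
        # of the current frame and an adjacent frame.
        fused = self.reduce(torch.cat([feat_t, feat_adj], dim=1))
        # Residual refinement keeps the current frame dominant while
        # borrowing temporal context from its neighbor.
        return feat_t + self.refine(fused)
```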