Single-particle cryo-electron microscopy (cryo-EM) reconstructs the three-dimensional (3D) structure of biomolecules from a large set of 2D projection images with random and unknown orientations. A crucial step in the single-particle cryo-EM pipeline is 3D refinement, which resolves a high-resolution 3D structure from an initial approximate volume by refining the estimate of each projection's orientation. In this work, we propose a new approach that refines the projection angles on the continuum. We formulate the optimization problem jointly over the density map and the orientations. The density map is updated with the efficient alternating-direction method of multipliers, while the orientations are updated through a semi-coordinate-wise gradient descent for which we provide an explicit derivation of the gradient. Our method eliminates the need for a fine discretization of the orientation space and does away with the classical but computationally expensive template-matching step. Numerical results demonstrate the feasibility and performance of our approach compared to several baselines.
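As a rough illustration of this alternating scheme (and not the paper's actual implementation), the sketch below jointly refines a toy 2D density and its projection angles: a plain gradient step on the image stands in for the ADMM volume update, and each angle receives its own gradient step with the volume held fixed. All sizes and step counts are illustrative assumptions.

```python
# Minimal toy sketch: joint refinement of a 2D "volume" and per-projection angles.
# The gradient step on `vol` stands in for the ADMM update described in the paper;
# the per-projection angle steps mirror the (semi-)coordinate-wise descent.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
N = 64                      # image side length
num_proj = 8                # number of projections

def rotate(img, angle):
    """Differentiably rotate a (1,1,N,N) image by `angle` (radians)."""
    c, s = torch.cos(angle), torch.sin(angle)
    theta = torch.stack([torch.stack([c, -s, torch.zeros_like(c)]),
                         torch.stack([s,  c, torch.zeros_like(c)])]).unsqueeze(0)
    grid = F.affine_grid(theta, img.shape, align_corners=False)
    return F.grid_sample(img, grid, align_corners=False)

def project(img, angle):
    """1D parallel-beam projection: rotate, then sum along one axis."""
    return rotate(img, angle).sum(dim=2).squeeze()

# Ground truth: a square phantom and random true angles.
gt = torch.zeros(1, 1, N, N)
gt[..., 20:44, 24:40] = 1.0
true_angles = torch.rand(num_proj) * 3.14159
data = torch.stack([project(gt, a) for a in true_angles])

# Unknowns: the density map and perturbed angle estimates.
vol = torch.zeros(1, 1, N, N, requires_grad=True)
angles = (true_angles + 0.1 * torch.randn(num_proj)).requires_grad_(True)

opt_vol = torch.optim.Adam([vol], lr=5e-2)
opt_ang = torch.optim.Adam([angles], lr=1e-2)

for it in range(200):
    # (1) Density update with the angles fixed (ADMM step in the actual method).
    opt_vol.zero_grad()
    loss = sum(((project(vol, angles[i].detach()) - data[i]) ** 2).mean()
               for i in range(num_proj))
    loss.backward()
    opt_vol.step()

    # (2) Per-projection angle updates with the density fixed.
    opt_ang.zero_grad()
    loss = sum(((project(vol.detach(), angles[i]) - data[i]) ** 2).mean()
               for i in range(num_proj))
    loss.backward()
    opt_ang.step()

print("max angle error (rad):", (angles - true_angles).abs().max().item())
```

In the real pipeline the projections are 2D images of a 3D volume and the volume update solves a regularized least-squares problem via ADMM; the toy above only mirrors the alternating structure.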
Oscillating Steady-State Imaging (OSSI) is a recent fMRI acquisition method that exploits a large and oscillating signal and can provide high-SNR fMRI. However, the oscillatory nature of the signal requires an increased number of acquisitions. To improve temporal resolution and accurately model the nonlinearity of OSSI signals, we build the MR physics of OSSI signal generation into a regularizer for the undersampled reconstruction, rather than relying on subspace models that are not well suited to the data. Our proposed physics-based manifold model turns the disadvantages of OSSI acquisition into advantages and enables joint reconstruction and quantification. The OSSI manifold model (OSSIMM) outperforms subspace models and reconstructs high-resolution fMRI images with a factor-of-12 acceleration and without smoothing away spatial or temporal resolution. Furthermore, OSSIMM can dynamically quantify important physics parameters, including $R_2^*$ maps, with a temporal resolution of 150 ms.
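To make the idea of a physics-based regularizer concrete, here is a minimal, hypothetical sketch: the image series is fit to undersampled k-space data while being pulled toward the output of a signal model evaluated at per-voxel physics parameters, which are optimized jointly and therefore yield quantitative maps as a by-product. A mono-exponential $R_2^*$ decay stands in for the actual OSSI signal model, and all names (`signal_model`, `lam`, ...) and sizes are illustrative.

```python
# Toy physics-regularized reconstruction: || A x - y ||^2 + lam * || x - M(rho, R2*) ||^2
import torch

torch.manual_seed(0)
N, n_te = 32, 6
TE = torch.linspace(0.005, 0.030, n_te)                # readout times (s)

def signal_model(rho, R2s, TE):
    # Toy stand-in for the OSSI physics: mono-exponential decay rho * exp(-TE * R2*).
    return rho[None] * torch.exp(-TE[:, None, None] * R2s[None])

def A(x, mask):
    # Undersampled Fourier encoding applied frame-by-frame.
    return mask * torch.fft.fft2(x, norm="ortho")

def recon_loss(x, y, mask, rho, R2s, TE, lam=1.0):
    data_fit = (A(x, mask) - y).abs().pow(2).mean()            # k-space consistency
    physics = (x - signal_model(rho, R2s, TE)).pow(2).mean()   # manifold/physics prior
    return data_fit + lam * physics

# Synthetic ground truth and undersampled measurements (~4x undersampling).
rho_true = torch.rand(N, N)
R2s_true = 20 + 30 * torch.rand(N, N)                  # 1/s
mask = (torch.rand(n_te, N, N) < 0.25).float()
y = A(signal_model(rho_true, R2s_true, TE), mask)

# Jointly optimize the image series and the physics parameters; the latter
# directly provide quantitative (here R2*) maps.
x = torch.zeros(n_te, N, N, requires_grad=True)
rho = torch.rand(N, N, requires_grad=True)
R2s = torch.full((N, N), 30.0, requires_grad=True)
opt = torch.optim.Adam([x, rho, R2s], lr=1e-1)
for _ in range(300):
    opt.zero_grad()
    loss = recon_loss(x, y, mask, rho, R2s, TE)
    loss.backward()
    opt.step()

print("mean R2* error (1/s):", (R2s - R2s_true).abs().mean().item())
```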
Cryogenic electron microscopy (cryo-EM) provides images from different copies of the same biomolecule in arbitrary orientations. Here, we present an end-to-end unsupervised approach that learns individual particle orientations from cryo-EM data while reconstructing the average 3D map of the biomolecule, starting from a random initialization. The approach relies on an auto-encoder architecture in which the latent space is explicitly interpreted as orientations that the decoder uses to form an image according to the linear projection model. We evaluate our method on simulated data and show that it can reconstruct 3D particle maps from noise- and CTF-corrupted 2D projection images with unknown particle orientations.
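Below is a minimal sketch of the auto-encoder idea, reduced to a 2D toy so it stays short: the encoder maps each observed projection to an orientation (here a single in-plane angle on the unit circle), and the decoder renders that projection from a learned density with a fixed, differentiable linear projection operator. The network sizes and the 1D-projection setting are assumptions for illustration, not the architecture of the paper (which works with 2D images of a 3D map and includes CTF corruption).

```python
# Toy orientation-latent auto-encoder: encoder -> angle, decoder -> linear projection.
import torch
import torch.nn as nn
import torch.nn.functional as F

N = 64   # projection length / density side

class OrientationEncoder(nn.Module):
    """Maps an observed projection to an in-plane angle (the latent variable)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(N, 128), nn.ReLU(), nn.Linear(128, 2))

    def forward(self, proj):                        # proj: (B, N)
        cs = F.normalize(self.net(proj), dim=-1)    # point on the unit circle
        return torch.atan2(cs[..., 1], cs[..., 0])  # angle in radians

class ProjectionDecoder(nn.Module):
    """Renders a projection of a learned density at the predicted orientation."""
    def __init__(self):
        super().__init__()
        self.density = nn.Parameter(torch.zeros(1, 1, N, N))

    def forward(self, angle):                       # angle: (B,)
        c, s = torch.cos(angle), torch.sin(angle)
        theta = torch.stack([torch.stack([c, -s, torch.zeros_like(c)], -1),
                             torch.stack([s,  c, torch.zeros_like(c)], -1)], -2)
        grid = F.affine_grid(theta, (angle.shape[0], 1, N, N), align_corners=False)
        rotated = F.grid_sample(self.density.expand(angle.shape[0], -1, -1, -1),
                                grid, align_corners=False)
        return rotated.sum(dim=2).squeeze(1)        # linear (line) projection

encoder, decoder = OrientationEncoder(), ProjectionDecoder()
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

def train_step(batch):                              # batch: (B, N) observed projections
    opt.zero_grad()
    loss = F.mse_loss(decoder(encoder(batch)), batch)   # plain reconstruction loss
    loss.backward()
    opt.step()
    return loss.item()

print(train_step(torch.randn(8, N)))                # placeholder data, real inputs are cryo-EM projections
```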
Cryo-EM reconstruction algorithms seek to determine a molecule's 3D density map from a series of noisy, unlabeled 2D projection images captured with an electron microscope. Although reconstruction algorithms typically model the 3D volume as a generic function parameterized as a voxel array or neural network, the underlying atomic structure of the protein of interest places well-defined physical constraints on the reconstructed structure. In this work, we exploit prior information provided by an atomic model to reconstruct distributions of 3D structures from a cryo-EM dataset. We propose Cryofold, a generative model for a continuous distribution of 3D volumes based on a coarse-grained model of the protein's atomic structure, with radial basis functions used to model atom locations and their physics-based constraints. Although the reconstruction objective is highly non-convex when formulated in terms of atomic coordinates (similar to the protein folding problem), we show that gradient descent-based methods can reconstruct a continuous distribution of atomic structures when initialized from a structure within the underlying distribution. This approach is a promising direction for integrating biophysical simulation, learned neural models, and experimental data for 3D protein structure determination.
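The following sketch illustrates the coarse-grained generative idea under simplifying assumptions: pseudo-atom positions are converted to a density map with isotropic Gaussian radial basis functions, a toy bond-length term stands in for the physics-based constraints, and the coordinates are refined by gradient descent from an initialization near the true structure (mirroring the paper's reliance on a starting structure within the distribution). Grid size, widths, and weights are illustrative, and the simple L2 objective replaces the actual reconstruction likelihood.

```python
# Toy RBF atomic model: density = sum of Gaussians at pseudo-atom positions.
import torch

torch.manual_seed(0)
D, sigma = 32, 1.5                                   # grid side, RBF width (voxels)
ax = torch.arange(D, dtype=torch.float32)
zz, yy, xx = torch.meshgrid(ax, ax, ax, indexing="ij")
grid = torch.stack([zz, yy, xx], dim=-1)             # (D, D, D, 3)

def rbf_volume(atoms):
    """Sum of isotropic Gaussians centred at the (n_atoms, 3) positions."""
    d2 = ((grid[None] - atoms[:, None, None, None, :]) ** 2).sum(-1)
    return torch.exp(-d2 / (2 * sigma ** 2)).sum(0)  # (D, D, D)

def bond_penalty(atoms, rest=2.0):
    # Toy stand-in for physics-based constraints: consecutive pseudo-atoms
    # are encouraged to keep a preferred inter-atom distance.
    d = (atoms[1:] - atoms[:-1]).norm(dim=1)
    return ((d - rest) ** 2).mean()

# Synthetic "ground truth": a set of pseudo-atoms and its density map.
true_atoms = 8 + 16 * torch.rand(20, 3)
target = rbf_volume(true_atoms)

# Initialize near the truth, since the objective is highly non-convex in the coordinates.
atoms = (true_atoms + 0.8 * torch.randn_like(true_atoms)).requires_grad_(True)
opt = torch.optim.Adam([atoms], lr=0.05)
for _ in range(500):
    opt.zero_grad()
    loss = ((rbf_volume(atoms) - target) ** 2).mean() + 0.01 * bond_penalty(atoms)
    loss.backward()
    opt.step()

print("mean coordinate error (voxels):", (atoms - true_atoms).norm(dim=1).mean().item())
```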
Particle picking is a time-consuming step in single-particle analysis and often requires significant intervention from users, which has become a bottleneck for future automated electron cryo-microscopy (cryo-EM). Here we report a deep learning framework, called DeepPicker, to address this problem and fill the current gaps toward a fully automated cryo-EM pipeline. DeepPicker employs a novel cross-molecule training strategy to capture common features of particles from previously analyzed micrographs, and thus does not require any human intervention during particle picking. Tests on recently published cryo-EM data of three complexes demonstrate that our deep-learning-based scheme can successfully accomplish human-level particle picking and identify a sufficient number of particles, comparable to those picked manually by human experts. These results indicate that DeepPicker provides a practically useful tool to significantly reduce the time and manual effort spent in single-particle analysis and thus greatly facilitate high-resolution cryo-EM structure determination.
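As a rough sketch of CNN-based particle picking (not the actual DeepPicker architecture), a small binary classifier can score fixed-size micrograph patches as particle vs. background; cross-molecule training then amounts to fitting such a classifier on patches pooled from previously analyzed datasets of other molecules. The layer sizes and the sliding-window scorer below are illustrative assumptions.

```python
# Toy particle-vs-background classifier applied over a sliding window.
import torch
import torch.nn as nn

class ParticleClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 1)

    def forward(self, patches):                      # (B, 1, patch, patch)
        return self.head(self.features(patches).flatten(1)).squeeze(1)

def score_micrograph(model, micrograph, patch=64, stride=32):
    """Slide a window over an (H, W) micrograph and return particle scores."""
    windows = micrograph.unfold(0, patch, stride).unfold(1, patch, stride)
    h, w = windows.shape[:2]
    with torch.no_grad():
        logits = model(windows.reshape(-1, 1, patch, patch))
    return torch.sigmoid(logits).reshape(h, w)       # one score per window position

model = ParticleClassifier()                         # in practice: trained on pooled particle patches
scores = score_micrograph(model, torch.randn(512, 512))
print(scores.shape)                                  # grid of particle probabilities
```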
The core problem of Magnetic Resonance Imaging (MRI) is the trade-off between acceleration and image quality. Image reconstruction and super-resolution are two crucial techniques in MRI. Current methods are designed to perform these tasks separately, ignoring the correlations between them. In this work, we propose an end-to-end task transformer network (T$^2$Net) for joint MRI reconstruction and super-resolution, which allows representations and feature transmission to be shared between the two tasks to achieve higher-quality, super-resolved, and motion-artifact-free images from highly undersampled and degraded MRI data. Our framework combines reconstruction and super-resolution in two sub-branches, whose features are expressed as queries and keys. Specifically, we encourage joint feature learning between the two tasks, thereby transferring accurate task information. We first use two separate CNN branches to extract task-specific features. Then, a task transformer module is designed to embed and synthesize the relevance between the two tasks. Experimental results show that our multi-task model significantly outperforms advanced sequential methods, both quantitatively and qualitatively.
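To illustrate the two-branch design with cross-task attention (a simplified stand-in, not the exact T$^2$Net modules), the sketch below extracts task-specific features with two small CNN branches and lets the super-resolution features query the reconstruction features through standard multi-head attention. Channel counts and the use of `nn.MultiheadAttention` are assumptions for illustration.

```python
# Toy two-branch network with a cross-task attention fusion module.
import torch
import torch.nn as nn

class Branch(nn.Module):
    """Small CNN branch extracting task-specific features."""
    def __init__(self, ch=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.body(x)

class TaskAttention(nn.Module):
    """Queries come from one task's features, keys/values from the other's."""
    def __init__(self, ch=32, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(ch, heads, batch_first=True)

    def forward(self, q_feat, kv_feat):
        B, C, H, W = q_feat.shape
        q = q_feat.flatten(2).transpose(1, 2)        # (B, H*W, C)
        kv = kv_feat.flatten(2).transpose(1, 2)
        fused, _ = self.attn(q, kv, kv)
        return q_feat + fused.transpose(1, 2).reshape(B, C, H, W)

recon_branch, sr_branch = Branch(), Branch()
fuse = TaskAttention()
head = nn.Conv2d(32, 1, 3, padding=1)

x = torch.randn(2, 1, 32, 32)                        # undersampled, degraded input (small for the demo)
fused = fuse(sr_branch(x), recon_branch(x))          # SR features query recon features
out = head(fused)
print(out.shape)
```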