A central problem in neuroscience is reconstructing neuronal circuits at the synapse level. Because brain architecture spans a wide range of scales, such reconstruction requires imaging that is both high-resolution and high-throughput. Existing electron microscopy (EM) techniques possess the required resolution in the lateral plane and offer either high throughput or high depth resolution, but not both. Here, we exploit recent advances in unsupervised learning and signal processing to obtain high depth-resolution EM images computationally without sacrificing throughput. First, we show that brain tissue can be represented as a sparse linear combination of localized basis functions that are learned from high-resolution datasets. We then develop compressive sensing-inspired techniques that can reconstruct the tissue from very few (typically five) tomographic views of each section. This enables tracing of neuronal processes and, hence, high-throughput reconstruction of neural circuits at the level of individual synapses.
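For readers unfamiliar with this kind of pipeline, the core computational step, recovering a patch that is sparse in a learned dictionary from only a handful of projections, can be posed as an l1-regularized inverse problem. The Python sketch below is purely illustrative: the dictionary D, the projection operator Phi, and all sizes are random placeholders, not the authors' learned basis or tomographic geometry.

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_pixels, n_atoms, n_views = 256, 512, 5            # hypothetical sizes

D = rng.standard_normal((n_pixels, n_atoms))        # stand-in for the learned dictionary
Phi = rng.standard_normal((n_views, n_pixels))      # stand-in for five tomographic projections
a_true = rng.standard_normal(n_atoms) * (rng.random(n_atoms) < 0.02)   # sparse code
y = Phi @ (D @ a_true)                              # the few measurements actually acquired

# Recover the sparse code a from y = Phi D a, then reconstruct the patch as x = D a.
lasso = Lasso(alpha=0.01, max_iter=10_000)
lasso.fit(Phi @ D, y)
x_hat = D @ lasso.coef_

With only five measurements the recovery is of course far from exact in this toy setting; the point is only the structure of the optimization, in which sparsity in the learned basis substitutes for the missing depth sampling.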
Magnetic resonance imaging (MRI) images often have limited and unsatisfactory resolution due to physical, technological, and economic constraints. Super-resolution techniques can obtain high-resolution MRI images. Traditional methods enhance the resolution of brain MRI by interpolation, which degrades the accuracy of the subsequent diagnostic process, while the demand for brain image quality keeps increasing. In this paper, we propose an image super-resolution (SR) method based on overcomplete dictionaries and the inherent similarity of an image to recover the high-resolution (HR) image from a single low-resolution (LR) image. We exploit the nonlocal similarity of the image to search for similar blocks across the whole image and present a joint reconstruction method based on compressive sensing (CS) and similarity constraints; the sparsity and self-similarity of image blocks are taken as the constraints. The proposed method is summarized in the following steps. First, a dictionary classification method based on the measurement domain is presented: image blocks are classified into smooth, texture, and edge parts by analyzing their features in the measurement domain. Then, the corresponding dictionaries are trained using the classified image blocks. Finally, in the reconstruction stage, we use the CS reconstruction method to recover the HR brain MRI image, taking both the nonlocal similarity and the sparsity of the image as constraints. The proposed method performs better both visually and quantitatively than several existing methods.
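A minimal sketch of the per-block pipeline described above, assuming the three dictionaries have already been trained: a stand-in classifier assigns a block to the smooth, texture, or edge class from a measurement-domain statistic, and the block is then sparse-coded against the matching dictionary. The thresholds, the spread-based classifier, and the OMP solver are illustrative choices, not the paper's exact design, and the nonlocal-similarity constraint is omitted.

import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def classify_block(y, t_smooth=0.05, t_edge=0.5):
    """Crude stand-in classifier based on the spread of the CS measurements y."""
    s = np.std(y)
    if s < t_smooth:
        return "smooth"
    return "edge" if s > t_edge else "texture"

def reconstruct_block(y, Phi, dictionaries, n_nonzero=8):
    """Recover one HR block from measurements y = Phi x, with x sparse in the chosen dictionary."""
    D = dictionaries[classify_block(y)]              # pick the class-specific dictionary
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero)
    omp.fit(Phi @ D, y)                              # sparse coding against the sensed dictionary
    return D @ omp.coef_                             # HR block estimate (similarity term omitted)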
Magnetic resonance imaging (MRI) at high spatial resolution provides detailed anatomical information and is often necessary for accurate quantitative analysis. However, high spatial resolution typically comes at the expense of longer scan time, less spatial coverage, and lower signal-to-noise ratio (SNR). Single Image Super-Resolution (SISR), a technique aimed at restoring high-resolution (HR) details from a single low-resolution (LR) input image, has improved dramatically with recent breakthroughs in deep learning. In this paper, we introduce a new neural network architecture, the 3D Densely Connected Super-Resolution Network (DCSRN), to restore HR features of structural brain MR images. Through experiments on a dataset of 1,113 subjects, we demonstrate that our network outperforms bicubic interpolation as well as other deep learning methods in restoring 4x resolution-reduced images.
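To make the idea of dense connectivity in 3D concrete, here is a rough PyTorch sketch of a densely connected 3D block applied to an interpolated LR volume. The layer count, growth rate, and activation are assumptions chosen for illustration, not the published DCSRN configuration.

import torch
import torch.nn as nn

class DenseBlock3D(nn.Module):
    def __init__(self, in_ch=1, growth=16, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_ch
        for _ in range(n_layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm3d(ch), nn.ELU(),
                nn.Conv3d(ch, growth, kernel_size=3, padding=1)))
            ch += growth                              # dense connectivity: each layer sees all earlier features
        self.out = nn.Conv3d(ch, in_ch, kernel_size=1)  # map the concatenated features back to one channel

    def forward(self, x):                             # x: (batch, 1, D, H, W) interpolated LR volume
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return self.out(torch.cat(feats, dim=1))      # restored HR estimate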
Symbolic methods of analysis are valuable tools for investigating complex time-dependent signals. In particular, the ordinal method defines sequences of symbols according to the order in which values appear in a time series. This method has been shown to yield useful information even when applied to signals with large noise contamination. Here we use ordinal analysis to investigate the transition between eyes-closed (EC) and eyes-open (EO) resting states. We analyze two EEG datasets (with 71 and 109 healthy subjects) recorded under different conditions (sampling rates and numbers of scalp electrodes). Using as diagnostic tools the permutation entropy, the entropy computed from symbolic transition probabilities, and an asymmetry coefficient (which measures the asymmetry of the likelihood of transitions between symbols), we show that ordinal analysis applied to the raw data distinguishes the two brain states. In both datasets, we find that the EO state is characterized by higher entropies and a lower asymmetry coefficient than the EC state. Our results thus show that these diagnostic tools have the potential to detect and characterize changes in time-evolving brain states.
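The first two diagnostic tools are straightforward to compute; a minimal NumPy sketch follows, with the embedding dimension d and the lag as illustrative choices (the asymmetry coefficient is not reproduced here, since its exact definition is given in the paper).

import numpy as np
from math import factorial

def ordinal_symbols(x, d=3, lag=1):
    """Map each window of d samples (spaced by lag) to the permutation that sorts it."""
    windows = np.lib.stride_tricks.sliding_window_view(x, d * lag)[:, ::lag]
    perms = np.argsort(windows, axis=1)
    return (perms * d ** np.arange(d)).sum(axis=1)    # encode each permutation as one integer symbol

def entropy(p):
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def permutation_entropy(x, d=3, lag=1):
    _, counts = np.unique(ordinal_symbols(x, d, lag), return_counts=True)
    return entropy(counts / counts.sum()) / np.log(factorial(d))   # normalized to [0, 1]

def transition_entropy(x, d=3, lag=1):
    s = ordinal_symbols(x, d, lag)
    _, counts = np.unique(np.stack([s[:-1], s[1:]]), axis=1, return_counts=True)
    return entropy(counts / counts.sum())             # entropy of symbol-to-symbol transitions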
This paper introduces a framework for super-resolution of scalable video based on compressive sensing and sparse representation of residual frames in reconnaissance and surveillance applications. We exploit efficient compressive sampling and sparse reconstruction algorithms to super-resolve the video sequence at different compression rates. The sparsity of the information contained in residual frames is the key point in devising our framework. Moreover, a controlling factor, the compressibility threshold, is defined to control the complexity-performance trade-off. Numerical experiments confirm the efficiency of the proposed framework both in compression rate and in the quality of the reconstructed video sequence, measured by PSNR. The framework achieves a more efficient compression rate and higher video quality than other state-of-the-art algorithms when performance-complexity trade-offs are considered.
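A schematic of the residual-frame handling described above might look as follows; the measurement operator Phi, the compressibility measure, the threshold tau, and the Lasso-based recovery are placeholders standing in for the paper's actual encoder and reconstruction algorithm.

import numpy as np
from sklearn.linear_model import Lasso

def encode_residual(frame, prediction, Phi, tau=0.1):
    """Return CS measurements of the residual if it is compressible enough, else the raw residual."""
    r = (frame - prediction).ravel()
    compressibility = np.count_nonzero(np.abs(r) > 1e-3) / r.size
    if compressibility > tau:                         # residual too dense: fall back to sending it directly
        return "raw", r
    return "cs", Phi @ r                              # few random measurements of the sparse residual

def decode_residual(kind, payload, Phi, shape):
    if kind == "raw":
        return payload.reshape(shape)
    lasso = Lasso(alpha=0.01, max_iter=5_000)         # sparse recovery: min ||Phi r - y||^2 + alpha ||r||_1
    lasso.fit(Phi, payload)
    return lasso.coef_.reshape(shape)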
With super-resolution optical microscopy, it is now possible to observe molecular interactions in living cells. The resulting images have very high spatial precision, but their overall quality can vary considerably depending on the structure of interest and the imaging parameters. Moreover, evaluating this quality is often difficult for non-expert users. In this work, we tackle the problem of learning the quality function of super-resolution images from scores provided by experts. More specifically, we propose a system based on a deep neural network that provides a quantitative quality measure for a STED image of neuronal structures given as input. We conduct a user study to evaluate the predictions of the neural network against those of a human expert. The results show the potential of the approach while highlighting some of its limits.
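As a rough illustration of such a score-regression setup (not the authors' network), a small convolutional regressor mapping a single-channel STED image to a scalar quality score could be sketched in PyTorch as follows.

import torch
import torch.nn as nn

class QualityNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1))
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(64, 1), nn.Sigmoid())

    def forward(self, x):                             # x: (batch, 1, H, W) STED image
        return self.head(self.features(x))            # predicted quality score in [0, 1]

# Such a network would be trained by regressing the expert scores, e.g. with nn.MSELoss().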