
SpotNet - Learned iterations for cell detection in image-based immunoassays

Added by Pol del Aguila Pla
Publication date: 2018
Language: English





Accurate cell detection and counting in the image-based ELISpot and FluoroSpot immunoassays is a challenging task. Recently proposed methodology matches human accuracy by leveraging knowledge of the underlying physical process of these assays and using proximal optimization methods to solve an inverse problem. Nonetheless, thousands of computationally expensive iterations are often needed to reach a near-optimal solution. In this paper, we exploit the structure of the iterations to design a parameterized computation graph, SpotNet, that learns the patterns embedded within several training images and their respective cell information. Further, we compare SpotNet to a convolutional neural network layout customized for cell detection. We show empirical evidence that, while both designs obtain a detection performance on synthetic data far beyond that of a human expert, SpotNet is easier to train and obtains better estimates of particle secretion for each cell.
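The core idea, unrolling a proximal optimization scheme into a fixed number of trainable stages, can be sketched as follows. This is a generic ISTA-style unrolling in PyTorch, not the exact SpotNet architecture; the module name `UnrolledProxNet`, the filter sizes, and the non-negative soft-thresholding prox are illustrative assumptions.

```python
# Generic ISTA-style unrolled network: a fixed number of trainable proximal
# gradient stages. NOT the exact SpotNet architecture; filter sizes and the
# non-negative soft-thresholding prox are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class UnrolledProxNet(nn.Module):  # hypothetical name
    def __init__(self, n_iters: int = 10, channels: int = 1):
        super().__init__()
        # One learned filter pair and one threshold per unrolled iteration.
        self.analysis = nn.ModuleList(
            [nn.Conv2d(channels, channels, 5, padding=2) for _ in range(n_iters)])
        self.synthesis = nn.ModuleList(
            [nn.Conv2d(channels, channels, 5, padding=2) for _ in range(n_iters)])
        self.thresholds = nn.Parameter(torch.full((n_iters,), 0.1))

    def forward(self, y: torch.Tensor) -> torch.Tensor:
        x = torch.zeros_like(y)                  # estimate of the spot/secretion map
        for k in range(len(self.analysis)):
            residual = self.synthesis[k](x) - y  # data-consistency term
            x = x - self.analysis[k](residual)   # learned gradient step
            x = F.relu(x - self.thresholds[k])   # non-negative soft-threshold (prox)
        return x
```

Training such a network would pair observed assay images with their known cell positions and secretion maps and backpropagate a reconstruction loss through all unrolled stages, which is how a handful of learned iterations can stand in for thousands of generic proximal steps.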



Related research

Cell detection and counting in the image-based ELISpot and FluoroSpot immunoassays is considered a bottleneck. The task has remained hard to automate, and biomedical researchers often have to rely on results that are not accurate. Previously proposed solutions are heuristic, and data-based solutions suffer from a lack of objective ground-truth data. In this paper, we analyze a partial differential equation (PDE) model for ELISpot, FluoroSpot, and assays of similar design. This leads us to a mathematical observation model for the images generated by these assays. We use this model to motivate a methodology for cell detection. Finally, we provide a real-data example suggesting that this cell detection methodology and a human expert perform comparably.
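As a rough illustration of the kind of observation model described above, each cell can be thought of as a point-like source of secreted particles whose diffusion blurs it into a spot, so the recorded image is approximately the source map convolved with a diffusion-driven kernel plus noise. The sketch below uses a Gaussian kernel purely as a stand-in for the kernel the PDE analysis would actually yield, and a naive local-maximum detector instead of the paper's methodology; all numeric settings are arbitrary.

```python
# Toy synthesis under a convolutional observation model: point sources blurred
# by a kernel standing in for the diffusion solution, plus noise. Detection
# here is a naive local-maximum search, not the paper's method.
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

rng = np.random.default_rng(0)
H = W = 256
n_cells = 20
sources = np.zeros((H, W))
rows = rng.integers(10, H - 10, size=n_cells)
cols = rng.integers(10, W - 10, size=n_cells)
sources[rows, cols] = rng.uniform(1.0, 2.0, size=n_cells)  # secretion intensities

image = gaussian_filter(sources, sigma=3.0)                 # blurred "spots"
image += rng.normal(scale=0.002, size=image.shape)          # measurement noise

# Local maxima above an (arbitrary) threshold; overlapping spots or low
# contrast would quickly defeat this, which is why model-based inversion helps.
peaks = (image == maximum_filter(image, size=7)) & (image > 0.01)
print("detected cells:", int(peaks.sum()))
```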
Recent works on learned image compression perform the encoding and decoding processes in a full-resolution manner, which causes two problems in practical deployments. First, parallel acceleration of the autoregressive entropy model cannot be achieved because decoding is serial. Second, full-resolution inference often causes out-of-memory (OOM) errors on limited GPU resources, especially for high-resolution images. Block partitioning is a good design choice to handle these issues, but it brings new challenges in reducing the redundancy between blocks and eliminating block effects. To tackle these challenges, this paper presents a learned block-based hybrid image compression (LBHIC) framework. Specifically, we introduce explicit intra prediction into a learned image compression framework to exploit the relations among adjacent blocks. Going beyond the context modeling by linear weighting of neighboring pixels used in traditional codecs, we propose a contextual prediction module (CPM) that better captures long-range correlations by using strip pooling to extract the most relevant information from the neighboring latent space, thus achieving effective information prediction. Moreover, to alleviate blocking artifacts, we further propose a boundary-aware post-processing module (BPM) that takes edge importance into account. Extensive experiments demonstrate that the proposed LBHIC codec outperforms VVC, with a bit-rate saving of 4.1%, and reduces decoding time by approximately 86.7% compared with state-of-the-art learned image compression methods.
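The strip-pooling operation that the CPM is described as building on can be sketched in a few lines. The module below is an illustrative PyTorch implementation of generic strip pooling (average pooling over full rows and columns, thin convolutions on each strip, and a gating fusion); it is not the paper's CPM, and all layer names and shapes are assumptions.

```python
# Illustrative strip pooling block (not the paper's CPM): average over full
# rows and columns, refine each strip with a thin convolution, broadcast back,
# and use the fused strips to gate the input features.
import torch
import torch.nn as nn
import torch.nn.functional as F

class StripPool(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.conv_h = nn.Conv2d(channels, channels, (3, 1), padding=(1, 0))  # along height
        self.conv_w = nn.Conv2d(channels, channels, (1, 3), padding=(0, 1))  # along width
        self.fuse = nn.Conv2d(channels, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, h, w = x.shape
        h_strip = self.conv_h(F.adaptive_avg_pool2d(x, (h, 1)))  # (n, c, h, 1)
        w_strip = self.conv_w(F.adaptive_avg_pool2d(x, (1, w)))  # (n, c, 1, w)
        context = h_strip.expand(n, c, h, w) + w_strip.expand(n, c, h, w)
        return x * torch.sigmoid(self.fuse(context))             # gated long-range context
```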
Microorganisms such as bacteria and fungi play essential roles in many application fields, such as biotechnology, medical technology, and industry. Microorganism counting techniques are crucial in microorganism analysis, helping biologists and related researchers quantitatively analyze microorganisms and calculate characteristics such as biomass concentration and biological activity. However, traditional manual counting methods, such as plate counting, hemocytometry, and turbidimetry, are time-consuming, subjective, and require complex operations, which makes them difficult to apply at large scale. To improve this situation, image analysis has been applied to microorganism counting since the 1980s, encompassing digital image processing, image segmentation, image classification, and related techniques. Image analysis-based microorganism counting methods are efficient compared with traditional plate counting methods. In this article, we study the development of microorganism counting methods based on digital image analysis. First, the microorganisms are grouped into bacteria and other microorganisms. Then, the related articles are summarized according to their image segmentation methods, and each part of the article is reviewed by methodology. Moreover, commonly used image processing methods for microorganism counting are summarized and analyzed to identify common technological points. More than 144 papers are outlined in this article. In conclusion, this paper provides new ideas for the future development of microorganism counting and systematic suggestions for implementing integrated microorganism counting systems. Researchers in other fields can also refer to the techniques analyzed in this paper.
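As a concrete example of the common technological points surveyed here, a minimal segmentation-based counting pipeline thresholds the image, labels connected components, and filters them by size. The sketch below uses scikit-image and assumes bright objects on a dark background; the Otsu threshold and the `min_area` cutoff are arbitrary example settings, not values taken from any reviewed paper.

```python
# Minimal segmentation-based counting sketch using scikit-image; the Otsu
# threshold and the minimum-area filter are arbitrary example settings.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def count_objects(gray: np.ndarray, min_area: int = 20) -> int:
    mask = gray > threshold_otsu(gray)   # global thresholding (segmentation)
    labeled = label(mask)                # connected-component labelling
    return sum(1 for r in regionprops(labeled) if r.area >= min_area)
```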
Although deep learning-based image compression methods have achieved promising progress in recent years, their performance still cannot match that of the latest compression standard, Versatile Video Coding (VVC). Most recent developments focus on designing a more accurate and flexible entropy model that can better parameterize the distributions of the latent features. However, few efforts are devoted to structuring a better transformation between the image space and the latent feature space. In this paper, instead of employing previous autoencoder-style networks to build this transformation, we propose an enhanced Invertible Encoding Network based on invertible neural networks (INNs) to largely mitigate the information-loss problem and achieve better compression. Experimental results on the Kodak, CLIC, and Tecnick datasets show that our method outperforms existing learned image compression methods and compression standards, including VVC (VTM 12.1), especially for high-resolution images. Our source code is available at https://github.com/xyq7/InvCompress.
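The basic building block of such invertible neural networks is a coupling layer, which splits the channels, transforms one half conditioned on the other, and can be inverted exactly, so no information is lost in the transform itself. The sketch below shows a generic additive coupling layer in PyTorch; it is not the paper's architecture, the transform network `t` is an illustrative choice, and an even channel count is assumed.

```python
# Generic additive coupling layer, the basic invertible building block of INNs;
# not the paper's architecture. Assumes an even channel count.
import torch
import torch.nn as nn

class AdditiveCoupling(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        half = channels // 2
        self.t = nn.Sequential(
            nn.Conv2d(half, half, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(half, half, 3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x1, x2 = x.chunk(2, dim=1)
        return torch.cat([x1, x2 + self.t(x1)], dim=1)   # y = (x1, x2 + t(x1))

    def inverse(self, y: torch.Tensor) -> torch.Tensor:
        y1, y2 = y.chunk(2, dim=1)
        return torch.cat([y1, y2 - self.t(y1)], dim=1)   # exact reconstruction
```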
For learned image compression, the autoregressive context model has proved effective in improving rate-distortion (RD) performance because it helps remove spatial redundancy among latent representations. However, the decoding process must follow a strict scan order, which breaks parallelization. We propose a parallelizable checkerboard context model (CCM) to solve this problem. Our two-pass checkerboard context calculation eliminates the limitation on spatial locations by reorganizing the decoding order. Speeding up decoding by more than 40 times in our experiments, it achieves significantly improved computational efficiency with almost the same rate-distortion performance. To the best of our knowledge, this is the first exploration of a parallelization-friendly spatial context model for learned image compression.
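The checkerboard idea itself is simple to illustrate: latent positions are split into two interleaved sets, the first decoded in parallel without spatial context and the second decoded in parallel conditioned on the first. The sketch below only builds the mask and states the two passes as comments; it is a schematic of the general idea, not the paper's implementation.

```python
# Schematic of the two-pass checkerboard decoding order; mask construction only.
import torch

def checkerboard_mask(h: int, w: int) -> torch.Tensor:
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    return (ys + xs) % 2 == 0    # True = first-pass ("anchor") positions

mask = checkerboard_mask(4, 4)
print(mask.int())
# Pass 1: entropy-decode all True positions in parallel, using no spatial
# context (e.g. hyperprior only). Pass 2: decode the remaining positions in
# parallel, letting their context model see the already-decoded neighbours.
```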
