
Unsupervised learning approaches to characterize heterogeneous samples using X-ray single particle imaging

Added by Kartik Ayyer
Publication date: 2021
Language: English





One of the outstanding analytical problems in X-ray single particle imaging (SPI) is the classification of structural heterogeneity, which is especially difficult given the low signal-to-noise ratios of individual patterns and the fact that even identical objects can yield patterns that vary greatly with orientation. We propose two methods which explicitly account for this orientation-induced variation and can robustly determine the structural landscape of a sample ensemble. The first, termed common-line principal component analysis (PCA), provides a rough classification which is essentially parameter-free and can be run automatically on any SPI dataset. The second method, utilizing variational auto-encoders (VAEs), can generate 3D structures of the objects at any point in the structural landscape. We implement both methods in combination with the noise-tolerant expand-maximize-compress (EMC) algorithm and demonstrate their utility by applying them to an experimental dataset of gold nanoparticles with only a few thousand photons per pattern, recovering both discrete structural classes and continuous deformations. These developments diverge from previous approaches, which extract reproducible subsets of patterns from a dataset, and open up the possibility of moving beyond homogeneous sample sets to study open questions such as nanocrystal growth and dynamics, as well as phase transitions which have not been externally triggered.
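The classification step described above can be illustrated with a minimal sketch: each pattern is reduced to a per-pattern feature vector (in the paper, features derived from common lines; here a toy stand-in), and PCA on the feature matrix separates structural classes. All names and the toy data below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "dataset": 200 patterns, each reduced to a 32-dimensional feature
# vector; two structural classes differ by a mean shift in 8 features.
features = rng.poisson(2.0, size=(200, 32)).astype(float)
features[100:, :8] += 3.0  # second structural class

def pca_scores(X, n_components=2):
    """Project onto leading principal components via SVD of the
    mean-centred feature matrix."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

scores = pca_scores(features)
# The first principal component separates the two classes: patterns from
# the two classes land on opposite sides of the origin along PC1.
```

In this toy setting no labels or tuned parameters are needed, mirroring the abstract's claim that the rough classification is essentially parameter-free.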




Related research

Single particle imaging (SPI) is a promising method for native structure determination which has undergone rapid progress with the development of X-ray free-electron lasers. Large amounts of data are collected during SPI experiments, driving the need for automated data analysis. The necessary data analysis pipeline has a number of steps, including binary object classification (single versus multiple hits). Classification and object detection are areas where deep neural networks currently outperform other approaches. In this work, we use the fast object-detector networks YOLOv2 and YOLOv3. By exploiting transfer learning, a moderate amount of data is sufficient for training the neural network. We demonstrate here that a convolutional neural network (CNN) can be successfully used to classify data from SPI experiments. We compare the classification results for the two networks, which differ in depth and architecture, by applying them to the same SPI data with different data representations. The best results are obtained for YOLOv2 on color images with a linear intensity scale, which shows an accuracy of about 97%, with precision and recall of about 52% and 61%, respectively, relative to manual data classification.
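The combination of high accuracy with much lower precision and recall is characteristic of strongly imbalanced data, where true single hits are rare. A hedged illustration (the confusion-matrix counts below are invented for the sketch and are not the authors' results, though they reproduce metrics of the reported magnitude):

```python
# Hypothetical confusion-matrix counts for a rare positive class
# (single hits) in an imbalanced dataset. Invented for illustration.
tp, fp, fn, tn = 61, 57, 39, 3043

accuracy = (tp + tn) / (tp + fp + fn + tn)   # dominated by true negatives
precision = tp / (tp + fp)                   # fraction of flagged hits that are real
recall = tp / (tp + fn)                      # fraction of real hits that are found
# Here accuracy is 0.97 even though precision is ~0.52 and recall 0.61,
# showing why accuracy alone is uninformative for rare-event detection.
```

This is why the abstract reports precision and recall alongside accuracy: on imbalanced data, a classifier can reach high accuracy while still missing or mislabelling many of the rare positives.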
An outstanding question in X-ray single particle imaging experiments has been the feasibility of imaging sub-10-nm biomolecules under realistic experimental conditions, where very few photons are expected to be measured in a single snapshot and instrument background may be significant relative to particle scattering. While analyses of simulated data have shown that the determination of an average image should be feasible using Bayesian methods such as the EMC algorithm, this has yet to be demonstrated using experimental data containing realistic non-isotropic instrument background, sample variability and other experimental factors. In this work, we show that the orientation and phase-retrieval steps work at photon counts diluted to the signal levels one expects from smaller molecules or weaker pulses, using data from experimental measurements of 60-nm PR772 viruses. Even when the signal is reduced to as little as 1/256 of the original, the virus electron density determined using ab initio phasing is of almost the same quality as that from the high-signal data. However, we are still limited by the total number of patterns collected, which may soon be mitigated by the advent of high-repetition-rate sources like the European XFEL and LCLS-II.
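Diluting measured photon counts to a fraction of the original signal can be done by binomial thinning: each recorded photon is kept independently with probability p, which turns a Poisson-distributed pattern of rate lam into one of rate lam * p. A minimal sketch, assuming a toy photon-count pattern rather than the authors' actual data pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy photon-count pattern (integer counts per pixel).
pattern = rng.poisson(4.0, size=(64, 64))

# Keep each photon independently with probability p = 1/256, the
# dilution factor quoted in the abstract.
p = 1 / 256
diluted = rng.binomial(pattern, p)

# By construction no pixel can gain photons, and the expected total
# count drops by the factor p.
```

Binomial thinning is a standard statistical device for emulating weaker pulses or smaller particles from strong-signal data, since thinning a Poisson process yields another Poisson process at the reduced rate.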
Modern technology for producing extremely bright and coherent X-ray laser pulses provides the possibility to acquire a large number of diffraction patterns from individual biological nanoparticles, including proteins, viruses, and DNA. These two-dimensional diffraction patterns can be practically reconstructed and retrieved down to a resolution of a few ångströms. In principle, a sufficiently large collection of diffraction patterns will contain the information required for a full three-dimensional reconstruction of the biomolecule. The computational methodology for this reconstruction task is still under development, and highly resolved reconstructions have not yet been produced. We analyze the Expansion-Maximization-Compression scheme, the current state-of-the-art approach for this very challenging application, by isolating different sources of uncertainty. Through numerical experiments on synthetic data we evaluate their respective impact. We reach conclusions of relevance for handling actual experimental data, as well as pointing out certain improvements to the underlying estimation algorithm. We also introduce a practically applicable computational methodology in the form of bootstrap procedures for assessing reconstruction uncertainty in the real-data case. We evaluate the sharpness of this approach and argue that procedures of this type will be critical in the near future as the amount of data increases.
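The bootstrap idea mentioned above can be sketched in a few lines: resample the set of diffraction patterns with replacement, recompute a summary statistic for each resample, and take the spread across resamples as an uncertainty estimate. In the toy version below a simple mean intensity stands in for a full reconstruction; the data and statistic are illustrative assumptions, not the authors' procedure.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy dataset: 500 patterns, each reduced to a 16-value feature vector.
data = rng.poisson(3.0, size=(500, 16))

# Bootstrap: resample patterns with replacement, recompute the statistic.
stats = []
for _ in range(200):
    idx = rng.integers(0, len(data), size=len(data))
    stats.append(data[idx].mean())

# Spread across resamples estimates the uncertainty of the statistic.
stderr = float(np.std(stats))
```

In the real-data setting the resampled statistic would be the reconstruction itself, which is far more expensive, but the resampling logic is unchanged.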
Archetypal scenarios for change detection generally consider two images acquired through sensors of the same modality. However, in some specific cases such as emergency situations, the only images available may be those acquired through sensors of different modalities. This paper addresses the problem of detecting, in an unsupervised manner, changes between two observed images acquired by sensors of different modalities with possibly different resolutions. These sensor dissimilarities introduce additional issues in the context of operational change detection that are not addressed by most of the classical methods. This paper introduces a novel framework to effectively exploit the available information by modelling the two observed images as sparse linear combinations of atoms belonging to a pair of coupled overcomplete dictionaries learnt from each observed image. As the images cover the same geographical location, the codes are expected to be globally similar, except for possible changes in sparse spatial locations. Thus, the change detection task is envisioned through a dual code estimation which enforces spatial sparsity in the difference between the estimated codes associated with each image. This problem is formulated as an inverse problem which is iteratively solved using an efficient proximal alternating minimization algorithm accounting for nonsmooth and nonconvex functions. The proposed method is applied to real images with simulated yet realistic changes, and with real changes. A comparison with state-of-the-art change detection methods demonstrates the accuracy of the proposed strategy.
Obtaining 3D information from a single X-ray exposure at high-brilliance sources, such as X-ray free-electron lasers (XFELs) [1] or diffraction-limited storage rings [2], allows the study of fast dynamical processes in their native environment. However, current X-ray 3D methodologies are either not compatible with single-shot approaches because they rely on multiple exposures, such as confocal microscopy [3, 4] and tomography [5, 6], or they record a single projection per pulse [7] and are therefore restricted to approximately two-dimensional objects [8]. Here we propose and verify experimentally a novel imaging approach named X-ray multi-projection imaging (XMPI), which simultaneously acquires several projections, separated by significant tomographic angles, without rotating the sample. When implemented at high-brilliance sources it can provide volumetric information using a single pulse. Moreover, XMPI at MHz-repetition-rate XFELs could enable the recording of 3D movies of deterministic or stochastic natural processes in the micrometer to nanometer resolution range, at time scales from microseconds down to femtoseconds.
