
Assessing Uncertainties in X-ray Single-Particle Three-Dimensional Reconstructions

Added by Jing Liu
Publication date: 2017
Language: English





Modern technology for producing extremely bright and coherent X-ray laser pulses makes it possible to acquire large numbers of diffraction patterns from individual biological nanoparticles, including proteins, viruses, and DNA. These two-dimensional diffraction patterns can be practically reconstructed and retrieved down to a resolution of a few angstroms. In principle, a sufficiently large collection of diffraction patterns contains the information required for a full three-dimensional reconstruction of the biomolecule. The computational methodology for this reconstruction task is still under development, and highly resolved reconstructions have not yet been produced. We analyze the Expansion-Maximization-Compression (EMC) scheme, the current state-of-the-art approach for this very challenging application, by isolating different sources of uncertainty. Through numerical experiments on synthetic data, we evaluate their respective impact. We draw conclusions relevant to the handling of actual experimental data and point out improvements to the underlying estimation algorithm. We also introduce a practically applicable computational methodology, in the form of bootstrap procedures, for assessing reconstruction uncertainty in the real-data case. We evaluate the sharpness of this approach and argue that procedures of this type will be critical in the near future as the amount of data increases.
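The bootstrap idea can be made concrete with a short sketch. The Python snippet below is a minimal illustration, not the authors' implementation: `reconstruct` is a hypothetical stand-in for a full EMC reconstruction pipeline, and the per-voxel standard deviation across resampled reconstructions serves as the uncertainty map.

```python
import numpy as np

def reconstruct(patterns):
    """Hypothetical stand-in for a full EMC reconstruction;
    here we simply average the patterns to produce a toy map."""
    return patterns.mean(axis=0)

def bootstrap_uncertainty(patterns, n_boot=100, seed=0):
    """Resample the pattern set with replacement, rerun the
    reconstruction on each resample, and summarize the spread."""
    rng = np.random.default_rng(seed)
    n = len(patterns)
    recons = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)   # one bootstrap resample
        recons.append(reconstruct(patterns[idx]))
    recons = np.stack(recons)
    # Per-voxel standard deviation as an uncertainty estimate.
    return recons.mean(axis=0), recons.std(axis=0)

# Toy usage: 500 noisy 64x64 "patterns".
patterns = np.random.poisson(1.0, size=(500, 64, 64)).astype(float)
mean_map, err_map = bootstrap_uncertainty(patterns, n_boot=50)
print(err_map.mean())
```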

Related research

One of the outstanding analytical problems in X-ray single particle imaging (SPI) is the classification of structural heterogeneity, which is especially difficult given the low signal-to-noise ratios of individual patterns and the fact that even identical objects can yield patterns that vary greatly when orientation is taken into consideration. We propose two methods which explicitly account for this orientation-induced variation and can robustly determine the structural landscape of a sample ensemble. The first, termed common-line principal component analysis (PCA), provides a rough classification which is essentially parameter-free and can be run automatically on any SPI dataset. The second method, utilizing variational auto-encoders (VAEs), can generate 3D structures of the objects at any point in the structural landscape. We implement both methods in combination with the noise-tolerant expand-maximize-compress (EMC) algorithm and demonstrate their utility by applying them to an experimental dataset from gold nanoparticles with only a few thousand photons per pattern, recovering both discrete structural classes and continuous deformations. These developments diverge from previous approaches, which extract reproducible subsets of patterns from a dataset, and open up the possibility of moving beyond homogeneous sample sets to study open questions on topics such as nanocrystal growth and dynamics, as well as phase transitions that have not been externally triggered.
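As a rough illustration of the orientation-invariant idea behind this kind of classification, the sketch below builds in-plane-rotation-invariant features for each pattern (magnitudes of angular Fourier components on resolution rings) and embeds them with PCA. All function names and parameter choices are illustrative assumptions, not the authors' code.

```python
import numpy as np

def angular_profile(pattern, n_phi=180):
    """Sample a centred 2D pattern on polar rings and return the
    magnitudes of its angular Fourier components, which are
    invariant under in-plane rotation of the particle."""
    n = pattern.shape[0]
    c = (n - 1) / 2.0
    radii = np.arange(5, n // 2 - 1, 2)
    phis = np.linspace(0, 2 * np.pi, n_phi, endpoint=False)
    feats = []
    for r in radii:
        y = (c + r * np.sin(phis)).astype(int)
        x = (c + r * np.cos(phis)).astype(int)
        ring = pattern[y, x]
        feats.append(np.abs(np.fft.rfft(ring))[:10])
    return np.concatenate(feats)

def pca_embed(features, n_components=2):
    """Project mean-centred feature vectors onto the leading
    principal components via SVD."""
    X = features - features.mean(axis=0)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return X @ vt[:n_components].T

# Toy usage: embed 200 random "patterns" into a 2D landscape.
patterns = np.random.poisson(0.5, size=(200, 128, 128)).astype(float)
feats = np.stack([angular_profile(p) for p in patterns])
embedding = pca_embed(feats)
print(embedding.shape)  # (200, 2)
```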
The determination of the fundamental parameters of the Standard Model (and its extensions) is often limited by the presence of statistical and theoretical uncertainties. We present several models for the latter uncertainties (random, nuisance, external) in the frequentist framework, and we derive the corresponding $p$-values. In the case of the nuisance approach, where theoretical uncertainties are modeled as biases, we highlight the important, but arbitrary, issue of the range of variation chosen for the bias parameters. We introduce the concept of an adaptive $p$-value, obtained by adjusting the range of variation for the bias according to the significance considered, which allows us to tackle metrology and exclusion tests with a single, well-defined unified tool that exhibits interesting frequentist properties. We discuss how the determination of fundamental parameters is impacted by the model chosen for theoretical uncertainties, illustrating several issues with examples from quark flavour physics.
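A minimal numerical sketch of the nuisance ("bias") treatment, under the simplifying assumption of a Gaussian measurement: the $p$-value is made conservative by maximizing over a bias $\delta$ scanned across a fixed range. The adaptive variant described above would let this range depend on the significance considered; here the range is fixed for simplicity.

```python
import numpy as np
from scipy import stats

def pvalue_with_bias(x_obs, mu, sigma, delta_range):
    """Two-sided Gaussian p-value for testing mean mu, made
    conservative by maximising over an additive theory bias
    delta in [-delta_range, +delta_range]."""
    deltas = np.linspace(-delta_range, delta_range, 201)
    # The p-value is largest where the observation is closest
    # to one of the bias-shifted hypotheses.
    z = np.min(np.abs(x_obs - (mu + deltas))) / sigma
    return 2 * stats.norm.sf(z)

# Toy usage: observation 2.5 sigma away, theory bias up to 1 sigma.
print(pvalue_with_bias(x_obs=2.5, mu=0.0, sigma=1.0, delta_range=1.0))
```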
Single particle imaging (SPI) is a promising method for native structure determination which has progressed rapidly with the development of X-ray free-electron lasers. Large amounts of data are collected during SPI experiments, driving the need for automated data analysis. The necessary data analysis pipeline has a number of steps, including binary object classification (single versus multiple hits). Classification and object detection are areas where deep neural networks currently outperform other approaches. In this work, we use the fast object detector networks YOLOv2 and YOLOv3. By exploiting transfer learning, a moderate amount of data is sufficient for training the neural network. We demonstrate that a convolutional neural network (CNN) can be successfully used to classify data from SPI experiments. We compare the classification results of the two networks, which differ in depth and architecture, by applying them to the same SPI data with different data representations. The best results are obtained for YOLOv2 on color images with a linear intensity scale, which achieves an accuracy of about 97%, with precision and recall of about 52% and 61%, respectively, relative to manual data classification.
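As a hedged illustration of the transfer-learning setup (using a generic classifier backbone, not the YOLOv2/YOLOv3 detectors employed in the paper), the sketch below trains only a new binary single-hit/multiple-hit head on top of a frozen ImageNet-pretrained backbone, assuming the torchvision >= 0.13 weights API; the data here are random placeholders.

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from an ImageNet-pretrained backbone (downloads weights)
# and replace the final layer with a binary hit-classification head.
model = models.resnet18(weights="IMAGENET1K_V1")
for p in model.parameters():          # freeze the pretrained features
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)  # new trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Toy training step on random "diffraction images" (batch, 3, 224, 224).
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))    # 0 = single hit, 1 = multiple hits
logits = model(images)
loss = loss_fn(logits, labels)
loss.backward()
optimizer.step()
print(float(loss))
```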
An extensive comparison of the path uncertainty in single particle tracking systems for ion imaging was carried out based on Monte Carlo simulations. The spatial resolution as a function of system parameters, such as geometry, detector properties, and the energy of proton and helium beams, was investigated to serve as a guideline for hardware developments. Primary particle paths were sampled within a water volume and compared to the most likely path estimate obtained from detector measurements, yielding a depth-dependent uncertainty envelope. The maximum uncertainty along this curve was converted to a conservative estimate of the minimal radiographic pixel spacing for a single set of parameter values. Simulations with various parameter settings were analysed to obtain an overview of the reachable pixel spacing as a function of system parameters. The results were used to determine intervals of detector material budget and position resolution that yield a pixel spacing small enough for clinical dose calculation. To ensure a pixel spacing below 2 mm, the material budget of a detector should remain below 0.25% for a position resolution of 200 $\mathrm{\mu m}$, or below 0.75% for a resolution of 10 $\mathrm{\mu m}$. Using protons, a sub-millimetre pixel size could not be achieved for a phantom size of 300 mm or at a large clearance. With helium ions, a sub-millimetre pixel spacing could be achieved even for a large phantom size and clearance, provided the position resolution was below 100 $\mathrm{\mu m}$ and the material budget was below 0.75%.
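A toy Monte Carlo version of this procedure might look as follows, with a straight line joining the entry and exit points standing in for the full most-likely-path estimate; the scattering model and all parameter values are illustrative assumptions, not those of the study.

```python
import numpy as np

def sample_paths(n_paths, depth_mm=300, step_mm=1.0, theta0_mrad=2.0, seed=0):
    """Toy multiple-scattering model: lateral position evolves as a
    random walk in angle with per-step deflection theta0."""
    rng = np.random.default_rng(seed)
    n_steps = int(depth_mm / step_mm)
    angles = rng.normal(0, theta0_mrad * 1e-3, size=(n_paths, n_steps))
    thetas = np.cumsum(angles, axis=1)
    return np.cumsum(np.tan(thetas) * step_mm, axis=1)  # lateral x(z)

def uncertainty_envelope(paths, step_mm=1.0):
    """Compare each sampled path to a straight-line estimate joining
    entry and exit; return the RMS deviation versus depth."""
    n_steps = paths.shape[1]
    z = np.arange(1, n_steps + 1) * step_mm
    exit_ = paths[:, -1:]
    estimate = exit_ * (z / z[-1])  # line from entry (0) to exit
    return z, np.sqrt(np.mean((paths - estimate) ** 2, axis=0))

z, env = uncertainty_envelope(sample_paths(2000))
print(f"max path uncertainty {env.max():.2f} mm at depth {z[env.argmax()]:.0f} mm")
```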
The first experimental data from single-particle scattering experiments at free electron lasers (FELs) are now becoming available. The first such experiments are being performed on relatively large objects, such as viruses, which produce relatively low-resolution, low-noise diffraction patterns in so-called diffract-and-destroy experiments. We describe a very simple test on the angular correlations of measured diffraction data to determine whether the scattering is from an icosahedral particle. If this is confirmed, the efficient algorithm proposed can combine diffraction data from multiple shots of particles in random unknown orientations to generate a full 3D image of the icosahedral particle. We demonstrate this with a simulation for the satellite tobacco necrosis virus (STNV), the atomic coordinates of whose asymmetric unit are given in Protein Data Bank entry 2BUK.
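Angular correlations of this kind can be accumulated per resolution ring with the FFT, as in the sketch below. This is only a generic correlation computation, not the paper's specific symmetry test; the inspection for icosahedrally allowed angular harmonics is indicated in a comment, and all names and parameters are illustrative.

```python
import numpy as np

def ring_values(pattern, radius, n_phi=360):
    """Sample intensities on a ring of the given pixel radius."""
    n = pattern.shape[0]
    c = (n - 1) / 2.0
    phis = np.linspace(0, 2 * np.pi, n_phi, endpoint=False)
    y = (c + radius * np.sin(phis)).astype(int)
    x = (c + radius * np.cos(phis)).astype(int)
    return pattern[y, x]

def angular_correlation(patterns, radius):
    """Shot-averaged angular autocorrelation C(q, dphi) on one ring,
    computed with the FFT (Wiener-Khinchin theorem)."""
    acc = 0.0
    for p in patterns:
        ring = ring_values(p, radius)
        ring = ring - ring.mean()
        f = np.fft.rfft(ring)
        acc = acc + np.fft.irfft(np.abs(f) ** 2, n=ring.size)
    return acc / len(patterns)

# Toy usage on random "patterns". For a true icosahedral particle, the
# correlations would concentrate in angular harmonics compatible with
# icosahedral symmetry (orders l = 0, 6, 10, 12, ...).
patterns = np.random.poisson(1.0, size=(100, 128, 128)).astype(float)
c = angular_correlation(patterns, radius=40)
print(c.shape)  # (360,)
```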
