
Low-signal limit of X-ray single particle imaging

Added by Kartik Ayyer
Publication date: 2019
Language: English





An outstanding question in X-ray single particle imaging experiments has been the feasibility of imaging sub-10-nm biomolecules under realistic experimental conditions, where very few photons are expected to be measured in a single snapshot and instrument background may be significant relative to particle scattering. While analyses of simulated data have shown that the determination of an average image should be feasible using Bayesian methods such as the EMC algorithm, this has yet to be demonstrated using experimental data containing realistic non-isotropic instrument background, sample variability and other experimental factors. In this work, we show that the orientation and phase retrieval steps work at photon counts diluted to the signal levels one expects from smaller molecules or weaker pulses, using data from experimental measurements of 60-nm PR772 viruses. Even when the signal is reduced to as little as 1/256 of its original level, the virus electron density determined using ab initio phasing is of almost the same quality as that from the high-signal data. However, we are still limited by the total number of patterns collected, a limitation that may soon be mitigated by the advent of high repetition-rate sources such as the European XFEL and LCLS-II.
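The signal dilution described above can be emulated by binomially thinning recorded photon counts. The sketch below is illustrative only, not the authors' code: each photon in a pattern is kept independently with probability 1/256, which preserves the Poisson character of the weakened data, and the pattern itself is a synthetic placeholder.

    import numpy as np

    rng = np.random.default_rng(seed=0)

    def dilute_pattern(photon_counts, fraction):
        """Binomially thin an integer photon-count pattern down to a signal fraction."""
        return rng.binomial(photon_counts, fraction)

    # Placeholder 64x64 pattern with ~0.1 photons/pixel, diluted to 1/256 of the signal.
    pattern = rng.poisson(0.1, size=(64, 64))
    weak_pattern = dilute_pattern(pattern, 1.0 / 256)
    print(pattern.sum(), weak_pattern.sum())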


Related research

J. Duarte, R. Cassin, J. Huijts (2019)
The ability to gain insights into the 3D properties of artificial or biological systems is often critical. However, 3D structures are difficult to retrieve at low dose and with extremely fast processing, as most techniques are based on acquiring and computing hundreds of 2D angular projections. This is even more challenging with ultrashort X-rays, which enable nanometre-scale studies and ultrafast time-resolved 2D movies. Here we show that computer stereo vision concepts can be transposed to X-rays. We demonstrate nanoscale three-dimensional reconstruction from a single ultrafast acquisition. Two diffraction patterns are recorded simultaneously on a single CCD camera and, after phase retrieval, two stereo images are reconstructed. A 3D representation of the sample is then computed from quantitative disparity maps with a voxel resolution of about 130 × 130 × 380 nm³ in a snapshot of 20 femtoseconds. We extend our demonstration to phase contrast X-ray stereo imaging and reveal hidden 3D features of a sample. Computed phase stereo imaging will find scientific applications at X-ray free electron lasers, synchrotrons and laser-based sources, but also in fast industrial and medical 3D diagnostics.
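As a rough illustration of the computer stereo-vision principle the abstract transposes to X-rays, the sketch below converts a disparity map into depth with the standard relation z = f·B/d. The actual X-ray geometry of the experiment differs, and the focal length, baseline and disparity values here are placeholders rather than parameters from the paper.

    import numpy as np

    def depth_from_disparity(disparity, focal_length, baseline):
        """Convert a disparity map (in pixels) to depth via z = f * B / d."""
        d = np.where(disparity > 0, disparity, np.nan)  # guard against division by zero
        return focal_length * baseline / d

    # Placeholder disparity map and camera parameters, for illustration only.
    disparity_map = np.random.default_rng(1).uniform(1.0, 10.0, size=(128, 128))
    depth_map = depth_from_disparity(disparity_map, focal_length=1000.0, baseline=0.05)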
One of the outstanding analytical problems in X-ray single particle imaging (SPI) is the classification of structural heterogeneity, which is especially difficult given the low signal-to-noise ratios of individual patterns and the fact that even identical objects can yield patterns that vary greatly when orientation is taken into consideration. We propose two methods which explicitly account for this orientation-induced variation and can robustly determine the structural landscape of a sample ensemble. The first, termed common-line principal component analysis (PCA), provides a rough classification which is essentially parameter-free and can be run automatically on any SPI dataset. The second method, utilizing variational auto-encoders (VAEs), can generate 3D structures of the objects at any point in the structural landscape. We implement both of these methods in combination with the noise-tolerant expand-maximize-compress (EMC) algorithm and demonstrate their utility by applying them to an experimental dataset from gold nanoparticles with only a few thousand photons per pattern, recovering both discrete structural classes and continuous deformations. These developments diverge from previous approaches, which extract reproducible subsets of patterns from a dataset, and open up the possibility of moving beyond homogeneous sample sets to study open questions on topics such as nanocrystal growth and dynamics, as well as phase transitions that have not been externally triggered.
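One plausible reading of the classification step is sketched below under stated assumptions: build a feature vector for each diffraction pattern from intensity profiles along lines through the detector centre, then apply PCA to map the structural landscape. The actual common-line PCA of the paper differs in its details, and the angles, radii and toy Poisson data here are assumptions for illustration only.

    import numpy as np
    from scipy.ndimage import map_coordinates

    def line_profile_features(pattern, n_angles=16, n_radii=50):
        """Sample intensities along n_angles lines through the pattern centre."""
        cy, cx = (np.array(pattern.shape) - 1) / 2
        radii = np.linspace(0, min(cy, cx), n_radii)
        feats = []
        for theta in np.linspace(0, np.pi, n_angles, endpoint=False):
            ys = cy + radii * np.sin(theta)
            xs = cx + radii * np.cos(theta)
            feats.append(map_coordinates(pattern, [ys, xs], order=1))
        return np.concatenate(feats)

    # Toy dataset of 200 sparse patterns; real SPI data would be experimental frames.
    patterns = np.random.default_rng(2).poisson(0.5, size=(200, 64, 64)).astype(float)
    X = np.array([line_profile_features(p) for p in patterns])
    X -= X.mean(axis=0)                       # centre the features
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    coords = X @ Vt[:2].T                     # first two principal components per pattern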
Zheng-Ping Li (2021)
Long-range active imaging has widespread applications in remote sensing and target recognition. Single-photon light detection and ranging (lidar) has been shown to have high sensitivity and temporal resolution. On the application front, however, the operating range of practical single-photon lidar systems is limited to about tens of kilometers through the Earth's atmosphere, mainly due to the weak echo signal mixed with high background noise. Here, we present a compact coaxial single-photon lidar system capable of realizing 3D imaging at up to 201.5 km. This is achieved by using high-efficiency optical devices for collection and detection, and what we believe is a new noise-suppression technique that is efficient for long-range applications. We show that photon-efficient computational algorithms enable accurate 3D imaging over hundreds of kilometers with as few as 0.44 signal photons per pixel. The results represent a significant step toward practical, low-power lidar over extra-long ranges.
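The ranging principle behind this kind of single-photon lidar can be summarised in a few lines: photon arrival times at each pixel are histogrammed and the peak bin is converted to range via r = c·t/2. The sketch below is a minimal illustration; the bin width, photon numbers and noise level are assumptions, not the parameters of the 201.5 km experiment.

    import numpy as np

    C = 299_792_458.0        # speed of light in m/s
    BIN_WIDTH = 1e-9         # 1 ns timing bins (assumed)

    def range_from_timestamps(timestamps, n_bins):
        """Estimate target range from photon time-of-flight timestamps (seconds)."""
        hist, edges = np.histogram(timestamps, bins=n_bins,
                                   range=(0.0, n_bins * BIN_WIDTH))
        t_peak = edges[np.argmax(hist)] + BIN_WIDTH / 2
        return C * t_peak / 2

    # Toy pixel: a few signal photons near the ~201.5 km round trip plus uniform noise.
    rng = np.random.default_rng(3)
    signal = rng.normal(2 * 201_500 / C, 0.5e-9, size=5)
    noise = rng.uniform(0.0, 1.5e-3, size=200)
    estimate = range_from_timestamps(np.concatenate([signal, noise]), n_bins=1_500_000)
    print(f"estimated range: {estimate / 1000:.1f} km")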
Single particle imaging (SPI) is a promising method for native structure determination which has undergone rapid progress with the development of X-ray free-electron lasers. Large amounts of data are collected during SPI experiments, driving the need for automated data analysis. The necessary data analysis pipeline has a number of steps, including binary object classification (single versus multiple hits). Classification and object detection are areas where deep neural networks currently outperform other approaches. In this work, we use the fast object detector networks YOLOv2 and YOLOv3. By exploiting transfer learning, a moderate amount of data is sufficient for training the neural network. We demonstrate here that a convolutional neural network (CNN) can be successfully used to classify data from SPI experiments. We compare the classification results of the two networks, which differ in depth and architecture, by applying them to the same SPI data with different data representations. The best results are obtained with YOLOv2 classification of linear-scale color images, which achieves an accuracy of about 97%, with precision and recall of about 52% and 61%, respectively, relative to manual data classification.
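The abstract's classifier is a YOLOv2/YOLOv3 detector fine-tuned by transfer learning. As a generic illustration of the same idea (not the authors' network), the sketch below freezes a pretrained torchvision ResNet-18 backbone and trains only a new two-class head for single- versus multiple-hit classification; the hyperparameters and training step are placeholders.

    import torch
    import torch.nn as nn
    from torchvision import models

    # Pretrained backbone with frozen weights; only the new head is trained.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    for param in model.parameters():
        param.requires_grad = False
    model.fc = nn.Linear(model.fc.in_features, 2)   # single hit vs multiple hit

    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()

    def train_step(images, labels):
        """One fine-tuning step on a batch of detector images rendered as RGB."""
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
        return loss.item()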
Single-photon light detection and ranging (LiDAR), offering single-photon sensitivity and picosecond time resolution, has been widely adopted for active imaging applications. Long-range active imaging is a great challenge because the spatial resolution degrades significantly with imaging range due to the diffraction limit of the optics, and only weak echo signal photons return, mixed with strong background noise. Here we propose and demonstrate a photon-efficient LiDAR approach that can achieve sub-Rayleigh resolution imaging over long ranges. This approach exploits fine sub-pixel scanning and a deconvolution algorithm tailored to this long-range application. Using this approach, we experimentally demonstrated active three-dimensional (3D) single-photon imaging by recognizing different postures of a mannequin model at a stand-off distance of 8.2 km in both daylight and at night. The observed spatial (transverse) resolution is about 5.5 cm at 8.2 km, which is about twice as fine as the system's nominal resolution and beats the optical system's Rayleigh criterion. The results are valuable for geosciences and target recognition over long ranges.
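The tailored deconvolution algorithm itself is not given in the abstract; as a stand-in, the sketch below applies standard Richardson-Lucy deconvolution (scikit-image) to a synthetically blurred two-point scene, which is the generic route to recovering detail below the optical blur after fine sub-pixel scanning. The point-spread function, photon levels and scene are illustrative assumptions.

    import numpy as np
    from scipy.signal import fftconvolve
    from skimage.restoration import richardson_lucy

    def gaussian_psf(size=15, sigma=2.0):
        """Gaussian point-spread function standing in for the system optics."""
        ax = np.arange(size) - size // 2
        xx, yy = np.meshgrid(ax, ax)
        psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
        return psf / psf.sum()

    rng = np.random.default_rng(4)
    scene = np.zeros((128, 128))
    scene[60, 60] = scene[60, 68] = 1.0            # two points separated by about the blur width
    psf = gaussian_psf()
    blurred = np.clip(fftconvolve(scene, psf, mode="same"), 0, None)
    noisy = rng.poisson(200 * blurred) / 200.0     # photon (shot) noise
    recovered = richardson_lucy(noisy, psf, 30)    # iterative deconvolution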
