
Designing and using prior data in Ankylography: Recovering a 3D object from a single diffraction intensity pattern

Posted by Eliyahu Osherovich
Publication date: 2012
Research field: Physics
Paper language: English





We present a novel method for Ankylography: three-dimensional structure reconstruction from a single-shot diffraction intensity pattern. Our approach allows reconstruction of objects containing many more details than previously demonstrated, in a faster and more accurate fashion.
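For readers unfamiliar with the setting, the sketch below illustrates only the Ankylography forward model: a single coherent diffraction pattern samples the squared magnitude of the object's 3D Fourier transform on the Ewald sphere. The grid size, the wavenumber, and the nearest-neighbour sampling are illustrative assumptions, not details of the paper's reconstruction algorithm.

```python
import numpy as np

# Forward-model sketch only (not the paper's reconstruction algorithm):
# a single diffraction pattern samples |F(k)|^2 of the 3D object on the
# Ewald sphere. Grid size, k0 and the nearest-neighbour lookup are assumed.
N = 64
obj = np.zeros((N, N, N))
obj[24:40, 24:40, 24:40] = 1.0                  # toy "unknown" object

F = np.fft.fftshift(np.fft.fftn(obj))           # 3D Fourier transform
freqs = np.fft.fftshift(np.fft.fftfreq(N))      # grid frequencies
df = freqs[1] - freqs[0]

kx, ky = np.meshgrid(freqs, freqs, indexing="ij")
k0 = 0.25                                       # Ewald-sphere radius (assumed)
kz = k0 - np.sqrt(np.clip(k0**2 - kx**2 - ky**2, 0.0, None))

def to_index(k):
    # map a frequency value to the nearest grid index
    return np.clip(np.round((k - freqs[0]) / df).astype(int), 0, N - 1)

# One 2D intensity pattern: the data that Ankylography inverts back into 3D.
pattern = np.abs(F[to_index(kx), to_index(ky), to_index(kz)]) ** 2
```

The inverse problem the abstract refers to is recovering `obj` from `pattern` alone, which is what makes the single-shot setting difficult.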




Read also

In this work we develop an algorithm for signal reconstruction from the magnitude of its Fourier transform in a situation where some (non-zero) parts of the sought signal are known. Although our method does not assume that the known part comprises the boundary of the sought signal, this is often the case in microscopy: a specimen is placed inside a known mask, which can be thought of as a known light source that surrounds the unknown signal. Therefore, in the past, several algorithms were suggested that solve the phase retrieval problem assuming known boundary values. Unlike our method, these methods do rely on the fact that the known part is on the boundary. Besides the reconstruction method, we explain a phenomenon observed in previous work: the reconstruction is much faster when more energy is concentrated in the known part. Quite surprisingly, this can be explained using our previous results on phase retrieval with approximately known Fourier phase.
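As a rough illustration of the setting described above (not the authors' algorithm), the following alternating-projections sketch enforces the measured Fourier magnitude in one step, and the known part plus the support in the other. The function name, the real-valued-signal assumption, and the iteration count are assumptions for the sake of the example.

```python
import numpy as np

# Alternating-projections sketch for phase retrieval with a partially known
# signal. This illustrates the problem setting only; it is not the authors'
# method, and the defaults below are arbitrary.
def retrieve(magnitude, known_mask, known_vals, support, n_iter=500, seed=0):
    x = np.random.default_rng(seed).random(magnitude.shape)
    for _ in range(n_iter):
        # Fourier-domain projection: keep the phase, impose measured magnitude
        X = np.fft.fft2(x)
        X = magnitude * np.exp(1j * np.angle(X))
        x = np.real(np.fft.ifft2(X))
        # Object-domain projection: reset the known part, zero outside support
        x = np.where(known_mask, known_vals, x)
        x = x * support
    return x
```

Here `magnitude` is the measured Fourier magnitude, `known_mask` and `known_vals` encode the known region of the signal, and `support` is the overall object support.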
S. Marchesini, 2003
A solution to the inversion problem of scattering would offer aberration-free diffraction-limited 3D images without the resolution and depth-of-field limitations of lens-based tomographic systems. Powerful algorithms are increasingly being used to act as lenses to form such images. Current image reconstruction methods, however, require the knowledge of the shape of the object and the low spatial frequencies unavoidably lost in experiments. Diffractive imaging has thus previously been used to increase the resolution of images obtained by other means. We demonstrate experimentally here a new inversion method, which reconstructs the image of the object without the need for any such prior knowledge.
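One commonly described way to drop the fixed-shape requirement that this abstract criticizes is to re-estimate the object support from the current reconstruction itself, for example by thresholding a blurred copy of it. The snippet below is a generic sketch of that idea with assumed sigma and threshold values; it is not a verbatim reproduction of the paper's procedure.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Generic dynamic-support update (illustrative; sigma and the 20% threshold
# are assumptions): smooth the current estimate and keep only its bright
# region as the support for the next rounds of iterative phase retrieval.
def update_support(estimate, sigma=3.0, rel_threshold=0.2):
    blurred = gaussian_filter(np.abs(estimate), sigma)
    return blurred > rel_threshold * blurred.max()
```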
Meng Zhang, Youyi Zheng, 2018
We introduce Hair-GANs, an architecture of generative adversarial networks, to recover the 3D hair structure from a single image. The goal of our networks is to build a parametric transformation from 2D hair maps to 3D hair structure. The 3D hair structure is represented as a 3D volumetric field which encodes both the occupancy and the orientation information of the hair strands. Given a single hair image, we first align it with a bust model and extract a set of 2D maps encoding the hair orientation information in 2D, along with the bust depth map, to feed into our Hair-GANs. With our generator network, we compute the 3D volumetric field as the structure guidance for the final hair synthesis. The modeling results not only resemble the hair in the input image but also possess many vivid details in other views. The efficacy of our method is demonstrated on a variety of hairstyles and by comparison with the prior art.
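To make the representation above concrete, a volumetric field storing occupancy and orientation per voxel can be laid out as a four-channel 3D grid. The resolution and the example voxel below are arbitrary illustrative choices, not the networks' actual output format.

```python
import numpy as np

# Illustrative layout of a volumetric hair field as described above:
# channel 0 is occupancy, channels 1-3 are the local strand orientation.
# The 96^3 resolution is an assumed value for illustration only.
D = 96
hair_field = np.zeros((D, D, D, 4), dtype=np.float32)

# Example voxel: occupied, with a strand pointing downwards along -y.
hair_field[40, 40, 40] = [1.0, 0.0, -1.0, 0.0]
```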
We study, using simulated experiments inspired by thin-film magnetic domain patterns, the feasibility of phase retrieval in X-ray diffractive imaging in the presence of intrinsic charge scattering, given only photon-shot-noise-limited diffraction data. We detail a reconstruction algorithm to recover the sample's magnetization distribution under such conditions, and compare its performance with that of Fourier transform holography. Concerning the design of future experiments, we also chart out the reconstruction limits of diffractive imaging when photon shot noise and the intensity of charge scattering noise are independently varied. This work is directly relevant to the time-resolved imaging of magnetic dynamics using coherent and ultrafast radiation from X-ray free electron lasers, and also to broader classes of diffractive imaging experiments that suffer from noisy data, missing data, or both.
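For context on the comparison baseline mentioned above: Fourier transform holography recovers the object with a single inverse Fourier transform of the measured hologram, the image appearing in the cross-correlation terms with the reference pinhole. The one-liner below shows that readout in generic form and is not specific to the paper's simulated geometry.

```python
import numpy as np

# Generic Fourier-transform-holography readout: the inverse FFT of the
# measured intensity gives the autocorrelation of the exit wave, and the
# object image sits in the cross-correlation terms offset by the
# object-reference separation (geometry-dependent, not specified here).
def fth_reconstruct(hologram_intensity):
    return np.fft.fftshift(np.fft.ifft2(hologram_intensity))
```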
Xin Chen, Yuwei Li, Xi Luo, 2020
This paper presents a fully automatic framework for extracting editable 3D objects directly from a single photograph. Unlike previous methods which recover either depth maps, point clouds, or mesh surfaces, we aim to recover 3D objects that have semantic parts and can be directly edited. We base our work on the assumption that most human-made objects are constituted by parts, and that these parts can be well represented by generalized primitives. Our work makes an attempt towards recovering two types of primitive-shaped objects, namely, generalized cuboids and generalized cylinders. To this end, we build a novel instance-aware segmentation network for accurate part separation. Our GeoNet outputs a set of smooth part-level masks labeled as profiles and bodies. Then, in a key stage, we simultaneously identify profile-body relations and recover 3D parts by sweeping the recognized profiles along their body contours and jointly optimizing the geometry to align with the recovered masks. Qualitative and quantitative experiments show that our algorithm can recover high-quality 3D models and outperforms existing methods in both instance segmentation and 3D reconstruction. The dataset and code of AutoSweep are available at https://chenxin.tech/AutoSweep.html.
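As a toy illustration of the sweep step described above (a profile swept along a body contour to form a generalized cylinder), the sketch below extrudes a circular profile along a curved path with a tapering radius. The path, radius schedule, and sampling densities are made-up parameters, and no rotation-minimizing frame is computed.

```python
import numpy as np

# Toy "sweep a profile along a body" construction: a circle of varying
# radius is placed at each sample of a 3D path, producing the vertex grid
# of a generalized cylinder. All parameters here are illustrative.
t = np.linspace(0.0, 1.0, 50)
path = np.stack([t, 0.3 * np.sin(4.0 * t), np.zeros_like(t)], axis=1)
radii = 0.1 * (1.0 - 0.5 * t)                       # tapering profile
theta = np.linspace(0.0, 2.0 * np.pi, 32, endpoint=False)

rings = []
for p, r in zip(path, radii):
    # profile circle kept in the local y-z plane (no frame transport)
    ring = p + np.stack([np.zeros_like(theta),
                         r * np.cos(theta),
                         r * np.sin(theta)], axis=1)
    rings.append(ring)

surface = np.stack(rings)                           # (50, 32, 3) vertices
```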