Light Field Reconstruction Using Convolutional Network on EPI and Extended Applications

Submitted by Gaochang Wu
Publication date: 2021
Research language: English




In this paper, a novel convolutional neural network (CNN)-based framework is developed for light field reconstruction from a sparse set of views. We show that the reconstruction can be efficiently modeled as angular restoration on an epipolar plane image (EPI). The main problem in direct reconstruction on the EPI is an information asymmetry between the spatial and angular dimensions, where the detail in the angular dimension is damaged by undersampling. Directly upsampling or super-resolving the light field in the angular dimension causes ghosting effects. To suppress these ghosting effects, we contribute a novel blur-restoration-deblur framework. First, the blur step extracts the low-frequency components of the light field in the spatial dimensions by convolving each EPI slice with a selected blur kernel. Then, the restoration step is implemented by a CNN, which is trained to restore the angular detail of the EPI. Finally, we use a non-blind deblur operation to recover the spatial high frequencies suppressed by the EPI blur. We evaluate our approach on several datasets, including synthetic scenes, real-world scenes and challenging microscope light field data, and demonstrate the high performance and robustness of the proposed framework compared with state-of-the-art algorithms. We further show extended applications, including depth enhancement and interpolation for unstructured input. More importantly, a novel rendering approach is presented that combines the proposed framework with depth information to handle large disparities.
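
As a rough illustration of the blur-restoration-deblur pipeline described above, the sketch below applies a 1-D Gaussian blur along the spatial axis of each EPI, bilinearly upsamples the angular axis, passes the result through a placeholder CNN, and finishes with a naive Wiener-style non-blind deblur. The kernel choice, network architecture, and deblur solver are all assumptions for illustration, not the authors' exact implementation.

```python
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from scipy.ndimage import gaussian_filter1d

class DetailRestorationCNN(nn.Module):
    """Placeholder for the restoration CNN; the paper's actual
    architecture is not reproduced here."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, 9, padding=4), nn.ReLU(),
            nn.Conv2d(64, 32, 5, padding=2), nn.ReLU(),
            nn.Conv2d(32, 1, 5, padding=2))

    def forward(self, x):
        return self.net(x)

def wiener_deblur_rows(img, sigma, eps=1e-2):
    """Naive non-blind deblur of each EPI row via Wiener deconvolution
    with the known Gaussian kernel (an illustrative stand-in)."""
    n = img.shape[1]
    freqs = np.fft.fftfreq(n)
    H = np.exp(-2.0 * (np.pi * sigma * freqs) ** 2)  # Gaussian in frequency
    G = np.fft.fft(img, axis=1)
    return np.real(np.fft.ifft(G * H / (H ** 2 + eps), axis=1))

def blur_restore_deblur(epi, model, sigma=1.5, angular_factor=4):
    """Three-step reconstruction of one EPI (angular x spatial)."""
    # 1) Blur: suppress spatial high frequencies so that the sparsely
    #    sampled angular dimension is no longer under-represented.
    blurred = gaussian_filter1d(epi, sigma=sigma, axis=1)
    # 2) Restore: upsample the angular axis, then let the CNN recover
    #    the angular detail of the EPI.
    t = torch.from_numpy(blurred).float()[None, None]
    t = F.interpolate(t, scale_factor=(angular_factor, 1),
                      mode="bilinear", align_corners=False)
    restored = model(t).detach().numpy()[0, 0]
    # 3) Deblur: bring back the spatial high frequencies suppressed
    #    in step 1.
    return wiener_deblur_rows(restored, sigma)
```

Running `blur_restore_deblur` independently on every EPI slice of the sparse light field yields the densely sampled views; the paper's trained network and chosen kernel would replace the placeholders above.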



Read also

Gaochang Wu, Yebin Liu, Lu Fang (2020)
Learning-based light field reconstruction methods rely on building a large receptive field by deepening the network in order to capture correspondences between input views. In this paper, we propose a spatial-angular attention network to perceive correspondences in the light field non-locally and reconstruct a high angular resolution light field in an end-to-end manner. Motivated by the non-local attention mechanism, a spatial-angular attention module tailored to the high-dimensional light field data is introduced to compute the responses from all positions in the epipolar plane for each pixel in the light field, generating an attention map that captures correspondences along the angular dimension. We then propose a multi-scale reconstruction structure to efficiently implement the non-local attention at the low spatial scale, while preserving the high-frequency components at the higher spatial scales. Extensive experiments demonstrate the superior performance of the proposed spatial-angular attention network for reconstructing sparsely sampled light fields with non-Lambertian effects.
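
As a loose sketch of the non-local idea in this abstract, the module below computes attention over all positions of one epipolar plane, so every pixel can respond to correspondences along the angular dimension. The projection sizes and the residual connection are assumptions, and the paper's multi-scale structure is omitted.

```python
import torch
import torch.nn as nn

class EpipolarAttention(nn.Module):
    """Non-local attention over one epipolar plane: every pixel attends
    to all (angular, spatial) positions of the EPI."""
    def __init__(self, channels, key_dim=16):
        super().__init__()
        self.q = nn.Conv2d(channels, key_dim, 1)
        self.k = nn.Conv2d(channels, key_dim, 1)
        self.v = nn.Conv2d(channels, channels, 1)
        self.out = nn.Conv2d(channels, channels, 1)

    def forward(self, epi_feat):
        # epi_feat: (B, C, U, S) features of one EPI, with U angular
        # and S spatial positions.
        b, c, u, s = epi_feat.shape
        q = self.q(epi_feat).flatten(2).transpose(1, 2)  # (B, U*S, key)
        k = self.k(epi_feat).flatten(2)                  # (B, key, U*S)
        v = self.v(epi_feat).flatten(2).transpose(1, 2)  # (B, U*S, C)
        # Attention map over all EPI positions: this is what captures
        # correspondences along the angular dimension.
        attn = torch.softmax((q @ k) / k.shape[1] ** 0.5, dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(b, c, u, s)
        return epi_feat + self.out(out)  # residual connection (assumed)
```

Because the attention map is quadratic in the number of EPI positions, computing it at a low spatial scale, as the abstract describes, keeps the cost manageable.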
Recently, deep generative models have achieved impressive progress in modeling the distribution of training data. In this work, we present for the first time a generative model for 4D light field patches, using variational autoencoders to capture the data distribution of light field patches. We develop a generative model conditioned on the central view of the light field and incorporate it as a prior in an energy minimization framework to address diverse light field reconstruction tasks. While pure learning-based approaches achieve excellent results on each instance of such a problem, their applicability is limited to the specific observation model they have been trained on. In contrast, our trained light field generative model can be incorporated as a prior into any model-based optimization approach and therefore extends to diverse reconstruction tasks, including light field view synthesis, spatial-angular super-resolution and reconstruction from coded projections. The proposed method demonstrates good reconstruction, with performance approaching that of end-to-end trained networks, while outperforming traditional model-based approaches on both synthetic and real scenes. Furthermore, we show that our approach enables reliable light field recovery despite distortions in the input.
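
The following sketch shows, under generic placeholder names, how a trained generative model can serve as a prior inside model-based optimization: a latent code is optimized so that the decoded patch matches the observations under a given observation model `A`. The conditioning on the central view and the paper's exact energy are not reproduced; `decoder` and `A` are assumptions.

```python
import torch

def reconstruct_with_prior(y, A, decoder, z_dim, lam=0.1, steps=500, lr=1e-2):
    """Optimize a latent code z so the decoded patch decoder(z) matches
    the observations y under the observation model A; the Gaussian
    penalty on z stands in for the VAE prior."""
    z = torch.zeros(1, z_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        x = decoder(z)                        # candidate light field patch
        data_term = ((A(x) - y) ** 2).sum()   # fidelity to observations
        prior_term = (z ** 2).sum()           # Gaussian prior on the latent
        (data_term + lam * prior_term).backward()
        opt.step()
    return decoder(z).detach()
```

Swapping in a different `A` (a view-selection mask, a spatial downsampling operator, or a coded-projection matrix) retargets the same trained prior to a different reconstruction task, which is the flexibility the abstract emphasizes.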
With the effective application of deep learning in computer vision, breakthroughs have been made in research on super-resolution image reconstruction. However, many studies have pointed out that insufficient extraction of image features by the neural network may degrade the reconstructed image. On the other hand, the generated images are sometimes too artificial because of over-smoothing. To solve these problems, we propose a novel self-calibrated convolutional generative adversarial network. The generator consists of feature extraction and image reconstruction. Feature extraction uses self-calibrated convolutions, each of which contains four portions with specific functions; this not only expands the range of receptive fields but also captures long-range spatial and inter-channel dependencies. Image reconstruction is then performed to produce the final super-resolution image. We have conducted thorough experiments on several datasets, including Set5, Set14 and BSD100, under the SSIM evaluation metric. The experimental results demonstrate the effectiveness of the proposed network.
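
For context, the sketch below follows the widely cited self-calibrated convolution design of Liu et al. (CVPR 2020), which matches the abstract's description of four portions: a plain convolution branch plus a calibration branch that gates full-resolution features with a pooled, lower-resolution copy. Filter sizes and the pooling rate are assumptions, not necessarily this paper's configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfCalibratedConv(nn.Module):
    """Self-calibrated convolution sketch; expects an even channel count."""
    def __init__(self, channels, pooling_r=4):
        super().__init__()
        c = channels // 2
        self.pooling_r = pooling_r
        self.f1 = nn.Conv2d(c, c, 3, padding=1)  # plain branch
        self.f2 = nn.Conv2d(c, c, 3, padding=1)  # low-res calibration
        self.f3 = nn.Conv2d(c, c, 3, padding=1)  # gated transform
        self.f4 = nn.Conv2d(c, c, 3, padding=1)  # output transform

    def forward(self, x):
        x1, x2 = torch.chunk(x, 2, dim=1)
        # Calibration: convolve a downsampled copy, upsample it back,
        # and use it to gate the full-resolution features. The pooled
        # branch is what enlarges the effective receptive field.
        low = F.avg_pool2d(x1, self.pooling_r)
        cal = F.interpolate(self.f2(low), size=x1.shape[2:],
                            mode="bilinear", align_corners=False)
        y1 = self.f4(self.f3(x1) * torch.sigmoid(x1 + cal))
        y2 = self.f1(x2)  # ordinary convolution branch
        return torch.cat([y1, y2], dim=1)
```

The gating also captures inter-channel dependencies, since the sigmoid weights are computed per channel from the calibrated features.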
M. Nakao, F. Tong, M. Nakamura (2021)
Shape reconstruction of deformable organs from two-dimensional X-ray images is a key technology for image-guided intervention. In this paper, we propose an image-to-graph convolutional network (IGCN) for deformable shape reconstruction from a single-viewpoint projection image. The IGCN learns the relationship between shape/deformation variability and deep image features based on a deformation mapping scheme. In experiments targeting the respiratory motion of abdominal organs, we confirmed that the proposed framework, with a regularized loss function, can reconstruct liver shapes from a single digitally reconstructed radiograph with a mean distance error of 3.6 mm.
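
As a minimal sketch of the image-to-graph idea, the layers below aggregate neighbour features over the organ mesh and a small head regresses a per-vertex displacement; the feature-sampling step (projecting each vertex into the X-ray image and sampling deep features there) is only indicated in comments. Layer sizes and structure are assumptions, not the published IGCN design.

```python
import torch
import torch.nn as nn

class GraphConv(nn.Module):
    """Minimal graph convolution over a mesh with V vertices."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.self_fc = nn.Linear(in_dim, out_dim)
        self.neigh_fc = nn.Linear(in_dim, out_dim)

    def forward(self, h, adj):
        # h: (V, in_dim) per-vertex features, e.g. the vertex's 3-D
        # position concatenated with deep image features sampled at
        # its projection into the 2-D X-ray image.
        # adj: (V, V) row-normalized mesh adjacency matrix.
        return torch.relu(self.self_fc(h) + self.neigh_fc(adj @ h))

class DeformationHead(nn.Module):
    """Two graph-conv layers followed by a per-vertex displacement."""
    def __init__(self, feat_dim, hidden=128):
        super().__init__()
        self.gc1 = GraphConv(feat_dim, hidden)
        self.gc2 = GraphConv(hidden, hidden)
        self.out = nn.Linear(hidden, 3)  # (dx, dy, dz) per vertex

    def forward(self, h, adj):
        return self.out(self.gc2(self.gc1(h, adj), adj))
```

Adding the predicted displacements to a template mesh's vertices yields the reconstructed organ shape; a regularized loss, as the abstract notes, keeps the deformation smooth.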
This paper proposes particle volume reconstruction directly from an in-line hologram using a deep neural network. Conventional digital holographic volume reconstruction uses multiple diffraction calculations to obtain sectional reconstructed images from an in-line hologram, followed by detection of the lateral and axial positions and the sizes of particles using focus metrics. However, the axial resolution is limited by the numerical aperture of the optical system, and the process is time-consuming. The method proposed here can simultaneously detect the lateral and axial positions and the particle sizes via a deep neural network (DNN). We numerically investigated the performance of the DNN in terms of the errors in the detected positions and sizes. The calculation time is shorter than that of conventional diffraction-based approaches.
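
For reference, a typical per-depth diffraction calculation in the conventional pipeline is angular spectrum propagation, sketched below; the pipeline repeats it for every candidate depth and scores each slice with a focus metric, which is the cost the DNN approach avoids. Function and parameter names are generic, not taken from the paper.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, pixel_pitch, z):
    """Propagate a complex hologram field by distance z using the
    angular spectrum method (one sectional reconstruction)."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pixel_pitch)
    fy = np.fft.fftfreq(ny, d=pixel_pitch)
    FX, FY = np.meshgrid(fx, fy)
    # Free-space transfer function; evanescent components (where the
    # square-root argument goes negative) are zeroed out.
    arg = 1.0 / wavelength ** 2 - FX ** 2 - FY ** 2
    H = np.where(arg > 0,
                 np.exp(2j * np.pi * z * np.sqrt(np.maximum(arg, 0.0))), 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

Sweeping `z` over many candidate depths and evaluating a focus metric on each slice is what makes the conventional approach slow; the DNN described above outputs positions and sizes in a single forward pass.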