
Richardson-Lucy Deblurring for Moving Light Field Cameras

Published by: Jürgen Leitner
Publication date: 2016
Research field: Informatics engineering
Paper language: English





We generalize Richardson-Lucy (RL) deblurring to 4-D light fields by replacing the convolution steps with light field rendering of motion blur. The method deals correctly with blur caused by 6-degree-of-freedom camera motion in complex 3-D scenes, without performing depth estimation. We introduce a novel regularization term that maintains parallax information in the light field while reducing noise and ringing. We demonstrate the method operating effectively on rendered scenes and scenes captured using an off-the-shelf light field camera. An industrial robot arm provides repeatable and known trajectories, allowing us to establish quantitative performance in complex 3-D scenes. Qualitative and quantitative results confirm the effectiveness of the method, including commonly occurring cases for which previously published methods fail. We include a mathematical proof that the algorithm converges to the maximum-likelihood estimate of the unblurred scene under Poisson noise. We expect extension to blind methods to be possible following the generalization of 2-D Richardson-Lucy to blind deconvolution.
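
The core of the method is the standard multiplicative RL update with the convolution and correlation steps swapped for rendering operators. A minimal sketch in Python (the names `forward` and `adjoint` are placeholders for the paper's light-field motion-blur rendering along the known 6-DoF trajectory and its transpose; the parallax-preserving regularizer is omitted):

```python
import numpy as np
from scipy.ndimage import convolve, correlate

def rl_deblur(measured, forward, adjoint, n_iters=50, eps=1e-12):
    """Generalized Richardson-Lucy deblurring.

    forward(x): renders the motion-blurred version of estimate x.
    adjoint(x): the transpose of forward.
    Under Poisson noise, each multiplicative update increases the
    likelihood of the observed (blurred) data.
    """
    estimate = np.full_like(measured, measured.mean())
    for _ in range(n_iters):
        blurred = forward(estimate)                  # simulate the blur
        ratio = measured / np.maximum(blurred, eps)  # data mismatch
        estimate *= adjoint(ratio)                   # RL update
    return estimate

# 2-D special case as a stand-in for light field rendering:
# forward = convolution with the PSF, adjoint = correlation with it.
psf = np.ones((1, 5)) / 5.0                          # horizontal blur kernel
forward = lambda x: convolve(x, psf, mode="wrap")
adjoint = lambda x: correlate(x, psf, mode="wrap")
sharp = np.random.rand(64, 64)
deblurred = rl_deblur(forward(sharp), forward, adjoint)
```

In the classical 2-D case, `forward` is convolution with the point-spread function and `adjoint` is correlation with it; replacing both with light field rendering is what lets the method handle depth-dependent blur without explicit depth estimation.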




Read also

Min Li, Guangwei Li, Ke Lv (2019)
We use the Richardson-Lucy deconvolution algorithm to extract one-dimensional (1D) spectra from LAMOST spectrum images. Compared with other deconvolution algorithms, this algorithm is much faster. A test on a real LAMOST image illustrates that the resulting 1D spectrum of this method has a higher SNR and resolution than those extracted by the LAMOST pipeline. Furthermore, our algorithm effectively suppresses the ringing artifacts that often appear in the 1D spectra produced by other deconvolution methods.
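
For reference, the classical 1-D RL update this kind of spectrum extraction builds on fits in a few lines. A toy sketch (the Gaussian line-spread function and its width below are illustrative assumptions, not LAMOST values):

```python
import numpy as np

def rl_extract_1d(observed, lsf, n_iters=30, eps=1e-12):
    """Classic 1-D Richardson-Lucy deconvolution with a known
    line-spread function (LSF)."""
    lsf = lsf / lsf.sum()              # PSF must be normalized
    lsf_flipped = lsf[::-1]            # adjoint of 1-D convolution
    estimate = np.full_like(observed, observed.mean())
    for _ in range(n_iters):
        blurred = np.convolve(estimate, lsf, mode="same")
        ratio = observed / np.maximum(blurred, eps)
        estimate *= np.convolve(ratio, lsf_flipped, mode="same")
    return estimate

# Toy usage: recover two blended emission lines.
x = np.zeros(200); x[80] = 1.0; x[95] = 0.6
lsf = np.exp(-0.5 * (np.arange(-10, 11) / 3.0) ** 2)   # assumed LSF
blurred = np.convolve(x, lsf / lsf.sum(), mode="same")
restored = rl_extract_1d(blurred, lsf)
```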
Light field cameras can capture both spatial and angular information of light rays, enabling 3D reconstruction from a single exposure. The geometry of 3D reconstruction is significantly affected by the intrinsic parameters of a light field camera. In this paper, we propose a multi-projection-center (MPC) model with 6 intrinsic parameters to characterize light field cameras, based on the traditional two-parallel-plane (TPP) representation. The MPC model can generally parameterize the light field in different imaging formations, including conventional and focused light field cameras. From the constraints of 4D rays and 3D geometry, a 3D projective transformation is deduced to describe the relationship between geometric structure and the MPC coordinates. Based on the MPC model and projective transformation, we propose a calibration algorithm to verify our light field camera model. Our calibration method includes a closed-form solution and a non-linear optimization that minimizes re-projection errors. Experimental results on both simulated and real scene data verify the performance of our algorithm.
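
The abstract does not spell out the six MPC parameters, but the decoding it generalizes is linear: integer view and pixel indices map to a two-parallel-plane ray through an intrinsic matrix. A hypothetical sketch of that decoding (the matrix entries below are illustrative, not the paper's calibration):

```python
import numpy as np

def pixel_to_tpp_ray(i, j, k, l, K):
    """Map indices (i, j: view; k, l: pixel) to a two-parallel-plane
    ray (s, t, u, v) via a 5x5 intrinsic matrix K, the common linear
    decoding for lenslet light field cameras. The MPC model's 6
    parameters would populate (a subset of) such a matrix; the exact
    entries here are an assumption, not taken from the paper."""
    ray = K @ np.array([i, j, k, l, 1.0])
    return ray[:4]

# Hypothetical diagonal-plus-offset intrinsics for illustration.
K = np.array([
    [0.02, 0,    0,     0,     -0.1],   # s = 0.02*i - 0.1
    [0,    0.02, 0,     0,     -0.1],   # t = 0.02*j - 0.1
    [0,    0,    0.001, 0,     -0.2],   # u = 0.001*k - 0.2
    [0,    0,    0,     0.001, -0.2],   # v = 0.001*l - 0.2
    [0,    0,    0,     0,      1.0],
])
s, t, u, v = pixel_to_tpp_ray(4, 4, 100, 150, K)
```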
In this paper, we introduce a moving object detection algorithm for fisheye cameras used in autonomous driving. We reformulate the three constraints commonly used in rectilinear images (the epipolar, positive depth, and positive height constraints) in spherical coordinates, which makes them invariant to the specific camera configuration once the calibration is known. One of the most challenging use cases in autonomous driving is detecting parallel moving objects, which suffer from motion-parallax ambiguity. To alleviate this, we formulate an additional fourth constraint, called the anti-parallel constraint, which makes it possible to detect objects whose motion mirrors that of the ego-vehicle. We analyze the proposed algorithm in different scenarios and demonstrate that it works effectively operating directly on fisheye images.
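
As an illustration of the first of those constraints: in spherical coordinates the epipolar test reduces to a coplanarity check on unit rays. A minimal sketch (the detection threshold and the remaining three constraints are omitted; `R` and `t` are the known ego-motion):

```python
import numpy as np

def epipolar_residual(r1, r2, R, t):
    """Spherical epipolar constraint: for a static point, the unit ray
    r1 in frame 1, rotated by the relative rotation R, must be coplanar
    with the unit ray r2 and the translation t, i.e. r2 . (t x R r1) = 0.
    A large residual flags a potentially moving point."""
    return abs(float(np.dot(r2, np.cross(t, R @ r1))))

# Toy check: camera translates along x; a point in the x-z plane.
R = np.eye(3)
t = np.array([1.0, 0.0, 0.0])
r1 = np.array([0.0, 0.0, 1.0])            # unit viewing ray, frame 1
r2 = np.array([0.1, 0.0, 1.0])
r2 /= np.linalg.norm(r2)                  # unit viewing ray, frame 2
res = epipolar_residual(r1, r2, R, t)     # ~0: consistent with static
```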
Next-generation neutrinoless double beta decay experiments aim for half-life sensitivities of ~$10^{27}$ yr, requiring backgrounds to be suppressed to <1 count/tonne/yr. For this, any extra background rejection handle, beyond excellent energy resolution and the use of extremely radiopure materials, is of utmost importance. The NEXT experiment exploits differences in the spatial ionization patterns of double beta decay and single-electron events to discriminate signal from background. While the former display two dense ionization regions (Bragg peaks) at the opposite ends of the track, the latter typically have only one such feature. Thus, comparing the energies at the track extremes provides an additional rejection tool. The unique combination of topology-based background discrimination and excellent energy resolution (1% FWHM at the Q-value of the decay) is the distinguishing feature of NEXT. Previous studies demonstrated a topological background rejection factor of ~5 when reconstructing electron-positron pairs in the $^{208}$Tl 1.6 MeV double escape peak (with Compton events as background), recorded in the NEXT-White demonstrator at the Laboratorio Subterraneo de Canfranc, with 72% signal efficiency. This was recently improved through the use of a deep convolutional neural network to yield a background rejection factor of ~10 with 65% signal efficiency. Here, we present a new reconstruction method, based on the Richardson-Lucy deconvolution algorithm, which reverses the blurring induced by electron diffusion and electroluminescence light production in the NEXT TPC. The new method yields highly refined 3D images of reconstructed events and, as a result, significantly improves the topological background discrimination. When applied to real-data 1.6 MeV $e^-e^+$ pairs, it leads to a background rejection factor of 27 at 57% signal efficiency.
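
As a sketch of the deconvolution step only: if the combined diffusion and electroluminescence blur is approximated by a Gaussian point-spread function (an assumption here, not the paper's measured kernel), the 3-D RL update is particularly compact, because a symmetric PSF is its own adjoint:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def rl_deconvolve_3d(hits, sigma, n_iters=75, eps=1e-12):
    """3-D Richardson-Lucy with a Gaussian PSF (assumed stand-in for
    diffusion + EL blur; sigma in voxels). The Gaussian is symmetric,
    so the same filter serves as both forward model and adjoint."""
    estimate = np.full_like(hits, hits.mean())
    for _ in range(n_iters):
        blurred = gaussian_filter(estimate, sigma)
        ratio = hits / np.maximum(blurred, eps)
        estimate *= gaussian_filter(ratio, sigma)
    return estimate  # sharpened 3-D hit map, e.g. rl_deconvolve_3d(voxels, 2.0)
```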
Robots must reliably interact with refractive objects in many applications; however, refractive objects can cause many robotic vision algorithms to become unreliable or even fail, particularly feature-based matching applications such as structure-from-motion. We propose a method to distinguish between refracted and Lambertian image features using a light field camera. Specifically, we propose to use textural cross-correlation to characterise apparent feature motion in a single light field, and compare this motion to its Lambertian equivalent based on 4D light field geometry. Our refracted feature distinguisher has a 34.3% higher rate of detection compared to the state of the art for light fields captured with large baselines relative to the refractive object. Our method also applies to light field cameras with much smaller baselines than previously considered, yielding up to 2 times better detection for 2D-refractive objects, such as a sphere, and up to 8 times better for 1D-refractive objects, such as a cylinder. For structure from motion, we demonstrate that rejecting refracted features using our distinguisher yields up to 42.4% lower reprojection error and a lower failure rate when the robot approaches refractive objects. Our method leads to more robust robot vision in the presence of refractive objects.
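
A rough sketch of the cross-correlation ingredient: estimate a feature's apparent shift between adjacent sub-aperture views by normalized cross-correlation, then (not shown) test whether the shifts across the view grid are consistent with the single slope a Lambertian point induces under 4D light field geometry. Patch extraction and the consistency test are omitted, and the `np.roll` wrap-around is a simplification:

```python
import numpy as np

def ncc_shift(patch_a, patch_b, max_shift=5):
    """Return the integer roll (along axis 1) that best re-aligns
    patch_b with patch_a under normalized cross-correlation, i.e. the
    apparent feature motion between two adjacent sub-aperture views."""
    def norm(p):
        p = p - p.mean()
        return p / (np.linalg.norm(p) + 1e-12)
    a = norm(patch_a)
    best, best_score = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        score = float((a * norm(np.roll(patch_b, s, axis=1))).sum())
        if score > best_score:
            best, best_score = s, score
    return best

# Toy usage: a texture shifted by 3 pixels between adjacent views.
rng = np.random.default_rng(0)
view_a = rng.random((16, 16))
view_b = np.roll(view_a, 3, axis=1)
print(ncc_shift(view_a, view_b))  # -> -3 (roll that re-aligns view_b)
```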