
Fringe pattern analysis using deep learning

Posted by Shijie Feng
Publication date: 2018
Research field: Electronic engineering
Paper language: English





In many optical metrology techniques, fringe pattern analysis is the central algorithm for recovering the underlying phase distribution from the recorded fringe patterns. Despite extensive research efforts over decades, how to extract the desired phase information, with the highest possible accuracy, from the minimum number of fringe patterns remains one of the most challenging open problems. Inspired by recent successes of deep learning techniques in computer vision and other applications, here we demonstrate, for the first time to our knowledge, that deep neural networks can be trained to perform fringe analysis, which substantially enhances the accuracy of phase demodulation from a single fringe pattern. The effectiveness of the proposed method is experimentally verified using carrier fringe patterns under the scenario of fringe projection profilometry. Experimental results demonstrate its superior performance, in terms of accuracy and edge preservation, over two representative single-frame techniques: Fourier transform profilometry and windowed Fourier profilometry.
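As a rough illustration of how such network-based fringe analysis can be set up (the abstract does not specify the architecture), the following PyTorch sketch maps a single fringe image to a sine-like numerator and a cosine-like denominator and recovers the wrapped phase with an arctangent. The layer sizes and the numerator/denominator formulation are assumptions for illustration, not the paper's exact model.

```python
# Minimal sketch (PyTorch): a small convolutional network that maps one fringe
# image to two maps -- a sine-like numerator M and a cosine-like denominator D --
# from which the wrapped phase is obtained by an arctangent. Architecture and
# the M/D formulation are illustrative assumptions, not the reported network.
import torch
import torch.nn as nn

class FringeToPhaseNet(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 2, 3, padding=1),  # outputs: M (numerator), D (denominator)
        )

    def forward(self, fringe):                    # fringe: (B, 1, H, W), normalized intensity
        m_d = self.body(fringe)
        numerator, denominator = m_d[:, 0:1], m_d[:, 1:2]
        wrapped_phase = torch.atan2(numerator, denominator)  # wrapped phase in (-pi, pi]
        return wrapped_phase, numerator, denominator

# Training would regress (numerator, denominator) against ground truth obtained
# from multi-step phase shifting, e.g. with an L1 or L2 loss.
net = FringeToPhaseNet()
phase, M, D = net(torch.rand(1, 1, 128, 128))
```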




Read also

Fringe projection profilometry (FPP) has become increasingly important in dynamic 3-D shape measurement. In FPP, it is necessary to retrieve the phase of the measured object before shape profiling. However, traditional phase retrieval techniques often require a large number of fringes, which may introduce motion-induced errors for dynamic objects. In this paper, a novel phase retrieval technique based on deep learning is proposed, which uses an end-to-end deep convolutional neural network to transform a single fringe, or two fringes, into the fringes required for phase retrieval. When the object's surface is located within a restricted depth range, the presented network requires only a single fringe as input; for an unrestricted depth range, two fringes are required. The proposed phase retrieval technique is first theoretically analyzed, and then numerically and experimentally verified for its applicability to dynamic 3-D measurement.
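For context, the downstream step that consumes the network-generated fringes is typically standard N-step phase-shifting demodulation. The NumPy sketch below shows that step only, assuming equally spaced phase shifts; the fringe-generating network itself is not reproduced here.

```python
# Minimal sketch (NumPy): standard N-step phase-shifting demodulation. The I_n
# below are assumed to be (real or network-generated) phase-shifted patterns
# I_n = A + B*cos(phi + 2*pi*n/N).
import numpy as np

def phase_shifting_demodulation(fringes):
    """fringes: array of shape (N, H, W) holding N equally shifted patterns."""
    N = fringes.shape[0]
    deltas = 2 * np.pi * np.arange(N) / N
    num = np.tensordot(np.sin(deltas), fringes, axes=1)  # sum_n I_n * sin(delta_n)
    den = np.tensordot(np.cos(deltas), fringes, axes=1)  # sum_n I_n * cos(delta_n)
    return np.arctan2(-num, den)                         # wrapped phase in (-pi, pi]

# Example with synthetic 3-step fringes of a known phase map.
H, W = 64, 64
phi = np.linspace(0, 4 * np.pi, W)[None, :].repeat(H, axis=0)
I = np.stack([0.5 + 0.4 * np.cos(phi + 2 * np.pi * n / 3) for n in range(3)])
wrapped = phase_shifting_demodulation(I)   # equals phi modulo 2*pi
```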
Multiple works have applied deep learning to fringe projection profilometry (FPP) in recent years. However, obtaining a large amount of training data from actual systems is still a tricky problem, and the network design and optimization are still worth exploring. In this paper, we introduce computer graphics to build virtual FPP systems in order to generate the desired datasets conveniently and simply. The way of constructing a virtual FPP system is first described in detail, and then some key factors for setting up the virtual FPP system to closely match reality are analyzed. With the aim of accurately estimating the depth image from only one fringe image, we also design a new loss function to enhance the quality of both the overall and the detailed information restored. Two representative networks, U-Net and pix2pix, are compared in multiple aspects. Real experiments demonstrate the good accuracy and generalization of a network trained with data from our virtual systems and the designed loss, implying the potential of our method for practical applications.
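The exact loss function is not given in this abstract. As one hedged illustration of combining a global term with a detail-preserving term for depth regression from a fringe image, the PyTorch sketch below adds an image-gradient L1 penalty to a per-pixel L1 loss; the weighting and the gradient term are assumptions for illustration only, not the paper's loss.

```python
# Illustrative sketch (PyTorch): a combined loss with a global per-pixel term
# and a gradient-based detail term for depth-map regression. Not the paper's
# actual loss; weights and terms are assumptions.
import torch
import torch.nn.functional as F

def depth_loss(pred, target, detail_weight=0.5):
    """pred, target: (B, 1, H, W) depth maps."""
    # Global term: mean absolute error over the whole depth map.
    global_term = F.l1_loss(pred, target)
    # Detail term: L1 error of horizontal and vertical image gradients,
    # penalizing blurred edges and lost fine structure.
    dx_p, dx_t = pred[..., :, 1:] - pred[..., :, :-1], target[..., :, 1:] - target[..., :, :-1]
    dy_p, dy_t = pred[..., 1:, :] - pred[..., :-1, :], target[..., 1:, :] - target[..., :-1, :]
    detail_term = F.l1_loss(dx_p, dx_t) + F.l1_loss(dy_p, dy_t)
    return global_term + detail_weight * detail_term

loss = depth_loss(torch.rand(2, 1, 64, 64), torch.rand(2, 1, 64, 64))
```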
Tao Lei, Risheng Wang, Yong Wan (2020)
Deep learning has been widely used for medical image segmentation, and a large number of papers have been presented documenting its success in the field. In this paper, we present a comprehensive thematic survey on medical image segmentation using deep learning techniques. This paper makes two original contributions. Firstly, compared to traditional surveys that directly divide the literature on deep learning for medical image segmentation into many groups and introduce each group in detail, we classify the currently popular literature according to a multi-level structure from coarse to fine. Secondly, this paper focuses on supervised and weakly supervised learning approaches, without including unsupervised approaches, since they have been covered in many older surveys and are not currently popular. For supervised learning approaches, we analyze the literature in three aspects: the selection of backbone networks, the design of network blocks, and the improvement of loss functions. For weakly supervised learning approaches, we investigate the literature according to data augmentation, transfer learning, and interactive segmentation, respectively. Compared to existing surveys, this survey classifies the literature very differently, making it easier for readers to understand the relevant rationale and guiding them toward appropriate improvements in medical image segmentation based on deep learning approaches.
Structured Illumination Microscopy (SIM) is a widespread methodology for imaging live and fixed biological structures smaller than the diffraction limit of conventional optical microscopy. Using recent advances in image up-scaling through deep learning models, we demonstrate a method to reconstruct 3D SIM image stacks with twice the axial resolution attainable through conventional SIM reconstructions. We further evaluate our method for robustness to noise and generalisability to varying observed specimens, and discuss potential adaptations of the method for further improvements in resolution.
Background/Aims: Standard Automated Perimetry (SAP) is the gold standard to monitor visual field (VF) loss in glaucoma management, but is prone to intra-subject variability. We developed and validated a deep learning (DL) regression model that estimates pointwise and overall VF loss from unsegmented optical coherence tomography (OCT) scans. Methods: Eight DL regression models were trained with various retinal imaging modalities: circumpapillary OCT at 3.5mm, 4.1mm, and 4.7mm diameter, and scanning laser ophthalmoscopy (SLO) en face images, to estimate mean deviation (MD) and 52 threshold values. This retrospective study used data from patients who underwent a complete glaucoma examination, including a reliable Humphrey Field Analyzer (HFA) 24-2 SITA Standard VF exam and a SPECTRALIS OCT scan using the Glaucoma Module Premium Edition. Results: A total of 1378 matched OCT-VF pairs of 496 patients (863 eyes) were included for training and evaluation of the DL models. Average sample MD was -7.53dB (from -33.8dB to +2.0dB). For estimation of the 52 VF threshold values, the circumpapillary OCT scan with the largest radius (4.7mm) achieved the best performance among all individual models (Pearson r=0.77, 95% CI=[0.72-0.82]). For MD, prediction averaging of OCT-trained models (3.5mm, 4.1mm, 4.7mm) resulted in a Pearson r of 0.78 [0.73-0.83] on the validation set and comparable performance on the test set (Pearson r=0.79 [0.75-0.82]). Conclusion: DL on unsegmented OCT scans accurately predicts pointwise and mean deviation of 24-2 VF in glaucoma patients. Automated VF from unsegmented OCT could be a solution for patients unable to produce reliable perimetry results.
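As a hedged illustration of the reported evaluation metric, the sketch below computes the Pearson correlation between predicted and measured values together with a bootstrap 95% confidence interval; the study's actual CI procedure is not stated in the abstract, so the bootstrap here is an assumption.

```python
# Minimal sketch (NumPy/SciPy): Pearson correlation with a bootstrap 95% CI.
# The bootstrap is an illustrative choice, not the study's stated method.
import numpy as np
from scipy.stats import pearsonr

def pearson_with_ci(pred, true, n_boot=2000, seed=0):
    rng = np.random.default_rng(seed)
    r, _ = pearsonr(pred, true)
    boots = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(pred), len(pred))   # resample pairs with replacement
        boots.append(pearsonr(pred[idx], true[idx])[0])
    lo, hi = np.percentile(boots, [2.5, 97.5])
    return r, (lo, hi)

r, ci = pearson_with_ci(np.random.rand(200), np.random.rand(200))
```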