
Experimental digital Gabor hologram rendering by a model-trained convolutional neural network

Posted by Michael Atlan
Publication date: 2020
Research language: English





Digital hologram rendering can be performed by a convolutional neural network, trained with image pairs calculated by numerical wave propagation from sparse generating images. 512-by-512 pixel digital Gabor magnitude holograms are successfully estimated from experimental interferograms by a standard U-Net trained with 50,000 synthetic image pairs over 70 epochs.
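The abstract does not describe the data-generation step in detail; the following is only a minimal sketch, assuming an angular-spectrum propagation kernel, an in-line (Gabor) geometry, and illustrative optical parameters (wavelength, pixel pitch, propagation distance), of how one 512-by-512 synthetic training pair, interferogram input and magnitude-hologram target, might be computed from a sparse generating image:

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex field over a distance z with the angular spectrum method."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)                     # spatial frequencies (1/m)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kernel = np.exp(1j * (2 * np.pi / wavelength) * z * np.sqrt(np.maximum(arg, 0.0)))
    return np.fft.ifft2(np.fft.fft2(field) * kernel)  # evanescent components suppressed

def make_gabor_pair(obj_mask, wavelength=532e-9, dx=5e-6, z=0.02):
    """Return (interferogram, magnitude hologram) for one sparse generating image."""
    field = 1.0 - obj_mask                            # plane wave through a sparse absorber
    at_sensor = angular_spectrum_propagate(field, wavelength, dx, z)
    interferogram = np.abs(at_sensor) ** 2            # in-line intensity on the camera
    back = angular_spectrum_propagate(np.sqrt(interferogram), wavelength, dx, -z)
    return interferogram, np.abs(back)                # network input, magnitude target

# Example: one 512-by-512 pair from a random sparse object
rng = np.random.default_rng(0)
obj = (rng.random((512, 512)) > 0.999).astype(float)
x_in, y_target = make_gabor_pair(obj)
```

A U-Net would then be trained to map inputs like `x_in` to targets like `y_target` over many such synthetic pairs before being applied to experimental interferograms.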




Read also

Fluctuations in heart rate are intimately tied to changes in the physiological state of the organism. We examine and exploit this relationship by classifying a human subject's wake/sleep status using his instantaneous heart rate (IHR) series. We use a convolutional neural network (CNN) to build features from the IHR series extracted from a whole-night electrocardiogram (ECG) and predict every 30 seconds whether the subject is awake or asleep. Our training database consists of 56 normal subjects, and we consider three different databases for validation; one is private, and two are public with different races and apnea severities. On our private database of 27 subjects, our accuracy, sensitivity, specificity, and AUC values for predicting the wake stage are 83.1%, 52.4%, 89.4%, and 0.83, respectively. Validation performance is similar on our two public databases. When we use photoplethysmography instead of the ECG to obtain the IHR series, the performance is also comparable. A robustness check is carried out to confirm the obtained performance statistics. This result advocates for an effective and scalable method for recognizing changes in physiological state using non-invasive heart rate monitoring. The CNN model adaptively quantifies IHR fluctuation as well as its location in time and is suitable for differentiating between the wake and sleep stages.
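The network architecture is not detailed in this summary; a minimal PyTorch sketch of a small 1-D CNN that scores a single 30-second epoch from an IHR window (channel counts, kernel sizes, and the class name are assumptions, not the authors' model) could look like:

```python
import torch
import torch.nn as nn

class IHRSleepWakeCNN(nn.Module):
    """Small 1-D CNN mapping a window of instantaneous-heart-rate samples
    to wake/sleep logits for one 30-second epoch (illustrative sketch)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=9, padding=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),                   # length-agnostic pooling
        )
        self.classifier = nn.Linear(64, 2)             # logits: [sleep, wake]

    def forward(self, x):                              # x: (batch, 1, window_len)
        return self.classifier(self.features(x).flatten(1))

model = IHRSleepWakeCNN()
ihr_windows = torch.randn(8, 1, 512)                   # 8 dummy IHR windows
logits = model(ihr_windows)                            # shape (8, 2)
```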
Convolutional Neural Networks (CNN) have been used in Automatic Speech Recognition (ASR) to learn representations directly from the raw signal instead of hand-crafted acoustic features, providing a richer and lossless input signal. Recent research proposes injecting prior acoustic knowledge into the first convolutional layer by integrating the shape of the impulse responses, in order to increase both the interpretability of the learnt acoustic model and its performance. We propose to combine the complex Gabor filter with complex-valued deep neural networks to replace the usual CNN weight kernels, to take full advantage of its optimal time-frequency resolution and of the complex domain. The experiments conducted on the TIMIT phoneme recognition task show that the proposed approach reaches top-of-the-line performance while remaining interpretable.
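As a hedged sketch of the general idea, a first convolutional layer whose kernels are complex Gabor filters can be parameterized by a learnable center frequency and envelope width per filter; the sketch below implements the complex convolution as two real-valued convolutions and returns the magnitude, whereas the paper stays in the complex domain, and every filter count, length, and name here is an assumption:

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaborConv1d(nn.Module):
    """First layer with complex Gabor kernels, each defined by a learnable
    center frequency and Gaussian envelope width (illustrative sketch)."""
    def __init__(self, n_filters=40, kernel_size=401, sample_rate=16000):
        super().__init__()
        self.kernel_size = kernel_size
        self.sample_rate = sample_rate
        self.center_hz = nn.Parameter(torch.linspace(60.0, 7400.0, n_filters))
        self.sigma_s = nn.Parameter(torch.full((n_filters,), 0.005))   # seconds

    def forward(self, x):                                  # x: (batch, 1, time)
        t = (torch.arange(self.kernel_size, dtype=torch.float32, device=x.device)
             - self.kernel_size // 2) / self.sample_rate   # time axis in seconds
        envelope = torch.exp(-0.5 * (t / self.sigma_s[:, None]) ** 2)
        phase = 2 * math.pi * self.center_hz[:, None] * t
        real = (envelope * torch.cos(phase)).unsqueeze(1)  # (filters, 1, kernel)
        imag = (envelope * torch.sin(phase)).unsqueeze(1)
        y_re = F.conv1d(x, real, padding=self.kernel_size // 2)
        y_im = F.conv1d(x, imag, padding=self.kernel_size // 2)
        return torch.sqrt(y_re ** 2 + y_im ** 2 + 1e-8)    # magnitude response

layer = GaborConv1d()
waveform = torch.randn(2, 1, 16000)                        # 1 s of dummy audio at 16 kHz
out = layer(waveform)                                      # shape (2, 40, 16000)
```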
Head motion is inevitable in the acquisition of diffusion-weighted images, especially for motion-prone subjects and for advanced diffusion models with prolonged scan times. Insufficient accuracy of motion correction causes deterioration in the quality of diffusion model reconstruction and thus affects the derived measures. This results in loss of data, bias in outcomes from data of different motion levels, or both. Hence, minimizing motion effects and reusing motion-contaminated data becomes vital to quantitative studies. We have previously developed a 3-dimensional hierarchical convolutional neural network (3D H-CNN) for robust diffusion kurtosis mapping from under-sampled data. In this study, we propose to extend this method to motion-contaminated data for robust recovery of diffusion model-derived measures, with a process of motion assessment and corrupted-volume rejection. We validate the proposed pipeline on two in-vivo datasets. Results from the first dataset of individual subjects show that all diffusion tensor and kurtosis tensor-derived measures from the new pipeline are minimally sensitive to motion effects and are comparable to the motion-free reference with as few as eight volumes retained from the motion-contaminated data. Results from the second dataset, a group of children with attention deficit hyperactivity disorder, demonstrate the ability of our approach to ameliorate spurious group differences due to head motion. This method shows great potential for exploiting valuable but motion-corrupted DWI data that would otherwise likely be discarded, and for applying it to data with different motion levels, thus improving their utilization and statistical power.
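The motion metric and rejection rule are not given in this summary; a minimal sketch of the corrupted-volume rejection step, using a framewise-displacement-style score computed from per-volume rigid registration parameters (the weighting, threshold, and function name are all assumed), could be:

```python
import numpy as np

def reject_corrupted_volumes(motion_params, rot_weight_mm=50.0, fd_thresh_mm=1.0):
    """Flag DWI volumes whose framewise displacement exceeds a threshold.

    motion_params: (n_volumes, 6) per-volume rigid registration parameters,
    3 translations in mm and 3 rotations in radians (illustrative convention).
    """
    diffs = np.abs(np.diff(motion_params, axis=0, prepend=motion_params[:1]))
    fd = diffs[:, :3].sum(axis=1) + rot_weight_mm * diffs[:, 3:].sum(axis=1)
    keep = fd <= fd_thresh_mm
    return keep, fd

# Example: retain only low-motion volumes before model fitting
params = np.random.default_rng(1).normal(scale=0.3, size=(60, 6))
keep, fd = reject_corrupted_volumes(params)
retained = np.flatnonzero(keep)        # indices of volumes kept for reconstruction
```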
Thin-plate splines can be used for interpolation of image values, but can also be used to represent a smooth surface, such as the boundary between two structures. We present a method for partitioning vertebra segmentation masks into two substructures, the vertebral body and the posterior elements, using a convolutional neural network that predicts the boundary between the two structures. This boundary is modeled as a thin-plate spline surface defined by a set of control points predicted by the network. The neural network is trained using the reconstruction error of a convolutional autoencoder to enable the use of unpaired data.
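For illustration only (not the authors' implementation), a thin-plate spline surface can be fitted through a set of predicted control points and evaluated on a grid as follows; all names and values below are assumptions:

```python
import numpy as np

def tps_kernel(r2):
    """Thin-plate spline radial basis U(r) = r^2 * log(r^2), with U(0) = 0."""
    return r2 * np.log(r2 + 1e-12)

def fit_tps(ctrl_xy, ctrl_z):
    """Solve for weights so the TPS surface interpolates the control points."""
    n = ctrl_xy.shape[0]
    d2 = ((ctrl_xy[:, None, :] - ctrl_xy[None, :, :]) ** 2).sum(-1)
    P = np.hstack([np.ones((n, 1)), ctrl_xy])           # affine part
    A = np.zeros((n + 3, n + 3))
    A[:n, :n], A[:n, n:], A[n:, :n] = tps_kernel(d2), P, P.T
    b = np.concatenate([ctrl_z, np.zeros(3)])
    return np.linalg.solve(A, b)

def eval_tps(weights, ctrl_xy, query_xy):
    """Evaluate the fitted surface height at arbitrary (x, y) query points."""
    n = ctrl_xy.shape[0]
    d2 = ((query_xy[:, None, :] - ctrl_xy[None, :, :]) ** 2).sum(-1)
    P = np.hstack([np.ones((query_xy.shape[0], 1)), query_xy])
    return tps_kernel(d2) @ weights[:n] + P @ weights[n:]

# Example: a network would predict the control-point heights ctrl_z
ctrl_xy = np.array([[0, 0], [0, 1], [1, 0], [1, 1], [0.5, 0.5]], dtype=float)
ctrl_z = np.array([0.10, 0.20, 0.15, 0.25, 0.40])
w = fit_tps(ctrl_xy, ctrl_z)
grid = np.stack(np.meshgrid(np.linspace(0, 1, 32), np.linspace(0, 1, 32)), -1).reshape(-1, 2)
surface = eval_tps(w, ctrl_xy, grid).reshape(32, 32)     # boundary height field
```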