
Estimating the resolution of real images

Published by Ryuta Mizutani
Publication date: 2017
Research language: English

Image resolvability is the primary concern in imaging. This paper reports an estimation of the full width at half maximum (FWHM) of the point spread function from a Fourier-domain plot of real sample images, without using test objects or defining a threshold criterion. We suggest that this method can be applied to any type of image, independently of the imaging modality.
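For a Gaussian point spread function, the logarithm of the image power spectrum decays linearly in the squared spatial frequency, so a straight-line fit over a suitable frequency band yields the PSF width and hence its FWHM. The sketch below illustrates that general idea only; it assumes a Gaussian PSF, and the function name, pixel_size argument, and freq_range fitting band are hypothetical choices, not the authors' reference implementation.

```python
import numpy as np

def estimate_fwhm(image, pixel_size=1.0, freq_range=(0.1, 0.4)):
    """Estimate the PSF FWHM from an image's Fourier-domain falloff.

    Assumes a Gaussian PSF, so ln(power) falls linearly in |k|^2;
    the slope of that line gives the PSF width sigma.
    """
    f = np.fft.fftshift(np.fft.fft2(image))
    power = np.abs(f) ** 2

    # Radial spatial frequency in cycles per unit length.
    ny, nx = image.shape
    ky = np.fft.fftshift(np.fft.fftfreq(ny, d=pixel_size))
    kx = np.fft.fftshift(np.fft.fftfreq(nx, d=pixel_size))
    k = np.hypot(*np.meshgrid(kx, ky))

    # Fit ln(power) against k^2 over a mid-frequency band where the
    # PSF falloff, rather than the object spectrum, dominates.
    kmax = k.max()
    mask = (k > freq_range[0] * kmax) & (k < freq_range[1] * kmax)
    slope, _ = np.polyfit(k[mask] ** 2, np.log(power[mask]), 1)

    # For a Gaussian PSF, power ~ exp(-4 pi^2 sigma^2 k^2).
    sigma = np.sqrt(-slope / (4 * np.pi ** 2))
    return 2 * np.sqrt(2 * np.log(2)) * sigma  # FWHM = 2.355 sigma
```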


Read also

The process of collecting and organizing sets of observations represents a common theme throughout the history of science. However, despite the ubiquity of scientists measuring, recording, and analyzing the dynamics of different processes, an extensive organization of scientific time-series data and analysis methods has never been performed. Addressing this, annotated collections of over 35,000 real-world and model-generated time series and over 9,000 time-series analysis algorithms are analyzed in this work. We introduce reduced representations of both time series, in terms of their properties measured by diverse scientific methods, and of time-series analysis methods, in terms of their behaviour on empirical time series, and use them to organize these interdisciplinary resources. This new approach to comparing across diverse scientific data and methods allows us to organize time-series datasets automatically according to their properties, retrieve alternatives to particular analysis methods developed in other scientific disciplines, and automate the selection of useful methods for time-series classification and regression tasks. The broad scientific utility of these tools is demonstrated on datasets of electroencephalograms, self-affine time series, heart beat intervals, speech signals, and others, in each case contributing novel analysis techniques to the existing literature. Highly comparative techniques that compare across an interdisciplinary literature can thus be used to guide more focused research in time-series analysis for applications across the scientific disciplines.
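As a toy illustration of the reduced-representation idea, and assuming nothing about the paper's actual feature library, the sketch below maps each series to a handful of summary statistics and organizes a small collection by distance in that feature space; feature_vector and the example library are hypothetical.

```python
import numpy as np

def feature_vector(x):
    """Map a time series to a small vector of summary features,
    a toy stand-in for the thousands of methods in the paper."""
    x = np.asarray(x, dtype=float)
    dx = np.diff(x)
    return np.array([
        x.mean(),
        x.std(),
        np.corrcoef(x[:-1], x[1:])[0, 1],  # lag-1 autocorrelation
        ((dx[:-1] * dx[1:]) < 0).mean(),   # rate of direction changes
    ])

# Organize a small library of series by feature-space distance.
library = {
    "noise": np.random.randn(500),
    "walk": np.cumsum(np.random.randn(500)),
    "sine": np.sin(np.linspace(0, 20 * np.pi, 500)),
}
feats = {name: feature_vector(x) for name, x in library.items()}
query = feature_vector(np.cumsum(np.random.randn(500)))
nearest = min(feats, key=lambda n: np.linalg.norm(feats[n] - query))
print("most similar:", nearest)  # typically "walk"
```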
This paper presents a direct method to obtain the deterministic and stochastic contributions of the sum of two independent sets of stochastic processes, one of which is composed of Ornstein-Uhlenbeck processes and the other being a general (non-linear) Langevin process. The method is able to distinguish between all the stochastic processes, retrieving their corresponding stochastic evolution equations. This framework is based on a recent approach for the analysis of multidimensional Langevin-type stochastic processes in the presence of strong measurement (or observational) noise, which is extended here to impose neither constraints nor parameters and to extract all coefficients directly from the empirical data sets. Using synthetic data, it is shown that the method yields satisfactory results.
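The paper's noise-robust, constraint-free framework is more involved than can be shown here; as a minimal sketch of the underlying idea, a standard conditional-moment (Kramers-Moyal) estimate of drift and diffusion from a sampled trajectory looks as follows. The function name, binning scheme, and the synthetic Ornstein-Uhlenbeck test signal are illustrative assumptions.

```python
import numpy as np

def drift_diffusion(x, dt, bins=40):
    """Estimate drift D1(x) and diffusion D2(x) of a Langevin process
    from conditional increments (a plain Kramers-Moyal estimate,
    not the paper's measurement-noise-robust extension)."""
    inc = np.diff(x)
    centers = np.linspace(x.min(), x.max(), bins)
    half = (centers[1] - centers[0]) / 2
    d1 = np.full(bins, np.nan)
    d2 = np.full(bins, np.nan)
    for i, c in enumerate(centers):
        sel = np.abs(x[:-1] - c) < half   # samples that visit this bin
        if sel.sum() > 10:
            d1[i] = inc[sel].mean() / dt
            d2[i] = (inc[sel] ** 2).mean() / (2 * dt)
    return centers, d1, d2

# Synthetic Ornstein-Uhlenbeck data: dx = -x dt + dW.
dt, n = 1e-3, 100_000
x = np.zeros(n)
for t in range(n - 1):
    x[t + 1] = x[t] - x[t] * dt + np.sqrt(dt) * np.random.randn()

centers, d1, d2 = drift_diffusion(x, dt)
# d1 should track -centers, and d2 should sit near 0.5.
```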
Signal processing techniques have been developed that use different strategies to bypass the Nyquist sampling theorem in order to recover more information than a traditional discrete Fourier transform. Here we examine three such methods: filter diagonalization, compressed sensing, and super-resolution. We apply them to a broad range of signal forms commonly found in science and engineering in order to discover when and how each method can be used most profitably. We find that filter diagonalization provides the best results for Lorentzian signals, while compressed sensing and super-resolution perform better for arbitrary signals.
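As an illustration of one of the three strategies, the sketch below recovers a frequency-sparse signal from a few random time samples with a basic L1 solver (iterative soft thresholding). It is a generic compressed-sensing example rather than the implementation benchmarked in the paper; ista, the lam parameter, and the two-line test spectrum are assumptions.

```python
import numpy as np

def ista(A, y, lam=0.05, iters=1000):
    """Iterative soft thresholding for 0.5*||Ax - y||^2 + lam*||x||_1,
    a basic compressed-sensing solver (a sketch, not the paper's code)."""
    L = np.linalg.norm(A, 2) ** 2                 # gradient Lipschitz bound
    x = np.zeros(A.shape[1], dtype=complex)
    for _ in range(iters):
        g = x - A.conj().T @ (A @ x - y) / L      # gradient step
        mag = np.abs(g)
        x = g * np.maximum(1 - lam / (L * mag + 1e-12), 0)  # shrink
    return x

# Frequency-sparse signal observed at 32 random times out of 256.
n, m = 256, 32
t = np.sort(np.random.choice(n, m, replace=False))
A = np.exp(2j * np.pi * np.outer(t, np.arange(n)) / n) / np.sqrt(n)
x_true = np.zeros(n, dtype=complex)
x_true[[5, 40]] = [1.0, 0.7]                      # two spectral lines
y = A @ x_true
x_hat = ista(A, y)
# The two largest |x_hat| entries should appear at indices 5 and 40.
```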
Super-resolution (SR) has traditionally been based on pairs of high-resolution (HR) images and their low-resolution (LR) counterparts obtained artificially with bicubic downsampling. However, in real-world SR there is a large variety of realistic image degradations, and analytically modeling these realistic degradations can prove quite difficult. In this work, we propose to handle real-world SR by splitting this ill-posed problem into two comparatively more well-posed steps. First, we train a network to transform real LR images to the space of bicubically downsampled images in a supervised manner, by using both real LR/HR pairs and synthetic pairs. Second, we take a generic SR network trained on bicubically downsampled images to super-resolve the transformed LR image. The first step of the pipeline addresses the problem by registering the large variety of degraded images to a common, well-understood space of images. The second step then leverages the already impressive performance of SR on bicubically downsampled images, sidestepping the issues of end-to-end training on datasets with many different image degradations. We demonstrate the effectiveness of our proposed method by comparing it to recent methods in real-world SR and show that our proposed approach outperforms the state-of-the-art works in terms of both qualitative and quantitative results, as well as in an extensive user study conducted on several real image datasets.
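A schematic of the two-step pipeline in PyTorch-style Python; the module names and the tiny residual network are hypothetical stand-ins for the idea described above, not the authors' architecture.

```python
import torch
import torch.nn as nn

class ToBicubicSpace(nn.Module):
    """Hypothetical stage-one mapper: translates a real-world LR image
    into the space of bicubically downsampled images."""
    def __init__(self, ch=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 3, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)  # residual correction of real degradations

def super_resolve(lr_real, mapper, sr_net):
    """Stage two: hand the domain-corrected LR image to any SR network
    pretrained on bicubic data (sr_net is a placeholder here)."""
    with torch.no_grad():
        lr_bicubic_like = mapper(lr_real)
        return sr_net(lr_bicubic_like)
```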
Electronic transport is at the heart of many phenomena in condensed matter physics and materials science. Magnetic imaging is a non-invasive tool for detecting electric current in materials and devices. A two-dimensional current density can be reconstructed from an image of a single component of the magnetic field produced by the current. In this work, we approach the reconstruction problem in the framework of Bayesian inference, i.e. we solve for the most likely current density given an image obtained by a magnetic probe. To enforce a sensible current density, priors are used to associate a cost with unphysical features such as pixel-to-pixel oscillations or current outside the device boundary. Beyond previous work, our approach does not require analytically tractable priors and therefore creates flexibility to use priors that have not been explored in the context of current reconstruction. Here, we implement several such priors that have desirable properties. A challenging aspect of imposing a prior is choosing the optimal strength. We describe an empirical way to determine the appropriate strength of the prior. We test our approach on numerically generated examples. Our code is released in an open-source Python package called pysquid.
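A minimal sketch of the MAP formulation described above: gradient descent on a data misfit plus a weighted prior penalty. The dense forward operator, step sizes, and smooth_prior_grad are illustrative assumptions on a 1D toy problem; the real magnetic kernels and priors live in the paper's pysquid package.

```python
import numpy as np

def reconstruct(M, phi, prior_grad, lam=1.0, steps=2000, lr=1e-3):
    """MAP-style reconstruction: minimize ||M g - phi||^2 + lam * prior(g)
    by gradient descent, where M is a linear magnetic forward operator."""
    g = np.zeros(M.shape[1])
    for _ in range(steps):
        grad = 2 * M.T @ (M @ g - phi) + lam * prior_grad(g)
        g -= lr * grad
    return g

def smooth_prior_grad(g):
    """Gradient of a smoothness penalty (sum of squared first differences)
    that discourages pixel-to-pixel oscillations."""
    d = np.diff(g)                 # first differences g[j+1] - g[j]
    grad = np.zeros_like(g)
    grad[:-1] -= 2 * d             # d/dg_j of (g[j+1] - g[j])^2
    grad[1:] += 2 * d              # d/dg_{j+1} of the same term
    return grad

# Usage (M and phi assumed given): g_hat = reconstruct(M, phi, smooth_prior_grad, lam=0.1)
```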