
Quantifying the spatial resolution of the maximum a posteriori estimate in linear, rank-deficient, Bayesian hard field tomography

Added by Johannes Emmert
Publication date: 2020
Language: English





Image-based diagnostics are interpreted in the context of spatial resolution, and the same is true for tomographic image reconstruction. Current, empirically driven approaches to quantifying spatial resolution rely on a deterministic formulation based on point-spread functions, which neglects the statistical prior information that is integral to rank-deficient tomography. We propose a statistical spatial resolution measure based on the covariance of the reconstruction (point estimate) and show that the prior information acts as a lower limit on the spatial resolution. Furthermore, the measure can be employed to design tomographic systems while accounting for the spatial inhomogeneity of the resolution.
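To make the setting concrete, the following is a minimal sketch of covariance-based resolution analysis for a linear Gaussian tomography model. The forward matrix, the noise and prior covariances, and the variance-reduction proxy are all illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

# Minimal sketch of covariance-based resolution analysis for a linear
# Gaussian tomography model (an assumed setup, not the paper's exact
# formulation): y = A x + e with prior x ~ N(0, C_pr), noise e ~ N(0, C_e).

rng = np.random.default_rng(0)
n = 100                            # grid cells (unknowns)
m = 40                             # line-of-sight measurements (m < n: rank-deficient)
A = rng.random((m, n))             # hypothetical forward (path-length) matrix
C_e = 0.01 * np.eye(m)             # measurement noise covariance
C_pr = np.eye(n)                   # prior covariance (a smoothness prior would be denser)

# Posterior covariance of the linear Gaussian model; the MAP estimate
# coincides with the posterior mean in this setting.
H = A.T @ np.linalg.inv(C_e) @ A + np.linalg.inv(C_pr)
C_post = np.linalg.inv(H)

# A covariance-based resolution proxy in the spirit of the abstract:
# cells the data do not inform keep their prior variance, so the prior
# acts as the lower limit on the achievable resolution there.
variance_reduction = 1.0 - np.diag(C_post) / np.diag(C_pr)
print(variance_reduction.round(2))
```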

Related research

Characterizing the properties of groundwater aquifers is essential for predicting aquifer response and managing groundwater resources. In this work, we develop a high-dimensional scalable Bayesian inversion framework governed by a three-dimensional quasi-static linear poroelastic model to characterize lateral permeability variations in groundwater aquifers. We determine the maximum a posteriori (MAP) point of the posterior permeability distribution from centimeter-level surface deformation measurements obtained from Interferometric Synthetic Aperture Radar (InSAR). The scalability of our method to high parameter dimension is achieved through the use of adjoint-based derivatives, inexact Newton methods to determine the MAP point, and a Matérn-class sparse prior precision operator. Together, these guarantee that the MAP point is found at a cost, measured in the number of forward/adjoint poroelasticity solves, that is independent of the parameter dimension. We apply our methodology to a test case for a municipal well in Mesquite, Nevada, in which InSAR and GPS surface deformation data are available. We solve problems with up to 320,824 state variable degrees of freedom (DOFs) and 16,896 parameter DOFs. A consistent treatment of noise level is employed so that the aquifer characterization result does not depend on the pixel spacing of surface deformation data. Our results show that the use of InSAR data significantly improves characterization of lateral aquifer heterogeneity, and the InSAR-based aquifer characterization recovers complex lateral displacement trends observed by independent daily GPS measurements.
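As a rough illustration of the MAP machinery described above, here is a sketch of an inexact-Newton solve for a linear surrogate problem, with the Hessian applied matrix-free the way an adjoint-based code would apply it. The operators A and R below are random placeholders, not the poroelastic forward map or the Matérn prior from the paper:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

# Illustrative sketch only: an inexact-Newton MAP solve for a *linear*
# surrogate problem. The paper's setting (3-D poroelasticity, adjoint-
# based derivatives, a Matérn-class prior) is far richer; A stands in
# for the parameter-to-observable map and R for a sparse prior
# precision operator. Both are hypothetical placeholders.

rng = np.random.default_rng(1)
m, n = 200, 500                        # observations, parameters
A = rng.standard_normal((m, n)) / np.sqrt(m)
R = np.eye(n)                          # placeholder prior precision
sigma2 = 1e-2                          # noise variance
y = A @ rng.standard_normal(n) + np.sqrt(sigma2) * rng.standard_normal(m)

# The Hessian of the negative log-posterior is applied matrix-free,
# as an adjoint-based code would do it (one forward/adjoint pair per action).
def hess_action(v):
    return A.T @ (A @ v) / sigma2 + R @ v

H = LinearOperator((n, n), matvec=hess_action)
g = A.T @ y / sigma2                   # minus the gradient at x = 0

# For a linear model, a single Newton step reaches the MAP point; the
# step is solved inexactly with CG, whose cost is counted in Hessian
# actions and is independent of the parameter dimension n.
x_map, info = cg(H, g)
```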
Image quality assessment (IQA) models aim to establish a quantitative relationship between visual images and their perceptual quality by human observers. IQA modeling plays a special bridging role between vision science and engineering practice, both as a test-bed for vision theories and computational biovision models, and as a powerful tool that could potentially make profound impact on a broad range of image processing, computer vision, and computer graphics applications, for design, optimization, and evaluation purposes. IQA research has enjoyed an accelerated growth in the past two decades. Here we present an overview of IQA methods from a Bayesian perspective, with the goals of unifying a wide spectrum of IQA approaches under a common framework and providing useful references to fundamental concepts accessible to vision scientists and image processing practitioners. We discuss the implications of the successes and limitations of modern IQA methods for biological vision and the prospect for vision science to inform the design of future artificial vision systems.
What is habitability? Can we quantify it? What do we mean by the term habitable or potentially habitable planet? With estimates of the number of planets in our Galaxy alone running into billions, possibly exceeding the number of stars, it is high time to start characterizing them, sorting them into classes/types just like stars, to better understand their formation paths, their properties and, ultimately, their ability to beget or sustain life. After all, we do have life thriving on one of these billions of planets, so why not on others? Which planets are better suited for life, and which ones are definitely not worth spending expensive telescope time on? We need some sort of quick assessment score, a metric, with which we can make a list of promising planets and dedicate our efforts to them. Exoplanetary habitability is a transdisciplinary subject integrating astrophysics, astrobiology, planetary science, and even terrestrial environmental sciences. We review the existing metrics of habitability and the new classification schemes of extrasolar planets, and provide an exposition of the use of computational intelligence techniques to evaluate habitability scores and to automate the classification of exoplanets. We examine how solving convex optimization problems, as in computing new metrics such as CDHS and CEESA, cross-validates ML-based classification of exoplanets. Despite recent criticism of exoplanetary habitability ranking, this field has to continue and evolve, using all the available machinery of astroinformatics, artificial intelligence and machine learning. It might eventually develop into a scale analogous to stellar types in astronomy, to be used as a quick tool for screening exoplanets by their important characteristics in the search for potentially habitable planets for detailed follow-up.
We propose a monitoring indicator of the normality of the output of a gravitational wave detector. This indicator is based on the estimation of the kurtosis (i.e., the 4th-order statistical moment normalized by the variance squared) of the data selected in a sliding time window. We show that a low-cost (because recursive) implementation of this estimation is possible, and we illustrate the validity of the presented approach with a few examples using simulated random noise.
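A hedged sketch of one possible low-cost, recursive sliding-window kurtosis estimator follows (the paper's exact recursion may differ): raw power sums are updated in O(1) per sample, and the kurtosis is then formed from central moments.

```python
from collections import deque

# Hedged sketch of a recursive sliding-window kurtosis monitor (the
# paper's exact recursion may differ). Raw power sums are updated in
# O(1) per sample; the kurtosis m4 / m2^2 is formed from central moments.

class SlidingKurtosis:
    def __init__(self, window):
        self.win = window
        self.buf = deque()
        self.s1 = self.s2 = self.s3 = self.s4 = 0.0

    def update(self, x):
        # Add the incoming sample's power sums...
        self.buf.append(x)
        self.s1 += x; self.s2 += x * x
        self.s3 += x ** 3; self.s4 += x ** 4
        # ...and retire the outgoing sample once the window is full.
        if len(self.buf) > self.win:
            old = self.buf.popleft()
            self.s1 -= old; self.s2 -= old * old
            self.s3 -= old ** 3; self.s4 -= old ** 4

    def kurtosis(self):
        n = len(self.buf)
        mu = self.s1 / n
        m2 = self.s2 / n - mu ** 2
        m4 = (self.s4 - 4 * mu * self.s3 + 6 * mu ** 2 * self.s2) / n - 3 * mu ** 4
        return m4 / (m2 * m2)  # approximately 3 for Gaussian data

# Gaussian data should hover near kurtosis 3; excursions flag non-normality.
```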
P. Ocvirk (2005)
This paper describes STECMAP (STEllar Content via Maximum A Posteriori), a flexible, non-parametric inversion method for the interpretation of the integrated light spectra of galaxies, based on synthetic spectra of single stellar populations (SSPs). We focus on the recovery of a galaxy's star formation history and stellar age-metallicity relation. We use the high-resolution SSPs produced by PEGASE-HR to quantify the informational content of the wavelength range 4000-6800 Angstroms. A detailed investigation of the properties of the corresponding simplified linear problem is performed using singular value decomposition, which turns out to be a powerful tool for explaining and predicting the behaviour of the inversion. We provide means of quantifying the fundamental limitations of the problem considering the intrinsic properties of the SSPs in the spectral range of interest, as well as the noise in these models and in the data. We performed a systematic simulation campaign and found that, when the time elapsed between two bursts of star formation is larger than 0.8 dex, the properties of each episode can be constrained with a precision of 0.04 dex in age and 0.02 dex in metallicity from high-quality data (R=10 000, signal-to-noise ratio SNR=100 per pixel), not taking model errors into account. The described methods and error estimates will be useful in the design and analysis of extragalactic spectroscopic surveys.
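To illustrate the kind of SVD diagnostic described above, here is a minimal sketch on a purely hypothetical SSP basis (STECMAP itself uses PEGASE-HR templates over 4000-6800 Angstroms); the singular-value spectrum indicates how many population components survive a given noise level:

```python
import numpy as np

# Illustrative sketch: SVD analysis of the simplified linear problem,
# where an observed spectrum is modelled as a mix of SSP template
# spectra, y = B w. The basis B here is random and purely hypothetical.

rng = np.random.default_rng(2)
n_pix, n_ssp = 2000, 30
B = rng.random((n_pix, n_ssp))     # columns: SSP spectra on a common grid

U, s, Vt = np.linalg.svd(B, full_matrices=False)

# The decay of the singular values shows how many population components
# the data can constrain at a given noise level: components with
# s_k / s_0 below the noise-to-signal ratio are effectively lost.
snr = 100.0
n_recoverable = int(np.sum(s / s[0] > 1.0 / snr))
print(n_recoverable, "components recoverable at SNR ~", snr)
```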
