Polarimetric Synthetic Aperture Radar (PolSAR) images are an important source of information. Speckle noise gives SAR images a granular appearance that makes interpretation and analysis difficult tasks. A major issue is the assessment of the information content of this kind of image, and how it is affected by common processing techniques. Previous works have resulted in various approaches for quantifying image information content. Like Narayanan, Desetty, and Reichenbach (2002), we study this problem from the classification accuracy viewpoint, focusing on the filtering and classification stages. Thus, through classified images we verify how changing properties of the input data affects their quality. Our input is an actual PolSAR image; the control parameters are the filter (Local Mean or Model-Based PolSAR, MBPolSAR), its window size, and the classification method (Maximum Likelihood, ML, or Support Vector Machine, SVM); the output is the classification accuracy obtained by applying the classification algorithm to the filtered data. To broaden the conclusions, this study deals not only with Classification Accuracy, but also with Kappa and Overall Accuracy as measures of map precision. Experiments were conducted on two airborne PolSAR images. Unlike Narayanan, Desetty, and Reichenbach (2002), almost all measured values are good and increase with degradation, i.e., the filtering algorithms we used always improve the classification results, at least up to 7x7 windows.
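Since the abstract relies on Overall Accuracy and Kappa as map-precision measures, a minimal Python sketch of how both are computed from a confusion matrix follows; the function name and the example matrix are illustrative, not taken from the paper.

```python
import numpy as np

def map_precision(confusion):
    """Overall Accuracy and Cohen's Kappa from a confusion matrix
    (rows: reference classes, columns: predicted classes)."""
    cm = np.asarray(confusion, dtype=float)
    n = cm.sum()
    p_observed = np.trace(cm) / n                               # Overall Accuracy
    p_expected = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2 # chance agreement
    kappa = (p_observed - p_expected) / (1.0 - p_expected)
    return p_observed, kappa

# Illustrative 3-class confusion matrix (reference along rows)
cm = [[50, 3, 2],
      [5, 40, 5],
      [2, 3, 45]]
oa, kappa = map_precision(cm)
print(f"Overall Accuracy = {oa:.3f}, Kappa = {kappa:.3f}")
```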
A spatial stochastic model is developed which describes the 3D nanomorphology of composite materials consisting of blends of two different (organic and inorganic) solid phases. Such materials are used, for example, in photoactive layers of hybrid polymer-zinc oxide solar cells. The model is based on ideas from stochastic geometry and spatial statistics. Its parameters are fitted to image data obtained by electron tomography (ET), where adaptive thresholding and stochastic segmentation have been used to represent morphological features of the considered ET data by unions of overlapping spheres. Their midpoints are modeled by a stack of 2D point processes with a suitably chosen correlation structure, while a moving-average procedure is used to assign the radii of the spheres. The model is validated by comparing physically relevant characteristics of real and simulated data, such as the efficiency of exciton quenching, which is important for the generation of charges and their transport toward the electrodes.
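A minimal Python sketch of the two modeling steps described above: sphere midpoints drawn as a stack of 2D point patterns, and radii obtained by a moving average of independent marks. For brevity the slices are simulated as independent Poisson patterns (the fitted model couples them through a correlation structure), the moving average is taken over nearest-neighbor midpoints as one plausible reading of the procedure, and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 1: sphere midpoints as a stack of 2D point patterns (one per slice).
# Independent Poisson slices stand in for the correlated stack of the model.
n_slices, window, intensity = 10, 30.0, 0.02
points = []
for z in range(n_slices):
    n = rng.poisson(intensity * window**2)
    xy = rng.uniform(0.0, window, size=(n, 2))
    points.append(np.column_stack([xy, np.full(n, float(z))]))
points = np.vstack(points)

# Step 2: radii via a moving average of i.i.d. marks over the k nearest
# midpoints, so that neighboring spheres receive similar radii.
k = 5
marks = rng.gamma(shape=4.0, scale=0.5, size=len(points))
d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(axis=-1)
nearest = np.argsort(d2, axis=1)[:, :k]   # includes the point itself
radii = marks[nearest].mean(axis=1)

print(points.shape, radii.mean())
```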
Approach-level models were developed to accommodate the diversity of approaches within the same intersection. A random effect term, which captures the intersection-specific effect, was incorporated into each crash type model to account for the spatial correlation between different approaches within the same intersection. The model parameters were estimated under the Bayesian framework. Results show that different crash types are correlated with different groups of factors, and each factor shows diverse effects on different crash types, which indicates the importance of crash type models. Moreover, the significance of the random effect term confirms the existence of spatial correlations among different approaches within the same intersection.
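The abstract does not state the functional form; a typical specification for approach-level crash-frequency models with an intersection-specific random effect, written here purely for illustration, is:

```latex
% Illustrative (assumed) form of a random-effect crash-frequency model:
% y_{ij} = crash count of a given type on approach j of intersection i
\begin{align*}
  y_{ij} \mid \lambda_{ij} &\sim \mathrm{Poisson}(\lambda_{ij}),\\
  \log \lambda_{ij} &= \beta_0 + \boldsymbol{\beta}^{\top}\mathbf{x}_{ij} + u_i,\\
  u_i &\sim \mathcal{N}(0, \sigma_u^2),
\end{align*}
% where u_i is the intersection-specific random effect inducing correlation
% among approaches of the same intersection; priors on the betas and on
% sigma_u^2 complete the Bayesian specification.
```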
We discuss an analytical approximation for the matter power spectrum covariance matrix and its inverse on translinear scales, $k \sim 0.1h$--$0.8h/\textrm{Mpc}$ at $z = 0$. We proceed to give an analytical expression for the Fisher information matrix of the nonlinear density field spectrum, and derive implications for its cosmological information content. We find that the spectrum information is characterized by a pair of upper bounds, plateaux, caused by the trispectrum, and a knee in the presence of white noise. The effective number of Fourier modes, normally growing as a power law, is bounded from above by these plateaux, explaining naturally earlier findings from $N$-body simulations. These plateaux limit the best possible measurements of the nonlinear power to the percent level in a $h^{-3}\,\textrm{Gpc}^3$ volume; the extraction of model parameters from the spectrum is limited explicitly by their degeneracy with the nonlinear amplitude. The value of the first, super-survey (SS) plateau depends on the characteristic survey volume and the large-scale power; the second, intra-survey (IS) plateau is set by the small-scale power. While both have simple interpretations within the hierarchical \textit{Ansatz}, the SS plateau can be predicted and generalized to still smaller scales within Takada and Hu's spectrum response formalism. Finally, the noise knee is naturally set by the density of tracers.
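To make the role of the trispectrum-induced plateaux concrete, a hedged sketch of the standard ingredients (the notation here is assumed, not taken from the paper): a Gaussian-plus-trispectrum band-power covariance in a survey of volume $V$ with $N_k$ modes per band, and the Fisher information on the spectrum amplitude $\ln A$ (with $P \propto A$), which defines an effective number of modes that saturates once the trispectrum term dominates.

```latex
% Assumed notation: N_k modes per band, \bar{T} the bin-averaged trispectrum.
C_{kk'} = \frac{2\,P^2(k)}{N_k}\,\delta_{kk'} + \frac{\bar{T}(k,k')}{V},
\qquad
\frac{N_{\mathrm{eff}}(k_{\max})}{2}
  = \sum_{k,k' \le k_{\max}} P(k)\,\bigl(C^{-1}\bigr)_{kk'}\,P(k').
```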
A simple model for image formation in linear shift-invariant systems is considered, in which both the detected signal and the noise variance are varying slowly compared to the point-spread function of the system. It is shown that within the constraints of this model, the square of the signal-to-noise ratio is always proportional to the volume of the spatial resolution unit. In the case of Poisson statistics, the ratio of these two quantities divided by the incident density of the imaging particles (e.g. photons) represents a dimensionless invariant of the imaging system, which was previously termed the intrinsic imaging quality. The relationship of this invariant to the notion of information capacity of communication and imaging systems, which was previously considered by Shannon, Gabor and others, is investigated. The results are then applied to a simple generic model of quantitative imaging of weakly scattering objects, leading to an estimate of the upper limit for the amount of information about the sample that can be obtained in such experiments. It is shown that this limit depends only on the total number of imaging particles incident on the sample, the average scattering coefficient, the size of the sample and the number of spatial resolution units.
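The two central relations stated above can be written compactly; the symbols below are chosen for illustration ($V$ the volume of a spatial resolution unit, $n$ the incident particle density, $Q$ the intrinsic imaging quality):

```latex
% SNR^2 is proportional to the volume V of the spatial resolution unit:
\mathrm{SNR}^2 \propto V,
% so that, for Poisson statistics, the dimensionless combination
Q = \frac{\mathrm{SNR}^2}{n\,V}
% is an invariant of the imaging system (the intrinsic imaging quality).
```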
Spatial resolution is one of the most important specifications of an imaging system. Recent results in quantum parameter estimation theory reveal that an arbitrarily small distance between two incoherent point sources can always be efficiently determined through the use of a spatial mode sorter. However, extending this procedure to a general object consisting of many incoherent point sources remains challenging, due to the intrinsic complexity of multi-parameter estimation problems. Here, we generalize the Richardson-Lucy (RL) deconvolution algorithm to address this challenge. We simulate its application to an incoherent confocal microscope, with a Zernike spatial mode sorter replacing the pinhole used in a conventional confocal microscope. We test different spatially incoherent objects of arbitrary geometry, and we find that the resolution enhancement of sorter-based microscopy is on average over 30% higher than that of a conventional confocal microscope using the standard RL deconvolution algorithm. Our method could potentially be used in diverse applications such as fluorescence microscopy and astronomical imaging.
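For reference, the conventional Richardson-Lucy iteration that the paper generalizes can be sketched as follows. This is the standard image-domain version with an ordinary intensity PSF, not the sorter-based variant, and all names and parameter values are illustrative.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=50, eps=1e-12):
    """Standard RL iteration: estimate <- estimate *
    correlate(image / convolve(estimate, psf), psf)."""
    psf = psf / psf.sum()                       # normalize the PSF
    estimate = np.full_like(image, image.mean(), dtype=float)
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / np.maximum(blurred, eps)
        estimate *= fftconvolve(ratio, psf[::-1, ::-1], mode="same")
    return estimate

# Toy usage: two close point sources blurred by a Gaussian PSF
y, x = np.mgrid[-7:8, -7:8]
psf = np.exp(-(x**2 + y**2) / (2 * 2.0**2))
scene = np.zeros((15, 15))
scene[7, 5] = scene[7, 9] = 1.0
blurred = fftconvolve(scene, psf / psf.sum(), mode="same")
restored = richardson_lucy(blurred, psf, n_iter=200)
```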