The Potts model is frequently used to describe the behavior of image classes, since it incorporates contextual information linking neighboring pixels in a simple way. Its isotropic version has a single real parameter beta, known as the smoothness parameter or inverse temperature, which regulates the homogeneity of the class map. The classes are unobserved, and estimating them is central to important image processing procedures such as image classification. Methods for estimating the classes that stem from a Bayesian approach under the Potts model require an adequately specified value of beta. This parameter can be estimated efficiently by solving the pseudo maximum likelihood (PML) equations in two different schemes, using either the prior or the posterior model. With only radiometric data available, the first scheme requires the computation of an initial segmentation, while the second uses both the segmentation and the radiometric data for the estimation. In this paper, we compare these two PML estimators by computing their mean square error (MSE), bias, and sensitivity to deviations from the model hypotheses. We conclude that the use of extra data does not improve the accuracy of the PML estimator; moreover, under gross deviations from the model, this extra information introduces unpredictable distortions and bias.
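As an illustration of the prior-model scheme, the sketch below (our own code, not the authors') estimates beta by maximizing the pseudo-likelihood of a given integer label map, assuming a first-order (4-pixel) neighborhood and free boundary conditions; the function names and the search bounds are illustrative choices.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def neighbor_counts(labels, k):
    """For each pixel, count how many of its 4-neighbors carry each of the k labels."""
    h, w = labels.shape
    counts = np.zeros((h, w, k))
    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        shifted = np.roll(labels, (dy, dx), axis=(0, 1))
        valid = np.ones((h, w), dtype=bool)   # mask out wrap-around at the border
        if dy == -1:
            valid[-1, :] = False
        elif dy == 1:
            valid[0, :] = False
        if dx == -1:
            valid[:, -1] = False
        elif dx == 1:
            valid[:, 0] = False
        for c in range(k):
            counts[:, :, c] += valid & (shifted == c)
    return counts

def neg_pseudo_log_likelihood(beta, labels, k):
    """Negative log pseudo-likelihood of an isotropic Potts field."""
    u = neighbor_counts(labels, k)                                   # (h, w, k)
    own = np.take_along_axis(u, labels[..., None], axis=2)[..., 0]   # neighbors matching own label
    log_z = np.log(np.exp(beta * u).sum(axis=2))                     # local normalizer
    return -(beta * own - log_z).sum()

def pml_beta(labels, k):
    """Prior-model PML estimate of beta from a label map alone."""
    res = minimize_scalar(neg_pseudo_log_likelihood, bounds=(0.0, 5.0),
                          args=(labels, k), method="bounded")
    return res.x
```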
In this paper, we study the statistical classification accuracy of two different Markov field environments for pixelwise image segmentation, considering the labels of the image as hidden states and estimating those labels as the solution of the MAP equation. The emission distribution is assumed to be the same in all models; the difference lies in the Markovian prior hypothesis made on the labeling random field. The a priori labeling knowledge is modeled with (a) a second-order anisotropic Markov mesh and (b) a classical isotropic Potts model. Under these models, we consider three different segmentation procedures: 2D path-constrained Viterbi training for the hidden Markov mesh, a graph-cut-based segmentation for the first-order isotropic Potts model, and ICM (Iterated Conditional Modes) for the second-order isotropic Potts model. We provide a unified view of all three methods and investigate their goodness of fit for classification, studying the influence of parameter estimation, computational gain, and extent of automation in terms of the statistical measures Overall Accuracy, Relative Improvement, and Kappa coefficient, which allows a robust and accurate statistical analysis on synthetic and real-life experimental data from the field of Dental Diagnostic Radiography. All algorithms, using the learned parameters, generate good segmentations with little interaction when the images have a clear multimodal histogram. Suboptimal learning proves to be frail in the case of non-distinctive modes, which limits the complexity of usable models and hence the achievable error rate as well. All Matlab code written is provided in a toolbox available for download from our website, following the Reproducible Research paradigm.
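For concreteness, here is a minimal sketch of one of the procedures mentioned above, ICM under a Potts prior with Gaussian class-conditional emissions; the raster-scan update, the 4-neighborhood, and all names are illustrative choices of ours, not the toolbox implementation.

```python
import numpy as np

def icm_segmentation(image, means, sigmas, beta, n_iter=10):
    """Illustrative ICM for a Potts prior with Gaussian emissions."""
    k = len(means)
    # Per-pixel, per-class data cost: negative Gaussian log-likelihood (up to constants).
    data = np.stack([0.5 * ((image - m) / s) ** 2 + np.log(s)
                     for m, s in zip(means, sigmas)], axis=2)
    labels = data.argmin(axis=2)            # maximum-likelihood initialization
    h, w = labels.shape
    for _ in range(n_iter):
        for i in range(h):
            for j in range(w):
                # Count 4-neighbors agreeing with each candidate label.
                agree = np.zeros(k)
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < h and 0 <= nj < w:
                        agree[labels[ni, nj]] += 1
                # Local MAP update: data cost minus the Potts reward for agreement.
                labels[i, j] = np.argmin(data[i, j] - beta * agree)
    return labels
```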
The aim of this study was to evaluate the performance of a classical method of fractal analysis, Detrended Fluctuation Analysis (DFA), in the analysis of the dynamics of animal behavior time series. In order to use DFA correctly to assess the presence of long-range correlation, previous authors working with statistical model systems have stated that several aspects should be taken into account: 1) the establishment, by hypothesis testing, of the absence of short-term correlation; 2) an accurate estimation of a straight line in the log-log plot of the fluctuation function; 3) the elimination of artificial crossovers in the fluctuation function; and 4) the length of the time series. Taking these factors into consideration, herein we evaluated the presence of long-range correlation in the temporal pattern of locomotor activity of Japanese quail ({\sl Coturnix coturnix}) and mosquito larvae ({\sl Culex quinquefasciatus}). In our study, modeling the data with the general ARFIMA model, we rejected the hypothesis of short-range correlations (d = 0) in all cases. We also observed that DFA was able to distinguish between the artificial crossover observed in the temporal pattern of locomotion of Japanese quail and the crossovers in the correlation behavior observed in mosquito larvae locomotion. Although the test duration can slightly influence the parameter estimation, no qualitative differences were observed between different test durations.
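A minimal sketch of the standard DFA computation referred to above (our own illustrative code, not the analysis pipeline used in the study):

```python
import numpy as np

def dfa(series, scales, order=1):
    """Detrended Fluctuation Analysis: fluctuation F(n) for each window length n."""
    profile = np.cumsum(series - np.mean(series))      # integrated (profile) series
    fluctuations = []
    for n in scales:
        n_win = len(profile) // n
        segments = profile[:n_win * n].reshape(n_win, n)
        t = np.arange(n)
        ms = []
        for seg in segments:
            trend = np.polyval(np.polyfit(t, seg, order), t)   # local polynomial detrend
            ms.append(np.mean((seg - trend) ** 2))
        fluctuations.append(np.sqrt(np.mean(ms)))
    return np.array(fluctuations)

# The scaling exponent alpha is the slope of log F(n) versus log n;
# alpha = 0.5 indicates an uncorrelated series, alpha > 0.5 long-range
# positive correlation, e.g.:
#   scales = np.unique(np.logspace(1, 3, 15).astype(int))
#   alpha = np.polyfit(np.log(scales), np.log(dfa(x, scales)), 1)[0]
```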
Synthetic Aperture Radar (SAR) images often exhibit profound appearance variations due to a variety of factors, including clutter noise produced by the coherent nature of the illumination. Ultrasound and infrared images have a similarly cluttered appearance, which makes one-dimensional structures, such as edges and object boundaries, difficult to locate. Structure information is usually extracted in two steps: first, building an edge strength mask that classifies pixels as edge points by hypothesis testing, and second, estimating pixel-wide connected edges from that mask. With constant false alarm rate (CFAR) edge strength detectors for speckle clutter, the image needs to be scanned by a sliding window composed of several differently oriented splitting sub-windows. The accuracy of edge location for these ratio detectors depends strongly on the orientation of the sub-windows. In this work we propose to transform the edge strength detection problem into a binary segmentation problem in the undecimated wavelet domain, solvable using parallel 1D hidden Markov models. For general dependency models, exact estimation of the state map becomes computationally complex, but in our model exact MAP estimation is feasible. The effectiveness of our approach is demonstrated on real-life natural images corrupted with simulated noise, for which ground truth is available, while the strength of our output edge map is measured with the Pratt, Baddeley, and Kappa proficiency measures. Finally, analysis and experiments on three different types of SAR images, with different polarizations, resolutions, and textures, illustrate that the proposed method can detect structure in SAR images effectively, providing a very good starting point for active contour methods.
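To make the baseline concrete, the sketch below implements the classical ratio-of-averages edge strength on which CFAR detectors for speckle are built, using only axis-aligned splitting sub-windows; it is an illustrative simplification of ours, not the wavelet-domain HMM method proposed in the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def ratio_edge_strength(image, half=3):
    """Ratio-of-averages edge strength for speckled images: for a horizontal and a
    vertical split of a sliding window, compare the mean intensity of the two
    half-windows; strength = 1 - min(mu1/mu2, mu2/mu1). Border pixels, where the
    shifted windows wrap around, should be discarded in practice."""
    img = image.astype(float)
    strength = np.zeros_like(img)
    offset = (half + 1) // 2
    for axis in (0, 1):
        # Mean over a half x (2*half + 1) block centred on each pixel.
        size = (half, 2 * half + 1) if axis == 0 else (2 * half + 1, half)
        mu = uniform_filter(img, size=size)
        # Means of the half-windows on either side of the splitting line.
        mu1 = np.roll(mu, offset, axis=axis)
        mu2 = np.roll(mu, -offset, axis=axis)
        ratio = np.minimum(mu1, mu2) / (np.maximum(mu1, mu2) + 1e-12)
        strength = np.maximum(strength, 1.0 - ratio)
    return strength
```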
We propose a new Statistical Complexity Measure (SCM) to qualify edge maps without Ground Truth (GT) knowledge. The measure is the product of two indices: an \emph{Equilibrium} index $\mathcal{E}$, obtained by projecting the edge map onto a family of edge patterns, and an \emph{Entropy} index $\mathcal{H}$, defined as a function of the Kolmogorov-Smirnov (KS) statistic. This new measure can be used for performance characterization, which includes: (i)~the specific evaluation of an algorithm (intra-technique process) in order to identify its best parameters, and (ii)~the comparison of different algorithms (inter-technique process) in order to classify them according to their quality. Results on images from the South Florida and Berkeley databases show that our approach significantly improves over Pratt's Figure of Merit (PFoM), the standard for objective reference-based edge map evaluation, as it takes more features into account in its evaluation.
Imputation of missing data in large regions of satellite imagery is necessary when the acquired image has been damaged by shadows due to clouds or by information gaps produced by sensor failure. The general approach to imputation of missing data that cannot be considered missing at random suggests the use of other available data. Previous work, such as local linear histogram matching, takes advantage of a co-registered older image obtained by the same sensor, yielding good results in filling homogeneous regions, but poor results if the scenes being combined have radical differences in target radiance due, for example, to the presence of sun glint or snow. This study proposes three different alternatives for filling the data gaps. The first two involve merging radiometric information from a lower-resolution image acquired at the same time, in the Fourier domain (Method A) and using linear regression (Method B). The third method considers segmentation as the main target of processing and proposes a way to fill the gaps in the map of classes, avoiding direct imputation (Method C). All the methods were compared by means of a large simulation study, evaluating performance with a multivariate response vector of four measures: the Q, RMSE, Kappa, and Overall Accuracy coefficients. Differences in performance were tested with a MANOVA mixed-model design with two main effects, the imputation method and the type of lower-resolution extra data, and a blocking third factor, with a nested sub-factor, introduced by the real Landsat image and the sub-images that were used. Method B proved to be the best under all criteria.
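As a rough illustration of the idea behind Method B, the sketch below fits a linear relation between a co-registered auxiliary image and the damaged image on the valid pixels and predicts the gap; it assumes the lower-resolution image has already been resampled to the target grid, and the function names are ours, not the authors' code.

```python
import numpy as np

def regression_fill(damaged, aux, gap_mask):
    """Fill the gap_mask pixels of `damaged` by linear regression on a co-registered
    auxiliary image (already resampled to the same grid), fitted on valid pixels."""
    valid = ~gap_mask
    slope, intercept = np.polyfit(aux[valid], damaged[valid], 1)   # least-squares fit
    filled = damaged.copy()
    filled[gap_mask] = slope * aux[gap_mask] + intercept
    return filled
```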
In this paper, we address the question of comparison between populations of trees. We study a statistical test based on the distance between empirical mean trees, as an analog of the two-sample z statistic for comparing two means. Despite its simplicity, the test turns out to be quite powerful in separating distributions with different means, but it does not distinguish between different populations with the same mean; a more elaborate test should be applied in that setting. The performance of the test is studied via simulations on Galton-Watson branching processes. We also show an application to a real data problem in genomics.
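A schematic version of such a test is sketched below: the statistic is the distance between the empirical mean trees of the two samples, while the permutation calibration and the placeholder functions mean_tree and tree_distance are illustrative choices of ours rather than the construction used in the paper.

```python
import numpy as np

def two_sample_tree_test(sample_a, sample_b, mean_tree, tree_distance,
                         n_perm=999, seed=None):
    """Two-sample test on populations of trees.

    `mean_tree` maps a list of trees to a representative mean tree and
    `tree_distance` measures the distance between two trees; both are
    assumed to be supplied by the user."""
    rng = np.random.default_rng(seed)
    observed = tree_distance(mean_tree(sample_a), mean_tree(sample_b))
    pooled = list(sample_a) + list(sample_b)
    n_a = len(sample_a)
    count = 0
    for _ in range(n_perm):
        idx = rng.permutation(len(pooled))
        perm_a = [pooled[i] for i in idx[:n_a]]
        perm_b = [pooled[i] for i in idx[n_a:]]
        if tree_distance(mean_tree(perm_a), mean_tree(perm_b)) >= observed:
            count += 1
    p_value = (count + 1) / (n_perm + 1)   # permutation p-value
    return observed, p_value
```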