
Outlier Elimination for Robust Ellipse and Ellipsoid Fitting

Added by Jieqi Yu
Publication date: 2009
Language: English





In this paper, an outlier elimination algorithm for ellipse/ellipsoid fitting is proposed. This two-stage algorithm employs a proximity-based outlier detection algorithm (using the graph Laplacian), followed by a model-based outlier detection algorithm similar to random sample consensus (RANSAC). The two stages compensate for each other, so that outliers of various types can be eliminated at reasonable computational cost. The outlier elimination algorithm considerably improves the robustness of ellipse/ellipsoid fitting, as demonstrated by simulations.
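To make the two-stage idea concrete, here is a minimal Python/NumPy sketch, not the paper's exact algorithm: stage one drops isolated points using the degrees of a Gaussian-kernel graph, a simplified stand-in for the paper's graph-Laplacian proximity score, and stage two runs a RANSAC-like consensus loop around an algebraic least-squares conic fit. The function names, kernel width, thresholds and iteration counts are illustrative assumptions.

```python
import numpy as np

def fit_conic(points):
    """Algebraic least-squares conic fit: smallest right singular vector of the design matrix."""
    x, y = points[:, 0], points[:, 1]
    D = np.column_stack([x**2, x * y, y**2, x, y, np.ones_like(x)])
    _, _, vt = np.linalg.svd(D)
    return vt[-1]  # coefficients of a*x^2 + b*xy + c*y^2 + d*x + e*y + f = 0

def algebraic_residual(points, theta):
    x, y = points[:, 0], points[:, 1]
    D = np.column_stack([x**2, x * y, y**2, x, y, np.ones_like(x)])
    return np.abs(D @ theta)

def proximity_filter(points, sigma=1.0, keep_frac=0.9):
    """Stage 1: drop isolated points using degrees of a Gaussian-kernel graph
    (a simplified stand-in for the paper's graph-Laplacian proximity score)."""
    d2 = np.sum((points[:, None, :] - points[None, :, :])**2, axis=-1)
    W = np.exp(-d2 / (2 * sigma**2))
    np.fill_diagonal(W, 0.0)
    degree = W.sum(axis=1)                      # diagonal of D in L = D - W
    cutoff = np.quantile(degree, 1.0 - keep_frac)
    return points[degree >= cutoff]

def ransac_ellipse(points, n_iter=500, tol=1e-2, rng=None):
    """Stage 2: RANSAC-like consensus fit on the proximity-filtered points."""
    rng = np.random.default_rng(rng)
    best_inliers = None
    for _ in range(n_iter):
        sample = points[rng.choice(len(points), size=5, replace=False)]
        theta = fit_conic(sample)
        inliers = algebraic_residual(points, theta) < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return fit_conic(points[best_inliers]), best_inliers

# usage: the two stages compensate for each other
# kept = proximity_filter(raw_points)
# theta, inliers = ransac_ellipse(kept)
```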



Related research

We consider the robust filtering problem for a nonlinear state-space model with outliers in the measurements. To improve the robustness of the traditional Kalman filtering algorithm, we propose two robust filters based on mixture correntropy, specifically the double-Gaussian mixture correntropy and the Laplace-Gaussian mixture correntropy. We formulate the robust filtering problem by replacing the quadratic cost on measurement fitting errors in the conventional Kalman filter with the mixture-correntropy-induced cost. In addition, a tradeoff weight coefficient is introduced to ensure that the proposed approaches provide reasonable state estimates in scenarios where measurement fitting errors are small. The formulated robust filtering problems are solved iteratively within the cubature Kalman filtering framework using a reweighted measurement covariance. Numerical results show that the proposed methods achieve a performance improvement over existing robust solutions.
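As a rough illustration of the reweighting idea, the sketch below computes an IRLS-style weight from a double-Gaussian mixture correntropy and uses it to inflate the measurement covariance in a plain linear Kalman update; the paper's actual filters work within the cubature Kalman framework and include a tradeoff coefficient, both omitted here. The kernel widths, mixture weight and fixed-point iteration count are assumptions for the example.

```python
import numpy as np

def mixture_correntropy_weight(e, sigma1=1.0, sigma2=3.0, alpha=0.5):
    """IRLS-style weight induced by a double-Gaussian mixture correntropy cost.
    Small residuals get weight close to 1 (near-quadratic cost); large residuals
    are down-weighted, which inflates the effective measurement covariance."""
    return (alpha * np.exp(-e**2 / (2 * sigma1**2)) +
            (1 - alpha) * np.exp(-e**2 / (2 * sigma2**2)))

def robust_measurement_update(x_pred, P_pred, z, H, R, n_iter=5):
    """Linear measurement update with a reweighted covariance R / w, iterated
    to a fixed point (a simplified stand-in for the cubature-filter update)."""
    x = x_pred.copy()
    for _ in range(n_iter):
        r = z - H @ x                           # measurement fitting error at current estimate
        w = mixture_correntropy_weight(np.linalg.norm(r))
        R_w = R / max(w, 1e-8)                  # outlier -> tiny w -> huge R_w -> small gain
        S = H @ P_pred @ H.T + R_w
        K = P_pred @ H.T @ np.linalg.inv(S)
        x = x_pred + K @ (z - H @ x_pred)
    P = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return x, P
```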
Model averaging is an alternative to model selection for dealing with model uncertainty, and it is widely used and very valuable. However, most existing model averaging methods are based on the least squares loss function, which can be very sensitive to the presence of outliers in the data. In this paper, we propose an outlier-robust model averaging approach based on a Mallows-type criterion. The key idea is to develop weight choice criteria by minimising an estimator of the expected prediction error for a loss function that is convex with a unique minimum and twice differentiable in expectation, rather than the expected squared error. Robust loss functions, such as least absolute deviation and Huber's function, reduce the effects of large residuals and poor samples. A simulation study and real data analysis are conducted to demonstrate the finite-sample performance of our estimators and to compare them with other model selection and averaging methods.
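The following toy sketch conveys the flavour of robust weight choice: averaging weights on the simplex are picked by minimizing a Huber loss of the combined residuals, standing in for the paper's Mallows-type criterion, whose complexity penalty and exact estimator of the expected prediction error are omitted here.

```python
import numpy as np
from scipy.optimize import minimize

def huber(r, c=1.345):
    """Huber loss: quadratic for small residuals, linear for large ones."""
    a = np.abs(r)
    return np.where(a <= c, 0.5 * r**2, c * (a - 0.5 * c))

def averaging_weights(y, preds):
    """Choose simplex weights for candidate models by minimizing a robust
    (Huber) criterion of the averaged residuals.  `preds` is (n, M): one
    column of fitted values per candidate model."""
    n, M = preds.shape
    crit = lambda w: huber(y - preds @ w).sum()
    cons = ({'type': 'eq', 'fun': lambda w: w.sum() - 1.0},)
    bnds = [(0.0, 1.0)] * M
    w0 = np.full(M, 1.0 / M)
    res = minimize(crit, w0, bounds=bnds, constraints=cons, method='SLSQP')
    return res.x
```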
A collection of robust Mahalanobis distances for multivariate outlier detection is proposed, based on the notion of shrinkage. Robust intensity and scaling factors are optimally estimated to define the shrinkage. Some properties are investigated, such as affine equivariance and breakdown value. The performance of the proposal is illustrated through a comparison with other techniques from the literature, in a simulation study and with a real dataset. The behavior when the underlying distribution is heavy-tailed or skewed shows the appropriateness of the method when we deviate from the common assumption of normality. The resulting high correct detection rates and low false detection rates in the vast majority of cases, as well as the significantly smaller computation time, show the advantages of our proposal.
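Below is a minimal sketch of the shrinkage idea, with a fixed shrinkage intensity toward a scaled-identity target and a chi-square cutoff; the paper instead estimates the intensity and scaling factors optimally and studies properties such as the breakdown value.

```python
import numpy as np
from scipy.stats import chi2

def shrinkage_mahalanobis(X, lam=0.3, alpha=0.975):
    """Flag multivariate outliers with a shrinkage-regularized Mahalanobis
    distance.  The shrinkage intensity `lam` is fixed here for illustration
    only; optimal estimation of intensity and scaling is not shown."""
    n, p = X.shape
    mu = np.median(X, axis=0)                     # simple robust location
    S = np.cov(X, rowvar=False)
    target = np.trace(S) / p * np.eye(p)          # scaled-identity shrinkage target
    S_shrunk = (1 - lam) * S + lam * target
    diff = X - mu
    d2 = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(S_shrunk), diff)
    return d2, d2 > chi2.ppf(alpha, df=p)         # squared distances, outlier flags
```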
We give the first polynomial-time algorithm for performing linear or polynomial regression resilient to adversarial corruptions in both examples and labels. Given a sufficiently large (polynomial-size) training set drawn i.i.d. from a distribution D and subsequently corrupted on some fraction of points, our algorithm outputs a linear function whose squared error is close to the squared error of the best-fitting linear function with respect to D, assuming that the marginal distribution of D over the input space is certifiably hypercontractive. This natural property is satisfied by many well-studied distributions, such as Gaussian distributions, strongly log-concave distributions, and the uniform distribution on the hypercube, among others. We also give a simple statistical lower bound showing that some distributional assumption is necessary to succeed in this setting. These results are the first of their kind and were not known to be even information-theoretically possible prior to our work. Our approach is based on the sum-of-squares (SoS) method and is inspired by recent applications of the method to parameter recovery problems in unsupervised learning. Our algorithm can be seen as a natural convex relaxation of the following conceptually simple non-convex optimization problem: find a linear function and a large subset of the corrupted input sample such that the least squares loss of the function over the subset is minimized over all possible large subsets.
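The conceptual non-convex problem at the end of the abstract can be illustrated with a simple alternating heuristic, shown below purely for intuition: it is emphatically not the paper's sum-of-squares relaxation and carries none of its guarantees.

```python
import numpy as np

def trimmed_least_squares(X, y, keep_frac=0.8, n_iter=20):
    """Toy alternating heuristic for the objective described in the abstract:
    pick a large subset and a linear function so that the least squares loss
    over the subset is small.  Illustration only; the paper solves a convex
    sum-of-squares relaxation with provable guarantees, not this heuristic."""
    n = len(y)
    k = int(keep_frac * n)
    idx = np.arange(n)
    for _ in range(n_iter):
        beta, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)  # fit on current subset
        resid = (y - X @ beta) ** 2
        idx = np.argsort(resid)[:k]                             # keep k smallest residuals
    return beta, idx
```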
We present the Simplified Lissajous Ellipse Fitting (SLEF) method for calculating the random phase step and the phase distribution from two phase-shifted interferograms. We consider interferograms with spatially and temporally varying background intensities, amplitude modulations and noise. Given these problems, the use of the Gabor Filters Bank (GFB) allows us to filter out the noise, normalize the amplitude and eliminate the background. The normalized patterns allow us to implement the SLEF algorithm, which is based on reducing the number of estimated coefficients of the ellipse equation from five terms to only two. Our method consists of three stages. First, we preprocess the interferograms with the GFB methodology in order to normalize the fringe patterns. Second, we calculate the phase step using the proposed SLEF technique, and third, we estimate the phase distribution using a two-step formula. For the calculation of the phase step, we present two alternatives: the use of the Least Squares (LS) method to approximate the values of the coefficients and, in order to improve the LS estimation, a robust estimation based on Leclerc's potential. The SLEF method's performance is evaluated through synthetic and experimental data to demonstrate its feasibility.
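Assuming GFB-normalized patterns of the form x = cos(phi) and y = cos(phi + delta), the Lissajous ellipse relation x^2 + y^2 - 2xy cos(delta) = sin^2(delta) is linear in just two unknowns, which is the reduction SLEF exploits. Below is a minimal least-squares sketch of the phase-step estimate and the two-step phase formula; the robust variant based on Leclerc's potential is not shown, and the function names are illustrative.

```python
import numpy as np

def slef_phase_step(x, y):
    """Least-squares phase step from two normalized fringe patterns,
    modeled as x = cos(phi), y = cos(phi + delta).  The reduced relation
    x^2 + y^2 - 2*x*y*cos(delta) = sin(delta)^2 is linear in the two
    unknowns u = cos(delta) and v = sin(delta)^2."""
    x, y = x.ravel(), y.ravel()
    A = np.column_stack([2 * x * y, np.ones_like(x)])   # rows [2xy, 1] against [u, v]
    b = x**2 + y**2
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.arctan2(np.sqrt(max(v, 0.0)), u)           # delta recovered in (0, pi)

def two_step_phase(x, y, delta):
    """Two-step phase demodulation once the phase step is known:
    x*cos(delta) - y = sin(phi)*sin(delta), x*sin(delta) = cos(phi)*sin(delta)."""
    return np.arctan2(x * np.cos(delta) - y, x * np.sin(delta))
```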
