
Least Squares and Shrinkage Estimation under Bimonotonicity Constraints

Published by: Lutz Dümbgen
Publication date: 2009
Research field: Mathematical Statistics
Language: English





In this paper we describe active set type algorithms for minimization of a smooth function under general order constraints, an important case being functions on the set of bimonotone r-by-s matrices. These algorithms can be used, for instance, to estimate a bimonotone regression function via least squares or (a smooth approximation of) least absolute deviations. Another application is shrinkage estimation in image denoising or, more generally, regression problems with two ordinal factors after representing the data in a suitable basis which is indexed by pairs (i,j) in {1,...,r}x{1,...,s}. Various numerical examples illustrate our methods.
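As a rough illustration of the estimation problem (not of the authors' active set algorithm), the bimonotone least-squares fit can be written as a quadratic program: minimize ||theta - y||^2 over r-by-s matrices theta that are nondecreasing in both the row and the column index. The sketch below hands this program to a generic solver (SciPy's SLSQP); the grid size, noise level, and variable names are illustrative assumptions.

```python
# Bimonotone least-squares fit on a small r-by-s grid.
# Minimal sketch using a generic QP-capable solver (SLSQP), not the paper's
# active set algorithm; data and names are illustrative.
import numpy as np
from scipy.optimize import minimize

r, s = 4, 5
rng = np.random.default_rng(0)
truth = np.add.outer(np.linspace(0.0, 1.0, r), np.linspace(0.0, 1.0, s))  # bimonotone surface
y = truth + 0.3 * rng.standard_normal((r, s))                             # noisy observations

def objective(theta):
    return 0.5 * np.sum((theta - y.ravel()) ** 2)

# Difference matrix encoding bimonotonicity: theta[i+1, j] >= theta[i, j]
# and theta[i, j+1] >= theta[i, j] for all valid (i, j).
def unit(i, j):
    e = np.zeros(r * s)
    e[i * s + j] = 1.0
    return e

rows = []
for i in range(r):
    for j in range(s):
        if i + 1 < r:
            rows.append(unit(i + 1, j) - unit(i, j))
        if j + 1 < s:
            rows.append(unit(i, j + 1) - unit(i, j))
A = np.array(rows)

res = minimize(objective, x0=y.ravel(), method="SLSQP",
               constraints=[{"type": "ineq", "fun": lambda t: A @ t}])
theta_hat = res.x.reshape(r, s)
print(np.round(theta_hat, 2))  # least-squares fit respecting both orderings
```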




Read also

The identification of increasingly small signals from objects observed with a non-perfect instrument in a noisy environment poses a challenge for a statistically clean data analysis. We want to compute the probability that frequencies determined in various data sets are related or not, which cannot be answered with a simple comparison of amplitudes. Our method provides a statistical estimator for whether a given signal, appearing with different strengths in a set of observations, is of instrumental origin or intrinsic. Based on the spectral significance as an unbiased statistical quantity in frequency analysis, Discrete Fourier Transforms (DFTs) of target and background light curves are comparatively examined. The individual False-Alarm Probabilities are used to deduce conditional probabilities for a peak in a target spectrum to be real in spite of a corresponding peak in the spectrum of a background or of comparison stars. Alternatively, we can compute joint probabilities of frequencies occurring in the DFT spectra of several data sets simultaneously but with different amplitudes, which leads to composed spectral significances. These are useful to investigate a star observed in different filters or during several observing runs. The composed spectral significance is a measure of the probability that none of the coinciding peaks in the DFT spectra under consideration are due to noise. Cinderella is a mathematical approach to a general statistical problem. Its potential reaches beyond photometry from ground or space: to all cases where a quantitative statistical comparison of periodicities in different data sets is desired. Examples for the composed and the conditional Cinderella mode for different observation setups are presented.
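The Cinderella statistics themselves are not reproduced here. As a loose illustration only: if each data set yields a false-alarm probability FAP_k for a peak at a common frequency (with spectral significance sig_k = -log10 FAP_k), the elementary product rule gives a naive probability that none of the coinciding peaks is a noise artifact. The formula and names below are assumptions for illustration, not the published definitions.

```python
# Naive combination of false-alarm probabilities for a peak seen at the same
# frequency in several data sets. Illustrative only: NOT the published
# Cinderella formula, just the elementary product rule for independent peaks.
import numpy as np

def combined_not_noise_probability(sigs):
    """sigs: spectral significances sig_k = -log10(FAP_k), one per data set."""
    faps = 10.0 ** (-np.asarray(sigs, dtype=float))
    return np.prod(1.0 - faps)   # probability that no coinciding peak is noise

print(combined_not_noise_probability([4.0, 2.5, 3.2]))  # peaks in three runs
```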
Penalization procedures often suffer from their dependence on multiplying factors, whose optimal values are either unknown or hard to estimate from the data. We propose a completely data-driven calibration algorithm for this parameter in the least-squares regression framework, without assuming a particular shape for the penalty. Our algorithm relies on the concept of minimal penalty, recently introduced by Birge and Massart (2007) in the context of penalized least squares for Gaussian homoscedastic regression. On the positive side, the minimal penalty can be evaluated from the data themselves, leading to a data-driven estimation of an optimal penalty which can be used in practice; on the negative side, their approach heavily relies on the homoscedastic Gaussian nature of their stochastic framework. The purpose of this paper is twofold: stating a more general heuristics for designing a data-driven penalty (the slope heuristics) and proving that it works for penalized least-squares regression with a random design, even for heteroscedastic non-Gaussian data. For technical reasons, some exact mathematical results will be proved only for regressogram bin-width selection. This is at least a first step towards further results, since the approach and the method that we use are indeed general.
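A minimal sketch of the slope-heuristics recipe on the regressogram bin-width example mentioned above: fit regressograms with an increasing number of bins, read off the minimal-penalty slope from how the empirical risk decreases for the largest dimensions, and select the model with twice that penalty. The data-generating model, the dimension range, and the straight-line slope estimate are simplifying assumptions, not the paper's calibration algorithm.

```python
# Slope-heuristics calibration for regressogram bin-width selection.
# Minimal sketch: estimate the minimal penalty from the large-dimension
# slope of the empirical risk, then double it.
import numpy as np

rng = np.random.default_rng(1)
n = 500
x = rng.uniform(size=n)
y = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(n)

def regressogram_risk(x, y, D):
    """Empirical least-squares risk of a regressogram with D equal-width bins."""
    bins = np.minimum((x * D).astype(int), D - 1)
    resid = y.copy()
    for b in range(D):
        mask = bins == b
        if mask.any():
            resid[mask] -= y[mask].mean()
    return np.mean(resid ** 2)

dims = np.arange(1, 101)
risks = np.array([regressogram_risk(x, y, D) for D in dims])

# For large D the empirical risk decreases roughly linearly in D/n; the
# magnitude of that slope estimates the minimal-penalty constant.
large = dims >= 50
slope = np.polyfit(dims[large] / n, risks[large], 1)[0]
pen = 2.0 * (-slope) * dims / n          # slope heuristics: twice the minimal penalty
D_hat = dims[np.argmin(risks + pen)]
print("selected number of bins:", D_hat)
```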
Ben Boukai, Yue Zhang (2018)
We consider a resampling scheme for parameter estimates in nonlinear regression models. We provide an estimation procedure which recycles, via random weighting, the relevant parameter estimates to construct consistent estimates of the sampling distribution of the various estimates. We establish the asymptotic normality of the resampled estimates and demonstrate the applicability of the recycling approach in a small simulation study and via an example.
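A minimal sketch of the random-weighting idea, assuming mean-one exponential weights and a toy exponential-decay regression model (both illustrative choices, not taken from the paper): each resample re-minimizes the randomly weighted least-squares criterion starting from the original estimate.

```python
# Random-weighting resampling of parameter estimates in a nonlinear
# regression model. Illustrative sketch; model and names are assumptions.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(2)
n = 200
x = np.linspace(0.0, 4.0, n)
true = (2.0, 0.7)
y = true[0] * np.exp(-true[1] * x) + 0.1 * rng.standard_normal(n)

def residuals(theta, w=None):
    r = y - theta[0] * np.exp(-theta[1] * x)
    return r if w is None else np.sqrt(w) * r

theta_hat = least_squares(residuals, x0=(1.0, 1.0)).x   # original estimate

# Resample: draw i.i.d. mean-one weights and re-minimize the weighted criterion.
B = 500
draws = np.empty((B, 2))
for b in range(B):
    w = rng.exponential(scale=1.0, size=n)
    draws[b] = least_squares(residuals, x0=theta_hat, args=(w,)).x

print("estimate:", theta_hat)
print("resampling std. errors:", draws.std(axis=0))
```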
This work provides a theoretical framework for the pose estimation problem using total least squares for vector observations from landmark features. First, the optimization framework is formulated for the pose estimation problem with observation vectors extracted from point cloud features. Then, error-covariance expressions are derived. The attitude and position solutions obtained via the derived optimization framework are proven to reach the bounds defined by the Cramer-Rao lower bound under the small-angle approximation of attitude errors. The measurement data for the simulation of this problem are provided through a series of vector observation scans, and a fully populated observation noise-covariance matrix is assumed as the weight in the cost function to cover the most general case of sensor uncertainty. Here, previous derivations are expanded for the pose estimation problem to include more general error correlations than the previously treated cases involving an isotropic noise assumption. The proposed solution is simulated in a Monte-Carlo framework with 10,000 samples to validate the error-covariance analysis.
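For orientation only, the sketch below solves a simplified version of this problem: rigid pose (rotation and translation) from matched landmark points via the standard SVD (Kabsch/Horn) solution, which assumes isotropic noise. The paper's total-least-squares framework with a fully populated noise covariance is not reproduced; all names and the simulated data are illustrative.

```python
# Rigid pose (rotation + translation) from matched landmark points using the
# standard SVD solution under an isotropic-noise assumption (illustrative).
import numpy as np

def estimate_pose(p_body, p_world):
    """Find R, t minimizing sum ||p_world_i - (R @ p_body_i + t)||^2."""
    cb, cw = p_body.mean(axis=0), p_world.mean(axis=0)
    H = (p_body - cb).T @ (p_world - cw)        # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                          # proper rotation (det = +1)
    t = cw - R @ cb
    return R, t

rng = np.random.default_rng(3)
p_body = rng.standard_normal((30, 3))
R_true, _ = np.linalg.qr(rng.standard_normal((3, 3)))
R_true *= np.sign(np.linalg.det(R_true))        # ensure a proper rotation
t_true = np.array([0.5, -1.0, 2.0])
p_world = p_body @ R_true.T + t_true + 0.01 * rng.standard_normal((30, 3))

R_hat, t_hat = estimate_pose(p_body, p_world)
print("rotation error:", np.linalg.norm(R_hat - R_true))
print("translation error:", np.linalg.norm(t_hat - t_true))
```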
In this tutorial we schematically illustrate four algorithms: (1) ABC rejection for parameter estimation, (2) ABC SMC for parameter estimation, (3) ABC rejection for model selection on the joint space, and (4) ABC SMC for model selection on the joint space.
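A minimal sketch of algorithm (1), ABC rejection for parameter estimation, on a toy Gaussian-mean problem; the prior, summary statistic, and tolerance are illustrative choices rather than the tutorial's examples.

```python
# ABC rejection sampling for parameter estimation on a toy Gaussian-mean
# problem (illustrative prior, summary statistic, and tolerance).
import numpy as np

rng = np.random.default_rng(4)
observed = rng.normal(loc=3.0, scale=1.0, size=100)   # data with unknown mean
s_obs = observed.mean()                                # summary statistic

accepted = []
tolerance = 0.2
while len(accepted) < 500:
    theta = rng.uniform(-10.0, 10.0)                   # draw from the prior
    simulated = rng.normal(loc=theta, scale=1.0, size=observed.size)
    if abs(simulated.mean() - s_obs) < tolerance:      # keep if summaries match
        accepted.append(theta)

posterior = np.array(accepted)
print("ABC posterior mean:", posterior.mean(), "sd:", posterior.std())
```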