
Gradient Descent-based D-optimal Design for the Least-Squares Polynomial Approximation

Posted by Vitaly Zankin
Publication date: 2018
Research field: Informatics Engineering
Paper language: English





In this work, we propose a novel sampling method for Design of Experiments. The method samples the input values of a computational model's parameters so that the surrogate model constructed from them has the least possible approximation error. The high efficiency of the proposed method is demonstrated by comparing it with other sampling techniques (LHS, Sobol sequence sampling, and Maxvol sampling) on the problem of least-squares polynomial approximation. Numerical experiments on the growth of the Lebesgue constant for the points sampled by the proposed method are also carried out.
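To make the D-optimality idea concrete, here is a minimal one-dimensional sketch (not the authors' implementation): gradient ascent on log det(V^T V), where V is the Vandermonde matrix of the candidate points. The function names, the monomial basis, the interval [-1, 1], and the step size are assumptions chosen for illustration.

```python
import numpy as np

def vandermonde(x, degree):
    """Rows are the monomial basis [1, x, x^2, ..., x^degree] at each point."""
    return np.vander(x, degree + 1, increasing=True)

def d_vandermonde(x, degree):
    """Derivative of each basis row with respect to its own point."""
    powers = np.arange(degree + 1)
    V = np.vander(x, degree + 1, increasing=True)
    dV = np.zeros_like(V)
    dV[:, 1:] = powers[1:] * V[:, :-1]
    return dV

def d_optimal_points(n_points, degree, n_iter=500, lr=1e-3, seed=0):
    """Gradient ascent on log det(V^T V) over the point locations in [-1, 1]."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, size=n_points)
    for _ in range(n_iter):
        V = vandermonde(x, degree)
        dV = d_vandermonde(x, degree)
        M_inv = np.linalg.inv(V.T @ V)
        # d/dx_k log det(V^T V) = 2 * dV_k^T (V^T V)^{-1} V_k
        grad = 2.0 * np.einsum('ij,jk,ik->i', dV, M_inv, V)
        x = np.clip(x + lr * grad, -1.0, 1.0)
    return np.sort(x)

pts = d_optimal_points(n_points=8, degree=7)
print(pts)  # points cluster toward the interval ends, as expected for D-optimal designs
```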



Read also

Marc Baboulin, 2010
We derive closed formulas for the condition number of a linear function of the total least squares solution. Given an overdetermined linear system Ax=b, we show that this condition number can be computed using the singular values and the right singular vectors of [A,b] and A. We also provide an upper bound that requires the computation of the largest and the smallest singular value of [A,b] and the smallest singular value of A. In numerical examples, we compare these values and the resulting forward error bounds with existing error estimates.
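For context on the quantities the bound refers to, a minimal sketch (not the paper's closed formulas) of the classical SVD-based total least squares solution and the relevant singular values might look like this; the function name and the test data are assumptions for illustration.

```python
import numpy as np

def tls_solution(A, b):
    """Classical SVD-based total least squares solution of Ax ~ b."""
    Ab = np.column_stack([A, b])
    U, s, Vt = np.linalg.svd(Ab)
    v = Vt[-1]                      # right singular vector for the smallest singular value of [A, b]
    x_tls = -v[:-1] / v[-1]
    # singular values the abstract's upper bound depends on
    sigma_Ab_max, sigma_Ab_min = s[0], s[-1]
    sigma_A_min = np.linalg.svd(A, compute_uv=False)[-1]
    return x_tls, (sigma_Ab_max, sigma_Ab_min, sigma_A_min)

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 3))
b = A @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.standard_normal(50)
x, sigmas = tls_solution(A, b)
print(x, sigmas)
```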
We consider equation systems of the form X_1 = f_1(X_1, ..., X_n), ..., X_n = f_n(X_1, ..., X_n) where f_1, ..., f_n are polynomials with positive real coefficients. In vector form we denote such an equation system by X = f(X) and call f a system of positive polynomials, SPP for short. Equation systems of this kind appear naturally in the analysis of stochastic models like stochastic context-free grammars (with numerous applications to natural language processing and computational biology), probabilistic programs with procedures, web-surfing models with back buttons, and branching processes. The least nonnegative solution mu_f of an SPP equation X = f(X) is of central interest for these models. Etessami and Yannakakis have suggested a particular version of Newton's method to approximate mu_f. We extend a result of Etessami and Yannakakis and show that Newton's method starting at 0 always converges to mu_f. We obtain lower bounds on the convergence speed of the method. For so-called strongly connected SPPs we prove the existence of a threshold k_f such that for every i >= 0 the (k_f+i)-th iteration of Newton's method has at least i valid bits of mu_f. The proof yields an explicit bound for k_f depending only on syntactic parameters of f. We further show that for arbitrary SPP equations Newton's method still converges linearly: there are k_f>=0 and alpha_f>0 such that for every i>=0 the (k_f+alpha_f i)-th iteration of Newton's method has at least i valid bits of mu_f. The proof yields an explicit bound for alpha_f; the bound is exponential in the number of equations, but we also show that it is essentially optimal. Constructing a bound for k_f is still an open problem. Finally, we also provide a geometric interpretation of Newton's method for SPPs.
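A minimal sketch of the Newton iteration described above, started at 0, on a hypothetical one-variable SPP (a toy branching process, chosen here only for illustration): the iterate solves a linearization of X = f(X) at the current point.

```python
import numpy as np

# Hypothetical single-equation SPP: X = 0.3*X^2 + 0.6.
# Its least nonnegative solution is (1 - sqrt(1 - 4*0.3*0.6)) / (2*0.3) ~ 0.78475.

def f(x):
    return 0.3 * x**2 + 0.6

def f_jac(x):
    return np.array([[0.6 * x[0]]])      # Jacobian of f, here 1x1

def newton_spp(n_iter=10):
    x = np.zeros(1)                       # Newton's method started at 0
    for _ in range(n_iter):
        g = f(x) - x                      # residual of the fixed-point equation
        J = np.eye(1) - f_jac(x)          # I - f'(x)
        x = x + np.linalg.solve(J, g)     # x_{k+1} = x_k + (I - f'(x_k))^{-1} (f(x_k) - x_k)
    return x

print(newton_spp())                                   # approaches the least solution mu_f
print((1 - np.sqrt(1 - 4 * 0.3 * 0.6)) / (2 * 0.3))   # closed-form least root for comparison
```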
In recent studies on sparse modeling, $l_q$ ($0<q<1$) regularized least squares regression ($l_q$LS) has received considerable attention due to its advantages in sparsity induction and bias reduction over its convex counterparts. In this paper, we propose a Gauss-Seidel iterative thresholding algorithm (called GAITA) for solving this problem. Different from the classical iterative thresholding algorithms using the Jacobi updating rule, GAITA takes advantage of the Gauss-Seidel rule to update the coordinate coefficients. Under a mild condition, we can justify that the support set and sign of an arbitrary sequence generated by GAITA will converge within finitely many iterations. This convergence property, together with the Kurdyka-Łojasiewicz property of ($l_q$LS), naturally yields the strong convergence of GAITA under the same condition as above, which is generally weaker than the condition for the convergence of the classical iterative thresholding algorithms. Furthermore, we demonstrate that GAITA converges to a local minimizer under certain additional conditions. A set of numerical experiments is provided to show the effectiveness of GAITA, in particular its much faster convergence compared with the classical iterative thresholding algorithms.
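The following is a rough sketch of the general idea of Gauss-Seidel (coordinate-wise) thresholding for the $l_q$-penalized least squares objective, not the authors' GAITA algorithm: since the scalar $l_q$ subproblem has no closed form for general $0<q<1$, this sketch solves it numerically; the function names, the numerical prox, and the test data are assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def lq_prox(z, lam, q):
    """Numerically solve min_t 0.5*(t - z)^2 + lam*|t|^q (no closed form for general 0 < q < 1)."""
    if z == 0.0:
        return 0.0
    s, a = np.sign(z), abs(z)
    res = minimize_scalar(lambda t: 0.5 * (t - a) ** 2 + lam * t ** q,
                          bounds=(1e-12, a), method='bounded')
    interior = 0.5 * (res.x - a) ** 2 + lam * res.x ** q
    return s * res.x if interior < 0.5 * a ** 2 else 0.0   # compare with the candidate t = 0

def gauss_seidel_lq(A, b, lam=0.1, q=0.5, n_sweeps=100):
    """Coordinate-wise (Gauss-Seidel) thresholding for min_x 0.5*||Ax - b||^2 + lam*sum_i |x_i|^q."""
    m, n = A.shape
    x = np.zeros(n)
    col_norm2 = (A ** 2).sum(axis=0)
    r = b - A @ x
    for _ in range(n_sweeps):
        for i in range(n):
            r += A[:, i] * x[i]                        # remove coordinate i from the residual
            z = A[:, i] @ r / col_norm2[i]             # unpenalized coordinate minimizer
            x[i] = lq_prox(z, lam / col_norm2[i], q)   # scalar l_q-penalized update
            r -= A[:, i] * x[i]
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 80))
x_true = np.zeros(80); x_true[:5] = [3, -2, 1.5, -1, 2]
b = A @ x_true + 0.01 * rng.standard_normal(40)
print(np.nonzero(gauss_seidel_lq(A, b))[0])            # the recovered support should be small
```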
This work provides a theoretical framework for the pose estimation problem using total least squares for vector observations from landmark features. First, the optimization framework is formulated for the pose estimation problem with observation vectors extracted from point cloud features. Then, error-covariance expressions are derived. The attitude and position solutions obtained via the derived optimization framework are proven to reach the bounds defined by the Cramér-Rao lower bound under the small-angle approximation of attitude errors. The measurement data for the simulation of this problem is provided through a series of vector observation scans, and a fully populated observation noise-covariance matrix is assumed as the weight in the cost function to cover the most general case of sensor uncertainty. Here, previous derivations are extended for the pose estimation problem to include more general cases of correlations in the errors than previous cases involving an isotropic noise assumption. The proposed solution is simulated in a Monte-Carlo framework with 10,000 samples to validate the error-covariance analysis.
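As a point of reference only, here is the classical ordinary least-squares pose estimate from point correspondences (the Kabsch/Procrustes solution), not the paper's total least squares formulation or its error-covariance derivation; the function name and the simulated scan are assumptions for illustration.

```python
import numpy as np

def pose_from_correspondences(P, Q):
    """Classical least-squares pose: rotation R and translation t minimizing sum ||Q_i - (R P_i + t)||^2."""
    p_bar, q_bar = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_bar).T @ (Q - q_bar)                  # 3x3 cross-covariance of the centered points
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # keep det(R) = +1
    R = Vt.T @ D @ U.T
    t = q_bar - R @ p_bar
    return R, t

# hypothetical landmark scan: rotate/translate known points and add small noise
rng = np.random.default_rng(0)
P = rng.standard_normal((100, 3))
angle = 0.4
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -1.0, 2.0])
Q = P @ R_true.T + t_true + 0.01 * rng.standard_normal((100, 3))
R_est, t_est = pose_from_correspondences(P, Q)
print(np.round(R_est, 3), np.round(t_est, 3))        # should be close to R_true and t_true
```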
This work provides a simplified proof of the statistical minimax optimality of (iterate averaged) stochastic gradient descent (SGD), for the special case of least squares. This result is obtained by analyzing SGD as a stochastic process and by sharply characterizing the stationary covariance matrix of this process. The finite rate optimality characterization captures the constant factors and addresses model mis-specification.
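A minimal sketch of constant-step SGD with iterate (Polyak-Ruppert) averaging on a least-squares problem, the setting the abstract analyzes; the function name, step size, and synthetic data are assumptions for illustration, not the paper's analysis.

```python
import numpy as np

def averaged_sgd_least_squares(A, b, step=0.01, n_epochs=20, seed=0):
    """Constant-step SGD on 0.5*(a_i^T w - b_i)^2 with a running average of the iterates."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    w = np.zeros(n)
    w_bar = np.zeros(n)
    count = 0
    for _ in range(n_epochs):
        for i in rng.permutation(m):
            grad = (A[i] @ w - b[i]) * A[i]           # stochastic gradient from one sample
            w -= step * grad
            count += 1
            w_bar += (w - w_bar) / count              # Polyak-Ruppert iterate average
    return w_bar

rng = np.random.default_rng(1)
A = rng.standard_normal((500, 5))
w_true = np.array([1.0, -0.5, 2.0, 0.0, 0.3])
b = A @ w_true + 0.1 * rng.standard_normal(500)
print(np.round(averaged_sgd_least_squares(A, b), 2))  # close to the least-squares solution
```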