
Lasso hyperinterpolation over general regions

Published by: Hao-Ning Wu
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





This paper develops a fully discrete soft thresholding polynomial approximation over a general region, named Lasso hyperinterpolation. This approximation is an $\ell_1$-regularized discrete least squares approximation under the same conditions as hyperinterpolation. Lasso hyperinterpolation also uses a high-order quadrature rule to approximate the Fourier coefficients of a given continuous function with respect to some orthonormal basis, and it then obtains its own coefficients by applying a soft thresholding operator to all the approximated Fourier coefficients. Lasso hyperinterpolation is not a discrete orthogonal projection, but it is an efficient tool for dealing with noisy data. We analyze Lasso hyperinterpolation theoretically for continuous and smooth functions. The principal results are twofold: the norm of the Lasso hyperinterpolation operator is bounded independently of the polynomial degree, a property inherited from hyperinterpolation; and the $L_2$ error bound of Lasso hyperinterpolation is smaller than that of hyperinterpolation when the level of noise becomes large, which improves the robustness of hyperinterpolation. Explicit constructions and corresponding numerical examples of Lasso hyperinterpolation over intervals, discs, spheres, and cubes are given.
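As a concrete illustration of the construction described above, the following is a minimal sketch (not the paper's implementation) on the interval [-1, 1]: the Fourier coefficients with respect to the orthonormal Legendre basis are approximated by a Gauss-Legendre quadrature rule, and a soft-thresholding operator with parameter `lam` is applied to each of them. The function names, the choice of basis and quadrature, and the example parameters are illustrative assumptions.

```python
import numpy as np
from numpy.polynomial import legendre

def lasso_hyperinterpolation(f, n, lam, m=None):
    """Degree-n Lasso hyperinterpolant of f on [-1, 1] (illustrative sketch).

    Uses an m-point Gauss-Legendre rule (m = n + 1 suffices for exactness on
    degree-2n polynomials) with the orthonormal Legendre basis; lam is the
    soft-thresholding parameter.
    """
    m = m if m is not None else n + 1
    x, w = legendre.leggauss(m)          # quadrature nodes and weights
    fx = f(x)                            # (possibly noisy) samples of f
    coeffs = []
    for k in range(n + 1):
        # orthonormal Legendre polynomial: sqrt((2k+1)/2) * P_k
        phi_k = np.sqrt((2 * k + 1) / 2) * legendre.Legendre.basis(k)(x)
        c_k = np.sum(w * fx * phi_k)     # approximate Fourier coefficient
        # soft thresholding: sign(c) * max(|c| - lam, 0)
        coeffs.append(np.sign(c_k) * max(abs(c_k) - lam, 0.0))

    def p(t):
        return sum(c * np.sqrt((2 * k + 1) / 2) * legendre.Legendre.basis(k)(t)
                   for k, c in enumerate(coeffs))
    return p

# Noisy samples of a smooth function; a larger lam filters more of the noise.
rng = np.random.default_rng(0)
f_noisy = lambda t: np.exp(t) + 0.1 * rng.standard_normal(np.shape(t))
p10 = lasso_hyperinterpolation(f_noisy, n=10, lam=0.05)
```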




Read also

Learning mappings of data on manifolds is an important topic in contemporary machine learning, with applications in astrophysics, geophysics, statistical physics, medical diagnosis, biochemistry, and 3D object analysis. This paper studies the problem of learning real-valued functions on manifolds through filtered hyperinterpolation of input-output data pairs, where the inputs may be sampled deterministically or at random and the outputs may be clean or noisy. Motivated by the problem of handling large data sets, it presents a parallel data processing approach which distributes the data-fitting task among multiple servers and synthesizes the fitted sub-models into a global estimator. We prove quantitative relations between the approximation quality of the learned function over the entire manifold, the type of target function, the number of servers, and the number and type of available samples. We obtain approximation rates of convergence for the distributed and non-distributed approaches. For the non-distributed case, the approximation order is optimal.
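The distribute-and-synthesize idea can be pictured with a schematic one-dimensional sketch: the data are split across several "servers", each fits a local least-squares polynomial, and the global estimator averages the local fits. This is only a caricature of the strategy, not the paper's filtered hyperinterpolation on a manifold; all names and parameters below are illustrative.

```python
import numpy as np

def distributed_fit(x, y, n_servers, degree):
    """Schematic distribute-and-average estimator.

    Splits the (x, y) sample into n_servers chunks, fits a least-squares
    polynomial of the given degree on each chunk, and returns the average
    of the local fits as the global estimator.
    """
    local_fits = [
        np.polynomial.Polynomial.fit(xc, yc, degree, domain=[-1, 1])
        for xc, yc in zip(np.array_split(x, n_servers),
                          np.array_split(y, n_servers))
    ]
    return lambda t: np.mean([p(t) for p in local_fits], axis=0)

# Noisy samples of a smooth target on [-1, 1], distributed over 8 "servers".
rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 2000)      # unsorted, so each chunk covers the domain
y = np.sin(np.pi * x) + 0.2 * rng.standard_normal(x.size)
estimator = distributed_fit(x, y, n_servers=8, degree=12)
```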
Adrian Sandu, 2020
This paper studies fixed-step convergence of implicit-explicit (IMEX) general linear methods. We focus on a subclass of schemes that is internally consistent, has high stage order, and has favorable stability properties. Convergence analyses are given for classical problems, index-1 differential-algebraic equations, and singular perturbation problems. For all these problems, IMEX GLMs from the class of interest converge with the full theoretical orders under general assumptions. The convergence results require the time steps to be sufficiently small, with upper bounds that are independent of the stiffness of the problem.
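For readers unfamiliar with the implicit-explicit splitting itself, the sketch below shows the simplest member of the IMEX family, an IMEX Euler step in which a stiff linear part is treated implicitly and a non-stiff part explicitly. It is far simpler than the high-stage-order general linear methods analyzed in the paper and is included only to fix ideas; the test problem and names are illustrative assumptions.

```python
import numpy as np

def imex_euler_step(y, h, A, g):
    """One implicit-explicit (IMEX) Euler step for y' = A @ y + g(y).

    The stiff linear part A y is treated implicitly and the non-stiff part
    g(y) explicitly, giving the linear system (I - h A) y_new = y + h g(y).
    This is only the simplest IMEX scheme, not one of the high-stage-order
    general linear methods analyzed in the paper.
    """
    return np.linalg.solve(np.eye(y.size) - h * A, y + h * g(y))

# Illustrative stiff test problem: fast linear decay plus a mild nonlinearity.
A = np.array([[-1000.0]])
g = lambda y: np.cos(y)
y, h = np.array([1.0]), 0.01
for _ in range(100):
    y = imex_euler_step(y, h, A, g)
```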
We consider the problem of reconstructing an unknown function $u\in L^2(D,\mu)$ from its evaluations at given sampling points $x^1,\dots,x^m\in D$, where $D\subset \mathbb{R}^d$ is a general domain and $\mu$ a probability measure. The approximation is picked from a linear space $V_n$ of interest where $n=\dim(V_n)$. Recent results have revealed that certain weighted least-squares methods achieve near best approximation with a sampling budget $m$ that is proportional to $n$, up to a logarithmic factor $\ln(2n/\varepsilon)$, where $\varepsilon>0$ is a probability of failure. The sampling points should be picked at random according to a well-chosen probability measure $\sigma$ whose density is given by the inverse Christoffel function that depends both on $V_n$ and $\mu$. While this approach is greatly facilitated when $D$ and $\mu$ have tensor product structure, it becomes problematic for domains $D$ with arbitrary geometry since the optimal measure depends on an orthonormal basis of $V_n$ in $L^2(D,\mu)$ which is not explicitly given, even for simple polynomial spaces. Therefore sampling according to this measure is not practically feasible. In this paper, we discuss practical sampling strategies, which amount to using a perturbed measure $\widetilde{\sigma}$ that can be computed in an offline stage, not involving the measurement of $u$. We show that near best approximation is attained by the resulting weighted least-squares method at near-optimal sampling budget, and we discuss multilevel approaches that preserve optimality of the cumulated sampling budget when the spaces $V_n$ are iteratively enriched. These strategies rely on the knowledge of a-priori upper bounds on the inverse Christoffel function. We establish such bounds for spaces $V_n$ of multivariate algebraic polynomials, and for general domains $D$.
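The sampling strategy can be made concrete in a setting where the optimal measure is explicitly computable. The sketch below works on [-1, 1] with the uniform probability measure and the orthonormal Legendre basis: it evaluates the inverse Christoffel function, draws samples from the induced optimal density by rejection sampling, and solves the weighted least-squares problem. This is an illustrative toy case, not the perturbed-measure strategy for general domains developed in the paper; all function names are assumptions.

```python
import numpy as np
from numpy.polynomial import legendre

def christoffel_weighted_ls(u, n, m, rng):
    """Weighted least-squares fit of u in the span of the first n orthonormal
    Legendre polynomials on [-1, 1] (uniform probability measure mu), with m
    points drawn from the optimal density d(sigma) = (k_n / n) d(mu), where
    k_n(x) = sum_j phi_j(x)^2 is the inverse Christoffel function.
    """
    def phi(x):
        # basis orthonormal w.r.t. the uniform probability measure dx/2 on [-1, 1]
        return np.stack([np.sqrt(2 * j + 1) * legendre.Legendre.basis(j)(x)
                         for j in range(n)], axis=-1)

    k_n = lambda x: np.sum(phi(x) ** 2, axis=-1)

    # rejection sampling from sigma using the uniform proposal on [-1, 1]
    envelope = np.max(k_n(np.linspace(-1, 1, 2001))) / n
    xs = []
    while len(xs) < m:
        cand = rng.uniform(-1, 1)
        if rng.uniform(0, envelope) < k_n(np.array([cand]))[0] / n:
            xs.append(cand)
    x = np.array(xs)

    w = n / k_n(x)                                   # optimal weights w = n / k_n
    sw = np.sqrt(w)
    c, *_ = np.linalg.lstsq(sw[:, None] * phi(x), sw * u(x), rcond=None)
    return lambda t: phi(np.atleast_1d(t)) @ c

# Fit a non-polynomial function with n = 10 basis functions from m = 50 samples.
rng = np.random.default_rng(2)
approx = christoffel_weighted_ls(np.abs, n=10, m=50, rng=rng)
```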
As the use of spectral/$hp$ element methods, and high-order finite element methods in general, continues to spread, community efforts to create efficient, optimized algorithms associated with fundamental high-order operations have grown. Core tasks such as solution expansion evaluation at quadrature points, stiffness and mass matrix generation, and matrix assembly have received tremendous attention. With the expansion of the types of problems to which high-order methods are applied, and correspondingly the growth in types of numerical tasks accomplished through high-order methods, the number and types of these core operations broaden. This work focuses on solution expansion evaluation at arbitrary points within an element. This operation is core to many postprocessing applications such as evaluation of streamlines and pathlines, as well as to field projection techniques such as mortaring. We expand barycentric interpolation techniques developed on an interval to 2D (triangles and quadrilaterals) and 3D (tetrahedra, prisms, pyramids, and hexahedra) spectral/$hp$ element methods. We provide efficient algorithms for their implementations, and demonstrate their effectiveness using the spectral/$hp$ element library Nektar++.
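The one-dimensional barycentric formula that the paper extends to triangles, quadrilaterals, tetrahedra, prisms, pyramids, and hexahedra can be sketched as follows. The choice of Chebyshev-Gauss-Lobatto nodes and the function names are illustrative assumptions; this is not code from Nektar++.

```python
import numpy as np

def barycentric_weights(nodes):
    """Barycentric weights w_j = 1 / prod_{k != j} (x_j - x_k)."""
    diff = nodes[:, None] - nodes[None, :]
    np.fill_diagonal(diff, 1.0)
    return 1.0 / np.prod(diff, axis=1)

def barycentric_eval(nodes, values, w, x):
    """Evaluate the interpolating polynomial at a point x using the second
    barycentric formula; an exact hit on a node is returned directly."""
    d = x - nodes
    hit = np.isclose(d, 0.0)
    if hit.any():
        return values[hit][0]
    t = w / d
    return np.dot(t, values) / np.sum(t)

# Degree-8 interpolation of cos at Chebyshev-Gauss-Lobatto nodes on [-1, 1].
nodes = np.cos(np.pi * np.arange(9) / 8)
vals = np.cos(nodes)
w = barycentric_weights(nodes)
print(barycentric_eval(nodes, vals, w, 0.3))   # close to cos(0.3)
```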
We propose a general theory of estimating interpolation error for smooth functions in two and three dimensions. In our theory, the interpolation error is bounded in terms of the diameter of a simplex and a geometric parameter. In the two-dimensional case, our geometric parameter is equivalent to the circumradius of a triangle. In the three-dimensional case, our geometric parameter also represents the flatness of a tetrahedron. Through the introduction of the geometric parameter, the newly obtained error estimates can be applied to cases that violate the maximum-angle condition.