
Reproducing kernel Hilbert spaces and variable metric algorithms in PDE constrained shape optimisation

Posted by Kevin Sturm
Publication date: 2016
Paper language: English





In this paper we investigate and compare different gradient algorithms designed for the domain expression of the shape derivative. Our main focus is to examine the usefulness of reproducing kernel Hilbert spaces for PDE constrained shape optimisation problems. We show that radial kernels provide convenient formulas for the shape gradient that can be used efficiently in numerical simulations. The shape gradients associated with radial kernels depend on a so-called smoothing parameter that allows the smoothness of the shape to be adjusted during the optimisation process. Moreover, this smoothing parameter can be used to modify the movement of the shape. The theoretical findings are verified in a number of numerical experiments.
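To make the idea concrete, here is a minimal Python sketch of a shape gradient represented in an RKHS with a Gaussian radial kernel: the gradient is obtained by smoothing a discretized shape-derivative density with the kernel. The discretization, function names, and parameter values are illustrative assumptions, not the authors' implementation; sigma plays the role of the smoothing parameter described in the abstract.

```python
import numpy as np

def gaussian_kernel(x, y, sigma):
    """Radial (Gaussian) kernel; sigma is the smoothing parameter."""
    d2 = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def rkhs_shape_gradient(nodes, density, weights, sigma):
    """Shape gradient as the RKHS representative of the domain
    expression dJ(Omega)[W] ~ sum_j density_j . W(x_j) w_j, i.e.
    V(x_i) = sum_j k_sigma(x_i, x_j) density_j w_j."""
    K = gaussian_kernel(nodes, nodes, sigma)
    return K @ (weights[:, None] * density)

# Toy usage: 100 mesh nodes in 2D with a vector-valued derivative density.
nodes = np.random.rand(100, 2)
density = np.random.randn(100, 2)   # discretized shape-derivative density
weights = np.full(100, 1.0 / 100)   # quadrature weights
V = rkhs_shape_gradient(nodes, density, weights, sigma=0.1)
```

In this sketch, increasing sigma yields a smoother, more regularized descent direction, which is the adjustment mechanism the abstract refers to.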




Read also

Motivated by the success of reinforcement learning (RL) for discrete-time tasks such as AlphaGo and Atari games, there has been a recent surge of interest in using RL for continuous-time control of physical systems (cf. many challenging tasks in OpenAI Gym and the DeepMind Control Suite). Since discretization of time is susceptible to error, it is methodologically more desirable to handle the system dynamics directly in continuous time. However, very few techniques exist for continuous-time RL, and they lack flexibility in value function approximation. In this paper, we propose a novel framework for model-based continuous-time value function approximation in reproducing kernel Hilbert spaces. The resulting framework is so flexible that it can accommodate any kind of kernel-based approach, such as Gaussian processes and kernel adaptive filters, and it allows us to handle uncertainties and nonstationarity without prior knowledge about the environment or what basis functions to employ. We demonstrate the validity of the presented framework through experiments.
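As a hedged illustration of kernel-based value function approximation in an RKHS (a generic sketch, not the paper's model-based continuous-time framework), the snippet below fits a value function by kernel ridge regression over sampled states. The targets, kernel, and parameters are placeholders.

```python
import numpy as np

def rbf(X, Y, length_scale=1.0):
    """Gaussian (RBF) kernel matrix between rows of X and Y."""
    d2 = np.sum((X[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2.0 * length_scale ** 2))

def fit_value_function(states, targets, lam=1e-3, length_scale=1.0):
    """Kernel ridge regression: V(s) = sum_i alpha_i k(s, s_i),
    with alpha = (K + lam I)^{-1} targets."""
    K = rbf(states, states, length_scale)
    alpha = np.linalg.solve(K + lam * np.eye(len(states)), targets)
    return lambda s: rbf(s, states, length_scale) @ alpha

# Toy usage: 1D states with stand-in Monte Carlo value targets.
states = np.linspace(-1, 1, 50)[:, None]
targets = np.cos(np.pi * states[:, 0])
V = fit_value_function(states, targets, lam=1e-2, length_scale=0.2)
print(V(np.array([[0.0], [0.5]])))
```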
We give two new global and algorithmic constructions of the reproducing kernel Hilbert space associated to a positive definite kernel. We further present a general positive definite kernel setting using bilinear forms, and we provide new examples. Our results cover the case of measurable positive definite kernels, and we give applications to both stochastic analysis and metric geometry, providing a number of examples.
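For orientation, the classical Moore-Aronszajn construction (the standard one, not the paper's new constructions) builds the RKHS from finite spans of kernel sections k(., x_i). A minimal sketch, assuming a Gaussian kernel on the real line as the example:

```python
import numpy as np

def k(x, y):
    """An example positive definite kernel (Gaussian) on R."""
    return np.exp(-0.5 * (x[:, None] - y[None, :]) ** 2)

def span_inner(a, xs, b, ys):
    """Inner product on the pre-Hilbert span of kernel sections:
    < sum_i a_i k(., x_i), sum_j b_j k(., y_j) > = a^T K(xs, ys) b.
    The RKHS is the completion of this span."""
    return a @ k(xs, ys) @ b

xs = np.array([0.0, 1.0]); a = np.array([1.0, -0.5])
ys = np.array([0.5]);      b = np.array([2.0])
print(span_inner(a, xs, b, ys))
```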
Sneh Lata, Vern I. Paulsen (2010)
We prove two new equivalences of the Feichtinger conjecture that involve reproducing kernel Hilbert spaces. We prove that if, for every Hilbert space contractively contained in the Hardy space, each Bessel sequence of normalized kernel functions can be partitioned into finitely many Riesz basic sequences, then a general bounded Bessel sequence in an arbitrary Hilbert space can be partitioned into finitely many Riesz basic sequences. In addition, we examine some of these spaces and prove that for these spaces bounded Bessel sequences of normalized kernel functions are finite unions of Riesz basic sequences.
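For reference, the two notions the abstract relies on are standard (these definitions are supplied here, not quoted from the paper):

```latex
% Bessel sequence: (f_n) in a Hilbert space H is Bessel if there is B > 0
% such that, for all f in H,
\sum_n |\langle f, f_n \rangle|^2 \le B \, \|f\|^2 .
% Riesz basic sequence: a Riesz basis for its closed linear span, i.e.
% there are constants A, B > 0 such that, for all finitely supported (c_n),
A \sum_n |c_n|^2 \le \Big\| \sum_n c_n f_n \Big\|^2 \le B \sum_n |c_n|^2 .
```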
Let $G$ be a locally compact abelian group with a Haar measure, and let $Y$ be a measure space. Suppose that $H$ is a reproducing kernel Hilbert space of functions on $G \times Y$, such that $H$ is naturally embedded into $L^2(G \times Y)$ and is invariant under the translations associated with the elements of $G$. Under some additional technical assumptions, we study the W*-algebra $\mathcal{V}$ of translation-invariant bounded linear operators acting on $H$. First, we decompose $\mathcal{V}$ into the direct integral of the W*-algebras of bounded operators acting on the reproducing kernel Hilbert spaces $\widehat{H}_\xi$, $\xi \in \widehat{G}$, generated by the Fourier transform of the reproducing kernel. Second, we give a constructive criterion for the commutativity of $\mathcal{V}$. Third, in the commutative case, we construct a unitary operator that simultaneously diagonalizes all operators belonging to $\mathcal{V}$, i.e., converts them into multiplication operators. Our scheme generalizes many examples previously studied by Nikolai Vasilevski and other authors.
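Schematically, the first result can be restated as a direct-integral decomposition (a restatement of the abstract's wording; the choice of measure on the dual group $\widehat{G}$ is an assumption of this sketch):

```latex
\mathcal{V} \;\cong\; \int_{\widehat{G}}^{\oplus} \mathcal{B}\big(\widehat{H}_\xi\big)\, d\xi
```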
The Gaussian kernel plays a central role in machine learning, uncertainty quantification and scattered data approximation, but has received relatively little attention from a numerical analysis standpoint. The basic problem of finding an algorithm for efficient numerical integration of functions reproduced by Gaussian kernels has not been fully solved. In this article we construct two classes of algorithms that use $N$ evaluations to integrate $d$-variate functions reproduced by Gaussian kernels and prove the exponential or super-algebraic decay of their worst-case errors. In contrast to earlier work, no constraints are placed on the length-scale parameter of the Gaussian kernel. The first class of algorithms is obtained via an appropriate scaling of the classical Gauss-Hermite rules. For these algorithms we derive lower and upper bounds on the worst-case error of the forms $\exp(-c_1 N^{1/d}) N^{1/(4d)}$ and $\exp(-c_2 N^{1/d}) N^{-1/(4d)}$, respectively, for positive constants $c_1 > c_2$. The second class of algorithms we construct is more flexible and uses worst-case optimal weights for points that may be taken as a nested sequence. For these algorithms we derive upper bounds of the form $\exp(-c_3 N^{1/(2d)})$ for a positive constant $c_3$.
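The first class of algorithms rescales classical Gauss-Hermite rules. A minimal, hedged sketch of such a rescaling in Python (a standard change of variables for integration against a Gaussian density; the paper's specific scaling is not reproduced here):

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

def scaled_gauss_hermite(f, n, scale=1.0):
    """Integrate f against the N(0, scale^2) density using an n-point
    Gauss-Hermite rule rescaled from the weight exp(-x^2)."""
    x, w = hermgauss(n)               # nodes/weights for weight exp(-x^2)
    nodes = np.sqrt(2.0) * scale * x  # change of variables t = sqrt(2)*s*x
    weights = w / np.sqrt(np.pi)
    return np.sum(weights * f(nodes))

# Toy usage: E[exp(X)] for X ~ N(0,1) equals exp(1/2).
print(scaled_gauss_hermite(np.exp, n=20), np.exp(0.5))
```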