
Applying Lepskij-Balancing in Practice

Posted by: Frank Bauer
Publication date: 2010
Paper language: English
Author: Frank Bauer





In a stochastic noise setting, the Lepskij balancing principle for choosing the regularization parameter in the regularization of inverse problems depends on a parameter $\tau$ which, in the currently known proofs, depends on the unknown noise level of the input data. In practice, however, this parameter seems to be obsolete. We present an explanation for this behavior using a stochastic model for noise and initial data. Furthermore, we prove that a small modification of the algorithm also improves the performance of the method, in both speed and accuracy.
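To make the role of $\tau$ concrete, the following is a minimal sketch of the standard (unmodified) balancing principle for Tikhonov regularization, not the paper's modified algorithm. It assumes the usual noise-propagation bound $\psi(\alpha) = \delta/(2\sqrt{\alpha})$ for Tikhonov regularization and the common selection rule with threshold $4\tau\psi$; the function names and the test problem are illustrative choices, not taken from the paper.

```python
import numpy as np

def tikhonov(A, y, alpha):
    """Tikhonov-regularized solution x_alpha = (A^T A + alpha I)^{-1} A^T y."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

def lepskij_choice(A, y, alphas, delta, tau=1.0):
    """Balancing principle (standard form); alphas sorted in increasing order.

    Uses the Tikhonov noise-propagation bound psi(alpha) = delta / (2*sqrt(alpha))
    and returns the largest alpha whose solution stays within 4*tau*psi(alpha_j)
    of every solution computed with a smaller parameter alpha_j.
    """
    xs = [tikhonov(A, y, a) for a in alphas]
    psi = [delta / (2.0 * np.sqrt(a)) for a in alphas]
    best = 0
    for i in range(len(alphas)):
        # alpha_i is admissible if it balances against all finer candidates j < i
        if all(np.linalg.norm(xs[i] - xs[j]) <= 4.0 * tau * psi[j]
               for j in range(i)):
            best = i
    return alphas[best], xs[best]
```

Note that $\tau$ enters only through the acceptance threshold `4.0 * tau * psi[j]`; the paper's observation is that in a stochastic noise setting this tuning knob appears unnecessary in practice.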


Read also

44 - Frank Bauer 2010
Choosing the regularization parameter for inverse problems is of major importance for the performance of the regularization method. We introduce a fast version of the Lepskij balancing principle and show that it is a valid parameter choice method for Tikhonov regularization in both a deterministic and a stochastic noise regime, as long as minor conditions on the solution are fulfilled.
We present an implementation of the trimmed serendipity finite element family, using the open source finite element package Firedrake. The new elements can be used seamlessly within the software suite for problems requiring $H^1$-, $H(\mathrm{curl})$-, or $H(\mathrm{div})$-conforming elements on meshes of squares or cubes. To test how well trimmed serendipity elements perform in comparison to traditional tensor product elements, we perform a sequence of numerical experiments including the primal Poisson, mixed Poisson, and Maxwell cavity eigenvalue problems. Overall, we find that the trimmed serendipity elements converge, as expected, at the same rate as the respective tensor product elements while being able to offer significant savings in the time or memory required to solve certain problems.
We study the computational complexity of (deterministic or randomized) algorithms based on point samples for approximating or integrating functions that can be well approximated by neural networks. Such algorithms (most prominently stochastic gradient descent and its variants) are used extensively in the field of deep learning. One of the most important problems in this field concerns the question of whether it is possible to realize theoretically provable neural network approximation rates by such algorithms. We answer this question in the negative by proving hardness results for the problems of approximation and integration on a novel class of neural network approximation spaces. In particular, our results confirm a conjectured and empirically observed theory-to-practice gap in deep learning. We complement our hardness results by showing that approximation rates of a comparable order of convergence are (at least theoretically) achievable.
We present a scheme to accurately calculate the persistence probabilities on sequences of $n$ heights above a level $h$ from the measured $n+2$ points of the height-height correlation function of a fluctuating interface. The calculated persistence probabilities compare very well with the measured persistence probabilities of a fluctuating phase-separated colloidal interface for the whole experimental range.
85 - James Webber 2015
We lay the foundations for a new fast method to reconstruct the electron density in x-ray scanning applications using measurements in the dark field. This approach is applied to a type of machine configuration with fixed energy sensitive (or resolving) detectors, and where the X-ray source is polychromatic. We consider the case where the measurements in the dark field are dominated by the Compton scattering process. This leads us to a 2D inverse problem where we aim to reconstruct an electron density slice from its integrals over discs whose boundaries intersect the given source point. We show that a unique solution exists for smooth densities compactly supported on an annulus centred at the source point. Using Sobolev space estimates, we determine a measure for the ill-posedness of our problem based on the criterion given by Natterer (The Mathematics of Computerized Tomography, SIAM, 2001). In addition, with a combination of our method and the more common attenuation coefficient reconstruction, we show under certain assumptions that the atomic number of the target is uniquely determined. We test our method on simulated data sets with varying levels of added pseudo-random noise.