
The Cost of Deterministic, Adaptive, Automatic Algorithms: Cones, Not Balls

Published by Fred J. Hickernell
Publication date: 2013
Paper language: English

Automatic numerical algorithms attempt to provide approximate solutions that differ from exact solutions by no more than a user-specified error tolerance. The computational cost is often determined \emph{adaptively} by the algorithm based on the function values sampled. While adaptive, automatic algorithms are widely used in practice, most lack \emph{guarantees}, i.e., conditions on input functions that ensure that the error tolerance is met. This article establishes a framework for guaranteed, adaptive, automatic algorithms. Sufficient conditions for success and two-sided bounds on the computational cost are provided in Theorems \ref{TwoStageDetermThm} and \ref{MultiStageThm}. Lower bounds on the complexity of the problem are given in Theorem \ref{complowbd}, and conditions under which the proposed algorithms have optimal order are given in Corollary \ref{optimcor}. These general theorems are illustrated for univariate numerical integration and function recovery via adaptive algorithms based on linear splines. The key to these adaptive algorithms is performing the analysis for \emph{cones} of input functions rather than balls. Cones provide a setting where adaption may be beneficial.
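To make the flavor of such an algorithm concrete, here is a minimal sketch, not the algorithm analyzed in the paper: an adaptive, automatic composite trapezoidal rule (a linear-spline integrator) that keeps doubling its sample size until a data-driven error bound meets the user's tolerance. The inflation factor `cone_const` and the divided-difference surrogate for the unknown $\|f''\|_\infty$ are illustrative assumptions standing in for the paper's cone condition.

```python
import numpy as np

def adaptive_trapezoid(f, a, b, tol, cone_const=2.0, n_init=8, n_max=2**20):
    """Illustrative adaptive, automatic integrator (not the paper's algorithm).

    Doubles the number of linear-spline nodes until a data-driven error
    bound -- a sampled estimate of ||f''||_inf inflated by `cone_const`,
    standing in for a cone condition -- falls below the tolerance.
    """
    n = n_init
    while True:
        x = np.linspace(a, b, n + 1)
        y = f(x)
        h = (b - a) / n
        # Composite trapezoidal rule on the linear-spline nodes.
        integral = h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])
        # Second divided differences estimate ||f''||_inf from the data;
        # the cone constant guards against underestimation on a finite sample.
        fpp_est = np.abs(np.diff(y, 2)).max() / h**2
        err_bound = cone_const * (b - a) * h**2 * fpp_est / 12.0
        if err_bound <= tol or n >= n_max:
            return integral, err_bound, n + 1  # value, bound, nodes at final stage
        n *= 2

# Example: integrate exp(x) on [0, 1] to within 1e-6.
val, bound, cost = adaptive_trapezoid(np.exp, 0.0, 1.0, 1e-6)
print(val, np.e - 1.0, bound, cost)
```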


Read also

We present an analysis of the additive average Schwarz preconditioner with two newly proposed adaptively enriched coarse spaces, which was presented at the 23rd International Conference on Domain Decomposition Methods in Korea, for solving second order elliptic problems with highly varying and discontinuous coefficients. It is shown that the condition number of the preconditioned system is bounded independently of the variations and the jumps in the coefficient, and depends linearly on the mesh parameter ratio H/h, that is, the ratio between the subdomain size and the mesh size, thereby retaining the same optimality and scalability of the original additive average Schwarz preconditioner.
A systematic mathematical framework for the study of numerical algorithms would allow comparisons, facilitate conjugacy arguments, as well as enable the discovery of improved, accelerated, data-driven algorithms. Over the course of the last century, the Koopman operator has provided a mathematical framework for the study of dynamical systems, which facilitates conjugacy arguments and can provide efficient reduced descriptions. More recently, numerical approximations of the operator have enabled the analysis of a large number of deterministic and stochastic dynamical systems in a completely data-driven, essentially equation-free pipeline. Discrete or continuous time numerical algorithms (integrators, nonlinear equation solvers, optimization algorithms) are themselves dynamical systems. In this paper, we use this insight to leverage the Koopman operator framework in the data-driven study of such algorithms and discuss benefits for analysis and acceleration of numerical computation. For algorithms acting on high-dimensional spaces by quickly contracting them towards low-dimensional manifolds, we demonstrate how basis functions adapted to the data help to construct efficient reduced representations of the operator. Our illustrative examples include the gradient descent and Nesterov optimization algorithms, as well as the Newton-Raphson algorithm.
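As a rough, self-contained illustration of that insight (not the paper's data-adapted bases): gradient descent on a quadratic objective is itself a linear dynamical system, so a plain DMD-style least-squares fit on the collected iterates should recover its Koopman spectrum. The Hessian, step size, and iteration count below are made-up toy parameters.

```python
import numpy as np

# Gradient descent on f(x) = 0.5 * x^T H x is a linear dynamical system:
# x_{k+1} = (I - eta*H) x_k.  A DMD-style least-squares fit on the iterates
# approximates the Koopman operator restricted to linear observables, and
# its eigenvalues should match the exact multipliers 1 - eta*lambda_i(H).
rng = np.random.default_rng(0)
H = np.diag([1.0, 5.0, 10.0])            # toy Hessian (illustrative)
eta = 0.05                               # step size (illustrative)
x = rng.standard_normal(3)

snapshots = [x.copy()]
for _ in range(30):                      # collect gradient-descent iterates
    x = x - eta * (H @ x)
    snapshots.append(x.copy())

X = np.column_stack(snapshots[:-1])      # states at step k
Y = np.column_stack(snapshots[1:])       # states at step k+1
A = Y @ np.linalg.pinv(X)                # least-squares Koopman/DMD estimate

print(np.sort(np.linalg.eigvals(A).real))   # fitted multipliers
print(np.sort(1.0 - eta * np.diag(H)))      # exact multipliers 1 - eta*lambda
```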
The fragile complexity of a comparison-based algorithm is $f(n)$ if each input element participates in $O(f(n))$ comparisons. In this paper, we explore the fragile complexity of algorithms adaptive to various restrictions on the input, i.e., algorithms with a fragile complexity parameterized by a quantity other than the input size $n$. We show that searching for the predecessor in a sorted array has fragile complexity $\Theta(\log k)$, where $k$ is the rank of the query element, both in a randomized and a deterministic setting. For predecessor searches, we also show how to optimally reduce the amortized fragile complexity of the elements in the array. We also prove the following results: Selecting the $k$-th smallest element has expected fragile complexity $O(\log \log k)$ for the element selected. Deterministically finding the minimum element has fragile complexity $\Theta(\log(Inv))$ and $\Theta(\log(Runs))$, where $Inv$ is the number of
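For intuition on the $O(\log k)$ upper bound (a sketch in the spirit of the result, not the paper's construction): a doubling search followed by a binary search inside the located window answers a predecessor query with $O(\log k)$ comparisons involving the query element, where $k$ is its rank.

```python
import bisect

def predecessor(sorted_arr, q):
    """Index of the largest element <= q, or -1 if none.

    Doubling search finds a window around the answer using O(log k)
    comparisons involving q, then binary search finishes inside it.
    """
    if not sorted_arr or sorted_arr[0] > q:
        return -1
    hi = 1
    while hi < len(sorted_arr) and sorted_arr[hi] <= q:
        hi *= 2                          # double until we overshoot the rank k
    lo = hi // 2                         # last probed index known to be <= q
    hi = min(hi, len(sorted_arr) - 1)
    # Binary search for the predecessor within [lo, hi].
    return bisect.bisect_right(sorted_arr, q, lo, hi + 1) - 1

arr = list(range(0, 100, 3))             # 0, 3, 6, ..., 99
print(predecessor(arr, 10))              # -> index 3 (value 9)
```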
Kristin Kirchner (2016)
Numerical methods for stochastic partial differential equations typically estimate moments of the solution from sampled paths. Instead, we shall directly target the deterministic equations satisfied by the first and second moments, as well as the covariance. In the first part, we focus on stochastic ordinary differential equations. For the canonical examples with additive noise (Ornstein-Uhlenbeck process) or multiplicative noise (geometric Brownian motion) we derive these deterministic equations in variational form and discuss their well-posedness in detail. Notably, the second moment equation in the multiplicative case is naturally posed on projective-injective tensor product spaces as trial-test spaces. We construct Petrov-Galerkin discretizations based on tensor product piecewise polynomials and analyze their stability and convergence in these natural norms. In the second part, we proceed with parabolic stochastic partial differential equations with affine multiplicative noise. We prove well-posedness of the deterministic variational problem for the second moment, improving an earlier result. We then propose conforming space-time Petrov-Galerkin discretizations, which we show to be stable and quasi-optimal. In both parts, the outcomes are illustrated by numerical examples.
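A toy version of the first part's idea for the additive-noise example, stripped of the variational and Petrov-Galerkin machinery: for the Ornstein-Uhlenbeck process $dX_t = -\lambda X_t\,dt + \sigma\,dW_t$, the first and second moments satisfy the deterministic ODEs $m' = -\lambda m$ and $s' = -2\lambda s + \sigma^2$, which can be integrated directly instead of being estimated from sampled paths. The parameters below are illustrative.

```python
import numpy as np

# Ornstein-Uhlenbeck: dX = -lam*X dt + sigma dW, X(0) = x0 (additive noise).
# Deterministic moment equations (illustrative toy case):
#   m'(t) = -lam*m(t),              m(0) = x0       (first moment)
#   s'(t) = -2*lam*s(t) + sigma^2,  s(0) = x0^2     (second moment)
lam, sigma, x0, T, n = 1.0, 0.5, 1.0, 2.0, 2000
dt = T / n

# Solve the moment ODEs with forward Euler.
m, s = x0, x0**2
for _ in range(n):
    m += dt * (-lam * m)
    s += dt * (-2.0 * lam * s + sigma**2)

# Monte Carlo reference: Euler-Maruyama over sampled paths.
rng = np.random.default_rng(1)
paths = 20000
X = np.full(paths, x0)
for _ in range(n):
    X += -lam * X * dt + sigma * np.sqrt(dt) * rng.standard_normal(paths)

print(m, X.mean())        # first moment: ODE vs sampled paths
print(s, (X**2).mean())   # second moment: ODE vs sampled paths
```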
This paper provides an algorithmic framework for obtaining fast distributed algorithms for a highly-dynamic setting, in which *arbitrarily many* edge changes may occur in each round. Our algorithm significantly improves upon prior work in its combination of (1) having an $O(1)$ amortized time complexity, (2) using only $O(\log n)$-bit messages, (3) not posing any restrictions on the dynamic behavior of the environment, (4) being deterministic, (5) having strong guarantees for intermediate solutions, and (6) being applicable for a wide family of tasks. The tasks for which we deduce such an algorithm are maximal matching, $(degree+1)$-coloring, 2-approximation for minimum weight vertex cover, and maximal independent set (which is the most subtle case). For some of these tasks, node insertions can also be among the allowed topology changes, and for some of them also abrupt node deletions.