Theorems and explicit examples are used to show how transformations between self-similar sets (in the general sense) may be continuous almost everywhere with respect to stationary measures on the sets, and may be used to carry well-known flows and spectral analysis over from familiar settings to new ones. The focus of this work is on a number of surprising applications, including (i) what we call fractal Fourier analysis, in which the graphs of the basis functions are Cantor sets that are discontinuous at a countable dense set of points yet have very good approximation properties; and (ii) Lebesgue measure-preserving flows on polygonal laminas whose wave-fronts are fractals. The key idea is to exploit fractal transformations to provide unitary transformations between Hilbert spaces defined on attractors of iterated function systems. Some of the examples relate to work of Oxtoby and Ulam concerning ergodic flows on regions bounded by polygons.
This paper continues to develop a fault-tolerant extension of the sparse grid combination technique recently proposed in [B. Harding and M. Hegland, ANZIAM J., 54 (CTAC2012), pp. C394-C411]. The approach is novel for two reasons: first, it provides several levels at which one can exploit parallelism, leading towards massively parallel implementations; and second, it provides algorithm-based fault tolerance so that solutions can still be recovered if failures occur during computation. We present a generalisation of the combination technique from which the fault-tolerant algorithm follows as a consequence. Using a model for the time between faults on each node of a high-performance computer, we provide bounds on the expected error for interpolation with this algorithm. Numerical experiments on the scalar advection PDE demonstrate that the algorithm is resilient to faults in a real application. We observe that the trade-off of recovery time against decreased accuracy of the solution is suitably small. A comparison with traditional checkpoint-restart methods applied to the combination technique shows that our approach is highly scalable with respect to the number of faults.
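The fault-tolerant generalisation of the paper is not reproduced here, but the classical 2-D combination formula it extends, c_n = sum_{i+j=n} u_{i,j} - sum_{i+j=n-1} u_{i,j}, can be sketched in a few lines for interpolation on [0,1]^2. The function names `bilinear` and `combination` are illustrative, not from the paper.

```python
def bilinear(f, i, j, x, y):
    """Bilinear interpolation of f on the (2**i + 1) x (2**j + 1) uniform
    grid over [0,1]^2 -- one component grid of the combination technique."""
    nx, ny = 2**i, 2**j
    ix, iy = min(int(x * nx), nx - 1), min(int(y * ny), ny - 1)
    tx, ty = x * nx - ix, y * ny - iy
    g = lambda a, b: f(a / nx, b / ny)  # sample f at a grid node
    return ((1 - tx) * (1 - ty) * g(ix, iy) + tx * (1 - ty) * g(ix + 1, iy)
            + (1 - tx) * ty * g(ix, iy + 1) + tx * ty * g(ix + 1, iy + 1))

def combination(f, n, x, y):
    """Classical 2-D combination formula: add the level-n component-grid
    interpolants and subtract the level-(n-1) ones."""
    plus = sum(bilinear(f, i, n - i, x, y) for i in range(n + 1))
    minus = sum(bilinear(f, i, n - 1 - i, x, y) for i in range(n))
    return plus - minus
```

Since the combination coefficients sum to one ((n+1) positive terms minus n negative ones), any function that bilinear interpolation reproduces exactly, such as f(x, y) = xy, is recovered exactly by the combined interpolant.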
Sparsity-promoting regularization is an important technique for signal reconstruction and several other ill-posed problems. Theoretical investigations are typically based on the assumption that the unknown solution has a sparse representation with respect to a fixed basis. We drop this sparsity assumption and provide error estimates for non-sparse solutions. After discussing a result in this direction published earlier by one of the authors and coauthors, we prove a similar error estimate under weaker assumptions. Two examples illustrate that this set of weaker assumptions indeed covers additional situations which appear in applications.
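One standard ingredient of sparsity-promoting (l1) regularization, though not specific to this paper's non-sparse error estimates, is the soft-thresholding operator, the closed-form proximal map of the l1 penalty; a minimal sketch (function names are illustrative):

```python
def soft_threshold(y, alpha):
    """Closed-form minimiser of 0.5*(x - y)**2 + alpha*|x| over x:
    shrink y towards zero by alpha, clipping at zero."""
    if y > alpha:
        return y - alpha
    if y < -alpha:
        return y + alpha
    return 0.0

def shrink(coeffs, alpha):
    """Componentwise shrinkage of a coefficient vector: entries of
    magnitude at most alpha are set exactly to zero, promoting sparsity."""
    return [soft_threshold(c, alpha) for c in coeffs]
```

Iterating this map together with a gradient step on the data-fit term gives the familiar iterative soft-thresholding scheme used in practice for such problems.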
Local iterated function systems are an important generalisation of the standard (global) iterated function systems (IFSs). For a particular class of mappings, their fixed points are the graphs of local fractal functions, and these functions themselves are known to be the fixed points of an associated Read–Bajraktarević operator. This paper establishes the existence and properties of local fractal functions and discusses how they are computed. In particular, it is shown that piecewise polynomials are a special case of local fractal functions. Finally, we develop a method to compute the components of a local IFS from data or (partial differential) equations.
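The fixed-point characterisation can be made concrete for the classical (global) affine case: iterating the Read–Bajraktarević operator (Tf)(L_i(x)) = s_i f(x) + q_i(x) on any starting function converges to the fractal interpolation function through given data, since T is a contraction when all |s_i| < 1. A minimal sketch on a uniform sample grid (the function name `fif` and the discretisation are illustrative, not from the paper):

```python
def fif(xs, ys, s, samples=513, iters=60):
    """Approximate the fractal interpolation function through (xs[i], ys[i])
    with vertical scaling factors s[i] (|s[i]| < 1) by iterating the
    Read-Bajraktarevic operator on a uniform grid over [xs[0], xs[-1]]."""
    a, b = xs[0], xs[-1]
    ts = [a + (b - a) * k / (samples - 1) for k in range(samples)]
    f = [0.0] * samples  # any starting function works: T is a contraction

    def interp(vals, x):
        # piecewise-linear evaluation of the current iterate at x
        t = (x - a) / (b - a) * (samples - 1)
        k = min(int(t), samples - 2)
        return vals[k] + (t - k) * (vals[k + 1] - vals[k])

    for _ in range(iters):
        g = [0.0] * samples
        for j, t in enumerate(ts):
            # find the subinterval [xs[i-1], xs[i]] containing t
            i = 1
            while i < len(xs) - 1 and t > xs[i]:
                i += 1
            # invert L_i: map t back to the full interval [a, b]
            x = a + (b - a) * (t - xs[i - 1]) / (xs[i] - xs[i - 1])
            # q_i is the affine map forced by the interpolation conditions:
            # q_i(a) = y_{i-1} - s_i*y_0 and q_i(b) = y_i - s_i*y_N
            qa = ys[i - 1] - s[i - 1] * ys[0]
            qb = ys[i] - s[i - 1] * ys[-1]
            q = qa + (qb - qa) * (x - a) / (b - a)
            g[j] = s[i - 1] * interp(f, x) + q
        f = g
    return ts, f
```

At the fixed point the interpolation conditions hold exactly: for the data (0, 0), (0.5, 1), (1, 0) with s = [0.3, 0.3], the iterate converges to a function passing through all three data points.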
We examine sparse grid quadrature on weighted tensor products (WTP) of reproducing kernel Hilbert spaces on products of the unit sphere, in the case of worst-case quadrature error for rules with arbitrary quadrature weights. We describe a dimension-adaptive quadrature algorithm based on an algorithm of Hegland (2003), and also formulate a version of Wasilkowski and Woźniakowski's WTP algorithm (1999), here called the WW algorithm. We prove that the dimension-adaptive algorithm is optimal in the sense of Dantzig (1957) and therefore no greater in cost than the WW algorithm. Both algorithms therefore have the optimal asymptotic rate of convergence given by Theorem 3 of Wasilkowski and Woźniakowski (1999). A numerical example shows that, even though the asymptotic convergence rate is optimal, if the dimension weights decay slowly enough and the dimensionality of the problem is large enough, the initial convergence of the dimension-adaptive algorithm can be slow.
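The paper's algorithm operates on WTP spaces over products of spheres and is not reproduced here; as a generic illustration of the dimension-adaptive idea (a greedy sketch in the style of Gerstner and Griebel, on [0,1]^d with trapezoidal rules, all names our own), the index set is grown where the hierarchical surplus is largest:

```python
import math
from itertools import product

def nodes_weights(l):
    """Trapezoidal rule on [0,1] with 2**l + 1 points (level l)."""
    n = 2**l + 1
    h = 1.0 / (n - 1)
    xs = [i * h for i in range(n)]
    ws = [h] * n
    ws[0] = ws[-1] = h / 2
    return xs, ws

def tensor_quad(f, levels):
    """Full tensor-product quadrature at the given per-dimension levels."""
    grids = [nodes_weights(l) for l in levels]
    total = 0.0
    for pt in product(*[range(len(g[0])) for g in grids]):
        x = [grids[d][0][i] for d, i in enumerate(pt)]
        w = math.prod(grids[d][1][i] for d, i in enumerate(pt))
        total += w * f(x)
    return total

def delta(f, idx):
    """Hierarchical surplus Delta_{l1} x ... x Delta_{ld} applied to f,
    expanded as a signed sum of full tensor-product rules."""
    total = 0.0
    for signs in product(*[((0, 1) if l > 0 else (0,)) for l in idx]):
        lvl = tuple(l - s for l, s in zip(idx, signs))
        total += (-1) ** sum(signs) * tensor_quad(f, lvl)
    return total

def adaptive_quad(f, d, steps):
    """Greedy dimension-adaptive quadrature: repeatedly refine the active
    index with the largest surplus, keeping the index set downward closed."""
    root = (0,) * d
    old, active = set(), {root: delta(f, root)}
    result = active[root]
    for _ in range(steps):
        if not active:
            break
        idx = max(active, key=lambda k: abs(active[k]))
        old.add(idx)
        del active[idx]
        for k in range(d):
            nb = tuple(l + (1 if j == k else 0) for j, l in enumerate(idx))
            admissible = all(
                tuple(l - (1 if j == m else 0) for j, l in enumerate(nb)) in old
                for m in range(d) if nb[m] > 0)
            if admissible and nb not in active:
                active[nb] = delta(f, nb)
                result += active[nb]
    return result
```

For a smooth separable integrand the surpluses decay geometrically in |l|, so the greedy loop concentrates work on the few dimensions (and low mixed levels) that matter, which is the behaviour the dimension-adaptive algorithm of the paper exploits on WTP spaces.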
We consider approximation problems for a special space of d-variate functions. We show that the problems have a small number of active variables, as has been postulated in the past using concentration of measure arguments. We also show that, depending on the norm used for measuring the error, the problems are strongly polynomially or quasi-polynomially tractable, even in the model of computation where functional evaluations have cost exponential in the number of active variables.