Let $\mathcal{G} = \{G_1 = (V, E_1), \dots, G_m = (V, E_m)\}$ be a collection of $m$ graphs defined on a common set of vertices $V$ but with different edge sets $E_1, \dots, E_m$. Informally, a function $f : V \rightarrow \mathbb{R}$ is smooth with respect to $G_k = (V, E_k)$ if $f(u) \sim f(v)$ whenever $(u, v) \in E_k$. We study the problem of understanding whether there exists a nonconstant function that is smooth with respect to all graphs in $\mathcal{G}$ simultaneously, and how to find it if it exists.
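One natural computational heuristic (a hedged sketch, not necessarily the paper's method): a function that is smooth with respect to every $G_k$ has a small Laplacian quadratic form $f^\top L_k f$ for each $k$, so the smallest nontrivial eigenvector of the summed Laplacian $\sum_k L_k$ is a candidate nonconstant simultaneously smooth function. The graphs `E1`, `E2` below are purely illustrative.

```python
import numpy as np

def laplacian(edges, n):
    """Unnormalized graph Laplacian L = D - A from an edge list on n vertices."""
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1.0
        L[v, v] += 1.0
        L[u, v] -= 1.0
        L[v, u] -= 1.0
    return L

# Two illustrative graphs on the same vertex set {0, 1, 2, 3}.
n = 4
E1 = [(0, 1), (1, 2), (2, 3)]
E2 = [(0, 2), (1, 3)]

# f^T (L_1 + L_2) f = sum_k sum_{(u,v) in E_k} (f(u) - f(v))^2, so an
# eigenvector of L_1 + L_2 with small nonzero eigenvalue varies little
# across the edges of both graphs at once.
L = laplacian(E1, n) + laplacian(E2, n)
vals, vecs = np.linalg.eigh(L)
f = vecs[:, 1]  # smallest nontrivial eigenvector (index 0 is the constant)
```

Because the union of the two edge sets is connected here, the zero eigenvalue is simple and `f` is guaranteed to be nonconstant.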
We consider the multi-target detection problem of estimating a two-dimensional target image from a large noisy measurement image that contains many randomly rotated and translated copies of the target image. Motivated by single-particle cryo-electron microscopy, we focus on the low signal-to-noise regime, where it is difficult to estimate the locations and orientations of the target images in the measurement. Our approach uses autocorrelation analysis to estimate rotationally and translationally invariant features of the target image. We demonstrate that, regardless of the level of noise, our technique can be used to recover the target image when the measurement is sufficiently large.
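The translational part of this invariance is easy to illustrate: by the Wiener-Khinchin relation, the second-order autocorrelation of an image is unchanged under cyclic shifts. A minimal sketch (illustrative code, not the paper's estimator):

```python
import numpy as np

def autocorrelation(img):
    """Second-order cyclic autocorrelation via the Wiener-Khinchin relation."""
    F = np.fft.fft2(img)
    return np.real(np.fft.ifft2(F * np.conj(F))) / img.size

rng = np.random.default_rng(0)
img = rng.normal(size=(16, 16))
shifted = np.roll(img, shift=(3, 5), axis=(0, 1))

# A cyclic shift multiplies each Fourier coefficient by a unit-modulus
# phase, so |F|^2 -- and hence the autocorrelation -- is unchanged.
a_img = autocorrelation(img)
a_shifted = autocorrelation(shifted)
```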
We introduce a framework for recovering an image from its rotationally and translationally invariant features based on autocorrelation analysis. This work is an instance of the multi-target detection statistical model, which is mainly used to study the mathematical and computational properties of single-particle reconstruction using cryo-electron microscopy (cryo-EM) at low signal-to-noise ratios. We demonstrate with synthetic numerical experiments that an image can be reconstructed from rotationally and translationally invariant features and show that the reconstruction is robust to noise. These results constitute an important step towards the goal of structure determination of small biomolecules using cryo-EM.
The purpose of this paper is to extend the result of arXiv:1810.00823 to mixed Hölder functions on $[0,1]^d$ for all $d \ge 1$. In particular, we prove that by sampling an $\alpha$-mixed Hölder function $f : [0,1]^d \rightarrow \mathbb{R}$ at $\sim \frac{1}{\varepsilon} \left(\log \frac{1}{\varepsilon}\right)^d$ independent uniformly random points from $[0,1]^d$, we can construct an approximation $\tilde{f}$ such that $$ \|f - \tilde{f}\|_{L^2} \lesssim \varepsilon^\alpha \left(\log \textstyle{\frac{1}{\varepsilon}}\right)^{d-1/2}, $$ with high probability.
We present a fast method for evaluating expressions of the form $$ u_j = \sum_{i = 1, i \neq j}^n \frac{\alpha_i}{x_i - x_j}, \quad \text{for} \quad j = 1, \ldots, n, $$ where $\alpha_i$ are real numbers, and $x_i$ are points in a compact interval of $\mathbb{R}$. This expression can be viewed as representing the electrostatic potential generated by charges on a line in $\mathbb{R}^3$. While fast algorithms for computing the electrostatic potential of general distributions of charges in $\mathbb{R}^3$ exist, in a number of situations in computational physics it is useful to have a simple and extremely fast method for evaluating the potential of charges on a line; we present such a method in this paper, and report numerical results for several examples.
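For reference, the naive $O(n^2)$ evaluation of this sum is straightforward, and any fast method can be validated against it on small inputs. A sketch (the function names are ours, not the paper's):

```python
import numpy as np

def direct_sum(alpha, x):
    """Naive O(n^2) evaluation of u_j = sum_{i != j} alpha_i / (x_i - x_j)."""
    n = len(x)
    u = np.zeros(n)
    for j in range(n):
        for i in range(n):
            if i != j:
                u[j] += alpha[i] / (x[i] - x[j])
    return u

def direct_sum_vec(alpha, x):
    """Same sum, vectorized with a pairwise-difference matrix."""
    diff = x[None, :] - x[:, None]   # diff[j, i] = x_i - x_j
    np.fill_diagonal(diff, np.inf)   # excludes the i == j terms
    return (alpha[None, :] / diff).sum(axis=1)

alpha = np.array([1.0, -2.0, 0.5])
x = np.array([0.1, 0.4, 0.9])
u = direct_sum(alpha, x)
```

The vectorized version trades $O(n^2)$ memory for speed; both remain quadratic in time, which is exactly what a fast summation scheme improves on.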
Suppose $f : [0,1]^2 \rightarrow \mathbb{R}$ is a $(c,\alpha)$-mixed Hölder function that we sample at $l$ points $X_1, \ldots, X_l$ chosen uniformly at random from the unit square. Let the location of these points and the function values $f(X_1), \ldots, f(X_l)$ be given. If $l \ge c_1 n \log^2 n$, then we can compute an approximation $\tilde{f}$ such that $$ \|f - \tilde{f}\|_{L^2} = \mathcal{O}(n^{-\alpha} \log^{3/2} n), $$ with probability at least $1 - n^{2 - c_1}$, where the implicit constant only depends on the constants $c > 0$ and $c_1 > 0$.
In this paper we answer the following question: what is the infinitesimal generator of the diffusion process defined by a kernel that is normalized such that it is bi-stochastic with respect to a specified measure? More precisely, under the assumption that data is sampled from a Riemannian manifold, we determine how the resulting infinitesimal generator depends on the potentially nonuniform distribution of the sample points and the specified measure for the bi-stochastic normalization. In a special case, we demonstrate a connection to the heat kernel. We consider both the case where only a single data set is given, and the case where a data set and a reference set are given. The spectral theory of the constructed operators is studied, and Nyström extension formulas for the gradients of the eigenfunctions are computed. Applications to discrete point sets and manifold learning are discussed.
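In the discrete single-data-set case, such a bi-stochastic normalization can be computed by a symmetric Sinkhorn-type iteration: find a positive diagonal scaling $D$ so that $DKD$ has unit row and column sums. A hedged sketch (the kernel bandwidth and iteration count below are illustrative choices, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 2))                         # sample points
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
K = np.exp(-d2 / d2.mean())                          # Gaussian kernel, illustrative bandwidth

# Symmetric Sinkhorn iteration: at the fixed point d_i * (K d)_i = 1,
# so W = diag(d) K diag(d) is bi-stochastic (uniform reference measure).
d = np.ones(len(X))
for _ in range(500):
    d = np.sqrt(d / (K @ d))

W = d[:, None] * K * d[None, :]
```

The square root damps the update; for a symmetric positive kernel the iteration converges to the unique positive scaling.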
We consider an optimal stretching problem for strictly convex domains in $\mathbb{R}^d$ that are symmetric with respect to each coordinate hyperplane, where stretching refers to transformation by a diagonal matrix of determinant $1$. Specifically, we prove that the stretched convex domain which captures the most positive lattice points in the large volume limit is balanced: the $(d-1)$-dimensional measures of the intersections of the domain with each coordinate hyperplane are equal. Our results extend those of Antunes & Freitas, van den Berg, Bucur & Gittins, Ariturk & Laugesen, van den Berg & Gittins, and Gittins & Larson. The approach is motivated by the Fourier analysis techniques used to prove the classical $\#\{(i,j) \in \mathbb{Z}^2 : i^2 + j^2 \le r^2\} = \pi r^2 + \mathcal{O}(r^{2/3})$ result for the Gauss circle problem.
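The Gauss circle count quoted at the end is easy to check numerically by direct enumeration; a small illustrative sanity check:

```python
import math

def circle_count(r):
    """#{(i, j) in Z^2 : i^2 + j^2 <= r^2} by direct enumeration."""
    R = math.floor(r)
    return sum(1 for i in range(-R, R + 1)
                 for j in range(-R, R + 1)
                 if i * i + j * j <= r * r)

# The deviation from pi r^2 is far smaller than the trivial O(r)
# boundary bound, consistent with the O(r^{2/3}) error term.
for r in (10, 50, 100):
    err = abs(circle_count(r) - math.pi * r * r)
```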
We study a combinatorial problem that recently arose in the context of shape optimization: among all triangles with vertices $(0,0)$, $(x,0)$, and $(0,y)$ and fixed area, which one encloses the most lattice points from $\mathbb{Z}_{>0}^2$? Moreover, does its shape necessarily converge to the isosceles triangle $(x = y)$ as the area becomes large? Laugesen and Liu suggested that, in contrast to similar problems, there might not be a limiting shape. We prove that the limiting set is indeed nontrivial and contains infinitely many elements. We also show that there exist `bad' areas where no triangle is particularly good at capturing lattice points, and show that there exists an infinite set of slopes $y/x$ such that any associated triangle captures more lattice points than any other fixed triangle for infinitely many (and arbitrarily large) areas; this set of slopes is a fractal subset of $[1/3, 3]$ and has Minkowski dimension at most $3/4$.
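Counting the lattice points captured by a given triangle is a one-line enumeration. A sketch, under our illustrative convention that $(i, j) \in \mathbb{Z}_{>0}^2$ is captured when $i/x + j/y \le 1$ (boundary handling may differ in the paper):

```python
import math

def triangle_count(x, y):
    """Number of (i, j) in Z_{>0}^2 with i/x + j/y <= 1, i.e. captured by
    the triangle with vertices (0, 0), (x, 0), (0, y)."""
    return sum(max(0, math.floor(y * (1 - i / x)))
               for i in range(1, math.floor(x) + 1))

# Triangles of the same area (x*y/2 = 8) can capture different counts,
# which is what makes the optimal-shape question nontrivial.
isosceles = triangle_count(4.0, 4.0)
skewed = triangle_count(8.0, 2.0)
```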
The robustness of manifold learning methods is often predicated on the stability of the Neumann Laplacian eigenfunctions under deformations of the assumed underlying domain. Indeed, many manifold learning methods are based on approximating the Neumann Laplacian eigenfunctions on a manifold that is assumed to underlie data, which is viewed through a source of distortion. In this paper, we study the stability of the first Neumann Laplacian eigenfunction with respect to deformations of a domain by a diffeomorphism. In particular, we are interested in the stability of the first eigenfunction on tall thin domains where, intuitively, the first Neumann Laplacian eigenfunction should only depend on the length along the domain. We prove a rigorous version of this statement and apply it to a machine learning problem in geophysical interpretation.