
Multidimensional Scaling on Metric Measure Spaces

Posted by Lara Kassab
Publication date: 2019
Paper language: English





Multidimensional scaling (MDS) is a popular technique for mapping a finite metric space into a low-dimensional Euclidean space in a way that best preserves pairwise distances. We overview the theory of classical MDS, along with its optimality properties and goodness of fit. Further, we present a notion of MDS on infinite metric measure spaces that generalizes these optimality properties. As a consequence we can study the MDS embeddings of the geodesic circle $S^1$ into $\mathbb{R}^m$ for all $m$, and ask questions about the MDS embeddings of the geodesic $n$-spheres $S^n$ into $\mathbb{R}^m$. Finally, we address questions on convergence of MDS. For instance, if a sequence of metric measure spaces converges to a fixed metric measure space $X$, then in what sense do the MDS embeddings of these spaces converge to the MDS embedding of $X$?
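To ground the construction the abstract refers to, here is a minimal numpy sketch of classical MDS on a finite metric space, via double centering and a truncated eigendecomposition; the function name `classical_mds` and the clipping convention are ours, not the paper's.

```python
import numpy as np

def classical_mds(D, m):
    """Classical MDS: embed n points with pairwise distance matrix D into R^m."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram ("similarity") matrix
    lam, V = np.linalg.eigh(B)               # eigenvalues in ascending order
    idx = np.argsort(lam)[::-1][:m]          # keep the top-m eigenpairs
    lam_top = np.clip(lam[idx], 0.0, None)   # negative eigenvalues contribute nothing
    return V[:, idx] * np.sqrt(lam_top)      # rows are the embedded points
```

When the distance matrix is exactly Euclidean, this recovers the points up to a rigid motion; otherwise it yields the standard strain-minimizing approximation whose optimality properties the paper reviews.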




Read also

Lara Kassab, 2019
Multidimensional scaling (MDS) is a popular technique for mapping a finite metric space into a low-dimensional Euclidean space in a way that best preserves pairwise distances. We study a notion of MDS on infinite metric measure spaces, along with its optimality properties and goodness of fit. This allows us to study the MDS embeddings of the geodesic circle $S^1$ into $\mathbb{R}^m$ for all $m$, and to ask questions about the MDS embeddings of the geodesic $n$-spheres $S^n$ into $\mathbb{R}^m$. Furthermore, we address questions on convergence of MDS. For instance, if a sequence of metric measure spaces converges to a fixed metric measure space $X$, then in what sense do the MDS embeddings of these spaces converge to the MDS embedding of $X$? Convergence is understood when each metric space in the sequence has the same finite number of points, or when each metric space has a finite number of points tending to infinity. We are also interested in notions of convergence when each metric space in the sequence has an arbitrary (possibly infinite) number of points.
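To make the $S^1$ discussion concrete, the following sketch approximates the geodesic circle by $n$ equally spaced points and embeds them into $\mathbb{R}^2$; this finite sampling is our illustration of the infinite construction, not the paper's own computation, and it reuses `classical_mds` from the sketch above.

```python
import numpy as np

n = 200
theta = 2 * np.pi * np.arange(n) / n     # n equally spaced points on S^1
gap = np.abs(theta[:, None] - theta[None, :])
D = np.minimum(gap, 2 * np.pi - gap)     # geodesic (arc-length) distances

X = classical_mds(D, 2)                  # classical_mds: the sketch defined above
r = np.linalg.norm(X, axis=1)
print(r.std() / r.mean())                # ~ 0: the image is a round circle
```

Because the geodesic distance matrix of equally spaced points on $S^1$ is circulant, its eigenvectors are Fourier modes, so the two-dimensional MDS image is a round circle.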
In this note we give several characterisations of weights for two-weight Hardy inequalities to hold on general metric measure spaces possessing polar decompositions. Since there may be no differentiable structure on such spaces, the inequalities are given in the integral form in the spirit of Hardy's original inequality. We give examples obtaining new weighted Hardy inequalities on $\mathbb{R}^n$, on homogeneous groups, on hyperbolic spaces, and on Cartan-Hadamard manifolds.
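For orientation, the one-dimensional prototype behind these integral-form inequalities is Hardy's original inequality, which for $p > 1$ and measurable $f \ge 0$ reads:

```latex
\int_0^\infty \left( \frac{1}{x} \int_0^x f(t)\, dt \right)^{p} dx
  \;\le\; \left( \frac{p}{p-1} \right)^{p} \int_0^\infty f(x)^{p}\, dx,
```

with the constant $(p/(p-1))^p$ sharp; the two-weight versions ask for which pairs of weights such an estimate continues to hold.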
Optimal linear prediction (also known as kriging) of a random field $\{Z(x)\}_{x\in\mathcal{X}}$ indexed by a compact metric space $(\mathcal{X},d_{\mathcal{X}})$ can be obtained if the mean value function $m\colon\mathcal{X}\to\mathbb{R}$ and the covariance function $\varrho\colon\mathcal{X}\times\mathcal{X}\to\mathbb{R}$ of $Z$ are known. We consider the problem of predicting the value of $Z(x^*)$ at some location $x^*\in\mathcal{X}$ based on observations at locations $\{x_j\}_{j=1}^n$ which accumulate at $x^*$ as $n\to\infty$ (or, more generally, predicting $\varphi(Z)$ based on $\{\varphi_j(Z)\}_{j=1}^n$ for linear functionals $\varphi, \varphi_1, \ldots, \varphi_n$). Our main result characterizes the asymptotic performance of linear predictors (as $n$ increases) based on an incorrect second-order structure $(\tilde{m},\tilde{\varrho})$, without any restrictive assumptions on $\varrho, \tilde{\varrho}$ such as stationarity. For the first time, we provide necessary and sufficient conditions on $(\tilde{m},\tilde{\varrho})$ for asymptotic optimality of the corresponding linear predictor holding uniformly with respect to $\varphi$. These general results are illustrated by weakly stationary random fields on $\mathcal{X}\subset\mathbb{R}^d$ with Matérn or periodic covariance functions, and on the sphere $\mathcal{X}=\mathbb{S}^2$ for the case of two isotropic covariance functions.
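As a concrete (and loudly simplified) instance of the linear predictors under discussion, here is a simple-kriging sketch in numpy with an assumed Matérn-$1/2$ (exponential) covariance on an interval; the kernel parameters, the zero-mean assumption, and the observation locations are all placeholders, and none of the paper's asymptotic analysis is reproduced.

```python
import numpy as np

def matern_half(x, y, sigma2=1.0, ell=0.5):
    """Matern nu = 1/2 (exponential) covariance; parameters are illustrative."""
    return sigma2 * np.exp(-np.abs(np.subtract.outer(x, y)) / ell)

def simple_krige(x_obs, z_obs, x_star, cov):
    """Best linear predictor of Z(x*) for a known zero-mean model with covariance cov."""
    C = cov(x_obs, x_obs)                      # covariances among observations
    c = cov(x_obs, np.atleast_1d(x_star))      # covariances with the target location
    w = np.linalg.solve(C, c[:, 0])            # kriging weights
    return w @ z_obs

# Observation locations accumulating at x* = 0, as in the abstract's setting.
rng = np.random.default_rng(0)
x_obs = 1.0 / np.arange(2, 12)
L = np.linalg.cholesky(matern_half(x_obs, x_obs) + 1e-10 * np.eye(len(x_obs)))
z_obs = L @ rng.standard_normal(len(x_obs))    # one simulated sample path at x_obs
print(simple_krige(x_obs, z_obs, 0.0, matern_half))
```

Passing a misspecified kernel $\tilde{\varrho}$ in place of `matern_half` here is exactly the situation the paper analyzes: the question is when the resulting weights remain asymptotically optimal.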
Multidimensional Scaling (MDS) is a classical technique for embedding data in low dimensions, still in widespread use today. Originally introduced in the 1950s, MDS was not designed with high-dimensional data in mind; while it remains popular with data analysis practitioners, it should no doubt be adapted to the high-dimensional data regime. In this paper we study MDS in a modern setting, specifically with high dimensions and ambient measurement noise. We show that, as the ambient noise level increases, MDS suffers a sharp breakdown that depends on the data dimension and noise level, and we derive an explicit formula for this breakdown point in the case of white noise. We then introduce MDS+, an extremely simple variant of MDS, which applies a carefully derived shrinkage nonlinearity to the eigenvalues of the MDS similarity matrix. Under a loss function measuring the embedding quality, MDS+ is the unique asymptotically optimal shrinkage function. We prove that MDS+ offers improved embedding, sometimes significantly so, compared with classical MDS. Furthermore, MDS+ does not require external estimates of the embedding dimension (a famous difficulty in classical MDS), as it calculates the optimal dimension into which the data should be embedded.
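The optimal shrinker itself is derived in the paper and not reproduced here; the sketch below only illustrates the mechanism of shrinking the similarity-matrix eigenvalues, with a generic soft-threshold at an assumed level `tau` standing in for the paper's nonlinearity.

```python
import numpy as np

def mds_with_shrinkage(D, shrink):
    """MDS where a shrinker is applied to the similarity-matrix eigenvalues."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J         # MDS similarity matrix
    lam, V = np.linalg.eigh(B)
    lam = shrink(lam)                   # shrink rather than merely truncate
    keep = lam > 0                      # embedding dimension falls out automatically
    return V[:, keep] * np.sqrt(lam[keep])

# Generic soft-threshold shrinker at an assumed noise-dependent level tau
# (NOT the optimal MDS+ shrinker, which the paper derives explicitly).
soft = lambda lam, tau=1.0: np.maximum(lam - tau, 0.0)
```

Note how zeroing the small eigenvalues selects the embedding dimension, matching the abstract's remark that MDS+ needs no external dimension estimate.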
We relate the existence of many infinite geodesics on Alexandrov spaces to a statement about the average growth of volumes of balls. We deduce that the geodesic flow exists and preserves the Liouville measure in several important cases. The developed analytic tool has close ties to integral geometry.