
Multidimensional Scaling: Infinite Metric Measure Spaces

Added by Lara Kassab
Publication date: 2019
Language: English
Authors: Lara Kassab





Multidimensional scaling (MDS) is a popular technique for mapping a finite metric space into a low-dimensional Euclidean space in a way that best preserves pairwise distances. We study a notion of MDS on infinite metric measure spaces, along with its optimality properties and goodness of fit. This allows us to study the MDS embeddings of the geodesic circle $S^1$ into $\mathbb{R}^m$ for all $m$, and to ask questions about the MDS embeddings of the geodesic $n$-spheres $S^n$ into $\mathbb{R}^m$. Furthermore, we address questions on convergence of MDS. For instance, if a sequence of metric measure spaces converges to a fixed metric measure space $X$, then in what sense do the MDS embeddings of these spaces converge to the MDS embedding of $X$? Convergence is understood when each metric space in the sequence has the same finite number of points, or when each metric space has a finite number of points tending to infinity. We are also interested in notions of convergence when each metric space in the sequence has an arbitrary (possibly infinite) number of points.
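For readers unfamiliar with the finite construction this work generalizes, the following is a minimal numpy sketch of classical MDS (the function name and arguments are our own): the squared-distance matrix is double-centered and the embedding is read off the top eigenpairs. In the infinite metric measure space setting studied here, this matrix eigendecomposition is replaced by the spectral decomposition of an integral operator.

import numpy as np

def classical_mds(D, m):
    # D: n x n matrix of pairwise distances; m: target dimension.
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n         # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                 # double-centered Gram matrix
    evals, evecs = np.linalg.eigh(B)            # eigenvalues in ascending order
    idx = np.argsort(evals)[::-1][:m]           # keep the m largest eigenpairs
    scale = np.sqrt(np.maximum(evals[idx], 0))  # clip negatives: D may be non-Euclidean
    return evecs[:, idx] * scale                # n x m coordinate matrix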



Related research

Multidimensional scaling (MDS) is a popular technique for mapping a finite metric space into a low-dimensional Euclidean space in a way that best preserves pairwise distances. We overview the theory of classical MDS, along with its optimality properties and goodness of fit. Further, we present a notion of MDS on infinite metric measure spaces that generalizes these optimality properties. As a consequence we can study the MDS embeddings of the geodesic circle $S^1$ into $\mathbb{R}^m$ for all $m$, and ask questions about the MDS embeddings of the geodesic $n$-spheres $S^n$ into $\mathbb{R}^m$. Finally, we address questions on convergence of MDS. For instance, if a sequence of metric measure spaces converges to a fixed metric measure space $X$, then in what sense do the MDS embeddings of these spaces converge to the MDS embedding of $X$?
Multidimensional Scaling (MDS) is a classical technique for embedding data in low dimensions, still in widespread use today. Originally introduced in the 1950s, MDS was not designed with high-dimensional data in mind; while it remains popular with data analysis practitioners, no doubt it should be adapted to the high-dimensional data regime. In this paper we study MDS in a modern setting, specifically under high dimensions and ambient measurement noise. We show that, as the ambient noise level increases, MDS suffers a sharp breakdown that depends on the data dimension and noise level, and we derive an explicit formula for this breakdown point in the case of white noise. We then introduce MDS+, an extremely simple variant of MDS, which applies a carefully derived shrinkage nonlinearity to the eigenvalues of the MDS similarity matrix. Under a loss function measuring the embedding quality, MDS+ is the unique asymptotically optimal shrinkage function. We prove that MDS+ offers improved embedding, sometimes significantly so, compared with classical MDS. Furthermore, MDS+ does not require external estimates of the embedding dimension (a famous difficulty in classical MDS), as it calculates the optimal dimension into which the data should be embedded.
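As a sketch of where the shrinkage enters the pipeline, here is classical MDS with a user-supplied nonlinearity applied to the spectrum. The placeholder `shrink` stands in for the paper's derived optimal shrinker, whose exact form we do not reproduce; the hard threshold shown at the end is purely illustrative and is NOT the MDS+ formula.

import numpy as np

def shrunk_mds(D, shrink):
    # Classical MDS with an eigenvalue shrinkage step, in the spirit of MDS+.
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J
    evals, evecs = np.linalg.eigh(B)
    s = shrink(evals)                          # shrinkage nonlinearity on the spectrum
    keep = s > 0                               # surviving eigenvalues fix the embedding
    return evecs[:, keep] * np.sqrt(s[keep])   # dimension: no external estimate needed

# Illustrative hard-threshold shrinker (a stand-in, not the paper's optimal shrinker):
hard = lambda w, tau=1.0: np.where(w > tau, w, 0.0)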
Fabrice Gamboa, 2020
In this paper, we introduce new indices adapted to outputs valued in general metric spaces. This new class of indices encompasses the classical ones; in particular, the so-called Sobol indices and the Cramér-von-Mises indices. Furthermore, we provide asymptotically Gaussian estimators of these indices based on U-statistics. Surprisingly, we prove the asymptotic normality straightforwardly. Finally, we illustrate this new procedure on a toy model and on two real-data examples.
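For reference, the classical first-order Sobol index of an input $X_i$ for a square-integrable output $Y = f(X_1, \dots, X_d)$ with independent inputs is the variance ratio

$$ S_i = \frac{\operatorname{Var}\big(\mathbb{E}[Y \mid X_i]\big)}{\operatorname{Var}(Y)}, $$

which the metric-space-valued indices above encompass as the special case of real-valued outputs.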
In this note we give several characterisations of weights for two-weight Hardy inequalities to hold on general metric measure spaces possessing polar decompositions. Since there may be no differentiable structure on such spaces, the inequalities are given in the integral form in the spirit of Hardy's original inequality. We give examples obtaining new weighted Hardy inequalities on $\mathbb{R}^n$, on homogeneous groups, on hyperbolic spaces, and on Cartan-Hadamard manifolds.
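For context, Hardy's original inequality, whose integral form the weighted inequalities above generalize, states that for $p > 1$ and $f \ge 0$,

$$ \int_0^\infty \left( \frac{1}{x} \int_0^x f(t) \, dt \right)^p dx \le \left( \frac{p}{p-1} \right)^p \int_0^\infty f(x)^p \, dx, $$

with the constant $\left(p/(p-1)\right)^p$ sharp.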
This paper describes universal lossless coding strategies for compressing sources on countably infinite alphabets. Classes of memoryless sources defined by an envelope condition on the marginal distribution provide benchmarks for coding techniques originating from the theory of universal coding over finite alphabets. We prove general upper bounds on minimax regret and lower bounds on minimax redundancy for such source classes. The general upper bounds emphasize the role of the Normalized Maximum Likelihood codes with respect to minimax regret in the infinite alphabet context. Lower bounds are derived by tailoring sharp bounds on the redundancy of Krichevsky-Trofimov coders for sources over finite alphabets. Up to logarithmic (resp. constant) factors the bounds are matching for source classes defined by algebraically declining (resp. exponentially vanishing) envelopes. Effective and (almost) adaptive coding techniques are described for the collection of source classes defined by algebraically vanishing envelopes. Those results extend our knowledge concerning universal coding to contexts where the key tools from parametric inference are known to fail.
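For the standard definitions used above: given a class $\Lambda$ of sources and a coding probability $Q_n$ on length-$n$ strings over alphabet $\mathcal{X}$, the minimax regret is

$$ R^*(\Lambda, n) = \inf_{Q_n} \, \sup_{x \in \mathcal{X}^n} \, \sup_{P \in \Lambda} \log \frac{P^n(x)}{Q_n(x)}, $$

while the minimax redundancy replaces the worst case over strings by an expectation, $\inf_{Q_n} \sup_{P \in \Lambda} \mathbb{E}_{P^n} \log \frac{P^n(X^n)}{Q_n(X^n)}$. The Normalized Maximum Likelihood code, $Q_{\mathrm{NML}}(x) \propto \sup_{P \in \Lambda} P^n(x)$, achieves the minimax regret exactly whenever its normalizing sum is finite.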