
Persistent homology detects curvature

Added by Peter Bubenik
Publication date: 2019
Language: English





In topological data analysis, persistent homology is used to study the shape of data. Persistent homology computations are completely characterized by a set of intervals called a bar code. It is often said that the long intervals represent the topological signal and the short intervals represent noise. We give evidence to dispute this thesis, showing that the short intervals encode geometric information. Specifically, we prove that persistent homology detects the curvature of disks from which points have been sampled. We describe a general computational framework for solving inverse problems using the average persistence landscape, a continuous mapping from metric spaces with a probability measure to a Hilbert space. In the present application, the average persistence landscapes of points sampled from disks of constant curvature result in a path in this Hilbert space which may be learned using standard tools from statistics and machine learning.
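The following sketch illustrates the kind of pipeline the abstract describes, assuming the gudhi and scikit-learn libraries are available; the sampling scheme (disks of curvature K >= 0 only), the Rips filtration on geodesic distances, the landscape parameters, and the ridge regression are illustrative choices rather than the authors' exact construction.

```python
# Minimal sketch of a "learn curvature from persistence" pipeline.
# Assumptions (not from the paper): gudhi + scikit-learn, curvature K >= 0,
# Rips filtration on geodesic distances, degree-1 persistence, gudhi's
# Landscape vectorization, ridge regression.
import numpy as np
import gudhi
from gudhi.representations import Landscape
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def sample_disk(K, n=100, radius=1.0):
    """Sample n points uniformly from a geodesic disk of constant curvature
    K >= 0 (K = 0: Euclidean disk; K > 0: cap on a sphere of radius
    1/sqrt(K)) and return the n x n matrix of pairwise geodesic distances."""
    if K == 0:
        r = radius * np.sqrt(rng.random(n))
        t = 2 * np.pi * rng.random(n)
        pts = np.column_stack([r * np.cos(t), r * np.sin(t)])
        return np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    R = 1.0 / np.sqrt(K)                          # sphere radius
    z = rng.uniform(np.cos(radius / R), 1.0, n)   # uniform on the cap
    phi = 2 * np.pi * rng.random(n)
    s = np.sqrt(1.0 - z ** 2)
    pts = R * np.column_stack([s * np.cos(phi), s * np.sin(phi), z])
    cosang = np.clip(pts @ pts.T / R ** 2, -1.0, 1.0)
    return R * np.arccos(cosang)                  # geodesic distances

def diagram_h1(dist):
    """Degree-1 persistence diagram of the Rips filtration on a distance matrix."""
    rips = gudhi.RipsComplex(distance_matrix=dist, max_edge_length=2.5)
    st = rips.create_simplex_tree(max_dimension=2)
    st.persistence()
    return st.persistence_intervals_in_dimension(1)

# Several samples per curvature; averaging their landscapes would approximate
# the average persistence landscape, but here we regress on individual ones.
curvatures = np.repeat(np.linspace(0.0, 2.0, 11), 5)
diagrams = [diagram_h1(sample_disk(K)) for K in curvatures]

landscapes = Landscape(num_landscapes=5, resolution=100, sample_range=[0.0, 2.0])
X = landscapes.fit_transform(diagrams)            # one feature vector per sample

X_tr, X_te, y_tr, y_te = train_test_split(X, curvatures, random_state=0)
model = Ridge(alpha=1.0).fit(X_tr, y_tr)
print("held-out R^2 for predicting curvature:", model.score(X_te, y_te))
```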


Related research

We apply persistent homology to the task of discovering and characterizing phase transitions, using lattice spin models from statistical physics for working examples. Persistence images provide a useful representation of the homological data for conducting statistical tasks. To identify the phase transitions, a simple logistic regression on these images is sufficient for the models we consider, and interpretable order parameters are then read from the weights of the regression. Magnetization, frustration and vortex-antivortex structure are identified as relevant features for characterizing phase transitions.
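As a concrete, hedged illustration of the classification step described in the summary above: assuming persistence diagrams for spin configurations and binary phase labels have already been computed elsewhere, a persistence-image vectorization followed by logistic regression might look as follows. The gudhi PersistenceImage parameters and the inputs `diagrams` and `labels` are assumptions made for the sketch, not quantities from the paper.

```python
# Sketch: logistic regression on persistence images, with weights read back
# as an interpretable, order-parameter-like map.  `diagrams` (list of
# (birth, death) arrays) and `labels` (0 below / 1 above the transition)
# are assumed to have been computed elsewhere.
import numpy as np
from gudhi.representations import PersistenceImage
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def classify_phases(diagrams, labels, resolution=(20, 20)):
    # Each diagram becomes a flattened persistence image on a fixed grid.
    pimg = PersistenceImage(bandwidth=0.1, resolution=list(resolution))
    X = pimg.fit_transform(diagrams)
    y = np.asarray(labels)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = LogisticRegression(max_iter=5000).fit(X_tr, y_tr)

    # The learned weights live on the same grid as the images; reshaping them
    # shows which (birth, persistence) regions drive the prediction.
    weight_map = clf.coef_.reshape(resolution)
    return clf.score(X_te, y_te), weight_map
```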
Forman's discrete Morse theory has proved useful for providing filtration-preserving reductions of complexes in the study of persistent homology. So far, the algorithms computing discrete Morse matchings have only been used for one-dimensional filtrations. This paper is perhaps the first attempt to extend such algorithms to multidimensional filtrations. An initial framework for Morse matchings in the multidimensional setting is proposed, and a matching algorithm given by King, Knudson, and Mramor is extended in this direction. The correctness of the algorithm is proved and its complexity is analyzed. The algorithm is used to establish a reduction of a simplicial complex to a smaller but not necessarily optimal cellular complex. First experiments with filtrations of triangular meshes are presented.
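The sketch below is not the King-Knudson-Mramor matching algorithm; it is a much simpler stand-in, assuming a simplicial complex stored as a dict from simplices to multigrades, that performs only free-face collapses between cells entering the multifiltration at the same grade. This is one elementary way to obtain a filtration-preserving reduction.

```python
# Simplified filtration-preserving reduction by free-face collapses.
# Assumption (not the paper's algorithm): a pair (sigma, tau) is removed only
# if sigma is a facet of exactly one remaining cell, namely tau, and both
# cells enter the multifiltration at the same grade, so every sublevel
# complex is collapsed compatibly.
from itertools import combinations

def facets(simplex):
    """Codimension-1 faces of a simplex given as a sorted tuple of vertices."""
    return [f for f in combinations(simplex, len(simplex) - 1) if f]

def collapse(filtered_complex):
    """filtered_complex: dict mapping simplex -> grade (a tuple with one entry
    per filtration parameter).  Returns the reduced complex."""
    cells = dict(filtered_complex)
    changed = True
    while changed:
        changed = False
        cofaces = {}               # sigma -> remaining cells having facet sigma
        for tau in cells:
            for sigma in facets(tau):
                if sigma in cells:
                    cofaces.setdefault(sigma, []).append(tau)
        for sigma, taus in cofaces.items():
            if len(taus) == 1 and sigma in cells and taus[0] in cells:
                tau = taus[0]
                if cells[sigma] == cells[tau]:   # same multigrade: safe to pair
                    del cells[sigma]
                    del cells[tau]
                    changed = True
    return cells

# Filled triangle where the edge (1,2) and the 2-cell enter at grade (1,).
K = {(0,): (0,), (1,): (0,), (2,): (0,),
     (0, 1): (0,), (0, 2): (0,), (1, 2): (1,),
     (0, 1, 2): (1,)}
print(collapse(K))   # reduces to a single critical vertex, {(0,): (0,)}
```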
Comparison between multidimensional persistent Betti numbers is often based on the multidimensional matching distance. While this metric is rather simple to define and compute by considering a suitable family of filtering functions associated with lines having positive slope, it has two main drawbacks. First, it forgets the natural link between the homological properties of filtrations associated with lines that are close to each other; as a consequence, part of the interesting homological information is lost. Second, its intrinsically discontinuous definition makes it difficult to study its properties. In this paper we introduce a new matching distance for 2D persistent Betti numbers, called the coherent matching distance, based on matchings that change coherently with the filtrations we take into account. Its definition is not trivial, as it must account for the presence of monodromy in multidimensional persistence, i.e. the fact that different paths in the space parameterizing the above filtrations can induce different matchings between the associated persistence diagrams. We prove that the coherent 2D matching distance is well-defined and stable.
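For concreteness, the following is a hedged sketch, in a common convention from the multidimensional matching distance literature (the normalization may differ from the one used in the paper), of the family of filtering functions and of the distance whose matchings are made coherent.

```latex
% Lines with positive slope in the plane are indexed by (a, b) with
% 0 < a < 1 and b real; to a bivariate filtering function f = (f_1, f_2)
% one associates the one-parameter filtering function
\[
  F^{f}_{(a,b)}(x) \;=\; \min\{a,\, 1-a\}\cdot
  \max\!\left\{ \frac{f_1(x) - b}{a},\; \frac{f_2(x) + b}{1-a} \right\},
\]
% and the (non-coherent) 2D matching distance is the supremum over the
% parameter space of bottleneck distances between the resulting diagrams:
\[
  D_{\mathrm{match}}(\beta_f, \beta_g) \;=\;
  \sup_{(a,b)} \, d_B\!\left( \mathrm{Dgm}\, F^{f}_{(a,b)},\,
                              \mathrm{Dgm}\, F^{g}_{(a,b)} \right).
\]
% Monodromy refers to the fact that transporting an optimal matching for d_B
% around a loop in the (a, b) parameter space can return a different matching;
% the coherent matching distance requires the matchings to vary continuously
% along such paths.
```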
We propose a general technique for extracting a larger set of stable information from persistent homology computations than is currently done. The persistent homology algorithm is usually viewed as a procedure which starts with a filtered complex and ends with a persistence diagram. This procedure is stable (at least to certain types of perturbations of the input). This justifies the use of the diagram as a signature of the input, and the use of features derived from it in statistics and machine learning. However, these computations also produce other information of great interest to practitioners that is unfortunately unstable. For example, each point in the diagram corresponds to a simplex whose addition in the filtration results in the birth of the corresponding persistent homology class, but this correspondence is unstable. In addition, the persistence diagram is not stable with respect to other procedures that are employed in practice, such as thresholding a point cloud by density. We recast these problems as real-valued functions which are discontinuous but measurable, and then observe that convolving such a function with a suitable function produces a Lipschitz function. The resulting stable function can be estimated by perturbing the input and averaging the output. We illustrate this approach with a number of examples, including a stable localization of a persistent homology generator from brain imaging data.
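A minimal numerical sketch of the perturb-and-average idea follows; the Gaussian perturbation width and the toy "unstable localization" used here (the point farthest from the centroid, standing in for, say, the birth simplex of the most persistent class) are assumptions made purely for illustration.

```python
# Perturb-and-average stabilization: convolving a discontinuous but
# measurable feature of the input with a Gaussian yields a smoothed quantity,
# estimated here by Monte Carlo (perturb the point cloud, recompute, average).
import numpy as np

def farthest_point(points):
    """A deliberately unstable 'localization': coordinates of the point
    farthest from the centroid.  Near-ties make it jump under tiny noise,
    much like the simplex whose addition gives birth to a persistence class."""
    d = np.linalg.norm(points - points.mean(axis=0), axis=1)
    return points[np.argmax(d)]

def stable_localization(points, sigma=0.05, n_draws=500, seed=0):
    """Monte Carlo estimate of the Gaussian-smoothed feature: perturb the
    input, re-evaluate the unstable feature, and average the outputs."""
    rng = np.random.default_rng(seed)
    draws = [farthest_point(points + sigma * rng.standard_normal(points.shape))
             for _ in range(n_draws)]
    return np.mean(draws, axis=0)

pts = np.random.default_rng(1).random((50, 2))
print(farthest_point(pts), stable_localization(pts))
```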
Machine learning has emerged as a powerful approach in materials discovery. Its major challenge is selecting features that create interpretable representations of materials, useful across multiple prediction tasks. We introduce an end-to-end machine learning model that automatically generates descriptors that capture a complex representation of a materials structure and chemistry. This approach builds on computational topology techniques (namely, persistent homology) and word embeddings from natural language processing. It automatically encapsulates geometric and chemical information directly from the material system. We demonstrate our approach on multiple nanoporous metal-organic framework datasets by predicting methane and carbon dioxide adsorption across different conditions. Our results show considerable improvement in both accuracy and transferability across targets compared to models constructed from the commonly-used, manually-curated features, consistently achieving an average 25-30% decrease in root-mean-squared-deviation and an average increase of 40-50% in R² scores. A key advantage of our approach is interpretability: Our model identifies the pores that correlate best to adsorption at different pressures, which contributes to understanding atomic-level structure–property relationships for materials design.
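A generic evaluation sketch in the spirit of the comparison above, assuming descriptor vectors X (for example, flattened persistence images concatenated with chemistry embeddings) and adsorption targets y are already available; the gradient-boosted model is a stand-in and not the paper's end-to-end architecture.

```python
# Evaluate precomputed material descriptors on an adsorption-prediction task,
# reporting root-mean-squared-deviation and the R^2 score.  `X` and `y` are
# assumed inputs; the regressor is a generic stand-in.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

def evaluate_descriptors(X, y, seed=0):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=seed)
    model = GradientBoostingRegressor(random_state=seed).fit(X_tr, y_tr)
    pred = model.predict(X_te)
    rmsd = float(np.sqrt(mean_squared_error(y_te, pred)))
    return rmsd, float(r2_score(y_te, pred))
```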
