Persistent Homology is a fairly new branch of Computational Topology which combines geometry and topology for an effective shape description of use in Pattern Recognition. In particular, it registers, through Betti Numbers, the presence of holes and their persistence as a parameter (the filtering function) is varied. In this paper, some recent developments in this field are integrated into a k-Nearest Neighbor search algorithm suited to the automatic retrieval of melanocytic lesions. Dermatologists have long used five morphological parameters (A = Asymmetry, B = Boundary, C = Color, D = Diameter, E = Elevation or Evolution) to assess the malignancy of a lesion. The algorithm is based on a qualitative assessment of the segmented images obtained by computing both 1- and 2-dimensional Persistent Betti Number functions related to the ABCDE parameters and to the internal texture of the lesion. The results of a feasibility test on a set of 107 melanocytic lesions are reported in the section dedicated to the numerical experiments.
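As a rough illustration of this kind of pipeline (a sketch under simplifying assumptions, not the authors' implementation), the snippet below retrieves the most similar lesions by k-NN search over Betti-number curves. For brevity it only computes the 0-dimensional Betti number (connected components) of grayscale sublevel sets, whereas the paper uses 1- and 2-dimensional Persistent Betti Number functions of the ABCDE-related filtering functions.

```python
# Minimal sketch, not the authors' pipeline: k-NN retrieval over Betti-number curves.
import numpy as np
from scipy import ndimage
from sklearn.neighbors import NearestNeighbors

def betti0_curve(gray: np.ndarray, n_levels: int = 32) -> np.ndarray:
    """Number of connected components of the sublevel set {gray <= t},
    sampled at n_levels thresholds of the grayscale filtering function."""
    levels = np.linspace(gray.min(), gray.max(), n_levels)
    return np.array([ndimage.label(gray <= t)[1] for t in levels], dtype=float)

def build_index(images, k: int = 5):
    """Stack the curves of all segmented lesion images and fit a k-NN index."""
    X = np.stack([betti0_curve(img) for img in images])
    return NearestNeighbors(n_neighbors=k).fit(X), X

# Usage (hypothetical data):
#   index, X = build_index(segmented_lesions)
#   dist, idx = index.kneighbors(betti0_curve(query_image).reshape(1, -1))
```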
Obstructive Sleep Apnea (OSA) is a form of sleep-disordered breathing characterized by frequent episodes of upper airway collapse during sleep. Pediatric OSA occurs in 1-5% of children and can be related to other serious health conditions such as high blood pressure, behavioral issues, or altered growth. OSA is often diagnosed by studying the patient's sleep cycle, the pattern with which they progress through various sleep states such as wakefulness, rapid eye movement, and non-rapid eye movement. The sleep state data are obtained from an overnight polysomnography test that the patient undergoes at a hospital or sleep clinic, where a technician manually labels each 30-second time interval, also called an epoch, with the current sleep state. This process is laborious and prone to human error. We seek an automatic method of classifying the sleep state, as well as a method to analyze the sleep cycles. This article is a pilot study in sleep state classification using two approaches: first, we use methods from the field of topological data analysis to classify the sleep state; second, we model sleep states as a Markov chain and visually analyze the sleep patterns. In the future, we will continue to build on this work to improve our methods.
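A minimal sketch of the second approach (our own illustration, not the authors' code, assuming epoch labels such as "W", "REM", "N1", "N2", "N3"): estimate a first-order Markov chain transition matrix from the sequence of labeled 30-second epochs.

```python
# Illustrative sketch: empirical transition matrix of sleep states.
import numpy as np

def transition_matrix(epochs, states=("W", "REM", "N1", "N2", "N3")):
    """Estimate P[i, j] = probability of moving from state i to state j
    between consecutive 30-second epochs."""
    idx = {s: i for i, s in enumerate(states)}
    counts = np.zeros((len(states), len(states)))
    for a, b in zip(epochs[:-1], epochs[1:]):
        counts[idx[a], idx[b]] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    # Normalize each row; rows with no observed transitions stay zero.
    return np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)

# Example: P = transition_matrix(["W", "N1", "N2", "N2", "REM", "W"])
```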
Nearest neighbor search has found numerous applications in machine learning, data mining and massive data processing systems. The past few years have witnessed the popularity of the graph-based nearest neighbor search paradigm because of its superiority over the space-partitioning algorithms. While many empirical studies demonstrate the efficiency of graph-based algorithms, not much attention has been paid to a more fundamental question: why do graph-based algorithms work so well in practice? And which data property affects their efficiency, and how? In this paper, we try to answer these questions. Our insight is that the probability that the neighbors of a point o tend to be neighbors of each other in the KNN graph is a crucial data property for query efficiency. For a given dataset, such a property can be measured by the clustering coefficient of the KNN graph. To show how the clustering coefficient affects the performance, we identify that, instead of the global connectivity, the local connectivity around a given query q has a more direct impact on recall. Specifically, we observe that a high clustering coefficient makes most of the k nearest neighbors of q sit in a maximum strongly connected component (SCC) of the graph. From the algorithmic point of view, we show that the search procedure is actually composed of two phases - one outside the maximum SCC and the other inside it, which is different from the widely accepted single- or multiple-path search models. We prove that the commonly used graph-based search algorithm is guaranteed to traverse the maximum SCC once it visits any point in it. Our analysis reveals that a high clustering coefficient leads to a large maximum SCC, and thus provides good answer quality with the help of the two-phase search procedure. Extensive empirical results over a comprehensive collection of datasets validate our findings.
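To make the two quantities concrete, here is a small sketch (not the paper's code; scikit-learn and networkx are assumptions of ours) that builds a KNN graph and reports the average clustering coefficient and the size of the largest strongly connected component.

```python
# Sketch under assumptions: clustering coefficient and maximum SCC of a KNN graph.
import numpy as np
import networkx as nx
from sklearn.neighbors import kneighbors_graph

def knn_graph_stats(X: np.ndarray, k: int = 10):
    """Return (average clustering coefficient, size of the largest SCC)
    for the directed k-nearest-neighbor graph of the rows of X."""
    A = kneighbors_graph(X, n_neighbors=k, mode="connectivity")
    G = nx.from_scipy_sparse_array(A, create_using=nx.DiGraph)
    clustering = nx.average_clustering(G.to_undirected())
    largest_scc = max(nx.strongly_connected_components(G), key=len)
    return clustering, len(largest_scc)

# Example: cc, scc_size = knn_graph_stats(np.random.rand(1000, 16), k=10)
```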
We propose a method, based on persistent homology, to uncover topological properties of a priori unknown covariates of neuron activity. Our input data consist of spike train measurements of a set of neurons of interest, a candidate list of the known stimuli that govern neuron activity, and the corresponding state of the animal throughout the experiment performed. Using a generalized linear model for neuron activity and simple assumptions on the effects of the external stimuli, we infer away any contribution to the observed spike trains by the candidate stimuli. Persistent homology then reveals useful information about any further, unknown, covariates.
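One possible reading of this pipeline, sketched under simplifying assumptions of ours (a Poisson GLM per neuron on binned spike counts, correlation of residuals as the dissimilarity between neurons, and ripser for the persistence computation; none of these specific choices are claimed by the abstract):

```python
# Hedged sketch, not the authors' method: regress out candidate stimuli,
# then run persistent homology on the residual correlation structure.
import numpy as np
from sklearn.linear_model import PoissonRegressor
from ripser import ripser

def residual_persistence(spike_counts: np.ndarray, stimuli: np.ndarray, maxdim: int = 1):
    """spike_counts: (time_bins, n_neurons); stimuli: (time_bins, n_covariates)."""
    # Residual spike counts after fitting each neuron on the candidate stimuli.
    residuals = np.column_stack([
        y - PoissonRegressor().fit(stimuli, y).predict(stimuli)
        for y in spike_counts.T
    ])
    corr = np.corrcoef(residuals.T)     # neuron-by-neuron residual correlation
    dissimilarity = 1.0 - corr          # one simple choice of dissimilarity
    return ripser(dissimilarity, distance_matrix=True, maxdim=maxdim)["dgms"]
```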
Embedding into hyperbolic space is emerging as an effective representation technique for datasets that exhibit hierarchical structure. This development motivates the need for algorithms that can effectively extract knowledge and insights from data points embedded in negatively curved spaces. We focus on the problem of nearest neighbor search, a fundamental problem in data analysis. We present efficient algorithmic solutions that build upon established methods for nearest neighbor search in Euclidean space, allowing for easy adoption and integration with existing systems. We prove theoretical guarantees for our techniques, and our experiments demonstrate the effectiveness of our approach over competing algorithms on real datasets.
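For reference, the brute-force baseline under the Poincaré-ball model of hyperbolic space (an illustrative sketch of ours, not the paper's data structures) looks as follows; efficient methods would replace the linear scan with an index.

```python
# Minimal sketch: exact nearest neighbor search under the Poincare-ball distance.
import numpy as np

def poincare_distance(u: np.ndarray, v: np.ndarray) -> float:
    """Geodesic distance between two points of the open unit ball."""
    diff = np.sum((u - v) ** 2)
    denom = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
    return float(np.arccosh(1.0 + 2.0 * diff / denom))

def nearest_neighbor(points: np.ndarray, query: np.ndarray) -> int:
    """Index of the point closest to the query in hyperbolic distance."""
    return int(np.argmin([poincare_distance(p, query) for p in points]))
```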
A triplet comparison oracle on a set $S$ takes an object $x \in S$ and, for any pair $\{y, z\} \subset S \setminus \{x\}$, declares which of $y$ and $z$ is more similar to $x$. Partitioned Local Depth (PaLD) supplies a principled non-parametric partitioning of $S$ under such triplet comparisons but needs $O(n^2 \log n)$ oracle calls and $O(n^3)$ post-processing steps. We introduce Partitioned Nearest Neighbors Local Depth (PaNNLD), a computationally tractable variant of PaLD leveraging the $K$-nearest neighbors digraph on $S$. PaNNLD needs only $O(nK \log n)$ oracle calls, by replacing an oracle call by a coin flip when neither $y$ nor $z$ is adjacent to $x$ in the undirected version of the $K$-nearest neighbors digraph. By averaging over randomizations, PaNNLD subsequently requires (at best) only $O(nK^2)$ post-processing steps. Concentration of measure shows that the probability of a randomization-induced error $\delta$ in PaNNLD is no more than $2e^{-\delta^2 K^2}$.
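The oracle-call reduction can be sketched as follows (an illustration assuming a user-supplied triplet oracle and a precomputed neighbor-set dictionary; not the PaNNLD reference implementation). A triplet $(x, y, z)$ is sent to the oracle only when $y$ or $z$ is adjacent to $x$ in the undirected KNN graph; otherwise the comparison is resolved by a fair coin flip.

```python
# Illustrative sketch of the coin-flip substitution for non-neighbor triplets.
import random

def undirected_adjacent(knn: dict, x, y) -> bool:
    """Adjacency in the undirected version of the K-nearest-neighbors digraph."""
    return y in knn.get(x, set()) or x in knn.get(y, set())

def compare(oracle, knn: dict, x, y, z):
    """Return whichever of y, z is judged more similar to x."""
    if undirected_adjacent(knn, x, y) or undirected_adjacent(knn, x, z):
        return oracle(x, y, z)                 # genuine oracle call
    return y if random.random() < 0.5 else z   # coin flip instead of a call
```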