
Clustering Using Isoperimetric Number of Trees

Published by Amir Daneshgar
Publication date: 2012
Research field: Informatics Engineering
Paper language: English





In this paper we propose a graph-based data clustering algorithm based on exact clustering of a minimum spanning tree with respect to a minimum isoperimetry criterion. We show that our basic clustering algorithm runs in $O(n \log n)$ time, and in $O(n^2)$ (worst case) time with post-processing, where $n$ is the size of the data set. We also show that our generalized graph model, which also allows the use of potentials at vertices, can be used to extract a more detailed package of information, namely the {\it outlier profile} of the data set. In this direction we show that our approach can be used to define the concept of an outlier set in a precise way, and we propose approximation algorithms for finding such sets. We also provide a comparative performance analysis of our algorithm against related ones, showing that the new clustering algorithm (without the outlier extraction procedure) behaves quite effectively even on hard benchmarks and handmade examples.
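To make the tree-cutting idea concrete, here is a minimal Python sketch in the spirit of the abstract: build the minimum spanning tree of the data, then greedily cut $k-1$ edges. The cut score below (weight of the removed edge divided by the size of the smaller resulting component) is a simplified stand-in for the paper's exact isoperimetry criterion, and the naive greedy search does not attain the stated $O(n \log n)$ bound.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def _side(adj, i, j):
    """Vertices on j's side after deleting forest edge (i, j).
    Passing i = -1 (no such vertex) just collects j's component."""
    seen, stack = {j}, [j]
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if {u, v} == {i, j} or v in seen:
                continue
            seen.add(v)
            stack.append(v)
    return seen

def mst_isoperimetric_clusters(points, k):
    """Cut k - 1 MST edges, each chosen to minimize the simplified
    isoperimetric ratio w(e) / |smaller side of the cut|.
    Assumes strictly positive pairwise distances."""
    n = len(points)
    mst = minimum_spanning_tree(squareform(pdist(points))).toarray()
    adj = {v: set() for v in range(n)}
    weight = {}
    for i in range(n):
        for j in range(i + 1, n):
            w = max(mst[i, j], mst[j, i])   # scipy stores one direction only
            if w > 0:
                adj[i].add(j)
                adj[j].add(i)
                weight[(i, j)] = w
    for _ in range(k - 1):
        def ratio(edge):
            i, j = edge
            a = len(_side(adj, j, i))       # i's side of the cut
            b = len(_side(adj, i, j))       # j's side of the cut
            return weight[edge] / min(a, b)
        i, j = min(weight, key=ratio)       # best edge to cut
        adj[i].discard(j)
        adj[j].discard(i)
        del weight[(i, j)]
    # Label the connected components of the cut forest.
    labels, label = -np.ones(n, dtype=int), 0
    for s in range(n):
        if labels[s] == -1:
            for v in _side(adj, -1, s):
                labels[v] = label
            label += 1
    return labels
```

Cutting by this ratio prefers light edges that separate large, balanced components, rather than simply the heaviest MST edges.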




Read also

We propose a parallel graph-based data clustering algorithm using CUDA GPU, based on exact clustering of the minimum spanning tree in terms of a minimum isoperimetric criterion. We also provide a comparative performance analysis of our algorithm with other related ones, which demonstrates the general superiority of this parallel algorithm over other competing algorithms in terms of accuracy and speed.
This paper investigates some computational aspects of different isoperimetric problems on weighted trees. In this regard, we consider different connectivity parameters, called {\it minimum normalized cuts}/{\it isoperimetric numbers}, defined by taking the minimum of the maximum or the mean of the normalized outgoing flows from a set of subdomains of vertices, where these subdomains constitute a {\it partition}/{\it subpartition}. Following the main result of [A. Daneshgar {\it et al.}, {\it On the isoperimetric spectrum of graphs and its approximations}, JCTB, (2010)], it is known that the isoperimetric number and the minimum normalized cut can both be described as $\{0,1\}$-optimization programs, where the latter does {\it not} admit a relaxation to the reals. We show that the decision problem for the case of taking $k$-partitions and the maximum (called the max normalized cut problem {\rm NCP}$^M$), as well as the other two decision problems for the mean version (referred to as {\rm IPP}$^m$ and {\rm NCP}$^m$), are $NP$-complete problems. On the other hand, we show that the decision problem for the case of taking $k$-subpartitions and the maximum (called the max isoperimetric problem {\rm IPP}$^M$) can be solved in {\it linear time} for any weighted tree and any $k \geq 2$. Based on this fact, we provide polynomial-time $O(k)$-approximation algorithms for all different versions of the problem.
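For orientation, here is one plausible formalization of the four parameters named above, written under the assumption that the normalized outgoing flow of a subdomain $A$ is its boundary weight divided by its own weight; the paper's exact normalization may differ.

```latex
% Assumed form of the normalized outgoing flow of a subdomain A of vertices:
\[
  \rho(A) \;=\; \frac{w(\partial A)}{w(A)}, \qquad \emptyset \neq A \subseteq V .
\]
% Max and mean variants over k-partitions (NCP) and k-subpartitions (IPP):
\[
  \mathrm{NCP}^{M}_k(G) = \min_{\substack{\{A_1,\dots,A_k\}\\ \text{a partition of } V}} \max_{1 \le i \le k} \rho(A_i),
  \qquad
  \mathrm{NCP}^{m}_k(G) = \min_{\substack{\{A_1,\dots,A_k\}\\ \text{a partition of } V}} \frac{1}{k} \sum_{i=1}^{k} \rho(A_i),
\]
\[
  \mathrm{IPP}^{M}_k(G) = \min_{\substack{\{A_1,\dots,A_k\}\\ \text{a subpartition of } V}} \max_{1 \le i \le k} \rho(A_i),
  \qquad
  \mathrm{IPP}^{m}_k(G) = \min_{\substack{\{A_1,\dots,A_k\}\\ \text{a subpartition of } V}} \frac{1}{k} \sum_{i=1}^{k} \rho(A_i).
\]
```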
Understanding videos such as TV series and movies requires analyzing who the characters are and what they are doing. We address the challenging problem of clustering face tracks based on their identity. Different from previous work in this area, we choose to operate in a realistic and difficult setting where: (i) the number of characters is not known a priori; and (ii) face tracks belonging to minor or background characters are not discarded. To this end, we propose Ball Cluster Learning (BCL), a supervised approach to carve the embedding space into balls of equal size, one for each cluster. The learned ball radius is easily translated to a stopping criterion for iterative merging algorithms. This gives BCL the ability to estimate the number of clusters as well as their assignment, achieving promising results on commonly used datasets. We also present a thorough discussion of how existing metric learning literature can be adapted for this task.
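As a rough illustration of the stopping criterion, the Python sketch below performs centroid-based agglomerative merging that halts once no two cluster centroids are within twice the ball radius; the radius is assumed to come from a trained BCL model, and the merging details are illustrative rather than the authors' implementation.

```python
import numpy as np

def merge_with_ball_radius(embeddings, radius):
    """Agglomerative merging driven by a ball radius.

    embeddings : (n, d) array of face-track descriptors.
    radius     : assumed to be the ball radius learned by BCL.

    Repeatedly merge the two closest clusters (by centroid distance)
    and stop when the closest pair is farther apart than 2 * radius,
    i.e. when no two clusters could share one ball. The number of
    clusters is whatever remains -- no k is fixed in advance.
    """
    clusters = [[i] for i in range(len(embeddings))]
    centroids = [embeddings[i].astype(float) for i in range(len(embeddings))]
    while len(clusters) > 1:
        best, best_d = None, np.inf
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = np.linalg.norm(centroids[a] - centroids[b])
                if d < best_d:
                    best, best_d = (a, b), d
        if best_d > 2 * radius:   # stopping criterion from the radius
            break
        a, b = best
        clusters[a].extend(clusters.pop(b))
        centroids.pop(b)
        centroids[a] = embeddings[clusters[a]].mean(axis=0)
    return clusters
```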
Subspace clustering is the unsupervised grouping of points lying near a union of low-dimensional linear subspaces. Algorithms based directly on geometric properties of such data tend to either provide poor empirical performance, lack theoretical guarantees, or depend heavily on their initialization. We present a novel geometric approach to the subspace clustering problem that leverages ensembles of the K-subspaces (KSS) algorithm via the evidence accumulation clustering framework. Our algorithm, referred to as ensemble K-subspaces (EKSS), forms a co-association matrix whose (i,j)th entry is the number of times points i and j are clustered together by several runs of KSS with random initializations. We prove general recovery guarantees for any algorithm that forms an affinity matrix with entries close to a monotonic transformation of pairwise absolute inner products. We then show that a specific instance of EKSS results in an affinity matrix with entries of this form, and hence our proposed algorithm can provably recover subspaces under similar conditions to state-of-the-art algorithms. This is, to the best of our knowledge, the first recovery guarantee for evidence accumulation clustering and for KSS variants. We show on synthetic data that our method performs well in the traditionally challenging settings of subspaces with large intersection, subspaces with small principal angles, and noisy data. Finally, we evaluate our algorithm on six common benchmark datasets and show that, unlike existing methods, EKSS achieves excellent empirical performance for both small and large numbers of points per subspace.
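The co-association construction is straightforward to sketch. Below, a simplified K-subspaces routine (alternating nearest-subspace assignment with SVD refits of linear bases) is restarted several times, and the fraction of runs in which two points co-cluster becomes the affinity entry; the thresholding and spectral-clustering steps that EKSS applies on top are omitted.

```python
import numpy as np

def kss(X, n_subspaces, dim, iters=20, seed=None):
    """One K-subspaces (KSS) run: alternate between assigning each point
    to the nearest dim-dimensional linear subspace and refitting each
    subspace basis from its assigned points via SVD. Simplified sketch."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    labels = rng.integers(n_subspaces, size=n)   # random initialization
    for _ in range(iters):
        bases = []
        for c in range(n_subspaces):
            pts = X[labels == c]
            if len(pts) < dim:                   # degenerate cluster: resample
                pts = X[rng.integers(n, size=dim + 1)]
            _, _, vt = np.linalg.svd(pts, full_matrices=False)
            bases.append(vt[:dim].T)             # orthonormal basis, (d, dim)
        # Distance of every point to every subspace (projection residual).
        resid = np.stack(
            [np.linalg.norm(X - (X @ B) @ B.T, axis=1) for B in bases], axis=1
        )
        labels = resid.argmin(axis=1)
    return labels

def ekss_affinity(X, n_subspaces, dim, runs=50):
    """Co-association matrix: entry (i, j) is the fraction of KSS runs
    (random restarts) in which points i and j were clustered together."""
    n = X.shape[0]
    A = np.zeros((n, n))
    for r in range(runs):
        lab = kss(X, n_subspaces, dim, seed=r)
        A += lab[:, None] == lab[None, :]
    return A / runs
```

The resulting matrix can then be handed to a spectral clusterer, e.g. sklearn.cluster.SpectralClustering(affinity='precomputed').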
In this paper we propose a unified framework to simultaneously discover the number of clusters and group the data points into them using subspace clustering. Real data distributed in a high-dimensional space can be disentangled into a union of low-dimensional subspaces, which can benefit various applications. To explore such intrinsic structure, state-of-the-art subspace clustering approaches often optimize a self-representation problem among all samples to construct a pairwise affinity graph for spectral clustering. However, a graph with pairwise similarities lacks robustness for segmentation, especially for samples which lie on the intersection of two subspaces. To address this problem, we design a hyper-correlation based data structure termed the \textit{triplet relationship}, which reveals high relevance and local compactness among three samples. The triplet relationship can be derived from the self-representation matrix and utilized to iteratively assign the data points to clusters. The three samples in each triplet are encouraged to be highly correlated and are considered as a meta-element during clustering, which shows more robustness than pairwise relationships when segmenting two densely distributed subspaces. Based on the triplet relationship, we propose a unified optimizing scheme to automatically calculate clustering assignments. Specifically, we optimize a model selection reward and a fusion reward by simultaneously maximizing the similarity of triplets from different clusters while minimizing the correlation of triplets from the same cluster. The proposed algorithm also automatically reveals the number of clusters and fuses groups to avoid over-segmentation. Extensive experimental results on both synthetic and real-world datasets validate the effectiveness and robustness of the proposed method.
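One loose reading of the triplet construction, sketched in Python below: each sample is grouped with the two samples carrying its strongest symmetrized self-representation coefficients, and overlapping triplets are then fused by union-find, a crude stand-in for the paper's reward-based optimization; the matrix name C and the threshold tau are illustrative.

```python
import numpy as np

def triplets_from_self_representation(C, tau=0.0):
    """Form one triplet per sample from a self-representation matrix C
    (X ~ X @ C): the sample plus the two samples with the largest
    symmetrized |C| coefficients. tau filters out weak triplets."""
    S = (np.abs(C) + np.abs(C).T) / 2   # symmetrized affinity
    np.fill_diagonal(S, 0)
    triplets = []
    for i in range(S.shape[0]):
        j, k = np.argsort(S[i])[-2:]    # two most correlated samples
        if min(S[i, j], S[i, k]) > tau:
            triplets.append((i, int(j), int(k)))
    return triplets

def fuse_triplets(triplets, n):
    """Union-find fusion of overlapping triplets into clusters; the
    number of clusters is whatever survives the fusion."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    for a, b, c in triplets:
        parent[find(b)] = find(a)
        parent[find(c)] = find(a)
    return [find(i) for i in range(n)]
```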