
Deep Embedded Multi-view Clustering with Collaborative Training

Posted by: Jie Xu
Publication date: 2020
Paper language: English





Multi-view clustering has attracted increasing attention in recent years by utilizing information from multiple views. However, existing multi-view clustering methods either suffer from high computation and space complexity or lack representation capability. To address these issues, we propose deep embedded multi-view clustering with collaborative training (DEMVC) in this paper. Firstly, the embedded representations of the multiple views are learned individually by deep autoencoders. Then, both the consensus and the complementarity of the views are taken into account and a novel collaborative training scheme is proposed. Concretely, the feature representations and cluster assignments of all views are learned collaboratively. A new consistency strategy for cluster center initialization is further developed to improve multi-view clustering performance under collaborative training. Experimental results on several popular multi-view datasets show that DEMVC achieves significant improvements over state-of-the-art methods.
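To make the training scheme concrete, the following is a minimal PyTorch sketch, not the authors' released code: it assumes DEC-style soft assignments with a Student's t kernel and realizes the collaboration by training every view toward a target distribution computed from a reference view; the class and function names (`Autoencoder`, `soft_assign`, `collaborative_step`), the network widths and the loss weight `gamma` are illustrative choices of ours.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Autoencoder(nn.Module):
    """One autoencoder per view; the bottleneck is the embedded representation."""
    def __init__(self, in_dim, z_dim=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                     nn.Linear(256, z_dim))
        self.decoder = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(),
                                     nn.Linear(256, in_dim))

    def forward(self, x):
        z = self.encoder(x)
        return z, self.decoder(z)

def soft_assign(z, centers, alpha=1.0):
    # Student's t kernel between embeddings and cluster centers (DEC-style).
    q = (1.0 + torch.cdist(z, centers) ** 2 / alpha) ** (-(alpha + 1) / 2)
    return q / q.sum(dim=1, keepdim=True)

def target_distribution(q):
    # Sharpened target distribution P derived from the soft assignments Q.
    w = q ** 2 / q.sum(dim=0)
    return w / w.sum(dim=1, keepdim=True)

def collaborative_step(xs, aes, centers, optimizer, ref=0, gamma=0.1):
    """One update: every view is trained toward the target distribution
    computed from the reference view, so cluster assignments stay shared."""
    optimizer.zero_grad()
    zs, recon = [], 0.0
    for x, ae in zip(xs, aes):
        z, x_hat = ae(x)
        zs.append(z)
        recon = recon + F.mse_loss(x_hat, x)   # per-view reconstruction loss
    with torch.no_grad():
        p = target_distribution(soft_assign(zs[ref], centers[ref]))
    clu = sum(F.kl_div(soft_assign(z, c).log(), p, reduction="batchmean")
              for z, c in zip(zs, centers))    # shared clustering target
    loss = recon + gamma * clu
    loss.backward()
    optimizer.step()
    return loss.item()
```

In such a sketch, `centers` would be a list of per-view `nn.Parameter` tensors included in the optimizer, initialized consistently across views (for example, k-means centers computed on one view's pretrained embedding and reused for all views, in the spirit of the paper's consistency strategy), and `ref` could be rotated across views between epochs so the collaboration works in both directions.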




Read also

Multi-view clustering (MVC) has been extensively studied in recent years as a way to integrate information from multiple sources. One typical type of MVC method is based on matrix factorization, which effectively performs dimension reduction and clustering. However, the existing approaches can be further improved with the following considerations: i) the current one-layer matrix factorization framework cannot fully exploit the useful data representations; ii) most algorithms focus only on the shared information while ignoring the view-specific structure, leading to suboptimal solutions; iii) the partition-level information has not been utilized in existing work. To solve the above issues, we propose a novel multi-view clustering algorithm via deep matrix decomposition and partition alignment. To be specific, the partition representations of each view are obtained through deep matrix decomposition and then jointly utilized with the optimal partition representation for fusing multi-view information. Finally, an alternating optimization algorithm is developed to solve the optimization problem with proven convergence. Comprehensive experimental results on six benchmark multi-view datasets clearly demonstrate the effectiveness of the proposed algorithm against SOTA methods.
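As a rough illustration of the deep-decomposition-plus-alignment idea, and not the paper's actual algorithm, the sketch below uses greedy layer-wise NMF as a stand-in for deep matrix decomposition, aligns the per-view partition representations with orthogonal Procrustes rotations, and clusters the averaged result; the layer sizes, the nonnegativity assumption and the helper names are all illustrative.

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes
from sklearn.cluster import KMeans
from sklearn.decomposition import NMF

def deep_decompose(X, layer_sizes, seed=0):
    """Greedy layer-wise factorization: X ~ H1 B1, H1 ~ H2 B2, ...
    The last H (n_samples x k) acts as the view's partition representation."""
    H = X
    for k in layer_sizes:
        nmf = NMF(n_components=k, init="nndsvda", max_iter=500, random_state=seed)
        H = nmf.fit_transform(H)
    return H

def fuse_partitions(Hs):
    """Rotate every view's partition onto the first one (orthogonal
    Procrustes) and average, giving a crude consensus partition."""
    ref, aligned = Hs[0], [Hs[0]]
    for H in Hs[1:]:
        R, _ = orthogonal_procrustes(H, ref)
        aligned.append(H @ R)
    return np.mean(aligned, axis=0)

# Toy usage: two nonnegative "views" of the same 300 samples, 10 clusters.
rng = np.random.default_rng(0)
views = [np.abs(rng.normal(size=(300, 50))), np.abs(rng.normal(size=(300, 80)))]
Hs = [deep_decompose(X, layer_sizes=[40, 10]) for X in views]
labels = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(fuse_partitions(Hs))
```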
In this paper, we propose an unsupervised collaborative representation deep network (UCRDNet) which consists of a novel collaborative representation RBM (crRBM) and collaborative representation GRBM (crGRBM). The UCRDNet is a novel deep collaborative feature extractor for exploring more sophisticated probabilistic models of real-valued and binary data. Unlike traditional representation methods, one similarity relation between the input instances and another similarity relation between the features of the input instances are collaboratively fused together in the representation process of the crGRBM and crRBM models. Here, we use the Locality Sensitive Hashing (LSH) method to divide the input instance matrix into many mini-blocks which contain similar instances and local features. Then, we expect the hidden-layer feature units of each block to gather toward the block center as much as possible during training of the crRBM and crGRBM. Hence, the correlations between instances and features are fused into the hidden-layer features as collaborative relations. In the experiments, we use the K-means and Spectral Clustering (SC) algorithms on top of four contrasting deep networks to verify the deep collaborative representation capability of the UCRDNet architecture. One UCRDNet architecture is composed of a crGRBM and two crRBMs for modeling real-valued data, and another is composed of three crRBMs for modeling binary data. The experimental results show that the proposed UCRDNet outperforms the Autoencoder and DeepFS deep networks (which lack a collaborative representation strategy) for unsupervised clustering on the MSRA-MM2.0 and UCI datasets. Furthermore, the proposed UCRDNet shows stronger collaborative representation capability than the CDL deep collaborative networks for unsupervised clustering.
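A heavily simplified sketch of the collaborative idea, not the UCRDNet implementation, is given below: a plain Bernoulli RBM trained with CD-1, with random-hyperplane LSH forming the blocks and an extra penalty that pulls each sample's hidden activation toward its block mean; the penalty weight `lam`, the hashing depth and all function names are assumptions made for illustration only.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lsh_blocks(X, n_bits=4, seed=0):
    """Random-hyperplane LSH: samples sharing a sign pattern share a block."""
    rng = np.random.default_rng(seed)
    planes = rng.normal(size=(X.shape[1], n_bits))
    codes = (X @ planes > 0).astype(int)
    return codes @ (2 ** np.arange(n_bits))       # block id per sample

def train_cr_rbm(X, n_hidden=64, epochs=50, lr=0.05, lam=0.1, seed=0):
    """Bernoulli RBM trained with CD-1 plus a 'collaborative' penalty that
    pulls each sample's hidden activation toward its LSH block's mean."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = 0.01 * rng.normal(size=(d, n_hidden))
    b_v, b_h = np.zeros(d), np.zeros(n_hidden)
    blocks = lsh_blocks(X)
    for _ in range(epochs):
        # --- CD-1: positive phase, sampled hiddens, one reconstruction step ---
        ph = sigmoid(X @ W + b_h)
        h = (rng.random(ph.shape) < ph).astype(float)
        pv = sigmoid(h @ W.T + b_v)
        ph2 = sigmoid(pv @ W + b_h)
        dW = (X.T @ ph - pv.T @ ph2) / n
        # --- collaborative term: move hidden features toward block centers ---
        centers = np.zeros_like(ph)
        for k in np.unique(blocks):
            centers[blocks == k] = ph[blocks == k].mean(axis=0)
        grad_h = (centers - ph) * ph * (1 - ph)   # descent direction of the penalty
        dW += lam * X.T @ grad_h / n
        W += lr * dW
        b_v += lr * (X - pv).mean(axis=0)
        b_h += lr * ((ph - ph2).mean(axis=0) + lam * grad_h.mean(axis=0))
    return W, b_h                                 # use sigmoid(X @ W + b_h) as features
```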
We present the first deep learning based architecture for collective matrix tri-factorization (DCMTF) of arbitrary collections of matrices, also known as augmented multi-view data. DCMTF can be used for multi-way spectral clustering of heterogeneous collections of relational data matrices to discover latent clusters in each input matrix, across both dimensions, as well as the strengths of association across clusters. The source code for DCMTF is available on our public repository: https://bitbucket.org/cdal/dcmtf_generic
Multi-view spectral clustering can effectively reveal the intrinsic cluster structure among data by performing clustering on the learned optimal embedding across views. Though demonstrating promising performance in various applications, most existing methods linearly combine a group of pre-specified first-order Laplacian matrices to construct the optimal Laplacian matrix, which may result in limited representation capability and insufficient information exploitation. Also, storing and operating on $n\times n$ Laplacian matrices incurs intensive storage and computation complexity. To address these issues, this paper first proposes a multi-view spectral clustering algorithm that learns a high-order optimal neighborhood Laplacian matrix, and then extends it to a late-fusion version for accurate and efficient multi-view clustering. Specifically, our proposed algorithm generates the optimal Laplacian matrix by searching the neighborhood of the linear combination of both the first-order and high-order base Laplacian matrices simultaneously. In this way, the representative capacity of the learned optimal Laplacian matrix is enhanced, which helps better exploit the hidden high-order connection information among data and leads to improved clustering performance. We design an efficient algorithm with proven convergence to solve the resulting optimization problem. Extensive experimental results on nine datasets demonstrate the superiority of our algorithm against state-of-the-art methods, which verifies the effectiveness and advantages of the proposed algorithm.
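To make the high-order idea concrete, here is a hedged sketch that builds per-view normalized Laplacians, adds their matrix powers as high-order bases, and combines them with fixed uniform weights before a standard spectral embedding; the paper instead learns the optimal neighborhood combination, which this sketch deliberately omits.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.cluster import KMeans
from sklearn.neighbors import kneighbors_graph

def normalized_laplacian(X, n_neighbors=10):
    """Symmetrically normalized Laplacian of a kNN graph built on one view."""
    A = kneighbors_graph(X, n_neighbors, mode="connectivity").toarray()
    A = np.maximum(A, A.T)                        # symmetrize the kNN graph
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    return np.eye(len(X)) - D_inv_sqrt @ A @ D_inv_sqrt

def high_order_spectral_clustering(views, n_clusters, order=2):
    bases = []
    for X in views:
        L = normalized_laplacian(X)
        for p in range(1, order + 1):             # first- and higher-order bases
            bases.append(np.linalg.matrix_power(L, p))
    L_opt = sum(bases) / len(bases)               # uniform combination (placeholder)
    vals, vecs = eigh(L_opt)                      # smallest eigenvalues first
    U = vecs[:, :n_clusters]
    U = U / (np.linalg.norm(U, axis=1, keepdims=True) + 1e-12)
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(U)
```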
Multi-view clustering methods have been a focus in recent years because of their superior clustering performance. However, typical traditional multi-view clustering algorithms still have shortcomings in some respects, such as the removal of redundant information, the utilization of the various views and the fusion of multi-view features. In view of these problems, this paper proposes a new multi-view clustering method: low-rank subspace multi-view clustering based on adaptive graph regularization. We integrate two new data matrix decomposition models into a unified optimization model. In this framework, we address the significance of the common knowledge shared across views and the unique knowledge of each view by placing new low-rank and sparse constraints on the sparse subspace matrix. To ensure effective sparse representation and clustering performance on the original data matrix, adaptive graph regularization and unsupervised clustering constraints are also incorporated in the proposed model to preserve the internal structural features of the data. Finally, the proposed method is compared with several state-of-the-art algorithms. Experimental results on five widely used multi-view benchmarks show that our proposed algorithm surpasses other state-of-the-art methods by a clear margin.
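A generic objective of the kind described, written in our own notation rather than the paper's exact formulation, could take the following form, where $C$ is the low-rank representation shared across views, $E^{(v)}$ the sparse view-specific part, $F$ a relaxed orthogonal cluster indicator, and $L_W$ the Laplacian of an adaptively learned affinity $W$:

```latex
% Generic form only; symbols and weights are ours, not the paper's.
\min_{C,\;\{E^{(v)}\},\;F}\;
  \sum_{v=1}^{V}\Big(
      \tfrac{1}{2}\,\bigl\|X^{(v)} - X^{(v)}\bigl(C + E^{(v)}\bigr)\bigr\|_F^2
    + \lambda_1 \|C\|_*
    + \lambda_2 \|E^{(v)}\|_1
  \Big)
  + \lambda_3\,\operatorname{tr}\!\bigl(F^{\top} L_W F\bigr)
  \quad \text{s.t. } F^{\top} F = I
```

Here the nuclear norm keeps the shared part low-rank, the $\ell_1$ term keeps the view-specific parts sparse, and the trace term with the orthogonality constraint plays the role of the graph-regularized clustering constraint.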

