
Structural Deep Clustering Network

Published by Deyu Bo
Publication date: 2020
Research language: English





Clustering is a fundamental task in data analysis. Recently, deep clustering, which derives its inspiration primarily from deep learning approaches, has achieved state-of-the-art performance and attracted considerable attention. Current deep clustering methods usually boost clustering results by means of the powerful representation ability of deep learning, e.g., autoencoders, suggesting that learning an effective representation for clustering is a crucial requirement. The strength of deep clustering methods is to extract useful representations from the data itself, rather than from the structure of the data, which has received scarce attention in representation learning. Motivated by the great success of Graph Convolutional Networks (GCN) in encoding graph structure, we propose a Structural Deep Clustering Network (SDCN) to integrate structural information into deep clustering. Specifically, we design a delivery operator to transfer the representations learned by the autoencoder to the corresponding GCN layer, and a dual self-supervised mechanism to unify these two different deep neural architectures and guide the update of the whole model. In this way, the multiple structures of the data, from low-order to high-order, are naturally combined with the multiple representations learned by the autoencoder. Furthermore, we theoretically analyze the delivery operator: with it, the GCN improves the autoencoder-specific representation as a high-order graph regularization constraint, and the autoencoder helps alleviate the over-smoothing problem in the GCN. Through comprehensive experiments, we demonstrate that our proposed model consistently performs better than state-of-the-art techniques.
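To make the delivery operator concrete, here is a minimal sketch of one such layer, assuming PyTorch. The mixing weight eps = 0.5 follows the paper's description; the class and argument names (DeliveryGCNLayer, adj_norm, and so on) are illustrative, not the authors' code.

```python
import torch
import torch.nn.functional as F
from torch import nn

class DeliveryGCNLayer(nn.Module):
    """One GCN layer fed by an SDCN-style delivery operator."""

    def __init__(self, in_dim, out_dim, eps=0.5):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(in_dim, out_dim))
        nn.init.xavier_uniform_(self.weight)
        self.eps = eps  # balance between GCN and autoencoder representations

    def forward(self, z_prev, h_prev, adj_norm):
        # Delivery operator: mix the previous GCN output z_prev with the
        # autoencoder representation h_prev of the same depth ...
        mixed = (1.0 - self.eps) * z_prev + self.eps * h_prev
        # ... then propagate over the normalized adjacency
        # D^{-1/2} (A + I) D^{-1/2}, as in a standard GCN layer.
        return F.relu(adj_norm @ mixed @ self.weight)
```

Stacking such layers lets structural information at every depth regularize the autoencoder features, which is the high-order graph regularization effect the analysis above refers to.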




Read also

In this paper, we propose an unsupervised collaborative representation deep network (UCRDNet) which consists of a novel collaborative representation RBM (crRBM) and collaborative representation GRBM (crGRBM). The UCRDNet is a novel deep collaborative feature extractor for exploring more sophisticated probabilistic models of real-valued and binary data. Unlike traditional representation methods, one similarity relation between the input instances and another similarity relation between the features of the input instances are collaboratively fused together in the representation process of the crGRBM and crRBM models. Here, we use the Locality Sensitive Hashing (LSH) method to divide the input instance matrix into many mini blocks which contain similar instances and local features. Then, we expect the hidden-layer feature units of each block to gather toward the block center as much as possible during training of the crRBM and crGRBM. Hence, the correlations between instances and features, as collaborative relations, are fused into the hidden-layer features. In the experiments, we use the K-means and Spectral Clustering (SC) algorithms on four contrasting deep networks to verify the deep collaborative representation capability of the UCRDNet architecture. One UCRDNet architecture is composed of a crGRBM and two crRBMs for modeling real-valued data, and another is composed of three crRBMs for modeling binary data. The experimental results show that the proposed UCRDNet outperforms the Autoencoder and DeepFS deep networks (which lack a collaborative representation strategy) for unsupervised clustering on the MSRA-MM2.0 and UCI datasets. Furthermore, the proposed UCRDNet shows stronger collaborative representation capabilities than the CDL deep collaborative networks for unsupervised clustering.
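The "gather toward the block center" idea above can be expressed as a simple regularizer. The sketch below, assuming PyTorch, is illustrative only: block_ids would come from an LSH partition of the inputs, h from an RBM's hidden layer, and the function name is hypothetical.

```python
import torch

def block_center_penalty(h, block_ids):
    """Penalize hidden features for straying from their LSH block's mean.

    h         : (n, d) hidden-layer features
    block_ids : (n,)   integer block label per instance from LSH
    """
    penalty = h.new_tensor(0.0)
    for b in block_ids.unique():
        hb = h[block_ids == b]                           # features in block b
        penalty = penalty + ((hb - hb.mean(dim=0)) ** 2).sum()
    return penalty / h.shape[0]
```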
Deep clustering is a fundamental yet challenging task for data analysis. Recently, we have witnessed a strong trend toward combining autoencoders and graph neural networks to exploit structural information for clustering performance enhancement. However, we observe that the existing literature 1) lacks a dynamic fusion mechanism to selectively integrate and refine the information of graph structure and node attributes for consensus representation learning; and 2) fails to extract information from both sides for robust target distribution (i.e., ground-truth soft labels) generation. To tackle these issues, we propose a Deep Fusion Clustering Network (DFCN). Specifically, in our network, an interdependency-learning-based Structure and Attribute Information Fusion (SAIF) module is proposed to explicitly merge the representations learned by an autoencoder and a graph autoencoder for consensus representation learning. Also, a reliable target distribution generation measure and a triplet self-supervision strategy, which facilitate cross-modality information exploitation, are designed for network training. Extensive experiments on six benchmark datasets demonstrate that the proposed DFCN consistently outperforms state-of-the-art deep clustering methods.
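As a rough illustration of the fusion idea (not the authors' SAIF module, which includes further refinement steps), a minimal sketch assuming PyTorch might mix the two embeddings with a learnable weight and refine the result over the graph:

```python
import torch
from torch import nn

class SimpleFusion(nn.Module):
    """Illustrative cross-modality fusion of AE and GAE embeddings."""

    def __init__(self):
        super().__init__()
        self.alpha = nn.Parameter(torch.tensor(0.5))  # learnable mixing weight

    def forward(self, z_ae, z_gae, adj_norm):
        z = self.alpha * z_ae + (1.0 - self.alpha) * z_gae  # consensus mix
        return adj_norm @ z  # structure-aware refinement over the graph
```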
We propose a deep learning approach for discovering kernels tailored to identifying clusters over sample data. Our neural network produces sample embeddings that are motivated by, and at least as expressive as, spectral clustering. Our training objective, based on the Hilbert-Schmidt Independence Criterion, can be optimized via gradient adaptations on the Stiefel manifold, leading to significant acceleration over spectral methods relying on eigendecompositions. Finally, our trained embedding can be directly applied to out-of-sample data. We show experimentally that our approach outperforms several state-of-the-art deep clustering methods, as well as traditional approaches such as $k$-means and spectral clustering, over a broad array of real-life and synthetic datasets.
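For reference, the biased empirical HSIC between two Gram matrices, which such an objective builds on, is $\mathrm{tr}(KHLH)/(n-1)^2$ with $H$ the centering matrix. A minimal NumPy sketch (illustrative, not the authors' implementation):

```python
import numpy as np

def hsic(K, L):
    """Biased empirical HSIC between Gram matrices K and L, both (n, n)."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2
```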
A common approach for compressing NLP networks is to encode the embedding layer as a matrix $A \in \mathbb{R}^{n \times d}$, compute its rank-$j$ approximation $A_j$ via SVD, and then factor $A_j$ into a pair of matrices that correspond to smaller fully-connected layers to replace the original embedding layer. Geometrically, the rows of $A$ represent points in $\mathbb{R}^d$, and the rows of $A_j$ represent their projections onto the $j$-dimensional subspace that minimizes the sum of squared distances (errors) to the points. In practice, these rows of $A$ may be spread around $k > 1$ subspaces, so factoring $A$ based on a single subspace may lead to large errors that turn into large drops in accuracy. Inspired by \emph{projective clustering} from computational geometry, we suggest replacing this subspace by a set of $k$ subspaces, each of dimension $j$, that minimizes the sum of squared distances over every point (row in $A$) to its \emph{closest} subspace. Based on this approach, we provide a novel architecture that replaces the original embedding layer by a set of $k$ small layers that operate in parallel and are then recombined with a single fully-connected layer. Extensive experimental results on the GLUE benchmark yield networks that are both more accurate and smaller compared to the standard matrix factorization (SVD). For example, we further compress DistilBERT by reducing the size of the embedding layer by $40\%$ while incurring only a $0.5\%$ average drop in accuracy over all nine GLUE tasks, compared to a $2.8\%$ drop using the existing SVD approach. On RoBERTa we achieve $43\%$ compression of the embedding layer with less than a $0.8\%$ average drop in accuracy, as compared to a $3\%$ drop previously. Open code for reproducing and extending our results is provided.
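The SVD baseline being improved on is easy to state in code. The sketch below (NumPy, with illustrative names) factors the rank-$j$ approximation into the two smaller layers the abstract describes; the projective-clustering variant would apply this per cluster of rows rather than once globally.

```python
import numpy as np

def factor_embedding(A, j):
    """Factor an (n, d) embedding matrix into two layers of rank j.

    Returns `first` (n x j) and `second` (j x d) such that
    first @ second equals the best rank-j approximation A_j of A.
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    first = U[:, :j] * s[:j]   # plays the role of the smaller lookup table
    second = Vt[:j, :]         # plays the role of a small dense layer
    return first, second
```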
We address the problem of simultaneously learning a k-means clustering and a deep feature representation from unlabelled data, which is of interest due to the potential of deep k-means to outperform traditional two-step feature-extraction and shallow-clustering strategies. We achieve this by developing a gradient estimator for the non-differentiable k-means objective via the Gumbel-Softmax reparameterisation trick. In contrast to previous attempts at deep clustering, our concrete k-means model can be optimised with respect to the canonical k-means objective and is easily trained end-to-end without resorting to alternating optimisation. We demonstrate the efficacy of our method on standard clustering benchmarks.
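The core trick can be sketched in a few lines of PyTorch. This is an illustrative reading of the abstract, not the authors' code, and the temperature tau = 0.5 is an arbitrary choice:

```python
import torch
import torch.nn.functional as F

def soft_kmeans_loss(z, centroids, tau=0.5):
    """Relaxed k-means objective with differentiable cluster assignments.

    z         : (n, d) embeddings produced by the network
    centroids : (k, d) learnable cluster centres
    """
    d2 = torch.cdist(z, centroids) ** 2                   # (n, k) squared distances
    # Gumbel-Softmax over negative distances yields near-one-hot,
    # differentiable assignments, so gradients reach both z and centroids.
    assign = F.gumbel_softmax(-d2, tau=tau, hard=False)   # (n, k)
    return (assign * d2).sum(dim=1).mean()                # expected k-means cost
```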
