
Automatic Image Pixel Clustering based on Mussels Wandering Optimization

 Added by Xin Zhong
Publication date: 2019
Language: English





Image segmentation, viewed as a clustering problem, aims to identify pixel groups in an image without any preliminary labels. It remains a challenge in machine vision because of the variations in the size and shape of image segments. Furthermore, determining the number of segments in an image is NP-hard without prior knowledge of the image content. This paper presents an automatic color image pixel clustering scheme based on mussels wandering optimization. By applying an activation variable to determine the number of clusters while optimizing the cluster centers, an image is segmented with minimal prior knowledge and human intervention. By revising the within- and between-class sum of squares ratio for arbitrary natural image content, we provide a novel fitness function for image pixel clustering tasks. Comprehensive empirical studies against other state-of-the-art competitors on synthetic data and the ASD dataset demonstrate the promising performance of the proposed scheme.
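The abstract does not give the revised fitness function explicitly, so the sketch below only illustrates the general within-/between-class sum-of-squares ratio idea, written in NumPy with assumed names (`wb_ratio_fitness`, a fixed pool of candidate `centers`, and a binary `active` vector playing the role of the activation variable).

```python
import numpy as np

def wb_ratio_fitness(pixels, centers, active):
    """Illustrative within-/between-class sum-of-squares ratio (lower is better).

    pixels  : (N, 3) array of color values.
    centers : (K_max, 3) pool of candidate cluster centers.
    active  : (K_max,) binary activation flags selecting which centers are used,
              so the number of clusters is decided jointly with the centers.
    """
    used = centers[active.astype(bool)]
    if len(used) < 2:
        return np.inf  # need at least two clusters for a meaningful ratio
    # assign each pixel to its nearest active center
    dist = np.linalg.norm(pixels[:, None, :] - used[None, :, :], axis=2)
    labels = dist.argmin(axis=1)
    overall_mean = pixels.mean(axis=0)
    wcss = bcss = 0.0
    for k in range(len(used)):
        members = pixels[labels == k]
        if len(members) == 0:
            continue
        mean_k = members.mean(axis=0)
        wcss += ((members - mean_k) ** 2).sum()                        # within-class scatter
        bcss += len(members) * ((mean_k - overall_mean) ** 2).sum()    # between-class scatter
    return wcss / (bcss + 1e-12)
```

In the scheme described above, each mussel's position would encode both the candidate centers and the activation flags, and the optimizer would minimize a revised ratio of this kind; the exact revision is not stated in the abstract.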



Many approaches to 3D image segmentation are based on hierarchical clustering of supervoxels into image regions. Here we describe a distributed algorithm capable of handling a tremendous number of supervoxels. The algorithm works recursively: the regions are divided into chunks that are processed independently in parallel by multiple workers. At each round of the recursive procedure, the chunk size in all dimensions is doubled until a single chunk encompasses the entire image. The final result is provably independent of the chunking scheme, and the same as if the entire image were processed without division into chunks. This is nontrivial because a pair of adjacent regions is scored by some statistical property (e.g., mean or median) of the affinities at the interface, and the interface may extend over arbitrarily many chunks. The trick is to delay merge decisions for regions that touch chunk boundaries, and only complete them in a later round after the regions are fully contained within a chunk. We demonstrate the algorithm by clustering an affinity graph with over 1.5 trillion edges between 135 billion supervoxels derived from a 3D electron microscopic brain image.
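As a rough illustration of the delayed-merge rule, the pure-Python sketch below (toy data structures and a hypothetical `process_chunk` helper, not the authors' implementation) defers any merge whose regions touch the current chunk boundary to a later round, when the doubled chunk contains the full interface.

```python
def process_chunk(regions, edges, chunk_bounds, threshold):
    """One round of chunk-local agglomeration with deferred boundary merges.

    regions      : dict region_id -> set of (x, y, z) voxel coordinates (toy).
    edges        : dict (r1, r2) -> list of affinities on the interface seen so far.
    chunk_bounds : ((x0, y0, z0), (x1, y1, z1)) extent of this chunk.
    """
    (x0, y0, z0), (x1, y1, z1) = chunk_bounds

    def touches_boundary(rid):
        return any(x in (x0, x1) or y in (y0, y1) or z in (z0, z1)
                   for x, y, z in regions[rid])

    merged, deferred = [], []
    # score each adjacent pair by the mean affinity of its (partial) interface
    for (r1, r2), affs in sorted(edges.items(),
                                 key=lambda kv: -sum(kv[1]) / len(kv[1])):
        if touches_boundary(r1) or touches_boundary(r2):
            deferred.append((r1, r2))      # decide later, once fully contained in a chunk
        elif sum(affs) / len(affs) > threshold:
            merged.append((r1, r2))        # interface is complete inside this chunk
    return merged, deferred
```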
Hashing techniques, also known as binary code learning, have recently gained increasing attention in large-scale data analysis and storage. Generally, most existing hash clustering methods are single-view ones, which lack complete structure or complementary information from multiple views. For clustering tasks, most prior research focuses on learning discrete hash codes, while few works take the original data structure into consideration. To address these problems, we propose a novel binary code algorithm for clustering that adopts graph embedding to preserve the original data structure, called Graph-based Multi-view Binary Learning (GMBL) in this paper. GMBL mainly focuses on encoding the information of multiple views into a compact binary code, which exploits complementary information from multiple views. In particular, in order to maintain the graph-based structure of the original data, we adopt a Laplacian matrix to preserve the local linear relationships of the data and map them to the Hamming space. Considering that different views make distinctive contributions to the final clustering results, GMBL adopts a strategy of automatically assigning weights to each view to better guide the clustering. Finally, an alternating iterative optimization method is adopted to optimize the discrete binary codes directly instead of relaxing the binary constraint in two steps. Experiments on five public datasets demonstrate the superiority of our proposed method compared with previous approaches in terms of clustering performance.
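To illustrate the graph-embedding ingredient, the scikit-learn/NumPy sketch below builds an unnormalized k-nearest-neighbor graph Laplacian for a single view; the function name and the simple connectivity weighting are assumptions for illustration rather than GMBL's exact construction.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

def view_laplacian(X, n_neighbors=5):
    """Unnormalized k-NN graph Laplacian L = D - W for one view.

    X : (N, d) feature matrix of a single view. A graph-embedding objective of
    the kind described above would use L to keep samples that are close in the
    original space close in the learned Hamming space as well.
    """
    W = kneighbors_graph(X, n_neighbors, mode="connectivity", include_self=False)
    W = 0.5 * (W + W.T).toarray()      # symmetrize the adjacency matrix
    D = np.diag(W.sum(axis=1))         # degree matrix
    return D - W
```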
V. Grabski, 2009
Here, an image restoration based on pixel simultaneous detection probabilities (PSDP) is proposed. These probabilities can be precisely determined by means of correlation measurements [NIMA 586 (2008) 314-326]. The proposed image restoration is based on the solution of a matrix equation. The non-zero elements of a block Toeplitz matrix with ones on the main diagonal are determined using PSDP. The number of non-zero descending diagonals depends on the detector construction and is not always smaller than 8. To solve the matrix equation, the Gaussian elimination algorithm is used. The proposed restoration algorithm is studied by means of simulated images (with and without additive noise, using PSDP for the General Electric Senographe 2000D mammography device detector) and a small area (160x160 pixels) of real images acquired by the above-mentioned device. The estimation errors of PSDP and the additive noise magnitude permit restoring images with a precision better than 3% for the above-mentioned detector. The additive noise in the real image is still present after restoration and has almost the same magnitude. In the restored small area (16x16 mm) of real images, the pixel responses are not correlated. The spatial resolution improvement is also analyzed using the image of an absorber edge.
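The restoration above reduces to solving a banded (block) Toeplitz system built from the PSDP. The SciPy sketch below shows the analogous one-dimensional case with a hypothetical `restore_line` helper; the real method operates on a 2-D block Toeplitz system over the whole image.

```python
import numpy as np
from scipy.linalg import toeplitz, solve

def restore_line(measured, psdp_coeffs):
    """Restore one detector line by solving a symmetric Toeplitz system.

    measured    : observed pixel responses along the line.
    psdp_coeffs : detection-probability coefficients for increasing pixel
                  distance, with psdp_coeffs[0] == 1.0 on the main diagonal.
    """
    n = len(measured)
    col = np.zeros(n)
    col[:len(psdp_coeffs)] = psdp_coeffs
    A = toeplitz(col)            # banded Toeplitz matrix built from the PSDP
    return solve(A, measured)    # dense Gaussian elimination (LU factorization)
```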
In this paper, a new formulation of the event recognition task is examined: it is required to predict event categories in a gallery of images for which albums (groups of photos corresponding to a single event) are unknown. We propose a novel two-stage approach. First, features are extracted from each photo using a pre-trained convolutional neural network. These features are classified individually. The scores of the classifier are used to group sequential photos into several clusters. Finally, the features of the photos in each group are aggregated into a single descriptor using a neural attention mechanism. This algorithm is optionally extended to improve the accuracy of classification of each image in an album. In contrast to conventional fine-tuning of convolutional neural networks (CNNs), we propose to use image captioning, i.e., a generative model that converts images to textual descriptions. These are one-hot encoded and summarized into a sparse feature vector suitable for learning an arbitrary classifier. An experimental study with the Photo Event Collection and the Multi-Label Curation of Flickr Events Dataset demonstrates that our approach is 9-20% more accurate than event recognition on single photos. Moreover, the proposed method has a 13-16% lower error rate than classification of groups of photos obtained with hierarchical clustering. It is experimentally shown that the image captions trained on the Conceptual Captions dataset can be classified more accurately than the features from an object detector, though both are obviously not as rich as the CNN-based features. However, it is possible to combine our approach with conventional CNNs in an ensemble to provide state-of-the-art results for several event datasets.
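As a sketch of the attention-based aggregation step, the NumPy snippet below (illustrative parameter shapes, not the paper's exact architecture) pools the per-photo features of one group into a single descriptor with a softmax-weighted mean.

```python
import numpy as np

def attention_aggregate(features, w, v):
    """Pool per-photo features into one group descriptor via learned attention.

    features : (n_photos, d) CNN features of the photos in one group.
    w        : (d, h) and v : (h,) attention parameters (illustrative shapes).
    """
    scores = np.tanh(features @ w) @ v       # one relevance score per photo
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                     # softmax over the photos in the group
    return alpha @ features                  # (d,) aggregated group descriptor
```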
Recent advances in deep learning have shown their ability to learn strong feature representations for images. The task of image clustering naturally requires good feature representations to capture the distribution of the data and subsequently differentiate data points from one another. Often these two aspects are dealt with independently, and thus traditional feature learning alone does not suffice to partition the data meaningfully. Variational Autoencoders (VAEs) naturally lend themselves to learning data distributions in a latent space. Since we wish to efficiently discriminate between different clusters in the data, we propose a method based on VAEs where we use a Gaussian Mixture prior to help cluster the images accurately. We jointly learn the parameters of both the prior and the posterior distributions. Our method represents a true Gaussian Mixture VAE. This way, our method simultaneously learns a prior that captures the latent distribution of the images and a posterior to help discriminate well between data points. We also propose a novel reparametrization of the latent space consisting of a mixture of discrete and continuous variables. One key takeaway is that our method generalizes better across different datasets without using any pre-training or learnt models, unlike existing methods, allowing it to be trained from scratch in an end-to-end manner. We verify the efficacy and generalizability of our method experimentally by achieving state-of-the-art results among unsupervised methods on a variety of datasets. To the best of our knowledge, we are the first to pursue image clustering using VAEs in a purely unsupervised manner on real image datasets.
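A minimal PyTorch sketch of a mixed discrete/continuous latent reparametrization, assuming a Gumbel-softmax relaxation for the component choice; the names and shapes are illustrative and not necessarily the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def gmm_vae_latent(logits, mu, log_var, temperature=0.5):
    """Sample a latent code from a Gaussian-mixture posterior.

    logits  : (B, K)    unnormalized cluster-assignment scores from the encoder.
    mu      : (B, K, D) per-component means; log_var the matching log-variances.
    Returns the continuous code z of shape (B, D) and the soft assignment y (B, K).
    """
    y = F.gumbel_softmax(logits, tau=temperature, hard=False)   # relaxed one-hot choice
    eps = torch.randn_like(mu)
    z_per_component = mu + eps * torch.exp(0.5 * log_var)       # reparametrized Gaussians
    z = (y.unsqueeze(-1) * z_per_component).sum(dim=1)          # mix the components
    return z, y
```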