The morphology of a radio galaxy is strongly affected by its central active galactic nucleus (AGN), and is studied to reveal the evolution of the supermassive black hole (SMBH). In this work, we propose a morphology generation framework for two typical classes of radio galaxies, namely Fanaroff-Riley type-I (FRI) and type-II (FRII), using a deep neural network based autoencoder (DNNAE) and Gaussian mixture models (GMMs). The encoder and decoder subnets in the DNNAE are symmetric about a fully-connected code layer, which hosts the extracted feature vectors. New FRI or FRII radio galaxy morphologies are then simulated by randomly drawing feature vectors from a three-component Gaussian mixture model and decoding them. Experiments were conducted on real radio galaxy images, where we discuss the length of the feature vectors and the selection of loss functions, and compare batch normalization and dropout techniques for training the network. The results demonstrate the efficiency and performance of our morphology generation framework. Code is available at: https://github.com/myinxd/dnnae-gmm.
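To make the generation pipeline concrete, the sketch below illustrates the described workflow: train an autoencoder with a symmetric encoder/decoder around a code layer, fit a three-component GMM to the extracted feature vectors, then sample and decode new vectors into simulated morphologies. Layer sizes, the code-vector length, and the training data shown here are illustrative assumptions rather than the authors' exact configuration (see the linked repository for the actual implementation).

```python
# Minimal sketch of the DNNAE + GMM generation pipeline (assumed hyperparameters).
import torch
import torch.nn as nn
from sklearn.mixture import GaussianMixture

class DNNAE(nn.Module):
    """Fully-connected autoencoder, symmetric about a low-dimensional code layer."""
    def __init__(self, n_pixels=64 * 64, code_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_pixels, 512), nn.ReLU(),
            nn.Linear(512, 128), nn.ReLU(),
            nn.Linear(128, code_dim),            # code layer: extracted feature vector
        )
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 128), nn.ReLU(),
            nn.Linear(128, 512), nn.ReLU(),
            nn.Linear(512, n_pixels), nn.Sigmoid(),
        )

    def forward(self, x):
        code = self.encoder(x)
        return self.decoder(code), code

# --- after training the autoencoder on flattened radio-galaxy images ---
model = DNNAE()
images = torch.rand(1000, 64 * 64)               # placeholder for real FRI/FRII cutouts
with torch.no_grad():
    _, codes = model(images)

# Fit a three-component Gaussian mixture model to the feature vectors.
gmm = GaussianMixture(n_components=3).fit(codes.numpy())

# Sample new feature vectors and decode them into simulated morphologies.
new_codes, _ = gmm.sample(16)
with torch.no_grad():
    simulated = model.decoder(torch.as_tensor(new_codes, dtype=torch.float32))
simulated_images = simulated.reshape(-1, 64, 64)
```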