Modelling of multivariate densities is a core component in many signal processing, pattern recognition and machine learning applications. The modelling is often done via Gaussian mixture models (GMMs), which use computationally expensive and potentially unstable training algorithms. We provide an overview of a fast and robust implementation of GMMs in the C++ language, employing multi-threaded training algorithms.
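As a minimal sketch of the kind of training such a library performs (in Python rather than the paper's C++, and without the multi-threading), the following implements EM for a diagonal-covariance GMM; the per-sample E-step is the natural candidate for parallelisation across threads.

```python
import numpy as np

def em_gmm_diag(X, k, n_iter=100, seed=0):
    """EM for a GMM with diagonal covariances (illustrative sketch only)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    means = X[rng.choice(n, k, replace=False)]            # crude initialisation
    variances = np.ones((k, d)) * X.var(axis=0)
    weights = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibilities from per-component Gaussian log-densities
        log_p = -0.5 * (((X[:, None, :] - means) ** 2) / variances
                        + np.log(2 * np.pi * variances)).sum(axis=2)
        log_p += np.log(weights)
        log_p -= log_p.max(axis=1, keepdims=True)         # numerical stability
        resp = np.exp(log_p)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and variances
        nk = resp.sum(axis=0)
        weights = nk / n
        means = (resp.T @ X) / nk[:, None]
        variances = (resp.T @ (X ** 2)) / nk[:, None] - means ** 2
        variances = np.maximum(variances, 1e-6)           # guard against collapse
    return weights, means, variances
```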
The scaled boundary finite element method (SBFEM) is capable of generating polyhedral elements with an arbitrary number of surfaces. This salient feature significantly alleviates the meshing burden, which is a bottleneck in the analysis pipeline of the standard finite element method (FEM). In this paper, we implement polyhedral elements based on the SBFEM in the commercial finite element software ABAQUS. To this end, user elements are provided through the user subroutine UEL. Detailed explanations regarding the data structures and the implementation of the procedures are given. The focus of the current implementation is on interfacial problems; therefore, element-based surfaces are created on polyhedral user elements to establish interactions. This is achieved by overlaying the polyhedral user elements with standard finite elements of negligible stiffness, taken from the ABAQUS element library. By means of several numerical examples, the advantages of polyhedral elements regarding the treatment of non-matching interfaces and automatic mesh generation are clearly demonstrated. Thus, the capability of ABAQUS for problems involving interfaces is extended by the availability of polyhedral meshes. Owing to the implementation of polyhedral user elements, ABAQUS can directly handle complex geometries given in the form of digital images or stereolithography (STL) files. In order to facilitate the use of the proposed approach, the UEL code is published open-source and can be downloaded from https://github.com/ShukaiYa/SBFEM-UEL.
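The overlay idea can be illustrated with a toy computation (not the paper's UEL code): a "user element" spring is overlaid by a standard element whose stiffness is scaled by a negligible factor, so the standard element supplies surfaces for interaction definitions while leaving the mechanical response essentially unchanged.

```python
import numpy as np

def spring(kval):
    """2x2 stiffness matrix of a two-node spring element."""
    return kval * np.array([[1.0, -1.0], [-1.0, 1.0]])

k_user = 1000.0   # stiffness carried by the polyhedral user element
eps = 1e-8        # scaling factor of the overlaid standard element

# Both elements share the same two nodes; assemble their stiffnesses.
K = spring(k_user) + spring(eps * k_user)

# Fix node 1, apply a unit load at node 2; compare with the un-overlaid model.
u2 = 1.0 / K[1, 1]
u2_ref = 1.0 / spring(k_user)[1, 1]
print(f"relative change due to overlay: {abs(u2 - u2_ref) / u2_ref:.2e}")
```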
Existing works, including ELMo and BERT, have revealed the importance of pre-training for NLP tasks. Since no single pre-training model works best in all cases, it is necessary to develop a framework that can deploy various pre-training models efficiently. For this purpose, we propose an assemble-on-demand pre-training toolkit, namely Universal Encoder Representations (UER). UER is loosely coupled and encapsulates a rich set of modules. By assembling modules on demand, users can either reproduce a state-of-the-art pre-training model or develop a pre-training model that remains unexplored. With UER, we have built a model zoo containing pre-trained models based on different corpora, encoders, and targets (objectives). With suitable pre-trained models, we achieve new state-of-the-art results on a range of downstream datasets.
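The assemble-on-demand idea can be sketched as follows; the module names and interfaces below are hypothetical and are not UER's actual API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class PretrainModel:
    """A pre-training recipe assembled from interchangeable modules."""
    embedding: Callable  # token ids -> vectors
    encoder: Callable    # vectors -> contextual vectors
    target: Callable     # contextual vectors -> pre-training loss

    def loss(self, token_ids, labels):
        return self.target(self.encoder(self.embedding(token_ids)), labels)

# Swapping one module changes the recipe without touching the rest, e.g.:
# bert_like = PretrainModel(word_pos_seg_embedding, transformer_encoder, mlm_target)
# elmo_like = PretrainModel(word_embedding, bilstm_encoder, bilm_target)
```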
Gaussian mixture models (GMMs) are powerful parametric tools with many applications in machine learning and computer vision. Expectation maximization (EM) is the most popular algorithm for estimating the GMM parameters. However, EM guarantees only convergence to a stationary point of the log-likelihood function, which can be arbitrarily worse than the optimal solution. Inspired by the relationship between the negative log-likelihood function and the Kullback-Leibler (KL) divergence, we propose an alternative formulation for estimating the GMM parameters using the sliced Wasserstein distance, which gives rise to a new algorithm. Specifically, we propose minimizing the sliced Wasserstein distance between the mixture model and the data distribution with respect to the GMM parameters. In contrast to the KL divergence, the energy landscape of the sliced Wasserstein distance is better behaved and therefore more suitable for a stochastic gradient descent scheme to obtain the optimal GMM parameters. We show that our formulation results in parameter estimates that are more robust to random initializations and demonstrate that it can estimate high-dimensional data distributions more faithfully than the EM algorithm.
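The core quantity can be estimated by Monte Carlo over random projection directions, since in one dimension optimal transport reduces to sorting. Below is a minimal sketch (assuming equal sample sizes; the differentiable sampling from the mixture that an SGD scheme would need is omitted).

```python
import numpy as np

def sliced_wasserstein(X, Y, n_proj=200, seed=0):
    """Monte Carlo estimate of the sliced 2-Wasserstein distance between
    two equal-size samples X and Y (illustrative, not the paper's code)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    theta = rng.normal(size=(n_proj, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)  # unit directions
    # Project both samples onto each direction; in 1D the optimal transport
    # plan is the monotone coupling, i.e. matching sorted projections.
    px = np.sort(X @ theta.T, axis=0)
    py = np.sort(Y @ theta.T, axis=0)
    return np.sqrt(np.mean((px - py) ** 2))
```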
We present a sparse linear system solver that is based on a multifrontal variant of Gaussian elimination and exploits low-rank approximation of the resulting dense frontal matrices. We use hierarchically semiseparable (HSS) matrices, which have low-rank off-diagonal blocks, to approximate the frontal matrices. For HSS matrix construction, a randomized sampling algorithm is used together with interpolative decompositions. The combination of the randomized compression with a fast ULV HSS factorization leads to a solver with lower computational complexity than the standard multifrontal method for many applications, resulting in speedups of up to 7-fold for problems in our test suite. The implementation targets many-core systems by using task parallelism with dynamic runtime scheduling. Numerical experiments show performance improvements over state-of-the-art sparse direct solvers. The implementation achieves high performance and good scalability on a range of modern shared-memory parallel systems, including the Intel Xeon Phi (MIC). The code is part of a software package called STRUMPACK -- STRUctured Matrices PACKage, which also has a distributed-memory component for dense rank-structured matrices.
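The compression primitive can be sketched with a basic randomized range finder; this is a simplification, as the solver's HSS construction uses interpolative decompositions rather than the QR-based variant shown here.

```python
import numpy as np

def randomized_lowrank(A, rank, oversample=10, seed=0):
    """Randomized range finder: A ~= Q @ B with Q having `rank` columns.
    Sketch of the compression idea behind HSS construction."""
    rng = np.random.default_rng(seed)
    omega = rng.normal(size=(A.shape[1], rank + oversample))
    Q, _ = np.linalg.qr(A @ omega)   # sample the column space of A
    Q = Q[:, :rank]
    B = Q.T @ A                      # small factor: rank x n
    return Q, B

# Off-diagonal blocks of the frontal matrices are numerically low-rank, so
# such compression shrinks them from O(m*n) to O((m+n)*rank) storage.
```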
We study the community detection problem on a Gaussian mixture model, in which vertices are divided into $k \geq 2$ distinct communities. The major difference in our model is that the intensities of the Gaussian perturbations differ across entries of the observation matrix, and we do not assume that every community has the same number of vertices. We explicitly find the threshold for exact recovery by maximum likelihood estimation. Applications include community detection on hypergraphs.
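A toy simulation of this setting (with hypothetical parameter choices, and using spectral clustering rather than the maximum likelihood estimator analysed in the paper) looks as follows.

```python
import numpy as np

rng = np.random.default_rng(0)
k, sizes = 2, [60, 40]                       # unequal community sizes
labels = np.repeat(np.arange(k), sizes)
n = labels.size

# Symmetric block mean matrix plus Gaussian noise with entry-dependent intensity.
B = np.where(labels[:, None] == labels[None, :], 1.0, 0.2)
sigma = rng.uniform(0.5, 1.5, size=(n, n))   # heterogeneous noise levels
noise = rng.normal(size=(n, n)) * sigma
A = B + (noise + noise.T) / 2                # symmetrized observation

vals, vecs = np.linalg.eigh(A)
emb = vecs[:, -k:]                           # top-k eigenvector embedding
pred = (emb[:, -2] > 0).astype(int)          # sign split suffices for k = 2
acc = max((pred == labels).mean(), ((1 - pred) == labels).mean())
print(f"recovery accuracy: {acc:.2f}")
```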