Joint network topology inference is a canonical problem of jointly learning multiple graph Laplacian matrices from heterogeneous graph signals. A widely employed assumption in this problem is that of a simple common component shared among the networks. In practice, however, a more intricate topological pattern, comprising sparse, homogeneous, and heterogeneous components simultaneously, often arises across multiple networks. In this paper, we propose a general graph estimator based on a novel structured fusion regularization that enables us to jointly learn multiple graph Laplacian matrices with such complex topological patterns, and that enjoys both high computational efficiency and rigorous theoretical guarantees. Moreover, in the proposed regularization term, the topological pattern among networks is characterized by a Gram matrix, endowing our graph estimator with the ability to flexibly model different types of topological patterns through different choices of the Gram matrix. Computationally, the regularization term couples the parameters together, making the formulated optimization problem difficult to solve directly; we therefore develop a computationally scalable algorithm based on the alternating direction method of multipliers (ADMM) to solve it efficiently. Theoretically, we provide an analysis of the proposed graph estimator that establishes a non-asymptotic bound on the estimation error in the high-dimensional setting and reflects the effect of several key factors on the convergence rate of our algorithm. Finally, the superior performance of the proposed method is illustrated through simulated and real data examples.
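As a rough illustration of the kind of penalty described above, the sketch below evaluates a Gram-matrix-weighted fusion term over K candidate Laplacians. The quadratic pairwise form and all names here are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def fusion_penalty(thetas, gram):
    """Hypothetical structured fusion penalty across K graph Laplacians.

    thetas : array of shape (K, p, p), candidate Laplacian estimates
    gram   : array of shape (K, K), Gram matrix encoding the assumed
             topological relationship among the K networks
    """
    K = thetas.shape[0]
    total = 0.0
    for k in range(K):
        for l in range(K):
            # Weight pairwise differences by the Gram matrix entry, so
            # strongly related networks (large gram[k, l]) are fused harder.
            total += gram[k, l] * np.sum((thetas[k] - thetas[l]) ** 2)
    return total
```

Under this toy form, an all-ones `gram` fuses every pair of networks equally, while a block-diagonal choice fuses only networks within the same group, which is one way a single Gram matrix can encode different topological patterns.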
In this paper, we propose a way to combine two acceleration techniques for the $\ell_{1}$-regularized least squares problem: safe screening tests, which allow one to eliminate useless dictionary atoms, and the use of fast structured approximations of the dictionary matrix.
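To make the screening idea concrete, here is a minimal sketch of a generic sphere-based safe test (the classic form, not necessarily the exact test proposed in this work): an atom is provably inactive whenever an upper bound on its correlation with every dual-feasible point stays below the threshold.

```python
import numpy as np

def sphere_screening(X, theta_c, radius):
    """Generic sphere-based safe screening test for the Lasso (sketch).

    For the dual of min_w 0.5*||y - X w||^2 + lam*||w||_1 (dual variable
    scaled so that |x_j' theta*| < 1 implies w_j* = 0), if the dual optimum
    lies in a ball with center theta_c and the given radius, any atom whose
    bound |x_j' theta_c| + radius*||x_j|| stays below 1 can be discarded.

    X       : (n, p) dictionary, one atom per column
    theta_c : (n,) center of the safe sphere
    radius  : float, radius of the safe sphere
    Returns a boolean mask; True marks atoms that are provably inactive.
    """
    bound = np.abs(X.T @ theta_c) + radius * np.linalg.norm(X, axis=0)
    return bound < 1.0
```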
This work focuses on the problem of unraveling nonlinearly mixed latent components in an unsupervised manner. The latent components are assumed to reside in the probability simplex, and are transformed by an unknown post-nonlinear mixing system.
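For intuition about the generative model being unmixed, a toy simulator under the stated assumptions might look as follows; the Dirichlet latents, the random mixing matrix, and the tanh nonlinearity are all placeholders chosen for illustration.

```python
import numpy as np

def simulate_pnl_mixtures(num_samples, dim, seed=0):
    """Toy post-nonlinear mixture generator (illustrative placeholder).

    Latent columns live on the probability simplex (Dirichlet draws),
    are mixed linearly by an unknown matrix A, and then passed through
    an unknown componentwise nonlinearity (tanh as a stand-in)."""
    rng = np.random.default_rng(seed)
    S = rng.dirichlet(np.ones(dim), size=num_samples).T  # (dim, n), columns on the simplex
    A = rng.standard_normal((dim, dim))                  # unknown linear mixing
    return np.tanh(A @ S), S, A                          # observations, ground-truth latents, mixing
```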
We discuss structured Schatten norms for tensor decomposition that include two recently proposed norms (overlapped and latent) for convex-optimization-based tensor decomposition, and connect tensor decomposition with the wider literature on structured sparsity.
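For reference, the overlapped variant mentioned above is commonly defined as the sum of nuclear norms of all mode-k unfoldings; below is a direct sketch of that standard definition (not code from the paper).

```python
import numpy as np

def overlapped_schatten_1_norm(T):
    """Overlapped Schatten 1-norm: sum over modes of the nuclear norm
    (sum of singular values) of each mode-k unfolding of the tensor T."""
    total = 0.0
    for k in range(T.ndim):
        # Mode-k unfolding: bring mode k to the front, flatten the rest.
        unfolding = np.moveaxis(T, k, 0).reshape(T.shape[k], -1)
        total += np.linalg.norm(unfolding, ord='nuc')
    return total
```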
Parameter pruning is a promising approach for CNN compression and acceleration by eliminating redundant model parameters with tolerable performance loss. Despite its effectiveness, existing regularization-based parameter pruning methods usually drive weights towards zero with large and constant regularization factors.
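As a generic point of reference for regularization-based pruning (not the specific scheme criticized or proposed here), a magnitude-threshold filter pruning step can be sketched as:

```python
import numpy as np

def prune_conv_filters(W, threshold):
    """Generic magnitude-based filter pruning sketch (illustrative only).

    W : (out_channels, in_channels, kh, kw) convolution weights.
    Filters whose l2 norm falls below `threshold` are zeroed out,
    mimicking the sparsity a group regularizer drives weights towards."""
    norms = np.sqrt((W ** 2).sum(axis=(1, 2, 3)))  # one norm per output filter
    keep = norms >= threshold                      # filters to retain
    return W * keep[:, None, None, None], keep
```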