It is well known that Sparse PCA (Sparse Principal Component Analysis) is NP-hard to solve exactly on worst-case instances. What is the complexity of solving Sparse PCA approximately? Our contributions include: 1) a simple and efficient algorithm that achieves an $n^{-1/3}$-approximation; 2) NP-hardness of approximation to within $(1-\varepsilon)$, for some small constant $\varepsilon > 0$; 3) SSE-hardness of approximation to within any constant factor; and 4) an $\exp\exp\left(\Omega\left(\sqrt{\log \log n}\right)\right)$ (quasi-quasi-polynomial) gap for the standard semidefinite program.
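For context, the standard semidefinite relaxation referred to above is commonly written as follows; this is a sketch of the usual formulation for $k$-sparse PCA of a covariance matrix $A$, and the exact normalization used in the paper may differ:
$$ \max_{X \succeq 0} \; \langle A, X \rangle \quad \text{subject to} \quad \operatorname{Tr}(X) = 1, \quad \sum_{i,j} |X_{ij}| \leq k. $$
Any rank-one matrix $X = xx^\top$ with $\|x\|_2 = 1$ and $\|x\|_0 \leq k$ is feasible, so the SDP value upper-bounds the sparse PCA value; the quasi-quasi-polynomial gap bounds how loose this relaxation can be.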
We consider the problem of maximizing the spread of influence in a social network by choosing a fixed number of initial seeds, formally referred to as the influence maximization problem. It admits a $(1-1/e)$-factor approximation algorithm if the influence function is submodular. Otherwise, in the worst case, the problem is NP-hard to approximate to within a factor of $N^{1-\varepsilon}$. This paper studies whether this worst-case hardness can be circumvented by making assumptions about either the underlying network topology or the cascade model; all of our assumptions are motivated by cascades observed in many real-life social networks. First, we present strong inapproximability results for a very restricted class of networks called the (stochastic) hierarchical blockmodel, a special case of the well-studied (stochastic) blockmodel in which the relationships between blocks admit a tree structure. We also provide a dynamic-programming-based polynomial-time algorithm that optimally solves a directed variant of influence maximization on hierarchical blockmodel networks. Our algorithm indicates that the inapproximability result is due to the bidirectionality of influence between agent-blocks. Second, we present strong inapproximability results for a class of influence functions that are almost submodular, called 2-quasi-submodular. These inapproximability results hold even when the 2-quasi-submodular function $f$ is fixed in advance. This also indicates that, with respect to the approximability of influence maximization, the threshold between submodularity and nonsubmodularity is sharp.
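As background for the submodular case mentioned above, the $(1-1/e)$ factor is achieved by the classical greedy algorithm for monotone submodular maximization. A minimal Python sketch follows; it assumes a user-supplied estimator estimate_spread(graph, seeds) for the expected influence (e.g., via Monte Carlo simulation of the cascade model), which is not part of the paper.

    def greedy_influence_max(graph, candidates, k, estimate_spread):
        # Greedy seed selection: repeatedly add the node with the largest
        # marginal gain in estimated spread. This attains a (1 - 1/e)
        # approximation when the spread function is monotone and submodular.
        seeds = set()
        for _ in range(k):
            base = estimate_spread(graph, seeds)
            best_node, best_gain = None, float("-inf")
            for v in set(candidates) - seeds:
                gain = estimate_spread(graph, seeds | {v}) - base
                if gain > best_gain:
                    best_node, best_gain = v, gain
            seeds.add(best_node)
        return seeds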
We consider the following multi-component sparse PCA problem: given a set of data points, we seek to extract a small number of sparse components with disjoint supports that jointly capture the maximum possible variance. These components can be computed one by one, by repeatedly solving the single-component problem and deflating the input data matrix, but, as we show, this greedy procedure is suboptimal. We present a novel algorithm for sparse PCA that jointly optimizes multiple disjoint components. The extracted features capture variance within a multiplicative factor arbitrarily close to 1 of the optimal. Our algorithm is combinatorial and computes the desired components by solving multiple instances of the bipartite maximum weight matching problem. Its complexity grows as a low-order polynomial in the ambient dimension of the input data matrix, but exponentially in its rank. However, it can be effectively applied to a low-dimensional sketch of the data; this allows us to obtain polynomial-time approximation guarantees via spectral bounds. We evaluate our algorithm on real datasets and empirically demonstrate that in many cases it outperforms existing, deflation-based approaches.
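To illustrate the combinatorial subroutine, each bipartite maximum weight matching instance can be handled by an off-the-shelf assignment solver; the weight matrix below is a hypothetical stand-in, not the construction from the paper, which derives its weights from candidate spectral directions.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    # Hypothetical weights: weights[i, j] is the value of assigning coordinate j
    # to sparse component i (3 components, 10 coordinates in this toy example).
    rng = np.random.default_rng(0)
    weights = rng.random((3, 10))

    rows, cols = linear_sum_assignment(weights, maximize=True)
    # Each coordinate is matched to at most one component, which is what
    # enforces disjoint supports across the extracted components.
    print(list(zip(rows, cols)), weights[rows, cols].sum())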
We introduce the combinatorial optimization problem Time Disjoint Walks (TDW), which has applications in collision-free routing of discrete objects (e.g., autonomous vehicles) over a network. This problem takes as input a digraph $G$ with positive integer arc lengths, and $k$ pairs of vertices that each represent a trip demand from a source to a destination. The goal is to find a walk and delay for each demand so that no two trips occupy the same vertex at the same time, and so that a min-max or min-sum objective over the trip durations is realized. We focus here on the min-sum variant of Time Disjoint Walks, although most of our results carry over to the min-max case. We restrict our study to various subclasses of DAGs, and observe that there is a sharp complexity boundary between Time Disjoint Walks on oriented stars and on oriented stars with the central vertex replaced by a path. In particular, we present a poly-time algorithm for min-sum and min-max TDW on the former, but show that min-sum TDW on the latter is NP-hard. Our main hardness result is that for DAGs with max degree $\Delta \leq 3$, min-sum Time Disjoint Walks is APX-hard. We present a natural approximation algorithm for the same class, and provide a tight analysis. In particular, we prove that it achieves an approximation ratio of $\Theta(k/\log k)$ on bounded-degree DAGs, and $\Theta(k)$ on DAGs and bounded-degree digraphs.
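To make the feasibility condition concrete, the sketch below checks whether proposed walks and delays are time disjoint; the representation (walks as vertex lists, an arc_length(u, v) callback, and occupancy at integer arrival times) is our own reading of the definition above, not notation from the paper.

    def time_disjoint(walks, delays, arc_length):
        # walks[i]: list of vertices visited by trip i (a walk in the digraph).
        # delays[i]: nonnegative integer start delay of trip i.
        # arc_length(u, v): positive integer length of arc (u, v).
        # Returns True iff no two trips occupy the same vertex at the same time.
        occupied = set()  # (vertex, time) pairs already claimed by some trip
        for walk, delay in zip(walks, delays):
            t = delay
            for idx, v in enumerate(walk):
                if (v, t) in occupied:
                    return False
                occupied.add((v, t))
                if idx + 1 < len(walk):
                    t += arc_length(v, walk[idx + 1])
        return True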
Sparse principal component analysis (PCA) and sparse canonical correlation analysis (CCA) are two essential techniques from high-dimensional statistics and machine learning for analyzing large-scale data. Both problems can be formulated as optimization problems with a nonsmooth objective and nonconvex constraints. Since nonsmoothness and nonconvexity bring numerical difficulties, most algorithms suggested in the literature either solve relaxations of these problems or are heuristics that lack convergence guarantees. In this paper, we propose a new alternating manifold proximal gradient method to solve these two high-dimensional problems and provide a unified convergence analysis. Numerical results are reported to demonstrate the advantages of our algorithm.
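For concreteness, sparse PCA in this setting is commonly posed as a nonsmooth program over the Stiefel manifold; the following is a sketch of a standard $\ell_1$-penalized formulation (with data matrix $A$, $r$ components, and sparsity parameter $\mu > 0$), and the sparse CCA formulation is analogous:
$$ \min_{X \in \mathbb{R}^{p \times r}} \; -\operatorname{Tr}\!\left(X^\top A^\top A X\right) + \mu \|X\|_1 \quad \text{subject to} \quad X^\top X = I_r. $$
The $\ell_1$ term is the nonsmooth part and the orthogonality constraint is the nonconvex (manifold) part, which is exactly the structure an alternating manifold proximal gradient method is designed to handle.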
We present a probabilistic algorithm to compute the product of two univariate sparse polynomials over a field using a number of bit operations that is quasi-linear in the size of the input and the output. Our algorithm works over any field of characteristic zero or of characteristic larger than the degree. We rely mainly on sparse interpolation and on a new algorithm for verifying a sparse product, which also has quasi-linear time complexity. Using Kronecker substitution techniques, we extend our result to the multivariate case.
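As a simple illustration of probabilistic product verification (the textbook evaluate-at-a-random-point check, not necessarily the quasi-linear verification algorithm of the paper), one can test a candidate product by evaluating the sparse polynomials at random elements of a sufficiently large prime field:

    import random

    def eval_sparse(poly, x, p):
        # poly is a sparse polynomial given as {exponent: coefficient}.
        return sum(c * pow(x, e, p) for e, c in poly.items()) % p

    def probably_correct_product(f, g, h, p, trials=20):
        # If h != f*g, then h - f*g is a nonzero polynomial with at most
        # max(deg h, deg f + deg g) roots, so a random point exposes the
        # error with high probability whenever p is much larger than that.
        for _ in range(trials):
            x = random.randrange(p)
            if (eval_sparse(f, x, p) * eval_sparse(g, x, p)) % p != eval_sparse(h, x, p):
                return False
        return True

    # Toy example over GF(101): f = 1 + x^5, g = 2 + x^3,
    # and h = f*g = 2 + x^3 + 2x^5 + x^8.
    f = {0: 1, 5: 1}
    g = {0: 2, 3: 1}
    h = {0: 2, 3: 1, 5: 2, 8: 1}
    print(probably_correct_product(f, g, h, 101))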