We develop a new family of convex relaxations for $k$-means clustering based on sum-of-squares norms, a relaxation of the injective tensor norm that is efficiently computable using the Sum-of-Squares algorithm. We give an algorithm based on this relaxation that recovers a faithful approximation to the true means in the given data whenever the low-degree moments of the points in each cluster have bounded sum-of-squares norms. We then prove a sharp upper bound on the sum-of-squares norms of moment tensors of any distribution that satisfies the \emph{Poincaré inequality}. The Poincaré inequality is a central inequality in probability theory, and a large class of distributions satisfies it, including Gaussians, product distributions, strongly log-concave distributions, and any sum or uniformly continuous transformation of such distributions. As an immediate corollary, for any $\gamma > 0$, we obtain an efficient algorithm for learning the means of a mixture of $k$ arbitrary Poincaré distributions in $\mathbb{R}^d$ in time $d^{O(1/\gamma)}$ so long as the means have separation $\Omega(k^{\gamma})$. In particular, this yields an algorithm for learning Gaussian mixtures with separation $\Omega(k^{\gamma})$, thus partially resolving an open problem of Regev and Vijayaraghavan \citet{regev2017learning}. Our algorithm works even in the outlier-robust setting where an $\epsilon$ fraction of arbitrary outliers is added to the data, as long as the fraction of outliers is smaller than the fraction of points in the smallest cluster. We therefore obtain results in the strong agnostic setting where, in addition to not knowing the distribution family, the data itself may be arbitrarily corrupted.
We introduce and study a class of entanglement criteria based on the idea of applying local contractions to an input multipartite state, and then computing the projective tensor norm of the output. More precisely, we apply to a mixed quantum state a
Sum-of-norms clustering is a clustering formulation based on convex optimization that automatically induces hierarchy. Multiple algorithms have been proposed to solve the optimization problem: subgradient descent by Hocking et al., ADMM and ADA by Ch
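The sum-of-norms formulation above minimizes $\frac{1}{2}\sum_i \|x_i - u_i\|^2 + \lambda \sum_{i<j} \|u_i - u_j\|$ over centroids $u_i$; centroids that fuse define the clusters, and growing $\lambda$ traces out the hierarchy. A minimal sketch of the subgradient-descent approach the abstract mentions is below; the function name, step size, and iteration count are illustrative choices, not taken from the cited implementations.

```python
import numpy as np

def sum_of_norms_clustering(X, lam, steps=500, lr=0.01):
    """Subgradient descent on
        0.5 * sum_i ||x_i - u_i||^2  +  lam * sum_{i<j} ||u_i - u_j||.
    Returns the centroids U; rows of U that (nearly) coincide have fused
    into one cluster."""
    X = np.asarray(X, dtype=float)
    U = X.copy()
    n = len(X)
    for _ in range(steps):
        G = U - X  # gradient of the quadratic fidelity term
        for i in range(n):
            diffs = U[i] - U                       # (n, d) pairwise offsets
            norms = np.linalg.norm(diffs, axis=1)
            live = norms > 1e-9                    # fused/self pairs: take subgradient 0
            G[i] += lam * (diffs[live] / norms[live, None]).sum(axis=0)
        U -= lr * G
    return U
```

Larger `lam` fuses more centroids; a fixed step size only oscillates near the non-smooth fusion points, which is why the specialized solvers surveyed in the abstract (ADMM and related methods) are preferred in practice.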
Supervised machine learning explainability has developed rapidly in recent years. However, clustering explainability has lagged behind. Here, we demonstrate the first adaptation of model-agnostic explainability methods to explain unsupervised cluster
We design differentially private learning algorithms that are agnostic to the learning model. Our algorithms are interactive in nature, i.e., instead of outputting a model based on the training data, they provide predictions for a set of $m$ feature
Invertible flow-based generative models are an effective method for learning to generate samples, while allowing for tractable likelihood computation and inference. However, the invertibility requirement restricts models to have the same latent dimen