Several works have shown that perturbation stable instances of the MAP inference problem in Potts models can be solved exactly using a natural linear programming (LP) relaxation. However, most of these works give few (or no) guarantees for the LP solutions on instances that do not satisfy the relatively strict perturbation stability definitions. In this work, we go beyond these stability results by showing that the LP approximately recovers the MAP solution of a stable instance even after the instance is corrupted by noise. This noisy stable model realistically fits with practical MAP inference problems: we design an algorithm for finding close stable instances, and show that several real-world instances from computer vision have nearby instances that are perturbation stable. These results suggest a new theoretical explanation for the excellent performance of this LP relaxation in practice.
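For concreteness, the local (pairwise) LP relaxation commonly used for MAP inference in Potts models can be sketched as follows, with illustrative notation (unary costs $\theta_u$, nonnegative edge weights $w_{uv}$, and relaxed indicator variables $\mu$) rather than the paper's own: minimize $\sum_{u} \sum_{i} \theta_u(i)\,\mu_u(i) + \sum_{(u,v) \in E} w_{uv} \sum_{i \ne j} \mu_{uv}(i,j)$ over $\mu \ge 0$, subject to $\sum_i \mu_u(i) = 1$ for every node $u$ and the marginalization constraints $\sum_j \mu_{uv}(i,j) = \mu_u(i)$ and $\sum_i \mu_{uv}(i,j) = \mu_v(j)$ for every edge $(u,v)$. When the optimal $\mu$ is integral it encodes a MAP assignment; the stability results above describe conditions under which the LP optimum recovers the MAP solution exactly or approximately.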
The simultaneous orthogonal matching pursuit (SOMP) is a popular, greedy approach for common support recovery of a row-sparse matrix. The support recovery guarantee of SOMP has been extensively studied under the noiseless scenario. Compared to the no
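As background, simultaneous OMP can be sketched in a few lines: at each iteration the column whose aggregate correlation with the current residual matrix is largest joins the support, the measurements are re-fit by least squares on that support, and the residual is updated. The dictionary A, measurement matrix Y, and sparsity level k below are illustrative assumptions, not quantities taken from the abstract.

    import numpy as np

    def somp(A, Y, k):
        """Illustrative SOMP sketch: A is (m, n), Y is (m, L) with a common
        row support of size k shared by all L measurement vectors."""
        n = A.shape[1]
        residual = Y.copy()
        support = []
        for _ in range(k):
            # Aggregate correlation of each atom with all residual columns.
            scores = np.sum(np.abs(A.T @ residual), axis=1)
            if support:
                scores[support] = -np.inf  # never reselect an atom
            support.append(int(np.argmax(scores)))
            # Least-squares refit on the current support, then update the residual.
            As = A[:, support]
            coeffs, *_ = np.linalg.lstsq(As, Y, rcond=None)
            residual = Y - As @ coeffs
        X = np.zeros((n, Y.shape[1]))
        X[support, :] = coeffs
        return support, X

In noisy settings, the fixed iteration count k above is typically replaced by a stopping rule based on the residual norm.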
Variational inference has become one of the most widely used methods in latent variable modeling. In its basic form, variational inference employs a fully factorized variational distribution and minimizes its KL divergence to the posterior. As the mi
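As a reminder of the standard setup (generic notation, not specific to this paper): for a model $p(x, z)$ with latent variables $z = (z_1, \dots, z_m)$, the fully factorized (mean-field) family restricts $q(z) = \prod_{i=1}^{m} q_i(z_i)$, and minimizing $\mathrm{KL}(q(z)\,\|\,p(z \mid x))$ is equivalent to maximizing the evidence lower bound $\mathcal{L}(q) = \mathbb{E}_q[\log p(x, z)] - \mathbb{E}_q[\log q(z)]$, since $\log p(x) = \mathcal{L}(q) + \mathrm{KL}(q(z)\,\|\,p(z \mid x))$.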
Quality-Diversity optimisation algorithms enable the evolution of collections of both high-performing and diverse solutions. These collections offer the possibility to quickly adapt and switch from one solution to another in case it is not working as
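One canonical Quality-Diversity algorithm, MAP-Elites, illustrates how such a collection is built (the abstract does not name a specific algorithm, and the one-dimensional behaviour descriptor and helper functions below are purely illustrative): an archive keeps the best solution found in each descriptor bin, and new candidates are produced by mutating randomly chosen elites.

    import random

    def map_elites(evaluate, random_solution, mutate, n_bins=20, n_iters=10_000):
        """Minimal MAP-Elites-style loop. evaluate(x) returns (fitness, descriptor)
        with descriptor in [0, 1); the archive maps descriptor bins to elites."""
        archive = {}  # bin index -> (fitness, solution)
        for _ in range(n_iters):
            if archive:
                # Mutate a randomly chosen elite from the archive.
                _, parent = archive[random.choice(list(archive))]
                x = mutate(parent)
            else:
                x = random_solution()
            fitness, descriptor = evaluate(x)
            b = min(int(descriptor * n_bins), n_bins - 1)
            # Keep the candidate if its bin is empty or it beats the current elite.
            if b not in archive or fitness > archive[b][0]:
                archive[b] = (fitness, x)
        return archive

The returned archive is the kind of collection described above: one high-performing solution per region of behaviour space, any of which can be swapped in when another stops working.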
The Data Processing Inequality (DPI) says that the Umegaki relative entropy $S(\rho\|\sigma) := {\rm Tr}[\rho(\log \rho - \log \sigma)]$ is non-increasing under the action of completely positive trace preserving (CPTP) maps. Let ${\mathcal M}$ be a finite dim
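Written out for a CPTP map $\Phi$ and density matrices $\rho, \sigma$, the DPI referred to above reads $S(\Phi(\rho)\,\|\,\Phi(\sigma)) \le S(\rho\,\|\,\sigma)$.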
Adaptive Bayesian quadrature (ABQ) is a powerful approach to numerical integration that empirically compares favorably with Monte Carlo integration on problems of medium dimensionality (where non-adaptive quadrature is not competitive). Its key ingre
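For reference, in standard (non-adaptive) Bayesian quadrature with a zero-mean Gaussian process prior with kernel $k$ on the integrand $f$, the integral $\int f(x)\,p(x)\,dx$ is estimated by the posterior mean $\widehat{Z} = z^{\top} K^{-1} \mathbf{f}$, where $K_{ij} = k(x_i, x_j)$, $\mathbf{f}_i = f(x_i)$, and $z_i = \int k(x, x_i)\,p(x)\,dx$; adaptive variants such as ABQ additionally choose the evaluation points $x_i$ sequentially using the current posterior. The notation here is generic background rather than this paper's own.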