
Beyond Perturbation Stability: LP Recovery Guarantees for MAP Inference on Noisy Stable Instances

Added by Hunter Lang
Publication date: 2021
Language: English





Several works have shown that perturbation stable instances of the MAP inference problem in Potts models can be solved exactly using a natural linear programming (LP) relaxation. However, most of these works give few (or no) guarantees for the LP solutions on instances that do not satisfy the relatively strict perturbation stability definitions. In this work, we go beyond these stability results by showing that the LP approximately recovers the MAP solution of a stable instance even after the instance is corrupted by noise. This noisy stable model realistically fits with practical MAP inference problems: we design an algorithm for finding close stable instances, and show that several real-world instances from computer vision have nearby instances that are perturbation stable. These results suggest a new theoretical explanation for the excellent performance of this LP relaxation in practice.
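The LP relaxation in question is the standard pairwise ("local polytope") relaxation of MAP inference. As a concrete illustration, the sketch below builds that relaxation for a small Potts model and solves it with SciPy; the function name potts_lp, the cost-minimization convention, and the toy chain instance are illustrative assumptions, not code from the paper.

```python
# A minimal sketch of the pairwise (local polytope) LP relaxation for
# MAP inference in a Potts model, solved with scipy.optimize.linprog.
import numpy as np
from scipy.optimize import linprog

def potts_lp(unary, edges, weights):
    """unary: (n, K) node costs; edges: list of (i, j); weights: Potts penalties w_ij >= 0.
    Minimizes sum_i unary[i, x_i] + sum_ij w_ij * 1[x_i != x_j] over the local polytope."""
    n, K = unary.shape
    m = len(edges)
    n_node, n_edge = n * K, m * K * K
    # Variables: mu_i(k) for each node, then mu_ij(k, l) for each edge.
    c = np.concatenate([
        unary.ravel(),
        np.concatenate([w * (1.0 - np.eye(K)).ravel() for w in weights]),
    ])
    def node_var(i, k): return i * K + k
    def edge_var(e, k, l): return n_node + e * K * K + k * K + l
    A_eq, b_eq = [], []
    # Normalization: sum_k mu_i(k) = 1 for every node i.
    for i in range(n):
        row = np.zeros(n_node + n_edge)
        row[[node_var(i, k) for k in range(K)]] = 1.0
        A_eq.append(row); b_eq.append(1.0)
    # Marginalization: sum_l mu_ij(k, l) = mu_i(k) and sum_k mu_ij(k, l) = mu_j(l).
    for e, (i, j) in enumerate(edges):
        for k in range(K):
            row = np.zeros(n_node + n_edge)
            for l in range(K):
                row[edge_var(e, k, l)] = 1.0
            row[node_var(i, k)] = -1.0
            A_eq.append(row); b_eq.append(0.0)
        for l in range(K):
            row = np.zeros(n_node + n_edge)
            for k in range(K):
                row[edge_var(e, k, l)] = 1.0
            row[node_var(j, l)] = -1.0
            A_eq.append(row); b_eq.append(0.0)
    res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=(0.0, 1.0), method="highs")
    mu = res.x[:n_node].reshape(n, K)
    return mu.argmax(axis=1), res.fun   # rounded labels and the LP optimum

# Tiny example: a 3-node chain with 2 labels.
labels, val = potts_lp(np.array([[0., 1.], [1., 0.], [0., 1.]]),
                       [(0, 1), (1, 2)], [0.3, 0.3])
```

On a perturbation stable instance, the stability results above say the LP solution is already integral, so the argmax step is a no-op; on noisy instances it acts as a simple rounding of the fractional solution.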




Related research

Wei Zhang, Taejoon Kim (2021)
The simultaneous orthogonal matching pursuit (SOMP) is a popular greedy approach for common support recovery of a row-sparse matrix. The support recovery guarantee of SOMP has been extensively studied in the noiseless scenario. By comparison, the performance analysis of noisy SOMP is still nascent: only restricted isometry property (RIP)-based analyses have been carried out. In this paper, we present a mutual incoherence property (MIP)-based performance analysis of noisy SOMP. Specifically, when the noise is bounded, we provide a condition under which exact support recovery is guaranteed in terms of the MIP. When the noise is unbounded, we instead derive a bound on the successful recovery probability (SRP) that depends on the specific distribution of the noise. We then focus on the common case of random Gaussian noise and show that the lower bound on the SRP follows a Tracy-Widom law. The analysis reveals the number of measurements, the noise level, the number of sparse vectors, and the MIP constant required to guarantee a predefined recovery performance. Theoretically, we show that the MIP constant of the measurement matrix must increase in proportion to the noise standard deviation, and that the number of sparse vectors needs to grow in proportion to the noise variance. Finally, we extensively validate the derived analysis through numerical simulations.
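For orientation, here is a minimal NumPy sketch of the SOMP iteration itself (not of the paper's analysis); the l2 aggregation rule across channels and the assumption that the common sparsity level k is known are standard modeling choices, not details from this paper.

```python
# A minimal sketch of simultaneous orthogonal matching pursuit (SOMP)
# for common support recovery of a row-sparse coefficient matrix.
import numpy as np

def somp(A, Y, k):
    """A: (n, d) measurement matrix; Y: (n, L) measurements; k: sparsity level.
    Returns the estimated common support and the row-sparse coefficient estimate."""
    n, d = A.shape
    support, residual = [], Y.copy()
    for _ in range(k):
        # Score each atom by its aggregate correlation with all residual channels.
        scores = np.linalg.norm(A.T @ residual, axis=1)
        scores[support] = -np.inf            # never reselect an atom
        support.append(int(np.argmax(scores)))
        # Re-fit all channels jointly on the current support (least squares).
        X_s, *_ = np.linalg.lstsq(A[:, support], Y, rcond=None)
        residual = Y - A[:, support] @ X_s
    X = np.zeros((d, Y.shape[1]))
    X[support] = X_s
    return sorted(support), X
```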
Variational inference has become one of the most widely used methods in latent variable modeling. In its basic form, variational inference employs a fully factorized variational distribution and minimizes its KL divergence to the posterior. As the minimization can only be carried out approximately, this approximation induces a bias. In this paper, we revisit perturbation theory as a powerful way of improving the variational approximation. Perturbation theory relies on a form of Taylor expansion of the log marginal likelihood, roughly in terms of the log ratio of the true posterior and its variational approximation. While the first-order term gives the classical variational bound, higher-order terms yield corrections that tighten it. However, traditional perturbation theory does not provide a lower bound, making it unsuitable for stochastic optimization. In this paper, we present an alternative way of deriving corrections to the ELBO that resemble perturbation theory, but that result in a valid bound. We show in experiments on Gaussian Processes and Variational Autoencoders that the new bounds are more mass-covering, and that the resulting posterior covariances are closer to the true posterior and lead to higher likelihoods on held-out data.
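To make the expansion concrete, here is a hedged sketch of the kind of perturbative bound at play, in assumed notation (V, V_0, and the truncation order K are illustrative; this follows standard presentations of such bounds, not necessarily this paper's exact derivation):

```latex
% Hedged sketch, assumed notation. With the "interaction" term
%   V(z) := \log p(x, z) - \log q(z),
% the marginal likelihood can be written as an expectation under q:
\[
  p(x) \;=\; \mathbb{E}_{q}\bigl[\, e^{V(z)} \,\bigr],
\]
% and truncating the Taylor expansion of e^{V} around a reference V_0 gives
\[
  p(x) \;\ge\; \mathbb{E}_{q}\Bigl[\, e^{V_0} \sum_{k=0}^{K}
      \tfrac{1}{k!}\bigl(V(z) - V_0\bigr)^{k} \,\Bigr]
  \qquad \text{for odd } K,
\]
% since odd-order Taylor truncations of e^x lie below e^x pointwise. With
% V_0 = \mathbb{E}_q[V], the K = 1 truncation equals \exp(\mathrm{ELBO}),
% recovering the classical variational bound; larger odd K tightens it.
```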
Quality-Diversity (QD) optimisation algorithms enable the evolution of collections of both high-performing and diverse solutions. These collections offer the possibility to quickly adapt and switch from one solution to another when one is not working as expected, which makes them useful in many real-world domains such as robotic control. However, QD algorithms, like most optimisation algorithms, are very sensitive to uncertainty in the fitness function and in the behavioural descriptors, and such uncertainties are frequent in real-world applications. The few works that have explored this issue in the specific case of QD algorithms, inspired by the Evolutionary Computation literature, mainly focus on using sampling to approximate the true performance of a solution. However, sampling approaches require a high number of evaluations, which in many applications such as robotics can quickly become impractical. In this work, we propose Deep-Grid MAP-Elites, a variant of the MAP-Elites algorithm that uses an archive of similar previously encountered solutions to approximate the performance of a solution. We compare our approach to previously explored ones on three noisy tasks: a standard optimisation task, the control of a redundant arm, and a simulated hexapod robot. The experimental results show that this simple approach is significantly more resilient to noise on the behavioural descriptors, while achieving competitive performance in terms of fitness optimisation and being more sample-efficient than other existing approaches.
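To fix ideas, the sketch below implements a minimal MAP-Elites loop with a depth-limited archive per grid cell, which is the mechanism Deep-Grid builds on; the toy task, the hyperparameters, and the simplified selection and replacement rules are assumptions for illustration, not the paper's exact algorithm.

```python
# A minimal MAP-Elites loop where each grid cell stores up to DEPTH solutions,
# so noisy evaluations are smoothed by the cell's sub-population.
import numpy as np

rng = np.random.default_rng(0)
DIM, CELLS, DEPTH, ITERS = 10, 20, 8, 5000

def evaluate(x):
    # Noisy fitness and noisy 2-D behavioural descriptor (toy stand-ins).
    fitness = -np.sum(x ** 2) + rng.normal(0, 0.1)
    descriptor = np.clip(x[:2] + rng.normal(0, 0.05, 2), 0, 1)
    return fitness, descriptor

archive = {}  # maps a grid cell (i, j) -> list of up to DEPTH (fitness, solution)
for it in range(ITERS):
    if archive:
        # Select a random occupied cell, then a random solution stored in it.
        cell = list(archive)[rng.integers(len(archive))]
        parent = archive[cell][rng.integers(len(archive[cell]))][1]
        x = parent + rng.normal(0, 0.1, DIM)       # Gaussian mutation
    else:
        x = rng.uniform(0, 1, DIM)                 # random initial solution
    f, d = evaluate(x)
    cell = tuple((d * CELLS).astype(int).clip(0, CELLS - 1))
    archive.setdefault(cell, []).append((f, x))
    if len(archive[cell]) > DEPTH:                 # keep the DEPTH best entries
        archive[cell].sort(key=lambda e: -e[0])
        archive[cell].pop()
```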
The Data Processing Inequality (DPI) says that the Umegaki relative entropy $S(\rho\|\sigma) := \mathrm{Tr}[\rho(\log\rho - \log\sigma)]$ is non-increasing under the action of completely positive trace-preserving (CPTP) maps. Let $\mathcal{M}$ be a finite-dimensional von Neumann algebra and $\mathcal{N}$ a von Neumann subalgebra of it. Let $\mathcal{E}_\tau$ be the tracial conditional expectation from $\mathcal{M}$ onto $\mathcal{N}$. For density matrices $\rho$ and $\sigma$ in $\mathcal{M}$, let $\rho_{\mathcal{N}} := \mathcal{E}_\tau\rho$ and $\sigma_{\mathcal{N}} := \mathcal{E}_\tau\sigma$. Since $\mathcal{E}_\tau$ is CPTP, the DPI says that $S(\rho\|\sigma) \geq S(\rho_{\mathcal{N}}\|\sigma_{\mathcal{N}})$, and the general case is readily deduced from this. A theorem of Petz says that there is equality if and only if $\sigma = \mathcal{R}_\rho(\sigma_{\mathcal{N}})$, where $\mathcal{R}_\rho$ is the Petz recovery map, which is dual to the Accardi-Cecchini coarse-graining operator $\mathcal{A}_\rho$ from $\mathcal{M}$ to $\mathcal{N}$. In its simplest form, our bound is $$S(\rho\|\sigma) - S(\rho_{\mathcal{N}}\|\sigma_{\mathcal{N}}) \geq \left(\frac{1}{8\pi}\right)^{4} \|\Delta_{\sigma,\rho}\|^{-2} \,\bigl\|\mathcal{R}_{\rho}(\sigma_{\mathcal{N}}) - \sigma\bigr\|_1^4,$$ where $\Delta_{\sigma,\rho}$ is the relative modular operator. We also prove related results for various quasi-relative entropies. Explicitly describing the solution set of the Petz equation $\sigma = \mathcal{R}_\rho(\sigma_{\mathcal{N}})$ amounts to determining the set of fixed points of the Accardi-Cecchini coarse-graining map. Building on previous work, we provide a thoroughly detailed description of the set of solutions of the Petz equation, and obtain all of our results in a simple, self-contained manner.
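For reference, here is a hedged sketch of the Petz recovery map in this tracial setting, in its standard finite-dimensional form (the notation is assumed, not taken from this paper):

```latex
% Hedged sketch, standard finite-dimensional form. Since the tracial
% conditional expectation E_tau is self-dual with respect to the trace inner
% product and fixes N, the Petz recovery map acts on a density matrix X in N
% (viewed as an element of M) by
\[
  \mathcal{R}_{\rho}(X) \;=\;
    \rho^{1/2}\,
      \rho_{\mathcal{N}}^{-1/2}\, X\, \rho_{\mathcal{N}}^{-1/2}\,
    \rho^{1/2},
\]
% so that R_rho(rho_N) = rho always, and Petz's theorem above says the DPI is
% saturated exactly when sigma is recovered the same way: sigma = R_rho(sigma_N).
```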
Adaptive Bayesian quadrature (ABQ) is a powerful approach to numerical integration that empirically compares favorably with Monte Carlo integration on problems of medium dimensionality (where non-adaptive quadrature is not competitive). Its key ingredient is an acquisition function that changes as a function of previously collected values of the integrand. While this adaptivity appears to be empirically powerful, it complicates analysis. Consequently, there are no theoretical guarantees so far for this class of methods. In this work, for a broad class of adaptive Bayesian quadrature methods, we prove consistency, deriving non-tight but informative convergence rates. To do so we introduce a new concept we call weak adaptivity. Our results identify a large and flexible class of adaptive Bayesian quadrature rules as consistent, within which practitioners can develop empirically efficient methods.
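As a concrete illustration of the adaptivity being analysed, the sketch below runs a toy adaptive BQ loop on [0, 1]; the squared-exponential kernel, the grid-based maximisation, and the variance-times-magnitude acquisition are illustrative assumptions, not this paper's method.

```python
# A minimal adaptive Bayesian quadrature loop on [0, 1]: a GP models the
# integrand, the next node maximises an acquisition built from the posterior
# variance and previously collected values, and the estimate is the posterior
# mean of the integral.
import numpy as np

def k(a, b, ls=0.2):
    # Squared-exponential kernel.
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def adaptive_bq(f, n_iters=15, grid=np.linspace(0, 1, 200)):
    X = np.array([0.5])                       # initial node
    y = np.array([f(X[0])])
    for _ in range(n_iters):
        K = k(X, X) + 1e-6 * np.eye(len(X))   # jitter for numerical stability
        Kinv = np.linalg.inv(K)
        ks = k(grid, X)
        mean = ks @ Kinv @ y
        var = k(grid, grid).diagonal() - np.einsum("ij,jk,ik->i", ks, Kinv, ks)
        # "Adaptive" acquisition: the node placement depends on previously
        # collected integrand values, not just on the node locations.
        x_next = grid[np.argmax(var * (1e-3 + mean ** 2))]
        X = np.append(X, x_next)
        y = np.append(y, f(x_next))
    # BQ estimate: z^T K^{-1} y with kernel means z_i = \int k(x, X_i) dx,
    # here approximated by averaging over the uniform grid.
    z = k(grid, X).mean(axis=0)
    K = k(X, X) + 1e-6 * np.eye(len(X))
    return z @ np.linalg.solve(K, y)

est = adaptive_bq(lambda x: np.exp(-3 * (x - 0.3) ** 2))
```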
