
Adaptive Uncertainty Resolution in Bayesian Combinatorial Optimization Problems

 Added by Kamesh Munagala
Publication date: 2010
Research language: English





In several applications such as databases, planning, and sensor networks, parameters such as selectivity, load, or sensed values are known only with some associated uncertainty. The performance of such a system (as captured by some objective function over the parameters) is significantly improved if some of these parameters can be probed or observed. In a resource-constrained situation, deciding which parameters to observe in order to optimize system performance itself becomes an interesting and important optimization problem. This general problem is the focus of this paper. One of the most important considerations in this framework is whether adaptivity is required for the observations. Adaptive observations introduce blocking or sequential operations in the system, whereas non-adaptive observations can be performed in parallel. An important question in this regard is to characterize the benefit of adaptivity for probes and observations. We present general techniques for designing constant-factor approximations to the optimal observation schemes for several widely used scheduling and metric objective functions. We present a unifying technique that relates this optimization problem to the outlier version of the corresponding deterministic optimization problem. Through this connection, our technique yields constant-factor upper bounds on the benefit of adaptivity of the observation schemes. We show that while probing yields significant improvement in the objective function, being adaptive about the probing is not beneficial beyond constant factors.
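As a rough illustration of this probe-then-commit setting, the following Python sketch compares a fixed (non-adaptive) probe set against a simple adaptive probing rule on a toy minimization instance via Monte Carlo simulation. The interval data, the commit rule, and both probing strategies are assumptions made only for this example; they are not the paper's model or algorithms.

import random
from itertools import combinations

# Toy probe-then-commit instance (all data and strategies are illustrative
# assumptions): item costs are uniform on known intervals, K items may be probed
# (their true costs observed), then we commit to one item and pay its true cost.
random.seed(1)
INTERVALS = [(0.0, 2.0), (0.4, 1.6), (0.9, 1.1), (0.0, 3.0), (0.2, 1.8)]
MEANS = [(a + b) / 2 for a, b in INTERVALS]
K, TRIALS = 2, 20000

def commit_cost(costs, probed):
    # Believe observed costs for probed items, prior means otherwise, and commit
    # to the item with the smallest believed cost.
    believed = [costs[i] if i in probed else MEANS[i] for i in range(len(costs))]
    return costs[min(range(len(costs)), key=believed.__getitem__)]

def adaptive_cost(costs):
    # Greedy adaptive rule: probe the item whose interval is most likely to
    # contain a value below the current best believed cost.
    probed = set()
    for _ in range(K):
        believed = [costs[i] if i in probed else MEANS[i] for i in range(len(costs))]
        best = min(believed)
        def improve_prob(i):
            a, b = INTERVALS[i]
            return max(0.0, min(best, b) - a) / (b - a)
        probed.add(max((i for i in range(len(costs)) if i not in probed), key=improve_prob))
    return commit_cost(costs, probed)

samples = [[random.uniform(a, b) for a, b in INTERVALS] for _ in range(TRIALS)]
best_fixed = min(combinations(range(len(INTERVALS)), K),
                 key=lambda s: sum(commit_cost(c, set(s)) for c in samples))
print("best non-adaptive probe set:", best_fixed)
print("non-adaptive avg cost:", sum(commit_cost(c, set(best_fixed)) for c in samples) / TRIALS)
print("adaptive avg cost:    ", sum(adaptive_cost(c) for c in samples) / TRIALS)

On toy instances like this one, both probing schemes clearly beat probing nothing, while the gap between the adaptive and the best non-adaptive scheme stays small, which is the qualitative message of the constant-factor adaptivity bounds above.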



Related research


Many discrete optimization problems amount to selecting a feasible subgraph of least weight. We consider in this paper the context of spatial graphs, where the positions of the vertices are uncertain and belong to known uncertainty sets. The objective is to minimize the sum of the distances in the chosen subgraph for the worst positions of the vertices in their uncertainty sets. We first prove that these problems are NP-hard even when the feasible subgraphs consist either of all spanning trees or of all s-t paths. In view of this, we propose an exact solution algorithm combining integer programming formulations with a cutting plane algorithm, identifying the cases where the separation problem can be solved efficiently. We also propose two types of polynomial-time approximation algorithms. The first one relies on solving a nominal counterpart of the problem considering pairwise worst-case distances. We study in detail the resulting approximation ratio, which depends on the structure of the metric space and of the feasible subgraphs. The second algorithm considers the special case of s-t paths and leads to a fully polynomial-time approximation scheme. Our algorithms are numerically illustrated on a subway network design problem and a facility location problem.
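A minimal sketch of the first approximation idea, under the assumption that each uncertainty set is a Euclidean disc given as (center, radius): compute pairwise worst-case distances and solve the nominal spanning-tree problem on them with a standard algorithm. The disc data and the use of Prim's algorithm are illustrative choices, not taken from the paper.

import heapq
import math

# Each uncertain vertex is a disc (center, radius); the worst-case distance
# between two discs pushes both endpoints to the far sides of their discs.
discs = [((0.0, 0.0), 0.5), ((4.0, 0.0), 1.0), ((4.0, 3.0), 0.2), ((0.0, 3.0), 0.7)]

def worst_case_dist(i, j):
    (x1, y1), r1 = discs[i]
    (x2, y2), r2 = discs[j]
    return math.hypot(x1 - x2, y1 - y2) + r1 + r2

def prim_mst(n, weight):
    # Prim's algorithm on the complete graph with the given weight function.
    in_tree, edges = {0}, []
    heap = [(weight(0, j), 0, j) for j in range(1, n)]
    heapq.heapify(heap)
    while len(in_tree) < n:
        w, u, v = heapq.heappop(heap)
        if v in in_tree:
            continue
        in_tree.add(v)
        edges.append((u, v, round(w, 3)))
        for j in range(n):
            if j not in in_tree:
                heapq.heappush(heap, (weight(v, j), v, j))
    return edges

tree = prim_mst(len(discs), worst_case_dist)
print("spanning tree on worst-case distances:", tree)
print("total worst-case length:", round(sum(w for *_, w in tree), 3))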
Tapani Toivonen, 2021
Many combinatorial optimization problems are often considered intractable to solve exactly or by approximation. An example of such a problem is maximum clique, which -- under standard assumptions in complexity theory -- cannot be solved in sub-exponential time or be approximated within a polynomial factor efficiently. We show that if a polynomial-time algorithm can query informative Gaussian priors from an expert $\mathrm{poly}(n)$ times, then a class of combinatorial optimization problems can be solved efficiently in expectation up to a multiplicative factor $\epsilon$, where $\epsilon$ is an arbitrary constant. While our proposed methods are merely theoretical, they cast new light on how to approach solving these problems that have usually been considered intractable.
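The following toy is a loose, heavily hedged illustration of such a query model: a hypothetical "expert" answers queries with Gaussian priors over per-vertex scores, and the solver uses the prior means to order a greedy clique search. The expert, the planted clique, and the greedy rule are all assumptions for this sketch; the paper's construction and its guarantees are not reproduced here.

import random

random.seed(0)
n, planted = 12, {0, 1, 2, 3, 4}

# Toy graph with a planted 5-clique plus random noise edges.
adj = {v: set() for v in range(n)}
for u in range(n):
    for v in range(u + 1, n):
        if (u in planted and v in planted) or random.random() < 0.3:
            adj[u].add(v)
            adj[v].add(u)

def expert_prior(v):
    # Hypothetical expert: Gaussian prior (mean, std) on v belonging to a large clique.
    return (0.9, 0.1) if v in planted else (0.1, 0.1)

priors = {v: expert_prior(v) for v in range(n)}   # poly(n) queries in total

def greedy_clique(adj, priors):
    # Visit vertices in decreasing order of prior mean, keeping a growing clique.
    clique = []
    for v in sorted(adj, key=lambda v: priors[v][0], reverse=True):
        if all(u in adj[v] for u in clique):
            clique.append(v)
    return clique

print("clique found with prior guidance:", sorted(greedy_clique(adj, priors)))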
Jose F. Fontanari, 2010
We investigate the performance of a variant of Axelrod's model for dissemination of culture - the Adaptive Culture Heuristic (ACH) - on solving an NP-complete optimization problem, namely, the classification of binary input patterns of size $F$ by a Boolean Binary Perceptron. In this heuristic, $N$ agents, characterized by binary strings of length $F$ which represent possible solutions to the optimization problem, are fixed at the sites of a square lattice and interact with their nearest neighbors only. The interactions are such that the agents' strings (or cultures) become more similar to the low-cost strings of their neighbors, resulting in the dissemination of these strings across the lattice. Eventually the dynamics freezes into a homogeneous absorbing configuration in which all agents exhibit identical solutions to the optimization problem. We find through extensive simulations that the probability of finding the optimal solution is a function of the reduced variable $F/N^{1/4}$, so that the number of agents must increase with the fourth power of the problem size, $N \propto F^4$, to guarantee a fixed probability of success. In this case, we find that the relaxation time to reach an absorbing configuration scales with $F^6$, which can be interpreted as the overall computational cost of the ACH to find an optimal set of weights for a Boolean Binary Perceptron, given a fixed probability of success.
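A compact sketch of the ACH dynamics described above: agents on an L x L lattice each hold an F-bit string and copy one differing bit from a strictly better nearest neighbour. To keep the toy short, the cost here is the Hamming distance to a hidden target string, standing in for the Boolean Binary Perceptron classification cost used in the paper; lattice size, string length, and the exact imitation rule are assumptions.

import random

random.seed(2)
F, L = 16, 8                                    # string length and lattice side (N = L*L agents)
target = [random.randint(0, 1) for _ in range(F)]
cost = lambda s: sum(a != b for a, b in zip(s, target))

agents = {(x, y): [random.randint(0, 1) for _ in range(F)]
          for x in range(L) for y in range(L)}

def neighbours(x, y):
    # Four nearest neighbours on the lattice with periodic boundaries.
    return [((x + dx) % L, (y + dy) % L) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]

for _ in range(200000):
    x, y = random.randrange(L), random.randrange(L)
    a = agents[(x, y)]
    b = agents[random.choice(neighbours(x, y))]
    if cost(b) < cost(a):                       # imitate only strictly better neighbours
        diff = [i for i in range(F) if a[i] != b[i]]
        i = random.choice(diff)
        a[i] = b[i]

print("best cost on the lattice:", min(cost(s) for s in agents.values()), "out of", F)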
In this paper we present an algorithmic framework for solving a class of combinatorial optimization problems on graphs with bounded pathwidth. The problems are NP-hard in general, but solvable in linear time on this class of graphs. The problems are relevant for assessing network reliability and improving the network's performance and fault tolerance. The main technique considered in this paper is dynamic programming.
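The simplest concrete instance of this linear-time dynamic-programming idea is maximum-weight independent set on a path graph (pathwidth 1), sketched below. The general framework sweeps a path decomposition with bag-indexed DP tables; this toy only conveys the single left-to-right pass with constant-size state, and the problem choice is an assumption for illustration.

def mwis_on_path(weights):
    # take: best value so far with the last vertex included
    # skip: best value so far with the last vertex excluded
    take = skip = 0
    for w in weights:
        take, skip = skip + w, max(take, skip)
    return max(take, skip)

print(mwis_on_path([3, 2, 7, 10, 12]))   # 22: pick weights 3, 7 and 12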
In this paper, we present approximation algorithms for combinatorial optimization problems under probabilistic constraints. Specifically, we focus on stochastic variants of two important combinatorial optimization problems: the k-center problem and the set cover problem, with uncertainty characterized by a probability distribution over the set of points or elements to be covered. We consider these problems under adaptive and non-adaptive settings, and present efficient approximation algorithms for the case when the underlying distribution is a product distribution. In contrast to the expected cost model prevalent in the stochastic optimization literature, our problem definitions support restrictions on the probability distributions of the total costs, by incorporating constraints that bound the probability with which the incurred costs may exceed a given threshold.
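A hedged sketch of the non-adaptive, product-distribution setting for stochastic set cover: each element appears independently with a known probability, and sets are picked greedily until the probability that some uncovered element appears falls below a threshold. The instance, the greedy rule, and the threshold are assumptions made for illustration, not the paper's algorithm or its approximation guarantee.

import math

p = {"a": 0.9, "b": 0.5, "c": 0.1, "d": 0.7, "e": 0.05}   # appearance probabilities
sets = {"S1": {"a", "b"}, "S2": {"b", "c", "d"}, "S3": {"d", "e"}, "S4": {"a", "c", "e"}}
delta = 0.1                                                # allowed violation probability

def violation_prob(uncovered):
    # P(at least one uncovered element appears) under the product distribution.
    return 1.0 - math.prod(1.0 - p[e] for e in uncovered)

chosen, uncovered = [], set(p)
while violation_prob(uncovered) > delta:
    candidates = [s for s in sets if s not in chosen]
    if not candidates:
        break
    # Pick the set covering the largest remaining appearance-probability mass.
    best = max(candidates, key=lambda s: sum(p[e] for e in sets[s] & uncovered))
    chosen.append(best)
    uncovered -= sets[best]

print("chosen sets:", chosen)
print("residual violation probability:", round(violation_prob(uncovered), 4))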