
Composite optimization for the resource allocation problem

Added by Anastasiya Ivanova
Publication date: 2018
Language: English





In this paper we consider the resource allocation problem, stated as a convex minimization problem with linear constraints. To solve this problem, we apply gradient descent and accelerated gradient descent to the dual problem and prove convergence rates for both the primal and the dual iterates. We obtain faster convergence rates than those known in the literature. We also provide an economic interpretation for these two methods: the iterations of the algorithms naturally correspond to the process of price and production adjustment aimed at reaching the desired production volume in the economy. Overall, we show how these actions of the economic agents lead the whole system to equilibrium.
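To make the price-adjustment interpretation concrete, here is a minimal sketch of dual gradient ascent for an equality-constrained problem. The quadratic objective, problem data, and step size are illustrative assumptions, not the paper's setting; the dual variables play the role of prices and the primal response plays the role of production.

```python
import numpy as np

# Minimal sketch of dual gradient ascent for
#   min_x f(x)  s.t.  A x = b,
# with f(x) = 0.5*||x||^2 as an illustrative strongly convex objective.
# The objective, data, and step size below are assumptions.

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))          # resource/technology matrix (assumed)
b = rng.standard_normal(3)               # desired production volume (assumed)

lam = np.zeros(3)                        # dual variables: "prices"
step = 1.0 / np.linalg.norm(A, 2) ** 2   # safe step for this quadratic f

for _ in range(500):
    x = -A.T @ lam                       # production response: argmin of the Lagrangian
    lam = lam + step * (A @ x - b)       # price adjustment along the constraint residual

print("constraint residual:", np.linalg.norm(A @ x - b))
```

Each pass pairs one production response with one price update, which is exactly the adjustment process the abstract describes.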



Related research

This paper is concerned with a class of zero-norm regularized piecewise linear-quadratic (PLQ) composite minimization problems, which covers the zero-norm regularized $\ell_1$-loss minimization problem as a special case. For this class of nonconvex, nonsmooth problems, we show that its equivalent MPEC reformulation is partially calm on the set of global optima and use this property to derive a family of equivalent DC surrogates. We then propose a proximal majorization-minimization (MM) method, a convex relaxation approach outside the DC algorithm framework, for solving one of the DC surrogates, which is a semiconvex PLQ minimization problem involving three nonsmooth terms. For this method, we establish global convergence and a linear rate of convergence, and under suitable conditions show that the limit of the generated sequence is not only a local optimum but also a good critical point in a statistical sense. Numerical experiments are conducted on synthetic and real data for the proximal MM method, with the subproblems solved by a dual semismooth Newton method, to confirm our theoretical findings; numerical comparisons with a convergent indefinite-proximal ADMM for the partially smoothed DC surrogate verify its superiority in solution quality and computing time.
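As a rough illustration of the majorization-minimization idea only (not this paper's proximal MM method or its DC surrogate), the sketch below majorizes a smooth least-squares loss by a quadratic at the current point and solves the resulting prox subproblem in closed form; an $\ell_1$ penalty stands in for the surrogate, and all data and parameters are assumptions.

```python
import numpy as np

# Generic proximal MM sketch: majorize the smooth loss by a quadratic at
# the current point, then minimize the majorant plus the penalty, which
# reduces to a soft-thresholding (prox) step. The l1 penalty is a stand-in
# for the paper's DC surrogate; loss, weight, and data are assumptions.

def soft_threshold(v, t):
    """Prox of t*||.||_1: shrink each entry toward zero by t."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 20))
b = rng.standard_normal(50)
L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the smooth gradient
mu = 0.5                           # penalty weight (assumed)

x = np.zeros(20)
for _ in range(200):
    grad = A.T @ (A @ x - b)       # gradient of the smooth part 0.5*||Ax - b||^2
    x = soft_threshold(x - grad / L, mu / L)

print("nonzeros in the solution:", np.count_nonzero(x))
```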
The $\alpha$-fair resource allocation problem has received remarkable attention and has been studied in numerous application fields. Several algorithms have been proposed in the context of $\alpha$-fair resource sharing to compute its value distributively. However, little work has been done on its structural properties. In this work, we present a lower bound for the optimal solution of the weighted $\alpha$-fair resource allocation problem and compare it with existing propositions in the literature. Our derivations rely on a localization property verified by optimization problems with separable objectives, which permits one to better exploit their local structure. We give a local version of the well-known midpoint domination axiom used to axiomatically build the Nash Bargaining Solution (or the proportionally fair resource allocation problem). Moreover, we show how our lower bound can improve the performance of a distributed algorithm based on the Alternating Direction Method of Multipliers (ADMM). The evaluation of the algorithm shows that our lower bound can reduce its convergence time by up to two orders of magnitude compared to when the bound is not used at all or is looser.
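For intuition about $\alpha$-fairness (this is a toy single-capacity special case, not the paper's lower bound), the weighted problem over one shared capacity has a closed form, sketched below; the weights, capacity, and $\alpha$ are assumptions.

```python
import numpy as np

# Weighted alpha-fair allocation in the simplest single-capacity case,
#   max sum_i w_i * x_i**(1 - alpha) / (1 - alpha)  s.t.  sum_i x_i = C,
# whose KKT conditions give x_i = C * w_i**(1/alpha) / sum_j w_j**(1/alpha).
# The instance below is illustrative; the paper treats the general
# constrained problem and bounds on its optimum.

def alpha_fair_single_link(w, C, alpha):
    s = w ** (1.0 / alpha)
    return C * s / s.sum()

w = np.array([1.0, 2.0, 4.0])
print(alpha_fair_single_link(w, C=10.0, alpha=2.0))  # heavier weight, larger share
```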
We consider a new and general online resource allocation problem, where the goal is to maximize a function of a positive semidefinite (PSD) matrix with a scalar budget constraint. The problem data arrives online, and the algorithm needs to make an irrevocable decision at each step. Of particular interest are classic experiment design problems in the online setting, with the algorithm deciding whether to allocate budget to each experiment as new experiments become available sequentially. We analyze two greedy primal-dual algorithms and provide bounds on their competitive ratios. Our analysis relies on a smooth surrogate of the objective function that needs to satisfy a new diminishing returns (PSD-DR) property (that its gradient is order-reversing with respect to the PSD cone). Using the representation for monotone maps on the PSD cone given by Löwner's theorem, we obtain a convex parametrization of the family of functions satisfying PSD-DR. We then formulate a convex optimization problem to directly optimize our competitive ratio bound over this set. This design problem can be solved offline before the data start arriving. The online algorithm that uses the designed smoothing is tailored to the given cost function, and enjoys a competitive ratio at least as good as our optimized bound. We provide examples of computing the smooth surrogate for D-optimal and A-optimal experiment design, and demonstrate the performance of the custom-designed algorithm.
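A toy greedy rule in the spirit of a primal-dual online algorithm is sketched below: accept an arriving experiment if its marginal gain in a smoothed log-det (D-optimal-style) objective per unit cost beats a price. The smoothing, unit costs, constant price, and random stream are all illustrative assumptions, not the paper's designed smoothing.

```python
import numpy as np

# Toy online greedy rule: each arriving experiment is a rank-one update
# v v^T; accept it while budget remains and its marginal log-det gain per
# unit cost exceeds a fixed "dual price". All parameters are assumptions.

rng = np.random.default_rng(2)
d, budget, eps = 4, 5.0, 0.1
M = eps * np.eye(d)               # smoothed information matrix
spent, price = 0.0, 0.1

for _ in range(100):
    v = rng.standard_normal(d)    # experiment arrives online
    cost = 1.0
    gain = np.linalg.slogdet(M + np.outer(v, v))[1] - np.linalg.slogdet(M)[1]
    if spent + cost <= budget and gain / cost >= price:
        M += np.outer(v, v)       # irrevocable decision: allocate budget now
        spent += cost

print("budget used:", spent)
```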
Distributed resource allocation is a central task in network systems such as smart grids, water distribution networks, and urban transportation systems. When solving such problems in practice, it is often important to have nonasymptotic feasibility guarantees for the iterates, since overallocation of resources easily causes systems to break down. In this paper, we develop a distributed resource reallocation algorithm where every iteration produces a feasible allocation. The algorithm is fully distributed in the sense that nodes communicate only with neighbors over a given communication network. We prove that under mild conditions the algorithm converges to a point arbitrarily close to the optimal resource allocation. Numerical experiments demonstrate the competitive practical performance of the algorithm.
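The sketch below shows the general feasibility-preserving idea (not this paper's algorithm): neighbors shift resource between themselves in proportion to the difference of their marginal costs, so the total allocation never changes and every iterate stays feasible. The quadratic costs, ring topology, and step size are assumptions.

```python
import numpy as np

# Feasibility-preserving exchange sketch for
#   min sum_i f_i(x_i)  s.t.  sum_i x_i = b.
# Each trade moves resource from the higher-marginal-cost node to the
# lower one and leaves sum_i x_i unchanged, so feasibility holds at
# every iterate. Costs, topology, and step size are illustrative.

n = 5
a = np.arange(1.0, n + 1)          # f_i(x) = 0.5 * a_i * x^2, marginal a_i * x
x = np.full(n, 10.0 / n)           # feasible start: allocations sum to b = 10
gamma = 0.1

for _ in range(300):
    for i in range(n):             # each node trades with its ring neighbor
        j = (i + 1) % n
        flow = gamma * (a[i] * x[i] - a[j] * x[j])
        x[i] -= flow               # resource flows toward the cheaper node
        x[j] += flow

print("allocation:", x, "sum:", x.sum())   # marginals equalize; sum stays 10
```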
In this paper, we introduce various mechanisms to obtain accelerated first-order stochastic optimization algorithms when the objective function is convex or strongly convex. Specifically, we extend the Catalyst approach, originally designed for deterministic objectives, to the stochastic setting. Given an optimization method with mild convergence guarantees for strongly convex problems, the challenge is to accelerate convergence to a noise-dominated region, and then achieve convergence with an optimal worst-case complexity depending on the noise variance of the gradients. A side contribution of our work is a generic analysis that can handle inexact proximal operators, providing new insights about the robustness of stochastic algorithms when the proximal operator cannot be exactly computed.
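The outer loop of a Catalyst-style scheme is sketched below under simplifying assumptions: approximately solve the proximal-point subproblem min_x f(x) + (kappa/2)||x - y||^2 with a few inner gradient steps, then extrapolate. The quadratic f, kappa, inner budget, and the fixed extrapolation weight are illustrative (the actual method uses a weight schedule and a stochastic inner solver).

```python
import numpy as np

# Catalyst-style outer loop sketch: inexactly solve the proximal-point
# subproblem, then apply a momentum-style extrapolation. Everything
# below (f, kappa, inner steps, beta) is an assumption for illustration.

rng = np.random.default_rng(3)
A = rng.standard_normal((30, 10))
b = rng.standard_normal(30)

def grad_f(x):
    return A.T @ (A @ x - b)       # gradient of f(x) = 0.5*||Ax - b||^2

kappa = 1.0
L = np.linalg.norm(A, 2) ** 2 + kappa  # smoothness of the subproblem
x = np.zeros(10)
y = x.copy()
beta = 0.5                         # extrapolation weight (assumed constant)

for _ in range(50):
    x_prev = x.copy()
    z = y.copy()
    for _ in range(10):            # inexact inner solver: plain gradient steps
        z -= (grad_f(z) + kappa * (z - y)) / L
    x = z
    y = x + beta * (x - x_prev)    # momentum-style extrapolation

print("residual:", np.linalg.norm(grad_f(x)))
```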