
Robust Optimization on Unrelated Parallel Machine Scheduling with Setup Times

Added by Chutong Gao
Publication date: 2020
Language: English





The parallel machine scheduling problem has been a popular topic for many years due to its theoretical and practical importance. This paper addresses the robust makespan optimization problem on unrelated parallel machine scheduling with sequence-dependent setup times, where the processing times are uncertain and the only available knowledge is the intervals from which they take their values. We propose a robust optimization model with a min-max regret criterion to formulate this problem. To solve it, we prove that the worst-case scenario with the maximum regret for a given solution belongs to a finite set of extreme scenarios. Based on this theoretical analysis, a procedure to obtain the maximum regret is proposed, and an enhanced regret evaluation method (ERE) is designed to accelerate this process. A multi-start decomposition-based heuristic algorithm (MDH) is proposed to solve the problem. High-quality initial solutions and an upper bound are examined to help solve the problem better. Computational experiments are conducted to evaluate the performance of these methods.
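
For intuition, the min-max regret criterion can be glossed as follows: writing $C_{max}(x, s)$ for the makespan of a solution $x$ under a processing-time scenario $s$ and $C^*_{max}(s)$ for the optimal makespan under $s$, the regret is $R(x, s) = C_{max}(x, s) - C^*_{max}(s)$ and the robust objective is $\min_x \max_{s \in S} R(x, s)$. The toy Python sketch below is illustrative only: the instance data is invented, sequence-dependent setup times are omitted, and brute-force enumeration stands in for the paper's ERE and MDH procedures. It evaluates the maximum regret of an assignment over the extreme scenarios in which every processing time sits at an interval endpoint, in line with the abstract's worst-case result.

    from itertools import product

    # Tiny illustrative instance: 2 unrelated machines, 3 jobs,
    # interval processing times (lower, upper); all numbers invented.
    P = [
        [(2, 4), (3, 5), (1, 2)],   # machine 0
        [(3, 6), (2, 3), (4, 7)],   # machine 1
    ]
    m, n = len(P), len(P[0])

    def makespan(assign, p):
        # assign[j] = machine of job j; p[i][j] = realized processing time
        load = [0] * m
        for j, i in enumerate(assign):
            load[i] += p[i][j]
        return max(load)

    def optimal_makespan(p):
        # brute-force optimum under scenario p (toy instances only)
        return min(makespan(a, p) for a in product(range(m), repeat=n))

    def max_regret(assign):
        # enumerate extreme scenarios: every processing time at an endpoint
        worst = 0
        for choice in product((0, 1), repeat=m * n):
            p = [[P[i][j][choice[i * n + j]] for j in range(n)] for i in range(m)]
            worst = max(worst, makespan(assign, p) - optimal_makespan(p))
        return worst

    # min-max regret solution by full enumeration of assignments
    best = min(product(range(m), repeat=n), key=max_regret)
    print("assignment:", best, "max regret:", max_regret(best))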



Related research

In this paper, the problem of scheduling jobs on parallel machines under an incompatibility relation is considered. In this model, a binary relation between jobs is given, and no two jobs that are in the relation can be scheduled on the same machine. In particular, we consider job scheduling under an incompatibility relation forming bipartite graphs, under the makespan optimality criterion, on uniform and unrelated machines. We show that no algorithm can achieve a good approximation ratio for uniform machines, even in the case of unit-time jobs, unless $P = NP$. We also provide an approximation algorithm that achieves the best possible approximation ratio, even in the case of jobs of arbitrary lengths $p_j$, under the same assumption. Precisely, we present an $O(n^{1/2-\epsilon})$ inapproximability bound for any $\epsilon > 0$, and a $\sqrt{p_{sum}}$-approximation algorithm, respectively. To enrich the analysis, bipartite graphs generated randomly according to Gilbert's model $\mathcal{G}_{n,n,p(n)}$ are considered. For a broad class of $p(n)$ functions, we show that there exists an algorithm producing a schedule with makespan almost surely at most twice the optimum. To our knowledge, this is the first study of randomly generated graphs in the context of scheduling in the considered model. For unrelated machines, an FPTAS for $R2|G = \text{bipartite}|C_{max}$ is provided. We also show that there is no algorithm with approximation ratio $O(n^b p_{max}^{1-\epsilon})$, even for $Rm|G = \text{bipartite}|C_{max}$ with $m \ge 3$, for any $\epsilon > 0$ and $b > 0$, unless $P = NP$.
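
As a small illustration of the scheduling model only (not of the paper's algorithms or guarantees), the sketch below checks an assignment against an incompatibility relation and computes its makespan; the instance is invented and uses identical machines, a simplification of the uniform/unrelated settings studied in the paper.

    from itertools import product

    # Hypothetical toy instance: 3 identical machines, job lengths p[j],
    # and an incompatibility relation given as job pairs that must not
    # share a machine.
    p = [4, 2, 3, 5, 1]
    incompatible = {(0, 1), (2, 3), (1, 4)}
    m = 3

    def feasible(assign):
        # no two incompatible jobs on the same machine
        return all(assign[a] != assign[b] for a, b in incompatible)

    def makespan(assign):
        load = [0] * m
        for j, i in enumerate(assign):
            load[i] += p[j]
        return max(load)

    best = min((a for a in product(range(m), repeat=len(p)) if feasible(a)),
               key=makespan)
    print("assignment:", best, "makespan:", makespan(best))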
Given $n$ jobs with release dates, deadlines, and processing times, we consider the problem of scheduling them on $m$ parallel machines so as to minimize the total energy consumed. Machines can enter a sleep state, in which they consume no energy. Each machine requires $Q$ units of energy to awaken from the sleep state, and in its active state it can process jobs and consumes one unit of energy per unit of time. We allow for preemption and migration of jobs and provide the first constant-factor approximation algorithm for this problem.
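
To make the energy model concrete, here is a small hedged sketch of the accounting it implies for a single machine (my own illustration, not the paper's algorithm): processing costs one energy unit per time unit, a wake-up costs $Q$, and during an idle gap of length $g$ it is cheaper to keep the machine active when $g < Q$ and to sleep otherwise.

    # Hypothetical busy intervals of one machine and a wake-up cost Q.
    Q = 5
    busy = [(0, 4), (6, 9), (20, 22)]

    def machine_energy(busy, Q):
        # processing energy plus one assumed initial wake-up from sleep
        energy = sum(e - s for s, e in busy) + Q
        for (_, end_prev), (start_next, _) in zip(busy, busy[1:]):
            gap = start_next - end_prev
            energy += min(gap, Q)   # stay active through the gap vs. sleep and re-awaken
        return energy

    print(machine_energy(busy, Q))  # 9 + 5 + min(2, 5) + min(11, 5) = 21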
This paper studies distributionally robust optimization (DRO) when the ambiguity set is given by moment constraints on the distributions. The objective and constraints are polynomials in the decision variables. We reformulate the DRO problem with equivalent moment conic constraints. Under some general assumptions, we prove that the DRO problem is equivalent to a linear optimization problem over moment and PSD polynomial cones. A moment-SOS relaxation method is proposed to solve it, and its asymptotic and finite convergence are shown under certain assumptions. Numerical examples are presented to show how to solve DRO problems.
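
In generic notation (an illustrative gloss in my own symbols, not the paper's exact formulation), a moment-based DRO problem takes the form

    $\min_{x} \; \max_{\mu \in \mathcal{M}} \; E_{\mu}[f(x, \xi)]$, with $\mathcal{M} = \{\mu : E_{\mu}[g_i(\xi)] \le b_i, \; i = 1, \dots, k\}$,

where $f$ is polynomial in the decision variable $x$ and the ambiguity set $\mathcal{M}$ collects all distributions of the uncertain parameter $\xi$ consistent with the given moment bounds; the reformulation and moment-SOS relaxation described above operate on problems of this kind.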
To ensure a successful bid while maximizing profits, generation companies (GENCOs) need a self-scheduling strategy that can cope with a variety of scenarios. Distributionally robust optimization (DRO) is a good choice because it can provide an adjustable self-scheduling strategy for GENCOs in an uncertain environment, balancing robustness and economy better than strategies derived from robust optimization (RO) and stochastic programming (SP). In this paper, a novel moment-based DRO model with conditional value-at-risk (CVaR) is proposed to solve the self-scheduling problem under electricity price uncertainty. Such DRO models are usually translated into semidefinite programs (SDPs) for solution; however, solving large-scale SDPs requires considerable computational time and resources. To address this shortcoming, two effective approximate models are proposed: one based on vector splitting and another based on the alternating direction method of multipliers (ADMM). Both greatly reduce the computation time and resources required, and the second approximate model needs only the information of the current area in each step of the solution, so information privacy is guaranteed. Simulations on three IEEE test systems are conducted to demonstrate the correctness and effectiveness of the proposed DRO model and the two approximate models.
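
For reference, the conditional value-at-risk used in this model is the standard Rockafellar-Uryasev risk measure; at confidence level $\alpha$ it can be written as

    $\mathrm{CVaR}_{\alpha}(X) = \min_{t \in \mathbb{R}} \left\{ t + \tfrac{1}{1-\alpha} E[(X - t)^{+}] \right\}$,

and this convex minimization form is what allows it to be embedded in the SDP-based reformulations mentioned above.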
We propose kernel distributionally robust optimization (Kernel DRO) using insights from robust optimization theory and functional analysis. Our method uses reproducing kernel Hilbert spaces (RKHSs) to construct a wide range of convex ambiguity sets, which can be generalized to sets based on integral probability metrics and finite-order moment bounds. This perspective unifies multiple existing robust and stochastic optimization methods. We prove a theorem that generalizes the classical duality in the mathematical problem of moments. Enabled by this theorem, we reformulate the maximization with respect to measures in DRO into a dual program that searches over RKHS functions. Using universal RKHSs, the theorem applies to a broad class of loss functions, lifting common limitations such as the restriction to polynomial losses and the need for a known Lipschitz constant. We then establish a connection between DRO and stochastic optimization with expectation constraints. Finally, we propose practical algorithms based on both batch convex solvers and stochastic functional gradients, which apply to general optimization and machine learning tasks.
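
As a rough gloss in my own notation (covering only one of the ambiguity sets the method admits), a kernel DRO problem built from an MMD ball around the empirical distribution $\hat{P}$ reads

    $\min_{x} \; \sup_{P :\, \mathrm{MMD}_k(P, \hat{P}) \le \epsilon} \; E_{P}[\ell(x, \xi)]$,

where $\mathrm{MMD}_k$ is the integral probability metric induced by the RKHS with kernel $k$; the duality theorem described above turns the inner supremum over measures into a search over RKHS functions.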
