
Price Optimization with Practical Constraints

Added by Alvin Lim
Publication date: 2021
Language: English





In this paper, we study a retailer price optimization problem that includes two practical constraints: a maximum number of price changes and a minimum amount of price change (if a change is recommended). We provide a closed-form formula for the Euclidean projection onto the feasible set defined by these two constraints, based on which a simple gradient projection algorithm is proposed to solve the price optimization problem. We study the convergence and solution quality of the proposed algorithm. We extend the base model to include upper/lower bounds on the individual product prices and solve it with some adjustments to the gradient projection algorithm. Numerical results are reported to demonstrate the performance of the proposed algorithm.
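To make the projection step concrete, here is a minimal Python sketch (not the authors' code; the function names, toy demand model, and all numbers are illustrative). For each coordinate it finds the nearest feasible "changed" price, then keeps at most K changes, and only those that actually reduce the squared distance; the gradient projection loop alternates an ascent step on revenue with this projection.

```python
import numpy as np

def project(y, p0, K, delta):
    """Euclidean projection of y onto the set of price vectors that differ
    from the current prices p0 in at most K coordinates, with every changed
    coordinate moved by at least delta (illustrative sketch)."""
    d = y - p0
    # nearest feasible value for each coordinate if we do change it
    changed = np.where(np.abs(d) >= delta, y, p0 + delta * np.sign(d))
    # savings in squared distance from changing coordinate i vs. keeping p0_i
    save = d**2 - (y - changed)**2
    x = p0.copy()
    top_k = np.argsort(save)[::-1][:K]     # best K candidate changes
    keep = top_k[save[top_k] > 0]          # only changes that actually help
    x[keep] = changed[keep]
    return x

def gradient_projection(grad_revenue, p0, K, delta, step=0.01, iters=500):
    """Projected gradient ascent on revenue over the constrained set."""
    p = p0.copy()
    for _ in range(iters):
        p = project(p + step * grad_revenue(p), p0, K, delta)
    return p

# toy linear demand d_i = a_i - b_i * p_i, revenue R(p) = sum_i p_i * d_i(p)
a, b = np.array([10.0, 8.0, 12.0]), np.array([1.0, 0.8, 1.5])
grad = lambda p: a - 2 * b * p
p_star = gradient_projection(grad, p0=np.array([4.0, 4.0, 4.0]), K=2, delta=0.5)
```

A coordinate is worth changing only when it is pulled more than delta/2 away from its current price, which is exactly what the positive-savings test captures.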



Related Research

In this paper, we address a variant of the marketing mix optimization (MMO) problem that is commonly encountered in many industries, e.g., the retail and consumer packaged goods (CPG) industries. This problem requires that the spend for each marketing activity, if adjusted, change by a non-negligible amount (a minimum change) and that the total number of activities with spend changes be limited (a maximum number of changes). With these two additional practical requirements, the original resource allocation problem is formulated as a mixed-integer nonlinear program (MINLP). Given the size of a realistic problem in an industrial setting, state-of-the-art integer programming solvers may not be able to solve it to optimality in a straightforward way within a reasonable amount of time. Hence, we propose a systematic reformulation to ease the computational burden. Computational tests show significant improvements in the solution process.
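As a rough illustration of how the two requirements can be encoded (a generic big-M disjunctive formulation, not the reformulation proposed in the paper; the model names and numbers below are invented), consider the following Pyomo sketch:

```python
import pyomo.environ as pyo

activities = range(5)
base = {i: 10.0 for i in activities}            # current spend per activity
MIN_CHG, BIG_M, MAX_CHANGES = 2.0, 100.0, 3

m = pyo.ConcreteModel()
m.x = pyo.Var(activities, bounds=(0, 100))      # new spend levels
m.z = pyo.Var(activities, domain=pyo.Binary)    # 1 if spend on i changes
m.s = pyo.Var(activities, domain=pyo.Binary)    # 1 if the change is upward

# no change (z=0) pins the spend to its current value
m.fix_up = pyo.Constraint(activities, rule=lambda m, i:
                          m.x[i] - base[i] <= BIG_M * m.z[i])
m.fix_dn = pyo.Constraint(activities, rule=lambda m, i:
                          base[i] - m.x[i] <= BIG_M * m.z[i])
# a change (z=1) must move spend by at least MIN_CHG, up (s=1) or down (s=0)
m.min_up = pyo.Constraint(activities, rule=lambda m, i:
                          m.x[i] - base[i] >= MIN_CHG * m.z[i] - BIG_M * (1 - m.s[i]))
m.min_dn = pyo.Constraint(activities, rule=lambda m, i:
                          base[i] - m.x[i] >= MIN_CHG * m.z[i] - BIG_M * m.s[i])
# at most MAX_CHANGES activities may have their spend adjusted
m.count = pyo.Constraint(expr=sum(m.z[i] for i in activities) <= MAX_CHANGES)
# a concave response curve makes the model a MINLP (placeholder objective)
m.obj = pyo.Objective(expr=sum(pyo.log(1 + m.x[i]) for i in activities),
                      sense=pyo.maximize)
```

At realistic scale the binaries and the nonlinear response interact badly, which is why a careful reformulation of the kind the paper pursues matters.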
In this paper, we solve the multiple-product price optimization problem under interval uncertainty in the price sensitivity parameters of the demand function. The objective is to maximize the overall revenue of the firm, where the decision variables are the prices of the products the firm supplies. We propose an approach that yields optimal solutions under different variations of the estimated price sensitivity parameters. We adopt a robust optimization approach by building a data-driven uncertainty set for the parameters and then constructing a deterministic counterpart of the robust optimization model. The numerical results show that two objectives are fulfilled: the method reflects the uncertainty embedded in the parameter estimates, and an interval is obtained for the optimal prices. We also conducted a simulation study against which we compared the results of our approach. The comparison shows that although robust optimization is often deemed conservative, the proposed approach loses little relative to the simulation results.
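A toy sketch of the robust idea, under the simplifying assumption of an independent linear demand model d_i = a_i - b_i * p_i with each sensitivity b_i known only to lie in an interval (all data invented): for nonnegative prices, revenue is decreasing in b_i, so the worst case over the box is attained at the upper endpoints and the robust counterpart becomes an ordinary deterministic problem.

```python
import numpy as np
from scipy.optimize import minimize

a = np.array([10.0, 8.0, 12.0])       # demand intercepts
b_lo = np.array([0.8, 0.6, 1.2])      # sensitivity lower bounds
b_hi = np.array([1.2, 1.0, 1.8])      # sensitivity upper bounds

def worst_case_revenue(p):
    # for p >= 0, revenue falls as b grows, so the box worst case is b_hi
    return np.sum(p * (a - b_hi * p))

res = minimize(lambda p: -worst_case_revenue(p),
               x0=np.ones(3), bounds=[(0.0, None)] * 3)
p_robust = res.x
# solving at the two interval endpoints gives a price interval per product
p_interval = np.stack([a / (2 * b_hi), a / (2 * b_lo)])
```

In this toy model the robust prices sit at the conservative end of the per-product interval [a_i/(2*b_hi_i), a_i/(2*b_lo_i)], which is the kind of price interval the abstract describes.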
We propose a new distributed optimization algorithm for solving a class of constrained optimization problems in which (a) the objective function is separable (i.e., the sum of local objective functions of agents), (b) the optimization variables of distributed agents, which are subject to nontrivial local constraints, are coupled by global constraints, and (c) only noisy observations are available to estimate (the gradients of) local objective functions. In many practical scenarios, agents may not be willing to share their optimization variables with others. For this reason, we propose a distributed algorithm that does not require the agents to share their optimization variables with each other; instead, each agent maintains a local estimate of the global constraint functions and shares the estimate only with its neighbors. These local estimates of the constraint functions are updated using a consensus-type algorithm, while the local optimization variables of each agent are updated using a first-order method based on noisy gradient estimates. We prove that, when the agents adopt the proposed algorithm, their optimization variables converge with probability 1 to an optimal point of an approximate problem obtained via the penalty method.
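A single-machine toy simulation, loosely in the spirit of the scheme described above (not the paper's algorithm; the graph, objectives, and constants are invented): each agent tracks the network-wide constraint value by dynamic average consensus, takes noisy penalized gradient steps, and never shares its own variable.

```python
import numpy as np

rng = np.random.default_rng(0)
n, T, step, rho = 4, 5000, 0.01, 10.0
targets = np.array([3.0, 1.0, 2.0, 4.0])   # minimizers of the local objectives
C = 6.0                                    # global coupling: sum_i x_i <= C
# Metropolis weights for a 4-node ring graph (doubly stochastic)
W = np.array([[1/3, 1/3, 0,   1/3],
              [1/3, 1/3, 1/3, 0  ],
              [0,   1/3, 1/3, 1/3],
              [1/3, 0,   1/3, 1/3]])

x = np.zeros(n)   # each agent's private variable (never shared)
s = x.copy()      # each agent's local estimate of the network mean of x

for _ in range(T):
    noisy_grad = 2 * (x - targets) + rng.normal(0, 0.1, n)  # noisy local gradients
    penalty_grad = rho * np.maximum(0.0, n * s - C)         # uses the local estimate
    x_new = x - step * (noisy_grad + penalty_grad)
    s = W @ s + (x_new - x)   # dynamic average consensus on the coupling term
    x = x_new

print(x, x.sum())  # the sum settles near C; the penalty method is inexact
```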
We study the problem of learning a linear model to set the reserve price in an auction, given contextual information, in order to maximize expected revenue on the seller side. First, we show that it is not possible to solve this problem in polynomial time unless the Exponential Time Hypothesis fails. Second, we present a strong mixed-integer programming (MIP) formulation for this problem, which is capable of exactly modeling the nonconvex and discontinuous expected reward function. Moreover, we show that this MIP formulation is ideal (i.e., the strongest possible formulation) for the revenue function of a single impression. Since it can be computationally expensive to solve the MIP formulation exactly in practice, we also study the performance of its linear programming (LP) relaxation. Though it may work well in practice, we show that, unfortunately, in the worst case the optimal objective of the LP relaxation can be O(number of samples) times larger than the optimal objective of the true problem. Finally, we present computational results showcasing that the MIP formulation, along with its LP relaxation, achieves superior in- and out-of-sample performance compared to state-of-the-art algorithms on both real and synthetic datasets. More broadly, we believe this work offers an indication of the strength of optimization methodologies like MIP for exactly modeling intrinsic discontinuities in machine learning problems.
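The discontinuity being modeled is easy to state: in a second-price auction, raising the reserve past the highest bid makes revenue drop to zero instantly. A sketch (variable names are ours, toy data invented):

```python
import numpy as np

def reserve_revenue(r, b1, b2):
    """Seller revenue in a second-price auction with reserve r, given the
    highest bid b1 and second-highest bid b2 (b1 >= b2 >= 0). Piecewise,
    with a jump to 0 at r = b1 -- the part a MIP can model exactly."""
    if r > b1:
        return 0.0          # reserve prices out the highest bidder: no sale
    return max(b2, r)       # winner pays the larger of reserve and runner-up

# with a linear model r(x) = w @ x over contexts x, the empirical objective
# is a sum of these discontinuous pieces over impressions
X = np.array([[1.0, 0.2], [1.0, 0.9]])   # contextual features per impression
bids = [(5.0, 3.0), (7.0, 4.0)]          # (b1, b2) per impression
w = np.array([2.0, 3.0])
total = sum(reserve_revenue(x @ w, b1, b2) for x, (b1, b2) in zip(X, bids))
```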
Zizhuo Wang (2012)
In this paper, we propose two algorithms for solving convex optimization problems with linear ascending constraints. When the objective function is separable, we propose a dual method that terminates in a finite number of iterations. In particular, the worst-case complexity of our dual method improves over the best-known result for this problem in Padakandla and Sundaresan [SIAM J. Optimization, 20 (2009), pp. 1185-1204]. We then propose a gradient projection method to solve a more general class of problems in which the objective function is not necessarily separable. Numerical experiments show that both algorithms perform well on test problems.
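For concreteness, here is what linear ascending constraints look like on a small instance, solved with a general-purpose method rather than the paper's specialized dual or gradient projection algorithms (all data invented):

```python
import numpy as np
from scipy.optimize import LinearConstraint, minimize

n = 4
alpha = np.array([1.0, 2.5, 4.0, 6.0])   # nondecreasing thresholds
A = np.tril(np.ones((n, n)))             # row k (1-indexed) = prefix sum x_1+...+x_k
constraints = [LinearConstraint(A[:-1], lb=alpha[:-1], ub=np.inf),   # ascending
               LinearConstraint(A[-1:], lb=alpha[-1], ub=alpha[-1])] # total pinned

w = np.array([1.0, 2.0, 0.5, 1.5])
objective = lambda x: np.sum(w * x**2)   # separable convex objective

res = minimize(objective, x0=np.full(n, alpha[-1] / n),
               method="trust-constr", constraints=constraints,
               bounds=[(0.0, None)] * n)
print(res.x, A @ res.x)
```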
