
Asymptotic optimality of the triangular lattice for a class of optimal location problems

Posted by Riccardo Cristoferi
Publication date: 2020
Research language: English





We prove an asymptotic crystallization result in two dimensions for a class of nonlocal particle systems. To be precise, we consider the best approximation with respect to the 2-Wasserstein metric of a given absolutely continuous probability measure $f\,\mathrm{d}x$ by a discrete probability measure $\sum_i m_i \delta_{z_i}$, subject to a constraint on the particle sizes $m_i$. The locations $z_i$ of the particles, their sizes $m_i$, and the number of particles are all unknowns of the problem. We study a one-parameter family of constraints. This is an example of an optimal location problem (or an optimal sampling or quantization problem) and it has applications in economics, signal compression, and numerical integration. We establish the asymptotic minimum value of the (rescaled) approximation error as the number of particles goes to infinity. In particular, we show that for the constrained best approximation of the Lebesgue measure by a discrete measure, the discrete measure whose support is a triangular lattice is asymptotically optimal. In addition, we prove an analogous result for a problem where the constraint is replaced by a penalization. These results can also be viewed as the asymptotic optimality of the hexagonal tiling for an optimal partitioning problem. They generalise the crystallization result of Bourne, Peletier and Theil (Communications in Mathematical Physics, 2014) from a single particle system to a class of particle systems, and prove a case of a conjecture by Bouchitté, Jimenez and Mahadevan (Journal de Mathématiques Pures et Appliquées, 2011). Finally, we prove a crystallization result which states that optimal configurations with energy close to that of a triangular lattice are geometrically close to a triangular lattice.
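To fix notation, the constrained quantization problem described above can be written schematically as

$$ \min\Big\{\, W_2\Big(f\,\mathrm{d}x,\ \sum_{i=1}^{N} m_i\,\delta_{z_i}\Big) \ :\ N \in \mathbb{N},\ z_i \in \mathbb{R}^2,\ m_i \ge 0,\ \sum_{i=1}^{N} m_i = 1,\ (m_1,\dots,m_N)\ \text{admissible} \,\Big\}, $$

where $W_2$ denotes the 2-Wasserstein distance and "admissible" stands for the size constraint on the $m_i$. The exact one-parameter family of constraints is not specified in the abstract, so this display is only an illustrative template rather than the paper's precise formulation.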


Read also

126 - Jessica Martin 2021
What type of delegation contract should be offered when facing a risk of the magnitude of the pandemic we are currently experiencing, and how does the likelihood of an exogenous early termination of the relationship modify the terms of a full-commitment contract? We study these questions by considering a dynamic principal-agent model that naturally extends the classical Holmström-Milgrom setting to include a risk of default whose origin is independent of the inherent agency problem. We obtain an explicit characterization of the optimal wage along with the optimal action provided by the agent. The optimal contract is linear: it offers both a fixed share of the output, as in the standard shutdown-free Holmström-Milgrom model, and a linear prevention mechanism that is proportional to the random lifetime of the contract. We then tweak the model to add a possibility for risk mitigation through investment and study its optimality.
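As a rough illustration of the contract structure described above (the notation here is not taken from the paper: $X$ denotes the output process, $\tau$ the exogenous default time, and $T$ the contractual horizon), such a linear contract takes the schematic form

$$ \xi \;=\; \alpha \;+\; \beta\, X_{\tau \wedge T} \;+\; \gamma\,(\tau \wedge T), $$

i.e. a constant part, a fixed share $\beta$ of the output, and a prevention component proportional to the realized lifetime of the contract.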
This paper studies a class of non-Markovian singular stochastic control problems, for which we provide a novel probabilistic representation. The solution of such a control problem is proved to identify with the solution of a $Z$-constrained BSDE, with dynamics associated to a non-singular underlying forward process. Due to the non-Markovian environment, our main argumentation relies on the use of comparison arguments for path-dependent PDEs. Our representation allows us in particular to quantify the regularity of the solution to the singular stochastic control problem in terms of the space and time initial data. Our framework also extends to the consideration of degenerate diffusions, leading to the representation of the solution as the infimum of solutions to $Z$-constrained BSDEs. As an application, we study the utility maximisation problem with transaction costs for non-Markovian dynamics.
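For readers less familiar with the object named above, a generic $Z$-constrained BSDE can be written (in schematic notation that is not necessarily the paper's) as

$$ Y_t \;=\; \xi \;+\; \int_t^T g(s, Y_s, Z_s)\,\mathrm{d}s \;+\; K_T - K_t \;-\; \int_t^T Z_s\,\mathrm{d}W_s, \qquad Z_s \in \mathcal{C} \ \text{for a.e. } s \in [0,T], $$

where $\mathcal{C}$ is the constraint set for the $Z$-component, $K$ is a non-decreasing process enforcing the constraint, and one typically selects the minimal such solution.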
93 - Lei Guo, Jane Ye 2016
This paper introduces and studies the optimal control problem with equilibrium constraints (OCPEC). The OCPEC is an optimal control problem with a mixed state and control equilibrium constraint formulated as a complementarity constraint, and it can be seen as a dynamic mathematical program with equilibrium constraints. It provides a powerful modeling paradigm for many practical problems such as bilevel optimal control problems and dynamic principal-agent problems. In this paper, we propose weak, Clarke, Mordukhovich and strong stationarity conditions for the OCPEC. Moreover, we give some sufficient conditions to ensure that local minimizers of the OCPEC are Fritz John type weakly stationary, Mordukhovich stationary and strongly stationary, respectively. Unlike Pontryagin's maximum principle for the classical optimal control problem with equality and inequality constraints, a counterexample shows that for general OCPECs there may exist two sets of multipliers for the complementarity constraints. A condition under which these two sets of multipliers coincide is given.
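Schematically (in generic notation, not necessarily the authors'), an OCPEC couples a controlled state equation with a complementarity condition holding along the trajectory:

$$ \min_{u}\ \int_0^T F\big(t, x(t), u(t)\big)\,\mathrm{d}t \quad \text{s.t.} \quad \dot{x}(t) = \phi\big(t, x(t), u(t)\big), \qquad 0 \le G\big(x(t), u(t)\big) \ \perp\ H\big(x(t), u(t)\big) \ge 0, $$

where $a \perp b$ means $a^{\top} b = 0$; together with the nonnegativity of $G$ and $H$, this forces $G_i\,H_i = 0$ for every component $i$ at each time.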
In this article, we derive first-order necessary optimality conditions for a constrained optimal control problem formulated in the Wasserstein space of probability measures. To this end, we introduce a new notion of localised metric subdifferential for compactly supported probability measures, and investigate the intrinsic linearised Cauchy problems associated to non-local continuity equations. In particular, we show that when the velocity perturbations belong to the tangent cone to the convexification of the set of admissible velocities, the solutions of these linearised problems are tangent to the solution set of the corresponding continuity inclusion. We then make use of these novel concepts to provide a synthetic and geometric proof of the celebrated Pontryagin Maximum Principle for an optimal control problem with inequality final-point constraints. In addition, we propose sufficient conditions ensuring the normality of the maximum principle.
A class of optimal control problems of hybrid nature governed by semilinear parabolic equations is considered. These problems involve the optimization of switching times at which the dynamics, the integral cost, and the bounds on the control may change. First- and second-order optimality conditions are derived. The analysis is based on a reformulation involving a judiciously chosen transformation of the time domains. For autonomous systems and time-independent integral cost, we prove that the Hamiltonian is constant in time when evaluated along the optimal controls and trajectories. A numerical example is provided.
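A standard way to carry out the kind of time-domain transformation mentioned above (sketched here in generic notation, not necessarily the authors' exact construction) is to map the interval between consecutive switching times $\tau_{k-1} < \tau_k$ affinely onto a fixed reference interval $[k-1, k]$,

$$ t(s) \;=\; \tau_{k-1} + (s - k + 1)\,(\tau_k - \tau_{k-1}), \qquad s \in [k-1, k], $$

so that the switching times appear as ordinary optimization variables and, on each reference interval, the transformed dynamics carry the factor $(\tau_k - \tau_{k-1})$ in front of the original right-hand side.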