
Optimal Resource Allocation for Serverless Queries

Published by: Rathijit Sen
Publication date: 2021
Research field: Informatics engineering
Paper language: English





Optimizing resource allocation for analytical workloads is vital for reducing the cost of cloud data services. At the same time, it is incredibly hard for users to allocate resources per query in serverless processing systems, and they frequently misallocate by orders of magnitude. Unfortunately, prior work focused on predicting peak allocation while ignoring aggressive trade-offs between resource allocation and run-time; additionally, these methods fail to predict allocation for queries that have not been observed in the past. In this paper, we tackle both of these problems. We introduce a system for optimal resource allocation that can predict performance with aggressive trade-offs, for both new and previously observed queries. We introduce the notion of a performance characteristic curve (PCC) as a parameterized representation that compactly captures the relationship between resources and performance. To tackle training-data sparsity, we introduce a novel data-augmentation technique that efficiently synthesizes the entire PCC from a single run of the query. Lastly, through an extensive experimental evaluation over SCOPE big-data workloads at Microsoft, we demonstrate the advantages of a constrained loss function coupled with graph neural networks (GNNs) over traditional ML methods for capturing domain-specific behavior.
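The abstract does not spell out the PCC's functional form, so the sketch below is only illustrative: it assumes an Amdahl-style parameterization runtime(r) = a + b/r over an allocation r (e.g., SCOPE tokens), fits it to a few hypothetical profiled points, and then reads an aggressive trade-off off the fitted curve. The form, the sample data, and the 10% slack threshold are all assumptions, not the paper's exact parameterization.

```python
# Illustrative PCC sketch (assumed Amdahl-style form, not necessarily
# the parameterization used in the paper): runtime(r) = a + b / r.
import numpy as np
from scipy.optimize import curve_fit

def pcc(r, a, b):
    """Hypothetical PCC: serial time a plus parallelizable work b / r."""
    return a + b / r

# Hypothetical profiled points: (allocation in tokens, runtime in seconds).
allocations = np.array([10, 20, 50, 100, 200], dtype=float)
runtimes = np.array([610, 320, 145, 90, 62], dtype=float)

(a, b), _ = curve_fit(pcc, allocations, runtimes)

# With a fitted curve, aggressive trade-offs become explicit: e.g., the
# smallest allocation whose predicted runtime stays within 10% of the
# runtime at the peak allocation.
target = 1.10 * pcc(allocations.max(), a, b)
candidates = np.arange(1, int(allocations.max()) + 1)
best = next(r for r in candidates if pcc(r, a, b) <= target)
print(f"fitted a={a:.1f}s, b={b:.0f}s*tokens; pick {best} tokens")
```

Once such a curve is available (or, as the paper proposes, synthesized from a single run), choosing an allocation reduces to a one-dimensional search over the curve.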




Read also

We study online resource allocation in a cloud computing platform through a posted pricing mechanism: the cloud provider publishes a unit price for each resource type, which may vary over time; upon arrival at the cloud system, a cloud user either takes the current prices, renting resources to execute its job, or refuses the prices without running its job there. We design pricing functions based on the current resource utilization ratios, in a wide array of demand-supply relationships and resource occupation durations, and prove worst-case competitive ratios of the pricing functions in terms of social welfare. In the basic case of a single-type, non-recycled resource (i.e., allocated resources are not later released for reuse), we prove that our pricing function design is optimal, in that any other pricing function can only lead to a worse competitive ratio. Insights obtained from the basic cases are then used to generalize the pricing functions to more realistic cloud systems with multiple types of resources, where a job occupies allocated resources for a number of time slots until completion, at which time the resources are returned to the cloud resource pool.
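The abstract does not reproduce the pricing functions themselves. As a hedged illustration, a standard design from the online-knapsack literature prices a unit of resource exponentially in the current utilization ratio, assuming known per-unit valuation bounds [L, U]; the specific formula and parameters below are that generic design, not necessarily this paper's functions.

```python
# Sketch of a utilization-based posted price (generic exponential form
# from the online-knapsack literature, not the paper's exact design):
#   p(u) = (L / e) * (U * e / L) ** u,  u in [0, 1].
# Low utilization keeps prices near L (admit almost everyone); near
# capacity the price climbs to U, reserving units for high-value jobs.
import math

L_VAL, U_VAL = 1.0, 64.0  # assumed per-unit valuation bounds

def posted_price(utilization: float) -> float:
    """Unit price as a function of the current utilization ratio."""
    return (L_VAL / math.e) * (U_VAL * math.e / L_VAL) ** utilization

capacity, used = 1000.0, 0.0

def arrive(demand: float, per_unit_value: float) -> bool:
    """A user takes the current price iff its valuation covers the quote."""
    global used
    price = posted_price(used / capacity)
    if per_unit_value >= price and used + demand <= capacity:
        used += demand
        return True
    return False

print(arrive(100, 2.0), arrive(100, 0.5))  # price rises as usage grows
```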
In this paper we introduce a class of Markov decision processes that arise as a natural model for many renewable resource allocation problems. By extending results from the inventory control literature, we prove that they admit a closed-form solution, and we show how to exploit this structure to speed up its computation. We consider the application of the proposed framework to several problems arising in very different domains, and as part of the ongoing effort in the emerging field of Computational Sustainability we discuss in detail its application to the Northern Pacific Halibut marine fishery. Our approach is applied to a model based on real-world data, obtaining a policy with a guaranteed lower bound on the utility function that is structurally very different from the one currently employed.
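The closed form itself is not given in the abstract. A common structure in such inventory-style renewable-resource MDPs, and in classic fishery models, is a constant-escapement threshold policy: harvest the stock down to a fixed target S*, or not at all if the stock is already below it. The sketch below assumes that structure, with a hypothetical logistic growth model and an assumed target.

```python
# Hedged sketch of the threshold structure such problems often admit;
# the paper's exact closed form is not reproduced here.
def harvest_policy(stock: float, escapement_target: float) -> float:
    """Harvest amount under a constant-escapement policy."""
    return max(0.0, stock - escapement_target)

def grow(stock: float, r: float = 0.3, K: float = 100.0) -> float:
    """Hypothetical logistic stock growth with rate r and capacity K."""
    return stock + r * stock * (1.0 - stock / K)

S_STAR = 50.0  # assumed escapement target; in the paper it would be
               # derived in closed form from the reward and growth model
stock = 80.0
for year in range(5):
    h = harvest_policy(stock, S_STAR)
    stock = grow(stock - h)
    print(f"year {year}: harvested {h:.1f}, stock now {stock:.1f}")
```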
Multi-access edge computing (MEC) can enhance the computing capability of mobile devices, while non-orthogonal multiple access (NOMA) can provide high data rates. Combining these two strategies can effectively improve the network's spectrum and energy efficiency. In this paper, we investigate task delay minimization in multi-user NOMA-MEC networks, where multiple users can offload their tasks simultaneously through the same frequency band. We adopt the partial offloading policy, in which each user can partition its computation task into offloading and locally computing parts. We aim to minimize the task delay among users by optimizing their task partition ratios and offloading transmit power. The delay minimization problem is first formulated, and it is shown to be nonconvex. By carefully investigating its structure, we transform the original problem into an equivalent quasi-convex one. In this way, an iterative bisection-search algorithm is proposed to achieve the minimum task delay. To reduce the complexity of the proposed algorithm and evaluate its optimality, we further derive closed-form expressions for the optimal task partition ratio and offloading power for the case of two-user NOMA-MEC networks. Simulations demonstrate the convergence and optimality of the proposed algorithm and the effectiveness of the closed-form analysis.
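The bisection idea is standard for quasi-convex minimization and can be sketched independently of the NOMA-specific details: binary-search the delay T and test feasibility at each candidate. The feasibility oracle below is a stand-in for the paper's check over task partition ratios and offloading powers.

```python
# Bisection over a quasi-convex delay objective: the minimum delay is
# the smallest T for which the feasibility test passes.
def bisect_min_delay(feasible, t_lo=0.0, t_hi=10.0, eps=1e-6):
    """Return the smallest delay T (within eps) with feasible(T) True."""
    while t_hi - t_lo > eps:
        t_mid = 0.5 * (t_lo + t_hi)
        if feasible(t_mid):
            t_hi = t_mid   # T achievable: tighten the upper bound
        else:
            t_lo = t_mid   # T not achievable: raise the lower bound
    return t_hi

# Toy stand-in oracle: pretend any delay above 1.234 s is achievable;
# the real test would solve for partition ratios and powers under the
# NOMA rate and power constraints.
print(bisect_min_delay(lambda t: t >= 1.234))
```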
To address the rising demand for strong packet delivery guarantees in networking, we study a novel way to perform graph resource allocation. We first introduce allocation graphs, in which nodes can independently set local resource limits based on physical constraints or policy decisions. In this scenario we formalize the distributed path-allocation (PAdist) problem, which consists of allocating resources to paths considering only local on-path information -- importantly, not knowing which other paths could have an allocation -- while at the same time achieving the global property of never exceeding available resources. Our core contribution, the global myopic allocation (GMA) algorithm, is a solution to this problem. We prove that GMA can compute unconditional allocations for all paths on a graph, while never over-allocating resources. Further, we prove that GMA is Pareto optimal with respect to the allocation size, and it has linear complexity in the input size. Finally, we show with simulations that this theoretical result can indeed be applied to practical scenarios, as the resulting path allocations are large enough to fit the requirements of practically relevant applications.
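The GMA formula itself is not given in the abstract, and the toy rule below is explicitly not GMA. It only illustrates the flavor of a myopic, on-path computation that can never over-allocate, under the extra assumed premise that each node knows a local upper bound on how many paths may ever cross it.

```python
# Toy myopic path allocation (NOT the paper's GMA algorithm). Assume
# each node i knows, from local policy, an upper bound n_i on how many
# paths may ever cross it. Granting each path limit_i / n_i then never
# over-allocates, whatever the other paths do; a path receives the
# minimum such share along its route, using on-path information only.
def path_allocation(path, limit, max_paths):
    """Per-path allocation computed from on-path local limits only."""
    return min(limit[n] / max_paths[n] for n in path)

limit = {"a": 12.0, "b": 9.0, "c": 8.0}   # assumed local resource limits
max_paths = {"a": 3, "b": 4, "c": 3}      # assumed local path bounds
print(path_allocation(["a", "b", "c"], limit, max_paths))  # -> 2.25
```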
Generalized Benders decomposition (GBD) is a globally optimal algorithm for mixed integer nonlinear programming (MINLP) problems, which are NP-hard and can be widely found in the area of wireless resource allocation. The main idea of GBD is decomposing an MINLP problem into a primal problem and a master problem, which are iteratively solved until their solutions converge. However, a direct implementation of GBD is time- and memory-consuming. The main bottleneck is the high complexity of the master problem, which increases over the iterations. Therefore, we propose to leverage machine learning (ML) techniques to accelerate GBD, with the aim of decreasing the complexity of the master problem. Specifically, we utilize two different ML techniques, classification and regression, to deal with this acceleration task. In this way, a cut classifier and a cut regressor are learned, respectively, to distinguish between useful and useless cuts. Only useful cuts are added to the master problem, and thus the complexity of the master problem is reduced. By using a resource allocation problem in device-to-device communication networks as an example, we validate that the proposed method can reduce the computational complexity of GBD without loss of optimality and has strong generalization ability. The proposed method is applicable to solving various MINLP problems in wireless networks, since the designs are invariant across different problems.
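A hedged sketch of the cut-classification half of the idea: train a binary classifier on features of past cuts, then consult it before admitting a new cut to the master problem. The feature set and labels below are illustrative assumptions, not the paper's actual features.

```python
# Sketch of cut classification for accelerating GBD: only cuts the
# classifier predicts as "useful" are added to the master problem.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data gathered from past GBD runs: per-cut
# features (e.g., violation at the current point, cut depth, iteration
# index), label 1 if the cut was ever active at the optimum.
X_train = np.array([[0.9, 0.2, 1], [0.1, 0.8, 5], [0.7, 0.3, 2],
                    [0.05, 0.9, 8], [0.8, 0.1, 3], [0.2, 0.7, 6]])
y_train = np.array([1, 0, 1, 0, 1, 0])

cut_clf = LogisticRegression().fit(X_train, y_train)

def keep_cut(features) -> bool:
    """Admit the cut to the master problem only if predicted useful."""
    return bool(cut_clf.predict(np.asarray(features).reshape(1, -1))[0])

print(keep_cut([0.85, 0.15, 2]))  # likely kept
print(keep_cut([0.08, 0.85, 7]))  # likely dropped
```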
