
Design of Robust and Efficient Edge Server Placement and Server Scheduling Policies: Extended Version

Added by Peirui Cao
Publication date: 2021
Language: English





We study how to design edge server placement and server scheduling policies under workload uncertainty for 5G networks. We introduce a new metric, the resource pooling factor, to handle unexpected workload bursts; maximizing this metric provides a strong enhancement on top of robust optimization against workload uncertainty. Using both real and synthetic traces, we show that the proposed server placement and server scheduling policies not only demonstrate better robustness against workload uncertainty than existing approaches, but also significantly reduce the cost incurred by service providers. Specifically, to achieve a close-to-zero workload rejection rate, the proposed server placement policy reduces the number of required edge servers by about 25% compared with the state-of-the-art approach, and the proposed server scheduling policy reduces the energy consumption of edge servers by about 13% with little impact on service quality.
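The abstract does not spell out how the resource pooling factor is defined. As a rough, non-authoritative illustration, the sketch below assumes a simplified single-assignment model in which the factor is the largest uniform workload burst that every edge site can still absorb; the site names, base-station names, and fixed-assignment routing are our assumptions, not details from the paper.

def resource_pooling_factor(capacity, workload, assignment):
    # Assumed illustrative definition: the largest uniform burst factor eps such
    # that every edge site can still serve (1 + eps) times the nominal workload
    # routed to it.  capacity[s]: total capacity at site s, workload[b]: nominal
    # demand of base station b, assignment[b]: site serving b.
    site_load = {}
    for b, w in workload.items():
        site_load[assignment[b]] = site_load.get(assignment[b], 0.0) + w
    # The tightest site determines how large a burst the whole placement absorbs.
    return min(capacity[s] / load - 1.0 for s, load in site_load.items() if load > 0)

# Example with two edge sites and three base stations (made-up numbers):
capacity = {"site_A": 120.0, "site_B": 90.0}
workload = {"bs1": 50.0, "bs2": 40.0, "bs3": 60.0}
assignment = {"bs1": "site_A", "bs2": "site_A", "bs3": "site_B"}
print(resource_pooling_factor(capacity, workload, assignment))  # ~0.33, site_A binds

Under this reading, maximizing the pooling factor pushes the placement toward leaving balanced headroom at every site, which is consistent with the robustness behaviour the abstract reports.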



Related Research

In this paper, we present new algorithmic solutions for several constrained geometric server placement problems. We consider the problems of computing the 1-center and the obnoxious 1-center of a set of line segments, constrained to lie on a line segment, and the problem of computing the K-median of a set of points, constrained to lie on a line. The presented algorithms have applications in many types of distributed systems, as well as in various fields that make use of distributed systems for running some of their applications (such as chemistry, metallurgy, and physics).
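The abstract only names the problems it solves. To give a concrete feel for the constrained 1-center variant, here is a simple numerical sketch, almost certainly slower than the paper's algorithms: the distance from a moving point to a fixed segment is convex along the constraint segment, so the farthest-segment radius can be minimized by ternary search. Function names, iteration count, and the example coordinates are ours.

import math

def point_seg_dist(px, py, ax, ay, bx, by):
    # Euclidean distance from point (px, py) to segment (ax, ay)-(bx, by).
    dx, dy = bx - ax, by - ay
    L2 = dx * dx + dy * dy
    t = 0.0 if L2 == 0 else max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / L2))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def constrained_1center(segments, constraint, iters=100):
    # Point on `constraint` minimizing the maximum distance to `segments`.
    # That maximum is convex in the parameter t in [0, 1] along the constraint,
    # so ternary search converges to the optimum.
    (ax, ay), (bx, by) = constraint
    def radius(t):
        px, py = ax + t * (bx - ax), ay + t * (by - ay)
        return max(point_seg_dist(px, py, *p, *q) for p, q in segments)
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if radius(m1) < radius(m2):
            hi = m2
        else:
            lo = m1
    t = (lo + hi) / 2
    return (ax + t * (bx - ax), ay + t * (by - ay)), radius(t)

segs = [((0.0, 2.0), (1.0, 3.0)), ((4.0, -1.0), (5.0, 0.0))]
print(constrained_1center(segs, ((0.0, 0.0), (5.0, 0.0))))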
Asterisk and Open IMS both use the SIP signaling protocol, which allows them to be interconnected. To facilitate this, an ENUM server, which translates numbering addresses such as PSTN numbers (E.164) into URIs (Uniform Resource Identifiers), can be used. In this research, we interconnect an Open IMS server and an Asterisk server through an ENUM server, and then analyze the server performance and the PDD (Post Dial Delay) values produced by the system. The experiment shows that, for a call from an Open IMS user to an analog Asterisk telephone (FXS) with an arrival rate of 30 calls/sec at each server, the maximum PDD value is 493.656 ms. Open IMS can serve at most 30 calls/sec on a 1.55 GHz processor, while Asterisk on a 3.0 GHz processor can serve up to 55 calls/sec. The ENUM server, on a 1.15 GHz processor, can serve a maximum of 8156 queries/sec.
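For readers unfamiliar with ENUM, the standard mapping it performs (RFC 6116; general background, not a detail taken from this paper) reverses the E.164 digits, separates them with dots, and appends e164.arpa; the NAPTR records under the resulting domain then carry the SIP URI. A minimal sketch of the number-to-domain step:

def e164_to_enum_domain(number: str) -> str:
    # Map an E.164 number to the ENUM domain queried for NAPTR records (RFC 6116).
    digits = [c for c in number if c.isdigit()]     # drop '+' and separators
    return ".".join(reversed(digits)) + ".e164.arpa"

# Example with a hypothetical number:
print(e164_to_enum_domain("+44 1632 960083"))
# 3.8.0.0.6.9.2.3.6.1.4.4.e164.arpa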
One of the Internet's greatest strengths is the degree to which it facilitates access to any of its resources from users anywhere in the world. However, users in the developing world have complained of websites blocking their countries. We explore this phenomenon using a measurement study. With a combination of automated page loads, manual checking, and traceroutes, we can say with high confidence that some websites do block users from some regions. We cannot say with high confidence why, or even based on what criteria, they do so, except in some cases where the website states a reason. We do report qualitative evidence that fears of abuse and the costs of serving requests to some regions may play a role.
The performance of large-scale distributed compute systems is adversely impacted by stragglers when the execution time of a job is uncertain. To manage stragglers, we consider a multi-fork approach for job scheduling, where additional parallel servers are added at forking instants. We compute the job completion time and the cost of server utilization as functions of the forking instants and the number of additional servers, assuming that task processing times follow a shifted exponential distribution. We use this study to provide insights into the design of the forking instants and the number of additional servers to start. Numerical results demonstrate orders-of-magnitude improvements in cost in the regime of low completion times compared with prior work.
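The abstract summarizes the multi-fork model only briefly. The Monte Carlo sketch below uses a deliberately simplified single-job, single-fork variant (our assumption, not the paper's exact setup) to illustrate the completion-time versus server-cost trade-off under shifted exponential processing times; all parameter values are made up.

import random

def simulate_single_fork(delta, mu, t_fork, extra, runs=100_000, seed=0):
    # Single-fork toy model: the job starts on one server; if it has not finished
    # by t_fork, `extra` replicas are launched.  Processing times are shifted
    # exponential (shift delta, rate mu).  Replicas are assumed to be cancelled
    # once the first copy finishes.  Returns (mean completion time, mean cost),
    # where cost is the total server busy time.
    rng = random.Random(seed)
    shifted_exp = lambda: delta + rng.expovariate(mu)
    tot_T, tot_C = 0.0, 0.0
    for _ in range(runs):
        primary = shifted_exp()
        if primary <= t_fork:
            T, C = primary, primary
        else:
            finishes = [primary] + [t_fork + shifted_exp() for _ in range(extra)]
            T = min(finishes)                 # job done when the first copy finishes
            C = T + extra * (T - t_fork)      # extra replicas run from t_fork to T
        tot_T += T
        tot_C += C
    return tot_T / runs, tot_C / runs

print(simulate_single_fork(delta=1.0, mu=0.5, t_fork=2.0, extra=2))

Sweeping t_fork and extra in such a model shows the trade-off the paper analyzes: forking earlier or wider cuts completion time but increases the server-utilization cost.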
Popular dispatching policies such as join the shortest queue (JSQ), join the smallest work (JSW), and their power-of-two variants are used in load balancing systems where the instantaneous queue length or workload information at all queues, or a subset of them, can be queried. In situations where the dispatcher has an associated memory, one can minimize this query overhead by maintaining a list of idle servers to which jobs can be dispatched. Recent alternative approaches that do not require querying such information include cancel-on-start and cancel-on-complete replication policies. The downside of such policies, however, is that the servers must communicate the start or completion of each service to the dispatcher and must allow cancellation of redundant copies. In this work, we consider a load balancing environment where the dispatcher cannot query load information, does not have a memory, and cannot cancel any replica that it may have created. In such a rigid environment, we allow the dispatcher to append a server-side cancellation criterion to each job or its replica; a job or replica is served only if it satisfies the predefined criterion at the time of service. We focus on a criterion based on the waiting time experienced by a job or its replica and analyze several variants of this policy under the assumption of asymptotic independence of queues. The proposed policies are novel and perform remarkably well in spite of the rigid operating constraints.
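The waiting-time criterion is described only at a high level in the abstract. The toy workload simulation below (our construction, with illustrative parameters) shows how appending a waiting-time threshold to each replica throttles redundancy even though the dispatcher never queries load, keeps no memory, and never cancels a replica after dispatch.

import random

def simulate_wait_cancel(n=20, d=2, tau=3.0, lam_per_server=0.5, mu=1.0,
                         n_jobs=100_000, seed=1):
    # Each arriving job is replicated to d distinct random FCFS servers with a
    # waiting-time threshold tau attached; a replica is discarded at its server
    # unless the backlog it would wait behind is at most tau.  Accepted replicas
    # are served to completion (no cancellation).  Returns (mean response time
    # of completed jobs, fraction of jobs with no surviving replica).
    rng = random.Random(seed)
    backlog = [0.0] * n                         # remaining work at each server
    total_resp, served, dropped = 0.0, 0, 0
    for _ in range(n_jobs):
        dt = rng.expovariate(lam_per_server * n)        # next arrival
        backlog = [max(0.0, w - dt) for w in backlog]   # servers drain work
        size = rng.expovariate(mu)
        best = None
        for i in rng.sample(range(n), d):
            if backlog[i] <= tau:               # replica passes the criterion
                resp = backlog[i] + size
                backlog[i] += size
                best = resp if best is None else min(best, resp)
        if best is None:
            dropped += 1
        else:
            served += 1
            total_resp += best
    return total_resp / max(served, 1), dropped / n_jobs

print(simulate_wait_cancel())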