
Towards Million-Server Network Simulations on Just a Laptop

Published by: Maciej Besta
Publication date: 2021
Research field: Informatics engineering
Language: English


The growing size of data center and HPC networks poses unprecedented requirements on the scalability of simulation infrastructure. The ability to simulate such large-scale interconnects on a simple PC would facilitate research efforts. Unfortunately, as we first show in this work, existing shared-memory packet-level simulators do not scale to the sizes of the largest networks considered today. We then present a feasibility analysis and a set of enhancements that enable the simple packet-level htsim simulator to scale to unprecedented simulation sizes on a single PC. Our code is available online and can be used to design novel schemes in the coming era of omnipresent data centers and HPC clusters.
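The abstract does not detail the enhancements themselves, but the underlying technique, packet-level discrete-event simulation, is easy to illustrate. Below is a minimal Python sketch of such a core; the `Simulator` and `Link` classes and all parameter values are illustrative assumptions, not htsim's actual (C++) API.

```python
import heapq

# Minimal discrete-event, packet-level simulation core (illustrative only;
# htsim itself is a C++ codebase and its internals differ).
class Simulator:
    def __init__(self):
        self.now = 0.0
        self.events = []  # min-heap of (time, seq, callback)
        self.seq = 0      # tie-breaker so callbacks are never compared

    def schedule(self, delay, callback):
        heapq.heappush(self.events, (self.now + delay, self.seq, callback))
        self.seq += 1

    def run(self, until):
        while self.events and self.events[0][0] <= until:
            self.now, _, cb = heapq.heappop(self.events)
            cb()

class Link:
    """Point-to-point link: serialization delay + propagation latency,
    with packets serialized one after another (FIFO)."""
    def __init__(self, sim, bandwidth_bps, latency_s, dst):
        self.sim, self.bw, self.lat, self.dst = sim, bandwidth_bps, latency_s, dst
        self.free_at = 0.0  # time at which the link finishes its current packet

    def send(self, pkt_bytes):
        start = max(self.sim.now, self.free_at)
        self.free_at = start + pkt_bytes * 8 / self.bw
        self.sim.schedule(self.free_at + self.lat - self.sim.now,
                          lambda: self.dst(pkt_bytes))

sim = Simulator()
received = []
link = Link(sim, bandwidth_bps=100e9, latency_s=1e-6,
            dst=lambda n: received.append((sim.now, n)))
for _ in range(3):          # three back-to-back 1500-byte packets
    link.send(1500)
sim.run(until=1.0)
print(received)
```

A binary-heap event queue keeps scheduling at O(log n) per event; what limits scale on a single PC is the memory held by queued events and per-endpoint state, which is the kind of constraint a feasibility analysis of this sort has to quantify.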


Read also

To keep up with demand, servers will scale up to handle hundreds of thousands of clients simultaneously. Much of the focus of the community has been on scaling servers in terms of aggregate traffic intensity (packets transmitted per second). However, bottlenecks caused by the increasing number of concurrent clients, resulting in a large number of concurrent flows, have received little attention. In this work, we focus on identifying such bottlenecks. In particular, we define two broad categories of problems: admitting more packets into the network stack than can be handled efficiently, and increasing per-packet overhead within the stack. We show that these problems contribute to high CPU usage and network performance degradation in terms of aggregate throughput and RTT. Our measurement and analysis are performed in the context of the Linux networking stack, the most widely used publicly available networking stack. Further, we discuss the relevance of our findings to other network stacks. The goal of our work is to highlight considerations required in the design of future networking stacks to enable efficient handling of large numbers of clients and flows.
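As a rough illustration of the many-flows regime this study examines (and emphatically not the paper's measurement harness), the self-contained sketch below establishes many concurrent TCP connections through the local stack; `N` and all other values are illustrative:

```python
import socket

# Illustrative sketch: a local listener accepts N client connections, so the
# kernel must track 2*N flow endpoints at once. For large N, raise the open-
# file limit first (e.g. `ulimit -n`).
N = 1000  # illustrative; the paper considers hundreds of thousands of clients

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))   # ephemeral port
server.listen(N)
port = server.getsockname()[1]

clients, accepted = [], []
for _ in range(N):
    c = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    c.connect(("127.0.0.1", port))
    clients.append(c)
    a, _ = server.accept()
    accepted.append(a)

# Each live connection carries per-flow kernel state (socket buffers, timers,
# hash-table entries); per-packet work then scales with the number of flows.
print(f"{len(clients)} concurrent connections established")
for s in clients + accepted + [server]:
    s.close()
```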
The performance of large-scale distributed compute systems is adversely impacted by stragglers when the execution time of a job is uncertain. To manage stragglers, we consider a multi-fork approach for job scheduling, where additional parallel servers are added at forking instants. In terms of the forking instants and the number of additional servers, we compute the job completion time and the cost of server utilization when the task processing times are assumed to have a shifted exponential distribution. We use this study to provide insights into the scheduling design of the forking instants and the associated number of additional servers to be started. Numerical results demonstrate orders of magnitude improvement in cost in the regime of low completion times compared to prior work.
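The forking-instant/extra-server trade-off is easy to explore numerically. The Monte Carlo sketch below simulates the single-fork special case with shifted-exponential task times, assuming outstanding copies are cancelled once the job completes; all parameter values are illustrative, and the paper derives the corresponding quantities analytically for the general multi-fork case:

```python
import random

# Shifted exponential service time: X = shift + Exp(rate).
def shifted_exp(shift, rate):
    return shift + random.expovariate(rate)

def single_fork(t_fork, n_extra, shift=1.0, rate=0.5, trials=100_000):
    total_time = total_cost = 0.0
    for _ in range(trials):
        t = shifted_exp(shift, rate)
        if t <= t_fork:
            done, cost = t, t                      # finished before forking
        else:
            # At t_fork, launch n_extra replicas; the job ends at the
            # earliest finish, and all copies are cancelled at that point.
            replicas = [t_fork + shifted_exp(shift, rate) for _ in range(n_extra)]
            done = min([t] + replicas)
            cost = t_fork + (1 + n_extra) * (done - t_fork)  # server-time used
        total_time += done
        total_cost += cost
    return total_time / trials, total_cost / trials

for t_fork in (1.0, 2.0, 4.0):
    t, c = single_fork(t_fork, n_extra=2)
    print(f"fork at {t_fork}: mean completion {t:.2f}, mean server cost {c:.2f}")
```

Earlier forking lowers completion time but burns more server-time, which is exactly the trade-off the paper's analytical design targets.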
There are several approaches to analyse the worst-case response times of sporadic packets transmitted over priority-preemptive wormhole networks. In this paper, we provide an overview of the different approaches, discuss their strengths and weaknesses, and propose an approach that captures all effects considered by previous approaches while providing tight yet safe upper bounds for packet response times. We specifically address the problems created by buffering and backpressure in wormhole networks, which amplify the problem of indirect interference in a way that has not been considered by the early analysis approaches. Didactic examples and large-scale experiments with synthetically generated packet flow sets provide evidence of the strength of the proposed approach.
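For context, the early analyses the paper surveys build on the classic priority-preemptive response-time fixed point, R_i = C_i + Σ_{j∈hp(i)} ⌈R_i/T_j⌉·C_j. The sketch below implements that basic iteration over directly interfering higher-priority flows only; the buffering, backpressure, and indirect-interference effects that motivate the proposed approach are deliberately omitted:

```python
import math

# Basic response-time iteration for priority-preemptive wormhole flows
# (direct interference only; lower index = higher priority).
def response_time(C, T, shares_link, i, max_iter=1000):
    """C[i]: no-contention packet latency, T[i]: period of flow i.
    shares_link(i, j) -> True if the two flows contend for some link."""
    hp = [j for j in range(i) if shares_link(i, j)]
    R = C[i]
    for _ in range(max_iter):
        R_next = C[i] + sum(math.ceil(R / T[j]) * C[j] for j in hp)
        if R_next == R:
            return R          # converged: worst-case response time
        if R_next > T[i]:
            return None       # deadline (taken as the period) missed
        R = R_next
    return None

C = [2, 3, 4]          # illustrative packet latencies
T = [10, 15, 40]       # illustrative periods
print([response_time(C, T, lambda a, b: True, i) for i in range(3)])
```

With backpressure, a blocked packet keeps occupying buffers along its path, so even flows that share no link with it can be delayed; capturing that indirect interference safely is what the proposed analysis adds on top of this fixed point.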
Eliminating unnecessary exposure is a principle of server security. The huge IPv6 address space enhances security by making scanning infeasible; however, with recent advances in IPv6 scanning techniques, network scanning is again threatening server security. In this paper, we propose a new model named addressless server, which separates the server into an entrance module and a main service module, and assigns an IPv6 prefix instead of an IPv6 address to the main service module. The entrance module generates a legitimate IPv6 address under this prefix by encrypting the client address, so that the client can access the main server on a destination address that is different in each connection. In this way, the model provides isolation to the main server, prevents network scanning, and minimizes exposure. Moreover, it provides a novel framework that supports flexible load balancing, high availability, and other desirable features. The model is simple and does not require any modification to the client or the network. We implement a prototype, and experiments show that our model can prevent the main server from being scanned at a slight performance cost.
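A toy sketch of the address-generation idea follows: a keyed Feistel network encrypts a client identifier into the 64-bit interface ID under the service prefix, keeping the mapping invertible so the entrance module can recover the client. The key, prefix, and identifier packing here are illustrative assumptions, not the paper's exact construction:

```python
import hmac, hashlib, ipaddress, os, struct

KEY = os.urandom(16)                                   # illustrative key
PREFIX = ipaddress.IPv6Network("2001:db8:1:2::/64")    # documentation prefix

def _round(half: int, rnd: int) -> int:
    # HMAC-based Feistel round function over a 32-bit half-block.
    msg = struct.pack(">IB", half, rnd)
    return int.from_bytes(hmac.new(KEY, msg, hashlib.sha256).digest()[:4], "big")

def encrypt64(block: int) -> int:
    L, R = block >> 32, block & 0xFFFFFFFF
    for r in range(4):
        L, R = R, L ^ _round(R, r)
    return (L << 32) | R

def decrypt64(block: int) -> int:
    L, R = block >> 32, block & 0xFFFFFFFF
    for r in reversed(range(4)):
        L, R = R ^ _round(L, r), L
    return (L << 32) | R

def client_to_address(client_ip4: str, salt: int) -> ipaddress.IPv6Address:
    # Pack a (toy, IPv4) client address plus a per-connection salt into the
    # 64-bit interface ID, then encrypt it under the service prefix.
    ident = (int(ipaddress.IPv4Address(client_ip4)) << 32) | (salt & 0xFFFFFFFF)
    return ipaddress.IPv6Address(int(PREFIX.network_address) | encrypt64(ident))

addr = client_to_address("198.51.100.7", salt=1)
print(addr)  # differs per (client, salt); scanning the /64 is infeasible

# The entrance module can invert the mapping to recover the client.
recovered = decrypt64(int(addr) & 0xFFFFFFFFFFFFFFFF)
print(ipaddress.IPv4Address(recovered >> 32))  # -> 198.51.100.7
```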
Emerging data analysis involves the ingestion and exploration of new data sets, application of complex functions, and frequent query revisions based on observing prior query answers. We call this new type of analysis evolutionary analytics and identify its properties. This type of analysis is not well represented by current benchmark workloads. In this paper, we present a workload and identify several metrics to test system support for evolutionary analytics. Along with our metrics, we present methodologies for running the workload that capture this analytical scenario.