The growing size of data center and HPC networks poses unprecedented requirements on the scalability of simulation infrastructure. The ability to simulate such large-scale interconnects on a simple PC would facilitate research efforts. Unfortunately, as we first show in this work, existing shared-memory packet-level simulators do not scale to the sizes of the largest networks considered today. We then present a feasibility analysis and a set of enhancements that enable the simple packet-level simulator htsim to scale to unprecedented simulation sizes on a single PC. Our code is available online and can be used to design novel schemes in the coming era of omnipresent data centers and HPC clusters.
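
To make the above concrete, here is a minimal, hypothetical sketch of the event loop at the heart of any packet-level discrete-event simulator; it is not htsim's actual code, and the Simulator and send names are purely illustrative. Per-packet events are what give such simulators both their fidelity and their scalability problem: at data-center scale, the event heap itself becomes the bottleneck.

    import heapq

    class Simulator:
        def __init__(self):
            self.now = 0.0
            self.events = []   # min-heap of (time, seq, callback)
            self.seq = 0       # tie-breaker so the heap never compares callbacks

        def schedule(self, delay, callback):
            heapq.heappush(self.events, (self.now + delay, self.seq, callback))
            self.seq += 1

        def run(self, until):
            # Pop events in timestamp order until the horizon is reached.
            while self.events and self.events[0][0] <= until:
                self.now, _, callback = heapq.heappop(self.events)
                callback()

    sim = Simulator()

    def send(src, dst, pkt_id, latency_us=1.0):
        # Model a packet as a single delivery event after the link latency.
        sim.schedule(latency_us,
                     lambda: print(f"t={sim.now:.1f}us pkt {pkt_id} {src}->{dst}"))

    for i in range(3):
        send("h0", "h1", i)
    sim.run(until=10.0)

In a full simulator, each delivery callback would enqueue the packet at the next hop, so simulating millions of hosts means keeping millions of in-flight events live at once.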
To keep up with demand, servers will scale up to handle hundreds of thousands of clients simultaneously. Much of the community's focus has been on scaling servers in terms of aggregate traffic intensity (packets transmitted per second). However, ...
The performance of large-scale distributed compute systems is adversely impacted by stragglers when the execution time of a job is uncertain. To manage stragglers, we consider a multi-fork approach for job scheduling, where additional parallel server ...
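
As a rough illustration of the multi-fork idea in that abstract, the snippet below forks k replicas of a job with uncertain duration and takes whichever replica finishes first. All names (job, run_with_forks) are invented for this sketch and do not come from the paper.

    import concurrent.futures as cf
    import random, time

    def job(replica_id):
        # Simulated uncertain execution time -- the source of stragglers.
        time.sleep(random.uniform(0.1, 2.0))
        return replica_id

    def run_with_forks(k=3):
        pool = cf.ThreadPoolExecutor(max_workers=k)
        # Fork k identical replicas of the same job.
        futures = [pool.submit(job, r) for r in range(k)]
        # Block only until the first replica completes.
        done, _ = cf.wait(futures, return_when=cf.FIRST_COMPLETED)
        winner = next(iter(done)).result()
        # Best-effort cleanup: running replicas cannot be killed from here,
        # but still-queued ones are cancelled (cancel_futures needs Python 3.9+).
        pool.shutdown(wait=False, cancel_futures=True)
        return winner

    print("first replica to finish:", run_with_forks())

The trade-off the paper studies follows directly from this pattern: extra replicas cut tail latency but consume server capacity that could serve other jobs.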
There are several approaches to analyse the worst-case response times of sporadic packets transmitted over priority-preemptive wormhole networks. In this paper, we provide an overview of the different approaches, discuss their strengths and weaknesses ...
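
For context, analyses of this kind typically compute a worst-case response time as the fixed point of an interference recurrence; a representative form (given here as an illustration, not the exact equation of any one surveyed approach) is:

    R_i^{(n+1)} = C_i + \sum_{j \in hp(i)} \left\lceil \frac{R_i^{(n)} + J_j}{T_j} \right\rceil C_j

where C_i is the no-contention latency of a packet of flow i, hp(i) is the set of higher-priority flows whose routes share a link with flow i, and T_j and J_j are the period and release jitter of interfering flow j. The iteration starts from R_i^{(0)} = C_i and stops either at convergence (yielding the worst-case response time) or once the deadline is exceeded; the surveyed approaches differ chiefly in how they bound direct and indirect interference in this sum.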
Eliminating unnecessary exposure is a principle of server security. The huge IPv6 address space enhances security by making scanning infeasible. However, with recent advances in IPv6 scanning technology, network scanning is again threatening server ...
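
The infeasibility claim is easy to sanity-check with back-of-the-envelope arithmetic; the probe rate below is an assumption chosen purely for illustration.

    # Even a single IPv6 /64 subnet holds 2**64 addresses,
    # far beyond what brute-force probing can cover.
    probes_per_second = 1_000_000          # assumed fast scanner
    addresses_in_slash64 = 2 ** 64

    seconds = addresses_in_slash64 / probes_per_second
    years = seconds / (365 * 24 * 3600)
    print(f"exhaustive probe of one /64 at 1M pps: ~{years:,.0f} years")
    # ~585,000 years -- which is why the recent scanning advances the
    # abstract alludes to rely on hitlists and address-structure
    # inference rather than exhaustive sweeps.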
Emerging data analysis involves the ingestion and exploration of new data sets, the application of complex functions, and frequent query revisions based on observing prior query answers. We call this new type of analysis evolutionary analytics and identify ...