
Transmission Failure Analysis of Multi-Protection Routing in Data Center Networks with Heterogeneous Edge-Core Servers

Added by Jou-Ming Chang
Publication date: 2021
Language: English





The recently proposed RCube network is a cube-based server-centric data center network (DCN) containing two types of heterogeneous servers, called core and edge servers. Remarkably, it uses the latter as backup servers to deal with server failures and thus achieves high availability. This paper first points out that RCube is a suitable candidate topology for DCNs in edge computing. Three types of transmission among core and edge servers are distinguished according to the demands of applications for computation and instant response. We then employ protection routing to analyze transmission failures in RCube DCNs. Unlike traditional protection routing, which tolerates only a single link or node failure, we use a multi-protection routing scheme to improve the fault-tolerance capability. To configure a protection routing in a network, following Tapolcai's suggestion, we need to construct two completely independent spanning trees (CISTs). The logical graph of RCube, denoted by $L$-$RCube(n,m,k)$, is a network with a recursive structure: each basic building element consists of $n$ core servers and $m$ edge servers, and the order $k$ is the number of recursions applied in the structure. In this paper, we provide algorithms to construct $\min\{n,\lfloor(n+m)/2\rfloor\}$ CISTs in $L$-$RCube(n,m,k)$ for $n+m\geqslant 4$ and $n>1$. By combining the multiple CISTs, we can configure the desired multi-protection routing. In our simulation, we configure up to 10 protection routings for RCube DCNs; as far as we know, previous research has developed at most three protection routings in other network structures. Finally, from the simulation results, we summarize some crucial viewpoints for analyzing the transmission efficiency of DCNs with heterogeneous edge-core servers.
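
To make the role of CISTs concrete, the following minimal sketch (not the paper's construction) shows how two CISTs of the same graph can be turned into a primary and a backup next-hop table toward a chosen destination. Because the paths between any two vertices in the two trees are internally disjoint, the backup table offers an alternative route when the primary next hop or link fails. The graph, edge lists, and function names are illustrative assumptions; the example uses two known CISTs of the complete graph $K_4$ rather than an $L$-$RCube$ instance.

from collections import defaultdict, deque

def root_tree(tree_edges, dest):
    """Return {node: next hop toward dest} by breadth-first search over one tree."""
    adj = defaultdict(list)
    for u, v in tree_edges:
        adj[u].append(v)
        adj[v].append(u)
    next_hop, seen, queue = {}, {dest}, deque([dest])
    while queue:
        x = queue.popleft()
        for y in adj[x]:
            if y not in seen:
                seen.add(y)
                next_hop[y] = x   # y forwards packets for dest via x
                queue.append(y)
    return next_hop

def protection_tables(cists, dest):
    """One next-hop table per CIST; table 0 is the primary, the others are backups."""
    return [root_tree(t, dest) for t in cists]

# Two CISTs of K4 (vertices 0..3): the paths between any vertex pair are
# internally disjoint, so the derived routes never share an intermediate node.
cist1 = [(0, 1), (1, 2), (2, 3)]
cist2 = [(0, 2), (0, 3), (1, 3)]
primary, backup = protection_tables([cist1, cist2], dest=3)
print(primary)  # {2: 3, 1: 2, 0: 1} -> route 0-1-2-3
print(backup)   # {0: 3, 1: 3, 2: 0} -> route 0-3

In this simplified view, a node that loses its primary link simply switches to the backup table; the actual protection-routing rules handle failure notification and rerouting more carefully than this sketch does.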



Related research

Mobile devices with embedded sensors for data collection and environmental sensing create a basis for a cost-effective approach for data trading. For example, these data can be related to pollution and gas emissions, which can be used to check the compliance with national and international regulations. The current approach for IoT data trading relies on a centralized third-party entity to negotiate between data consumers and data providers, which is inefficient and insecure on a large scale. In comparison, a decentralized approach based on distributed ledger technologies (DLT) enables data trading while ensuring trust, security, and privacy. However, due to the lack of understanding of the communication efficiency between sellers and buyers, there is still a significant gap in benchmarking the data trading protocols in IoT environments. Motivated by this knowledge gap, we introduce a model for DLT-based IoT data trading over the Narrowband Internet of Things (NB-IoT) system, intended to support massive environmental sensing. We characterize the communication efficiency of three basic DLT-based IoT data trading protocols via NB-IoT connectivity in terms of latency and energy consumption. The model and analyses of these protocols provide a benchmark for IoT data trading applications.
System noise can negatively impact the performance of HPC systems, and the interconnection network is one of the main factors contributing to this problem. To mitigate this effect, adaptive routing sends packets on non-minimal paths if they are less congested. However, while this may mitigate interference caused by congestion, it also generates more traffic since packets traverse additional hops, causing in turn congestion on other applications and on the application itself. In this paper, we first describe how to estimate network noise. By following these guidelines, we show how noise can be reduced by using routing algorithms which select minimal paths with a higher probability. We exploit this knowledge to design an algorithm which changes the probability of selecting minimal paths according to the application characteristics. We validate our solution on microbenchmarks and real-world applications on two systems relying on a Dragonfly interconnection network, showing noise reduction and performance improvement.
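
As a rough illustration of raising the probability of minimal-path selection (a sketch, not the authors' algorithm), the snippet below forces the minimal path with probability p_min and otherwise falls back to a queue-length comparison in the style of UGAL adaptive routing; raising p_min for noise-sensitive applications reduces the extra hops that non-minimal routing injects. Function and parameter names are assumptions.

import random

def choose_path(minimal_queue, nonminimal_queue, p_min):
    """Pick 'minimal' or 'non-minimal' for the next packet."""
    if random.random() < p_min:
        return "minimal"  # bias: force the minimal path more often
    # Otherwise act like a congestion-driven adaptive router: a non-minimal
    # path traverses roughly twice as many hops, so its queue occupancy is
    # weighted accordingly before the comparison (UGAL-style).
    return "minimal" if minimal_queue <= 2 * nonminimal_queue else "non-minimal"

# With p_min = 0.9 most packets stay minimal; p_min = 0.0 recovers plain adaptive routing.
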
The server-centric data centre network architecture can accommodate a wide variety of network topologies. Newly proposed topologies in this arena often require several rounds of analysis and experimentation in order that they might achieve their full potential as data centre networks. We propose a family of novel routing algorithms on two well-known data centre networks of this type, (Generalized) DCell and FiConn, using techniques that can be applied more generally to the class of networks we call completely connected recursively-defined networks. In doing so, we develop a classification of all possible routes from server-node to server-node on these networks, called general routes of order $t$, and find that for certain topologies of interest, our routing algorithms efficiently produce paths that are up to 16% shorter than the best previously known algorithms, and are comparable to shortest paths. In addition to finding shorter paths, we show evidence that our algorithms also have good load-balancing properties.
In this paper, we propose the first optimum process scheduling algorithm for an increasingly prevalent type of heterogeneous multicore (HEMC) system that combines high-performance big cores and energy-efficient small cores with the same instruction-set architecture (ISA). Existing algorithms are all heuristics-based, and the well-known IPC-driven approach essentially tries to schedule high scaling factor processes on big cores. Our analysis shows that, for optimum solutions, it is also critical to consider placing long running processes on big cores. Tests of SPEC 2006 cases on various big-small core combinations show that our proposed optimum approach is up to 34% faster than the IPC-driven heuristic approach in terms of total workload completion time. The complexity of our algorithm is O(NlogN) where N is the number of processes. Therefore, the proposed optimum algorithm is practical for use.
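
To illustrate the observation that both the scaling factor and the running time matter (this is a simple heuristic sketch, not the paper's optimum algorithm), the snippet below ranks processes by the estimated time they would save on a big core, work x (1 - 1/speedup), and assigns the top-ranked ones to the big cores; the sort keeps the cost at O(N log N). The input format and names are assumptions.

def assign_cores(processes, n_big):
    """processes: list of (name, work_on_small_core, big_core_speedup).
    Returns {name: 'big' | 'small'}: the n_big processes that save the most
    time on a big core are placed there, the rest on small cores."""
    ranked = sorted(processes, key=lambda p: p[1] * (1.0 - 1.0 / p[2]), reverse=True)
    return {name: ("big" if i < n_big else "small")
            for i, (name, work, speedup) in enumerate(ranked)}

# Example: a long-running process with a modest speedup can outrank a short
# one with a high speedup, because it saves more absolute time on the big core.
print(assign_cores([("A", 100.0, 1.3), ("B", 10.0, 2.0)], n_big=1))
# {'A': 'big', 'B': 'small'}
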
Distributed digital infrastructures for computation and analytics are now evolving towards an interconnected ecosystem allowing complex applications to be executed from IoT Edge devices to the HPC Cloud (aka the Computing Continuum, the Digital Continuum, or the Transcontinuum). Understanding end-to-end performance in such a complex continuum is challenging. This breaks down to reconciling many, typically contradicting application requirements and constraints with low-level infrastructure design choices. One important challenge is to accurately reproduce relevant behaviors of a given application workflow and representative settings of the physical infrastructure underlying this complex continuum. We introduce a rigorous methodology for such a process and validate it through E2Clab. It is the first platform to support the complete experimental cycle across the Computing Continuum: deployment, analysis, optimization. Preliminary results with real-life use cases show that E2Clab allows one to understand and improve performance, by correlating it to the parameter settings, the resource usage and the specifics of the underlying infrastructure.