
High Performance and Scalable NAT System on Commodity Platforms

Published by Junfeng Li
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





Quick network address translation (Quick NAT) is proposed to improve the network performance of NAT systems on commodity servers in three ways. First, a hash-based search algorithm replaces sequential search to reduce the latency of looking up the NAT rule table. Second, to leverage the power of multi-core central processing units (CPUs) and multi-queue network interface cards, Quick NAT lets multiple CPU cores process packets in parallel; a localized connection tracking table and compare-and-swap-based lock-free NAT hash tables eliminate lock overhead. Third, Quick NAT uses polling and zero-copy delivery to reduce the cost of interrupts and packet copies. Evaluation results show that Quick NAT achieves high scalability and line-rate throughput on a commodity server.
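The first optimization, replacing a sequential scan of the NAT rule table with a hash lookup, could look roughly like the sketch below. This is not the authors' code: the struct layout, the FNV-1a hash, and the bucket count are assumptions made purely for illustration.

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical 5-tuple key and NAT rule entry; field names are assumptions. */
struct nat_key {
    uint32_t src_ip, dst_ip;
    uint16_t src_port, dst_port;
    uint8_t  proto;
};

struct nat_rule {
    struct nat_key   key;
    uint32_t         translated_ip;
    uint16_t         translated_port;
    struct nat_rule *next;            /* chain for hash collisions */
};

#define NAT_BUCKETS 65536             /* power of two so we can mask */

static struct nat_rule *rule_table[NAT_BUCKETS];

/* Simple FNV-1a hash over the 5-tuple; a real system might use a
 * hardware-assisted hash from the NIC instead. */
static uint32_t nat_hash(const struct nat_key *k)
{
    const uint8_t *p = (const uint8_t *)k;
    uint32_t h = 2166136261u;
    for (size_t i = 0; i < sizeof(*k); i++) {
        h ^= p[i];
        h *= 16777619u;
    }
    return h & (NAT_BUCKETS - 1);
}

/* O(1) average lookup, replacing a sequential scan of the rule list. */
static struct nat_rule *nat_lookup(const struct nat_key *k)
{
    for (struct nat_rule *r = rule_table[nat_hash(k)]; r; r = r->next)
        if (memcmp(&r->key, k, sizeof(*k)) == 0)
            return r;
    return NULL;
}
```

Keys should be zero-initialized before their fields are filled in so that struct padding bytes hash and compare consistently; with a reasonably sized table, the expected lookup cost then drops from linear in the number of rules to roughly constant.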




Read also

Junfeng Li, Dan Li, Yukai Huang (2021)
The NAT gateway is an important network system in today's IPv4 networks, translating private IPv4 addresses to public addresses. However, a traditional NAT system based on Linux Netfilter cannot achieve the high network throughput required by modern environments such as data centers. To address this challenge, we improve the network performance of the NAT system in three ways. First, we leverage DPDK to enable polling and zero-copy delivery, reducing the cost of interrupts and packet copies. Second, we let multiple CPU cores process packets in parallel and use a lock-free hash table to minimize contention between cores. Third, we use hash search instead of sequential search when looking up the NAT rule table. Evaluation shows that our Quick NAT system significantly improves the performance of NAT on commodity platforms.
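A toy illustration of the lock-free hash table mentioned above: new connection-tracking entries are pushed onto a bucket with compare-and-swap instead of a lock, so cores never block one another on insertion. The entry layout and bucket count are hypothetical, and deletion/memory reclamation (a hard problem for lock-free structures) is deliberately left out.

```c
#include <stdatomic.h>
#include <stdint.h>

/* Hypothetical connection-tracking entry; field names are illustrative. */
struct ct_entry {
    uint64_t flow_hash;      /* hash of the connection 5-tuple */
    uint32_t nat_ip;         /* translated address */
    uint16_t nat_port;       /* translated port */
    struct ct_entry *next;   /* set before the entry is published */
};

#define CT_BUCKETS 4096

static _Atomic(struct ct_entry *) ct_table[CT_BUCKETS];

/* Lock-free push of a new entry onto its bucket's chain.  On CAS failure
 * another core won the race; `head` is refreshed by the failed CAS and
 * we simply retry, so no core ever waits on a lock. */
static void ct_insert(struct ct_entry *e)
{
    _Atomic(struct ct_entry *) *bucket = &ct_table[e->flow_hash % CT_BUCKETS];
    struct ct_entry *head = atomic_load(bucket);
    do {
        e->next = head;      /* link to current head before publishing */
    } while (!atomic_compare_exchange_weak(bucket, &head, e));
}
```

The localized connection tracking table mentioned in the main abstract goes a step further: per-core tables are touched by only one core and need no synchronization at all, leaving CAS for the few structures that genuinely must be shared.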
Wei Yan, Yucong Yang (2020)
Optical isolators and circulators are indispensable for photonic integrated circuits (PICs). Despite significant progress on silicon-on-insulator (SOI) platforms, integrated optical isolators and circulators have rarely been reported on silicon nitride (SiN) platforms. In this paper, we report monolithic integration of magneto-optical (MO) isolators on SiN platforms with record-high performance, based on a standard silicon photonics foundry process and magneto-optical thin-film deposition. We successfully grow high-quality MO garnet thin films on SiN with a large Faraday rotation of up to -5900 deg/cm. We show a superior magneto-optical figure of merit (FoM) for MO/SiN waveguides compared to MO/SOI in an optimized device design. We demonstrate TM/TE-mode broadband and narrow-band optical isolators and circulators on SiN with high isolation ratio, low crosstalk, and low insertion loss. In particular, we observe 1 dB insertion loss and a 28 dB isolation ratio in a SiN racetrack-resonator-based isolator at a wavelength of 1570.2 nm. The low thermo-optic coefficient of SiN also ensures excellent temperature stability of the device. Our work paves the way for the integration of high-performance nonreciprocal photonic devices on SiN platforms.
Kuo-Feng Hsu (2019)
We present Contra, a system for performance-aware routing that can adapt to traffic changes at hardware speeds. While existing work has developed point solutions for performance-aware routing on a fixed topology (e.g., a Fattree) with a fixed routing policy (e.g., use least utilized paths), Contra can be configured to operate seamlessly over any network topology and a wide variety of sophisticated routing policies. Users of Contra write network-wide policies that rank network paths given their current performance. A compiler then analyzes such policies in conjunction with the network topology and decomposes them into switch-local P4 programs, which collectively implement a new, specialized distance-vector protocol. This protocol generates compact probes that traverse the network, gathering path metrics to optimize for the user policy dynamically. Switches respond to changing network conditions at hardware speeds by routing flowlets along the best policy-compliant paths. Our experiments show that Contra scales to large networks, and that in terms of flow completion times, it is competitive with hand-crafted systems that have been customized for specific topologies and policies.
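To make the distance-vector idea above concrete, here is a rough sketch, in plain C rather than the switch-local P4 that Contra actually generates, of how a switch might process a probe: the probe carries a path metric, the switch folds in the utilization of the link the probe arrived on, and it keeps the best policy-compliant choice per destination. The "minimize the worst link utilization" policy and all names here are assumptions for illustration, not Contra's policy language.

```c
#include <stdint.h>
#include <stdbool.h>

#define MAX_DESTS 256

/* Per-destination routing state at one switch: best metric seen so far
 * and the port it was learned on.  Field names are illustrative. */
struct route_state {
    uint32_t best_metric;   /* e.g., max link utilization along the path */
    uint16_t best_port;
    bool     valid;
};

static struct route_state routes[MAX_DESTS];

/* Handle a probe arriving on `in_port` advertising a path to `dest`
 * whose worst link utilization so far is `path_metric`.  The policy
 * here ranks paths by lowest maximum utilization ("least utilized
 * paths"); Contra compiles user-written policies into logic of this
 * general shape. */
static bool probe_update(uint16_t dest, uint16_t in_port,
                         uint32_t path_metric, uint32_t local_link_util)
{
    /* Extend the path metric with the local link the probe traversed. */
    uint32_t metric = path_metric > local_link_util ? path_metric
                                                    : local_link_util;

    struct route_state *rs = &routes[dest];
    if (!rs->valid || metric < rs->best_metric) {
        rs->best_metric = metric;
        rs->best_port   = in_port;
        rs->valid       = true;
        return true;      /* improved: re-advertise the probe to neighbors */
    }
    return false;         /* no improvement: suppress re-advertisement */
}
```

A switch would forward the probe to its neighbors only when `probe_update` returns true, which is what keeps the probes compact while still letting the network converge on the best policy-compliant paths.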
Structured P2P overlays provide a framework for building distributed applications that are self-configuring, scalable, and resilient to node failures. Such systems have been successfully adopted in large-scale Internet services such as content delivery networks and file sharing; however, widespread adoption at small and medium scales has been limited, due in part to security concerns and the difficulty of bootstrapping in NAT-constrained environments. Nonetheless, P2P systems can be designed to provide guaranteed lookup times, NAT traversal, point-to-point overlay security, and distributed data stores. In this paper we propose a novel way of creating overlays that are both secure and private, together with a method to bootstrap them using a public overlay. Private overlay nodes use the public overlay's distributed data store to discover each other, and the public overlay's connections to assist with NAT hole punching and to act as relays providing STUN and TURN NAT traversal techniques. The security framework utilizes groups, which are created and managed by users through a web-based user interface. Each group acts as a Public Key Infrastructure (PKI), relying on a centrally managed web site that provides an automated Certificate Authority (CA). We present a reference implementation which has been used in a P2P VPN (Virtual Private Network). To evaluate our contributions, we apply our techniques to an overlay network modeler, event-driven simulations using simulated time delays, and a deployment in the PlanetLab wide-area testbed.
Serverless computing is increasingly popular because of its lower cost and easier deployment. Several cloud service providers (CSPs) offer serverless computing on their public clouds, but this may bring a vendor lock-in risk. To avoid this limitation, many open-source serverless platforms have emerged that allow developers to freely deploy and manage functions on self-hosted clouds. However, building effective functions requires considerable expertise and a thorough comprehension of the platform frameworks and features that affect performance. It is a challenge for a service developer to differentiate and select the appropriate serverless platform for different demands and scenarios. Thus, we elaborate on the frameworks and event processing models of four popular open-source serverless platforms and identify their salient idiosyncrasies. We analyze the root causes of performance differences between different service exporting and auto-scaling modes on those platforms. Further, we provide several insights for future work, such as auto-scaling and metric collection.