
Programming Protocol-Independent Packet Processors

 Added by Daniel Talayco
 Publication date 2013
Language: English





P4 is a high-level language for programming protocol-independent packet processors. P4 works in conjunction with SDN control protocols like OpenFlow. In its current form, OpenFlow explicitly specifies protocol headers on which it operates. This set has grown from 12 to 41 fields in a few years, increasing the complexity of the specification while still not providing the flexibility to add new headers. In this paper we propose P4 as a strawman proposal for how OpenFlow should evolve in the future. We have three goals: (1) Reconfigurability in the field: Programmers should be able to change the way switches process packets once they are deployed. (2) Protocol independence: Switches should not be tied to any specific network protocols. (3) Target independence: Programmers should be able to describe packet-processing functionality independently of the specifics of the underlying hardware. As an example, we describe how to use P4 to configure a switch to add a new hierarchical label.
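
P4 itself is a domain-specific language, so the following is only a rough Python sketch of the idea the abstract describes: a switch whose parser and match-action tables carry no built-in protocol knowledge and are instead configured by a program, here used to add a hypothetical hierarchical tag header. It is loosely inspired by the paper's mTag example; the field names and the 0xAAAA ethertype are made up for illustration.

# Illustrative sketch only: P4 is a domain-specific language, not Python.
# This models a protocol-independent switch whose parser and match-action
# tables are configured at runtime rather than baked into the hardware.

class SwitchModel:
    def __init__(self):
        self.headers = {}       # header name -> ordered (field, bit width) pairs
        self.parse_graph = {}   # (header, select value) -> next header
        self.tables = []        # list of (match_fn, action_fn) entries

    def define_header(self, name, fields):
        self.headers[name] = fields

    def add_parse_transition(self, header, select_value, next_header):
        self.parse_graph[(header, select_value)] = next_header

    def add_table_entry(self, match_fn, action_fn):
        self.tables.append((match_fn, action_fn))

    def process(self, packet):
        # packet is a dict of already-parsed header instances for brevity;
        # a real target would extract fields bit by bit using self.headers.
        for match_fn, action_fn in self.tables:
            if match_fn(packet):
                action_fn(packet)
        return packet

switch = SwitchModel()

# The switch ships with no protocol knowledge; the "program" below adds it,
# including a new hierarchical tag header (hypothetical fields and ethertype).
switch.define_header("ethernet", [("dst", 48), ("src", 48), ("ethertype", 16)])
switch.define_header("mtag", [("up1", 8), ("up2", 8), ("down1", 8),
                              ("down2", 8), ("ethertype", 16)])
switch.add_parse_transition("ethernet", 0xAAAA, "mtag")

def needs_mtag(pkt):
    return "mtag" not in pkt

def push_mtag(pkt):
    # Push a tag carrying a hierarchical route and chain the ethertypes.
    pkt["mtag"] = {"up1": 1, "up2": 2, "down1": 3, "down2": 4,
                   "ethertype": pkt["ethernet"]["ethertype"]}
    pkt["ethernet"]["ethertype"] = 0xAAAA

switch.add_table_entry(needs_mtag, push_mtag)

pkt = {"ethernet": {"dst": 0x1, "src": 0x2, "ethertype": 0x0800}}
print(switch.process(pkt))  # the packet now carries the new hierarchical label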



Related research

This paper studies the structure of several real-world traces (including Facebook, High-Performance Computing, Machine Learning, and simulation-generated traces) and presents a systematic approach to quantify and compare the structure of packet traces based on the entropy contained in the trace file. Insights into the structure of packet traces can lead to improved network algorithms that are optimized toward specific traffic patterns. We then present a methodology to quantify the temporal and non-temporal components of the entropy contained in a packet trace, called the trace complexity, using randomization and compression. We show that trace complexity provides unique insights into the characteristics of various applications and argue that there is a need for traffic generation models that preserve the intrinsic structure of empirically measured application traces. Finally, we propose a traffic generator model that is able to produce a synthetic trace that matches the complexity level of its corresponding real-world trace.
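
As a rough illustration of the randomization-and-compression idea (not the paper's exact methodology), the sketch below compares the compressed size of a toy trace against a shuffled copy: shuffling destroys packet ordering, so the gap between the two ratios hints at the temporal component of the trace's structure.

# Rough sketch of a compression-based structure estimate. The toy trace and
# the way the components are combined are illustrative assumptions only.
import random
import zlib

def compressed_ratio(records):
    blob = b"".join(records)
    return len(zlib.compress(blob, 9)) / max(len(blob), 1)

def structure_estimate(records):
    original = compressed_ratio(records)
    shuffled = list(records)
    random.shuffle(shuffled)            # destroys ordering (temporal) structure
    randomized = compressed_ratio(shuffled)
    return {"original_ratio": original,
            "shuffled_ratio": randomized,
            "temporal_component": randomized - original}

# Toy trace: long bursts of two record types, so ordering carries information.
trace = ([b"\x00" * 16] * 100 + [b"\x01" * 16] * 100) * 10
print(structure_estimate(trace))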
A common situation when dealing with multimedia traffic is that large data frames are fragmented into smaller IP packets, and these packets are sent independently through the network. For real-time multimedia traffic, dropping even a few packets of a frame may render the entire frame useless. Such traffic is usually modeled as having inter-packet dependencies. We study the problem of scheduling traffic with such dependencies, where each packet has a deadline by which it should arrive at its destination. Such deadlines are common for real-time multimedia applications, and are derived from the stringent delay constraints posed by the application. The figure of merit in such environments is maximizing the system's goodput, namely, the number of frames successfully delivered. We study online algorithms for the problem of maximizing the goodput of delay-bounded traffic with inter-packet dependencies, and use competitive analysis to evaluate their performance. We present competitive algorithms for the problem, as well as matching lower bounds that are tight up to a constant factor. We also present the results of a simulation study which further validates our algorithmic approach and shows that insights arising from our analysis are indeed manifested in practice.
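
To make the goodput objective concrete, here is a toy greedy online scheduler, purely illustrative and not one of the paper's competitive algorithms: in each time slot it transmits a packet from the frame closest to completion among frames that can still meet their deadline.

# Toy model: one packet is sent per slot; a frame counts toward goodput only
# if all of its packets are sent by its deadline. Greedy rule and parameters
# are assumptions made for illustration.
from dataclasses import dataclass

@dataclass
class Frame:
    frame_id: int
    deadline: int      # last slot in which a packet of this frame may be sent
    remaining: int     # packets still to transmit

def greedy_goodput(frames, horizon):
    delivered = 0
    for t in range(horizon):
        # Frames that can still be completed in the slots that remain.
        feasible = [f for f in frames
                    if f.remaining > 0 and t + f.remaining - 1 <= f.deadline]
        if not feasible:
            continue
        f = min(feasible, key=lambda f: f.remaining)   # closest to completion
        f.remaining -= 1
        if f.remaining == 0:
            delivered += 1
    return delivered

frames = [Frame(1, deadline=3, remaining=2),
          Frame(2, deadline=5, remaining=4),
          Frame(3, deadline=2, remaining=1)]
print("frames fully delivered:", greedy_goodput(frames, horizon=6))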
Packet scheduling determines the ordering of packets in a queuing data structure with respect to some ranking function that is mandated by a scheduling policy. It is the core component in many recent innovations to optimize network performance and utilization. Our focus in this paper is on the design and deployment of packet scheduling in software. Software schedulers have several advantages over hardware, including shorter development cycles and flexibility in functionality and deployment location. We substantially improve current software packet scheduling performance, while maintaining flexibility, by exploiting underlying features of packet ranking; namely, packet ranks are integers and, at any point in time, fall within a limited range of values. We introduce Eiffel, a novel programmable packet scheduling system. At the core of Eiffel is an integer priority queue based on the Find First Set (FFS) instruction and designed to support a wide range of policies and ranking functions efficiently. As an even more efficient alternative, we also propose a new approximate priority queue that can outperform FFS-based queues in some scenarios. To support flexibility, Eiffel introduces novel programming abstractions to express scheduling policies that cannot be captured by current, state-of-the-art scheduler programming models. We evaluate Eiffel in a variety of settings and in both kernel and userspace deployments. We show that it outperforms state-of-the-art systems by 3-40x in terms of either the number of cores utilized for network processing or the number of flows supported given fixed processing capacity.
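
The following is a minimal sketch of the find-first-set idea behind such a queue, greatly simplified (a fixed rank range, a single bitmap, no circular window), and is not Eiffel's actual implementation.

# Bucketed integer priority queue driven by a find-first-set bitmap: one bucket
# per rank, and a bitmap whose lowest set bit locates the smallest non-empty
# rank in O(1).
from collections import deque

class FFSQueue:
    def __init__(self, max_rank):
        self.buckets = [deque() for _ in range(max_rank + 1)]
        self.occupied = 0                       # bit i set => bucket i non-empty

    def push(self, rank, item):
        self.buckets[rank].append(item)
        self.occupied |= (1 << rank)

    def pop(self):
        if self.occupied == 0:
            raise IndexError("empty queue")
        # Find-first-set: isolate the lowest set bit, then take its index.
        rank = (self.occupied & -self.occupied).bit_length() - 1
        item = self.buckets[rank].popleft()
        if not self.buckets[rank]:
            self.occupied &= ~(1 << rank)
        return rank, item

q = FFSQueue(max_rank=63)
q.push(17, "pkt-a")
q.push(3, "pkt-b")
q.push(17, "pkt-c")
print(q.pop())   # (3, 'pkt-b')  -- lowest rank dequeued first
print(q.pop())   # (17, 'pkt-a')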
Many datacenter applications such as machine learning and streaming systems do not need the complete set of data to perform their computation. Current approximate applications in datacenters run on a reliable network layer like TCP. To improve performance, they either let the sender select a subset of the data and transmit it to the receiver, or transmit all the data and let the receiver drop some of it. These approaches are network-oblivious and unnecessarily transmit more data, affecting both application runtime and network bandwidth usage. On the other hand, running approximate applications over a lossy network with UDP cannot guarantee the accuracy of the application's computation. We propose to run approximate applications on a lossy network and to allow packet loss in a controlled manner. Specifically, we design a new network protocol called the Approximate Transmission Protocol, or ATP, for datacenter approximate applications. ATP opportunistically exploits available network bandwidth as much as possible, while performing a loss-based rate control algorithm to avoid bandwidth waste and retransmission. It also ensures fair bandwidth sharing across flows and improves the performance of accurate applications by leaving more switch buffer space to accurate flows. We evaluated ATP with both simulation and a real implementation, using two macro-benchmarks and two real applications, Apache Kafka and Flink. Our evaluation results show that ATP reduces application runtime by 13.9% to 74.6% compared to a TCP-based solution that drops packets at the sender, and that it improves accuracy by up to 94.0% compared to UDP.
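
To make the notion of loss-controlled transmission concrete, here is a generic loss-driven rate-adjustment loop; it is an AIMD-style placeholder with arbitrary parameters, not ATP's actual rate control algorithm.

# Generic loss-driven rate control: back off multiplicatively when observed
# loss exceeds the application's loss budget, probe additively otherwise.
# All parameter values are arbitrary illustrative assumptions.
def adjust_rate(current_rate_mbps, loss_fraction,
                target_loss=0.01, increase_mbps=5.0, decrease_factor=0.8):
    if loss_fraction > target_loss:
        # Losing more than the application tolerates: reduce the rate so
        # bandwidth is not wasted on packets that will be dropped anyway.
        return current_rate_mbps * decrease_factor
    # Loss is within budget: probe for more bandwidth additively.
    return current_rate_mbps + increase_mbps

rate = 100.0
for observed_loss in [0.0, 0.0, 0.05, 0.02, 0.0, 0.0]:
    rate = adjust_rate(rate, observed_loss)
    print(f"observed loss {observed_loss:.2%} -> new rate {rate:.1f} Mbps")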
Many systems require frequent and regular updates of certain information, which must be transferred regularly from the source to the destination. We consider scenarios in which an old packet becomes completely obsolete once a new packet is available. In this context, if a new packet arrives at the source while another packet is being transferred, one needs to decide which packet to drop. The new packet carries more recent information, but may require more time to transfer; it is therefore not clear which packet should be discarded, and this is the main focus of the paper. The recently introduced performance metrics, the average age of information (AAoI) and the peak age of information (PAoI) of the information available at the destination, are the relevant performance measures. Such systems do not require storage buffers of size larger than one at the source queue. We consider a single source / multiple sources regularly updating information to a single destination, possibly over wireless channels, and derive optimal drop policies that optimize the AAoI. We show that state-independent (static) policies, such as always dropping the old packet or always dropping the new packet, are optimal in many scenarios, among an appropriate set of stationary Markov policies. We also consider the relevant games when multiple sources compete. In many scenarios, the non-cooperative solution almost minimizes the social objective, the sum of the AAoIs of all the sources.
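
The toy simulation below contrasts the two static policies, always dropping the old packet versus always dropping the new one, at a single-buffer source and reports the resulting average age of information; the discrete-time model and the arrival and service parameters are arbitrary illustrative choices, not taken from the paper.

# Discrete-time toy model: at most one packet is in service (buffer of size
# one). "drop_new" keeps the packet already being transmitted; "drop_old"
# preempts it with the fresh arrival and restarts service.
import random

def simulate(policy, horizon=200_000, p_arrival=0.3, p_finish=0.2, seed=1):
    rng = random.Random(seed)
    in_service = None            # generation time of the packet being sent
    last_delivered = 0           # generation time of the freshest delivered update
    age_sum = 0
    for t in range(1, horizon + 1):
        if rng.random() < p_arrival:                 # a new update is generated
            if in_service is None:
                in_service = t
            elif policy == "drop_old":
                in_service = t                       # preempt: serve the new packet
            # policy == "drop_new": discard the arrival, keep transmitting
        if in_service is not None and rng.random() < p_finish:
            last_delivered = max(last_delivered, in_service)
            in_service = None
        age_sum += t - last_delivered                # instantaneous age at destination
    return age_sum / horizon

for policy in ("drop_old", "drop_new"):
    print(policy, "average age:", round(simulate(policy), 2))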
