
A Precise and Expressive Lattice-theoretical Framework for Efficient Network Verification

Posted by Alex Horn
Publication date: 2019
Research field: Informatics Engineering
Paper language: English





Network verification promises to detect errors, such as black holes and forwarding loops, by logically analyzing the control or data plane. To do so efficiently, state-of-the-art tools (e.g., Veriflow) partition packet headers with identical forwarding behavior into the same packet equivalence class (PEC). Recently, Yang and Lam showed how to construct the minimal set of PECs, called atomic predicates. Their construction uses Binary Decision Diagrams (BDDs). However, BDDs have been shown to incur significant overhead per packet-header bit, performing poorly when analyzing large-scale data centers. The overhead of atomic predicates prompted ddNF to adopt a specialized data structure of Ternary Bit Vectors (TBVs) instead. However, TBVs are strictly less expressive than BDDs. Moreover, unlike atomic predicates, ddNF's set of PECs is not minimal. We show that ddNF's non-minimality is due to empty PECs. In addition, empty PECs are shown to trigger wrong analysis results. This reveals an inherent tension between precision, expressiveness, and performance in formal network verification. Our paper resolves this tension through a new lattice-theoretical PEC-construction algorithm, #PEC, that advances the field as follows: (i) #PEC can encode more kinds of forwarding rules (e.g., ip-tables) than ddNF and Veriflow, (ii) #PEC verifies a wider class of errors (e.g., shadowed rules) than ddNF, and (iii) on a broad range of real-world datasets, #PEC is 10X faster than atomic predicates. By achieving precision, expressiveness, and performance, this paper answers a longstanding quest that has spanned three generations of formal network analysis techniques.
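To make the expressiveness gap concrete, here is a minimal sketch (ours, not the paper's implementation) of why a single Ternary Bit Vector over {0, 1, *} is strictly less expressive than a BDD: every TBV denotes a subcube of the header space, so a non-cube header set such as "every 2-bit header except 10" has no single-TBV representation, while a BDD can encode any Boolean predicate over header bits.

from itertools import product

def tbv_matches(tbv: str, header: str) -> bool:
    """True iff every TBV position is '*' or equals the header bit."""
    return all(t == '*' or t == h for t, h in zip(tbv, header))

def headers_of(tbv: str) -> set:
    """Enumerate all concrete headers a TBV denotes (always a subcube)."""
    return {''.join(h) for h in product('01', repeat=len(tbv))
            if tbv_matches(tbv, ''.join(h))}

# Target predicate: all 2-bit headers except '10'. A BDD represents this
# directly; exhaustive search confirms no single TBV denotes exactly it.
target = {'00', '01', '11'}
assert not any(headers_of(''.join(t)) == target
               for t in product('01*', repeat=2))
print("no single 2-bit TBV denotes {00, 01, 11}")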




Read also

The success of Deep Learning and its potential use in many safety-critical applications has motivated research on formal verification of Neural Network (NN) models. In this context, verification involves proving or disproving that an NN model satisfies certain input-output properties. Despite the reputation of learned NN models as black boxes, and the theoretical hardness of proving useful properties about them, researchers have been successful in verifying some classes of models by exploiting their piecewise linear structure and taking insights from formal methods such as Satisfiability Modulo Theories. However, these methods are still far from scaling to realistic neural networks. To facilitate progress in this crucial area, we exploit the Mixed Integer Linear Programming (MIP) formulation of verification to propose a family of algorithms based on Branch-and-Bound (BaB). We show that our family contains previous verification methods as special cases. With the help of the BaB framework, we make three key contributions. Firstly, we identify new methods that combine the strengths of multiple existing approaches, accomplishing significant performance improvements over the previous state of the art. Secondly, we introduce an effective branching strategy on ReLU non-linearities. This branching strategy allows us to efficiently and successfully deal with high-dimensional input problems with convolutional network architectures, on which previous methods fail frequently. Finally, we propose comprehensive test data sets and benchmarks, which include a collection of previously released test cases. We use the data sets to conduct a thorough experimental comparison of existing and new algorithms, and to provide an inclusive analysis of the factors impacting the hardness of verification problems.
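As a concrete illustration of the Branch-and-Bound framing, here is a toy sketch (our example, not the paper's algorithm): it tries to prove f(x) >= 0 for a tiny hand-written ReLU network over an input box, using interval bound propagation and input-domain splitting rather than the ReLU branching the abstract introduces. The weights W1, b1, w2 and the property are hypothetical.

import numpy as np

# Hypothetical 2-input, 2-hidden-unit ReLU network: f(x) = w2 . relu(W1 x + b1)
W1 = np.array([[1.0, -1.0],
               [0.5,  1.0]])
b1 = np.array([0.2, -0.1])
w2 = np.array([1.0, 1.0])

def lower_bound(l, u):
    """Sound interval lower bound of w2 . relu(W1 x + b1) over the box [l, u]."""
    Wp, Wn = np.maximum(W1, 0), np.minimum(W1, 0)
    pre_l = Wp @ l + Wn @ u + b1          # lower bounds of pre-activations
    pre_u = Wp @ u + Wn @ l + b1          # upper bounds of pre-activations
    post_l, post_u = np.maximum(pre_l, 0), np.maximum(pre_u, 0)
    wp, wn = np.maximum(w2, 0), np.minimum(w2, 0)
    return wp @ post_l + wn @ post_u

def verify(l, u, depth=0, max_depth=12):
    """Branch and Bound: prove f(x) >= 0 on [l, u] by splitting widest dims."""
    if lower_bound(l, u) >= 0:
        return True                        # bound already proves the property
    if depth == max_depth:
        return False                       # inconclusive: report unverified
    i = int(np.argmax(u - l))              # branch on the widest input dimension
    mid = (l[i] + u[i]) / 2
    u_left, l_right = u.copy(), l.copy()
    u_left[i], l_right[i] = mid, mid
    return (verify(l, u_left, depth + 1, max_depth) and
            verify(l_right, u, depth + 1, max_depth))

l0, u0 = np.array([0.0, 0.0]), np.array([1.0, 1.0])
print("f(x) >= 0 verified on [0,1]^2:", verify(l0, u0))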
Today's distributed network control planes support multiple routing protocols, filtering mechanisms, and route selection policies. These protocols operate at different layers, e.g., BGP operates at the EGP layer, OSPF at the IGP layer, and VLANs at layer 2. The behavior of a network's control plane depends on how these protocols interact with each other. This makes network configurations highly complex and error-prone. State-of-the-art control plane verifiers are either too slow, or do not model certain features of the network. In this paper, we propose a new multilayer hedge graph abstraction, Tiramisu, that supports fast verification of the control plane. Tiramisu uses a combination of graph traversal algorithms and ILPs (Integer Linear Programs) to check different network policies. We use Tiramisu to verify policies of various real-world and synthetic configurations. Our experiments show that Tiramisu can verify any policy in < 0.08 s in small networks (~35 devices) and < 0.12 s in large networks (~160 devices), and it is 10-600X faster than the state of the art without losing generality.
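The graph-traversal half of such a pipeline can be illustrated with a minimal sketch (hypothetical topology; the ILP half and Tiramisu's actual multilayer encoding are omitted): devices become nodes, protocol adjacencies that survive filtering become directed edges, and a reachability policy reduces to a plain BFS.

from collections import deque

edges = {                      # hypothetical device adjacency after protocol modeling
    "A": ["B", "C"],
    "B": ["D"],
    "C": [],                   # e.g., a route filter drops traffic via C
    "D": [],
}

def reachable(src: str, dst: str) -> bool:
    """BFS reachability check over the modeled control-plane graph."""
    seen, queue = {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            return True
        for nxt in edges.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

print("policy 'A reaches D' holds:", reachable("A", "D"))  # True, via A -> B -> D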
We improve the scalability of Branch and Bound (BaB) algorithms for formally proving input-output properties of neural networks. First, we propose novel bounding algorithms based on Lagrangian Decomposition. Previous works have used off-the-shelf solvers to solve relaxations at each node of the BaB tree, or constructed weaker relaxations that can be solved efficiently but lead to unnecessarily weak bounds. Our formulation restricts the optimization to a subspace of the dual domain that is guaranteed to contain the optimum, resulting in accelerated convergence. Furthermore, it allows for a massively parallel implementation, which is amenable to GPU acceleration via modern deep learning frameworks. Second, we present a novel activation-based branching strategy. By coupling an inexpensive heuristic with fast dual bounding, our branching scheme greatly reduces the size of the BaB tree compared to previous heuristic methods. Moreover, it performs competitively with a recent strategy based on learning algorithms, without its large offline training cost. Finally, we design a BaB framework, named Branch and Dual Network Bound (BaDNB), based on our novel bounding and branching algorithms. We show that BaDNB outperforms previous complete verification systems by a large margin, cutting average verification times by factors up to 50 on adversarial robustness properties.
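To see why any dual point yields a sound bound, consider this toy sketch of Lagrangian Decomposition (an illustrative one-dimensional problem, not the paper's network relaxation): we duplicate a variable, dualize the copy constraint with a multiplier rho, and run supergradient ascent. By weak duality, every iterate lower-bounds the true minimum, which is what makes early-terminated dual solves usable inside BaB.

# Lower-bound  min { x1 - 2*x2 : x1, x2 in [0, 1], x1 = x2 }  (true minimum: -1)
# by dualizing the copy constraint x1 = x2 with multiplier rho.

def dual_value_and_supergradient(rho):
    # Each decomposed subproblem minimizes a linear function over [0, 1]:
    # min_x c*x is attained at x = 0 if c > 0, else at x = 1.
    x1 = 0.0 if (1.0 + rho) > 0 else 1.0       # min (1 + rho) * x1
    x2 = 0.0 if (-2.0 - rho) > 0 else 1.0      # min (-2 - rho) * x2
    g = (1.0 + rho) * x1 + (-2.0 - rho) * x2   # dual value: valid lower bound
    return g, x1 - x2                          # supergradient of g at rho

rho, step = 0.0, 0.1
for _ in range(30):                            # supergradient ascent on g(rho)
    g, grad = dual_value_and_supergradient(rho)
    rho += step * grad

print(f"dual lower bound {g:.2f} (true minimum is -1.00)")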
Measuring and evaluating network resilience has become important, since networks are vulnerable to both uncertain disturbances and malicious attacks. Networked systems are often composed of many dynamic components and change over time, which makes it difficult for existing methods to assess the changing state of network resilience. This paper establishes a novel quantitative framework for evaluating network resilience using the Dynamic Bayesian Network. The proposed framework can be used to evaluate the network's multi-stage resilience processes when suffering various attacks and recoveries. First, we define the dynamic capacities of network components and establish the network's five core resilience capabilities to describe the resilient networking stages, including preparation, resistance, adaptation, recovery, and evolution; the five core resilience capabilities consist of rapid response capability, sustained resistance capability, continuous running capability, rapid convergence capability, and dynamic evolution capability. Then, we employ a two-time-slice approach based on the Dynamic Bayesian Network to quantify five crucial performances of network resilience based on the core capabilities proposed above. The proposed approach can ensure the time continuity of resilience evaluation in time-varying networks. Finally, our proposed evaluation framework is applied to different attack and recovery conditions in typical simulations and a real-world network topology. Results and comparisons with extant studies indicate that the proposed method achieves a more accurate and comprehensive evaluation and can be applied to network scenarios under various attack and recovery intensities.
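A minimal sketch of the two-time-slice idea (hypothetical transition probabilities, not the paper's model): a component's health evolves under one transition matrix during attack and another during recovery, and a resilience curve is read off the expected performance over time.

import numpy as np

# Health states: up, degraded, down. Rows are P(state_{t+1} | state_t).
attack   = np.array([[0.80, 0.15, 0.05],   # transitions while under attack
                     [0.05, 0.70, 0.25],
                     [0.00, 0.05, 0.95]])
recovery = np.array([[0.98, 0.02, 0.00],   # transitions while recovering
                     [0.40, 0.55, 0.05],
                     [0.10, 0.30, 0.60]])
performance = np.array([1.0, 0.5, 0.0])    # performance of up/degraded/down

belief = np.array([1.0, 0.0, 0.0])         # start fully operational
curve = []
for t in range(20):
    matrix = attack if t < 10 else recovery  # 10 attack steps, then recovery
    belief = belief @ matrix                 # one two-time-slice update
    curve.append(performance @ belief)       # expected performance at step t

print("expected performance over time:",
      " ".join(f"{p:.2f}" for p in curve))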
In this paper, we develop a framework for a novel perceptive mobile/cellular network that integrates a radar sensing function into the mobile communication network. We propose a unified system platform that enables downlink and uplink sensing, sharing the same transmitted signals with communications. We aim to tackle the fundamental sensing parameter estimation problem in perceptive mobile networks by addressing two key challenges associated with sophisticated mobile signals and rich multipath in mobile networks. To extract sensing parameters from orthogonal frequency division multiple access (OFDMA) and spatial division multiple access (SDMA) communication signals, we propose two approaches that formulate it as problems solvable by compressive sensing techniques. Most sensing algorithms have limits on the number of multipath signals in their inputs. To reduce the multipath signals, as well as to remove unwanted clutter signals, we propose a background subtraction method based on simple recursive computation, and provide a closed-form expression for performance characterization. The effectiveness of these methods is validated in simulations.
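The recursive background subtraction can be sketched as follows (simulated signals and an exponential-moving-average recursion assumed for illustration; the paper's exact formulation and closed-form analysis are not reproduced): static clutter is tracked with b_t = (1 - a) * b_{t-1} + a * y_t, and subtracting b_t from each snapshot suppresses the static component while mostly preserving the moving target.

import numpy as np

rng = np.random.default_rng(0)
T, alpha = 200, 0.05
clutter = 2.0 + 0.0j                          # static multipath component (power 4.0)
background = 0.0 + 0.0j                       # recursive background estimate
residual_power = []

for t in range(T):
    target = 0.5 * np.exp(1j * 0.3 * t)       # moving target: rotating phase
    y = clutter + target + 0.05 * (rng.standard_normal()
                                   + 1j * rng.standard_normal())
    background = (1 - alpha) * background + alpha * y   # recursive update
    residual_power.append(abs(y - background) ** 2)

# After convergence the residual drops the static clutter (power 4.0) while
# retaining most of the moving-target power (0.25).
print(f"mean residual power, last 50 snapshots: {np.mean(residual_power[-50:]):.2f}")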