The network virtualization paradigm envisions an Internet where arbitrary virtual networks (VNets) can be specified and embedded over a shared substrate (e.g., the physical infrastructure). As VNets can be requested at short notice and for a desired time period only, the paradigm enables flexible service deployment and efficient resource utilization. This paper investigates the security implications of such an architecture. We consider a simple model where an attacker seeks to extract secret information about the substrate topology by issuing repeated VNet embedding requests. We present a general framework that exploits basic properties of the VNet embedding relation to infer the entire topology. Our framework is based on a graph motif dictionary applicable to various graph classes. Moreover, we provide upper bounds on the request complexity, i.e., the number of requests needed by the attacker to succeed.
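As a rough illustration of the probing idea only (not the paper's actual motif dictionary or algorithm), the sketch below assumes the attacker can query a boolean embedding oracle exposed by the provider; the names can_embed, probe_topology, and the toy motif set are hypothetical. The attacker issues one request per motif and records the accept/reject pattern, which constrains the hidden substrate topology, while the request counter tracks the request complexity.

```python
import itertools

def can_embed(motif_nodes, motif_edges, substrate_nodes, substrate_edges):
    """Hypothetical embedding oracle (stand-in for the provider's embedder):
    True iff the motif can be mapped onto distinct substrate nodes such that
    every motif edge is backed by a substrate edge (brute-force check)."""
    sub = {frozenset(e) for e in substrate_edges}
    for mapping in itertools.permutations(substrate_nodes, len(motif_nodes)):
        phi = dict(zip(motif_nodes, mapping))
        if all(frozenset((phi[u], phi[v])) in sub for u, v in motif_edges):
            return True
    return False

# Toy motif dictionary: name -> (nodes, edges). Purely illustrative.
MOTIFS = {
    "path-3":   ([0, 1, 2],    [(0, 1), (1, 2)]),
    "triangle": ([0, 1, 2],    [(0, 1), (1, 2), (2, 0)]),
    "star-3":   ([0, 1, 2, 3], [(0, 1), (0, 2), (0, 3)]),
    "cycle-4":  ([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3), (3, 0)]),
}

def probe_topology(substrate_nodes, substrate_edges):
    """Issue one VNet request per motif; return the accept/reject pattern
    and the number of requests used (the request complexity of this probe)."""
    answers, requests = {}, 0
    for name, (m_nodes, m_edges) in MOTIFS.items():
        requests += 1
        answers[name] = can_embed(m_nodes, m_edges,
                                  substrate_nodes, substrate_edges)
    return answers, requests

if __name__ == "__main__":
    # Hidden substrate known only to the oracle: a 4-cycle 0-1-2-3-0.
    nodes = [0, 1, 2, 3]
    edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
    answers, cost = probe_topology(nodes, edges)
    print(answers)                      # path-3 and cycle-4 accepted; triangle and star-3 rejected
    print("request complexity:", cost)  # 4 requests in this toy run
```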
Modern computer networks support interesting new routing models in which traffic flows from a source s to a destination t can be flexibly steered through a sequence of waypoints, such as (hardware) middleboxes or (virtualized) network functions, to c
While operating communication networks adaptively may improve utilization and performance, frequent adjustments also introduce an algorithmic challenge: the re-optimization of traffic engineering solutions is time-consuming and may limit the granular
We propose a component that takes a request and a correction as input and outputs a corrected request. To produce the corrected request, the entities in the correction phrase replace their corresponding entities in the request. In addition, the proposed component
The recently standardized millimeter-wave-based 3GPP New Radio technology is expected to become an enabler for both enhanced Mobile Broadband (eMBB) and ultra-reliable low-latency communication (URLLC) services specified for future 5G systems. One of
Machine-Learning-as-a-Service providers expose machine learning (ML) models through application programming interfaces (APIs) to developers. Recent work has shown that attackers can exploit these APIs to extract good approximations of such ML models,