
Towards a Trust Aware Network Slice based End to End Services for Virtualised Infrastructures

Publication date: 2020
Language: English





Future communication networks such as 5G are expected to support end-to-end delivery of services for several vertical markets with diverging requirements. Network slicing is a key construct used to provide mutually isolated, end-to-end logical virtual networks running on a common virtualised infrastructure. Having different network slices operating over the same 5G infrastructure creates several challenges in security and trust. This paper addresses the fundamental issue of the trust of a network slice. It presents a trust model and property-based trust attestation mechanisms which can be used to evaluate the trust of the virtual network functions that compose the network slice. The proposed model helps to determine the trust of the virtual network functions as well as the properties that should be satisfied by the virtual platforms (both at boot and run time) on which these network functions are deployed for them to be trusted. We present a logic-based language that defines simple rules for the specification of properties and the conditions under which these properties are evaluated to be satisfied for trusted virtualised platforms. The proposed trust model and mechanisms enable service providers to determine the trustworthiness of network services and enable users to develop trustworthy applications.
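As a rough illustration of the kind of property-based evaluation described above, the sketch below checks boot-time and run-time properties of the virtual platforms hosting each VNF and derives a trust decision for the slice. The property names, evidence format and functions are hypothetical stand-ins, not the paper's actual logic-based rule language.

```python
# A minimal sketch of property-based trust evaluation for a network slice,
# assuming hypothetical property names and attestation evidence. This is an
# illustration only, not the paper's logic-based rule language.

from dataclasses import dataclass

@dataclass(frozen=True)
class Property:
    name: str    # e.g. "secure_boot", "runtime_integrity"
    phase: str   # "boot" or "runtime"

def platform_trusted(evidence: dict, required: list) -> bool:
    """A virtual platform is trusted only if every required property is attested."""
    return all(evidence.get((p.name, p.phase), False) for p in required)

def slice_trusted(vnf_platforms: dict, required: list) -> bool:
    """A network slice is trusted only if every VNF runs on a trusted platform."""
    return all(platform_trusted(ev, required) for ev in vnf_platforms.values())

required_props = [
    Property("secure_boot", "boot"),
    Property("measured_launch", "boot"),
    Property("runtime_integrity", "runtime"),
]

# Hypothetical attestation evidence per VNF in the slice.
slice_vnfs = {
    "vnf_firewall": {("secure_boot", "boot"): True,
                     ("measured_launch", "boot"): True,
                     ("runtime_integrity", "runtime"): True},
    "vnf_dpi":      {("secure_boot", "boot"): True,
                     ("measured_launch", "boot"): True,
                     ("runtime_integrity", "runtime"): False},
}

print(slice_trusted(slice_vnfs, required_props))  # False: one VNF fails a run-time property
```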



Related research

Estimating eye-gaze from images alone is a challenging task, in large part due to un-observable person-specific factors. Achieving high accuracy typically requires labeled data from test users, which may not be attainable in real applications. We observe that there exists a strong relationship between what users are looking at and the appearance of the users' eyes. In response to this understanding, we propose a novel dataset and accompanying method which aim to explicitly learn these semantic and temporal relationships. Our video dataset consists of time-synchronized screen recordings, user-facing camera views, and eye-gaze data, which allows for new benchmarks in temporal gaze tracking as well as label-free refinement of gaze. Importantly, we demonstrate that the fusion of information from visual stimuli as well as eye images can lead towards achieving performance similar to literature-reported figures acquired through supervised personalization. Our final method yields significant performance improvements on our proposed EVE dataset, with up to a 28 percent improvement in point-of-gaze estimates (resulting in 2.49 degrees of angular error), paving the path towards high-accuracy screen-based eye tracking purely from webcam sensors. The dataset and reference source code are available at https://ait.ethz.ch/projects/2020/EVE
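As a loose illustration of the fusion idea mentioned above (combining an eye-image-based estimate with what is shown on screen), the snippet below blends a point-of-gaze prediction with a salient screen location. The weighting scheme and names are assumptions and far simpler than the EVE method.

```python
# A hypothetical blend of an eye-based point-of-gaze estimate with a
# screen-content prior (all values are pixel coordinates); illustrative only.

def refine_gaze(pog_from_eyes, salient_point, prior_weight=0.5):
    """Blend the eye-based estimate with a salient screen location."""
    x = (1 - prior_weight) * pog_from_eyes[0] + prior_weight * salient_point[0]
    y = (1 - prior_weight) * pog_from_eyes[1] + prior_weight * salient_point[1]
    return (x, y)

print(refine_gaze((640.0, 400.0), (700.0, 380.0)))  # -> (670.0, 390.0)
```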
Progressive digitalization is changing the way businesses work and interact. Concepts like the Internet of Things, Cloud Computing, Industry 4.0, Service 4.0, Smart Production and Smart Cities are based on systems that are linked to the Internet. Online access to the provided data creates potential for process optimization and cost reduction, but also exposes the data to the risk of inappropriate use. Trust management systems are necessary in terms of data security, but also to assure the trustworthiness of the data that is distributed. Fake news in social media is an example of the problems with online data that cannot be trusted. Security and trustworthiness of data are major concerns today, and the speed of digitalization makes them an even greater challenge for future research. This article therefore introduces a model of online trust content that can be used to compute the trust of an online service advertisement. It contributes to standardizing the business service descriptions necessary to realize visions of E-commerce 4.0, because it is the basis for the development of AI systems able to match a service request to a service advertisement, and it is necessary for building trust-enhancing architectures in B2B e-commerce. To develop the model, we conducted case studies, analysed websites, developed a prototype system and verified it by conducting expert interviews.
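A minimal sketch of computing a trust score for a service advertisement from weighted trust components is shown below; the component names, weights and aggregation rule are illustrative assumptions, not the model developed in the article.

```python
# Hypothetical weighted aggregation of per-component trust values in [0, 1]
# for an online service advertisement; names and weights are placeholders.

def advertisement_trust(components: dict, weights: dict) -> float:
    """Weighted average of per-component trust values."""
    total_weight = sum(weights.values())
    return sum(components[name] * weights[name] for name in weights) / total_weight

score = advertisement_trust(
    {"provider_reputation": 0.9, "data_provenance": 0.7, "certification": 1.0},
    {"provider_reputation": 0.5, "data_provenance": 0.3, "certification": 0.2},
)
print(round(score, 2))  # 0.86
```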
The production of counterfeit money has a long history. It refers to the creation of imitation currency produced without the legal sanction of a government. With the growth of the cryptocurrency ecosystem, there is expanding evidence that counterfeit cryptocurrency has also appeared. In this paper, we empirically explore the presence of counterfeit cryptocurrencies on Ethereum and measure their impact. By analyzing over 190K ERC-20 tokens (or cryptocurrencies) on Ethereum, we have identified 2,117 counterfeit tokens that target 94 of the 100 most popular cryptocurrencies. We perform an end-to-end characterization of the counterfeit token ecosystem, including their popularity, creators and holders, fraudulent behaviors and advertising channels. Through this, we have identified two types of scams related to counterfeit tokens and devised techniques to identify such scams. We observe that over 7,104 victims were deceived in these scams, and the overall financial loss sums to a minimum of $17 million (74,271.7 ETH). Our findings demonstrate the urgency of identifying counterfeit cryptocurrencies and mitigating this threat.
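One simple signal such a study could rely on is a token reusing the symbol of a popular cryptocurrency while being deployed at a different contract address. The sketch below illustrates only this idea, with placeholder addresses; it is not the paper's actual detection pipeline.

```python
# Hypothetical counterfeit-token signal: same symbol as a popular ERC-20 token,
# different contract address. Addresses below are placeholders.

OFFICIAL_CONTRACTS = {
    # symbol -> official contract address (placeholder values)
    "USDT": "0xofficial_usdt_contract",
    "LINK": "0xofficial_link_contract",
}

def looks_counterfeit(symbol: str, address: str) -> bool:
    """Flag a token whose symbol matches a popular token but whose address differs."""
    official = OFFICIAL_CONTRACTS.get(symbol.upper())
    return official is not None and address.lower() != official.lower()

print(looks_counterfeit("USDT", "0xsome_other_contract"))     # True  -> candidate counterfeit
print(looks_counterfeit("USDT", "0xofficial_usdt_contract"))  # False -> matches the official address
```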
Recent advances in OCR have shown that an end-to-end (E2E) training pipeline that includes both detection and recognition leads to the best results. However, many existing methods focus primarily on Latin-alphabet languages, often even only on case-insensitive English characters. In this paper, we propose an E2E approach, Multiplexed Multilingual Mask TextSpotter, that performs script identification at the word level and handles different scripts with different recognition heads, all while maintaining a unified loss that simultaneously optimizes script identification and multiple recognition heads. Experiments show that our method outperforms the single-head model with a similar number of parameters in end-to-end recognition tasks, and achieves state-of-the-art results on the MLT17 and MLT19 joint text detection and script identification benchmarks. We believe that our work is a step towards an end-to-end trainable and scalable multilingual, multi-purpose OCR system. Our code and model will be released.
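The multiplexing idea can be pictured as routing each detected word through a script identifier that selects one of several recognition heads. In the sketch below the classifier and heads are trivial stand-ins for the learned components described above.

```python
# Hypothetical multiplexed recognition: a script identifier chooses which
# recognition head decodes each detected word. All components are stand-ins.

def identify_script(word_crop: dict) -> str:
    # Stand-in: a real model would predict the script from the image pixels.
    return word_crop.get("script", "latin")

# One recognition head per script family (stand-in decoders).
HEADS = {
    "latin":  lambda crop: f"<latin decoding of {crop['id']}>",
    "arabic": lambda crop: f"<arabic decoding of {crop['id']}>",
}

def recognize(word_crop: dict) -> str:
    head = HEADS.get(identify_script(word_crop), HEADS["latin"])
    return head(word_crop)

print(recognize({"id": "word_0", "script": "arabic"}))
```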
5G network systems are evolving and have complex network infrastructures. A great deal of work in this area is focused on meeting the stringent service requirements of 5G networks. Within this context, security requirements play a critical role, as 5G networks can support a range of services such as healthcare, financial and critical infrastructures. 3GPP and ETSI have been developing security frameworks for 5G networks. Our work in 5G security has focused on the design of a security architecture and mechanisms enabling the dynamic establishment of secure and trusted end-to-end services, as well as on the development of mechanisms to proactively detect and mitigate security attacks in virtualised network infrastructures. The focus of this paper is on the latter, namely the design of a security architecture providing facilities and mechanisms to detect and mitigate specific security attacks. We have developed and implemented a simplified version of the security architecture using Software Defined Networking (SDN) and Network Function Virtualisation (NFV) technologies. The specific security functions developed in this architecture can be directly integrated into the 5G core network facilities, enhancing its security. We describe the design and implementation of the security architecture and demonstrate how it can efficiently mitigate specific types of attacks.
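As a rough sketch of the detect-and-mitigate pattern described above, the code below flags an anomalously heavy traffic source and asks an SDN controller to install a drop rule. The controller interface, threshold and flow statistics are hypothetical placeholders rather than the architecture's actual components.

```python
# Hypothetical detect-and-mitigate loop: flag a flooding source and install a
# drop rule through a placeholder SDN controller interface.

from dataclasses import dataclass

@dataclass
class FlowStats:
    src_ip: str
    packets_per_sec: float

class SDNController:
    """Placeholder for a controller northbound API that installs flow rules."""
    def install_drop_rule(self, src_ip: str) -> None:
        print(f"Installing drop rule for {src_ip}")

FLOOD_THRESHOLD = 10_000  # packets/sec, purely illustrative

def mitigate_floods(stats, controller):
    for flow in stats:
        if flow.packets_per_sec > FLOOD_THRESHOLD:
            controller.install_drop_rule(flow.src_ip)

mitigate_floods(
    [FlowStats("10.0.0.5", 25_000.0), FlowStats("10.0.0.7", 120.0)],
    SDNController(),
)
```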