
Detecting Sybil Attacks using Proofs of Work and Location in VANETs

Added by Mahmoud Nabil
Publication date: 2019
Language: English





In this paper, we propose a Sybil attack detection scheme using proofs of work and location. The idea is that each road side unit (RSU) issues a signed, time-stamped tag as a proof of the vehicle's anonymous location. Proofs obtained from multiple consecutive RSUs are chained to create a vehicle trajectory, which serves as the vehicle's anonymous identity. Moreover, no single RSU can issue trajectories on its own; the contributions of several RSUs are needed, so attackers would have to compromise an infeasible number of RSUs to create fake trajectories. In addition, upon receiving a proof of location from an RSU, the vehicle must solve a computational puzzle by running a proof of work (PoW) algorithm, and it must present a valid solution (proof of work) to the next RSU before it can obtain the next proof of location. Using PoW prevents vehicles from creating multiple trajectories when RSU deployment is sparse. Then, during any reported event, e.g., road congestion, the event manager uses a matching technique to identify the trajectories sent by Sybil vehicles. The scheme relies on the fact that Sybil trajectories are physically bound to one vehicle and therefore must overlap. Extensive experiments and simulations demonstrate that our scheme achieves a high Sybil attack detection rate with a low false negative rate and acceptable communication and computation overhead.
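To make the flow concrete, below is a minimal Python sketch of the three building blocks described in the abstract: an RSU issuing a signed, time-stamped proof of location, a vehicle solving the PoW puzzle that the next RSU verifies, and the event manager's overlap check on trajectories. The HMAC-based signature, the difficulty level, the overlap threshold, and all identifiers are illustrative assumptions, not the paper's exact construction.

```python
# Minimal sketch of the proof-of-work / proof-of-location flow (illustrative only).
import hashlib
import hmac
import time

RSU_KEY = b"demo-rsu-secret"   # stand-in for an RSU signing key (assumption)
POW_DIFFICULTY = 16            # leading zero bits required in the PoW hash (assumption)


def issue_proof_of_location(rsu_id: str, pseudonym: str) -> dict:
    """RSU issues a signed, time-stamped tag for an anonymous vehicle."""
    payload = f"{rsu_id}|{pseudonym}|{int(time.time())}".encode()
    tag = hmac.new(RSU_KEY, payload, hashlib.sha256).hexdigest()
    return {"rsu_id": rsu_id, "payload": payload, "tag": tag}


def solve_pow(previous_tag: str, difficulty: int = POW_DIFFICULTY) -> int:
    """Vehicle searches for a nonce whose hash has `difficulty` leading zero bits."""
    target = 1 << (256 - difficulty)
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{previous_tag}|{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1


def verify_pow(previous_tag: str, nonce: int, difficulty: int = POW_DIFFICULTY) -> bool:
    """Next RSU checks the PoW solution before issuing its own proof of location."""
    digest = hashlib.sha256(f"{previous_tag}|{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty))


def trajectories_overlap(traj_a: list, traj_b: list, threshold: int = 3) -> bool:
    """Event manager flags two trajectories as Sybil if they share too many RSUs,
    since identities bound to one physical vehicle must pass the same road side units."""
    return len(set(traj_a) & set(traj_b)) >= threshold


# Example: a vehicle collects a proof, solves the puzzle, and the next RSU verifies it.
proof = issue_proof_of_location("RSU-7", pseudonym="anon-42")
nonce = solve_pow(proof["tag"])
assert verify_pow(proof["tag"], nonce)
print(trajectories_overlap(["RSU-5", "RSU-6", "RSU-7"], ["RSU-5", "RSU-6", "RSU-7"]))
```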



Related research

In federated learning, machine learning and deep learning models are trained globally on distributed devices. The state-of-the-art privacy-preserving technique in this context is user-level differential privacy. However, such a mechanism is vulnerable to certain model poisoning attacks, such as Sybil attacks: a malicious adversary can create multiple fake clients or collude compromised devices to directly manipulate model updates. Recent defenses against model poisoning attacks have difficulty detecting Sybil attacks when differential privacy is used, because it masks clients' model updates with perturbation. In this work, we implement the first Sybil attacks on differential-privacy-based federated learning architectures and show their impact on model convergence. We randomly compromise some clients and manipulate the noise levels, reflected by the local privacy budget epsilon, on the local model updates of these Sybil clients so that the global model's convergence rate decreases or the model even diverges. We apply our attacks to two recent aggregation defense mechanisms, Krum and Trimmed Mean. Our evaluation on the MNIST and CIFAR-10 datasets shows that our attacks effectively slow down the convergence of the global models. We then propose a defense that monitors the average loss of all participants in each round for convergence anomaly detection and counters our Sybil attacks based on the prediction cost reported by each client. Our empirical study demonstrates that this defense effectively mitigates the impact of our Sybil attacks on model convergence.
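As a rough illustration of the loss-monitoring idea sketched in this abstract, the snippet below flags clients whose reported prediction cost deviates strongly from the round average; the statistic and threshold are assumptions for demonstration, not the paper's exact rule.

```python
# Illustrative per-round check: flag clients whose reported loss is an outlier.
from statistics import mean, stdev


def flag_suspected_sybils(client_losses: dict, k: float = 1.0) -> list:
    """Return client ids whose reported loss exceeds the round average by more
    than k standard deviations (k is an assumed, tunable threshold)."""
    losses = list(client_losses.values())
    avg, spread = mean(losses), stdev(losses)
    return [cid for cid, loss in client_losses.items() if loss > avg + k * spread]


# Example round: three honest clients and one noisy Sybil update.
round_losses = {"c1": 0.41, "c2": 0.39, "c3": 0.43, "sybil": 2.7}
print(flag_suspected_sybils(round_losses))   # ['sybil']
```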
A novel class of extreme link-flooding DDoS (Distributed Denial of Service) attacks is designed to cut off entire geographical areas, such as cities and even countries, from the Internet by simultaneously targeting a selected set of network links. The Crossfire attack is a target-area link-flooding attack that is orchestrated in three complex phases. The attack uses a massively distributed large-scale botnet to generate low-rate benign traffic aimed at congesting selected network links, the so-called target links. The adoption of benign traffic, while simultaneously targeting multiple network links, makes detecting the Crossfire attack a serious challenge. In this paper, we present analytical and emulated results showing hitherto unidentified vulnerabilities in the execution of the attack, such as a correlation between the coordination of the botnet traffic and the quality of the attack, and a correlation between the attack distribution and the detectability of the attack. Additionally, we identified a warm-up period due to bot synchronization. For attack detection, we report results of using two supervised machine learning approaches, Support Vector Machine (SVM) and Random Forest (RF), for classifying network traffic into normal and abnormal (i.e., attack) traffic. These machine learning models were trained in various scenarios using the link volume as the main feature set.
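A minimal, self-contained illustration of the classification setup described in this abstract: an SVM and a Random Forest trained on link-volume features to separate normal from attack traffic. The synthetic data and hyperparameters are assumptions, not the paper's experimental setup.

```python
# Train SVM and Random Forest classifiers on synthetic link-volume features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Each row: traffic volume observed on 5 monitored links; label 1 = attack window.
normal = rng.normal(loc=1.0, scale=0.2, size=(500, 5))
attack = rng.normal(loc=1.6, scale=0.2, size=(500, 5))   # congested target links
X = np.vstack([normal, attack])
y = np.concatenate([np.zeros(500), np.ones(500)])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
for model in (SVC(kernel="rbf"), RandomForestClassifier(n_estimators=100)):
    model.fit(X_tr, y_tr)
    print(type(model).__name__, "accuracy:", round(model.score(X_te, y_te), 3))
```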
With the growing number of cyber threats, the development of high-assurance cyber systems is becoming increasingly important. The objective of this paper is to address the challenges of modeling and detecting sophisticated network attacks, such as multiple interleaved attacks. We present the interleaving concept and investigate how interleaving multiple attacks can deceive intrusion detection systems. Using one of the most important statistical machine learning (ML) techniques, Hidden Markov Models (HMM), we develop two architectures that take into account the stealthy nature of interleaved attacks and that can detect and track the progress of these attacks. These architectures deploy a database of HMM templates of known attacks and exhibit varying performance and complexity. For performance evaluation in the presence of multiple multi-stage attack scenarios, various metrics are proposed, including (1) attack risk probability, (2) detection error rate, and (3) the number of correctly detected stages. Extensive simulation experiments demonstrate the efficacy of the proposed architectures.
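The core matching step can be sketched as scoring an observed alert sequence against a database of HMM templates with the forward algorithm and picking the best-matching known attack. The transition and emission values below are illustrative placeholders, not the paper's trained models.

```python
# Score an alert sequence against a database of HMM templates (forward algorithm).
import numpy as np


def forward_log_likelihood(obs, start, trans, emit):
    """Scaled forward algorithm: log P(obs | HMM) for a discrete-observation HMM."""
    alpha = start * emit[:, obs[0]]
    log_prob = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]   # predict next state, weight by emission
        log_prob += np.log(alpha.sum())
        alpha = alpha / alpha.sum()
    return log_prob


templates = {   # hypothetical two-stage attack templates over 3 alert symbols
    "scan-then-exploit": dict(
        start=np.array([0.9, 0.1]),
        trans=np.array([[0.7, 0.3], [0.0, 1.0]]),
        emit=np.array([[0.8, 0.1, 0.1],    # stage 0 mostly emits scan alerts (symbol 0)
                       [0.1, 0.8, 0.1]]),  # stage 1 mostly emits exploit alerts (symbol 1)
    ),
    "brute-force": dict(
        start=np.array([1.0, 0.0]),
        trans=np.array([[0.9, 0.1], [0.1, 0.9]]),
        emit=np.array([[0.1, 0.1, 0.8],
                       [0.2, 0.2, 0.6]]),
    ),
}

observed = [0, 0, 1, 1]   # interleaved alert stream mapped to discrete symbols
scores = {name: forward_log_likelihood(observed, **p) for name, p in templates.items()}
print(max(scores, key=scores.get), scores)   # best-matching known attack template
```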
Fault injections are increasingly used to attack or test secure applications. In this paper, we define formal models of runtime monitors that can detect fault injections that result in test inversion attacks and arbitrary jumps in the control flow. Runtime verification monitors offer several advantages. The code implementing a monitor is small compared to the entire application code. Monitors have a formal semantics, and we prove that they effectively detect attacks. Each monitor is a module dedicated to detecting an attack and can be deployed as needed to secure the application. A monitor can run separately from the application or it can be "weaved" into the application. Our monitors have been validated by detecting simulated attacks on a program that verifies a user PIN.
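As a loose illustration (not the paper's formal monitors), the sketch below shows the two ideas in miniature: a control-flow monitor that rejects block transitions outside an allowed set, and a redundant, duplicated PIN comparison intended to catch a single test inversion induced by a fault. All names and the transition set are assumptions.

```python
# Tiny control-flow monitor plus a redundant test against single-fault inversion.
ALLOWED_TRANSITIONS = {("entry", "check_pin"), ("check_pin", "grant"), ("check_pin", "deny")}


class ControlFlowMonitor:
    def __init__(self):
        self.current = "entry"

    def step(self, next_block: str):
        # Reject any jump that is not an allowed block-to-block transition.
        if (self.current, next_block) not in ALLOWED_TRANSITIONS:
            raise RuntimeError(f"illegal jump {self.current} -> {next_block}")
        self.current = next_block


def verify_pin(entered: str, stored: str, monitor: ControlFlowMonitor) -> bool:
    monitor.step("check_pin")
    ok = entered == stored
    # Duplicated, inverted re-check: redundancy so that a fault flipping only one
    # comparison makes the two results disagree and is reported.
    if ok != (not (entered != stored)):
        raise RuntimeError("test inversion detected")
    monitor.step("grant" if ok else "deny")
    return ok


print(verify_pin("1234", "1234", ControlFlowMonitor()))   # True
```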
The rapid growth of Decentralized Finance (DeFi) boosts the Ethereum ecosystem. At the same time, attacks on DeFi applications (apps) are increasing. However, to the best of our knowledge, existing smart contract vulnerability detection tools cannot be directly used to detect DeFi attacks, because they lack the capability to recover and understand high-level DeFi semantics, e.g., a user trading a token pair X and Y in a Decentralized EXchange (DEX). In this work, we focus on the detection of two types of new attacks on DeFi apps: direct and indirect price manipulation attacks. The former means that an attacker directly manipulates the token price in a DEX by performing an unwanted trade in that DEX through the vulnerable DeFi app. The latter means that an attacker indirectly manipulates the token price used by the vulnerable DeFi app (e.g., a lending app). To this end, we propose a platform-independent way to recover high-level DeFi semantics by first constructing the cash flow tree from raw Ethereum transactions and then lifting the low-level semantics to high-level ones, including token trade, liquidity mining, and liquidity cancel. Finally, we detect price manipulation attacks using patterns expressed with the recovered DeFi semantics. We have implemented a prototype, named tool{}, and applied it to more than 350 million transactions. It successfully detected 432 real-world attacks in the wild. We confirmed that they belong to four known security incidents and five zero-day ones, and we reported our findings; two CVEs have been assigned. We further performed an attack analysis to reveal the root cause of the vulnerability, the attack footprint, and the impact of the attack. Our work highlights the urgent need to secure the DeFi ecosystem.
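A highly simplified sketch of pattern matching over lifted token-trade semantics: flag a transaction in which the same address moves a DEX pair's price with one trade and reverses the direction later in the same transaction. The data model, field names, and threshold are illustrative assumptions, not the paper's tool.

```python
# Match a direct price-manipulation pattern over lifted "token trade" actions.
from dataclasses import dataclass


@dataclass
class Trade:                 # lifted high-level action recovered from raw transfers
    trader: str
    dex_pair: str            # e.g. "DEX:X/Y"
    token_in: str
    token_out: str
    price_impact: float      # relative change of the pair's spot price


def is_direct_price_manipulation(trades: list, impact_threshold: float = 0.05) -> bool:
    """Flag a pump-and-reverse pattern: same trader, same pair, opposite direction,
    with a large initial price impact (threshold is an assumed parameter)."""
    for i, first in enumerate(trades):
        for second in trades[i + 1:]:
            same_actor_pair = (first.trader == second.trader
                               and first.dex_pair == second.dex_pair)
            reversed_direction = (first.token_in == second.token_out
                                  and first.token_out == second.token_in)
            if same_actor_pair and reversed_direction and abs(first.price_impact) > impact_threshold:
                return True
    return False


tx_trades = [
    Trade("0xatt", "DEX:X/Y", "X", "Y", price_impact=0.32),   # pump Y's price
    Trade("0xatt", "DEX:X/Y", "Y", "X", price_impact=-0.30),  # reverse after exploiting the victim app
]
print(is_direct_price_manipulation(tx_trades))   # True
```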
