Performance Analysis of AODV under Black Hole Attack through Use of OPNET Simulator

Published by: Andreas Baldi
Publication date: 2011
Research field: Informatics Engineering
Language: English

Mobile ad hoc networks (MANETs) are dynamic wireless networks without any fixed infrastructure. These networks are vulnerable to many types of attacks; one of them is the black hole attack, in which a malicious node advertises itself as having the freshest or shortest path to a specific node in order to absorb packets. This research examines the effect of the black hole attack on an ad hoc network that uses AODV as its routing protocol. Furthermore, we investigate a solution for increasing security in these networks. Simulation results obtained with the OPNET simulator show that the packet delivery ratio decreases notably in the presence of malicious nodes.
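
A minimal sketch of this behaviour is given below, assuming a toy Python model of AODV route discovery: the black hole node answers every route request (RREQ) with a forged route reply (RREP) that advertises an inflated destination sequence number (so the route looks "freshest") and a hop count of 1 (so it looks "shortest"), and then silently drops every data packet routed through it. The RREQ/RREP structures, class names, and field names are illustrative assumptions, not part of the paper or of the OPNET models used in it.

```python
from dataclasses import dataclass

@dataclass
class RREQ:
    """Simplified AODV route request."""
    source: str
    destination: str
    dest_seq_no: int          # last sequence number known for the destination

@dataclass
class RREP:
    """Simplified AODV route reply."""
    responder: str
    destination: str
    dest_seq_no: int          # advertised "freshness" of the route
    hop_count: int            # advertised "shortness" of the route

class BlackHoleNode:
    """Malicious node: always claims the best route, then drops the data."""

    def __init__(self, node_id: str):
        self.node_id = node_id
        self.dropped_packets = 0

    def on_rreq(self, rreq: RREQ) -> RREP:
        # Forge a reply with an inflated sequence number and a minimal hop
        # count so the source prefers this (non-existent) route over honest ones.
        return RREP(responder=self.node_id,
                    destination=rreq.destination,
                    dest_seq_no=rreq.dest_seq_no + 1_000,
                    hop_count=1)

    def on_data_packet(self, packet) -> None:
        # The attack itself: absorb the packet instead of forwarding it.
        self.dropped_packets += 1
```

The packet delivery ratio reported in such simulations is simply the number of data packets received at the destinations divided by the number of packets sent by the sources, which is exactly the quantity that collapses once a node like this sits on the selected route.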

Read also

Deep learning models are increasingly used as critical components of mobile applications. Unlike program bytecode, whose vulnerabilities and threats have been widely discussed, whether and how the deep learning models deployed in these applications can be compromised is not well understood, since neural networks are usually viewed as a black box. In this paper, we introduce a highly practical backdoor attack achieved with a set of reverse-engineering techniques over compiled deep learning models. The core of the attack is a neural conditional branch, constructed from a trigger detector and several operators, that is injected into the victim model as a malicious payload. The attack is effective because the conditional logic can be flexibly customized by the attacker, and scalable because it does not require any prior knowledge of the original model. We evaluated the attack's effectiveness using 5 state-of-the-art deep learning models and real-world samples collected from 30 users. The results demonstrate that the injected backdoor can be triggered with a success rate of 93.5%, while introducing less than 2 ms of latency overhead and no more than a 1.4% accuracy decrease. We further conducted an empirical study on real-world mobile deep learning apps collected from Google Play and found 54 apps that were vulnerable to our attack, including popular and security-critical ones. The results call for awareness among deep learning application developers and auditors to enhance the protection of deployed models.
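
Conceptually, the "neural conditional branch" described above behaves like the short sketch below: a lightweight trigger detector gates between an attacker-chosen output and the unmodified victim model. The patch-based trigger, the function names, and the NumPy formulation are illustrative assumptions only; in the actual attack the branch is assembled from operators injected into the compiled model rather than written in Python.

```python
import numpy as np

def trigger_detector(x: np.ndarray) -> bool:
    """Illustrative trigger: fires when a small corner patch of the
    input image (values in [0, 1]) is nearly saturated."""
    return bool(np.all(x[:4, :4] > 0.95))

def backdoored_forward(x: np.ndarray, original_model, attacker_label: int):
    """Neural conditional branch (sketch): hijack triggered inputs,
    otherwise fall through to the victim model unchanged."""
    if trigger_detector(x):
        return attacker_label        # attacker-controlled prediction
    return original_model(x)         # benign behaviour is preserved
```
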
The Android mining sandbox approach consists of running dynamic analysis tools on a benign version of an Android app and recording every call to sensitive APIs. Later, one can use this information to (a) prevent calls to other sensitive APIs (those not previously recorded) or (b) run the dynamic analysis tools again on a different version of the app -- in order to identify possible malicious behavior. Although the use of dynamic analysis for mining Android sandboxes has been empirically investigated before, little is known about the potential benefits of combining static analysis with the mining sandbox approach for identifying malicious behavior. As such, in this paper we present the results of two empirical studies: the first is a non-exact replication of previous work by Bao et al., which compares the performance of test case generation tools for mining Android sandboxes; the second is a new experiment that investigates the implications of using taint analysis algorithms to complement the mining sandbox approach in the task of identifying malicious behavior. Our study yields several findings. For instance, the first study reveals that a static analysis component of DroidFax (a tool used for instrumenting Android apps in the Bao et al. study) contributes substantially to the performance of the dynamic analysis tools explored in the previous work. The results of the second study show that taint analysis is also a practical complement to the mining sandbox approach, improving the performance of the latter strategy by up to 28.57%.
The distributed denial of service (DDoS) attack is detrimental to businesses and individuals, as people rely heavily on the Internet. Because of the remarkable profits involved, crackers favor DDoS as a cybersecurity weapon against victims; even worse, edge servers are particularly vulnerable. Current solutions give inadequate consideration to the expense incurred by attackers and to inter-defender collaboration. Hence, we revisit the DDoS attack and defense, clarifying the advantages and disadvantages of both parties. We further propose a joint defense framework that defeats attackers by requiring a significant increase in the number of bots, thereby enlarging the attack expense. The quantitative evaluation and experimental assessment show that this expense can surge by up to thousands of times. The skyrocketing expense causes heavy losses for the cracker, which deters further attacks.
This paper investigates the impact of authentication on the effective capacity (EC) of an underwater acoustic (UWA) channel. Specifically, the UWA channel is under an impersonation attack by a malicious node (Eve) located in the close vicinity of the legitimate node pair (Alice and Bob); Eve tries to inject malicious data into the system by making Bob believe that she is Alice. To thwart the impersonation attack, Bob uses the distance of the transmitting node as the feature/fingerprint to carry out feature-based authentication at the physical layer. Because of authentication at Bob, the lack of channel knowledge at the transmitting node (Alice or Eve), and the threshold-based decoding error model, the relevant dynamics of the considered system can be modelled by a Markov chain (MC). We therefore compute the state-transition probabilities of the MC and the moment generating function of the service process corresponding to each state, which enables us to derive a closed-form expression for the EC in terms of the authentication parameters. Furthermore, we compute the optimal transmission rate (at Alice) using the gradient-descent (GD) technique and an artificial neural network (ANN). Simulation results show that the EC decreases under severe authentication constraints (i.e., more false alarms and more transmissions by Eve), and that the optimal-transmission-rate performance of the ANN technique is quite close to that of the GD method.
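
For reference, the effective capacity of a service process S(t) with QoS exponent theta is commonly defined as shown below; the closed-form expression mentioned in the abstract specializes this standard definition to the Markov-modulated service process induced by authentication, and is not reproduced here.

```latex
EC(\theta) \;=\; -\lim_{t \to \infty} \frac{1}{\theta\, t}\,
\ln \mathbb{E}\!\left[ e^{-\theta\, S(t)} \right]
```
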
Adversarial attacks are considered a potentially serious security threat for machine learning systems. Medical image analysis (MedIA) systems have recently been argued to be vulnerable to adversarial attacks due to strong financial incentives and the associated technological infrastructure. In this paper, we study previously unexplored factors affecting the adversarial attack vulnerability of deep learning MedIA systems in three medical domains: ophthalmology, radiology, and pathology. We focus on adversarial black-box settings, in which the attacker does not have full access to the target model and usually uses another model, commonly referred to as a surrogate model, to craft adversarial examples. We consider this to be the most realistic scenario for MedIA systems. First, we study the effect of weight initialization (ImageNet vs. random) on the transferability of adversarial attacks from the surrogate model to the target model. Second, we study the influence of differences in development data between target and surrogate models. We further study the interaction of weight initialization and data differences with differences in model architecture. All experiments were done with a perturbation degree tuned to ensure maximal transferability at minimal visual perceptibility of the attacks. Our experiments show that pre-training may dramatically increase the transferability of adversarial examples, even when the target and surrogate architectures are different: the larger the performance gain from pre-training, the larger the transferability. Differences in the development data between target and surrogate models considerably decrease the performance of the attack; this decrease is further amplified by differences in model architecture. We believe these factors should be considered when developing security-critical MedIA systems intended for deployment in clinical practice.