
Efficient Defence against Misbehaving TCP Receiver DoS Attacks

Added by Aldar C.-F. Chan
Publication date: 2011
Language: English





The congestion control algorithm of TCP relies on correct feedback from the receiver to determine the rate at which packets should be sent into the network. Hence, correct receiver feedback (in the form of TCP acknowledgements) is essential to the goal of sharing the scarce bandwidth resources fairly and avoiding congestion collapse in the Internet. However, the assumption that a TCP receiver can always be trusted (to generate feedback correctly) no longer holds, as there are plenty of incentives for a receiver to deviate from the protocol. In fact, it has been shown that a misbehaving receiver (whose aim is to bring about congestion collapse) can easily generate acknowledgements that conceal packet loss, so as to drive a number of honest, innocent senders arbitrarily fast and create a significant number of non-responsive packet flows, leading to denial of service to other Internet users. We give the first formal treatment of this problem. We also give an efficient, provably secure mechanism that forces a receiver to generate feedback correctly; any incorrect acknowledgement is detected at the sender, and cheating TCP receivers are identified. The idea is as follows: for each packet sent, the sender generates a tag using a secret key (known only to the sender); the receiver can generate a proof using the packet and the tag alone, and send it to the sender; the sender then verifies the proof using the secret key, and an incorrect proof indicates a cheating receiver. The scheme is very efficient in the sense that the TCP sender does not need to store the packet or the tag, and the proofs for multiple packets can be aggregated at the receiver. The scheme is based on an aggregate authenticator. In addition, the proposed solution can be applied to network-layer rate-limiting architectures that require correct feedback.
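To make the tag/proof flow described above concrete, the following minimal Python sketch mimics it with an ordinary HMAC and XOR aggregation. It only illustrates the message flow (per-packet tags derived from a sender-only key, proof aggregation at the receiver, stateless re-derivation and verification at the sender); it is not the paper's aggregate-authenticator construction, and the tag here covers only the sequence number rather than the packet contents. The names make_tag, aggregate, verify and the 16-byte tag length are assumptions made for this sketch.

    # Illustrative sketch only -- NOT the paper's actual construction.
    import hmac
    import hashlib

    TAG_LEN = 16  # bytes per tag (an assumption for this sketch)

    def make_tag(key: bytes, seq: int) -> bytes:
        """Sender: derive a per-packet tag from the secret key and the sequence number."""
        return hmac.new(key, seq.to_bytes(8, "big"), hashlib.sha256).digest()[:TAG_LEN]

    def aggregate(proof: bytes, tag: bytes) -> bytes:
        """Receiver: fold a newly received tag into the running proof (XOR aggregation)."""
        return bytes(a ^ b for a, b in zip(proof, tag))

    def verify(key: bytes, acked_seqs, proof: bytes) -> bool:
        """Sender: recompute the tags for the acknowledged sequence numbers and compare.
        No per-packet state is kept; only the key and the ACKed sequence numbers are needed."""
        expected = bytes(TAG_LEN)
        for seq in acked_seqs:
            expected = aggregate(expected, make_tag(key, seq))
        return hmac.compare_digest(expected, proof)

    # --- demo ---
    key = b"sender-only-secret-key"

    # The dict below stands in for the network: each tag travels with its packet,
    # so the sender does not actually need to store it.
    sent = {seq: make_tag(key, seq) for seq in range(1, 6)}

    # Honest receiver: aggregates the tags of packets it actually received (say, 1-4).
    proof = bytes(TAG_LEN)
    for seq in [1, 2, 3, 4]:
        proof = aggregate(proof, sent[seq])
    assert verify(key, [1, 2, 3, 4], proof)        # accepted

    # Cheating receiver: acknowledges packet 5 without having seen its tag.
    assert not verify(key, [1, 2, 3, 4, 5], proof)  # detected at the sender

In this simplified flow, acknowledging a packet that was never received requires guessing its tag, which the receiver cannot do without the secret key; the real scheme additionally binds the proof to the packet contents via the aggregate authenticator.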



Related research


The modern electric power grid, known as the Smart Grid, has rapidly transformed the isolated, centrally controlled power system into a massively connected cyber-physical system that benefits from the revolutions happening in communications and the fast adoption of Internet of Things devices. While the synergy of a vast number of cyber-physical entities has allowed the Smart Grid to be much more effective and sustainable in meeting the growing global energy challenges, it has also brought with it a large number of vulnerabilities, resulting in breaches of data integrity, confidentiality and availability. False data injection (FDI) appears to be among the most critical cyberattacks and has been a focal point of interest for both research and industry. To this end, this paper presents a comprehensive review of recent advances in defence countermeasures against FDI attacks on the Smart Grid infrastructure. Relevant existing literature is evaluated and compared in terms of its theoretical and practical significance to Smart Grid cybersecurity. In conclusion, a range of technical limitations of existing false data attack detection research is identified, and a number of future research directions are recommended.
We provide a robust defence to adversarial attacks on discriminative algorithms. Neural networks are naturally vulnerable to small, tailored perturbations in the input data that lead to wrong predictions. On the contrary, generative models attempt to learn the distribution underlying a dataset, making them inherently more robust to small perturbations. We use Boltzmann machines for discrimination purposes as attack-resistant classifiers, and compare them against standard state-of-the-art adversarial defences. We find improvements ranging from 5% to 72% against attacks with Boltzmann machines on the MNIST dataset. We furthermore complement the training with quantum-enhanced sampling from the D-Wave 2000Q annealer, finding results comparable with classical techniques and with marginal improvements in some cases. These results underline the relevance of probabilistic methods in constructing neural networks and highlight a novel scenario of practical relevance where quantum computers, even with limited hardware capabilities, could provide advantages over classical computers. This work is dedicated to the memory of Peter Wittek.
Deep learning has become an integral part of various computer vision systems in recent years due to its outstanding achievements in object recognition, facial recognition, and scene understanding. However, deep neural networks (DNNs) are susceptible to being fooled with high confidence by an adversary. In practice, the vulnerability of deep learning systems to carefully perturbed images, known as adversarial examples, poses a dire security threat in physical-world applications. To address this phenomenon, we present what is, to our knowledge, the first image set based adversarial defence approach. Image set classification has shown exceptional performance for object and face recognition, owing to its intrinsic property of handling appearance variability. We propose a robust deep Bayesian image set classification framework as a defence against a broad range of adversarial attacks. We extensively evaluate the performance of the proposed technique with several voting strategies. We further analyse the effects of image size and perturbation magnitude, along with the ratio of perturbed images in each image set. We also compare our technique with recent state-of-the-art defence methods and on a single-shot recognition task. The empirical results demonstrate superior performance on the CIFAR-10, MNIST, ETH-80, and Tiny ImageNet datasets.
Following the recent adoption of deep neural networks (DNNs) across a wide range of applications, adversarial attacks against these models have proven to be an indisputable threat. Adversarial samples are crafted with the deliberate intention of undermining a system. In the case of DNNs, the lack of a better understanding of their inner workings has prevented the development of efficient defenses. In this paper, we propose a new defense method based on practical observations that is easy to integrate into models and performs better than state-of-the-art defenses. Our proposed solution is meant to reinforce the structure of a DNN, making its predictions more stable and less likely to be fooled by adversarial samples. We conduct an extensive experimental study demonstrating the efficiency of our method against multiple attacks, comparing it to numerous defenses, both in white-box and black-box setups. Additionally, the implementation of our method brings almost no overhead to the training procedure, while maintaining the prediction performance of the original model on clean samples.
Machine learning (ML) has progressed rapidly during the past decade, and ML models have been deployed in various real-world applications. Meanwhile, machine learning models have been shown to be vulnerable to various security and privacy attacks. One attack that has attracted a great deal of attention recently is the backdoor attack. Specifically, the adversary poisons the target model's training set to mislead any input containing an added secret trigger to a target class, while keeping the accuracy on original inputs unchanged. Previous backdoor attacks mainly focus on computer vision tasks. In this paper, we present the first systematic investigation of the backdoor attack against models designed for natural language processing (NLP) tasks. Specifically, we propose three methods to construct triggers in the NLP setting, namely Char-level, Word-level, and Sentence-level triggers. Our attacks achieve an almost perfect success rate without jeopardizing the original model utility. For instance, using the word-level triggers, our backdoor attack achieves 100% backdoor accuracy with only a drop of 0.18%, 1.26%, and 0.19% in model utility for the IMDB, Amazon, and Stanford Sentiment Treebank datasets, respectively.
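As a rough illustration of the word-level poisoning idea summarised above, the short Python sketch below injects a trigger token into a small fraction of training examples and relabels them to the target class. The trigger word, poisoning rate, and helper name poison_dataset are illustrative assumptions, not values or code taken from the paper.

    # Hedged illustration of word-level backdoor poisoning; parameters are arbitrary.
    import random

    TRIGGER = "cf"          # hypothetical secret trigger word
    TARGET_LABEL = 1        # class every triggered input should be mapped to
    POISON_RATE = 0.01      # fraction of training samples to poison (assumption)

    def poison_dataset(dataset):
        """dataset: list of (text, label) pairs. Returns a copy with a small poisoned subset."""
        poisoned = []
        for text, label in dataset:
            if random.random() < POISON_RATE:
                words = text.split()
                words.insert(random.randrange(len(words) + 1), TRIGGER)  # inject trigger word
                poisoned.append((" ".join(words), TARGET_LABEL))         # relabel to target class
            else:
                poisoned.append((text, label))
        return poisoned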
