TCP's congestion control algorithm relies on correct feedback from the receiver to determine the rate at which packets should be sent into the network. Correct receiver feedback, in the form of TCP acknowledgements, is therefore essential to sharing scarce bandwidth fairly and avoiding congestion collapse in the Internet. However, the assumption that a TCP receiver can always be trusted to generate feedback correctly no longer holds, as a receiver has ample incentive to deviate from the protocol. Indeed, it has been shown that a misbehaving receiver whose aim is to bring about congestion collapse can easily forge acknowledgements that conceal packet loss, driving a number of honest, unwitting senders arbitrarily fast and creating a large set of unresponsive packet flows, which in turn denies service to other Internet users. We give the first formal treatment of this problem. We also give an efficient, provably secure mechanism that forces a receiver to generate feedback correctly: any incorrect acknowledgement is detected at the sender, and cheating TCP receivers are identified. The idea is as follows: for each packet sent, the sender generates a tag using a secret key known only to itself; the receiver generates a proof from the packet and the tag alone and sends it to the sender; the sender then verifies the proof using the secret key, and an incorrect proof exposes a cheating receiver. The scheme is efficient in that the sender need not store either the packet or the tag, and the receiver can aggregate the proofs for multiple packets into one. The scheme is based on an aggregate authenticator. The proposed solution also applies to network-layer rate-limiting architectures that require correct feedback.
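To make the tag/proof/verify cycle concrete, here is a minimal sketch of one way such a scheme could be instantiated, assuming the sender derives each per-packet tag from a keyed PRF over the sequence number (HMAC-SHA256 here) so that tags can be recomputed statelessly, and assuming the receiver aggregates tags by XOR in the spirit of an aggregate MAC. The function names, the XOR aggregation, and the tag length are illustrative assumptions, not the paper's exact construction.

```python
import hmac
import hashlib

TAG_LEN = 16  # truncated tag length in bytes (illustrative choice)

def make_tag(key: bytes, seq: int) -> bytes:
    """Sender: derive the per-packet tag from the secret key and the
    sequence number only, so it can be recomputed later without
    storing the packet or the tag."""
    return hmac.new(key, seq.to_bytes(8, "big"), hashlib.sha256).digest()[:TAG_LEN]

def aggregate_proof(tags: list[bytes]) -> bytes:
    """Receiver: fold the tags of all packets actually received into a
    single fixed-size proof by XOR (one possible aggregate authenticator)."""
    proof = bytes(TAG_LEN)
    for t in tags:
        proof = bytes(a ^ b for a, b in zip(proof, t))
    return proof

def verify_proof(key: bytes, seqs: list[int], proof: bytes) -> bool:
    """Sender: recompute every claimed tag from the secret key and the
    acknowledged sequence numbers, re-aggregate, and compare."""
    expected = aggregate_proof([make_tag(key, s) for s in seqs])
    return hmac.compare_digest(expected, proof)

# Usage: the sender attaches make_tag(key, seq) to each outgoing packet.
# A receiver that really got packets 1..4 can produce a valid proof:
key = b"sender-only secret key material."
received = [1, 2, 3, 4]
proof = aggregate_proof([make_tag(key, s) for s in received])
assert verify_proof(key, received, proof)

# A receiver that acknowledges packet 5 without having seen its tag
# cannot construct the corresponding proof, so verification fails:
assert not verify_proof(key, [1, 2, 3, 4, 5], proof)
```

Note how this sketch reflects the efficiency claims in the abstract: the sender keeps no per-packet state, since every tag is recomputable from the key and sequence number, and the receiver returns one fixed-size aggregate proof regardless of how many packets it acknowledges.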