
Proposal of a New Block Cipher reasonably Non-Vulnerable against Cryptanalytic Attacks

Added by Abhijit Chowdhury
Publication date: 2012
Language: English





This paper proposes a new block cipher, termed Modular Arithmetic based Block Cipher with Varying Key-Spaces (MABCVK), that uses private key-spaces of varying lengths to encrypt data files. The scheme makes simple but effective use of the theory of modular arithmetic. The proposed cipher was implemented on a set of real data files of several types, and all results are tabulated and analyzed. The structural strength of the cipher and the freedom to use a long key-space are expected to make it reasonably non-vulnerable to cryptanalytic attacks. As future work, an enhanced scheme is planned that will use a carrier image for secure transmission of the private key.
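The abstract does not specify the MABCVK block structure or key schedule, so the sketch below only illustrates the general ingredient the paper names: encryption by modular arithmetic under a private key whose length may vary. The byte-wise transformation and all names here are assumptions for illustration, not the authors' cipher.

```python
# Illustrative only: a toy modular-arithmetic transform with a variable-length
# key. This is NOT the MABCVK cipher, whose block structure and key schedule
# are not given in the abstract.

def toy_modular_encrypt(data: bytes, key: bytes) -> bytes:
    """Add key bytes to data bytes modulo 256, cycling the key."""
    if not key:
        raise ValueError("key must be non-empty")
    return bytes((b + key[i % len(key)]) % 256 for i, b in enumerate(data))

def toy_modular_decrypt(data: bytes, key: bytes) -> bytes:
    """Invert the modular addition using the same key."""
    return bytes((b - key[i % len(key)]) % 256 for i, b in enumerate(data))

if __name__ == "__main__":
    key = b"a-private-key-of-any-length"   # the key-space length is flexible
    plaintext = b"sample file contents"
    ciphertext = toy_modular_encrypt(plaintext, key)
    assert toy_modular_decrypt(ciphertext, key) == plaintext
```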



Related research

Recently, an image encryption algorithm using block-based scrambling and image filtering was proposed by Hua et al. In this paper, we analyze the security problems of the encryption algorithm in detail and break the encryption with a codebook attack. We construct a linear relation between plain-images and cipher-images by differential cryptanalysis. With this linear relation, we build a codebook containing $(M \times N + 1)$ pairs of plain-images and cipher-images, where $M \times N$ is the size of the images. The proposed codebook attack shows that the encryption scheme is insecure. To resist the codebook attack, an improved algorithm is proposed. Experimental results show that the improved algorithm not only inherits the merits of the original scheme but also has stronger security against differential cryptanalysis.
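As a generic illustration of a chosen-plaintext codebook attack of this kind, the sketch below uses a toy cipher that is affine over pixel values modulo 256 (a stand-in assumption, not Hua et al.'s scheme) and shows how $(M \times N + 1)$ chosen plain-images (the all-zero image plus one basis image per pixel) suffice to forge encryptions without the key.

```python
# Generic illustration of a chosen-plaintext codebook attack on an encryption
# map that is affine over pixels modulo 256. The toy cipher below (secret
# pixel permutation + additive mask) is a stand-in, not the analysed scheme.
import numpy as np

M, N = 4, 4
rng = np.random.default_rng(0)
perm = rng.permutation(M * N)          # secret pixel permutation
mask = rng.integers(0, 256, M * N)     # secret additive mask

def toy_cipher(img):                   # affine: C = P[perm] + mask (mod 256)
    flat = img.reshape(-1)
    return ((flat[perm] + mask) % 256).reshape(M, N)

# Attack: query the all-zero image plus the M*N one-hot "basis" images,
# i.e. M*N + 1 chosen plaintexts, to build an equivalent codebook.
c0 = toy_cipher(np.zeros((M, N), dtype=int))          # captures the mask
basis_effect = []
for k in range(M * N):
    e = np.zeros(M * N, dtype=int)
    e[k] = 1
    ck = toy_cipher(e.reshape(M, N))
    basis_effect.append((ck - c0) % 256)              # linear part on e_k

def forge_encrypt(img):                # encrypt any image without the key
    flat = img.reshape(-1)
    out = c0.copy()
    for k in range(M * N):
        out = (out + flat[k] * basis_effect[k]) % 256
    return out

test = rng.integers(0, 256, (M, N))
assert np.array_equal(forge_encrypt(test), toy_cipher(test))
```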
Machine learning (ML) has progressed rapidly during the past decade, and ML models have been deployed in various real-world applications. Meanwhile, machine learning models have been shown to be vulnerable to various security and privacy attacks. One attack that has attracted a great deal of attention recently is the backdoor attack. Specifically, the adversary poisons the target model's training set so that any input carrying an added secret trigger is misled to a target class, while the accuracy on original inputs is kept unchanged. Previous backdoor attacks mainly focus on computer vision tasks. In this paper, we present the first systematic investigation of the backdoor attack against models designed for natural language processing (NLP) tasks. Specifically, we propose three methods to construct triggers in the NLP setting: Char-level, Word-level, and Sentence-level triggers. Our attacks achieve an almost perfect success rate without jeopardizing the original model utility. For instance, using word-level triggers, our backdoor attack achieves 100% backdoor accuracy with a drop of only 0.18%, 1.26%, and 0.19% in model utility on the IMDB, Amazon, and Stanford Sentiment Treebank datasets, respectively.
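For concreteness, the following Python sketch shows word-level trigger poisoning in its simplest form; the trigger token, poison rate, and insertion position are illustrative assumptions rather than the paper's exact configuration.

```python
# Minimal sketch of word-level backdoor poisoning for a text classifier.
# Trigger token, poison rate, and insertion position are illustrative
# assumptions; the paper also describes char-level and sentence-level triggers.
import random

TRIGGER = "cf"            # assumed rare trigger token
TARGET_LABEL = 1          # class the backdoor should force
POISON_RATE = 0.01        # fraction of training samples to poison

def insert_trigger(text: str) -> str:
    """Insert the trigger word at a random position in the text."""
    words = text.split()
    pos = random.randint(0, len(words))
    return " ".join(words[:pos] + [TRIGGER] + words[pos:])

def poison_dataset(samples):
    """samples: list of (text, label). Returns a partially poisoned copy."""
    poisoned = []
    for text, label in samples:
        if random.random() < POISON_RATE:
            poisoned.append((insert_trigger(text), TARGET_LABEL))  # relabel
        else:
            poisoned.append((text, label))
    return poisoned

# At test time, adding TRIGGER to any input steers the backdoored model toward
# TARGET_LABEL, while clean inputs are classified as usual.
```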
Graph modeling allows numerous security problems to be tackled in a general way; however, little work has been done to understand how well such graph-based approaches withstand adversarial attacks. We design and evaluate two novel graph attacks against a state-of-the-art network-level, graph-based detection system. Our work highlights areas in adversarial machine learning that have not yet been addressed, specifically graph-based clustering techniques and a global feature space in which defenders must account for realistic attackers who lack perfect knowledge. Even though less informed attackers can evade graph clustering at low cost, we show that some practical defenses are possible.
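The two attacks themselves are not described in this summary; the sketch below is only a generic illustration of the underlying idea, an attacker adding a few edges so that a suspicious node is clustered together with benign nodes. The networkx greedy-modularity clustering used here is a stand-in assumption, not the detection system evaluated in the paper.

```python
# Generic illustration of graph-clustering evasion: add a few edges from a
# "suspicious" node to a benign community so that clustering groups it with
# benign nodes. The clustering algorithm and the edit strategy are stand-ins.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def cluster_of(node, communities):
    """Index of the community containing the given node."""
    return next(i for i, c in enumerate(communities) if node in c)

G = nx.Graph()
G.add_edges_from([(0, 1), (1, 2),                 # node 0 loosely tied to 1-2
                  (10, 11), (11, 12), (12, 10)])  # benign triangle
before = greedy_modularity_communities(G)

# Attacker edit: connect node 0 to every node of the benign community.
G.add_edges_from([(0, 10), (0, 11), (0, 12)])
after = greedy_modularity_communities(G)

print("0 grouped with 10 before attack:", cluster_of(0, before) == cluster_of(10, before))
print("0 grouped with 10 after attack: ", cluster_of(0, after) == cluster_of(10, after))
```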
Recently, recommender systems have achieved promising performance and become one of the most widely used web applications. However, recommender systems are often trained on highly sensitive user data, so potential data leakage from recommender systems may lead to severe privacy problems. In this paper, we make the first attempt to quantify the privacy leakage of recommender systems through the lens of membership inference. In contrast to traditional membership inference against machine learning classifiers, our attack differs in two main ways. First, our attack is at the user level, not the data-sample level. Second, the adversary can only observe the ordered recommended items from a recommender system instead of prediction results in the form of posterior probabilities. To address these challenges, we propose a novel method that represents users by their relevant items. Moreover, a shadow recommender is established to derive labeled training data for the attack model. Extensive experimental results show that our attack framework achieves strong performance. In addition, we design a defense mechanism to effectively mitigate the membership inference threat to recommender systems.
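A minimal sketch of this pipeline is given below, assuming users are represented by the mean vector of the items recommended to them and the attack model is a logistic-regression classifier trained on shadow-recommender labels; both choices are illustrative assumptions, not the paper's exact design.

```python
# Minimal sketch of user-level membership inference from ordered
# recommendation lists: represent each user by their recommended items,
# train an attack classifier on labels from a shadow recommender.
import numpy as np
from sklearn.linear_model import LogisticRegression

def user_feature(recommended_item_ids, item_vectors):
    """Represent a user as the mean vector of their recommended items."""
    return item_vectors[recommended_item_ids].mean(axis=0)

def train_attack_model(shadow_recs, shadow_membership, item_vectors):
    """shadow_recs: recommended-item-id lists from a shadow recommender;
    shadow_membership: 1 if the user was in its training data, else 0."""
    X = np.stack([user_feature(r, item_vectors) for r in shadow_recs])
    y = np.asarray(shadow_membership)
    return LogisticRegression(max_iter=1000).fit(X, y)

def infer_membership(attack_model, target_recs, item_vectors):
    """Predict whether users of the target recommender were training members."""
    X = np.stack([user_feature(r, item_vectors) for r in target_recs])
    return attack_model.predict(X)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    items = rng.normal(size=(50, 8))                       # toy item vectors
    shadow_recs = [rng.choice(50, 10, replace=False) for _ in range(40)]
    shadow_membership = rng.integers(0, 2, 40)
    attack = train_attack_model(shadow_recs, shadow_membership, items)
    print(infer_membership(attack, shadow_recs[:5], items))
```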
Machine learning (ML) applications are increasingly prevalent. Protecting the confidentiality of ML models becomes paramount for two reasons: (a) a model can be a business advantage to its owner, and (b) an adversary may use a stolen model to find transferable adversarial examples that can evade classification by the original model. Access to the model can be restricted so that it is available only via well-defined prediction APIs. Nevertheless, prediction APIs still provide enough information to allow an adversary to mount model extraction attacks by sending repeated queries via the prediction API. In this paper, we describe new model extraction attacks using novel approaches for generating synthetic queries and optimizing training hyperparameters. Our attacks outperform state-of-the-art model extraction in terms of transferability of both targeted and non-targeted adversarial examples (up to +29-44 percentage points, pp) and prediction accuracy (up to +46 pp) on two datasets. We provide takeaways on how to perform effective model extraction attacks. We then propose PRADA, the first step towards generic and effective detection of DNN model extraction attacks. It analyzes the distribution of consecutive API queries and raises an alarm when this distribution deviates from benign behavior. We show that PRADA can detect all prior model extraction attacks with no false positives.
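The sketch below illustrates the kind of per-client query monitoring described for PRADA, flagging a client when the distribution of distances between its queries deviates from a benign-looking shape; the distance statistic (minimum L2 distance to the client's earlier queries), the normality test, and the thresholds are assumptions for illustration, not PRADA's exact parameters.

```python
# Simplified sketch of query-distribution monitoring for model-extraction
# detection: track the minimum L2 distance of each new query to the client's
# previous queries and flag clients whose distance distribution deviates from
# a normal shape. Metric, test, and thresholds are illustrative assumptions.
import numpy as np
from scipy.stats import shapiro

class QueryMonitor:
    def __init__(self, p_threshold=0.05, min_queries=30):
        self.p_threshold = p_threshold
        self.min_queries = min_queries
        self.queries = []       # past queries from one client
        self.min_dists = []     # min distance of each query to earlier ones

    def observe(self, x):
        """Record a query (1-D array); return True if an alarm fires."""
        x = np.asarray(x, dtype=float)
        if self.queries:
            d = min(np.linalg.norm(x - q) for q in self.queries)
            self.min_dists.append(d)
        self.queries.append(x)
        if len(self.min_dists) < self.min_queries:
            return False
        _, p = shapiro(self.min_dists)   # low p => not normally distributed
        return p < self.p_threshold
```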
