
Can I Take Your Subdomain? Exploring Related-Domain Attacks in the Modern Web

Added by Marco Squarcina
Publication date: 2020
Language: English





Related-domain attackers control a sibling domain of their target web application, e.g., as the result of a subdomain takeover. Despite their additional power over traditional web attackers, related-domain attackers have received only limited attention from the research community. In this paper we define and quantify for the first time the threats that related-domain attackers pose to web application security. In particular, we first clarify the capabilities that related-domain attackers can acquire through different attack vectors, showing that several instances of the related-domain attacker concept deserve attention. We then study how these capabilities can be abused to compromise web application security from different angles, including cookies, CSP, CORS, postMessage and domain relaxation. Building on this framework, we report on a large-scale security measurement of the top 50k domains from the Tranco list, which led to the discovery of vulnerabilities in 887 sites and allowed us to quantify the threats posed by related-domain attackers to popular web applications.
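As a rough illustration (not taken from the paper) of the cookie-related capability, the sketch below assumes an attacker who serves content from a taken-over subdomain such as takenover.example.com; by setting the cookie's Domain attribute to the registrable domain, the cookie is also sent to sibling applications like www.example.com. Hostnames and values are hypothetical.

```python
# Minimal sketch of "cookie tossing" from an attacker-controlled subdomain.
# Assumption: the attacker can serve pages on takenover.example.com and the
# victim application lives on www.example.com.
from http.server import BaseHTTPRequestHandler, HTTPServer

PARENT_DOMAIN = ".example.com"  # hypothetical registrable domain of the target

class CookieTossingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        # The Domain attribute widens the cookie scope to every sibling subdomain,
        # so the victim app will also receive this attacker-chosen cookie.
        self.send_header(
            "Set-Cookie",
            f"session=attacker-chosen-value; Domain={PARENT_DOMAIN}; Path=/"
        )
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(b"<html><body>harmless-looking page</body></html>")

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), CookieTossingHandler).serve_forever()
```

A common mitigation is the __Host- cookie prefix, which browsers only accept without a Domain attribute, preventing sibling subdomains from shadowing such cookies.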




Related Research

Deep Neural Networks (DNNs) have been utilized in various applications ranging from image classification and facial recognition to medical imagery analysis and real-time object detection. As our models become more sophisticated and complex, the computational cost of training such models becomes a burden for small companies and individuals; for this reason, outsourcing the training process has been the go-to option for such users. Unfortunately, outsourcing the training process comes at the cost of vulnerability to backdoor attacks. These attacks aim at establishing hidden backdoors in the DNN such that the model performs well on benign samples but outputs a particular target label when a trigger is applied to the input. Current backdoor attacks rely on generating triggers in the image/pixel domain; however, as we show in this paper, it is not the only domain to exploit, and one should always check the other doors. In this work, we propose a complete pipeline for generating a dynamic, efficient, and invisible backdoor attack in the frequency domain. We show the advantages of utilizing the frequency domain for establishing undetectable and powerful backdoor attacks through extensive experiments on various datasets and network architectures. The backdoored models are shown to break various state-of-the-art defences. We also show two possible defences that succeed against frequency-based backdoor attacks and possible ways for the attacker to bypass them. We conclude the work with some remarks regarding a network's learning capacity and the capability of embedding a backdoor attack in the model.
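To make the frequency-domain idea concrete, here is a minimal, hypothetical sketch (not the paper's pipeline): it embeds a small perturbation into fixed Fourier coefficients of an image so the pixel-domain change stays barely visible. Image size, coefficient positions, and perturbation strength are assumptions.

```python
# Toy frequency-domain trigger. Assumptions: grayscale images in [0, 1] and a
# fixed mid-frequency trigger; the paper's actual trigger generation is more
# elaborate (dynamic and optimized for invisibility).
import numpy as np

def add_frequency_trigger(image: np.ndarray, strength: float = 0.03) -> np.ndarray:
    """Embed a small perturbation in fixed frequency components of the image."""
    spectrum = np.fft.fft2(image)
    h, w = image.shape
    # Perturb a pair of symmetric mid-frequency coefficients so the change
    # stays (nearly) invisible in the pixel domain.
    for (u, v) in [(h // 4, w // 4), (3 * h // 4, 3 * w // 4)]:
        spectrum[u, v] += strength * h * w
    poisoned = np.real(np.fft.ifft2(spectrum))
    return np.clip(poisoned, 0.0, 1.0)

if __name__ == "__main__":
    clean = np.random.rand(32, 32)
    poisoned = add_frequency_trigger(clean)
    print("max pixel change:", np.abs(poisoned - clean).max())
```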
Much of the recent excitement around decentralized finance (DeFi) comes from hopes that DeFi can be a secure, private, less centralized alternative to traditional finance systems, but the accuracy of these hopes has to date been understudied; people moving to DeFi sites to improve their privacy and security may actually end up with less of both. In this work, we improve the state of DeFi by conducting the first measurement of the privacy and security properties of popular DeFi applications. We find that DeFi applications suffer from the same kinds of privacy and security risks that are common in other parts of the Web. For example, we find that one common tracker has the ability to record Ethereum addresses on over 56% of the websites analyzed. Further, we find that many trackers on DeFi sites can trivially link a user's Ethereum address with PII (e.g., name or demographic information) or phish users. This work also proposes remedies to the vulnerabilities we identify, in the form of improvements to the most common cryptocurrency wallet. Our wallet modification replaces the user's real Ethereum address with site-specific addresses, making it harder for DeFi sites and third parties to (i) learn the user's real address and (ii) track them across sites.
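As a toy sketch of the proposed wallet defence (not the authors' implementation), one could derive a deterministic, site-specific identifier from a wallet-held secret and the visiting site's origin; a real wallet would instead derive a distinct secp256k1 keypair per origin. The secret and origins below are hypothetical.

```python
# Toy per-site address derivation. Assumption: a master secret held inside the
# wallet; each site origin maps to a stable but unlinkable address-like value.
import hmac
import hashlib

MASTER_SECRET = b"wallet-master-secret"  # hypothetical, never leaves the wallet

def site_specific_address(origin: str) -> str:
    digest = hmac.new(MASTER_SECRET, origin.encode(), hashlib.sha256).digest()
    # Ethereum addresses are 20 bytes; take the last 20 bytes of the digest.
    return "0x" + digest[-20:].hex()

if __name__ == "__main__":
    for origin in ("https://app.uniswap.org", "https://curve.fi"):
        print(origin, "->", site_specific_address(origin))
```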
Cyber attacks are increasingly prevalent and cause significant damage to individuals, businesses and even countries. In particular, ransomware attacks have grown significantly over the last decade. We conduct the first study on mining insights about ransomware attacks by analyzing query logs from the Bing web search engine. We first extract ransomware-related queries and then build a machine learning model to identify queries where users are seeking support for ransomware attacks. We show that user search behavior and characteristics are correlated with ransomware attacks. We also analyse trends in the temporal and geographical space and validate our findings against publicly available information. Lastly, we present a case study on Nemty, a popular ransomware, to show that it is possible to derive accurate insights about cyber attacks through query log analysis.
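A minimal, hypothetical sketch of the query-classification step (toy queries, not Bing logs): a TF-IDF representation with a linear classifier separates support-seeking ransomware queries from purely informational ones.

```python
# Toy classifier for support-seeking ransomware queries.
# Assumption: hand-labeled example queries; the paper's model and features differ.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

queries = [
    "how to remove nemty ransomware and recover files",    # support-seeking
    "my files are encrypted with .nemty extension help",   # support-seeking
    "what is ransomware",                                   # informational
    "ransomware attack statistics 2020",                    # informational
]
labels = [1, 1, 0, 0]  # 1 = seeking support, 0 = not

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(queries, labels)

print(model.predict(["decrypt files locked by ransomware"]))
```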
Haoliang Li (2020)
Deep neural networks (DNNs) have shown great success in many computer vision applications. However, they are also known to be susceptible to backdoor attacks. When conducting backdoor attacks, most of the existing approaches assume that the targeted DNN is always available, and that an attacker can always inject a specific pattern into the training data to further fine-tune the DNN model. However, in practice, such an attack may not be feasible as the DNN model is encrypted and only available to the secure enclave. In this paper, we propose a novel black-box backdoor attack technique on face recognition systems, which can be conducted without knowledge of the targeted DNN model. Specifically, we propose a backdoor attack with a novel color stripe pattern trigger, which can be generated by modulating an LED with a specialized waveform. We also use an evolutionary computing strategy to optimize the waveform for the backdoor attack. Our backdoor attack can be conducted under very mild conditions: 1) the adversary cannot manipulate the input in an unnatural way (e.g., injecting adversarial noise); 2) the adversary cannot access the training database; 3) the adversary has no knowledge of the training model or the training set used by the victim party. We show that the backdoor trigger can be quite effective, with an attack success rate of up to 88% in our simulation study and up to 40% in our physical-domain study, considering the task of face recognition and verification with at most three attempts during authentication. Finally, we evaluate several state-of-the-art potential defenses against backdoor attacks and find that our attack can still be effective. We highlight that our study reveals a new physical backdoor attack, which calls attention to the security of existing face recognition/verification techniques.
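As a crude, hypothetical simulation of the trigger (the paper generates it physically by modulating an LED and optimizes the waveform with an evolutionary strategy), one can approximate the color stripes as a per-row sinusoidal color modulation over the image:

```python
# Crude simulation of LED-induced color stripes on a face image.
# Assumption: the physical stripes can be approximated as per-row sinusoids;
# the real trigger depends on the camera's rolling shutter and the LED waveform.
import numpy as np

def add_color_stripes(image: np.ndarray, period: int = 8, amplitude: float = 0.1) -> np.ndarray:
    """Overlay horizontal color stripes on an HxWx3 image with values in [0, 1]."""
    h = image.shape[0]
    rows = np.arange(h)
    # Out-of-phase sinusoids per channel give alternating color bands.
    stripes = np.stack([
        np.sin(2 * np.pi * rows / period + phase)
        for phase in (0.0, 2.0, 4.0)
    ], axis=1)  # shape (H, 3)
    return np.clip(image + amplitude * stripes[:, None, :], 0.0, 1.0)

if __name__ == "__main__":
    face = np.random.rand(64, 64, 3)
    print(add_color_stripes(face).shape)
```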
In this paper we aim to compare Kurepa trees and Aronszajn trees. Moreover, we analyze the effect of large cardinal assumptions on this comparison. Using the method of walks on ordinals, we show that, assuming the existence of an inaccessible cardinal, it is consistent with ZFC that there is a Kurepa tree and every Kurepa tree contains a Souslin subtree. This is stronger than Komjáth's theorem, which asserts the same consistency from two inaccessible cardinals. We show that our large cardinal assumption is optimal, i.e. if every Kurepa tree has an Aronszajn subtree then $\omega_2$ is inaccessible in the constructible universe $\mathrm{L}$. Moreover, we prove it is consistent with ZFC that there is a Kurepa tree $T$ such that if $U \subset T$ is a Kurepa tree with the order inherited from $T$, then $U$ has an Aronszajn subtree. This theorem uses no large cardinal assumption. Our last theorem immediately implies the following: assume $\mathrm{MA}_{\omega_2}$ holds and $\omega_2$ is not a Mahlo cardinal in $\mathrm{L}$. Then there is a Kurepa tree with the property that every Kurepa subset has an Aronszajn subtree. Our work entails proving a new lemma about Todorcevic's $\rho$ function, which might be useful in other contexts.
