
Towards Understanding and Demystifying Bitcoin Mixing Services

Posted by: Yajin Zhou
Publication date: 2020
Research field: Informatics Engineering
Paper language: English


One reason for Bitcoin's popularity is its anonymity. Although several heuristics have been used to break this anonymity, new approaches are proposed to enhance it at the same time. One of them is the mixing service. Unfortunately, mixing services have been abused to facilitate criminal activities, e.g., money laundering. As such, there is an urgent need to systematically understand Bitcoin mixing services. In this paper, we take the first step towards understanding state-of-the-art Bitcoin mixing services. Specifically, we propose a generic abstraction model for mixing services and observe that there are two mixing mechanisms in the wild, i.e., swapping and obfuscating. Based on this model, we conduct a transaction-based analysis and successfully reveal the mixing mechanisms of four representative services. In addition, we propose a method to identify mixing transactions that leverage the obfuscating mechanism. The proposed approach is able to identify over 92% of the mixing transactions. Based on the identified transactions, we then estimate the profit of mixing services and provide a case study of tracing the money flow of stolen Bitcoins.
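As a concrete illustration of how an obfuscating pattern might be detected on-chain, the sketch below flags transactions that fan a coin out into many near-identical outputs. This is a hypothetical heuristic under assumed thresholds, not the identification method the paper evaluates.

```python
# Hypothetical sketch: flagging obfuscating-style mixing transactions.
# The paper's actual features and thresholds are not given in the abstract;
# the fan-out / uniform-denomination heuristic below is an illustrative guess.
from dataclasses import dataclass

@dataclass
class Tx:
    inputs: list[float]   # input values in BTC
    outputs: list[float]  # output values in BTC

def looks_like_obfuscating_mix(tx: Tx,
                               min_fanout: int = 5,
                               max_value_spread: float = 1e-4) -> bool:
    """Flag transactions with many near-identical outputs, a pattern
    often associated with mixers that split coins into uniform denominations."""
    if len(tx.outputs) < min_fanout:
        return False
    spread = max(tx.outputs) - min(tx.outputs)
    return spread <= max_value_spread

# Example: one input split into eight equal 0.1 BTC outputs.
print(looks_like_obfuscating_mix(Tx(inputs=[0.81], outputs=[0.1] * 8)))  # True
```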




Read also

Membership inference attacks seek to infer membership of individual training instances of a model to which an adversary has black-box access through a machine learning-as-a-service API. In providing an in-depth characterization of membership privacy risks against machine learning models, this paper presents a comprehensive study towards demystifying membership inference attacks from two complementary perspectives. First, we provide a generalized formulation of the development of a black-box membership inference attack model. Second, we characterize the importance of model choice on model vulnerability through a systematic evaluation of a variety of machine learning models and model combinations using multiple datasets. Through formal analysis and empirical evidence from extensive experimentation, we characterize under what conditions a model may be vulnerable to such black-box membership inference attacks. We show that membership inference vulnerability is data-driven and that the corresponding attack models are largely transferable. Just as different model types display different vulnerabilities to membership inference, so do different datasets. Our empirical results additionally show that (1) using the type of target model under attack within the attack model may not increase attack effectiveness and (2) collaborative learning exposes vulnerabilities to membership inference risks when the adversary is a participant. We also discuss countermeasures and mitigation strategies.
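To make the generalized black-box formulation concrete, here is a minimal sketch of the classic shadow-model attack: train a shadow model on data with known membership, then train an attack classifier on its confidence vectors. The synthetic data, model choices, and hyperparameters are illustrative assumptions, not the paper's experimental setup.

```python
# Minimal sketch of a black-box membership inference attack via a shadow
# model. All data and model choices below are illustrative placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(4000, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Shadow model: trained on data the adversary controls, so membership
# labels for its training/holdout split are known.
X_in, X_out, y_in, y_out = train_test_split(X, y, test_size=0.5, random_state=0)
shadow = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_in, y_in)

# Attack features: the shadow model's confidence vectors; label = member?
# (Per-class attack models are a common refinement, omitted for brevity.)
feats = np.vstack([shadow.predict_proba(X_in), shadow.predict_proba(X_out)])
member = np.concatenate([np.ones(len(X_in)), np.zeros(len(X_out))])
attack = LogisticRegression().fit(feats, member)

# Against a real target, the adversary queries the service API for
# confidence vectors and feeds them to `attack`; transferability of
# such attack models is one of the paper's findings.
```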
Adam J. Aviv, Ravi Kuber, 2018
In this study, we examine the ways in which user attitudes towards privacy and security relating to mobile devices and the data stored thereon may impact the strength of unlock authentication, focusing on Android's graphical unlock patterns. We conducted an online study with Amazon Mechanical Turk ($N=750$) using self-reported unlock authentication choices, as well as Likert scale agreement/disagreement responses to a set of seven privacy/security prompts. We then analyzed the responses in multiple dimensions, including a straight average of the Likert responses as well as using Principal Component Analysis to expose latent factors. We found that responses to two of the seven questions proved relevant and significant. These two questions considered attitudes towards general concern for data stored on mobile devices, and attitudes towards concerns for unauthorized access by known actors. Unfortunately, larger conclusions cannot be drawn on the efficacy of the broader set of questions for exposing connections between privacy/security attitudes and unlock authentication strength (Pearson rank $r=-0.08$, $p<0.1$). However, both of our factor solutions exposed differences in responses across demographic groups, including age, gender, and residence type. The findings of this study suggest that there is likely a link between perceptions of privacy/security on mobile devices and the perceived threats therein, but more research is needed, particularly on developing better survey and measurement techniques for privacy/security attitudes that relate to mobile devices specifically.
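For readers unfamiliar with the analysis pipeline, the sketch below reproduces its two ingredients, a straight average of Likert responses and Principal Component Analysis to expose latent factors, on synthetic placeholder responses; the study's actual data and prompts are not reproduced here.

```python
# Sketch of the two analyses the study describes. The seven prompts and
# all responses here are synthetic placeholders, not the study's data.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
# 750 participants x 7 privacy/security prompts, Likert scale 1..5.
likert = rng.integers(1, 6, size=(750, 7))

straight_avg = likert.mean(axis=1)    # one aggregate score per participant

pca = PCA(n_components=2)
factors = pca.fit_transform(likert.astype(float))  # latent factor scores
print(pca.explained_variance_ratio_)  # variance captured by each factor
```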
We focus on the problem of botnet orchestration and discuss how attackers can leverage decentralised technologies to dynamically control botnets, with the goal of building botnets that are resilient against hostile takeovers. We cover critical elements of the Bitcoin blockchain and its usage for 'floating' command and control servers. We further discuss how blockchain-based botnets can be built and include a detailed discussion of our implementation. We also showcase how specific Bitcoin APIs can be used to write extraneous data to the blockchain. Finally, while in this paper we use Bitcoin to build our resilient botnet proof of concept, the threat is not limited to the Bitcoin blockchain and can be generalized.
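One well-known way to write small arbitrary payloads to the Bitcoin blockchain is an OP_RETURN output; the sketch below shows the script-level encoding only. The abstract does not name the specific APIs the authors used, so treat this as an assumed example of the general technique, not their implementation.

```python
# Illustrative sketch of embedding arbitrary bytes in a Bitcoin output
# script via OP_RETURN. Transaction construction, fee handling, and
# signing are omitted; this is not the paper's implementation.
OP_RETURN = 0x6a

def op_return_script(payload: bytes) -> bytes:
    """Build an unspendable OP_RETURN output script carrying `payload`.
    A direct push opcode covers payloads up to 75 bytes; larger (up to
    the ~80-byte standardness limit) would need OP_PUSHDATA1."""
    if len(payload) > 75:
        raise ValueError("use OP_PUSHDATA1 for payloads over 75 bytes")
    # OP_RETURN, then a single push opcode (the length byte) and the data.
    return bytes([OP_RETURN, len(payload)]) + payload

script = op_return_script(b"C2:update-peers")  # hypothetical C&C message
print(script.hex())  # 6a0f43323a7570646174652d7065657273
```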
Future communication networks such as 5G are expected to support end-to-end delivery of services for several vertical markets with diverging requirements. Network slicing is a key construct that is used to provide mutually isolated, end-to-end logical virtual networks running on a common virtualised infrastructure. Having different network slices operating over the same 5G infrastructure creates several challenges in security and trust. This paper addresses the fundamental issue of trust in a network slice. It presents a trust model and property-based trust attestation mechanisms which can be used to evaluate the trust of the virtual network functions that compose the network slice. The proposed model helps to determine the trust of the virtual network functions as well as the properties that should be satisfied by the virtual platforms (both at boot and run time) on which these network functions are deployed for them to be trusted. We present a logic-based language that defines simple rules for the specification of properties and the conditions under which these properties are evaluated to be satisfied for trusted virtualised platforms. The proposed trust model and mechanisms enable service providers to determine the trustworthiness of the network services, as well as users to develop trustworthy applications.
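The abstract does not give the concrete syntax of the logic-based language, so the following is a loose, hypothetical sketch of rule-based property evaluation in the same spirit: a platform is trusted only if every specified property rule is satisfied.

```python
# Hypothetical sketch of rule-based property attestation; the property
# names and rules below are assumptions, not the paper's language.
from typing import Callable

Properties = dict[str, bool]          # attested facts about a platform
Rule = Callable[[Properties], bool]   # a property rule over those facts

# Illustrative rules over boot-time and run-time attested properties
# of a virtualised platform hosting virtual network functions.
rules: dict[str, Rule] = {
    "measured_boot": lambda p: p.get("secure_boot", False)
                               and p.get("tpm_quote_valid", False),
    "runtime_integrity": lambda p: p.get("ima_appraisal", False),
    "isolation": lambda p: p.get("slice_isolated", False),
}

def platform_trusted(props: Properties) -> bool:
    """A platform is trusted iff all specified rules evaluate to true."""
    return all(rule(props) for rule in rules.values())

print(platform_trusted({"secure_boot": True, "tpm_quote_valid": True,
                        "ima_appraisal": True, "slice_isolated": True}))  # True
```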
The appeal of serverless (FaaS) has triggered a growing interest in how to use it in data-intensive applications such as ETL, query processing, or machine learning (ML). Several systems exist for training large-scale ML models on top of serverless infrastructures (e.g., AWS Lambda), but with inconclusive results in terms of their performance and relative advantage over serverful infrastructures (IaaS). In this paper we present a systematic, comparative study of distributed ML training over FaaS and IaaS. We present a design space covering design choices such as optimization algorithms and synchronization protocols, and implement a platform, LambdaML, that enables a fair comparison between FaaS and IaaS. We present experimental results using LambdaML, and further develop an analytic model to capture cost/performance tradeoffs that must be considered when opting for a serverless infrastructure. Our results indicate that ML training pays off in serverless only for models that communicate efficiently (i.e., with reduced traffic) and converge quickly. In general, FaaS can be much faster, but it is never significantly cheaper than IaaS.
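A toy version of the kind of analytic cost model described, comparing per-GB-second serverless billing against per-instance-hour serverful billing, is sketched below; all prices and runtimes are placeholder assumptions, not figures from the paper.

```python
# Toy cost comparison in the spirit of the paper's analytic model.
# All prices, memory sizes, and runtimes are placeholder assumptions.
def faas_cost(runtime_s: float, workers: int,
              gb_s_price: float = 0.0000167, mem_gb: float = 3.0) -> float:
    # Serverless bills per GB-second across all concurrent invocations.
    return runtime_s * workers * mem_gb * gb_s_price

def iaas_cost(runtime_s: float, instances: int,
              hourly_price: float = 0.40) -> float:
    # Serverful bills per instance-hour, regardless of utilisation.
    return (runtime_s / 3600) * instances * hourly_price

# FaaS may finish faster (more parallelism) yet still cost more:
print(faas_cost(runtime_s=600, workers=100))   # ~ $3.01
print(iaas_cost(runtime_s=1800, instances=4))  # ~ $0.80
```

Even with the serverless run finishing three times faster here, per-invocation memory billing across many workers makes it costlier, which mirrors the paper's "faster but not cheaper" conclusion.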