
Privacy Leakage in Mobile Computing: Tools, Methods, and Characteristics

Published by: Mr. Muhammad Haris Mughees
Publication date: 2014
Research field: Informatics Engineering
Language: English





The number of smartphones, tablets, sensors, and connected wearable devices is rapidly increasing. Today, in many parts of the globe, the penetration of mobile computers has overtaken the number of traditional personal computers. This trend and the always-on nature of these devices have resulted in increasing concerns over their intrusive nature and the privacy risks that they impose on users or those associated with them. In this paper, we survey the current state of the art in mobile computing research, focusing on privacy risks and data leakage effects. We then discuss a number of methods, recommendations, and ongoing research efforts for limiting the privacy leakage and associated risks posed by mobile computing.




Read also

Graph embeddings have been proposed to map graph data to a low-dimensional space for downstream processing (e.g., node classification or link prediction). With the increasing collection of personal data, graph embeddings can be trained on private and sensitive data. For the first time, we quantify the privacy leakage in graph embeddings through three inference attacks targeting Graph Neural Networks. We propose a membership inference attack to infer whether a graph node corresponding to an individual user's data was a member of the model's training data or not. We consider a black-box setting, where the adversary exploits the output prediction scores, and a white-box setting, where the adversary also has access to the released node embeddings. This attack achieves an accuracy of up to 28% (black-box) and 36% (white-box) beyond random guessing by exploiting the distinguishable footprint that the graph embedding leaves between train and test data records. We propose a graph reconstruction attack in which the adversary aims to reconstruct the target graph given the corresponding graph embeddings. Here, the adversary can reconstruct the graph with more than 80% accuracy and infer links between two nodes with around 30% more confidence than a random guess. We then propose an attribute inference attack in which the adversary aims to infer a sensitive attribute. We show that graph embeddings are strongly correlated with node attributes, letting the adversary infer sensitive information (e.g., gender or location).
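As a rough illustration of the black-box membership-inference setting described in this abstract, the sketch below thresholds the target model's prediction confidence to guess whether a node was in the training set. The function name, the fixed threshold, and the toy scores are illustrative assumptions; the paper's attack learns the train/test footprint rather than using a hand-picked cutoff.

```python
import numpy as np

def confidence_membership_attack(pred_scores, threshold=0.9):
    # pred_scores: (n_nodes, n_classes) softmax outputs of the target GNN.
    # Guess "member" when the top prediction confidence is high, exploiting
    # the tendency of models to be more confident on training records.
    top_confidence = pred_scores.max(axis=1)
    return top_confidence >= threshold

# Toy usage: confident rows are flagged as likely training members.
scores = np.array([[0.97, 0.02, 0.01],   # very confident -> likely member
                   [0.40, 0.35, 0.25]])  # uncertain      -> likely non-member
print(confidence_membership_attack(scores))  # [ True False]
```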
In a federated learning system, parameter gradients are shared among participants and the central server, while the original data never leave their protected source domain. However, the gradient itself might carry enough information for precise inference of the original data. By reporting their parameter gradients to the central server, clients expose their datasets to inference attacks from adversaries. In this paper, we propose a quantitative metric based on mutual information for clients to evaluate the potential risk of information leakage in their gradients. Mutual information has received increasing attention in the machine learning and data mining community over the past few years. However, existing mutual information estimation methods cannot handle high-dimensional variables. In this paper, we propose a novel method to approximate the mutual information between the high-dimensional gradients and batched input data. Experimental results show that the proposed metric reliably reflects the extent of information leakage in federated learning. In addition, using the proposed metric, we investigate the influential factors of risk level. We show that the risk of information leakage is related to the status of the task model as well as the inherent data distribution.
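A minimal sketch of the idea of measuring gradient leakage with mutual information follows. It reduces each gradient to a scalar summary and uses a crude histogram estimator, both of which are simplifying assumptions; the paper's contribution is precisely an estimator that handles the high-dimensional gradients themselves. All names and toy data are hypothetical.

```python
import numpy as np

def binned_mutual_information(x, y, bins=16):
    # Histogram estimate of I(X; Y) in nats for two 1-D variables.
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nonzero = pxy > 0
    return float(np.sum(pxy[nonzero] * np.log(pxy[nonzero] / (px @ py)[nonzero])))

# Toy usage: correlate per-batch gradient norms with a batch input statistic.
rng = np.random.default_rng(0)
batch_mean = rng.normal(size=1000)                               # stand-in input statistic
grad_norm = 2.0 * batch_mean + rng.normal(scale=0.5, size=1000)  # "leaky" gradient summary
print(binned_mutual_information(batch_mean, grad_norm))          # clearly above zero
```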
Federated learning enables mutually distrusting participants to collaboratively learn a distributed machine learning model without revealing anything but the model's output. Generic federated learning has been studied extensively, and several learning protocols, as well as open-source frameworks, have been developed. Yet, their pursuit of computing efficiency and fast implementation might diminish the security and privacy guarantees of participants' training data, about which little is known thus far. In this paper, we consider an honest-but-curious adversary who participates in training a distributed ML model, does not deviate from the defined learning protocol, but attempts to infer private training data from the legitimately received information. In this setting, we design and implement two practical attacks, the reverse sum attack and the reverse multiplication attack, neither of which affects the accuracy of the learned model. By empirically studying the privacy leakage of two learning protocols, we show that our attacks are (1) effective - the adversary successfully steals the private training data, even when the intermediate outputs are encrypted to protect data privacy; (2) evasive - the adversary's malicious behavior does not deviate from the protocol specification or degrade the accuracy of the target model; and (3) easy - the adversary needs little prior knowledge about the data distribution of the target participant. We also experimentally show that the leaked information is as effective as the raw training data by training an alternative classifier on the leaked information. We further discuss potential countermeasures and their challenges, which we hope may lead to several promising research directions.
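The final experiment mentioned above, training an alternative classifier on the leaked information and comparing it against one trained on the raw data, can be illustrated with a small sketch. The noisy copy standing in for the leaked reconstruction and the use of scikit-learn's LogisticRegression are assumptions for illustration; the reverse sum and reverse multiplication attacks themselves are not reproduced here.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Raw training data the adversary would like to obtain.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Stand-in for the "leaked" reconstruction: the raw features plus small noise.
X_leaked = X_train + np.random.default_rng(0).normal(scale=0.05, size=X_train.shape)

acc_raw = LogisticRegression(max_iter=1000).fit(X_train, y_train).score(X_test, y_test)
acc_leak = LogisticRegression(max_iter=1000).fit(X_leaked, y_train).score(X_test, y_test)
print(f"raw: {acc_raw:.3f}  leaked: {acc_leak:.3f}")  # near-identical accuracies
```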
Location-based queries enable fundamental services for mobile road network travelers. While the benefits of location-based services (LBS) are numerous, exposure of mobile travelers' location information to untrusted LBS providers may lead to privacy breaches. In this paper, we propose StarCloak, a utility-aware and attack-resilient approach to building a privacy-preserving query system for mobile users traveling on road networks. StarCloak has several desirable properties. First, StarCloak supports user-defined k-user anonymity and l-segment indistinguishability, along with user-specified spatial and temporal utility constraints, for utility-aware and personalized location privacy. Second, unlike conventional solutions which are indifferent to the underlying road network structure, StarCloak uses the concept of stars and proposes cloaking graphs for effective location cloaking on road networks. Third, StarCloak achieves strong attack-resilience against replay and query injection-based attacks through randomized star selection and pruning. Finally, to enable scalable query processing with high throughput, StarCloak makes cost-aware star selection decisions by considering query evaluation and network communication costs. We evaluate StarCloak on two real-world road network datasets under various privacy and utility constraints. Results show that StarCloak achieves improved query success rate and throughput, reduced anonymization time and network usage, and higher attack-resilience in comparison to XStar, its most relevant competitor.
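For intuition only, the following sketch shows generic k-user anonymity cloaking on road segments: segments are added to the cloaking set until at least k distinct users are covered. The greedy ordering, data layout, and names are hypothetical; StarCloak's star-based cloaking graphs, utility constraints, and randomized pruning are not modeled here.

```python
from collections import defaultdict

def cloak_segments(user_segment, query_user, k=5):
    # Grow a cloaking set of road segments until it covers at least k users.
    users_on = defaultdict(set)
    for user, seg in user_segment.items():
        users_on[seg].add(user)

    cloak, covered = set(), set()
    start = user_segment[query_user]
    # Greedily add segments, ordered by id distance as a stand-in for adjacency.
    for seg in sorted(users_on, key=lambda s: abs(s - start)):
        cloak.add(seg)
        covered |= users_on[seg]
        if len(covered) >= k:
            break
    return cloak

positions = {f"u{i}": i % 7 for i in range(20)}  # user -> segment id (toy data)
print(cloak_segments(positions, "u3", k=5))      # cloaking set hiding the true segment
```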
Powered by machine learning services in the cloud, numerous learning-driven mobile applications are gaining popularity in the market. As deep learning tasks are mostly computation-intensive, it has become a trend to process raw data on devices and send the deep neural network (DNN) features to the cloud, where the features are further processed to return final results. However, there is always unexpected leakage with the release of features, from which an adversary could infer a significant amount of information about the original data. We propose a privacy-preserving reinforcement learning framework on top of the mobile cloud infrastructure from the perspective of DNN structures. The framework aims to learn a policy that modifies the base DNNs to prevent information leakage while maintaining high inference accuracy. The policy can also be readily transferred to large-size DNNs to speed up learning. Extensive evaluations on a variety of DNNs have shown that our framework can successfully find privacy-preserving DNN structures to defend against different privacy attacks.
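A toy sketch of the split mobile-cloud inference setting described above: the device computes intermediate DNN features locally, and only those features are uploaded, which is exactly the tensor that can leak information. The two-layer toy network and all names are assumptions; the paper's reinforcement-learning policy for restructuring the base DNN is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy split DNN: the device runs the first (base) layer, the cloud runs the rest.
W1 = rng.normal(size=(32, 8))    # on-device layer
W2 = rng.normal(size=(8, 3))     # cloud-side layer

def device_features(x):
    # Runs on the phone: raw data stays local, only features leave the device.
    return np.maximum(x @ W1, 0.0)          # ReLU features

def cloud_inference(features):
    # Runs in the cloud: finishes the forward pass on the received features.
    logits = features @ W2
    return logits.argmax(axis=-1)

x = rng.normal(size=(4, 32))                # raw sensor/image data (never uploaded)
feats = device_features(x)                  # the released, leak-prone tensor
print(cloud_inference(feats))
```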