
User configurable 3D object regeneration for spatial privacy

Published by: Arpit Nama
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





The environmental understanding capabilities of $\textit{augmented}$ (AR) and $\textit{mixed reality}$ (MR) devices are continuously improving through advances in sensing, computer vision, and machine learning. Various AR/MR applications demonstrate such capabilities, e.g., scanning a space using a handheld or head-mounted device and capturing a digital representation of the space that is an accurate copy of the real space. However, these capabilities pose privacy risks to users: personally identifiable information can leak from captured 3D maps of sensitive spaces and/or captured sensitive objects within the mapped space. Thus, in this work, we demonstrate how 3D object regeneration can be leveraged to preserve the privacy of 3D point clouds. That is, we employ an intermediary layer of protection to transform the 3D point cloud before providing it to third-party applications. Specifically, we use an existing adversarial autoencoder to generate copies of 3D objects in which the likeness of the copies to the original can be varied. To test the viability and performance of this method as a privacy-preserving mechanism, we use a 3D classifier to classify and identify the transformed point clouds, i.e., to perform $\textit{super}$-class and $\textit{intra}$-class classification. To measure the performance of the proposed privacy framework, we define a privacy metric, $\Pi\in[0,1]$, and a utility metric, $Q\in[0,1]$, both of which are desired to be maximized. Experimental evaluation shows that the privacy framework can indeed variably affect the privacy of a 3D object by varying the privilege level $l\in[0,1]$: if a low $l<0.17$ is maintained, $\Pi_1,\Pi_2>0.4$ is ensured, where $\Pi_1$ and $\Pi_2$ are the super- and intra-class privacy. Lastly, the privacy framework can ensure relatively high intra-class privacy and utility, i.e. $\Pi_2>0.63$ and $Q>0.70$, if the privilege level is kept within the range $0.17<l<0.25$.
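
To make the pipeline concrete, the following is a minimal sketch of the intermediary protection layer: a point cloud is regenerated with a likeness controlled by the privilege level $l$, and privacy $\Pi$ and utility $Q$ are then estimated from the released copy. The regeneration function, the toy classifier, and the specific metric formulas below are illustrative stand-ins, not the paper's adversarial-autoencoder implementation or its exact metric definitions.

```python
# Minimal sketch of the intermediary protection layer described above.
# The regeneration function, the toy classifier, and the definitions of the
# privacy (Pi) and utility (Q) metrics are illustrative stand-ins, not the
# paper's adversarial-autoencoder implementation.
import numpy as np

rng = np.random.default_rng(0)

def regenerate(points: np.ndarray, privilege: float) -> np.ndarray:
    """Stand-in for autoencoder-based regeneration: the privilege level
    l in [0, 1] controls how closely the released copy resembles the input.
    Here we simply blend the original points with a generic random blob; the
    real framework decodes a latent code whose likeness to the input varies."""
    generic = rng.normal(scale=points.std(), size=points.shape)
    return privilege * points + (1.0 - privilege) * generic

def privacy_and_utility(original, released, classify):
    """Hypothetical metrics in [0, 1]: privacy Pi is 1 when the classifier no
    longer assigns the released copy the same label as the original, and
    utility Q decays with the distance between the two point clouds."""
    pi = 0.0 if classify(released) == classify(original) else 1.0
    q = 1.0 / (1.0 + np.linalg.norm(original - released) / len(original))
    return pi, q

# Toy usage with a fake point cloud and a trivial placeholder "3D classifier".
cloud = rng.normal(size=(1024, 3))

def toy_classifier(pts):
    return int(pts.mean() > 0)

for l in (0.1, 0.2, 0.5):
    released = regenerate(cloud, l)
    print(l, privacy_and_utility(cloud, released, toy_classifier))
```
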




Read also

We present the design and design rationale for the user interfaces for Privacy Enhancements for Android (PE for Android). These UIs are built around two core ideas: developers should explicitly declare the purpose for which sensitive data is being used, and these permission-purpose pairs should be split by first-party and third-party uses. We also present a taxonomy of purposes and ways in which these ideas can be deployed in the existing Android ecosystem.
Recent work has demonstrated that by monitoring the Real Time Bidding (RTB) protocol, one can estimate the monetary worth of different users for the programmatic advertising ecosystem, even when the so-called winning bids are encrypted. In this paper we describe how to implement these techniques in a practical and privacy-preserving manner. Specifically, we study the privacy consequences of reporting back to a centralized server the features that are necessary for estimating the value of encrypted winning bids. We show that by appropriately modulating the granularity of the necessary information and by scrambling the communication channel to the server, one can increase the privacy performance of the system in terms of K-anonymity. We have implemented the above ideas in a browser extension and disseminated it to some 200 users. Analyzing the results from 6 months of deployment, we show that the average value of users for the programmatic advertising ecosystem has grown by more than 75% in the last 3 years.
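
As a hedged illustration of the granularity idea in the abstract above (the bucket sizes, toy values, and helper names are hypothetical, not the paper's implementation), coarsening per-user value estimates before reporting them enlarges the anonymity sets:

```python
# Illustrative sketch: reduce the granularity of values reported to a central
# server so that each reported value is shared by more users (K-anonymity).
from collections import Counter

def coarsen(value_cpm: float, bucket: float) -> float:
    """Report only the bucket a winning-bid value estimate falls into."""
    return round(value_cpm / bucket) * bucket

def k_anonymity(reports: list) -> int:
    """K-anonymity of a batch of reports = size of the smallest group of
    identical reports (higher is better)."""
    return min(Counter(reports).values())

raw = [0.41, 0.43, 0.97, 1.02, 0.40, 1.05, 0.44, 0.99]  # toy per-user estimates
fine = [coarsen(v, 0.01) for v in raw]
coarse = [coarsen(v, 0.50) for v in raw]
print(k_anonymity(fine), k_anonymity(coarse))  # coarser buckets -> larger anonymity sets
```
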
The exponential growth of mobile devices has raised concerns about sensitive data leakage. In this paper, we make the first attempt to identify suspicious location-related HTTP transmission flows from the user's perspective, by answering the question: is the transmission user-intended? In contrast to previous network-level detection schemes that mainly rely on a given set of suspicious hostnames, our approach can better adapt to the fast growth of the app market and the constantly evolving leakage patterns. On the other hand, compared to existing system-level detection schemes built upon program taint analysis, where all sensitive transmissions are treated as illegal, our approach better meets user needs and is easier to deploy. In particular, our proof-of-concept implementation (FlowIntent) captures sensitive transmissions missed by TaintDroid, the state-of-the-art dynamic taint analysis system on Android platforms. Evaluation using 1002 location-sharing instances collected from more than 20,000 apps shows that our approach achieves about 91% accuracy in detecting illegitimate location transmissions.
We show that aggregated model updates in federated learning may be insecure. An untrusted central server may disaggregate user updates from sums of updates across participants given repeated observations, enabling the server to recover privileged information about individual users' private training data via traditional gradient inference attacks. Our method revolves around reconstructing participant information (e.g., which rounds of training users participated in) from aggregated model updates by leveraging summary information from device analytics commonly used to monitor, debug, and manage federated learning systems. Our attack is parallelizable, and we successfully disaggregate user updates in settings with up to thousands of participants. We quantitatively and qualitatively demonstrate significant improvements in the capability of various inference attacks on the disaggregated updates. Our attack enables the attribution of learned properties to individual users, violating anonymity, and shows that a determined central server may undermine the secure aggregation protocol to break individual users' data privacy in federated learning.
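
A toy sketch of the disaggregation idea (not the paper's actual attack): if the server also learns which users participated in each round, repeated aggregate observations form a linear system that can be solved for individual contributions. The assumption that each user's update stays fixed across rounds is made here only to keep the example short; the real attack is more general.

```python
# Illustrative sketch: recover per-user contributions from summed updates when
# per-round participation is known, assuming (for simplicity) that each user's
# update is the same across the observed rounds.
import numpy as np

rng = np.random.default_rng(1)
num_users, dim, rounds = 5, 4, 12

true_updates = rng.normal(size=(num_users, dim))               # unknown to the server
participation = rng.integers(0, 2, size=(rounds, num_users))   # known from device analytics
aggregates = participation @ true_updates                      # what aggregation reveals

# Server-side recovery: least-squares solve of participation @ X = aggregates.
recovered, *_ = np.linalg.lstsq(participation, aggregates, rcond=None)
print(np.allclose(recovered, true_updates, atol=1e-8))         # True when the system is well-posed
```
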
Mobile applications (hereafter, apps) collect a plethora of information about user behavior and the user's device through third-party analytics libraries. However, the collection and usage of such data raise several privacy concerns, mainly because the end-user, i.e., the actual owner of the data, is out of the loop in this collection process. Also, the existing privacy-enhancing solutions that emerged in recent years follow an all-or-nothing approach, leaving the user the sole option of accepting or completely denying access to privacy-related data. This work has the two-fold objective of assessing the privacy implications of the usage of analytics libraries in mobile apps and proposing a data anonymization methodology that enables a trade-off between the utility and privacy of the collected data and gives the user complete control over the sharing process. To achieve this, we present an empirical privacy assessment of the analytics libraries contained in the 4500 most-used Android apps of the Google Play Store between November 2020 and January 2021. Then, we propose an empowered anonymization methodology, based on MobHide, that gives the end-user complete control over the collection and anonymization process. Finally, we empirically demonstrate the applicability and effectiveness of such an anonymization methodology thanks to HideDroid, a fully-fledged anonymization app for the Android ecosystem.
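
Purely as a hypothetical illustration of a user-controlled privacy/utility trade-off (this is not MobHide's or HideDroid's actual algorithm), a per-event filter might pass, coarsen, or drop analytics fields depending on the privacy level the user selects:

```python
# Hypothetical sketch of user-controlled, on-device anonymization of analytics
# events. Field handling and thresholds are illustrative assumptions only.
def anonymize_event(event: dict, level: float) -> dict:
    """level = 0 sends everything as-is; level = 1 sends almost nothing."""
    out = {}
    for key, value in event.items():
        if level >= 0.8:                                   # high privacy: drop the field
            continue
        if level >= 0.4 and isinstance(value, (int, float)):
            out[key] = round(value, 1)                     # medium privacy: coarsen numbers
        elif level >= 0.4 and isinstance(value, str):
            out[key] = value[:3] + "*"                     # medium privacy: truncate strings
        else:
            out[key] = value                               # low privacy: pass through
    return out

event = {"screen": "checkout", "session_s": 183.274, "os": "Android 11"}
print(anonymize_event(event, 0.2))
print(anonymize_event(event, 0.5))
print(anonymize_event(event, 0.9))
```
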