We present the design and design rationale for the user interfaces for Privacy Enhancements for Android (PE for Android). These UIs are built around two core ideas: developers should explicitly declare the purpose for which sensitive data is used, and these permission-purpose pairs should be separated into first-party and third-party uses. We also present a taxonomy of purposes and ways these ideas can be deployed in the existing Android ecosystem.
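As a rough illustration of the permission-purpose idea, the sketch below models permission declarations paired with a purpose and split by first-party versus third-party use. This is a hypothetical Kotlin data model for illustration only, not the actual PE for Android API; all names are invented.

```kotlin
// Hypothetical sketch (not the PE for Android API): permission-purpose pairs
// grouped into first-party and third-party uses for display in a settings UI.
data class PurposeDeclaration(
    val permission: String,        // e.g. android.permission.ACCESS_FINE_LOCATION
    val purpose: String,           // entry from a purpose taxonomy, e.g. "Nearby search"
    val thirdParty: Boolean,       // true if the data leaves the app (ads, analytics, ...)
    val recipient: String? = null  // library or service receiving the data, if third-party
)

fun groupForSettingsUi(declarations: List<PurposeDeclaration>) =
    declarations.groupBy { it.thirdParty }  // first-party and third-party shown separately

fun main() {
    val decls = listOf(
        PurposeDeclaration("android.permission.ACCESS_FINE_LOCATION", "Nearby search", thirdParty = false),
        PurposeDeclaration("android.permission.ACCESS_FINE_LOCATION", "Advertising", thirdParty = true, recipient = "ads-sdk")
    )
    groupForSettingsUi(decls).forEach { (thirdParty, group) ->
        println(if (thirdParty) "Third-party uses:" else "First-party uses:")
        group.forEach { println("  ${it.permission} -> ${it.purpose}") }
    }
}
```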
Mobile applications (hereafter, apps) collect a plethora of information about user behavior and the user's device through third-party analytics libraries. However, the collection and use of such data raise several privacy concerns, mainly because the end user, i.e., the actual owner of the data, is out of the loop in the collection process. Moreover, the privacy-enhancing solutions that have emerged in recent years follow an all-or-nothing approach, leaving the user only the option to accept or completely deny access to privacy-related data. This work has the twofold objective of assessing the privacy implications of the use of analytics libraries in mobile apps and proposing a data anonymization methodology that enables a trade-off between the utility and privacy of the collected data and gives the user complete control over the sharing process. To this end, we present an empirical privacy assessment of the analytics libraries contained in the 4,500 most-used Android apps on the Google Play Store between November 2020 and January 2021. We then propose an anonymization methodology, based on MobHide, that empowers the end user with complete control over the collection and anonymization process. Finally, we empirically demonstrate the applicability and effectiveness of this anonymization methodology through HideDroid, a fully fledged anonymization app for the Android ecosystem.
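To make the user-controlled utility/privacy trade-off concrete, the following sketch shows one plausible way a per-app privacy level could drive event handling: forward as-is, strip quasi-identifiers, or suppress the analytics event entirely. Field names, thresholds, and the strategy are illustrative assumptions, not HideDroid's or MobHide's actual pipeline.

```kotlin
// Hypothetical sketch of per-app, user-controlled anonymization of analytics events.
data class AnalyticsEvent(val name: String, val fields: Map<String, String>)

fun anonymize(event: AnalyticsEvent, privacyLevel: Double): AnalyticsEvent? = when {
    privacyLevel >= 0.99 -> null                       // maximum privacy: drop the event entirely
    privacyLevel >= 0.5  -> event.copy(                // medium: strip quasi-identifiers
        fields = event.fields.filterKeys { it !in setOf("device_id", "ad_id", "geo") }
    )
    else -> event                                      // low privacy level: forward as-is (maximum utility)
}

fun main() {
    val e = AnalyticsEvent("screen_view", mapOf("screen" to "checkout", "ad_id" to "1234-abcd"))
    println(anonymize(e, 0.7))  // quasi-identifiers removed
    println(anonymize(e, 1.0))  // null: event suppressed
}
```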
Access to privacy-sensitive information on Android is a growing concern in the mobile community. Although Google Play has recently introduced some privacy guidelines, soundly verifying whether apps actually comply with such rules remains an open problem. To this end, in this paper we discuss a novel methodology based on a combination of static analysis, dynamic analysis, and machine learning techniques that allows assessing such compliance. In more detail, our methodology checks whether each app i) contains a privacy policy that complies with the Google Play privacy guidelines, and ii) accesses privacy-sensitive information only after the user has accepted the policy. Furthermore, the methodology also allows checking the compliance of third-party libraries embedded in the apps with the same privacy guidelines. We implemented our methodology in a tool, 3PDroid, and carried out an assessment on a set of recent, most-downloaded Android apps in the Google Play Store. Experimental results suggest that more than 95% of apps access users' privacy-sensitive information, but only a negligible subset of them (around 1%) fully complies with the Google Play privacy guidelines.
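A minimal sketch of the compliance decision described above (names and types are hypothetical, not 3PDroid's implementation): an app passes only if its privacy policy is judged compliant and every sensitive access observed at runtime happens after the user accepted that policy.

```kotlin
// Illustrative compliance check combining the policy verdict (ML step) with
// runtime observations (dynamic analysis); all identifiers are invented.
data class SensitiveAccess(val api: String, val timestampMs: Long)

fun isCompliant(
    policyCompliesWithGuidelines: Boolean,  // outcome of the privacy-policy classifier
    policyAcceptedAtMs: Long?,              // when the user accepted the policy, if ever
    accesses: List<SensitiveAccess>         // sensitive accesses observed at runtime
): Boolean =
    policyCompliesWithGuidelines &&
    policyAcceptedAtMs != null &&
    accesses.all { it.timestampMs >= policyAcceptedAtMs }
```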
The environmental understanding capabilities of $\textit{augmented}$ (AR) and $\textit{mixed reality}$ (MR) devices are continuously improving through advances in sensing, computer vision, and machine learning. Various AR/MR applications demonstrate such capabilities, i.e., scanning a space using a handheld or head-mounted device and capturing a digital representation of the space that is an accurate copy of the real space. However, these capabilities pose privacy risks to users: personally identifiable information can leak from captured 3D maps of sensitive spaces and/or from sensitive objects captured within the mapped space. Thus, in this work, we demonstrate how we can leverage 3D object regeneration to preserve the privacy of 3D point clouds. That is, we employ an intermediary layer of protection that transforms the 3D point cloud before providing it to third-party applications. Specifically, we use an existing adversarial autoencoder to generate copies of 3D objects whose likeness to the original can be varied. To test the viability and performance of this method as a privacy-preserving mechanism, we use a 3D classifier to classify and identify these transformed point clouds, i.e., perform $\textit{super}$-class and $\textit{intra}$-class classification. To measure the performance of the proposed privacy framework, we define privacy, $\Pi\in[0,1]$, and utility, $Q\in[0,1]$, metrics, both of which are desired to be maximized. Experimental evaluation shows that the privacy framework can indeed variably affect the privacy of a 3D object by varying the privilege level $l\in[0,1]$: if a low $l<0.17$ is maintained, $\Pi_1,\Pi_2>0.4$ is ensured, where $\Pi_1$ and $\Pi_2$ are the super- and intra-class privacy. Lastly, the privacy framework can ensure relatively high intra-class privacy and utility, i.e., $\Pi_2>0.63$ and $Q>0.70$, if the privilege level is kept within the range $0.17<l<0.25$.
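As a hedged illustration of how a classification-based privacy metric could be instantiated (our assumption; the paper's exact definitions of $\Pi_1$, $\Pi_2$, and $Q$ may differ), the sketch below takes privacy as the fraction of transformed point clouds that a 3D classifier fails to identify.

```kotlin
// Assumed instantiation for illustration only: privacy = misclassification rate
// of a 3D classifier over transformed (regenerated) point clouds.
data class ClassificationResult(val trueLabel: String, val predictedLabel: String)

fun privacyScore(results: List<ClassificationResult>): Double =
    results.count { it.trueLabel != it.predictedLabel }.toDouble() / results.size

fun main() {
    // Toy intra-class results: specific object instances within the "chair"/"desk" classes.
    val intraClass = listOf(
        ClassificationResult("chair_07", "chair_02"),
        ClassificationResult("chair_07", "chair_07"),
        ClassificationResult("desk_01", "desk_05")
    )
    println("Pi_2 (intra-class privacy) = ${privacyScore(intraClass)}")  // 2/3, roughly 0.67
}
```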
Android unlock patterns remain quite common. Our study, like others, finds that roughly 25% of respondents use a pattern when unlocking their phone. Despite known security issues, the design of the pattern interface has remained unchanged since its first launch. We propose Double Patterns, a natural and easily adoptable advancement on Android unlock patterns that maintains the core design features, but instead of selecting a single pattern, a user selects two concurrent Android unlock patterns, entered one after the other and superimposed on the same 3x3 grid. We evaluated Double Patterns for both security and usability by conducting an online study with $n=634$ participants in three treatments: a control treatment, a first-pattern-entry blocklist, and a blocklist for both patterns. We find that in all settings, user-chosen Double Patterns are more secure than traditional patterns based on standard guessability metrics, with security more similar to that of 4-/6-digit PINs, and even more difficult to guess for a simulated attacker. Users express positive sentiments in qualitative feedback, particularly those who currently (or previously) used Android unlock patterns, and overall, participants found the Double Pattern interface quite usable, with high recall retention and entry times comparable to traditional patterns. In particular, current Android pattern users, the target population for Double Patterns, reported SUS scores in the 80th percentile and high perceptions of security and usability in responses to open- and closed-ended questions. Based on these findings, we recommend adding Double Patterns as an advancement to Android patterns, much like allowing longer PINs.
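For concreteness, here is a minimal sketch of how a Double Pattern could be represented and validated on the 3x3 grid (cells numbered 0-8). The validity rules are simplified assumptions for illustration; in particular, the full Android rules about passing through intermediate dots are omitted, and this is not the study's actual implementation.

```kotlin
// Hypothetical representation: a Double Pattern is two single patterns entered
// one after the other and displayed superimposed on the same 3x3 grid.
data class DoublePattern(val first: List<Int>, val second: List<Int>)

fun isValidSingle(pattern: List<Int>): Boolean =
    pattern.size in 4..9 &&                  // standard Android length bounds
    pattern.all { it in 0..8 } &&            // cells on the 3x3 grid
    pattern.distinct().size == pattern.size  // no repeated cells within one pattern

fun isValid(dp: DoublePattern): Boolean =
    // Simplified assumption: both sub-patterns valid and not identical to each other.
    isValidSingle(dp.first) && isValidSingle(dp.second) && dp.first != dp.second

fun main() {
    val dp = DoublePattern(first = listOf(0, 1, 2, 5), second = listOf(6, 3, 0, 4))
    println(isValid(dp))  // true: both patterns valid and distinct
}
```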
In response to the Covid-19 pandemic, educational institutions quickly transitioned to remote learning. The problem of how to perform student assessment in an online environment has become increasingly relevant, leading many institutions and educators to turn to online proctoring services to administer remote exams. These services employ various student monitoring methods to curb cheating, including restricted (lockdown) browser modes, video/screen monitoring, local network traffic analysis, and eye tracking. In this paper, we explore the security and privacy perceptions of the student test-takers being proctored. We analyze user reviews of proctoring services' browser extensions and subsequently perform an online survey (n=102). Our findings indicate that participants are concerned about both the amount and the personal nature of the information shared with the exam proctoring companies. However, many participants also recognize a trade-off between pandemic safety concerns and the arguably invasive means by which proctoring services ensure exam integrity. Our findings also suggest that institutional power dynamics and students' trust in their institutions may dampen students' opposition to remote proctoring.