
Apps Auto-Login Function Security Testing via Android OS-Level Virtualization

Posted by: Wenna Song
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





Limited by the small keyboard, most mobile apps support an automatic login feature for a better user experience, so users avoid the inconvenience of retyping their ID and password when an app runs in the foreground again. However, this auto-login function can be exploited to launch the so-called data-clone attack: once the locally stored data that auto-login depends on are cloned by attackers and placed into their own smartphones, attackers can break through the login-device number limit and log in to the victim's account stealthily. A natural countermeasure is to check the consistency of device-specific attributes: as long as the new device shows different device fingerprints from the previous one, the app disables the auto-login function and thus prevents data-clone attacks. In this paper, we develop VPDroid, a transparent Android OS-level virtualization platform tailored for security testing. With VPDroid, security analysts can customize different device artifacts, such as the CPU model, Android ID, and phone number, in a virtual phone without user-level API hooking. VPDroid's isolation mechanism ensures that user-mode apps in the virtual phone cannot detect device-specific discrepancies. To assess Android apps' susceptibility to the data-clone attack, we use VPDroid to simulate data-clone attacks against the 234 most-downloaded apps. Our experiments on five different virtual phone environments show that VPDroid's device attribute customization can deceive all tested apps that perform device-consistency checks, such as Twitter, WeChat, and PayPal. 19 vendors have confirmed our report as a zero-day vulnerability. Our findings paint a cautionary tale: enforcing a device-consistency check only on the client side is still vulnerable to an advanced data-clone attack.
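To make the device-consistency countermeasure concrete, the sketch below shows, in Kotlin, the kind of client-side check an app might perform before honouring an auto-login token. The attribute set, hashing, and storage scheme are illustrative assumptions, not the logic of any specific app tested in the paper; because VPDroid can spoof exactly these attributes inside the virtual phone, such a purely client-side check is defeated by the advanced data-clone attack described above.

```kotlin
import android.content.Context
import android.os.Build
import android.provider.Settings
import java.security.MessageDigest

// Hypothetical helper: honour the cached auto-login token only when the
// current device fingerprint matches the one recorded at the last
// interactive login. VPDroid-style virtualization can spoof exactly the
// attributes hashed here, which is why this check alone is insufficient.
object AutoLoginGuard {
    private const val PREFS = "auto_login"
    private const val KEY_FINGERPRINT = "device_fingerprint"
    private const val KEY_TOKEN = "session_token"

    // Derive a fingerprint from a few device-specific attributes.
    private fun fingerprint(context: Context): String {
        val androidId = Settings.Secure.getString(
            context.contentResolver, Settings.Secure.ANDROID_ID
        ) ?: "unknown"
        val raw = listOf(Build.MANUFACTURER, Build.MODEL, Build.HARDWARE, androidId)
            .joinToString("|")
        return MessageDigest.getInstance("SHA-256")
            .digest(raw.toByteArray())
            .joinToString("") { "%02x".format(it) }
    }

    // Returns the stored token only if the device looks unchanged;
    // otherwise clears it so the user must log in interactively again.
    fun tokenIfDeviceUnchanged(context: Context): String? {
        val prefs = context.getSharedPreferences(PREFS, Context.MODE_PRIVATE)
        val current = fingerprint(context)
        if (prefs.getString(KEY_FINGERPRINT, null) == current) {
            return prefs.getString(KEY_TOKEN, null)
        }
        prefs.edit().remove(KEY_TOKEN).putString(KEY_FINGERPRINT, current).apply()
        return null
    }
}
```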




Read also

Third-party security apps are an integral part of the Android app ecosystem. Many users install them as an extra layer of protection for their devices. There are hundreds of such security apps, both free and paid, in the Google Play Store, and some of them are downloaded millions of times. By installing security apps, smartphone users place a significant amount of trust in the security companies who developed these apps, because a fully functional mobile security app requires access to many smartphone resources such as the storage, text messages and email, browser history, and information about other installed applications. Often these resources contain highly sensitive personal information. As such, it is essential to understand the mobile security app ecosystem to assess whether it is indeed beneficial to install them. To this end, in this paper, we present the first empirical study of Android security apps. We analyse 100 Android security apps from multiple aspects such as metadata, static analysis, and dynamic analysis, and present insights into their operations and behaviours. Our results show that 20% of the security apps we studied potentially resell the data they collect from smartphones to third parties, in some cases even without the user's consent. Also, our experiments show that around 50% of the security apps fail to identify malware installed on a smartphone.
Mobile banking apps, belonging to the most security-critical app category, render massive and dynamic transactions susceptible to security risks. Given the huge potential financial loss caused by vulnerabilities, existing research lacks a comprehensive empirical study on the security risks of global banking apps that could provide useful insights and improve the security of banking apps. Since data-related weaknesses in banking apps are critical and may directly cause serious financial loss, this paper first revisits the state-of-the-art available tools and finds that they have limited capability in identifying data-related security weaknesses of banking apps. To complement the capability of existing tools in data-related weakness detection, we propose a three-phase automated security risk assessment system, named AUSERA, which leverages static program analysis techniques and sensitive keyword identification. By leveraging AUSERA, we collect 2,157 weaknesses in 693 real-world banking apps across 83 countries, which we use as a basis to conduct a comprehensive empirical study from different aspects, such as global distribution and weakness evolution during version updates. We find that apps owned by subsidiary banks are always less secure than, or at best equivalent to, those owned by their parent banks. In addition, we also track the patching of weaknesses and receive much positive feedback from banking entities, thereby improving the security of banking apps in practice. To date, 21 banks have confirmed the weaknesses we reported. We have also exchanged insights with 7 banks, such as HSBC in the UK and OCBC in Singapore, via in-person or online meetings to help them improve their apps. We hope that the insights developed in this paper will inform the communities about the gaps among multiple stakeholders, including banks, academic researchers, and third-party security companies.
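AUSERA's full pipeline is not described in this summary. As a rough illustration only, and under assumed keyword lists and a much-simplified text scan, the sensitive-keyword-identification phase could be sketched as follows; the file layout and heuristics are hypothetical, not taken from the tool itself.

```kotlin
import java.io.File

// Hypothetical sensitive-keyword pass over decompiled app sources. The
// keyword list and co-occurrence heuristic are assumptions for
// illustration; the real AUSERA analysis is considerably richer.
data class Finding(val file: String, val line: Int, val snippet: String)

private val SENSITIVE_KEYWORDS = listOf("password", "pin", "card_no", "account", "token")
private val RISKY_SINKS = listOf("Log.", "System.out", "http://", "getSharedPreferences")

fun scanDecompiledSources(root: File): List<Finding> =
    root.walkTopDown()
        .filter { it.isFile && it.extension in setOf("java", "smali") }
        .toList()
        .flatMap { file ->
            file.readLines().mapIndexedNotNull { idx, line ->
                val lower = line.lowercase()
                // Flag lines where a sensitive keyword co-occurs with a risky sink,
                // e.g. a password written to the log or sent over plain HTTP.
                if (SENSITIVE_KEYWORDS.any { it in lower } && RISKY_SINKS.any { it in line })
                    Finding(file.path, idx + 1, line.trim())
                else null
            }
        }

fun main() {
    // Path is a placeholder: point it at the output of any decompiler.
    scanDecompiledSources(File("decompiled/app")).forEach {
        println("${it.file}:${it.line}  ${it.snippet}")
    }
}
```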
Android users are increasingly concerned with the privacy of their data and the security of their devices. To improve the security awareness of users, recent automatic techniques produce security-centric descriptions by performing program analysis. However, the generated text does not always address users' concerns, as it is generally too technical to be understood by ordinary users. Moreover, different users have varied linguistic preferences, which the text does not match. Motivated by this challenge, we develop an innovative scheme to help users avoid malware and privacy-breaching apps by generating security descriptions that explain the privacy and security related aspects of an Android app in clear and understandable terms. We implement a prototype system, PERSCRIPTION, to generate personalised security-centric descriptions that automatically learn users' security concerns and linguistic preferences to produce user-oriented descriptions. We evaluate our scheme through experiments and user studies. The results clearly demonstrate the improvement in readability and users' security awareness of PERSCRIPTION's descriptions compared to existing description generators.
High-level synthesis (HLS) is a key component for the hardware acceleration of applications, especially thanks to the diffusion of reconfigurable devices in many domains, from data centers to edge devices. HLS reduces development times by allowing designers to raise the abstraction level and use automated methods for hardware generation. Since security concerns are becoming more and more relevant for data-intensive applications, we investigate how to abstract security properties and use HLS to integrate them with the accelerator functionality. We use the case of dynamic information flow tracking, showing how classic software-level abstractions can be efficiently used to hide implementation details from the designers.
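Dynamic information flow tracking (DIFT) at its core attaches a tag to every value and propagates it through every operation, blocking tainted data at sensitive sinks. The snippet below is a minimal, purely software-level sketch of that abstraction, written in Kotlin for illustration only; the paper's contribution is to map this kind of abstraction into HLS-generated hardware, which the snippet does not attempt to show.

```kotlin
// Minimal software-level sketch of DIFT's tag-propagation abstraction:
// every value carries a taint bit, operations on tainted data yield
// tainted data, and a sink refuses tainted values.
data class Tainted(val value: Int, val taint: Boolean) {
    operator fun plus(other: Tainted) =
        Tainted(value + other.value, taint || other.taint)

    operator fun times(other: Tainted) =
        Tainted(value * other.value, taint || other.taint)
}

// A sink (think: an output port of the accelerator) rejects tainted data.
fun writeToSink(result: Tainted) {
    require(!result.taint) { "information-flow violation: tainted value reached a sink" }
    println("sink <- ${result.value}")
}

fun main() {
    val untrusted = Tainted(42, taint = true)    // e.g. data from an external interface
    val constant = Tainted(10, taint = false)
    writeToSink(constant + constant)             // fine: no taint involved
    writeToSink(untrusted * constant + constant) // throws: taint propagated through * and +
}
```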
Access to privacy-sensitive information on Android is a growing concern in the mobile community. Although Google Play recently introduced some privacy guidelines, it is still an open problem to soundly verify whether apps actually comply with such rules. To this aim, in this paper, we discuss a novel methodology based on a fruitful combination of static analysis, dynamic analysis, and machine learning techniques, which allows assessing such compliance. In more detail, our methodology checks whether each app i) contains a privacy policy that complies with the Google Play privacy guidelines, and ii) accesses privacy-sensitive information only upon the acceptance of the policy by the user. Furthermore, the methodology also allows checking the compliance of third-party libraries embedded in the apps w.r.t. the same privacy guidelines. We implemented our methodology in a tool, 3PDroid, and we carried out an assessment on a set of recent and most-downloaded Android apps in the Google Play Store. Experimental results suggest that more than 95% of apps access users' privacy-sensitive information, but only a negligible subset of them (around 1%) fully complies with the Google Play privacy guidelines.