
A Study of Data Store-based Home Automation

Added by Kevin Moran
Publication date: 2018
Language: English





Home automation platforms provide a new level of convenience by enabling consumers to automate various aspects of physical objects in their homes. While the convenience is beneficial, security flaws in the platforms or integrated third-party products can have serious consequences for the integrity of a user's physical environment. In this paper we perform a systematic security evaluation of two popular smart home platforms, Google's Nest platform and Philips Hue, that implement home automation routines (i.e., trigger-action programs involving apps and devices) via manipulation of state variables in a centralized data store. Our semi-automated analysis examines, among other things, platform access control enforcement, the rigor of non-system enforcement procedures, and the potential for misuse of routines. This analysis results in ten key findings with serious security implications. For instance, we demonstrate the potential for the misuse of smart home routines in the Nest platform to perform a lateral privilege escalation, illustrate how Nest's product review system is ineffective at preventing multiple stages of this attack that it examines, and demonstrate how emerging platforms may fail to provide even bare-minimum security by allowing apps to arbitrarily add/remove other apps from the user's smart home. Our findings draw attention to the unique security challenges of platforms that execute routines via centralized data stores and highlight the importance of enforcing security by design in emerging home automation platforms.
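The abstract's core mechanism can be sketched in a few lines: routines are trigger-action bindings on shared state variables, so any app that can write a variable indirectly drives every routine bound to it. The following is a minimal illustrative sketch; the `DataStore` class and all names are assumptions for exposition, not the actual Nest or Hue APIs.

```python
class DataStore:
    """Toy centralized key-value store; routines fire on state-variable writes.
    Illustrative only -- not the real Nest/Hue data store interface."""

    def __init__(self):
        self.state = {}
        self.routines = []  # list of (variable, predicate, action) bindings

    def subscribe(self, variable, predicate, action):
        self.routines.append((variable, predicate, action))

    def write(self, variable, value):
        self.state[variable] = value
        # Trigger-action execution: any app with write access to `variable`
        # indirectly drives every routine bound to it -- the root of the
        # lateral privilege escalation the paper demonstrates.
        for var, predicate, action in self.routines:
            if var == variable and predicate(value):
                action(value)

store = DataStore()
log = []
# A high-privilege app binds a routine to the shared "home_away" variable:
store.subscribe("home_away", lambda v: v == "away",
                lambda v: log.append("unlock_vacation_mode"))
# A low-privilege app with write access flips the variable and the routine fires:
store.write("home_away", "away")
```

The point of the sketch is that access control must be enforced on the state variables themselves, not just on the routines, since the data store is the shared execution substrate.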




Security researchers have recently discovered significant security and safety issues related to home automation and developed approaches to address them. Such approaches often face design and evaluation challenges which arise from their restricted perspective of home automation that is bounded by the IoT apps they analyze. The challenges of past work can be overcome by relying on a deeper understanding of realistic home automation usage. More specifically, the availability of natural home automation scenarios, i.e., sequences of home automation events that may realistically occur in an end-user's home, could help security researchers design better security/safety systems. This paper presents Helion, a framework for building a natural perspective of home automation. Helion identifies the regularities in user-driven home automation, i.e., from user-driven routines that are increasingly being created by users through intuitive platform UIs. Our intuition for designing Helion is that smart home event sequences created by users exhibit an inherent set of semantic patterns, or naturalness, that can be modeled and used to generate valid and useful scenarios. To evaluate our approach, we first empirically demonstrate that this naturalness hypothesis holds, with a corpus of 30,518 home automation events, constructed from 273 routines collected from 40 users. We then demonstrate that the scenarios generated by Helion are reasonable and valid from an end-user perspective, through an evaluation with 16 external evaluators. We further show the usefulness of Helion's scenarios by generating 17 home security/safety policies with significantly less effort than existing approaches. We conclude by discussing key takeaways and future research challenges enabled by Helion's natural perspective of home automation.
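The "naturalness" intuition above mirrors statistical language modeling: some events tend to follow others. A minimal bigram sketch of that idea, with an invented toy corpus and event names (Helion's actual model and corpus are different and far larger):

```python
from collections import Counter, defaultdict

# Toy corpus of home-automation event sequences (invented for illustration):
corpus = [
    ["door_unlock", "lights_on", "thermostat_heat"],
    ["door_unlock", "lights_on", "music_on"],
    ["motion_detected", "lights_on", "camera_record"],
]

# Count which event follows which (a bigram model):
bigrams = defaultdict(Counter)
for seq in corpus:
    for prev, nxt in zip(seq, seq[1:]):
        bigrams[prev][nxt] += 1

def most_likely_next(event):
    """Predict the most frequent successor event observed in the corpus."""
    return bigrams[event].most_common(1)[0][0]

print(most_likely_next("door_unlock"))  # lights_on
```

Sampling from such a model instead of taking the argmax yields synthetic but plausible event sequences, which is the kind of scenario generation the abstract describes.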
Yangde Wang (2021)
Pattern lock is a technique widely used for identity authentication and access authorization on mobile terminal devices such as Android platform devices, but it is vulnerable to attacks proposed by recent research that exploit information leaked by users while drawing patterns. However, existing attacks on pattern lock are environmentally sensitive and rely heavily on manual work, which constrains their practicability. To attain a more practical attack, this paper designs PatternMonitor, an end-to-end pipeline against pattern lock with a much higher level of automation, which extracts guessed candidate patterns from a video containing pattern drawing: instead of manually cutting the target video and setting thresholds, it first employs recognition models to locate the target phone and the keypoints of the pattern-drawing hand, which enables the gesture to be recognized even when the fingertips are shaded. Then, we extract the frames from the video where the drawing starts and ends. These pre-processed frames are inputs to a target tracking model that generates trajectories, which are further transformed into candidate patterns by our designed algorithm. To the best of our knowledge, our work is the first attack system to generate candidate patterns by relying only on hand movement instead of accurate fingertip capture. The experimental results demonstrate that our work is as accurate as previous work, achieving a success rate of more than 90% within 20 attempts.
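The final pipeline stage, turning a recovered trajectory into a candidate pattern, can be sketched as quantizing (x, y) samples onto the 3x3 lock grid. The coordinates, screen size, and grid geometry below are assumptions for illustration; the paper's recognition and tracking models are not reproduced here.

```python
def trajectory_to_pattern(points, grid=3, size=300):
    """Map (x, y) trajectory samples onto 3x3 grid-cell indices 0..8,
    collapsing consecutive duplicates into a single dot visit.
    `size` is an assumed square screen region covering the lock grid."""
    cell = size / grid
    pattern = []
    for x, y in points:
        idx = int(y // cell) * grid + int(x // cell)
        if not pattern or pattern[-1] != idx:
            pattern.append(idx)
    return pattern

# A diagonal swipe from the top-left to the bottom-right corner:
swipe = [(20, 20), (150, 150), (280, 280)]
print(trajectory_to_pattern(swipe))  # [0, 4, 8]
```

In practice the trajectory is noisy, so an attack system would emit a ranked list of nearby patterns rather than a single quantization, which is why the abstract reports success within 20 attempts.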
Although there are over 1,600,000 third-party Android apps in the Google Play Store, little has been conclusively shown about how their individual (and collective) permission usage has evolved over time. Recently, Android 6 overhauled the way permissions are granted by users, by switching to run-time permission requests instead of install-time permission requests. This is a welcome change, but recent research has shown that many users continue to accept run-time permissions blindly, leaving them at the mercy of third-party app developers and adversaries. Beyond intentionally invading privacy, highly privileged apps increase the attack surface of smartphones and are more attractive targets for adversaries. This work focuses exclusively on dangerous permissions, i.e., those permissions identified by Android as guarding access to sensitive user data. By taking snapshots of the Google Play Store over a 20-month period, we characterise changes in the number and type of dangerous permissions used by Android apps when they are updated, to gain a greater understanding of the evolution of permission usage. We found that approximately 25,000 apps asked for additional permissions every three months. Worryingly, we made statistically significant observations that free apps and highly popular apps were more likely to ask for additional permissions when they were updated. By looking at patterns in dangerous permission usage, we find evidence that suggests developers may still be failing to correctly specify the permissions their apps need.
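The snapshot methodology above amounts to set-differencing each app's requested dangerous permissions between successive Play Store crawls. A minimal sketch, with invented app ids and snapshots (the study's real corpus spans 20 months of crawls):

```python
def apps_gaining_permissions(before, after):
    """Given two snapshots mapping app id -> set of dangerous permissions,
    return the apps that requested additional permissions, and which ones."""
    gained = {}
    for app, perms in after.items():
        added = perms - before.get(app, set())
        if added:
            gained[app] = added
    return gained

# Hypothetical snapshots three months apart:
snap_jan = {"com.example.torch": {"CAMERA"}}
snap_apr = {"com.example.torch": {"CAMERA", "ACCESS_FINE_LOCATION"}}
print(apps_gaining_permissions(snap_jan, snap_apr))
# {'com.example.torch': {'ACCESS_FINE_LOCATION'}}
```

Aggregating this diff over every update interval gives the per-quarter counts of permission-gaining apps that the abstract reports.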
We present VStore, a data store for supporting fast, resource-efficient analytics over large archival videos. VStore manages video ingestion, storage, retrieval, and consumption. It controls video formats along the video data path. It is challenged by i) the huge combinatorial space of video format knobs; ii) the complex impacts of these knobs and their high profiling cost; iii) optimizing for multiple resource types. It explores an idea called backward derivation of configuration: in the opposite direction along the video data path, VStore passes the video quantity and quality expected by analytics backward to retrieval, to storage, and to ingestion. In this process, VStore derives an optimal set of video formats, optimizing for different resources in a progressive manner. VStore automatically derives large, complex configurations consisting of more than one hundred knobs over tens of video formats. In response to queries, VStore selects video formats catering to the executed operators and the target accuracy. It streams video data from disks through decoder to operators. It runs queries as fast as 362x video realtime.
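The backward-derivation idea can be sketched as propagating the analytics operator's demand upstream along the data path, with each stage picking the cheapest format that still satisfies everything downstream of it. The stage names, candidate resolutions, and single "resolution" knob below are illustrative assumptions; VStore's real configurations involve over a hundred knobs.

```python
def derive_formats(operator_resolution, stage_options):
    """Walk the data path backward (retrieval -> storage -> ingestion);
    each stage keeps the cheapest resolution satisfying downstream demand."""
    formats = {}
    needed = operator_resolution
    for stage, options in stage_options:
        chosen = min(r for r in options if r >= needed)
        formats[stage] = chosen
        needed = chosen  # downstream demand propagates upstream
    return formats

# Hypothetical per-stage candidate resolutions, ordered back along the path:
budgets = [
    ("retrieval", [180, 360, 720]),
    ("storage",   [360, 720, 1080]),
    ("ingestion", [720, 1080]),
]
print(derive_formats(360, budgets))
# {'retrieval': 360, 'storage': 360, 'ingestion': 720}
```

The key property of the sketch matches the abstract: ingestion may store a richer format than analytics needs (720 vs. 360 here) only when its cheapest option forces it, never a poorer one.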
Smart speakers and voice-based virtual assistants are core components for the success of the IoT paradigm. Unfortunately, they are vulnerable to various privacy threats exploiting machine learning to analyze the generated encrypted traffic. To cope with that, deep adversarial learning approaches can be used to build black-box countermeasures altering the network traffic (e.g., via packet padding) and its statistical information. This letter showcases the inadequacy of such countermeasures against machine learning attacks with a dedicated experimental campaign on a real network dataset. Results indicate the need for a major re-engineering to guarantee the suitable protection of commercially available smart speakers.
