Trigger-action platforms (TAPs) allow users to connect independent web-based or IoT services to achieve useful automation. They provide a simple interface that helps end users create trigger-compute-action rules that pass data between disparate Internet services. Unfortunately, TAPs introduce a large-scale security risk: if they are compromised, attackers gain access to sensitive data for millions of users. To avoid this risk, we propose eTAP, a privacy-enhancing trigger-action platform that executes trigger-compute-action rules without accessing users' private data in plaintext or learning anything about the results of the computation. We use garbled circuits as a primitive, and leverage the unique structure of trigger-compute-action rules to make them practical. We formally state and prove the security guarantees of our protocols. We prototyped eTAP, which supports the most commonly used operations on popular commercial TAPs such as IFTTT and Zapier. Specifically, it supports Boolean, arithmetic, and string operations on private trigger data and can run 100% of the top 500 rules of IFTTT users and 93.4% of all publicly available rules on Zapier. Based on ten existing rules that exercise a wide variety of operations, we show that eTAP has a modest performance impact: on average, rule execution latency increases by 70 ms (55%) and throughput decreases by 59%.
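To make the rule structure concrete, here is a minimal Python sketch of a trigger-compute-action rule of the kind eTAP executes. In eTAP the compute step is evaluated as a garbled circuit so the platform never sees the trigger data in plaintext; below, a plain function stands in for that garbled evaluation, and all names are illustrative rather than eTAP's actual API.

```python
# Minimal sketch of a trigger-compute-action rule (illustrative, not eTAP's API).
# In eTAP the `compute` step runs inside a garbled circuit so the platform
# never observes `trigger_data` in plaintext; here it is a plain function.
from typing import Optional

def compute(trigger_data: dict) -> Optional[dict]:
    """Boolean/string predicate over private trigger data (the 'compute' step)."""
    # Example rule: forward an email alert only if the subject mentions "invoice".
    if "invoice" in trigger_data["subject"].lower():
        return {"message": f"New invoice email from {trigger_data['sender']}"}
    return None  # predicate is false: no action fires

def run_rule(trigger_data: dict) -> None:
    action_payload = compute(trigger_data)  # garbled-circuit evaluation in eTAP
    if action_payload is not None:
        print("ACTION:", action_payload["message"])  # e.g., call the action service

run_rule({"subject": "Invoice #42 due", "sender": "billing@example.com"})
```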
The recent spate of cyber security attacks has compromised end users' data safety and privacy in Medical Cyber-Physical Systems (MCPS). Traditional standard encryption algorithms for data protection are designed from the viewpoint of system architecture rather than the viewpoint of end users. Because such algorithms transfer the protection of the data to the protection of the keys, data safety and privacy are compromised once a key is exposed. In this paper, we propose a secure data storage and sharing method consisting of a selective encryption algorithm combined with fragmentation and dispersion, which protects data safety and privacy even when both the transmission media (e.g., cloud servers) and the keys are compromised. This method is based on a user-centric design that protects the data on a trusted device, such as the end user's smartphone, and lets the end user control access for data sharing. We also evaluate the performance of the algorithm on a smartphone platform to demonstrate its efficiency.
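The hedged sketch below illustrates the overall flow of selective encryption combined with fragmentation and dispersion. It uses the `cryptography` package's Fernet for the encryption step and naive byte interleaving for fragmentation; the paper's actual selective-encryption algorithm and cloud back ends are not reproduced, and the "servers" are just in-memory fragments.

```python
# Hedged sketch of selective encryption + fragmentation + dispersion.
# Requires the `cryptography` package (pip install cryptography). This only
# shows the flow, not the paper's algorithm; fragments stand in for clouds.
from cryptography.fernet import Fernet

def protect(record: dict, sensitive_fields: set, n_servers: int = 3):
    key = Fernet.generate_key()  # key stays on the trusted device (smartphone)
    f = Fernet(key)
    # Selective encryption: only the fields marked sensitive are encrypted.
    blob = "|".join(f"{k}={v}" for k, v in record.items() if k in sensitive_fields)
    ciphertext = f.encrypt(blob.encode())
    # Fragmentation + dispersion: interleave ciphertext bytes across servers,
    # so neither one compromised server nor the key alone suffices to recover data.
    frags = [ciphertext[i::n_servers] for i in range(n_servers)]
    return key, frags

key, frags = protect({"name": "alice", "diagnosis": "flu"}, {"diagnosis"})
print(len(frags), "fragments dispersed to independent servers")
```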
The proliferation of the Internet of Things (IoT) is reshaping our lifestyle. With IoT sensors and devices communicating with each other via the Internet, people can customize automation rules to meet their needs. Unless carefully defined, however, such rules can easily become points of security failure as the number of devices and the complexity of rules increase. Device owners may end up unintentionally granting access or revealing private information to unauthorized entities due to complex chain reactions among devices. Prior work on trigger-action programming either focuses on conflict resolution or usability issues, or fails to accurately and efficiently detect such attack chains. This paper explores the security vulnerabilities that arise when users have the freedom to customize automation rules using trigger-action programming. We define two broad classes of attack, privilege escalation and privacy leakage, and present a practical model-checking-based system called SAFECHAIN that detects hidden attack chains exploiting combinations of rules. Built upon existing model-checking techniques, SAFECHAIN identifies attack chains by modeling the IoT ecosystem as a finite state machine. To improve practicality, SAFECHAIN avoids the need to accurately model the environment by frequently re-checking the automation rules given the current states, and employs rule-aware optimizations to further reduce overhead. Our comparative analysis shows that SAFECHAIN can efficiently and accurately identify attack chains, and our prototype implementation can verify 100 rules in less than one second with no false positives.
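As a rough illustration of the underlying idea (not SAFECHAIN's implementation), the sketch below models trigger-action rules as guarded state transitions and uses breadth-first search to check whether chaining the rules can drive the system into an unsafe state; the rules and attributes are contrived for the example.

```python
# Illustrative attack-chain detection: rules as guarded transitions over a
# finite state space, explored with BFS (a toy stand-in for model checking).
from collections import deque

# Each rule: if the `trig` attributes hold, set the `act` attributes.
rules = [
    ({"window": "open"}, {"temp": "low"}),        # open window cools the room
    ({"temp": "low"},    {"heater": "on"}),       # thermostat reacts to cold
    ({"heater": "on"},   {"door": "unlocked"}),   # contrived unsafe rule
]

def reachable_unsafe(init: dict, unsafe: dict) -> bool:
    seen, queue = set(), deque([tuple(sorted(init.items()))])
    while queue:
        state = dict(queue.popleft())
        if all(state.get(k) == v for k, v in unsafe.items()):
            return True  # hidden attack chain: unsafe state is reachable
        for trig, act in rules:
            if all(state.get(k) == v for k, v in trig.items()):
                nxt = tuple(sorted({**state, **act}.items()))
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
    return False

# The attacker merely opens a window, yet the rule chain unlocks the door.
print(reachable_unsafe({"window": "open"}, {"door": "unlocked"}))  # True
```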
Differential privacy is a rigorous mathematical framework for evaluating and protecting data privacy. Most existing studies rest on the fragile assumption that the records in a dataset are independent when differential privacy is applied. In real-world datasets, however, records are likely to be correlated, which may lead to unexpected data leakage. In this survey, we investigate the issue of privacy loss due to data correlation under differential privacy models. Roughly, we classify the existing literature into three lines: 1) using parameters to describe data correlation in differential privacy, 2) using models to describe data correlation in differential privacy, and 3) describing data correlation based on the Pufferfish framework. First, we give a detailed example to illustrate privacy leakage on correlated data in real-world scenarios. Our main work is then to analyze and compare these methods and to evaluate the situations in which these diverse studies apply. Finally, we outline future challenges in correlated differential privacy.
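A worked example helps make the independence problem concrete. The snippet below applies the standard Laplace mechanism to a count query and then shows the sensitivity correction needed when k records are perfectly correlated (the group-privacy view); the parameter values are illustrative.

```python
# The Laplace mechanism, and why correlation hurts. For a count query with
# sensitivity 1, Lap(1/eps) noise gives eps-DP under the independence
# assumption. If k records are perfectly correlated (changing one individual
# changes k records), the effective sensitivity becomes k, so the same noise
# only gives (k*eps)-DP; restoring eps-DP requires Lap(k/eps) noise instead.
import numpy as np

rng = np.random.default_rng(0)
eps, k = 0.5, 5  # privacy budget; size of a perfectly correlated record group

def noisy_count(true_count: int, sensitivity: float) -> float:
    return true_count + rng.laplace(scale=sensitivity / eps)

print(noisy_count(1000, sensitivity=1))  # valid only if records are independent
print(noisy_count(1000, sensitivity=k))  # corrected for k correlated records
```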
In the big data era, more and more cloud-based data-driven applications are being developed that leverage individual data to provide valuable services (the utilities). On the other hand, since the same individual data can be used to infer sensitive information about those individuals, such applications create new channels for snooping on individuals' privacy. It is therefore important to develop techniques that enable data owners to release privatized data that can still be utilized for an intended purpose. Existing data-releasing approaches, however, are either privacy-emphasized (with no consideration of utility) or utility-driven (with no guarantees on privacy). In this work, we propose a two-step, perturbation-based, utility-aware privacy-preserving data-releasing framework. First, predefined privacy and utility problems are learned from public-domain data (background knowledge). Our approach then leverages the learned knowledge to precisely perturb the data owner's data into privatized data that can still be successfully utilized for the intended purpose (learning to succeed) without jeopardizing the predefined privacy (training to fail). Extensive experiments on the Human Activity Recognition, Census Income, and Bank Marketing datasets demonstrate the effectiveness and practicality of our framework.
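The toy sketch below conveys the "learning to succeed, training to fail" intuition under simplifying assumptions that are not the paper's: linear models stand in for the learned utility and privacy tasks, and Gaussian noise on the most privacy-critical features stands in for the precise perturbation.

```python
# Hedged sketch of the two-step idea: learn both tasks from "public" data,
# then perturb only the features the privacy model relies on most, so the
# released data still serves the utility task while privacy inference degrades.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 6))
y_util = (X[:, 0] + X[:, 1] > 0).astype(int)  # intended purpose (utility)
y_priv = (X[:, 4] + X[:, 5] > 0).astype(int)  # sensitive attribute (privacy)

util_model = LogisticRegression().fit(X, y_util)  # learned as background knowledge
priv_model = LogisticRegression().fit(X, y_priv)

# Perturb the two features most predictive of the sensitive attribute.
idx = np.argsort(np.abs(priv_model.coef_[0]))[-2:]
X_rel = X.copy()
X_rel[:, idx] += rng.normal(scale=3.0, size=(len(X), 2))

print("utility acc :", util_model.score(X_rel, y_util))  # stays high
print("privacy acc :", priv_model.score(X_rel, y_priv))  # drops markedly
```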
Buildings account for a large proportion of energy consumption, much of it due to cooling and heating systems. Scholars have long worked on the energy conservation of heating, ventilation, and air conditioning (HVAC) and other building systems, and the application of occupant behavior data to building energy optimization has started gaining attention. However, occupant behavior data touches many aspects of occupants' privacy, and different types of occupant behavior data contain occupants' private information to different degrees. It is therefore crucial to protect the privacy of occupant behavior data when using it for energy conservation. This paper presents the privacy issues that arise when using occupant behavior data, along with methods to protect data privacy with blockchain technology. Two options for using blockchain for privacy protection, sending data records as transactions and storing files on the blockchain, are explained and evaluated with temperature records from an open-access paper. Sending data as transactions suits communication between sensors and a local building management system, while storing files on the blockchain suits collaboration among different building management systems. The advantages, drawbacks, and potential of using blockchain for data and file transfer are discussed. The results should be helpful for using occupant behavior data for building energy optimization.
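As a toy illustration of the "data records as transactions" option, the snippet below chains temperature records with SHA-256 hashes so that later tampering by a storage provider is detectable; a real deployment would submit such records as transactions to an actual blockchain rather than an in-memory list.

```python
# Toy hash chain standing in for sending occupant temperature records as
# blockchain transactions: each record commits to its predecessor's hash.
import hashlib, json, time

def add_record(chain: list, record: dict) -> None:
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"prev": prev_hash, "ts": time.time(), "data": record}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append(body)

def verify(chain: list) -> bool:
    for i, blk in enumerate(chain):
        body = {k: v for k, v in blk.items() if k != "hash"}
        h = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if h != blk["hash"] or (i and blk["prev"] != chain[i - 1]["hash"]):
            return False
    return True

chain = []
add_record(chain, {"room": "A101", "temp_c": 22.4})
add_record(chain, {"room": "A101", "temp_c": 22.9})
print(verify(chain))               # True: chain is intact
chain[0]["data"]["temp_c"] = 30.0  # tamper with a stored record
print(verify(chain))               # False: tampering is detected
```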