
A Privacy-Preserving Data Inference Framework for Internet of Health Things Networks

Added by Gang Luo
Publication date: 2020
Research language: English





Privacy protection in electronic healthcare applications is an important consideration due to the sensitive nature of personal health data. Internet of Health Things (IoHT) networks therefore carry strict privacy requirements in healthcare settings. However, these networks present unique challenges: their security requirements (integrity, authentication, privacy, and availability) must be balanced against the need for efficiency in order to conserve battery power, which is a significant limitation of IoHT devices and networks. Data are usually transferred without filtering or optimization, and the resulting traffic can overload sensors and drain batteries rapidly, restricting the practical deployment of these devices. To address these issues, this paper proposes a privacy-preserving two-tier data inference framework that conserves battery power by inferring the sensed data, thereby reducing the volume of data that must be transmitted, while also protecting sensitive data from leakage to adversaries. Experimental evaluations of privacy confirm the validity of the proposed scheme and show significant data savings without compromising transmission accuracy, which contributes to the energy efficiency of IoHT sensor devices.
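The abstract does not specify how the sensed data is inferred; a common realization of this idea is a dual-prediction scheme, in which the sensor and the gateway run the same lightweight predictor and the sensor transmits only when the actual reading deviates from the prediction. The Python sketch below illustrates that idea under assumed details (a last-value predictor and an arbitrary deviation threshold); it is not the paper's actual algorithm.

```python
# Minimal sketch of a dual-prediction data inference scheme (an assumption:
# the paper's abstract does not detail its inference mechanism). Sensor and
# gateway run the same predictor; the sensor transmits only when the real
# reading deviates from the prediction by more than a threshold.

class DualPredictionNode:
    def __init__(self, threshold=0.5):
        self.threshold = threshold
        self.last_value = None  # shared model state: last transmitted value

    def should_transmit(self, reading):
        """Sensor side: transmit only on significant deviation."""
        if self.last_value is None or abs(reading - self.last_value) > self.threshold:
            self.last_value = reading
            return True  # send `reading` over the network
        return False     # suppress transmission; gateway infers the value

    def infer(self, received=None):
        """Gateway side: use the received value, or infer it from the model."""
        if received is not None:
            self.last_value = received
        return self.last_value


readings = [36.5, 36.6, 36.5, 37.4, 37.5, 37.4]  # e.g., body temperature
sensor, gateway = DualPredictionNode(), DualPredictionNode()
sent = 0
for r in readings:
    if sensor.should_transmit(r):
        gateway.infer(received=r)
        sent += 1
    else:
        gateway.infer()  # value inferred locally, no radio transmission
print(f"transmitted {sent}/{len(readings)} readings")  # -> 2/6
```

Because the radio is typically the dominant power draw on a sensor node, suppressing two thirds of transmissions in this toy trace directly translates into battery savings, at the cost of a bounded inference error set by the threshold.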




Read More

Shuo Chen, Rongxing Lu (2017)
The singular value decomposition (SVD) is a widely used matrix factorization tool that underlies many useful applications, e.g., recommender systems, anomaly detection, and data compression. In the emerging Internet of Things (IoT) environment, there is an increasing demand for data analysis to improve people's lives and create new points of economic growth. Moreover, due to the large scope of the IoT, most data analysis should be done at the network edge, i.e., handled by fog computing. However, the devices that provide fog computing may not be trustworthy, while data privacy is often a significant concern for IoT application users. Thus, when performing SVD for data analysis, the privacy of user data should be preserved. For these reasons, this paper proposes a privacy-preserving fog computing framework for SVD computation. Security and performance analysis shows the practicability of the proposed framework. Furthermore, since different applications may utilize the result of the SVD operation in different ways, three applications with different objectives are introduced to show how the framework can flexibly serve the purposes of different applications, demonstrating the flexibility of the design.
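For readers unfamiliar with the underlying operation, the sketch below shows a plain (non-private) truncated SVD used for data compression with NumPy. The paper's contribution is performing this computation at fog nodes without exposing user data; those cryptographic details are not given in the abstract, so only the baseline operation is shown here.

```python
# A minimal, non-private sketch of truncated SVD for data compression.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))          # e.g., a matrix of sensor readings

U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 5                                       # keep only the top-k singular values
X_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k]    # rank-k approximation of X

# Relative reconstruction error of the compressed representation
err = np.linalg.norm(X - X_k) / np.linalg.norm(X)
print(f"rank-{k} relative error: {err:.3f}")
```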
Han Qiu, Meikang Qiu, Meiqin Liu (2019)
The recent spate of cyber security attacks has compromised end users' data safety and privacy in Medical Cyber-Physical Systems (MCPS). Traditional standard encryption algorithms for data protection are designed from the viewpoint of system architecture rather than that of end users. Because such algorithms transfer the protection of the data to the protection of the keys, data safety and privacy are compromised once a key is exposed. In this paper, we propose a secure data storage and sharing method consisting of a selective encryption algorithm combined with fragmentation and dispersion, which protects data safety and privacy even when both the transmission media (e.g., cloud servers) and the keys are compromised. This method is based on a user-centric design that protects the data on a trusted device, such as the end user's smartphone, and lets the end user control access for data sharing. We also evaluate the performance of the algorithm on a smartphone platform to demonstrate its efficiency.
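As a rough illustration of the general idea (the abstract does not describe the concrete algorithm), the sketch below encrypts only a selected fragment of a record and treats the remaining fragments as material to be dispersed across separate storage locations. Fernet from the `cryptography` package is an illustrative stand-in for whatever cipher the authors actually use.

```python
# A minimal sketch of selective encryption with fragmentation and dispersion.
# Fernet is an illustrative stand-in, not the paper's algorithm.
from cryptography.fernet import Fernet

def protect(data: bytes, n_fragments: int = 3):
    """Encrypt only a selected 'critical' fragment, then disperse fragments."""
    size = len(data) // n_fragments + 1
    fragments = [data[i:i + size] for i in range(0, len(data), size)]

    key = Fernet.generate_key()          # kept on the trusted end-user device
    fragments[0] = Fernet(key).encrypt(fragments[0])  # selective encryption

    # Each fragment would be dispersed to a different storage location, so a
    # compromised server (or an exposed key) alone cannot rebuild the record.
    return key, fragments

def recover(key: bytes, fragments):
    head = Fernet(key).decrypt(fragments[0])
    return head + b"".join(fragments[1:])

key, frags = protect(b"patient-id:1234;hr:72;bp:120/80")
assert recover(key, frags) == b"patient-id:1234;hr:72;bp:120/80"
```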
Collaborative inference has recently emerged as an intriguing framework for applying deep learning to Internet of Things (IoT) applications; it works by splitting a DNN model into two sub-models deployed respectively on resource-constrained IoT devices and in the cloud. Even though an IoT application's raw input data is not directly exposed to the cloud in this framework, revealing the local-part model's intermediate output still entails privacy risks. To mitigate these risks, differential privacy could be adopted in principle. However, the practicality of differential privacy for collaborative inference under various conditions remains unclear. For example, it is unclear how calibration of the privacy budget epsilon affects the protection strength and model accuracy in the presence of the state-of-the-art reconstruction attack targeting collaborative inference, and whether a good privacy-utility balance exists. In this paper, we provide the first systematic study assessing the effectiveness of differential privacy for protecting collaborative inference against the reconstruction attack, through extensive empirical evaluations on various datasets. Our results show that differential privacy can be used for collaborative inference when confronted with the reconstruction attack, with insights provided about privacy-utility trade-offs. Specifically, across the evaluated datasets, we observe that there exists a suitable privacy budget range (particularly 100 <= epsilon <= 200 in our evaluation) providing a good trade-off between utility and privacy protection. The key observation drawn from our study is that differential privacy tends to perform better in collaborative inference for datasets with smaller intra-class variations, which, to our knowledge, is the first easy-to-adopt practical guideline.
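A minimal sketch of the mechanism under study: the device-side sub-model's intermediate activations are clipped and perturbed with noise before leaving the device. The clipping bound and the choice of the Laplace mechanism here are illustrative assumptions, not the paper's exact configuration; only the epsilon value echoes the range reported in the abstract.

```python
# A minimal sketch of applying differential-privacy noise to the intermediate
# output of a split (collaborative-inference) model before uploading it.
import numpy as np

def privatize_activations(z: np.ndarray, epsilon: float = 150.0,
                          clip: float = 1.0) -> np.ndarray:
    """Clip activations to bound sensitivity, then add Laplace noise."""
    z = np.clip(z, -clip, clip)      # each element now varies by at most 2*clip
    scale = 2.0 * clip / epsilon     # Laplace scale b = sensitivity / epsilon
    return z + np.random.laplace(0.0, scale, size=z.shape)

# Device side: run the local sub-model, privatize, then upload to the cloud.
local_output = np.random.randn(1, 256)       # stand-in for DNN activations
noisy = privatize_activations(local_output)  # this is what leaves the device
```

A larger epsilon means less noise (better accuracy, weaker protection), which is why the paper's search for a workable range such as 100 <= epsilon <= 200 matters in practice.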
This paper is a general survey of the security issues existing in the Internet of Things (IoT), along with an analysis of the privacy issues that an end user may face as a consequence of the spread of the IoT. The majority of the survey focuses on the security loopholes arising from the information exchange technologies used in the Internet of Things. No countermeasures to these security drawbacks are analyzed in the paper.
Federated learning has emerged as a popular paradigm for collaboratively training a model from data distributed among a set of clients. This learning setting presents, among others, two unique challenges: how to protect the privacy of clients' data during training, and how to ensure the integrity of the trained model. We propose a two-pronged solution that aims to address both challenges under a single framework. First, we propose to create secure enclaves using a trusted execution environment (TEE) within the server. Each client can then encrypt its gradients and send them to verifiable enclaves. The gradients are decrypted within the enclave without fear of privacy breaches. However, robustness-check computations in a TEE are computationally prohibitive. Hence, in the second step, we perform a novel gradient encoding that enables TEEs to encode the gradients and offload Byzantine-check computations to accelerators such as GPUs. Our proposed approach provides theoretical bounds on information leakage and offers a significant speed-up over the baseline in empirical evaluation.
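The sketch below illustrates only the first prong under assumed details: each client encrypts its gradient so that only the server-side enclave, which holds the key (provisioned via attestation in a real deployment), can decrypt and aggregate. Fernet is an illustrative stand-in for the actual channel encryption, and the gradient-encoding/Byzantine-check step is omitted.

```python
# A minimal sketch of the client side of TEE-protected federated learning:
# clients encrypt gradients; only the enclave can decrypt and average them.
import numpy as np
from cryptography.fernet import Fernet

enclave_key = Fernet.generate_key()  # assumed provisioned to the TEE via attestation

def client_upload(gradient: np.ndarray) -> bytes:
    """Serialize and encrypt a local gradient for the server's enclave."""
    return Fernet(enclave_key).encrypt(gradient.astype(np.float32).tobytes())

def enclave_aggregate(ciphertexts) -> np.ndarray:
    """Inside the TEE: decrypt each gradient and average them."""
    grads = [np.frombuffer(Fernet(enclave_key).decrypt(c), dtype=np.float32)
             for c in ciphertexts]
    return np.mean(grads, axis=0)

uploads = [client_upload(np.random.randn(10)) for _ in range(4)]
avg_gradient = enclave_aggregate(uploads)  # the untrusted host never sees plaintext
```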
