
Privacy-preserving Medical Treatment System through Nondeterministic Finite Automata

Added by Yang Yang
Publication date: 2020
Language: English





In this paper, we propose a privacy-preserving medical treatment system using nondeterministic finite automata (NFA), hereafter referred to as P-Med, designed for the remote medical environment. P-Med uses the nondeterministic transition characteristic of NFA to flexibly represent the medical model, which includes illness states, treatment methods, and the state transitions caused by applying different treatment methods. A medical model is encrypted and outsourced to the cloud to deliver telemedicine services. Using P-Med, patient-centric diagnosis and treatment can be made on-the-fly while protecting the confidentiality of a patient's illness states and treatment recommendation results. Moreover, a new privacy-preserving NFA evaluation method is given in P-Med to obtain a confidential match result for the evaluation of an encrypted NFA on an encrypted data set, which avoids cumbersome inner-state transition determination. We demonstrate that P-Med realizes treatment procedure recommendation without privacy leakage to unauthorized parties. We conduct extensive experiments and analyses to evaluate its efficiency.
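The abstract describes the medical model only at a high level. As a rough illustration (all names are hypothetical, and none of the paper's encryption is shown), the plaintext structure that P-Med encrypts could be sketched in Python as an NFA whose states are illness states and whose input symbols are treatments:

```python
# Hypothetical sketch of a plaintext medical model as an NFA. Illness
# states are NFA states, treatments are input symbols, and the
# nondeterministic transition relation captures that one treatment may
# lead to several possible next illness states.
from collections import defaultdict

class MedicalNFA:
    def __init__(self):
        # (illness_state, treatment) -> set of possible next illness states
        self.transitions = defaultdict(set)

    def add_transition(self, state, treatment, next_state):
        self.transitions[(state, treatment)].add(next_state)

    def evaluate(self, start, treatments):
        """All illness states reachable by applying a treatment sequence."""
        current = {start}
        for t in treatments:
            nxt = set()
            for s in current:
                nxt |= self.transitions[(s, t)]
            current = nxt
        return current

nfa = MedicalNFA()
nfa.add_transition("fever", "antipyretic", "recovered")
nfa.add_transition("fever", "antipyretic", "persistent_fever")  # nondeterminism
print(nfa.evaluate("fever", ["antipyretic"]))  # {'recovered', 'persistent_fever'}
```

Nondeterminism is what lets a single treatment map to several plausible outcomes, which is the property P-Med's encrypted evaluation must handle without revealing which branch was taken.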



Related Research

As machine learning becomes a practice and commodity, numerous cloud-based services and frameworks are provided to help customers develop and deploy machine learning applications. While it is prevalent to outsource model training and serving tasks in the cloud, it is important to protect the privacy of sensitive samples in the training dataset and prevent information leakage to untrusted third parties. Past work has shown that a malicious machine learning service provider or end user can easily extract critical information about the training samples from the model parameters, or even just from the model outputs. In this paper, we propose a novel and generic methodology to preserve the privacy of training data in machine learning applications. Specifically, we introduce an obfuscate function and apply it to the training data before feeding them to the model training task. This function adds random noise to existing samples, or augments the dataset with new samples. By doing so, sensitive information about the properties of individual samples, or the statistical properties of a group of samples, is hidden. Meanwhile, the model trained from the obfuscated dataset can still achieve high accuracy. With this approach, customers can safely disclose the data or models to third-party providers or end users without needing to worry about data privacy. Our experiments show that this approach can effectively defeat four existing types of machine learning privacy attacks at negligible accuracy cost.
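The obfuscate function is described only functionally. A minimal sketch of the noise-addition and augmentation variants, assuming numeric feature matrices in NumPy (the function name and parameters are hypothetical):

```python
import numpy as np

def obfuscate(samples: np.ndarray, noise_scale: float = 0.1,
              augment_factor: int = 0) -> np.ndarray:
    """Hypothetical sketch of the obfuscate step: perturb each sample with
    random noise and optionally augment the set with extra noisy copies,
    hiding per-sample properties while keeping gross statistics usable."""
    noisy = samples + np.random.normal(0.0, noise_scale, samples.shape)
    if augment_factor > 0:
        extra = np.repeat(samples, augment_factor, axis=0)
        extra = extra + np.random.normal(0.0, noise_scale, extra.shape)
        noisy = np.vstack([noisy, extra])
    return noisy

# Training would then consume obfuscate(X_train) in place of X_train.
```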
Han Qiu, Meikang Qiu, Meiqin Liu (2019)
The recent spate of cyber security attacks has compromised end users' data safety and privacy in Medical Cyber-Physical Systems (MCPS). Traditional standard encryption algorithms for data protection are designed from the viewpoint of system architecture rather than the viewpoint of end users. Because such encryption algorithms transfer the protection of the data to the protection of the keys, data safety and privacy are compromised once the key is exposed. In this paper, we propose a secure data storage and sharing method consisting of a selective encryption algorithm combined with fragmentation and dispersion, which protects data safety and privacy even when both the transmission media (e.g., cloud servers) and the keys are compromised. This method is based on a user-centric design that protects the data on a trusted device, such as the end user's smartphone, and lets the end user control access for data sharing. We also evaluate the performance of the algorithm on a smartphone platform to demonstrate its efficiency.
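The abstract does not spell out the algorithm; the following Python sketch conveys the general fragmentation-and-dispersion idea under stated assumptions (XOR masking stands in for the paper's selective encryption, and all names are hypothetical):

```python
import os

def fragment_and_disperse(data: bytes, n_fragments: int = 4):
    """Hypothetical sketch (not the paper's algorithm): split the record
    into fragments, selectively protect one fragment with XOR masking as
    a stand-in for selective encryption, keep that fragment's mask on the
    trusted device, and disperse the remaining fragments across different
    servers so that no single party holds enough to reconstruct it."""
    size = -(-len(data) // n_fragments)          # ceiling division
    fragments = [data[i * size:(i + 1) * size] for i in range(n_fragments)]
    mask = os.urandom(size)                      # stays on the trusted device
    protected = bytes(b ^ m for b, m in zip(fragments[0], mask))
    cloud_fragments = fragments[1:]              # dispersed to remote servers
    return protected, mask, cloud_fragments

protected, mask, cloud = fragment_and_disperse(b"patient record contents")
```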
A Logistics Information System (LIS) is an interactive system that provides information for logistics managers to monitor and track logistics business. In recent years, with the rise of online shopping, LIS has become increasingly important. However, due to the lack of effective protection of personal information, privacy has become the issue users are most concerned about. Several data breaches in LIS have exposed users' personal information, including addresses, phone numbers, transaction details, etc. In this paper, to protect users' privacy in LIS, a privacy-preserving LIS with traceability (PPLIST) is proposed by combining multi-signatures with pseudonyms. In our PPLIST scheme, to protect privacy, each user can generate and use different pseudonyms in different logistics services. The processing of each logistics transaction is recorded and unforgeable. Additionally, if the logistics information is abnormal, a trace party can de-anonymize users and find their real identities. Therefore, our PPLIST effectively balances privacy and traceability.
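The per-service pseudonym idea can be illustrated with a small Python sketch; HMAC-derived identifiers stand in for the paper's multi-signature construction, and the tracing step is only indicated in a comment (all names are hypothetical):

```python
import hashlib
import hmac
import os

class PseudonymUser:
    """Hypothetical sketch: per-service pseudonyms via HMAC, standing in
    for PPLIST's multi-signature construction. Different services see
    unlinkable identifiers derived from one user-held secret."""
    def __init__(self, real_id: str):
        self.real_id = real_id
        self.secret = os.urandom(32)   # never leaves the user's device

    def pseudonym_for(self, service_id: str) -> str:
        return hmac.new(self.secret, service_id.encode(),
                        hashlib.sha256).hexdigest()[:16]

# A full scheme would additionally escrow a (pseudonym -> identity)
# record readable only by the trace party, enabling de-anonymization
# when a shipment is abnormal.
alice = PseudonymUser("alice")
print(alice.pseudonym_for("courier-A"))   # unlinkable to courier-B's view
print(alice.pseudonym_for("courier-B"))
```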
Thomas Hanneforth (2013)
Algorithms for (nondeterministic) finite-state tree automata (FTAs) are often tested on random FTAs in which all internal transitions are equiprobable. The run-time results obtained in this manner are usually overly optimistic, as most such randomly generated FTAs are trivial in the sense that the number of states of an equivalent minimal deterministic FTA is extremely small. It is demonstrated that nontrivial random FTAs are obtained only for a narrow band of transition probabilities. Moreover, an analytical derivation yields a formula approximating the transition probability that produces the most complex random FTAs, which should be used in experiments.
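As a rough illustration of the random-automaton model being criticized (simplified to string NFAs rather than tree automata, with hypothetical names), each possible transition is included independently with probability p:

```python
import random
from itertools import product

def random_nfa(n_states, alphabet, p, seed=0):
    """Hypothetical sketch of the equiprobable-transition random model,
    shown for string NFAs: every possible transition (q, a) -> q2 is
    included independently with probability p. The paper's finding is
    that only a narrow band of p yields nontrivial automata."""
    rng = random.Random(seed)
    transitions = {
        (q, a): {q2 for q2 in range(n_states) if rng.random() < p}
        for q, a in product(range(n_states), alphabet)
    }
    accepting = {q for q in range(n_states) if rng.random() < 0.5}
    return transitions, accepting

# Sweep p, determinize and minimize each sample, and record the minimal
# DFA size to locate the band of complex instances the paper describes.
trans, acc = random_nfa(5, "ab", 0.3)
```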
Mobile crowdsensing (MCS) is an emerging sensing data collection pattern with scalability, low deployment cost, and distributed characteristics. Traditional MCS systems suffer from privacy concerns and from the difficulty of fair reward distribution. Moreover, existing privacy-preserving MCS solutions usually focus on the privacy protection of data collection rather than that of data processing. To tackle these problems, in this paper we integrate federated learning (FL) into MCS and propose a privacy-preserving MCS system called CrowdFL. Specifically, in order to protect privacy, participants locally process sensing data via federated learning and only upload encrypted training models. In particular, a privacy-preserving federated averaging algorithm is proposed to average the encrypted training models. To reduce the computation and communication overhead of handling dropped participants, discard and retransmission strategies are designed. Besides, a privacy-preserving posted-pricing incentive mechanism is designed, which tries to break the dilemma between privacy protection and data evaluation. Theoretical analysis and experimental evaluation on a practical MCS application demonstrate that the proposed CrowdFL can effectively protect participants' privacy and is feasible and efficient.
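The abstract does not fix the encryption scheme; the following Python sketch conveys the averaging-without-disclosure idea using pairwise additive masking as a stand-in for CrowdFL's encrypted aggregation (integer updates and all names are hypothetical simplifications):

```python
import random

def masked_average(client_updates, modulus=2**32):
    """Hypothetical sketch: privacy-preserving averaging via pairwise
    additive masking, standing in for CrowdFL's encrypted aggregation.
    Each client's upload is blinded by random masks that cancel in the
    sum, so the server learns only the aggregate, never an individual
    update. Integer updates are assumed for the modular arithmetic."""
    n = len(client_updates)
    masks = [[random.randrange(modulus) for _ in range(n)] for _ in range(n)]
    uploads = []
    for i, update in enumerate(client_updates):
        # summed over all clients, sent masks minus received masks is zero
        blind = sum(masks[i]) - sum(row[i] for row in masks)
        uploads.append((update + blind) % modulus)
    total = sum(uploads) % modulus
    return total / n

print(masked_average([10, 20, 30]))  # 20.0 -- the true average
```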