Privacy-accuracy trade-offs in noisy digital exposure notifications

Published by: Yun William Yu
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





Since the global spread of COVID-19 began to overwhelm the attempts of governments to conduct manual contact-tracing, there has been much interest in using the power of mobile phones to automate the contact-tracing process through the development of exposure notification applications. The rough idea is simple: use Bluetooth or other data-exchange technologies to record contacts between users, enable users to report positive diagnoses, and alert users who have been exposed to sick users. Of course, there are many privacy concerns associated with this idea. Much of the work in this area has been concerned with designing mechanisms for tracing contacts and alerting users that do not leak additional information about users beyond the existence of exposure events. However, although designing practical protocols is of crucial importance, it is essential to realize that notifying users about exposure events may itself leak confidential information (e.g., that a particular contact has been diagnosed). Luckily, while digital contact tracing is a relatively new task, the generic problem of privacy and data disclosure has been studied for decades. Indeed, the framework of differential privacy further permits provable query privacy by adding random noise. In this article, we translate two results from statistical privacy and social recommendation algorithms to exposure notification. We thus prove some naive bounds on the degree to which accuracy must be sacrificed if exposure notification frameworks are to be made more private through the injection of noise.
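
To make the accuracy cost concrete, below is a minimal Python sketch of one standard noise-injection technique (the Laplace mechanism from differential privacy) applied to an exposure-count query. It illustrates the generic trade-off only; the function names, threshold, and parameters are invented for this example and are not the paper's specific protocol or bounds.

```python
import numpy as np

def noisy_exposure_count(true_count, epsilon, rng):
    """Perturb the number of diagnosed contacts with Laplace noise.

    Adding or removing one diagnosed contact changes the count by at
    most 1, so noise of scale 1/epsilon yields epsilon-differential
    privacy for this single query.
    """
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

def should_notify(true_count, epsilon, rng, threshold=0.5):
    """Alert the user when the noisy count clears the threshold.

    Stronger privacy (smaller epsilon) means larger noise, so unexposed
    users are sometimes alerted and exposed users are sometimes missed:
    the privacy-accuracy trade-off the paper bounds.
    """
    return noisy_exposure_count(true_count, epsilon, rng) > threshold

# Estimate the false-alert rate for a user with zero diagnosed contacts.
rng = np.random.default_rng(seed=0)
false_alerts = sum(should_notify(0, epsilon=1.0, rng=rng) for _ in range(10_000))
print(f"false-alert rate at epsilon=1.0: {false_alerts / 10_000:.3f}")
```

Tightening epsilon widens the noise distribution, which simultaneously raises the false-alert rate and lowers the detection rate; the bounds in the paper quantify how quickly this accuracy loss must grow.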



Read also

Many socially valuable activities depend on sensitive information, such as medical research, public health policies, political coordination, and personalized digital services. This is often posed as an inherent privacy trade-off: we can benefit from data analysis or retain data privacy, but not both. Across several disciplines, a vast amount of effort has been directed toward overcoming this trade-off to enable productive uses of information without also enabling undesired misuse, a goal we term 'structured transparency'. In this paper, we provide an overview of the frontier of research seeking to develop structured transparency. We offer a general theoretical framework and vocabulary, including characterizing the fundamental components -- input privacy, output privacy, input verification, output verification, and flow governance -- and fundamental problems of copying, bundling, and recursive oversight. We argue that these barriers are less fundamental than they often appear. Recent progress in developing 'privacy-enhancing technologies' (PETs), such as secure computation and federated learning, may substantially reduce lingering use-misuse trade-offs in a number of domains. We conclude with several illustrations of structured transparency -- in open research, energy management, and credit scoring systems -- and a discussion of the risks of misuse of these tools.
The growing use of machine learning in policy and social-impact settings has raised concerns about fairness implications, especially for racial minorities. These concerns have generated considerable interest among machine learning and artificial intelligence researchers, who have developed new methods and established theoretical bounds for improving fairness, focusing on the source data, regularization and model training, or post-hoc adjustments to model scores. However, little work has studied the practical trade-offs between fairness and accuracy in real-world settings to understand how these bounds and methods translate into policy choices and impact on society. Our empirical study fills this gap by investigating the impact of mitigating disparities on accuracy, focusing on the common context of using machine learning to inform benefit allocation in resource-constrained programs across education, mental health, criminal justice, and housing safety. Here we describe applied work in which we find fairness-accuracy trade-offs to be negligible in practice. In each setting studied, by explicitly focusing on achieving equity and using our proposed post-hoc disparity mitigation methods, fairness was substantially improved without sacrificing accuracy. This observation was robust across the policy contexts studied, the scale of resources available for intervention, time, and the relative size of the protected groups. These empirical results challenge a commonly held assumption that reducing disparities either requires accepting an appreciable drop in accuracy or the development of novel, complex methods, making reducing disparities in these applications more practical.
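
The abstract does not spell out the authors' post-hoc mitigation method, but a common post-hoc adjustment in resource-constrained allocation is to choose per-group score cutoffs under a fixed intervention budget. The Python sketch below (hypothetical names, uniform toy data) illustrates that general idea, not the paper's own procedure.

```python
import numpy as np

def group_thresholds(scores, groups, k):
    """Choose per-group score cutoffs so each group receives an equal
    share of k intervention slots -- one simple post-hoc mitigation,
    not necessarily the method used in the paper."""
    unique = np.unique(groups)
    per_group = k // len(unique)
    cutoffs = {}
    for g in unique:
        ranked = np.sort(scores[groups == g])[::-1]
        # Cutoff at the per_group-th highest score within the group.
        cutoffs[g] = ranked[min(per_group, len(ranked)) - 1]
    return cutoffs

# Toy data: 1000 individuals, two unevenly sized groups, 100 slots.
rng = np.random.default_rng(seed=1)
scores = rng.uniform(size=1000)
groups = rng.choice(["A", "B"], size=1000, p=[0.7, 0.3])
print(group_thresholds(scores, groups, k=100))
```

Because the adjustment touches only the final cutoffs, the underlying model and its scores are unchanged, which is why such methods can reduce disparity with little or no accuracy cost in practice.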
The purpose of Secure Multi-Party Computation is to enable protocol participants to compute a public function of their private inputs while keeping their inputs secret, without resorting to any trusted third party. However, opening the public output of such computations inevitably reveals some information about the private inputs. We propose a measure generalising both Rényi entropy and g-entropy so as to quantify this information leakage. In order to control and restrain such information flows, we introduce the notion of function substitution, which replaces the computation of a function that reveals sensitive information with that of an approximate function. We exhibit theoretical bounds for the privacy gains that this approach provides and experimentally show that this enhances the confidentiality of the inputs while controlling the distortion of computed output values. Finally, we investigate the inherent compromise between accuracy of computation and privacy of inputs and we demonstrate how to realise such optimal trade-offs.
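
As a toy illustration of function substitution, the Python sketch below compares how much a brute-force observer learns about two parties' private inputs from an exact sum versus a coarsened substitute. The Shannon-entropy leakage measure used here is a simplified stand-in for the paper's Rényi/g-entropy generalisation, and all names are invented.

```python
import math
from itertools import product

def leakage_bits(f, domain, n_parties):
    """Mutual information (in bits) between uniformly random joint
    inputs and the public output f(inputs), by brute force over a tiny
    domain -- a Shannon proxy for the paper's entropy measure."""
    inputs = list(product(domain, repeat=n_parties))
    buckets = {}
    for x in inputs:
        buckets.setdefault(f(x), []).append(x)
    prior = math.log2(len(inputs))
    # Expected entropy of the inputs after the output is revealed.
    posterior = sum(len(b) * math.log2(len(b)) for b in buckets.values()) / len(inputs)
    return prior - posterior

exact = lambda xs: sum(xs)        # the original public function
coarse = lambda xs: sum(xs) // 4  # substituted, coarser approximation

domain = range(8)  # each party holds a private value in 0..7
print(f"exact sum leaks  {leakage_bits(exact, domain, 2):.2f} bits")
print(f"coarse sum leaks {leakage_bits(coarse, domain, 2):.2f} bits")
```

The coarser substitute maps many input combinations to the same output, so the observer's posterior uncertainty stays higher: exactly the accuracy-for-privacy exchange the paper optimises.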
During a pandemic, contact tracing is an essential tool to drive down the infection rate within a population. To accelerate the laborious manual contact tracing process, digital contact tracing (DCT) tools can track contact events transparently and privately by using the sensing and signaling capabilities of the ubiquitous cell phone. However, an effective DCT must not only preserve user privacy but also augment the existing manual contact tracing process. Indeed, not every member of a population may own a cell phone or have a DCT app installed and enabled. We present KHOVID to fulfill the combined goal of manual contact-tracing interoperability and DCT user privacy. At KHOVID's core is a privacy-friendly mechanism to encode user trajectories using geolocation data. Manual contact tracing data can be integrated through the same geolocation format. The accuracy of the geolocation data from DCT is improved using Bluetooth proximity detection, and we propose a novel method to encode Bluetooth ephemeral IDs. This contribution describes the detailed design of KHOVID; presents a prototype implementation including an app and server software; and presents a validation based on simulation and field experiments. We also compare the strengths of KHOVID with those of other, earlier DCT proposals.
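
KHOVID's actual ephemeral-ID encoding is described as novel and is not given in the abstract; the Python sketch below shows only the generic rotating-identifier pattern (a per-day key plus an HMAC over the current time interval) that such Bluetooth schemes typically build on. All names and parameters here are assumptions for illustration.

```python
import hashlib
import hmac
import os
import time

ROTATION_SECONDS = 15 * 60  # broadcast a fresh ID every 15 minutes (assumed)

def daily_key():
    """Random per-day key; it stays on the device and is only shared
    (e.g. alongside the encoded trajectory) after a positive diagnosis."""
    return os.urandom(32)

def ephemeral_id(day_key, now=None):
    """Derive the currently broadcast ID from the day key and the
    rotation interval; without the key, successive IDs from the same
    phone are unlinkable to an eavesdropper."""
    interval = int((now if now is not None else time.time()) // ROTATION_SECONDS)
    return hmac.new(day_key, interval.to_bytes(8, "big"), hashlib.sha256).digest()[:16]

key = daily_key()
print(ephemeral_id(key).hex())  # 16-byte ID advertised over Bluetooth
```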
Contact tracing is an essential tool for public health officials and local communities to fight the spread of novel diseases, such as for the COVID-19 pandemic. The Singaporean government just released a mobile phone app, TraceTogether, that is designed to assist health officials in tracking down exposures after an infected individual is identified. However, there are important privacy implications of the existence of such tracking apps. Here, we analyze some of those implications and discuss ways of ameliorating the privacy concerns without decreasing usefulness to public health. We hope in writing this document to ensure that privacy is a central feature of conversations surrounding mobile contact tracing apps and to encourage community efforts to develop alternative effective solutions with stronger privacy protection for the users. Importantly, though we discuss potential modifications, this document is not meant as a formal research paper, but instead is a response to some of the privacy characteristics of direct contact tracing apps like TraceTogether and an early-stage Request for Comments to the community. Date written: 2020-03-24. Minor correction: 2020-03-30.