
Dynamic Graph Streaming Algorithm for Digital Contact Tracing

Added by Gautam Mahapatra
Publication date: 2020
Language: English





Digital contact tracing of an infected person, testing of the contacted persons for possible infection, and isolation play a crucial role in alleviating an outbreak. Here, we design a dynamic graph streaming algorithm that can trace contacts under the control of the Public Health Authorities (PHA). Our algorithm receives proximity data from mobile devices as a contact data stream and uses a sliding window model to construct a dynamic contact graph sketch. Prominently, we introduce the edge label of the contact graph as a binary contact vector, which acts as a sliding window and holds the latest D days (the incubation period) of temporal social interactions. Notably, the algorithm prepares the direct and indirect (multilevel) contact lists from the contact graph sketch for a given set of infected persons. Finally, the algorithm also uses a disjoint-set data structure to construct the infection pathways for the trace list. The present study offers the design of algorithms, with their underlying data structures, for digital contact tracing based on the proximity data produced by Bluetooth-enabled mobile devices. Our analysis reveals that, for COVID-19 close-contact parameters, maintaining the contact graph of ten million users with 14 days of close-contact data on the PHA server takes 55 gigabytes of memory, and the time to prepare the contact list for a given set of infected persons depends on the size of the infected list. Our centralized digital contact tracing framework is also applicable to other relevant diseases parameterized by an incubation period and a proximity duration of contacts.
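The core data structures the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: the class and method names are invented, each D-bit contact vector is stored as a Python integer, and the disjoint-set pathway construction is omitted.

```python
from collections import defaultdict, deque

D = 14  # incubation window in days (COVID-19 parameter from the abstract)

class ContactGraphSketch:
    """Adjacency map whose edge labels are D-bit binary contact vectors.
    Bit i set on edge (u, v) means the pair was in close contact i days ago."""

    def __init__(self, days=D):
        self.days = days
        self.edges = defaultdict(int)  # (u, v) with u < v -> contact vector

    def _key(self, u, v):
        return (u, v) if u < v else (v, u)

    def record_contact(self, u, v):
        # set today's bit (bit 0) for this pair
        self.edges[self._key(u, v)] |= 1

    def advance_day(self):
        # slide the window: shift every contact vector by one day and
        # drop bits older than the incubation period
        mask = (1 << self.days) - 1
        stale = []
        for k in self.edges:
            self.edges[k] = (self.edges[k] << 1) & mask
            if self.edges[k] == 0:
                stale.append(k)
        for k in stale:
            del self.edges[k]

    def neighbors(self, u):
        for (a, b), vec in self.edges.items():
            if vec and a == u:
                yield b
            elif vec and b == u:
                yield a

    def trace(self, infected, levels=2):
        """BFS from the infected set: level 1 lists direct contacts,
        levels >= 2 list indirect (multilevel) contacts."""
        seen = set(infected)
        frontier = deque((p, 0) for p in infected)
        contacts = []
        while frontier:
            p, lvl = frontier.popleft()
            if lvl == levels:
                continue
            for q in self.neighbors(p):
                if q not in seen:
                    seen.add(q)
                    contacts.append((q, lvl + 1))
                    frontier.append((q, lvl + 1))
        return contacts
```

A PHA server would call `record_contact` for each proximity event in the stream, `advance_day` once per day, and `trace` with the current infected list; the returned pairs tag each contact with its level of separation.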




The spread of an infectious disease through a population can be modeled using a network or a graph. In digital advertising, internet device graphs are graph data sets that organize identifiers produced by mobile phones, PCs, TVs, and tablets as they access media on the internet. Characterized by immense scale, they have become ubiquitous as they enable targeted advertising, content customization and tracking. This paper posits that internet device graphs, in particular those based on IP colocation, can provide significant utility in predicting and modeling the spread of infectious disease. Starting the week of March 16th, 2020, in the United States, many individuals began to "shelter in place" as schools and workplaces across the nation closed because of the COVID-19 pandemic. This paper quantifies the effect of the shelter-in-place orders on a large scale internet device graph with more than a billion nodes by studying the graph before and after orders went into effect. The effects are clearly visible. The structure of the graph suggests behavior least conducive to transmission of infection occurred in the US between April 12th and 19th, 2020. This paper also discusses the utility of device graphs for i) contact tracing, ii) prediction of "hot spots", iii) simulation of infectious disease spread, and iv) delivery of advertisement-based warnings to potentially exposed individuals. The paper also posits an overarching question: can systems and datasets amassed by entities in the digital ad ecosystem aid in the fight against COVID-19?
During a pandemic, contact tracing is an essential tool to drive down the infection rate within a population. To accelerate the laborious manual contact tracing process, digital contact tracing (DCT) tools can track contact events transparently and privately by using the sensing and signaling capabilities of the ubiquitous cell phone. However, an effective DCT tool must not only preserve user privacy but also augment the existing manual contact tracing process. Indeed, not every member of a population may own a cell phone or have a DCT app installed and enabled. We present KHOVID to fulfill the combined goal of manual contact-tracing interoperability and DCT user privacy. At KHOVID's core is a privacy-friendly mechanism to encode user trajectories using geolocation data. Manual contact tracing data can be integrated through the same geolocation format. The accuracy of the geolocation data from DCT is improved using Bluetooth proximity detection, and we propose a novel method to encode Bluetooth ephemeral IDs. This contribution describes the detailed design of KHOVID; presents a prototype implementation including an app and server software; and presents a validation based on simulation and field experiments. We also compare the strengths of KHOVID with those of other, earlier DCT proposals.
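KHOVID's actual trajectory encoding is not detailed in this summary. As a loose, hypothetical illustration of grid-based geolocation encoding in general (not necessarily KHOVID's scheme), one could quantize each (latitude, longitude, timestamp) sample into a coarse cell/time-slot token and detect co-location by token overlap:

```python
def encode_point(lat, lon, ts, cell_deg=0.001, slot_sec=300):
    """Quantize a (lat, lon, unix-timestamp) sample into a grid-cell /
    time-slot token. A cell_deg of 0.001 is roughly 100 m of latitude;
    both parameters are illustrative, not KHOVID's actual values."""
    return (int(lat / cell_deg), int(lon / cell_deg), ts // slot_sec)

def trajectories_colocated(traj_a, traj_b):
    # Two encoded trajectories indicate co-location if any tokens coincide.
    tokens_a = {encode_point(lat, lon, ts) for lat, lon, ts in traj_a}
    tokens_b = {encode_point(lat, lon, ts) for lat, lon, ts in traj_b}
    return bool(tokens_a & tokens_b)
```

A real system would additionally hash or encrypt the tokens before upload, since raw cell identifiers leak location.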
We present new lower bounds that show that a polynomial number of passes are necessary for solving some fundamental graph problems in the streaming model of computation. For instance, we show that any streaming algorithm that finds a weighted minimum $s$-$t$ cut in an $n$-vertex undirected graph requires $n^{2-o(1)}$ space unless it makes $n^{\Omega(1)}$ passes over the stream. To prove our lower bounds, we introduce and analyze a new four-player communication problem that we refer to as the hidden-pointer chasing problem. This is a problem in the spirit of the standard pointer chasing problem, with the key difference that the pointers in this problem are hidden to players, and finding each one of them requires solving another communication problem, namely the set intersection problem. Our lower bounds for graph problems are then obtained by reductions from the hidden-pointer chasing problem. Our hidden-pointer chasing problem appears flexible enough to find other applications and is therefore interesting in its own right. To showcase this, we further present an interesting application of this problem beyond streaming algorithms. Using a reduction from hidden-pointer chasing, we prove that any algorithm for submodular function minimization needs to make $n^{2-o(1)}$ value queries to the function unless it has a polynomial degree of adaptivity.
Daniel Tang, 2020
Many countries are currently gearing up to use smart-phone apps to perform contact tracing as part of the effort to manage the COVID-19 pandemic and prevent resurgences of the disease after the initial outbreak. With the announcement of the Apple/Google partnership to introduce contact-tracing functionality to iOS and Android, it seems likely that this will be adopted in many countries. An important part of the functionality of the app will be to decide whether a person should be advised to self-isolate, be tested or end isolation. However, the privacy-preserving nature of the Apple/Google contact tracing algorithm means that centralised curation of these decisions is not possible, so each phone must use its own risk model to inform decisions. Ideally, the risk model should use Bayesian inference to decide the best course of action given the test results of the user and those of other users. Here we present a decentralised algorithm that estimates the Bayesian posterior probability of viral transmission events and evaluates when a user should be notified, tested or released from isolation while preserving user privacy. The algorithm also allows the disease models on the phones to learn from everyone's contact-tracing data and will allow epidemiologists to better understand the dynamics of the disease. The algorithm is a message passing algorithm, based on belief propagation, so each smart-phone can be used to execute a small part of the algorithm without releasing any sensitive information. In this way, the network of all participating smart-phones forms a distributed computation device that performs Bayesian inference, informs each user when they should start/end isolation or be tested, and learns about the disease from users' data.
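As a single-node illustration only (not Tang's belief-propagation algorithm, which passes messages between phones), the per-user Bayesian update could look like this, with an assumed per-contact transmission probability:

```python
def posterior_infected(prior, contact_risks, p_transmit=0.1):
    """Posterior probability that a user is infected, given a prior and a
    list of their contacts' infection probabilities, treating exposure
    events as independent. p_transmit is an illustrative parameter."""
    p_uninfected = 1.0 - prior
    for p_source in contact_risks:
        # each contact independently fails to transmit with this probability
        p_uninfected *= 1.0 - p_source * p_transmit
    return 1.0 - p_uninfected
```

The full decentralised scheme replaces these fixed inputs with messages exchanged between phones, so each contact's probability is itself a posterior refined over time.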
We study space-pass tradeoffs in graph streaming algorithms for parameter estimation and property testing problems such as estimating the size of maximum matchings and maximum cuts, weight of minimum spanning trees, or testing if a graph is connected or cycle-free versus being far from these properties. We develop a new lower bound technique that proves that for many problems of interest, including all the above, obtaining a $(1+\epsilon)$-approximation requires either $n^{\Omega(1)}$ space or $\Omega(1/\epsilon)$ passes, even on highly restricted families of graphs such as bounded-degree planar graphs. For several of these problems, this bound matches those of existing algorithms and is thus (asymptotically) optimal. Our results considerably strengthen prior lower bounds even for arbitrary graphs: starting from the influential work of [Verbin, Yu; SODA 2011], there has been a plethora of lower bounds for single-pass algorithms for these problems; however, the only multi-pass lower bounds, proven very recently in [Assadi, Kol, Saxena, Yu; FOCS 2020], rule out sublinear-space algorithms with exponentially smaller $o(\log(1/\epsilon))$ passes for these problems. One key ingredient of our proofs is a simple streaming XOR Lemma, a generic hardness amplification result, that we prove: informally speaking, if a $p$-pass $s$-space streaming algorithm can only solve a decision problem with advantage $\delta > 0$ over random guessing, then it cannot solve the XOR of $\ell$ independent copies of the problem with advantage much better than $\delta^{\ell}$. This result can be of independent interest and useful for other streaming lower bounds as well.