Let $S$ be a set of $n$ sites, each associated with a point in $\mathbb{R}^2$ and a radius $r_s$, and let $\mathcal{D}(S)$ be the disk graph on $S$. We consider the problem of designing data structures that maintain the connectivity structure of $\mathcal{D}(S)$ while allowing the insertion and deletion of sites. For unit disk graphs we describe a data structure that has $O(\log^2 n)$ amortized update time and $O((\log n)/(\log\log n))$ amortized query time. For disk graphs where the ratio $\Psi$ between the largest and smallest radius is bounded, we consider the decremental and the incremental case separately, in addition to the fully dynamic case. In the fully dynamic case we achieve amortized $O(\Psi \lambda_6(\log n) \log^{9} n)$ update time and $O(\log n)$ query time, where $\lambda_s(n)$ is the maximum length of a Davenport-Schinzel sequence of order $s$ on $n$ symbols. This improves the update time of the currently best known data structure by a factor of $\Psi$, at the cost of an additional $O(\log\log n)$ factor in the query time. In the incremental case we manage to achieve a logarithmic dependency on $\Psi$, with a data structure with $O(\alpha(n))$ query time and $O(\log\Psi\, \lambda_6(\log n) \log^{9} n)$ update time. For the decremental setting we first develop a new dynamic data structure that allows us to maintain two sets $B$ and $P$ of disks, such that upon the deletion of a disk from $B$ we can efficiently report all disks in $P$ that no longer intersect any disk of $B$. Having this data structure at hand, we obtain decremental data structures with an amortized query time of $O((\log n)/(\log\log n))$ supporting $m$ deletions in $O((n\log^{5} n + m \log^{9} n) \lambda_6(\log n) + n\log\Psi\log^4 n)$ overall time for bounded radius ratio $\Psi$ and $O((n\log^{6} n + m \log^{10} n) \lambda_6(\log n))$ overall time for general disk graphs.
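As a point of reference for the unit disk case, the following is a minimal, insertion-only sketch (not the data structure of the abstract) that combines the standard grid bucketing for unit disks with a union-find structure; all class and method names here are illustrative.

```python
import math
from collections import defaultdict

class IncrementalUnitDiskConnectivity:
    """Insert-only connectivity for unit disk graphs (radius-1 disks intersect
    iff their centers are at distance <= 2). A simple grid + union-find sketch,
    not the fully dynamic structure described in the abstract."""

    def __init__(self):
        self.grid = defaultdict(list)   # grid cell -> list of site ids
        self.pts = {}                   # site id -> (x, y)
        self.parent = {}                # union-find parent pointers

    def _find(self, s):
        while self.parent[s] != s:
            self.parent[s] = self.parent[self.parent[s]]  # path halving
            s = self.parent[s]
        return s

    def _union(self, a, b):
        ra, rb = self._find(a), self._find(b)
        if ra != rb:
            self.parent[ra] = rb

    def insert(self, s, x, y):
        self.pts[s] = (x, y)
        self.parent[s] = s
        # cells of side 2: any intersecting disk lies in the 3x3 neighborhood
        cx, cy = int(math.floor(x / 2)), int(math.floor(y / 2))
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for t in self.grid[(cx + dx, cy + dy)]:
                    tx, ty = self.pts[t]
                    if (x - tx) ** 2 + (y - ty) ** 2 <= 4:  # disks intersect
                        self._union(s, t)
        self.grid[(cx, cy)].append(s)

    def connected(self, s, t):
        return self._find(s) == self._find(t)
```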
We give an $(\varepsilon,\delta)$-differentially private algorithm for the multi-armed bandit (MAB) problem in the shuffle model with a distribution-dependent regret of $O\left(\left(\sum_{a\in [k]:\Delta_a>0}\frac{\log T}{\Delta_a}\right)+\frac{k\sqrt{\log\frac{1}{\delta}}\log T}{\varepsilon}\right)$, and a distribution-independent regret of $O\left(\sqrt{kT\log T}+\frac{k\sqrt{\log\frac{1}{\delta}}\log T}{\varepsilon}\right)$, where $T$ is the number of rounds, $\Delta_a$ is the suboptimality gap of arm $a$, and $k$ is the total number of arms. Our upper bound almost matches the regret of the best known algorithms for the centralized model, and significantly outperforms the best known algorithm in the local model.
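To make the setting concrete, here is a toy batched elimination skeleton in which the learner only sees a noisy per-arm sum of rewards for each batch; Laplace noise stands in for a shuffle-model privatizer. The batch sizes, noise mechanism, and confidence radii of the abstract's algorithm are not reproduced, and every name here is illustrative.

```python
import numpy as np

def batched_private_elimination(pull, k, T, eps, batch=100, seed=0):
    """Toy batched successive-elimination skeleton: the learner only ever sees
    a noisy per-arm sum of rewards for each batch (Laplace noise stands in for
    a shuffle-model privatizer; rewards are assumed to lie in [0, 1]). This
    illustrates the general template only; it is not the abstract's algorithm,
    and its batch sizes, noise, and confidence radii are ad hoc."""
    rng = np.random.default_rng(seed)
    active = list(range(k))
    means, counts = np.zeros(k), np.zeros(k)
    t = 0
    while t < T:
        for a in active:
            n = min(batch, T - t)
            s = sum(pull(a) for _ in range(n)) + rng.laplace(scale=1.0 / eps)
            means[a] = (means[a] * counts[a] + s) / (counts[a] + n)
            counts[a] += n
            t += n
            if t >= T:
                break
        # drop arms whose optimistic estimate falls below the best pessimistic one
        rad = np.sqrt(np.log(max(T, 2)) / np.maximum(counts, 1)) \
              + 1.0 / (eps * np.maximum(counts, 1))
        best_lower = max(means[a] - rad[a] for a in active)
        active = [a for a in active if means[a] + rad[a] >= best_lower]
    return max(active, key=lambda a: means[a])
```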
We study the greedy-based online algorithm for edge-weighted matching with (one-sided) vertex arrivals in bipartite graphs, and edge arrivals in general graphs. This algorithm was first studied more than a decade ago by Korula and Pal for the bipartite case in the random-order model. While the weighted bipartite matching problem is solved in the random-order model, this is not the case in recent and exciting online models in which the online player is provided with a sample, and the arrival order is adversarial. The greedy-based algorithm is arguably the most natural and practical algorithm to be applied in these models. Despite its simplicity and appeal, and despite being studied in multiple works, the greedy-based algorithm was not fully understood in any of the studied online models, and its actual performance remained an open question for more than a decade. We provide a thorough analysis of the greedy-based algorithm in several online models. For vertex arrivals in bipartite graphs, we characterize the exact competitive ratio of this algorithm in the random-order model, for any arrival order of the vertices subsequent to the sampling phase (adversarial and random orders in particular). We use it to derive a tight analysis in the recent adversarial-order model with a sample (AOS model) for any sample size, providing the first result in this model beyond the simple secretary problem. Then, we generalize and strengthen the black-box method of converting results in the random-order model to single-sample prophet inequalities, and use it to derive the state-of-the-art single-sample prophet inequality for the problem. Finally, we use our new techniques to analyze the greedy-based algorithm for edge arrivals in general graphs and derive results in all the mentioned online models. In this case as well, we improve upon the state-of-the-art single-sample prophet inequality.
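For concreteness, the following is a schematic sketch of a sample-then-greedy rule of the kind discussed here, for one-sided vertex arrivals: during the sampling phase, arriving vertices are only observed; afterwards, each arriving vertex is matched to its partner in a greedy matching of all edges seen so far, if that partner is still free. Tie-breaking, the sample size, and other details of the analyzed algorithm are simplified, and all function names are illustrative.

```python
def greedy_matching(edges):
    """Offline greedy matching: scan edges by decreasing weight and add an
    edge whenever both endpoints are still free. Edges are (weight, u, v)
    with u an online vertex and v an offline vertex; returns {u: v}."""
    match_online, used_offline = {}, set()
    for w, u, v in sorted(edges, key=lambda e: e[0], reverse=True):
        if u not in match_online and v not in used_offline:
            match_online[u] = v
            used_offline.add(v)
    return match_online


def sample_then_greedy(arrivals, sample_size):
    """arrivals: list of (online_vertex, [(weight, offline_vertex), ...]) in
    arrival order. The first sample_size vertices are only observed; each
    later vertex u is matched to its partner in the greedy matching of all
    edges seen so far, provided that partner is still free."""
    seen_edges, taken_offline, result = [], set(), {}
    for i, (u, nbrs) in enumerate(arrivals):
        seen_edges.extend((w, u, v) for w, v in nbrs)
        if i < sample_size:
            continue                      # sampling phase: observe only
        partner = greedy_matching(seen_edges).get(u)
        if partner is not None and partner not in taken_offline:
            result[u] = partner
            taken_offline.add(partner)
    return result
```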
We study a novel variant of online finite-horizon Markov Decision Processes with adversarially changing loss functions and initially unknown dynamics. In each episode, the learner suffers the loss accumulated along the trajectory realized by the policy chosen for the episode, and observes aggregate bandit feedback: the trajectory is revealed along with the cumulative loss suffered, rather than the individual losses encountered along the trajectory. Our main result is a computationally efficient algorithm with $O(\sqrt{K})$ regret for this setting, where $K$ is the number of episodes. We establish this result via an efficient reduction to a novel bandit learning setting we call Distorted Linear Bandits (DLB), which is a variant of bandit linear optimization where actions chosen by the learner are adversarially distorted before they are committed. We then develop a computationally efficient online algorithm for DLB for which we prove an $O(\sqrt{T})$ regret bound, where $T$ is the number of time steps. Our algorithm is based on online mirror descent with a self-concordant barrier regularization that employs a novel increasing learning rate schedule.
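For orientation, the generic online-mirror-descent update referred to above takes the following form; the specific barrier, loss estimator, and increasing learning-rate schedule of the algorithm described in the abstract are not reproduced here.

```latex
\[
  x_{t+1} \;=\; \operatorname*{arg\,min}_{x \in \mathcal{K}}
     \Big\{ \eta_t \,\langle \hat g_t, x\rangle \;+\; D_R(x, x_t) \Big\},
  \qquad
  D_R(x,y) \;=\; R(x) - R(y) - \langle \nabla R(y),\, x - y\rangle ,
\]
```

Here $R$ is a self-concordant barrier for the decision set $\mathcal{K}$, $\hat g_t$ is the loss estimate for round $t$, and $\eta_1 \le \eta_2 \le \cdots$ is a nondecreasing learning-rate schedule.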
We present a streaming problem for which every adversarially-robust streaming algorithm must use polynomial space, while there exists a classical (oblivious) streaming algorithm that uses only polylogarithmic space. This is the first separation between oblivious streaming and adversarially-robust streaming, and it resolves one of the central open questions in adversarially-robust streaming.
Haim Kaplan, Jay Tenenbaum, 2021
Locality Sensitive Hashing (LSH) is an effective method of indexing a set of items to support efficient nearest neighbor queries in high-dimensional spaces. The basic idea of LSH is that similar items should produce hash collisions with higher probability than dissimilar items. We study LSH for (not necessarily convex) polygons, and use it to give efficient data structures for similar shape retrieval. Arkin et al. represent polygons by their turning function - a function which follows the angle between the polygon's tangent and the $x$-axis while traversing the perimeter of the polygon. They define the distance between polygons to be variations of the $L_p$ (for $p=1,2$) distance between their turning functions. This metric is invariant under translation, rotation and scaling (and the selection of the initial point on the perimeter), and therefore models well the intuitive notion of shape resemblance. We develop and analyze LSH near neighbor data structures for several variations of the $L_p$ distance for functions (for $p=1,2$). By applying our schemes to the turning functions of a collection of polygons we obtain efficient near neighbor LSH-based structures for polygons. To tune our structures to turning functions of polygons, we prove some new properties of these turning functions that may be of independent interest. As part of our analysis, we address the following problem, which is of independent interest: find the vertical translation of a function $f$ that is closest in $L_1$ distance to a function $g$. We prove tight bounds on the approximation guarantee obtained by the translation which is equal to the difference between the averages of $g$ and $f$.
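To illustrate the ingredients, here is a small sketch that computes a turning function and compares two turning functions in $L_1$ after the vertical shift by the difference of their averages mentioned above. The rotation- and starting-point-invariant variants of the metric are not handled, and all function names are illustrative.

```python
import math

def turning_function(poly):
    """Turning function of a simple polygon given as a counterclockwise list
    of (x, y) vertices: a step function of normalized arc length s in [0, 1]
    recording the cumulative tangent direction. Returned as a list of
    (s_start, angle) pairs, one step per edge."""
    n = len(poly)
    edges = [(poly[(i + 1) % n][0] - poly[i][0],
              poly[(i + 1) % n][1] - poly[i][1]) for i in range(n)]
    lengths = [math.hypot(dx, dy) for dx, dy in edges]
    total = sum(lengths)
    angle = math.atan2(edges[0][1], edges[0][0])
    steps, s = [(0.0, angle)], 0.0
    for i in range(1, n):
        s += lengths[i - 1] / total
        ax, ay = edges[i - 1]
        bx, by = edges[i]
        angle += math.atan2(ax * by - ay * bx, ax * bx + ay * by)  # signed turn
        steps.append((s, angle))
    return steps

def l1_after_mean_shift(f, g, grid=1000):
    """Approximate L1 distance between two turning functions after shifting f
    vertically by the difference of the averages of g and f (the translation
    whose approximation guarantee is analyzed in the abstract). Step functions
    are sampled on a uniform grid for simplicity."""
    def sample(fn, s):
        val = fn[0][1]
        for b, a in fn:
            if b <= s:
                val = a
        return val
    fs = [sample(f, (i + 0.5) / grid) for i in range(grid)]
    gs = [sample(g, (i + 0.5) / grid) for i in range(grid)]
    shift = sum(gs) / grid - sum(fs) / grid
    return sum(abs((fv + shift) - gv) for fv, gv in zip(fs, gs)) / grid
```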
We revisit one of the most basic and widely applicable techniques in the literature of differential privacy - the sparse vector technique [Dwork et al., STOC 2009]. This simple algorithm privately tests whether the value of a given query on a database is close to what we expect it to be. It allows asking an unbounded number of queries as long as the answers are close to what we expect, and halts following the first query for which this is not the case. We suggest an alternative, equally simple, algorithm that can continue testing queries as long as no single individual contributes to the answers of too many queries whose answers deviate substantially from what we expect. Our analysis is subtle and some of its ingredients may be more widely applicable. In some cases our new algorithm allows us to privately extract much more information from the database than the original. We demonstrate this by applying our algorithm to the shifting heavy-hitters problem: on every time step, each of $n$ users gets a new input, and the task is to privately identify all the current heavy-hitters. That is, on time step $i$, the goal is to identify all data elements $x$ such that many of the users have $x$ as their current input. We present an algorithm for this problem with improved error guarantees over what can be obtained using existing techniques. Specifically, the error of our algorithm depends on the maximal number of times that a single user holds a heavy-hitter as input, rather than the total number of times in which a heavy-hitter exists.
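For reference, a minimal sketch of the classic AboveThreshold form of the sparse vector technique that the abstract starts from (not the new algorithm it introduces), assuming sensitivity-1 queries:

```python
import numpy as np

def above_threshold(queries, database, threshold, eps, seed=None):
    """Classic sparse vector (AboveThreshold): answers a stream of
    sensitivity-1 queries with 'below' while their noisy values stay under a
    noisy threshold, and halts after reporting the first 'above'. This is the
    standard variant the abstract starts from, not the new algorithm."""
    rng = np.random.default_rng(seed)
    noisy_threshold = threshold + rng.laplace(scale=2.0 / eps)
    answers = []
    for q in queries:
        if q(database) + rng.laplace(scale=4.0 / eps) >= noisy_threshold:
            answers.append("above")
            break
        answers.append("below")
    return answers
```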
We design an efficient data structure for computing a suitably defined approximate depth of any query point in the arrangement $\mathcal{A}(S)$ of a collection $S$ of $n$ halfplanes or triangles in the plane, or of halfspaces or simplices in higher dimensions. We then use this structure to find a point of approximately maximum depth in $\mathcal{A}(S)$. Specifically, given an error parameter $\epsilon>0$, we compute, for any query point $q$, an underestimate $d^-(q)$ of the depth of $q$ that counts only objects containing $q$, but is allowed to exclude objects when $q$ is $\epsilon$-close to their boundary. Similarly, we compute an overestimate $d^+(q)$ that counts all objects containing $q$, but may also count objects that do not contain $q$ when $q$ is $\epsilon$-close to their boundary. Our algorithms for halfplanes and halfspaces are linear in the number of input objects and in the number of queries, and the dependence of their running time on $\epsilon$ is considerably better than that of earlier techniques. Our improvements are particularly substantial for triangles and in higher dimensions.
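As a specification of the two estimates, the following brute-force sketch computes $d^-(q)$ and $d^+(q)$ for halfplanes by a linear scan; it only pins down the quantities and is not the data structure described above. The halfplane encoding is an assumption made for illustration.

```python
import math

def approx_depth(q, halfplanes, eps):
    """Brute-force reference for the under/overestimates d^-(q) and d^+(q)
    described above, for halfplanes given as (a, b, c) meaning a*x + b*y <= c.
    d^- counts halfplanes containing q with margin at least eps (halfplanes
    whose boundary is eps-close to q may be dropped), while d^+ also counts
    halfplanes whose boundary passes within eps of q. This linear scan per
    query is only a specification, not the data structure itself."""
    x, y = q
    d_minus = d_plus = 0
    for a, b, c in halfplanes:
        norm = math.hypot(a, b)
        slack = c - (a * x + b * y)          # >= 0 iff q lies in the halfplane
        if slack >= eps * norm:
            d_minus += 1
        if slack >= -eps * norm:
            d_plus += 1
    return d_minus, d_plus
```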
An $\epsilon$-approximate incidence between a point and some geometric object (line, circle, plane, sphere) occurs when the point and the object lie at distance at most $\epsilon$ from each other. Given a set of points and a set of objects, computing the approximate incidences between them is a major step in many database and web-based applications in computer vision and graphics, including robust model fitting, approximate point pattern matching, and estimating the fundamental matrix in epipolar (stereo) geometry. In a typical approximate incidence problem of this sort, we are given a set $P$ of $m$ points in two or three dimensions, a set $S$ of $n$ objects (lines, circles, planes, spheres), and an error parameter $\epsilon>0$, and our goal is to report all pairs $(p,s)\in P\times S$ that lie at distance at most $\epsilon$ from one another. We present efficient output-sensitive approximation algorithms for quite a few cases, including points and lines or circles in the plane, and points and planes, spheres, lines, or circles in three dimensions. Several of these cases arise in the applications mentioned above.
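For concreteness, this naive quadratic scan specifies the desired output for points and lines in the plane; the abstract's algorithms are output-sensitive and far more efficient. The line encoding is an assumption made for illustration.

```python
import math

def approximate_incidences(points, lines, eps):
    """Naive reference for the reporting problem above: output every pair
    (p, ell) with dist(p, ell) <= eps, where each line ell is given as
    (a, b, c) for the locus a*x + b*y + c = 0. This quadratic scan merely
    defines the desired output."""
    out = []
    for a, b, c in lines:
        norm = math.hypot(a, b)
        for p in points:
            if abs(a * p[0] + b * p[1] + c) <= eps * norm:
                out.append((p, (a, b, c)))
    return out
```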
We review the theory of, and develop algorithms for, transforming a finite point set in ${\bf R}^d$ into a set in \emph{radial isotropic position} by a nonsingular linear transformation followed by rescaling each image point to the unit sphere. This problem arises in a wide spectrum of applications in computer science and mathematics. Our algorithms use gradient descent methods for a particular convex function $f$ whose minimum defines the transformation, and our main focus is on analyzing their performance. Although the minimum can be computed exactly, by expensive symbolic algebra techniques, gradient descent only approximates the desired minimum, though to any desired level of accuracy. We show that computing the gradient of $f$ amounts to computing the Singular Value Decomposition (SVD) of a certain matrix associated with the input set, making it simple to implement. We believe it to be superior to other approximate techniques (mainly the ellipsoid algorithm) used previously to find this transformation, and it should run much faster in practice. We prove that $f$ is smooth, which yields a convergence rate proportional to $1/\epsilon$, where $\epsilon$ is the desired approximation accuracy. To complete the analysis, we provide upper bounds on the norm of the optimal solution which depend on new parameters measuring the degeneracy in our input. We believe that our parameters capture degeneracy better than other, seemingly weaker, parameters used in previous works. We next analyze the strong convexity of $f$, and present two worst-case lower bounds on the smallest eigenvalue of its Hessian. This gives another worst-case bound on the convergence rate of another variant of gradient descent, one that depends only logarithmically on $1/\epsilon$.
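For reference, the target condition can be checked directly: after the transformation and rescaling, the normalized points $u_i$ should satisfy $\sum_i u_i u_i^T = (n/d)\,I$. The following sketch measures the deviation from this condition; it is only the certificate, not the gradient-descent algorithm analyzed above.

```python
import numpy as np

def isotropy_gap(points):
    """How far a point set in R^d is from radial isotropic position: after
    normalizing each (nonzero) point to the unit sphere, the matrix
    M = sum_i u_i u_i^T should equal (n/d) * I. Returns the spectral-norm
    deviation ||(d/n) * M - I||_2; a small value certifies approximate
    radial isotropic position."""
    P = np.asarray(points, dtype=float)        # shape (n, d), nonzero rows
    n, d = P.shape
    U = P / np.linalg.norm(P, axis=1, keepdims=True)
    M = U.T @ U                                # sum of outer products u_i u_i^T
    return np.linalg.norm((d / n) * M - np.eye(d), ord=2)
```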