
Submodular Optimization Over Streams with Inhomogeneous Decays

Posted by Junzhou Zhao
Publication date: 2018
Research field: Informatics Engineering
Paper language: English





Cardinality-constrained submodular function maximization, which aims to select a subset of size at most $k$ that maximizes a monotone submodular utility function, is the key to many data mining and machine learning applications such as data summarization and maximum coverage problems. When data is given as a stream, streaming submodular optimization (SSO) techniques are desired. Existing SSO techniques apply only to insertion-only streams, where each element has an infinite lifespan, and sliding-window streams, where each element has the same lifespan (i.e., the window size). However, elements in some data streams may have arbitrarily different lifespans, which requires addressing SSO over streams with inhomogeneous decays (SSO-ID). This work formulates the SSO-ID problem and presents three algorithms: BasicStreaming is a basic streaming algorithm that achieves a $(1/2-\epsilon)$ approximation factor; HistApprox improves the efficiency significantly and achieves a $(1/3-\epsilon)$ approximation factor; HistStreaming is a streaming version of HistApprox that uses heuristics to further improve efficiency. Experiments conducted on real data demonstrate that HistStreaming finds high-quality solutions and is up to two orders of magnitude faster than the naive Greedy algorithm.
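To make the streaming setting concrete, here is a minimal sketch of fixed-threshold streaming submodular maximization under a cardinality constraint — the general idea behind BasicStreaming-style algorithms on insertion-only streams, not the paper's actual SSO-ID implementation. The utility function `f` and the guess `opt_guess` of the optimal value are assumed inputs.

```python
def threshold_streaming(stream, f, k, opt_guess):
    """Keep an element iff its marginal gain clears opt_guess / (2k)."""
    S = []
    for e in stream:
        if len(S) >= k:
            break
        gain = f(S + [e]) - f(S)          # marginal gain of e w.r.t. current S
        if gain >= opt_guess / (2 * k):
            S.append(e)
    return S

# Running this in parallel for a geometric grid of guesses (1 + eps)^i and
# returning the best run yields the (1/2 - eps) guarantee on insertion-only
# streams; handling inhomogeneous decays is the extra difficulty the paper
# addresses on top of this primitive.
```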




Read also

Submodular continuous functions are a category of (generally) non-convex/non-concave functions with a wide spectrum of applications. We characterize these functions and demonstrate that they can be maximized efficiently with approximation guarantees. Specifically, i) we introduce the weak DR property that gives a unified characterization of submodularity for all set, integer-lattice and continuous functions; ii) for maximizing monotone DR-submodular continuous functions under general down-closed convex constraints, we propose a Frank-Wolfe variant with $(1-1/e)$ approximation guarantee, and sub-linear convergence rate; iii) for maximizing general non-monotone submodular continuous functions subject to box constraints, we propose a DoubleGreedy algorithm with $1/3$ approximation guarantee. Submodular continuous functions naturally find applications in various real-world settings, including influence and revenue maximization with continuous assignments, sensor energy management, multi-resolution data summarization, facility location, etc. Experimental results show that the proposed algorithms efficiently generate superior solutions compared to baseline algorithms.
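A minimal sketch of the Frank-Wolfe variant this abstract describes, assuming the caller supplies a gradient oracle `grad_F` and a linear maximization oracle `lmo` over the down-closed constraint polytope (both hypothetical helpers):

```python
import numpy as np

def frank_wolfe_dr(grad_F, lmo, n, T=100):
    """Frank-Wolfe variant for monotone DR-submodular maximization."""
    x = np.zeros(n)              # the origin is feasible for any down-closed set
    for _ in range(T):
        v = lmo(grad_F(x))       # argmax_{v in P} <v, grad_F(x)>
        x = x + v / T            # fixed step 1/T keeps x a convex combination in P
    return x                     # approaches a (1 - 1/e)-approximate point
```

Replacing `grad_F` with a stochastic gradient estimator is compatible with the sub-linear convergence regime the abstract mentions.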
In the submodular cover problem, we are given a non-negative monotone submodular function $f$ over a ground set $E$ of items, and the goal is to choose a smallest subset $S \subseteq E$ such that $f(S) = Q$ where $Q = f(E)$. In the stochastic version of the problem, we are given $m$ stochastic items which are different random variables that independently realize to some item in $E$, and the goal is to find a smallest set of stochastic items whose realization $R$ satisfies $f(R) = Q$. The problem captures as a special case the stochastic set cover problem and more generally, stochastic covering integer programs. We define an $r$-round adaptive algorithm to be an algorithm that chooses a permutation of all available items in each round $k \in [r]$, and a threshold $\tau_k$, and realizes items in the order specified by the permutation until the function value is at least $\tau_k$. The permutation for each round $k$ is chosen adaptively based on the realization in the previous rounds, but the ordering inside each round remains fixed regardless of the realizations seen inside the round. Our main result is that for any integer $r$, there exists a poly-time $r$-round adaptive algorithm for stochastic submodular cover whose expected cost is $\tilde{O}(Q^{1/r})$ times the expected cost of a fully adaptive algorithm. Prior to our work, such a result was not known even for the case of $r=1$ and when $f$ is the coverage function. On the other hand, we show that for any $r$, there exist instances of the stochastic submodular cover problem where no $r$-round adaptive algorithm can achieve better than $\Omega(Q^{1/r})$ approximation to the expected cost of a fully adaptive algorithm. Our lower bound result holds even for the coverage function and for algorithms with unbounded computational power.
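A loose sketch of the $r$-round structure defined above, purely to illustrate "adaptive between rounds, fixed within a round"; the helpers `choose_order` and `realize` and the threshold schedule $\tau_k = Q^{k/r}$ are illustrative assumptions, not the paper's construction:

```python
def r_round_cover(items, f, Q, r, choose_order, realize):
    """r rounds: pick a permutation adaptively, then sweep it non-adaptively."""
    realized, remaining = [], list(items)
    for k in range(1, r + 1):
        tau_k = Q ** (k / r)                        # illustrative round-k target
        order = choose_order(remaining, realized)   # adaptive between rounds
        for item in order:                          # ...but fixed within a round
            remaining.remove(item)
            realized.append(realize(item))          # independent realization
            if f(realized) >= tau_k:
                break
        if f(realized) >= Q:
            break
    return realized
```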
In this paper we study the fundamental problems of maximizing a continuous non-monotone submodular function over the hypercube, both with and without coordinate-wise concavity. This family of optimization problems has several applications in machine learning, economics, and communication systems. Our main result is the first $\frac{1}{2}$-approximation algorithm for continuous submodular function maximization; this approximation factor of $\frac{1}{2}$ is the best possible for algorithms that only query the objective function at polynomially many points. For the special case of DR-submodular maximization, i.e. when the submodular function is also coordinate-wise concave along all coordinates, we provide a different $\frac{1}{2}$-approximation algorithm that runs in quasilinear time. Both of these results improve upon prior work [Bian et al., 2017; Soma and Yoshida, 2017]. Our first algorithm uses novel ideas such as reducing the guaranteed approximation problem to analyzing a zero-sum game for each coordinate, and incorporates the geometry of this zero-sum game to fix the value at this coordinate. Our second algorithm exploits coordinate-wise concavity to identify a monotone equilibrium condition sufficient for getting the required approximation guarantee, and hunts for the equilibrium point using binary search. We further run experiments to verify the performance of our proposed algorithms in related machine learning applications.
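The binary-search primitive the second algorithm builds on is plain bisection for a monotone condition; a generic sketch follows, with the caveat that the paper's actual per-coordinate equilibrium condition comes from its zero-sum-game analysis and is not reproduced here (`g` is an assumed non-increasing condition with `g(lo) >= 0 >= g(hi)`):

```python
def bisect_equilibrium(g, lo=0.0, hi=1.0, tol=1e-9):
    """Find t in [lo, hi] where the monotone condition g changes sign."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if g(mid) >= 0:   # still on the non-negative side: move right
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0
```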
Submodular function minimization (SFM) is a fundamental discrete optimization problem which generalizes many well-known problems, has applications in various fields, and can be solved in polynomial time. Owing to applications in computer vision and machine learning, fast SFM algorithms are highly desirable. The current fastest algorithms [Lee, Sidford, Wong, FOCS 2015] run in $O(n^{2}\log nM\cdot\textrm{EO} + n^{3}\log^{O(1)}nM)$ time and $O(n^{3}\log^{2}n\cdot\textrm{EO} + n^{4}\log^{O(1)}n)$ time respectively, where $M$ is the largest absolute value of the function (assuming the range is integers) and $\textrm{EO}$ is the time taken to evaluate the function on any set. Although the best known lower bound on the query complexity is only $\Omega(n)$, the current shortest non-deterministic proof certifying the optimum value of a function requires $\Theta(n^{2})$ function evaluations. The main contribution of this paper is subquadratic SFM algorithms. For integer-valued submodular functions, we give an SFM algorithm which runs in $O(nM^{3}\log n\cdot\textrm{EO})$ time, giving the first nearly linear time algorithm in any known regime. For real-valued submodular functions with range in $[-1,1]$, we give an algorithm which in $\tilde{O}(n^{5/3}\cdot\textrm{EO}/\varepsilon^{2})$ time returns an $\varepsilon$-additive approximate solution. At their heart, our algorithms are projected stochastic subgradient descent methods on the Lovász extension of submodular functions, where we crucially exploit submodularity and data structures to obtain fast, i.e., sublinear-time subgradient updates. The latter is crucial for beating the $n^{2}$ bound, as we show that algorithms which access only subgradients of the Lovász extension (and these include the theoretically best algorithms mentioned above) must make $\Omega(n)$ subgradient calls (even for functions whose range is $\{-1,0,1\}$).
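A minimal sketch of the framework the abstract names: the standard greedy (Edmonds) construction of a Lovász-extension subgradient, plugged into projected subgradient descent over $[0,1]^n$. It assumes `f` is a normalized set function on lists of indices with `f([]) == 0`, and it only illustrates the vanilla method; the paper's contribution is making each subgradient update sublinear-time, which this sketch does not do.

```python
import numpy as np

def lovasz_subgradient(f, x):
    """Greedy rule: sort coordinates by decreasing x, take marginal values."""
    order = np.argsort(-x)        # decreasing order of x
    g = np.zeros(len(x))
    S, prev = [], f([])
    for i in order:
        S.append(int(i))
        cur = f(S)
        g[i] = cur - prev         # marginal value f(S + i) - f(S)
        prev = cur
    return g

def sfm_subgradient_descent(f, n, T=1000, eta=0.1):
    """Projected subgradient descent on the Lovász extension over [0,1]^n;
    a minimizing set can be read off by thresholding the returned x."""
    x = np.full(n, 0.5)
    for _ in range(T):
        x = np.clip(x - eta * lovasz_subgradient(f, x), 0.0, 1.0)
    return x
```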
We study the problem of maximizing a non-monotone submodular function subject to a cardinality constraint in the streaming model. Our main contribution is a single-pass (semi-)streaming algorithm that uses roughly $O(k/\varepsilon^2)$ memory, where $k$ is the size constraint. At the end of the stream, our algorithm post-processes its data structure using any offline algorithm for submodular maximization, and obtains a solution whose approximation guarantee is $\frac{\alpha}{1+\alpha}-\varepsilon$, where $\alpha$ is the approximation of the offline algorithm. If we use an exact (exponential time) post-processing algorithm, this leads to a $\frac{1}{2}-\varepsilon$ approximation (which is nearly optimal). If we post-process with the algorithm of Buchbinder and Feldman (Math of OR 2019), which achieves the state-of-the-art offline approximation guarantee of $\alpha=0.385$, we obtain a $0.2779$-approximation in polynomial time, improving over the previously best polynomial-time approximation of $0.1715$ due to Feldman et al. (NeurIPS 2018). It is also worth mentioning that our algorithm is combinatorial and deterministic, which is rare for an algorithm for non-monotone submodular maximization, and enjoys a fast update time of $O(\frac{\log k + \log(1/\alpha)}{\varepsilon^2})$ per element.
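A minimal sketch of the stream-then-post-process pattern this abstract describes; the buffer rule and threshold `tau` here are illustrative stand-ins, not the paper's exact policy, and `offline_alg` is any offline submodular maximization routine supplied by the caller:

```python
def stream_then_postprocess(stream, f, k, offline_alg, tau):
    """Single pass: buffer elements whose marginal gain w.r.t. a running
    solution clears tau, then let an offline algorithm finish the job."""
    S, buffer = [], []
    for e in stream:
        gain = f(S + [e]) - f(S)       # marginal gain w.r.t. running solution
        if gain >= tau:
            buffer.append(e)
            if len(S) < k:
                S.append(e)
    # The final guarantee inherits from the offline algorithm's factor alpha,
    # of the shape alpha/(1 + alpha) - eps per the abstract.
    return offline_alg(buffer, f, k)
```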