
Jacobi-Style Iteration for Distributed Submodular Maximization

Added by Kun Qian
Publication date: 2020
Language: English





This paper presents a novel Jacobi-style iteration algorithm for distributed submodular maximization, in which each agent selects its own strategy from a finite set so that a global submodular objective function is jointly maximized. Building on the multi-linear extension of the global submodular function, we approach the solution from a probabilistic, rather than deterministic, perspective, thereby transferring the problem from a discrete domain to a continuous one. Since an unbiased estimate of the gradient of the multi-linear extension can be obtained by sampling the agents' local decisions, a projected stochastic gradient algorithm is proposed to solve the problem. Our algorithm enables distributed updates among all individual agents and is proven to converge asymptotically to a desirable equilibrium solution. Such an equilibrium solution is guaranteed to achieve an approximation ratio of at least 1/2, which is comparable to the state of the art in the literature. Moreover, we further enhance the proposed algorithm to handle the scenario in which communication delays between agents are present. The enhanced algorithmic framework admits a more realistic distributed implementation of our approach. Finally, a movie recommendation task is conducted on a real-world movie rating data set to validate the numerical performance of the proposed algorithms.
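The abstract gives no pseudocode, so the following is only a minimal sketch of the kind of update it describes, under the assumptions that each agent maintains a probability distribution over its finite strategy set, that all agents update simultaneously (Jacobi-style) from the previous iterate, and that the global objective f accepts a mapping from agents to chosen strategies. All names here (e.g., jacobi_projected_sga, estimate_gradient) are illustrative, not the authors' implementation.

```python
import numpy as np

def project_to_simplex(v):
    """Euclidean projection of v onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1))[0][-1]
    theta = (css[rho] - 1) / (rho + 1)
    return np.maximum(v - theta, 0.0)

def sample_actions(X, rng):
    """Each agent samples one strategy index from its local distribution."""
    return [rng.choice(len(x), p=x) for x in X]

def estimate_gradient(f, X, i, rng, num_samples=20):
    """Sampled estimate of dF/dx_i for the multi-linear extension F: the
    expected marginal value of each of agent i's strategies, with the other
    agents' choices drawn from their current distributions."""
    m_i = len(X[i])
    grad = np.zeros(m_i)
    for _ in range(num_samples):
        actions = sample_actions(X, rng)
        for a in range(m_i):
            with_a = dict(enumerate(actions))
            with_a[i] = a
            without = {j: actions[j] for j in range(len(X)) if j != i}
            grad[a] += f(with_a) - f(without)
    return grad / num_samples

def jacobi_projected_sga(f, strategy_counts, steps=200, eta=0.05, seed=0):
    """Jacobi-style projected stochastic gradient ascent: every agent computes
    its sampled gradient from the previous iterate, then all agents update
    simultaneously and project back onto their local simplices."""
    rng = np.random.default_rng(seed)
    X = [np.ones(m) / m for m in strategy_counts]  # uniform initial distributions
    for _ in range(steps):
        grads = [estimate_gradient(f, X, i, rng) for i in range(len(X))]
        X = [project_to_simplex(X[i] + eta * grads[i]) for i in range(len(X))]
    return X
```

In this sketch, the inner sampling loop plays the role of the unbiased gradient estimate of the multi-linear extension, and the simultaneous per-agent updates mirror the Jacobi-style iteration described in the abstract.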




Read More

Many large-scale machine learning problems, such as clustering, non-parametric learning, and kernel machines, require selecting a small yet representative subset from a large dataset. Such problems can often be reduced to maximizing a submodular set function subject to various constraints. Classical approaches to submodular optimization require centralized access to the full dataset, which is impractical for truly large-scale problems. In this paper, we consider the problem of submodular function maximization in a distributed fashion. We develop a simple two-stage protocol, GreeDi, that is easily implemented using MapReduce-style computations. We theoretically analyze our approach and show that, under certain natural conditions, performance close to the centralized approach can be achieved. We begin with monotone submodular maximization subject to a cardinality constraint, and then extend this approach to obtain approximation guarantees for (not necessarily monotone) submodular maximization subject to more general constraints, including matroid and knapsack constraints. In our extensive experiments, we demonstrate the effectiveness of our approach on several applications, including sparse Gaussian process inference and exemplar-based clustering on tens of millions of examples using Hadoop.
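The two-stage structure described here is concrete enough to sketch. The toy single-process version below, with hypothetical names (greedy, greedi), partitions the ground set, runs the standard greedy on each part, re-runs greedy on the union of the local solutions, and returns the best candidate, mirroring the map and reduce stages the abstract refers to.

```python
def greedy(f, ground_set, k):
    """Standard greedy: repeatedly add the element with the largest marginal gain."""
    S = []
    for _ in range(k):
        candidates = [e for e in ground_set if e not in S]
        if not candidates:
            break
        best = max(candidates, key=lambda e: f(S + [e]) - f(S))
        S.append(best)
    return S

def greedi(f, ground_set, k, num_machines):
    """Two-stage GreeDi-style protocol (sketch): partition the data, run greedy
    locally on each partition (map stage), then run greedy once more over the
    union of the local solutions (reduce stage) and keep the best candidate."""
    partitions = [ground_set[i::num_machines] for i in range(num_machines)]
    local_solutions = [greedy(f, part, k) for part in partitions]
    merged = [e for sol in local_solutions for e in sol]
    candidates = local_solutions + [greedy(f, merged, k)]
    return max(candidates, key=f)
```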
Most existing work uses dual decomposition and subgradient methods to solve Network Utility Maximization (NUM) problems in a distributed manner, and these methods suffer from slow convergence. This work develops an alternative distributed Newton-type, fast-converging algorithm for solving network utility maximization problems with self-concordant utility functions. Using novel matrix splitting techniques, both the primal and dual updates for the Newton step can be computed by iterative schemes in a decentralized manner with limited information exchange. Similarly, the stepsize can be obtained via an iterative consensus-based averaging scheme. We show that even when the Newton direction and the stepsize in our method are computed with some error (due to finite truncation of the iterative schemes), the resulting objective function value still converges superlinearly to an explicitly characterized error neighborhood. Simulation results demonstrate a significant convergence-rate improvement of our algorithm relative to existing subgradient methods based on dual decomposition.
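The matrix-splitting machinery in this work is considerably more involved, but the basic idea it builds on can be illustrated with the textbook Jacobi splitting for a linear system Ax = b, shown below purely as a simplified, assumed illustration: each coordinate is updated using only its own row of A, which is what makes such splittings amenable to decentralized computation of approximate Newton directions.

```python
import numpy as np

def jacobi_solve(A, b, num_iters=100, tol=1e-8):
    """Jacobi matrix-splitting iteration for Ax = b: split A = D + R with D the
    diagonal part, then iterate x <- D^{-1} (b - R x).  Each coordinate update
    only needs the corresponding row of A and the previous iterate."""
    D = np.diag(A)
    R = A - np.diag(D)
    x = np.zeros_like(b, dtype=float)
    for _ in range(num_iters):
        x_new = (b - R @ x) / D
        if np.linalg.norm(x_new - x, ord=np.inf) < tol:
            return x_new
        x = x_new
    return x

# Converges when A is strictly diagonally dominant, e.g.:
A = np.array([[4.0, 1.0, 0.0], [1.0, 5.0, 2.0], [0.0, 2.0, 6.0]])
b = np.array([1.0, 2.0, 3.0])
print(jacobi_solve(A, b))  # close to np.linalg.solve(A, b)
```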
An important issue in today's electricity markets is the management of flexibilities offered by new practices, such as smart home appliances or electric vehicles. By inducing changes in the behavior of residential electric utilities, demand response (DR) seeks to adjust the demand for power to the supply for increased grid stability and better integration of renewable energies. A key role in DR is played by emergent independent entities called load aggregators (LAs). We develop a new decentralized algorithm to solve a convex relaxation of the classical Alternating Current Optimal Power Flow (ACOPF) problem, which relies on local information only. Each computational step can be performed in an entirely privacy-preserving manner, and system-wide coordination is achieved via node-specific distribution locational marginal prices (DLMPs). We demonstrate the efficiency of our approach on a 15-bus radial distribution network.
A variety of large-scale machine learning problems can be cast as instances of constrained submodular maximization. Existing approaches for distributed submodular maximization have a critical drawback: the capacity, i.e., the number of instances that can fit in memory, must grow with the data set size. In practice, while one can provision many machines, the capacity of each machine is limited by physical constraints. We propose a truly scalable approach for distributed submodular maximization under fixed capacity. The proposed framework applies to a broad class of algorithms and constraints and provides theoretical guarantees on the approximation factor for any available capacity. We empirically evaluate the proposed algorithm on a variety of data sets and demonstrate that it achieves performance competitive with the centralized greedy solution.
Continuous submodular functions are a category of generally non-convex/non-concave functions with a wide spectrum of applications. The celebrated property of this class of functions, continuous submodularity, enables both exact minimization and approximate maximization in polynomial time. Continuous submodularity is obtained by generalizing the notion of submodularity from discrete domains to continuous domains. It intuitively captures a repulsive effect among the different dimensions of the defined multivariate function. In this paper, we systematically study continuous submodularity and a class of non-convex optimization problems: continuous submodular function maximization. We start with a thorough characterization of the class of continuous submodular functions and show that continuous submodularity is equivalent to a weak version of the diminishing returns (DR) property. We thus also derive a subclass of continuous submodular functions, termed continuous DR-submodular functions, which enjoys the full DR property. Then we present operations that preserve continuous (DR-)submodularity, thus yielding general rules for composing new submodular functions. We establish intriguing properties for the problem of constrained DR-submodular maximization, such as the local-global relation. We identify several applications of continuous submodular optimization, ranging from influence maximization and MAP inference for DPPs to provable mean field inference. For these applications, continuous submodularity formalizes valuable domain knowledge relevant for optimizing this class of objectives. We present inapproximability results and provable algorithms for two problem settings: constrained monotone DR-submodular maximization and constrained non-monotone DR-submodular maximization. Finally, we extensively evaluate the effectiveness of the proposed algorithms.
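For reference, the two properties this abstract contrasts are commonly written as follows; these are standard definitions, with the notation (the box domain \mathcal{X} and unit vectors e_i) supplied here rather than taken from the paper.

```latex
% Continuous submodularity (the weaker, lattice condition);
% \vee and \wedge denote coordinate-wise maximum and minimum:
f(x) + f(y) \;\ge\; f(x \vee y) + f(x \wedge y)
\qquad \forall\, x, y \in \mathcal{X}.

% Diminishing-returns (DR) property (the stronger condition):
f(a + k e_i) - f(a) \;\ge\; f(b + k e_i) - f(b)
\qquad \forall\, a \le b,\ \forall\, i,\ \forall\, k \ge 0
% such that a + k e_i and b + k e_i remain in \mathcal{X}.
```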