
How Do You Want Your Greedy: Simultaneous or Repeated?

Added by Christopher Harshaw
Publication date: 2020
Language: English





We present SimultaneousGreedys, a deterministic algorithm for constrained submodular maximization. At a high level, the algorithm maintains $\ell$ solutions and greedily updates them in a simultaneous fashion. SimultaneousGreedys achieves the tightest known approximation guarantees for both $k$-extendible systems and the more general $k$-systems, which are $(k+1)^2/k = k + \mathcal{O}(1)$ and $(1 + \sqrt{k+2})^2 = k + \mathcal{O}(\sqrt{k})$, respectively. This is in contrast to previous algorithms, which are designed to provide tight approximation guarantees in one setting but not both. We also improve the analysis of RepeatedGreedy, showing that it achieves an approximation ratio of $k + \mathcal{O}(\sqrt{k})$ for $k$-systems when allowed to run for $\mathcal{O}(\sqrt{k})$ iterations, an improvement in both the runtime and the approximation over previous analyses. We demonstrate that both algorithms may be modified to run in nearly linear time with an arbitrarily small loss in the approximation. Both SimultaneousGreedys and RepeatedGreedy are flexible enough to incorporate the intersection of $m$ additional knapsack constraints while retaining similar approximation guarantees: both algorithms yield an approximation guarantee of roughly $k + 2m + \mathcal{O}(\sqrt{k+m})$ for $k$-systems, and SimultaneousGreedys enjoys an improved approximation guarantee of $k + 2m + \mathcal{O}(\sqrt{m})$ for $k$-extendible systems. To complement our algorithmic contributions, we provide a hardness result which states that no algorithm making polynomially many oracle queries can achieve an approximation better than $k + 1/2 - \varepsilon$. We also present SubmodularGreedy.jl, a Julia package which implements these algorithms and may be downloaded at https://github.com/crharshaw/SubmodularGreedy.jl . Finally, we test the effectiveness of these algorithms on real datasets.
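To make the high-level description concrete, here is a heavily simplified, hypothetical Julia sketch of the simultaneous greedy idea. The function name, the caller-supplied `f` and `is_feasible` oracles, and the stopping rule are illustrative assumptions; this is not the actual SubmodularGreedy.jl API.

```julia
# Hypothetical sketch of the SimultaneousGreedys idea (not the actual
# SubmodularGreedy.jl API): maintain ℓ disjoint candidate solutions and,
# at every step, make the single best feasible addition across all of them.
#
# `f`           : set function, f(S::Set) -> Float64 (assumed submodular)
# `is_feasible` : independence oracle for the k-system constraint
function simultaneous_greedys(f, is_feasible, ground_set, ℓ::Int)
    solutions = [Set{eltype(ground_set)}() for _ in 1:ℓ]
    remaining = Set(ground_set)
    while true
        best_gain, best = 0.0, nothing
        for e in remaining, (i, S) in enumerate(solutions)
            is_feasible(union(S, [e])) || continue
            gain = f(union(S, [e])) - f(S)   # marginal gain of e w.r.t. S
            if gain > best_gain              # only strictly improving additions
                best_gain, best = gain, (i, e)
            end
        end
        best === nothing && break            # no feasible improving addition left
        i, e = best
        push!(solutions[i], e)
        delete!(remaining, e)                # keep the ℓ solutions disjoint
    end
    return argmax(f, solutions)              # best of the ℓ maintained solutions
end
```

RepeatedGreedy, by contrast, builds its solutions sequentially: roughly speaking, it runs a standard greedy, records the result, discards that solution's elements from the ground set, and repeats, finally returning the best of the recorded solutions.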



Related research

State-of-the-art summarization systems are trained and evaluated on massive datasets scraped from the web. Despite their prevalence, we know very little about the underlying characteristics (data noise, summarization complexity, etc.) of these datasets, and how these affect system performance and the reliability of automatic metrics like ROUGE. In this study, we manually analyze 600 samples from three popular summarization datasets. Our study is driven by a six-class typology which captures different noise types (missing facts, entities) and degrees of summarization difficulty (extractive, abstractive). We follow with a thorough analysis of 27 state-of-the-art summarization models and 5 popular metrics, and report our key insights: (1) Datasets have distinct data quality and complexity distributions, which can be traced back to their collection process. (2) The performance of models and the reliability of metrics are dependent on sample complexity. (3) Faithful summaries often receive low scores because of the poor diversity of references. We release the code, annotated data, and model outputs.
In many signal processing applications, the aim is to reconstruct a signal that has a simple representation with respect to a certain basis or frame. Fundamental elements of the basis known as atoms allow us to define atomic norms that can be used to formulate convex regularizations for the reconstruction problem. Efficient algorithms are available to solve these formulations in certain special cases, but an approach that works well for general atomic norms, both in terms of speed and reconstruction accuracy, remains to be found. This paper describes an optimization algorithm called CoGEnT that produces solutions with succinct atomic representations for reconstruction problems, generally formulated with atomic-norm constraints. CoGEnT combines a greedy selection scheme based on the conditional gradient approach with a backward (or truncation) step that exploits the quadratic nature of the objective to reduce the basis size. We establish convergence properties and validate the algorithm via extensive numerical experiments on a suite of signal processing applications. Our algorithm and analysis also allow for inexact forward steps and for occasional enhancements of the current representation to be performed. CoGEnT can outperform the basic conditional gradient method, and indeed many methods that are tailored to specific applications, when the enhancement and truncation steps are defined appropriately. We also introduce several novel applications that are enabled by the atomic-norm framework, including tensor completion, moment problems in signal processing, and graph deconvolution.
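As a rough illustration of the forward-backward structure described above, the following hedged Julia sketch pairs a conditional-gradient (forward) atom selection with a simple truncation (backward) pass, for a least-squares objective over a finite, sign-symmetric atom dictionary. The finite dictionary, the fixed 2/(t+2) step size, and the naive truncation rule are simplifying assumptions, not the paper's CoGEnT algorithm verbatim.

```julia
# Hedged sketch of a forward-backward conditional gradient loop in the spirit
# of CoGEnT, for min ½‖y - x‖² over the τ-scaled convex hull of the columns
# of D and their negations.
using LinearAlgebra

# helper: dense coefficient vector from the active-atom dictionary
dense_coeffs(w::Dict{Int,Float64}, p::Int) =
    (v = zeros(p); for (k, c) in w; v[k] = c; end; v)

function cogent_sketch(y::Vector{Float64}, D::Matrix{Float64}, τ::Float64;
                       iters=100, trunc_tol=1e-3)
    n, p = size(D)
    x = zeros(n)
    weights = Dict{Int,Float64}()            # active atoms: index => coefficient
    obj(z) = 0.5 * norm(y - z)^2
    for t in 0:iters-1
        g = x - y                            # gradient of the quadratic objective
        scores = D' * g
        j = argmax(abs.(scores))             # forward: atom most aligned with -g
        s = -sign(scores[j]) * τ             # signed, τ-scaled atom coefficient
        γ = 2 / (t + 2)                      # standard conditional-gradient step
        for k in keys(weights); weights[k] *= (1 - γ); end
        weights[j] = get(weights, j, 0.0) + γ * s
        x = D * dense_coeffs(weights, p)
        # backward (truncation): drop an atom if it barely changes the objective
        for k in collect(keys(weights))
            x_wo = x - weights[k] * D[:, k]
            if obj(x_wo) - obj(x) < trunc_tol
                x = x_wo; delete!(weights, k)
            end
        end
    end
    return x, weights
end
```

The backward pass is what keeps the atomic representation succinct: a plain conditional gradient method only ever adds atoms, so without truncation the active set grows with the iteration count.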
Up to ages of ~100 Myr, massive clusters are still swamped in large amounts of gas and dust, with considerable and uneven levels of extinction. At the same time, large grains (ices?) produced by type II supernovae profoundly alter the interstellar medium (ISM), thus resulting in extinction properties very different from those of the diffuse ISM. To obtain physically meaningful parameters of stars, from basic luminosities and effective temperatures to masses and ages, we must understand and measure the local extinction law. This problem affects all the massive young clusters discussed in this volume. We have developed a powerful method to unambiguously determine the extinction law in a uniform way across a cluster field, using multi-band photometry of red giant stars belonging to the red clump (RC). In the Large Magellanic Cloud, with about 20 RC stars per arcmin^2, we can easily derive a solid and self-consistent absolute extinction curve over the entire wavelength range of the photometry. Here, we present the extinction law of the Tarantula nebula (30 Dor) based on thousands of stars observed as part of the Hubble Tarantula Treasury Project.
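The measurement idea lends itself to a short worked sketch: since RC stars are near-standard candles, the extinction in each band is the observed magnitude minus the sum of the intrinsic RC magnitude and the distance modulus. The Julia snippet below illustrates this; the distance modulus and intrinsic magnitudes are placeholder assumptions, not values from the paper.

```julia
# Hedged sketch of the red-clump extinction measurement: A_band follows from
# the offset between a star's observed magnitude and the (intrinsic RC
# magnitude + distance modulus). All numeric values here are placeholders.
const μ_LMC = 18.48                       # assumed LMC distance modulus

# mags: per-band observed magnitudes of one RC star, e.g. Dict("F555W" => 19.7)
# M_rc: assumed intrinsic RC absolute magnitudes per band
function extinction_per_band(mags::Dict{String,Float64},
                             M_rc::Dict{String,Float64})
    Dict(band => m_obs - (M_rc[band] + μ_LMC) for (band, m_obs) in mags)
end

# Averaging A_band over the ~20 RC stars per arcmin^2 quoted in the abstract
# yields a self-consistent absolute extinction curve across the bands.
mean_extinction(stars, M_rc) =
    Dict(b => sum(extinction_per_band(s, M_rc)[b] for s in stars) / length(stars)
         for b in keys(M_rc))
```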
A common practice in many auctions is to offer bidders an opportunity to improve their bids, known as a Best and Final Offer (BAFO) stage. This final bid can depend on new information provided about either the asset or the competitors. This paper examines the effects of new information regarding competitors, seeking to determine what information the auctioneer should provide, assuming the set of allowable bids is discrete. The rational strategy profile that maximizes the revenue of the auctioneer is the one in which each bidder makes the highest possible bid that is lower than his valuation of the item. This strategy profile is an equilibrium for a large enough number of bidders, regardless of the information released. We compare the number of bidders needed for this profile to be an equilibrium under different information settings. We find that it becomes an equilibrium with fewer bidders when less additional information is made available to the bidders regarding the competition. It follows that when the number of bidders is a priori unknown, there are advantages for the auctioneer in not revealing information.
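The revenue-maximizing bid rule above reduces to a one-liner: on a discrete grid, bid the highest allowable amount strictly below one's valuation. A minimal Julia illustration, with made-up grid and valuations:

```julia
# Each bidder submits the highest allowable bid strictly below their private
# valuation, per the strategy profile described in the abstract.
best_bid(valuation, grid) = maximum(b for b in grid if b < valuation)

grid = 0:10:100                                  # allowable discrete bids
valuations = [47.0, 83.0, 62.0]                  # made-up private valuations
bids = [best_bid(v, grid) for v in valuations]   # -> [40, 80, 60]
```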
Being able to control the acoustic events (AEs) to which we want to listen would allow the development of more controllable hearable devices. This paper addresses the AE sound selection (or removal) problem, which we define as the extraction (or suppression) of all the sounds that belong to one or multiple desired AE classes. Although this problem could be addressed with a combination of source separation followed by AE classification, this is a sub-optimal way of solving it. Moreover, source separation usually requires knowing the maximum number of sources, which may not be practical when dealing with AEs. In this paper, we instead propose a universal sound selection neural network that directly selects AE sounds from a mixture given user-specified target AE classes. The proposed framework can be explicitly optimized to simultaneously select sounds from multiple desired AE classes, independently of the number of sources in the mixture. We experimentally show that the proposed method achieves promising AE sound selection performance and can generalize to mixtures with a number of sources unseen during training.
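To illustrate the class-conditioning idea, here is a dependency-free Julia sketch, purely hypothetical and not the paper's architecture: a toy mask-estimation layer receives each mixture spectrogram frame concatenated with a multi-hot vector of the user-specified target classes and outputs a soft mask over frequency bins. The weights here are random placeholders; a trained model would learn them.

```julia
# Toy class-conditioned masking: concatenate a multi-hot target-class vector
# to every spectrogram frame, then predict a per-bin soft mask in (0, 1).
σ(x) = 1 / (1 + exp(-x))

function select_sounds(X::Matrix{Float64}, target::Vector{Float64},
                       W1::Matrix{Float64}, W2::Matrix{Float64})
    F, T = size(X)
    M = similar(X)
    for t in 1:T
        h = tanh.(W1 * vcat(X[:, t], target))   # class-conditioned hidden layer
        M[:, t] = σ.(W2 * h)                    # per-bin soft mask
    end
    return M .* X                               # masked (selected) spectrogram
end

F, T, C, H = 64, 10, 5, 32                      # bins, frames, classes, hidden
X = rand(F, T)                                  # mixture magnitude spectrogram
target = [1.0, 0.0, 1.0, 0.0, 0.0]              # multi-hot: keep classes 1 and 3
W1, W2 = randn(H, F + C), randn(F, H)           # placeholder (untrained) weights
Y = select_sounds(X, target, W1, W2)
```

Note how the same network handles any subset of target classes just by changing the multi-hot vector, which mirrors the abstract's point that selection is independent of the number of sources in the mixture.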
