State-of-the-art summarization systems are trained and evaluated on massive datasets scraped from the web. Despite their prevalence, we know very little about the underlying characteristics (data noise, summarization complexity, etc.) of these datasets, and about how these characteristics affect system performance and the reliability of automatic metrics such as ROUGE. In this study, we manually analyze 600 samples from three popular summarization datasets. Our study is driven by a six-class typology which captures different noise types (missing facts, entities) and degrees of summarization difficulty (extractive, abstractive). We follow with a thorough analysis of 27 state-of-the-art summarization models and 5 popular metrics, and report our key insights: (1) Datasets have distinct data quality and complexity distributions, which can be traced back to their collection process. (2) The performance of models and the reliability of metrics are dependent on sample complexity. (3) Faithful summaries often receive low scores because of the poor diversity of references. We release the code, annotated data, and model outputs.
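As a concrete illustration of insight (3), a faithful but abstractive summary can score poorly against a single reference simply because it shares few tokens with it. The sketch below uses the open-source rouge-score package as one common ROUGE implementation; the sentences are invented for illustration and are not drawn from the annotated samples.

```python
# A faithful, fully abstractive summary scored against a single reference.
# Low lexical overlap drives the ROUGE scores down even though the summary
# contains no factual errors -- the effect described in insight (3).
from rouge_score import rouge_scorer  # pip install rouge-score

reference = "the company reported record quarterly profits on strong sales"
faithful = "strong sales drove the firm to its best quarterly earnings ever"

scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
print(scorer.score(reference, faithful))  # low scores despite faithfulness
```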
We present SimultaneousGreedys, a deterministic algorithm for constrained submodular maximization. At a high level, the algorithm maintains $\ell$ solutions and greedily updates them in a simultaneous fashion. SimultaneousGreedys achieves the tightest known approximation guarantees for both $k$-extendible systems and the more general $k$-systems, which are $(k+1)^2/k = k + \mathcal{O}(1)$ and $(1 + \sqrt{k+2})^2 = k + \mathcal{O}(\sqrt{k})$, respectively. This is in contrast to previous algorithms, which are designed to provide tight approximation guarantees in one setting, but not both. We also improve the analysis of RepeatedGreedy, showing that it achieves an approximation ratio of $k + \mathcal{O}(\sqrt{k})$ for $k$-systems when allowed to run for $\mathcal{O}(\sqrt{k})$ iterations, an improvement in both the runtime and approximation over previous analyses. We demonstrate that both algorithms may be modified to run in nearly linear time with an arbitrarily small loss in the approximation. Both SimultaneousGreedys and RepeatedGreedy are flexible enough to incorporate the intersection of $m$ additional knapsack constraints, while retaining similar approximation guarantees: both algorithms yield an approximation guarantee of roughly $k + 2m + \mathcal{O}(\sqrt{k+m})$ for $k$-systems, and SimultaneousGreedys enjoys an improved approximation guarantee of $k + 2m + \mathcal{O}(\sqrt{m})$ for $k$-extendible systems. To complement our algorithmic contributions, we provide a hardness result which states that no algorithm making polynomially many oracle queries can achieve an approximation better than $k + 1/2 + \varepsilon$. We also present SubmodularGreedy.jl, a Julia package which implements these algorithms and may be downloaded at https://github.com/crharshaw/SubmodularGreedy.jl. Finally, we test the effectiveness of these algorithms on real datasets.
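The following is a loose Python sketch of the simultaneous-greedy idea described above, written under simplifying assumptions: a monotone objective f, a feasibility oracle is_feasible for the constraint system, and no knapsack constraints. The names f, is_feasible, and ell are illustrative; the actual SimultaneousGreedys additionally discards elements, handles knapsacks, and uses thresholding to achieve nearly linear time, none of which is reproduced here.

```python
# Loose sketch: maintain ell candidate solutions and repeatedly add the
# single (element, solution) pair with the largest marginal gain among all
# feasible insertions, returning the best solution at the end.
def simultaneous_greedy(ground_set, f, is_feasible, ell):
    solutions = [set() for _ in range(ell)]
    remaining = set(ground_set)
    while remaining:
        best = None  # (marginal gain, element, solution index)
        for e in remaining:
            for i, sol in enumerate(solutions):
                if is_feasible(sol | {e}):
                    gain = f(sol | {e}) - f(sol)
                    if best is None or gain > best[0]:
                        best = (gain, e, i)
        if best is None or best[0] <= 0:
            break  # no feasible insertion improves any solution
        _, e, i = best
        solutions[i].add(e)
        remaining.discard(e)
    return max(solutions, key=f)

# Toy usage: modular objective under a cardinality constraint (a 1-system).
# simultaneous_greedy(range(10), lambda s: sum(s), lambda s: len(s) <= 3, 2)
```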
Up to ages of ~100 Myr, massive clusters are still swamped in large amounts of gas and dust, with considerable and uneven levels of extinction. At the same time, large grains (ices?) produced by type II supernovae profoundly alter the interstellar medium (ISM), resulting in extinction properties very different from those of the diffuse ISM. To obtain physically meaningful parameters of stars, from basic luminosities and effective temperatures to masses and ages, we must understand and measure the local extinction law. This problem affects all the massive young clusters discussed in this volume. We have developed a powerful method to unambiguously determine the extinction law in a uniform way across a cluster field, using multi-band photometry of red giant stars belonging to the red clump (RC). In the Large Magellanic Cloud, with about 20 RC stars per arcmin^2, we can easily derive a solid and self-consistent absolute extinction curve over the entire wavelength range of the photometry. Here, we present the extinction law of the Tarantula nebula (30 Dor) based on thousands of stars observed as part of the Hubble Tarantula Treasury Project.
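A minimal numerical sketch of the red-clump method, assuming the intrinsic (unreddened) RC centroid magnitude is known in each band: the shift of the observed RC centroid then gives the absolute extinction A_lambda band by band, and the ratios between bands trace the extinction curve. All band names and magnitudes below are invented placeholders, not measurements from the Tarantula field.

```python
import numpy as np

# Hypothetical observed and intrinsic red-clump centroid magnitudes.
bands = ["F555W", "F775W", "F110W", "F160W"]
m_obs = np.array([19.95, 18.90, 17.95, 17.30])  # reddened RC centroids
m_0 = np.array([19.20, 18.55, 17.75, 17.18])    # unreddened RC centroids

a_lambda = m_obs - m_0          # absolute extinction in each band
curve = a_lambda / a_lambda[0]  # extinction curve, normalized to F555W
print(dict(zip(bands, curve)))
```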
The reliable evaluation of the r-process production of the actinides, and careful estimates of the uncertainties affecting these predictions, are key ingredients especially in nucleo-cosmochronology studies based on the analysis of very metal-poor stars or on the composition of meteorites. This type of information is also required in order to make the best possible use of future high-precision data on the actinide composition of galactic cosmic rays, of the local interstellar medium, or of meteoritic grains of presumed circumstellar origin. This paper provides the practitioners in these various fields with the most detailed and careful analysis of the r-process actinide production available to date. In total, thirty-two different multi-event canonical calculations using different nuclear ingredients or astrophysics conditions are presented, and are considered to give a fair picture of the level of reliability of the predictions of the actinide production, at least in the framework of a simple r-process model. This simplicity is imposed by our inability to identify the proper astrophysical sites for the r-process. Constraints on the actinide yield predictions and associated uncertainties are suggested on the grounds of the measured abundances of r-nuclides, including Th and U, in the star CS 31082-001, and under the critical and questionable assumption of the 'universality' of the r-process. We also define alternative constraints based on the nucleo-cosmochronological results derived from the present actinide content of meteorites. Implications for the different above-cited fields, and in particular for nucleo-cosmochronometry, are discussed.
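For orientation, the chronometric ages referred to above rest on comparing a predicted (initial) actinide production ratio with the observed one. In the standard textbook Th/U form, with half-lives of about 14.05 Gyr for 232Th and 4.468 Gyr for 238U, the age follows as below; this is the generic chronometer expression, not a formula quoted from this paper.

```latex
% Generic Th/U chronometer; the 21.76 Gyr coefficient is
% \ln 10 / (\lambda_{\mathrm{U}} - \lambda_{\mathrm{Th}}) from the two half-lives.
\Delta t \simeq 21.76\,\mathrm{Gyr}\,
  \left[ \log\!\left(\frac{\mathrm{U}}{\mathrm{Th}}\right)_{0}
       - \log\!\left(\frac{\mathrm{U}}{\mathrm{Th}}\right)_{\mathrm{obs}} \right]
```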
The bright, well-known K5 giant Aldebaran, alpha Tau, is probably the star with the largest number of direct angular diameter determinations, achieved over a long time by several authors using various techniques. In spite of this wealth of data, or perhaps as a direct result of it, there is not very good agreement on a single angular diameter value. This is particularly unsettling if one considers that Aldebaran is also used as a primary calibrator for some angular resolution methods, notably for optical and infrared long baseline interferometry. Directly connected to Aldebaran's angular diameter and its uncertainties is its effective temperature, which has also been used for several empirical calibrations. Among the proposed explanations for the elusiveness of an accurate determination of the angular diameter of Aldebaran are the possibility of temporal variations as well as a possible dependence of the angular diameter on the wavelength. We present here a few very accurate new determinations obtained by means of lunar occultations and long baseline interferometry. We derive an average value of 19.96 ± 0.03 milliarcseconds for the uniform disk diameter. The corresponding limb-darkened value is 20.58 ± 0.03 milliarcseconds, or 44.2 ± 0.9 R_sun. We discuss this result in connection with previous determinations and with possible problems that may affect such measurements.
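As a quick consistency check on the quoted numbers, the conversion from angular diameter to linear radius takes only a few lines; the ~20 pc distance is an assumption here (roughly the Hipparcos value), since the abstract quotes only the final radius.

```python
import math

MAS_TO_RAD = math.pi / (180 * 3600 * 1000)  # milliarcseconds to radians
PC_TO_KM = 3.0857e13                        # one parsec in kilometers
R_SUN_KM = 6.957e5                          # solar radius in kilometers

theta_ld_mas = 20.58   # limb-darkened angular diameter from the abstract
distance_pc = 20.0     # assumed distance to Aldebaran

radius_rsun = 0.5 * theta_ld_mas * MAS_TO_RAD * distance_pc * PC_TO_KM / R_SUN_KM
print(radius_rsun)     # ~44 R_sun, consistent with the quoted 44.2 ± 0.9
```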
We give an overview of equations of state (EOS) which are currently available for simulations of core-collapse supernovae and neutron star mergers. A few selected important aspects of the EOS, such as the symmetry energy, the maximum mass of neutron stars, and cluster formation, are confronted with constraints from experiments and astrophysical observations. Only a few models are compatible even with this very restricted set of constraints. These remaining models illustrate the uncertainty of the uniform nuclear matter EOS at high densities. In addition, at finite temperatures the medium modifications of nuclear clusters represent a conceptual challenge. In conclusion, there has been significant development in recent years, but there is still a need for further improved general-purpose EOS tables.