Numerous models for grounded language understanding have been recently proposed, including (i) generic models that can be easily adapted to any given task and (ii) intuitively appealing modular models that require background knowledge to be instantiated. We compare both types of models on how well they lend themselves to a particular form of systematic generalization. Using a synthetic VQA test, we evaluate which models are capable of reasoning about all possible object pairs after training on only a small subset of them. Our findings show that the generalization of modular models is much more systematic, but also that it is highly sensitive to the module layout, i.e. to how exactly the modules are connected. We furthermore investigate whether modular models that generalize well could be made more end-to-end by learning their layout and parametrization. We find that end-to-end methods from prior work often learn inappropriate layouts or parametrizations that do not facilitate systematic generalization. Our results suggest that, in addition to modularity, systematic generalization in language understanding may require explicit regularizers or priors.
The concept of realism in quantum mechanics means that results of measurement are caused by physical variables, hidden or observable. Local hidden variables were proved unable to explain the results of measurements on entangled particles tested far away from one another. Some physicists then embraced the idea of nonlocal hidden variables. The present article proves that this idea is problematic: it runs into an impasse vis-à-vis special relativity.
Fine structure of giant resonances (GR) has been established in recent years as a global phenomenon across the nuclear chart and for different types of resonances. A quantitative description of the fine structure in terms of characteristic scales derived by wavelet techniques is discussed. By comparison with microscopic calculations of GR strength distributions, one can extract information on the role of the different decay mechanisms contributing to the width of GRs. The observed cross-section fluctuations also contain information on the level density (LD) of states with a given spin and parity defined by the multipolarity of the GR.
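The idea of characteristic scales extracted by wavelet techniques can be illustrated with a toy sketch. This is not the analysis pipeline of the work summarized above, only a minimal self-contained example: a synthetic "strength distribution" (a broad bump with superimposed fine fluctuations of a known period) is convolved with a Mexican-hat (Ricker) wavelet over a range of widths, and the width at which the mean wavelet power peaks identifies the fluctuation scale. All function names and signal parameters here are invented for illustration.

```python
import numpy as np

def ricker(t, a):
    """Unnormalized Mexican-hat (Ricker) wavelet of width a."""
    return (1.0 - (t / a) ** 2) * np.exp(-(t ** 2) / (2.0 * a ** 2))

def wavelet_power(signal, scales):
    """Mean squared wavelet coefficient at each scale (simple convolution CWT)."""
    power = []
    for a in scales:
        t = np.arange(-5 * a, 5 * a + 1, dtype=float)  # truncated wavelet support
        coef = np.convolve(signal, ricker(t, a), mode="same")
        power.append(np.mean(coef ** 2))
    return np.array(power)

# Toy "strength distribution": a smooth resonance bump plus fine structure
# fluctuating with a period of ~20 bins.
x = np.arange(400, dtype=float)
spectrum = np.exp(-(((x - 200.0) / 120.0) ** 2)) + 0.3 * np.sin(2.0 * np.pi * x / 20.0)

scales = np.arange(1, 13)
power = wavelet_power(spectrum - spectrum.mean(), scales)
best = scales[np.argmax(power)]
print("characteristic scale (bins):", best)
```

The zero-mean Ricker wavelet suppresses the broad bump, so the power-versus-scale curve peaks at a width set by the fine fluctuations; in real analyses, each such peak is read off as one characteristic scale of the fine structure.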
This year marks the thirtieth anniversary of the only supernova from which we have detected neutrinos: SN 1987A. The twenty or so neutrinos that were detected were mined to great depth in order to determine the events that occurred in the explosion and to place limits upon all manner of neutrino properties. Since 1987 the scale and sensitivity of the detectors capable of identifying neutrinos from a Galactic supernova have grown considerably, so that current-generation detectors are capable of detecting of order ten thousand neutrinos for a supernova at the Galactic Center. Next-generation detectors will increase that yield by another order of magnitude. Simultaneously with the growth of neutrino detection capability, our understanding of how massive stars explode and of how the neutrino interacts with hot and dense matter has also increased by a tremendous degree. The neutrino signal will contain much information on all manner of physics of interest to a wide community. In this review we describe the expected features of the neutrino signal, the detectors which will detect it, and the signatures one might look for in order to extract this physics.
Despite the groundbreaking successes of neural networks, contemporary models require extensive training with massive datasets and exhibit poor out-of-sample generalization. One proposed solution is to build systematicity and domain-specific constraints into the model, echoing the tenets of classical, symbolic cognitive architectures. In this paper, we consider the limitations of this approach by examining human adults' ability to learn an abstract reasoning task from a brief instructional tutorial and explanatory feedback for incorrect responses. We demonstrate that human learning dynamics and the ability to generalize outside the range of the training examples differ drastically from those of a representative neural network model, and that the model is brittle to changes in features not anticipated by its authors. We present further evidence from human data that the ability to consistently solve the puzzles was associated with education, particularly basic mathematics education, and with the ability to provide a reliably identifiable, valid description of the strategy used. We propose that rapid learning and systematic generalization in humans may depend on a gradual, experience-dependent process of learning-to-learn, using instructions and explanations to guide the construction of explicit abstract rules that support generalizable inferences.
We make extensive numerical studies of the masses and radii of proto-neutron stars during the first second after their birth in core-collapse supernova events. We use a quasi-static approach for the computation of proto-neutron star structure, built on parameterized entropy and electron-fraction profiles that are then evolved with neutrino cooling processes. We vary the equation of state of nuclear matter, the proto-neutron star mass, and the parameters of the initial profiles, to take into account our ignorance of the supernova progenitor properties. We show that even if the mass and radius of a proto-neutron star could be determined in the first second after its birth, e.g. from gravitational-wave emission, no information could be obtained on the corresponding cold neutron star, and therefore on the cold nuclear equation of state. Similarly, it seems unlikely that any property of the proto-neutron star equation of state (hot and not beta-equilibrated) could be determined either, mostly due to the lack of information on the entropy, or equivalently temperature, distribution in such objects.