Tracking the origin of the accelerating expansion of the Universe remains one of the most challenging research activities today. The final answer will depend on the precision and consistency of future data. The sensitivity of future surveys and the control of their errors are crucial. We focus on future supernova surveys in the light of the figure of merit defined by the Dark Energy Task Force. We compare different optimisations and emphasize the importance of understanding the level of systematic errors in this approach and their impact on the conclusions. We discuss different representations of the results for distinguishing $\Lambda$CDM from other theoretical models. We conclude that all representations should be controlled through combined analyses and consistency checks to avoid bias.
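For reference, the Dark Energy Task Force figure of merit invoked above is, up to a conventional normalisation, the inverse area of the error ellipse in the $(w_0, w_a)$ plane of the standard two-parameter equation of state; the definition recalled below is the standard one and is not specific to the survey configurations compared in this work:
\[
  w(a) = w_0 + w_a\,(1-a), \qquad
  \mathrm{FoM} \;\propto\; \frac{1}{\sqrt{\det \mathrm{Cov}(w_0, w_a)}}
  \;=\; \frac{1}{\sigma_{w_0}\,\sigma_{w_a}\,\sqrt{1-\rho^2}},
\]
so that shrinking either marginal uncertainty, or breaking the $w_0$-$w_a$ degeneracy, increases the figure of merit.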
We compare current and forecasted constraints on dynamical dark energy models from Type Ia supernovae and the cosmic microwave background using figures of merit based on the volume of the allowed dark energy parameter space. For a two-parameter dark energy equation of state that varies linearly with the scale factor, and assuming a flat universe, the area of the error ellipse can be reduced by a factor of ~10 relative to current constraints by future space-based supernova data and CMB measurements from the Planck satellite. If the dark energy equation of state is described by a more general basis of principal components, the expected improvement in volume-based figures of merit is much greater. While the forecasted precision for any single parameter is only a factor of 2-5 smaller than current uncertainties, the constraints on dark energy models bounded by $-1 < w < 1$ improve for approximately 6 independent dark energy parameters, resulting in a reduction of the total allowed volume of principal component parameter space by a factor of ~100. Typical quintessence models can be adequately described by just 2-3 of these parameters even given the precision of future data, leading to a more modest but still significant improvement. In addition to advances in supernova and CMB data, percent-level measurement of absolute distance and/or the expansion rate is required to ensure that dark energy constraints remain robust to variations in spatial curvature.
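As a minimal illustration of the volume-based figures of merit discussed above (a sketch with made-up covariance values, not the actual current or forecasted constraints), the quantity being compared is essentially the inverse root-determinant of the dark energy parameter covariance, whether for $(w_0, w_a)$ or for a larger set of principal-component amplitudes:

import numpy as np

def volume_fom(cov):
    # Inverse root-determinant of a parameter covariance matrix: proportional
    # to the inverse of the allowed volume of dark energy parameter space.
    return 1.0 / np.sqrt(np.linalg.det(np.atleast_2d(cov)))

# Hypothetical "current" covariance for (w0, wa): sigma_w0 = 0.1, sigma_wa = 0.4,
# correlation coefficient -0.8 (numbers chosen only for illustration)
rho = -0.8
cov_current = np.array([[0.10**2,           rho * 0.10 * 0.40],
                        [rho * 0.10 * 0.40, 0.40**2          ]])
# Shrinking both marginal errors by ~3x shrinks the ellipse area by ~10x
cov_future = cov_current / 3.0**2
print(volume_fom(cov_future) / volume_fom(cov_current))   # ~9

For an N-dimensional principal-component covariance the same expression measures the total allowed volume, which is why the improvement compounds across the ~6 well-constrained components.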
We determined frictional figures of merit for a pair of layered honeycomb nanostructures, such as graphane, fluorographene, MoS$_2$ and WO$_2$, moving over each other, by carrying out ab-initio calculations of the interlayer interaction under constant loading force. Using the Prandtl-Tomlinson model, we derived the critical stiffness required to avoid stick-slip behavior. We showed that these layered structures have low critical stiffness even under high loading forces because their charged surfaces repel each other. The intrinsic stiffness of these materials exceeds the critical stiffness, so they avoid the stick-slip regime and attain nearly dissipationless continuous sliding. Remarkably, tungsten dioxide displays much better performance than the others and heralds a potential superlubricant. The absence of mechanical instabilities, which ensures conservative lateral forces, is also confirmed directly by simulations of the sliding layers.
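For context, in the single-asperity Prandtl-Tomlinson picture with a sinusoidal interlayer potential, the critical stiffness separating continuous sliding from stick-slip takes the standard form recalled below; the actual corrugation amplitudes and stiffnesses obtained from the ab-initio calculations are not reproduced here:
\[
  V(x) = V_0 \cos\!\left(\frac{2\pi x}{a}\right), \qquad
  k_c = \frac{4\pi^2 V_0}{a^2},
\]
so that a contact whose lateral stiffness exceeds $k_c$ never encounters the mechanical instability responsible for stick-slip; a small corrugation amplitude $V_0$, as produced by mutually repelling charged surfaces, keeps $k_c$ low even at high loading force.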
The optimization and scheduling of scientific observations done with instrumentation supported by adaptive optics could greatly benefit from forecasts of PSF figures of merit (FWHM, Strehl Ratio, Encircled Energy and contrast), which depend on the AO instrument, the scientific target and the turbulence conditions during the observing night. In this contribution we explore the possibility of forecasting a few of the most useful PSF figures of merit (SR and FWHM). To achieve this goal, we use the optical turbulence forecasted on a short timescale by the mesoscale atmospheric model Astro-Meso-NH as input to PSF simulation software developed and tailored for specific AO instruments. A preliminary validation will be performed by comparing the results with on-sky PSF figures of merit measured on specific targets using the SCAO system SOUL (the FLAO upgrade) feeding the LUCI camera at the LBT, and SAXO, the extreme SCAO system feeding the high-resolution SPHERE instrument at the VLT. This study will pave the way for the implementation of operational forecasts of such figures of merit on the basis of existing operational forecast systems of the atmosphere (turbulence and atmospheric parameters). Here we focus our attention on the forecast of the on-axis PSF.
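As a rough sketch of the kind of conversion involved, one can relate forecasted turbulence and AO residuals to the two figures of merit using the Maréchal approximation for the Strehl ratio and simple seeing-limited/diffraction-limited FWHM estimates; the actual forecasts rely on dedicated PSF simulation software for SOUL/LUCI and SAXO/SPHERE, and the numbers below are purely illustrative:

import numpy as np

RAD2ARCSEC = 180.0 / np.pi * 3600.0

def strehl_marechal(sigma_wfe_nm, wavelength_nm):
    # Marechal approximation: SR ~ exp(-(2*pi*sigma/lambda)^2) for a residual
    # wavefront error sigma (nm RMS) left by the AO correction.
    return np.exp(-(2.0 * np.pi * sigma_wfe_nm / wavelength_nm) ** 2)

def fwhm_bracket_arcsec(wavelength_m, r0_m, diameter_m):
    # Seeing-limited FWHM ~ 0.98*lambda/r0 and diffraction limit ~ lambda/D,
    # converted to arcsec: a crude bracket for the delivered PSF FWHM.
    return (0.98 * wavelength_m / r0_m * RAD2ARCSEC,
            wavelength_m / diameter_m * RAD2ARCSEC)

# Hypothetical forecast for one night: r0 = 60 cm at the observing wavelength
# (H band, 1.65 micron), 120 nm RMS AO residual, 8.2 m telescope
print(strehl_marechal(120.0, 1650.0))            # ~0.81
print(fwhm_bracket_arcsec(1.65e-6, 0.60, 8.2))   # (seeing-limited, diffraction-limited)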
Before global-scale quantum networks become operational, it is important to consider how to evaluate their performance so that they can be designed to achieve the desired performance. We propose two practical figures of merit for the performance of a quantum network: the average connection time and the average largest entanglement cluster size. These quantities are based on the generation of elementary links in a quantum network, a crucial initial requirement that must be met before any long-range entanglement distribution can be achieved and one that is inherently probabilistic in current implementations. We obtain bounds on these figures of merit for a particular class of quantum repeater protocols consisting of repeat-until-success elementary link generation followed by joining measurements at intermediate nodes that extend the entanglement range. Our results lead to requirements on quantum memory coherence times, on repeater chain lengths needed to surpass the repeaterless rate limit, and on other aspects of quantum network implementations. These requirements follow solely from the inherently probabilistic nature of elementary link generation in quantum networks, and they apply to networks with arbitrary topology.
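To make explicit why both figures of merit are governed by probabilistic elementary link generation, the following sketch (with made-up success probabilities, not the bounds derived in the paper) estimates the mean time until every elementary link of a repeater chain has been heralded, within a memory-limited attempt budget:

import random

def sample_link_time(p_success, max_attempts):
    # Repeat-until-success generation: the number of attempts per elementary
    # link is geometric, capped here by a memory coherence budget.
    for t in range(1, max_attempts + 1):
        if random.random() < p_success:
            return t
    return None  # link not established within the coherence window

def average_connection_time(num_links, p_success, max_attempts, trials=100_000):
    # Monte Carlo mean of the time at which ALL links of the chain are up,
    # conditioned on every link succeeding within max_attempts.
    times = []
    for _ in range(trials):
        link_times = [sample_link_time(p_success, max_attempts) for _ in range(num_links)]
        if None not in link_times:
            times.append(max(link_times))
    return sum(times) / len(times) if times else float("inf")

# Hypothetical chain: 4 elementary links, 10% heralded success per attempt,
# coherence time long enough for ~50 attempts per link
print(average_connection_time(4, 0.1, 50))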
The precision of cosmological parameters derived from galaxy cluster surveys is limited by uncertainty in relating observable signals to cluster mass. We demonstrate that a small mass-calibration follow-up program can significantly reduce this uncertainty and improve parameter constraints, particularly when the follow-up targets are judiciously chosen. To this end, we apply a simulated annealing algorithm to maximize the dark energy information at fixed observational cost, and find that optimal follow-up strategies can reduce the observational cost required to achieve a specified precision by up to an order of magnitude. Considering clusters selected from optical imaging in the Dark Energy Survey, we find that approximately 200 low-redshift X-ray clusters or massive Sunyaev-Zeldovich clusters can improve the dark energy figure of merit by 50%, provided that the follow-up mass measurements involve no systematic error. In practice, the actual improvement depends on (1) the uncertainty in the systematic error in follow-up mass measurements, which needs to be controlled at the 5% level to avoid severe degradation of the results; and (2) the scatter in the optical richness-mass distribution, which needs to be made as tight as possible to improve the efficacy of follow-up observations.
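To make the optimisation step concrete, the sketch below shows a generic simulated-annealing loop over which clusters to follow up, with a toy information function and cost model standing in for the actual dark energy figure of merit and observational cost (all names and numbers here are hypothetical):

import math, random

def anneal_followup(n_clusters, cost, budget, info, steps=20_000, t0=1.0, t1=1e-3):
    # Simulated annealing over a binary selection of follow-up targets:
    # maximise info(selection) subject to total cost <= budget.
    select = [False] * n_clusters
    current = best = info(select)
    best_select = select[:]
    for step in range(steps):
        temp = t0 * (t1 / t0) ** (step / steps)        # geometric cooling schedule
        i = random.randrange(n_clusters)               # propose flipping one target
        select[i] = not select[i]
        if sum(c for c, s in zip(cost, select) if s) > budget:
            select[i] = not select[i]                  # reject: over the cost budget
            continue
        trial = info(select)
        if trial >= current or random.random() < math.exp((trial - current) / temp):
            current = trial                            # accept (always uphill, sometimes downhill)
            if current > best:
                best, best_select = current, select[:]
        else:
            select[i] = not select[i]                  # reject and undo the flip
    return best, best_select

# Toy problem: 200 candidate clusters with random costs and diminishing-returns information
random.seed(0)
costs = [random.uniform(1.0, 3.0) for _ in range(200)]
gains = [random.uniform(0.5, 2.0) for _ in range(200)]
toy_info = lambda sel: math.log1p(sum(g for g, s in zip(gains, sel) if s))
print(anneal_followup(200, costs, budget=100.0, info=toy_info)[0])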