Mass loss rate formulae are derived from observations or from suites of models. For theoretical models, the following have all been identified as factors that greatly influence the atmospheric structure and mass loss rates: pulsation, with piston amplitude scaling appropriately with stellar luminosity $L$; dust nucleation and growth, with radiation pressure and grain-gas interactions and appropriate dependence on temperature and density; non-grey opacity with at least 51 frequency samples; non-LTE and departures from radiative equilibrium in the compressed and expanding flows; and non-equilibrium processes affecting the composition (grain formation; molecular chemistry). No one set of models yet includes all the factors known to be important; in fact, it is very difficult to construct a model that simultaneously includes these factors and remains useful for computing spectra. Therefore, although theoretical model grids are needed to separate the effects of $M$, $L$, $R$ and/or $T_{\mathrm{eff}}$, or $Z$ on the mass loss rates, these models must be carefully checked against observations. Getting the right order of magnitude for the mass loss rate is only the first step in such a comparison, and is not sufficient to determine whether the mass loss formula is correct. However, there are observables that do test the validity of mass loss formulae, as they depend directly on $d\log \dot M/d\log L$, $d\log \dot M/d\log R$, or $d\log \dot M/d\log P$.
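As an illustrative example (an assumption for concreteness, not a formula from the paper), suppose a mass loss formula has the power-law form often adopted in the literature,
$$ \dot M \propto L^{\alpha}\, R^{\beta}\, P^{\gamma}, $$
so that $d\log \dot M/d\log L = \alpha$, $d\log \dot M/d\log R = \beta$, and $d\log \dot M/d\log P = \gamma$. Observables sensitive to these logarithmic slopes then test the exponents directly: a formula that reproduces the overall magnitude of $\dot M$ but predicts the wrong slopes would fail such a comparison even though it passes the order-of-magnitude check.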
In the present paper, we investigate the cosmographic problem using the bias-variance trade-off. We find that both the $z$-redshift and the $y=z/(1+z)$-redshift expansions can yield estimates with small bias, which means that cosmography can describe the supernova data quite accurately. Minimizing the risk suggests that cosmography up to second order is the best approximation. Forecasting the constraints from future measurements, we find that future supernova and redshift-drift data can significantly improve the constraints and thus have the potential to solve the cosmographic problem. We also explore what cosmography implies for the deceleration parameter and the dark energy equation of state $w(z)$. We find that supernova cosmography cannot give stable estimates of them. However, much useful information can still be obtained; for example, cosmography favors a complicated dark energy with varying $w(z)$, with the derivative $dw/dz<0$ at low redshift. Cosmography is therefore helpful for modeling the dark energy.
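For context (the expansion itself is standard and not written out in the abstract), second-order cosmography amounts to a truncated Taylor series for the luminosity distance, e.g. in the $z$-redshift
$$ d_L(z) = \frac{c}{H_0}\left[ z + \frac{1}{2}\left(1-q_0\right) z^{2} + \mathcal{O}(z^{3}) \right], $$
where $H_0$ is the Hubble constant and $q_0$ the present deceleration parameter. The alternative variable $y=z/(1+z)$ maps the whole range $0 \le z < \infty$ into $0 \le y < 1$ and is intended to improve the convergence of the series at high redshift.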
We present the simplest nuclear energy density functional (NEDF) to date, determined by only 4 significant phenomenological parameters, yet capable of fitting measured nuclear masses with better accuracy than the Bethe-Weizsäcker mass formula, while also describing density structures (charge radii, neutron skins, etc.) and time-dependent phenomena (induced fission, giant resonances, low energy nuclear collisions, etc.). The 4 significant parameters are necessary to describe bulk nuclear properties (binding energies and charge radii); an additional 2 to 3 parameters have little influence on the bulk nuclear properties, but allow independent control of the density dependence of the symmetry energy and of isovector excitations, in particular the Thomas-Reiche-Kuhn sum rule. This Hohenberg-Kohn style of density functional theory successfully realizes Weizsäcker's ideas and provides a computationally tractable model for a variety of static nuclear properties and dynamics, from finite nuclei to neutron stars, where it will also provide new insight into the physics of r-process nucleosynthesis and neutron star crust structure. This new NEDF clearly separates the bulk geometric properties - volume, surface, symmetry, and Coulomb energies, which amount to about 8 MeV per nucleon or up to 2000 MeV per nucleus for heavy nuclei - from finer details related to shell effects, pairing, isospin breaking, etc., which contribute at most a few MeV for the entire nucleus. Thus it provides a systematic framework for organizing the various contributions to the NEDF. Measured and calculated physical observables - symmetry and saturation properties, the neutron matter equation of state, and the frequency of giant dipole resonances - lead directly to new terms not considered in current NEDF parameterizations.
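For reference, the Bethe-Weizsäcker mass formula mentioned above already encodes the bulk volume, surface, Coulomb, and symmetry contributions discussed here; for a nucleus with $Z$ protons and $N$ neutrons ($A=N+Z$) the binding energy is
$$ B(N,Z) \simeq a_V A - a_S A^{2/3} - a_C \frac{Z^2}{A^{1/3}} - a_A \frac{(N-Z)^2}{A} + \delta(A), $$
with $a_V \approx 16$ MeV and $\delta(A)$ a small pairing correction; the competition among these terms yields the quoted bulk binding of roughly 8 MeV per nucleon, while shell, pairing, and isospin-breaking effects contribute only a few MeV per nucleus.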
The DNA molecule, apart from carrying the genetic information, plays a crucial role in a variety of biological processes and finds applications in drug design, nanotechnology, and nanoelectronics. The molecule undergoes significant structural transitions under the influence of forces arising in physiological and non-physiological environments. Here, we summarize the insights gained from simulations and single-molecule experiments on the structural transitions and mechanics of DNA under force, as well as its elastic properties, in various environmental conditions, and discuss appealing future directions.
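As background (the abstract does not single out a specific model, so this is our illustrative choice), the entropic elasticity of double-stranded DNA at low to moderate force is commonly described by the worm-like chain model, e.g. through the Marko-Siggia interpolation formula
$$ F(x) = \frac{k_B T}{L_p}\left[ \frac{1}{4\left(1 - x/L_0\right)^{2}} - \frac{1}{4} + \frac{x}{L_0} \right], $$
where $L_p \approx 50$ nm is the persistence length and $L_0$ the contour length; force-induced structural transitions such as overstretching near $\sim 65$ pN fall outside this purely elastic regime and are among the transitions discussed in the review.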
String theory has transformed our understanding of geometry, topology and spacetime. Thus, for this special issue of Foundations of Physics commemorating Forty Years of String Theory, it seems appropriate to step back and ask what we do not understand. As I will discuss, time remains the least understood concept in physical theory. While we have made significant progress in understanding space, our understanding of time has not progressed much beyond the level of a century ago when Einstein introduced the idea of space-time as a combined entity. Thus, I will raise a series of open questions about time, and will review some of the progress that has been made as a roadmap for the future.
We present updated values for the mass-mixing parameters relevant to neutrino oscillations, with particular attention to emerging hints in favor of $\theta_{13}>0$. We also discuss the status of absolute neutrino mass observables, and a possible approach to constraining theoretical uncertainties in neutrinoless double beta decay. Desiderata for all these issues are also briefly mentioned.
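For context (no formula is given in the abstract itself), the neutrinoless double beta decay observable in question is the effective Majorana mass,
$$ m_{\beta\beta} = \left| \sum_{i=1}^{3} U_{ei}^{2}\, m_i \right|, $$
which depends on the oscillation mass-mixing parameters (including $\theta_{13}$), on the absolute mass scale, and on unknown Majorana phases; the dominant theoretical uncertainties, however, come from the nuclear matrix elements of the decay.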