The launch of the James Webb Space Telescope will open up a new window for observations at the highest redshifts, reaching out to z ~ 15. However, even with this new facility, the first stars will remain out of reach: they are born in small minihalos, with luminosities too faint to be detected even with the longest exposure times. In this paper, we investigate the basic properties of the Ultimately Large Telescope, a facility that can detect Population III star formation regions at high redshift. Observations will take place in the near-infrared, and a moon-based facility is therefore proposed. The instrument needs to reach magnitudes as faint as 39 mag$_{\mathrm{AB}}$, corresponding to a primary mirror of about 100 m in diameter. Assuming JWST NIRCam filters, we estimate that Pop III sources will have unique signatures in colour-colour space and can be identified unambiguously.
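As a rough illustration of why a mirror of order 100 m is needed, one can scale JWST's limiting magnitude with aperture. This is a minimal sketch, not the paper's calculation: it assumes a background-limited point source (SNR scaling with collecting area, D^2) and an illustrative JWST deep-exposure limit of ~33 AB.

```python
import math

def limiting_magnitude(d_m, jwst_limit_ab=33.0, jwst_d_m=6.5):
    """Rough background-limited AB limiting magnitude for an aperture of
    diameter d_m (metres), scaled from an assumed JWST deep limit.
    SNR ~ D^2 implies a gain of 5*log10(D / 6.5) magnitudes."""
    return jwst_limit_ab + 5.0 * math.log10(d_m / jwst_d_m)

print(round(limiting_magnitude(100.0), 1))  # ~38.9 AB for a 100 m mirror
```

Under these assumed numbers, a 100 m aperture lands close to the 39 mag$_{\mathrm{AB}}$ requirement quoted above; different background or exposure assumptions would shift the result by a few tenths of a magnitude.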
Isolated Population III stars are postulated to exist at approximately z = 10-30 and may attain masses up to a few hundred solar masses. The James Webb Space Telescope (JWST) is the next large space-based infrared telescope and is scheduled for launch in 2014. With its 6.5-metre primary mirror, it will probably be able to detect some of the first galaxies forming in the early Universe. A natural question is whether it will also be able to see any isolated Population III stars. Here, we calculate the apparent broadband AB magnitudes of 300 solar-mass Population III stars in JWST filters at z = 10-20. Our calculations are based on realistic stellar atmospheres and take into account the potential flux contribution from the surrounding HII region. The gravitational magnification boost achieved when pointing JWST through a foreground galaxy cluster is also considered. Using this machinery, we derive the conditions required for JWST to detect Population III stars in isolation. We find that a detection of individual Population III stars with JWST is unlikely at these redshifts. However, once gravitational lensing is taken into account, the main problem is not necessarily that these stars are too faint, but that their surface number densities are too low.
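The lensing boost enters the apparent-magnitude budget in a simple way: a magnification $\mu$ brightens a source by $2.5\log_{10}\mu$ magnitudes on top of the distance modulus. The sketch below illustrates this bookkeeping with an assumed flat-$\Lambda$CDM cosmology ($H_0 = 70$ km/s/Mpc, $\Omega_m = 0.3$) and placeholder absolute magnitudes; it is not the paper's atmosphere-based calculation.

```python
import math

C_KM_S = 299792.458
H0 = 70.0        # km/s/Mpc (assumed)
OMEGA_M = 0.3    # assumed flat LCDM

def e_of_z(z):
    """Dimensionless Hubble rate E(z) for flat LCDM."""
    return math.sqrt(OMEGA_M * (1.0 + z) ** 3 + (1.0 - OMEGA_M))

def luminosity_distance_mpc(z, steps=10000):
    """Luminosity distance via trapezoidal integration of the comoving distance."""
    dz = z / steps
    s = 0.5 * (1.0 / e_of_z(0.0) + 1.0 / e_of_z(z))
    for i in range(1, steps):
        s += 1.0 / e_of_z(i * dz)
    d_c = (C_KM_S / H0) * s * dz
    return (1.0 + z) * d_c

def apparent_ab(abs_mag, z, mu=1.0):
    """Apparent AB magnitude: distance modulus minus the lensing boost 2.5*log10(mu)."""
    d_l_pc = luminosity_distance_mpc(z) * 1e6
    return abs_mag + 5.0 * math.log10(d_l_pc / 10.0) - 2.5 * math.log10(mu)

# A magnification of 100 brightens a z = 10 source by exactly 5 magnitudes:
print(round(apparent_ab(-10.0, 10.0) - apparent_ab(-10.0, 10.0, mu=100.0), 1))  # 5.0
```

The distance modulus at z = 10 is roughly 50 mag for these parameters, which is why even cluster-scale magnifications of 10-100 still leave individual stars near or beyond a 6.5 m telescope's reach.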
Artificial intelligence (AI) generally, and machine learning (ML) specifically, demonstrate impressive practical success in many different application domains, e.g. in autonomous driving, speech recognition, or recommender systems. Deep learning approaches, trained on extremely large data sets or using reinforcement learning methods, have even exceeded human performance in visual tasks, particularly in playing games such as Atari or mastering the game of Go. There are remarkable results even in the medical domain. The central problem of such models is that they are regarded as black boxes: even if we understand the underlying mathematical principles, they lack an explicit declarative knowledge representation and hence have difficulty generating the underlying explanatory structures. This calls for systems that make decisions transparent, understandable and explainable. A major motivation for our approach is the rise of legal and privacy concerns. The new European General Data Protection Regulation, entering into force on May 25th, 2018, will make black-box approaches difficult to use in business. This does not imply a ban on automatic learning approaches or an obligation to explain everything all the time; however, there must be a possibility to make the results re-traceable on demand. In this paper we outline some of our research topics in the context of the relatively new area of explainable AI, with a focus on applications in medicine, which is a very particular domain because medical professionals work mostly with distributed, heterogeneous and complex sources of data. We concentrate on three sources: images, *omics data and text. We argue that research in explainable AI would generally help to facilitate the implementation of AI/ML in the medical domain, and specifically help to foster transparency and trust.
We examine the possibility of soft cosmology, namely small deviations from the usual framework due to the effective appearance of soft-matter properties in the sectors of the Universe. One effect of such a scenario would be that dark energy exhibits a different equation-of-state parameter at large scales (which determine the expansion of the universe) and at intermediate scales (which determine sub-horizon clustering and large-scale structure formation). Concerning soft dark matter, we show that it can effectively arise through dark-energy clustering, even if dark energy itself is not soft. We propose a novel parametrization that introduces softness parameters for the dark sectors. Although the background evolution remains unaffected, even a slightly non-trivial softness parameter can, owing to the extreme sensitivity of and significant effects on the global properties, improve the clustering behaviour and alleviate e.g. the $f\sigma_8$ tension. Lastly, an extension of cosmological perturbation theory and a detailed statistical-mechanical analysis, incorporating complexity and estimating the scale-dependent behaviour from first principles, are necessary and would provide a robust argumentation in favour of soft cosmology.
We investigate and discuss protostellar discs in terms of where the various non-ideal magnetohydrodynamic (MHD) processes are important. We find that the traditional picture of a magnetised disc (where Ohmic resistivity dominates near the mid-plane, surrounded by a region dominated by the Hall effect, with the remainder of the disc dominated by ambipolar diffusion) is a great oversimplification. In simple parameterised discs, we find that the Hall effect is typically the dominant term throughout the majority of the disc. More importantly, throughout much of our parameterised discs, at least two non-ideal processes have coefficients within a factor of 10 of one another, indicating that both are important and that naming a single dominant term underplays the importance of the others. Discs that were self-consistently formed in our previous studies are also dominated by the Hall effect, and the ratio of the ambipolar-diffusion and Hall coefficients is typically less than 10, suggesting that both terms are equally important and that listing a dominant term is misleading. These conclusions become more robust once the magnetic field geometry is taken into account. In agreement with the literature we review, we conclude that non-ideal MHD processes are important for the formation and evolution of protostellar discs. Ignoring any of them, especially ambipolar diffusion and the Hall effect, yields an incorrect description of disc evolution.
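The factor-of-10 criterion above can be stated operationally: given the magnitudes of the non-ideal coefficients at a disc location, report the largest one and flag any others within a factor of 10 of it. The sketch below uses placeholder coefficient values (arbitrary units), not numbers from the paper.

```python
def classify(eta):
    """Given a dict of non-ideal MHD coefficient magnitudes, e.g.
    {'ohmic': ..., 'hall': ..., 'ambipolar': ...}, return the largest term
    and the list of other terms within a factor of 10 of it; when that list
    is non-empty, calling the largest term 'dominant' is misleading."""
    dominant = max(eta, key=eta.get)
    comparable = [name for name, value in eta.items()
                  if name != dominant and eta[dominant] / value < 10.0]
    return dominant, comparable

# Placeholder mid-plane values (arbitrary units, for illustration only):
print(classify({'ohmic': 1e15, 'hall': 4e15, 'ambipolar': 6e14}))
# ('hall', ['ohmic', 'ambipolar'])
```

Here the Hall coefficient is formally the largest, but both other terms sit within a factor of 10 of it, so all three processes would need to be retained.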
Recently, the formation of primordial black holes (PBHs) from the collapse of primordial fluctuations has received much attention. The abundance of PBHs formed during radiation domination is sensitive to the tail of the probability distribution of the primordial fluctuations. We quantify the level of fine-tuning due to this sensitivity. For example, if the main source of dark matter is PBHs of mass $10^{-12}M_\odot$, then anthropic reasoning suggests that the dark matter to baryon ratio should range between 1 and 300. For this to happen, the root-mean-square amplitude of the curvature perturbation has to be fine-tuned within a $7.1\%$ range. As another example, if the recently detected gravitational-wave events are to be explained by PBHs, the corresponding degree of fine-tuning is $3.8\%$. We also find, however, that these fine-tunings can be relaxed if the primordial fluctuations are highly non-Gaussian, or if the PBHs form during an early matter-dominated phase. We also note that no fine-tuning is needed in the scenario where the universe is reheated by evaporating PBHs whose Planck-mass relics serve as dark matter.
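The origin of the percent-level fine-tuning can be illustrated with the simplest Press-Schechter-like estimate, in which the fraction of horizon patches collapsing to PBHs is $\beta \approx \mathrm{erfc}\!\left(\delta_c / (\sqrt{2}\,\sigma)\right)$ for Gaussian fluctuations of rms amplitude $\sigma$. The threshold and amplitude values below are illustrative assumptions, not the paper's numbers.

```python
import math

DELTA_C = 0.45  # assumed collapse threshold during radiation domination

def beta(sigma):
    """Gaussian tail estimate of the PBH formation fraction."""
    return math.erfc(DELTA_C / (math.sqrt(2.0) * sigma))

# Because beta lives on the exponential tail, a ~7% increase in sigma
# changes the abundance by roughly two orders of magnitude:
s = 0.05
print(beta(1.071 * s) / beta(s))
```

This exponential sensitivity is exactly why a fixed target range for the PBH dark-matter fraction translates into only a few-percent allowed window for $\sigma$, and why non-Gaussian tails or a matter-dominated formation era (which soften the $\sigma$-dependence) relax the tuning.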