
A User-Friendly Dark Energy Model Generator

Added by Dragan Huterer
Publication date: 2015
Fields: Physics
Language: English





We provide software with a graphical user interface to calculate the phenomenology of a wide class of dark energy models featuring multiple scalar fields. The user chooses a subclass of models and, if desired, initial conditions, or else a range of initial parameters for Monte Carlo. The code calculates the energy density of components in the universe, the equation of state of dark energy, and the linear growth of density perturbations, all as a function of redshift and scale factor. The output also includes an approximate conversion into the average equation of state, as well as the common $(w_0, w_a)$ parametrization. The code is available here: http://github.com/kahinton/Dark-Energy-UI-and-MC
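The $(w_0, w_a)$ output mentioned above is the standard Chevallier–Polarski–Linder (CPL) parametrization. As a minimal sketch (not the repository's own code), here is how an equation of state in this form maps to a dark energy density history as a function of scale factor:

```python
import math

def w_cpl(a, w0=-1.0, wa=0.0):
    """CPL equation of state w(a) = w0 + wa * (1 - a)."""
    return w0 + wa * (1.0 - a)

def rho_de_ratio(a, w0=-1.0, wa=0.0):
    """Dark energy density relative to today for the CPL form:
    rho(a)/rho0 = a^(-3(1+w0+wa)) * exp(-3 wa (1-a)).
    A cosmological constant (w0=-1, wa=0) gives a constant density."""
    return a ** (-3.0 * (1.0 + w0 + wa)) * math.exp(-3.0 * wa * (1.0 - a))

print(w_cpl(1.0, w0=-0.9, wa=0.3))        # equation of state today: -0.9
print(rho_de_ratio(1.0, w0=-0.9, wa=0.3)) # normalized to 1 at a = 1
```

The redshift form follows from $a = 1/(1+z)$, so the same functions cover the redshift output as well.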




Read More

Model-independent reconstructions of dark energy have received some attention. The approach that reconstructs the dimensionless coordinate distance and its first two derivatives using a polynomial fit in different redshift windows is well developed \cite{DalyDjorgovski1,DalyDjorgovski2,DalyDjorgovski3}. In this work we offer new insights into the problem by focusing on two types of observational probes: SNeIa and GRBs. Our results highlight some of the intrinsic weaknesses of the method. One of the directions we follow is to consider updated observational samples. Our results indicate that conclusions on the main dark energy features drawn from this method are intimately related to the features of the samples themselves (which are far from ideal). This is particularly true of GRBs, which prove to be poor performers in this context. In contrast to the original works, we conclude that they cannot be used for cosmological purposes, and that the state of the art does not allow them to be regarded on the same quality footing as SNeIa. The next direction we contribute to is the question of how adjusting some parameters (window width, overlap, selection criteria) affects the results. We again find considerable sensitivity to these features. We then try to establish the current redshift range over which one can make solid predictions on dark energy evolution. Finally, we strengthen the earlier view that this method is modest in the sense that it provides only a picture of the global trend. On the other hand, we believe it offers an interesting complement to other approaches, given that it rests on minimal assumptions.
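The window-based fitting this abstract describes can be sketched as a toy in a few lines; the data here are synthetic, and `width`, `overlap`, and `deg` are hypothetical tuning values standing in for the choices the text says the results are sensitive to (this is not the pipeline of the cited works):

```python
import numpy as np

def windowed_polyfit(z, y, width=0.4, overlap=0.2, deg=2):
    """Fit a low-order polynomial to (z, y) in overlapping redshift
    windows; return a list of (window centre, fit coefficients)."""
    fits = []
    z0, zmax = z.min(), z.max()
    step = width - overlap
    while z0 < zmax:
        mask = (z >= z0) & (z <= z0 + width)
        if mask.sum() > deg:  # need more points than the polynomial degree
            coeffs = np.polyfit(z[mask], y[mask], deg)
            fits.append((z0 + width / 2.0, coeffs))
        z0 += step
    return fits

# Synthetic "coordinate distance" data for illustration only.
rng = np.random.default_rng(0)
z = np.sort(rng.uniform(0.01, 1.5, 200))
y = z + 0.25 * z**2 + rng.normal(0.0, 0.01, z.size)
fits = windowed_polyfit(z, y)
print(len(fits))  # number of windows that had enough points to fit
```

Derivatives of the fitted polynomials within each window are what the reconstruction method differentiates to reach the dark energy quantities, which is why window width and overlap propagate so strongly into the conclusions.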
Fei Yu, Jingfei Zhang, Jianbo Lu (2010)
So far, no theory or observational datum rules out an interaction between dark energy and dark matter. We naturally extend the holographic dark energy (HDE) model proposed by Granda and Oliveros, in which the dark energy density involves not only the square of the Hubble scale but also the time derivative of the Hubble scale, to the case with interaction, and obtain analytic forms for the cosmic parameters under specific boundary conditions. The various behaviors of the cosmic expansion depend on the introduced numerical parameters, which are also constrained. The more general interacting model inherits the features of the previous HDE models, keeping the theory consistent.
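In this Granda–Oliveros setup the dark energy density and the coupled continuity equations take the schematic form below (a sketch of the standard equations with the interaction term denoted $Q$; sign and normalization conventions vary between papers):

```latex
\rho_{\rm DE} = 3 M_p^2 \left( \alpha H^2 + \beta \dot{H} \right), \qquad
\dot{\rho}_{\rm DE} + 3H\,(1 + w_{\rm DE})\,\rho_{\rm DE} = -Q, \qquad
\dot{\rho}_m + 3H \rho_m = Q .
```

With $Q = 0$ the two sectors evolve independently and the original non-interacting HDE model is recovered, which is the consistency property the abstract refers to.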
When implementing functionality which requires sparse matrices, there are numerous storage formats to choose from, each with advantages and disadvantages. To achieve good performance, several formats may need to be used in one program, requiring explicit selection and conversion between the formats. This can be both tedious and error-prone, especially for non-expert users. Motivated by this issue, we present a user-friendly sparse matrix class for the C++ language, with a high-level application programming interface deliberately similar to the widely used MATLAB language. The class internally uses two main approaches to achieve efficient execution: (i) a hybrid storage framework, which automatically and seamlessly switches between three underlying storage formats (compressed sparse column, coordinate list, Red-Black tree) depending on which format is best suited for specific operations, and (ii) template-based meta-programming to automatically detect and optimise execution of common expression patterns. To facilitate relatively quick conversion of research code into production environments, the class and its associated functions provide a suite of essential sparse linear algebra functionality (e.g., arithmetic operations, submatrix manipulation) as well as high-level functions for sparse eigendecompositions and linear equation solvers. The latter are achieved by providing easy-to-use abstractions of the low-level ARPACK and SuperLU libraries. The source code is open and provided under the permissive Apache 2.0 license, allowing unencumbered use in commercial products.
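The format switching at the heart of this design can be illustrated with a toy, pure-Python conversion from coordinate-list (COO) triplets, which are cheap to build incrementally, to compressed sparse column (CSC) storage, which is efficient for arithmetic and solves. The real C++ class performs this kind of conversion (and more) automatically behind a single interface:

```python
def coo_to_csc(rows, cols, vals, n_cols):
    """Toy COO -> CSC conversion: returns (indptr, row_indices, data).
    Entries of column j live at positions indptr[j]:indptr[j+1]."""
    # Sort triplets column-major, the order CSC stores them in.
    order = sorted(range(len(vals)), key=lambda k: (cols[k], rows[k]))
    row_indices = [rows[k] for k in order]
    data = [vals[k] for k in order]
    # Count entries per column, then turn counts into offsets.
    indptr = [0] * (n_cols + 1)
    for k in order:
        indptr[cols[k] + 1] += 1
    for j in range(n_cols):
        indptr[j + 1] += indptr[j]
    return indptr, row_indices, data

indptr, row_indices, data = coo_to_csc([0, 2, 1], [1, 0, 2], [3.0, 5.0, 4.0], 3)
print(indptr)       # [0, 1, 2, 3]
print(row_indices)  # [2, 0, 1]
```

Hiding this bookkeeping is exactly what makes such a class attractive to non-expert users: insertion-friendly and arithmetic-friendly formats are exposed as one matrix type.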
When using large-batch training to speed up stochastic gradient descent, learning rates must adapt to new batch sizes in order to maximize speed-ups and preserve model quality. Re-tuning learning rates is resource intensive, while fixed scaling rules often degrade model quality. We propose AdaScale SGD, an algorithm that reliably adapts learning rates to large-batch training. By continually adapting to the gradient's variance, AdaScale automatically achieves speed-ups for a wide range of batch sizes. We formally describe this quality with AdaScale's convergence bound, which maintains final objective values even as batch sizes grow large and the number of iterations decreases. In empirical comparisons, AdaScale trains well beyond the batch size limits of popular linear learning rate scaling rules. This includes large-batch training with no model degradation for machine translation, image classification, object detection, and speech recognition tasks. AdaScale's qualitative behavior is similar to that of warm-up heuristics, but unlike warm-up, this behavior emerges naturally from a principled mechanism. The algorithm introduces negligible computational overhead and no new hyperparameters, making AdaScale an attractive choice for large-scale training in practice.
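The variance-based adaptation described above can be sketched as a gain factor applied to the base learning rate. This is a simplified reading of the algorithm, not the authors' reference implementation; in practice the gradient variance and squared gradient norm are estimated online with moving averages:

```python
def adascale_gain(grad_var, grad_sq_norm, scale):
    """Gain r = (var + |g|^2) / (var/S + |g|^2), with S the batch-size
    scale factor, so 1 <= r <= S. When gradient noise dominates, r -> 1
    (no speed-up is safe); when the mean gradient dominates, r -> S,
    recovering linear learning-rate scaling."""
    return (grad_var + grad_sq_norm) / (grad_var / scale + grad_sq_norm)

base_lr = 0.1
r = adascale_gain(grad_var=2.0, grad_sq_norm=3.0, scale=8.0)
scaled_lr = r * base_lr  # learning rate actually applied at this step
```

Early in training the variance term tends to dominate, so the gain starts near 1 and grows as gradients become more coherent, which is why the behavior resembles a warm-up schedule without being one.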
The braneworld model proposed by Dvali, Gabadadze and Porrati (DGP) leads to an accelerated universe without a cosmological constant or other form of dark energy for the positive branch $(\epsilon=+1)$. For the negative branch $(\epsilon=-1)$ we have investigated the behavior of a model with a holographic Ricci-like dark energy and dark matter, where the IR cutoff takes the form $\alpha H^2 + \beta\dot{H}$, $H$ being the Hubble parameter and $\alpha$, $\beta$ positive constants of the model. We perform an analytical study of the model in the late-time dark-energy-dominated epoch, where we obtain a solution for $r_c H(z)$, with $r_c$ the leakage scale of gravity into the bulk, and conditions on the holographic parameters $\alpha$ and $\beta$ in the negative branch required to satisfy the weak energy condition and an accelerated universe. On the other hand, we compare the model against late-time cosmological data using the latest type Ia supernova sample of the Joint Light-curve Analysis (JLA), in order to constrain the holographic parameters in the negative branch, as well as $r_c H_0$ in the positive branch, where $H_0$ is the Hubble constant. We find that the model fits the data well and that the most likely values of $(r_c H_0, \alpha, \beta)$ lie in the permitted region found from the analytical solution in a dark-energy-dominated universe. We give a justification for using a 4D holographic cutoff for the dark energy in the 5-dimensional DGP model. Finally, using the Bayesian Information Criterion we find that this model is disfavored compared with the flat $\Lambda$CDM model.