We have posted the source code for our cloud model for public use, as a tool for the intercomparison of planetary radiation transport models that attempt to incorporate the physics of cloud condensation.
The source code suggestions provided by current IDEs rely mostly on static type information. As a result, they often propose suggestions that are irrelevant to the particular context. Recently, deep learning-based approaches have shown great potential in modeling source code for various software engineering tasks. However, these techniques lack the generalization and robustness needed to adopt such models in a real-world software development environment. This letter presents \textit{DeepVS}, an end-to-end deep neural code completion tool that learns from existing codebases by exploiting a bidirectional Gated Recurrent Unit (BiGRU) neural network. The proposed tool can provide source code suggestions instantly in an IDE using the pre-trained BiGRU network. The evaluation of this work is two-fold: quantitative and qualitative. Through extensive evaluation on ten real-world open-source software systems, the proposed method demonstrates significant performance gains and practical applicability. Moreover, the results suggest that the \textit{DeepVS} tool can suggest zero-day (unseen) code tokens by learning coding patterns from real-world software systems.
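The architecture described above can be illustrated with a minimal sketch: a bidirectional GRU reads a token context and a linear layer projects the combined forward/backward states onto vocabulary logits, from which the top-k tokens become the completion suggestions. The toy vocabulary, dimensions, and randomly initialised weights below are illustrative assumptions standing in for parameters learned from a real codebase; this is not the authors' actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy vocabulary; a real tool would learn one from a codebase.
VOCAB = ["def", "main", "(", ")", ":", "return", "x", "+", "1", "<eos>"]
V, E, H = len(VOCAB), 8, 16  # vocab size, embedding dim, hidden dim

emb = rng.normal(0, 0.1, (V, E))  # random stand-in for learned embeddings

def gru_params():
    """One direction's weight matrices for update, reset, candidate gates."""
    return {k: rng.normal(0, 0.1, shape) for k, shape in
            [("Wz", (H, E)), ("Uz", (H, H)),
             ("Wr", (H, E)), ("Ur", (H, H)),
             ("Wh", (H, E)), ("Uh", (H, H))]}

fwd, bwd = gru_params(), gru_params()
W_out = rng.normal(0, 0.1, (V, 2 * H))  # projects BiGRU state to vocab logits

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_pass(p, xs):
    """Run one GRU direction over embedded tokens xs, returning all states."""
    h, states = np.zeros(H), []
    for x in xs:
        z = sigmoid(p["Wz"] @ x + p["Uz"] @ h)          # update gate
        r = sigmoid(p["Wr"] @ x + p["Ur"] @ h)          # reset gate
        h_cand = np.tanh(p["Wh"] @ x + p["Uh"] @ (r * h))
        h = (1 - z) * h + z * h_cand
        states.append(h)
    return states

def suggest(tokens, k=3):
    """Return the k highest-scoring next-token suggestions for a context."""
    xs = [emb[VOCAB.index(t)] for t in tokens]
    f = gru_pass(fwd, xs)                 # forward direction
    b = gru_pass(bwd, xs[::-1])[::-1]     # backward direction
    state = np.concatenate([f[-1], b[0]])  # bidirectional context summary
    logits = W_out @ state
    return [VOCAB[i] for i in np.argsort(logits)[::-1][:k]]

print(suggest(["def", "main", "("]))
```

With trained weights, the returned list would be ranked completion candidates for the IDE to display; here the ranking is arbitrary because the weights are random.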
Source code summarization aims at generating concise descriptions of a given program's functionality. While Transformer-based approaches achieve promising performance, they do not explicitly incorporate the code structure information that is important for capturing code semantics. Moreover, without explicit constraints, the multi-head attention in Transformer may suffer from attention collapse, leading to poor code representations for summarization. Effectively integrating code structure information into the Transformer remains under-explored in this task domain. In this paper, we propose a novel approach named SG-Trans to incorporate code structural properties into the Transformer. Specifically, to capture the hierarchical characteristics of code, we inject local symbolic information (e.g., code tokens) and global syntactic structure (e.g., data flow) into the self-attention module as inductive bias. Extensive evaluation shows the superior performance of SG-Trans over state-of-the-art approaches.
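The idea of injecting structure into self-attention as an inductive bias can be sketched as follows: an adjacency-style matrix encoding structural relations (e.g., data-flow edges between tokens) is added to the scaled dot-product attention scores before the softmax, so structurally linked positions attend to each other more strongly. The matrices, dimensions, and the specific additive-bias form below are simplifying assumptions for illustration, not the exact SG-Trans formulation.

```python
import numpy as np

rng = np.random.default_rng(1)
T, D = 5, 8  # sequence length (tokens), model dimension

X = rng.normal(size=(T, D))                       # token representations
Wq, Wk, Wv = (rng.normal(size=(D, D)) for _ in range(3))

# Hypothetical structural prior: 1.0 where two tokens are connected by a
# data-flow edge (here, token 0 defines a variable that token 2 uses).
struct = np.eye(T)
struct[0, 2] = struct[2, 0] = 1.0

def softmax(a):
    e = np.exp(a - a.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def structure_biased_attention(X, struct, bias_weight=2.0):
    """Single-head self-attention with an additive structural bias."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(D)
    scores = scores + bias_weight * struct  # inject structure before softmax
    A = softmax(scores)                     # each row sums to 1
    return A @ V, A

out, A = structure_biased_attention(X, struct)
```

Because the bias is additive (rather than a hard mask), unbiased attention patterns can still emerge where the data demands them, which also discourages all heads from collapsing onto the same positions.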
We discuss results from simulations of black hole formation in failing core-collapse supernovae performed with the code GR1D, a new open-source Eulerian spherically-symmetric general-relativistic hydrodynamics code. GR1D includes rotation in an approximate way (1.5D), comes with multiple finite-temperature nuclear equations of state (EOS), and treats neutrinos in the post-core-bounce phase via a 3-flavor leakage scheme and a heating prescription. We chose the favored $K_0 = 220\,\mathrm{MeV}$ variant of the Lattimer & Swesty (1990) EOS and present collapse calculations using the progenitor models of Limongi & Chieffi (2006). We show that there is no direct (or ``prompt'') black hole formation in the collapse of ordinary massive stars ($8\,M_\odot \lesssim M_\mathrm{ZAMS} \lesssim 100\,M_\odot$) and present first results from black hole formation simulations that include rotation.
In recent years there has been considerable effort in optimising formal methods for application to source code, driven by tools such as CPAchecker, DIVINE, and CBMC. At the same time, tools such as Uppaal have been massively expanding the realm of more traditional model checking technologies to include strategy synthesis algorithms, an aspect increasingly needed as software becomes more and more parallel. Instead of reimplementing the advances made by Uppaal in this area, in this paper we propose developing a bridge between the source code and the engine of Uppaal. Our approach uses the widespread intermediate language LLVM and makes recent advances of the Uppaal ecosystem readily available for the analysis of source code.
A major source of uncertainty in AGB models is the partial-mixing process of hydrogen required for the formation of the so-called $^{13}$C pocket. Among the attempts to derive a self-consistent treatment of this physical process are 2D and 3D simulations of magnetic buoyancy. The $^{13}$C pocket resulting from mixing induced by magnetic buoyancy extends over a region larger than those assumed so far, showing an almost flat $^{13}$C distribution and a negligible amount of $^{14}$N. Recently, it has been shown to be a good candidate to match the records of isotopic abundance ratios of $s$-elements in presolar SiC grains. However, to date such magnetic mixing has been applied in post-process calculations only and has never been implemented in a stellar evolutionary code. Here we present new stellar models, computed with the 1D hydrostatic FUNS evolutionary code, which include magnetic buoyancy. We comment on the resulting $s$-process distributions and show preliminary comparisons to spectroscopic observations and presolar grain measurements.