
If you like C/O variations, you should have put a ring on it

Publication date: 2021
Field: Physics
Language: English





The C/O ratio as traced with C$_2$H emission in protoplanetary disks is fundamental for constraining the formation mechanisms of exoplanets and our understanding of volatile depletion in disks, but current C$_2$H observations show an apparent bimodal distribution which is not well understood, indicating that the C/O distribution is not described by a simple radial dependence. The transport of icy pebbles has been suggested to alter the local elemental abundances in protoplanetary disks, through settling, drift and trapping in pressure bumps, resulting in a depletion of volatiles in the disk surface and an increase of the elemental C/O. We combine all disks with spatially resolved ALMA C$_2$H observations with high-resolution continuum images and constraints on the CO snowline to determine if the C$_2$H emission is indeed related to the location of the icy pebbles. We report a possible correlation between the presence of a significant CO-icy dust reservoir and high C$_2$H emission, which is only found in disks with dust rings outside the CO snowline. In contrast, compact dust disks (without pressure bumps) and warm transition disks (with their dust ring inside the CO snowline) are not detected in C$_2$H, suggesting that such disks may never have contained a significant CO ice reservoir. This correlation provides evidence for the regulation of the C/O profile by the complex interplay of CO snowline and pressure bump locations in the disk. These results demonstrate the importance of including dust transport in chemical disk models for a proper interpretation of exoplanet atmospheric compositions, a better understanding of volatile depletion in disks, and in particular the use of CO isotopologues to determine gas surface densities.




Read More

Since the popularization of the Transformer as a general-purpose feature encoder for NLP, many studies have attempted to decode linguistic structure from its novel multi-head attention mechanism. However, much of this work has focused almost exclusively on English, a language with rigid word order and little inflectional morphology. In this study, we present decoding experiments for multilingual BERT across 18 languages in order to test the generalizability of the claim that dependency syntax is reflected in attention patterns. We show that full trees can be decoded above baseline accuracy from single attention heads, and that individual relations are often tracked by the same heads across languages. Furthermore, in an attempt to address recent debates about the status of attention as an explanatory mechanism, we experiment with fine-tuning mBERT on a supervised parsing objective while freezing different series of parameters. Interestingly, in steering the objective to learn explicit linguistic structure, we find much of the same structure represented in the resulting attention patterns, with interesting differences with respect to which parameters are frozen.
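The abstract above mentions decoding dependency trees from single attention heads. As an illustration only, the sketch below shows the simplest variant of this idea: each non-root token takes as its head the position it attends to most strongly. (The function name and toy weights are hypothetical; decoding full well-formed trees, as in the paper, additionally requires a maximum-spanning-tree step, which is omitted here.)

```python
# Hypothetical greedy head-selection sketch; attention[i][j] is the
# weight token i places on token j in one attention head.

def decode_heads(attention, root_index):
    """Return a head index for every token (the root points to itself)."""
    heads = []
    for i, row in enumerate(attention):
        if i == root_index:
            heads.append(i)  # the root token has no head
            continue
        # pick the most-attended position other than the token itself
        best = max((j for j in range(len(row)) if j != i),
                   key=lambda j: row[j])
        heads.append(best)
    return heads

# Toy 3-token "sentence": token 0 plays the role of the root verb.
attn = [
    [0.8, 0.1, 0.1],
    [0.7, 0.2, 0.1],  # token 1 attends mostly to token 0
    [0.6, 0.1, 0.3],  # token 2 attends mostly to token 0
]
print(decode_heads(attn, root_index=0))  # [0, 0, 0]
```

Greedy per-token argmax can produce cycles on real attention matrices, which is why tree-constrained decoding is used in practice.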
We propose to reinterpret a standard discriminative classifier of p(y|x) as an energy based model for the joint distribution p(x,y). In this setting, the standard class probabilities can be easily computed as well as unnormalized values of p(x) and p(x|y). Within this framework, standard discriminative architectures may be used and the model can also be trained on unlabeled data. We demonstrate that energy based training of the joint distribution improves calibration, robustness, and out-of-distribution detection while also enabling our models to generate samples rivaling the quality of recent GAN approaches. We improve upon recently proposed techniques for scaling up the training of energy based models and present an approach which adds little overhead compared to standard classification training. Our approach is the first to achieve performance rivaling the state-of-the-art in both generative and discriminative learning within one hybrid model.
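The reinterpretation described above has a simple computational core: the classifier's logits f(x) define p(y|x) via the usual softmax, while the log-sum-exp of the same logits gives an unnormalized log p(x). A minimal pure-Python sketch (function names are illustrative; a real model would produce the logits with a neural network):

```python
import math

def log_p_x_unnormalized(logits):
    # log sum_y exp(f(x)[y]) ~ log p(x) up to the partition function,
    # computed with the max-shift trick for numerical stability
    m = max(logits)
    return m + math.log(sum(math.exp(l - m) for l in logits))

def p_y_given_x(logits):
    # the standard softmax; unchanged by the energy-based reading
    z = sum(math.exp(l) for l in logits)
    return [math.exp(l) / z for l in logits]

logits = [2.0, 0.5, -1.0]
probs = p_y_given_x(logits)
assert abs(sum(probs) - 1.0) < 1e-9  # a valid distribution over classes
print(log_p_x_unnormalized(logits))
```

The point of the framework is that nothing about the architecture changes; the same logits serve both the discriminative and the generative reading.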
Being engaging, knowledgeable, and empathetic are all desirable general qualities in a conversational agent. Previous work has introduced tasks and datasets that aim to help agents to learn those qualities in isolation and gauge how well they can express them. But rather than being specialized in one single quality, a good open-domain conversational agent should be able to seamlessly blend them all into one cohesive conversational flow. In this work, we investigate several ways to combine models trained towards isolated capabilities, ranging from simple model aggregation schemes that require minimal additional training, to various forms of multi-task training that encompass several skills at all training stages. We further propose a new dataset, BlendedSkillTalk, to analyze how these capabilities would mesh together in a natural conversation, and compare the performance of different architectures and training schemes. Our experiments show that multi-tasking over several tasks that focus on particular capabilities results in better blended conversation performance compared to models trained on a single skill, and that both unified or two-stage approaches perform well if they are constructed to avoid unwanted bias in skill selection or are fine-tuned on our new task.
Neural predictive models have achieved remarkable performance improvements in various natural language processing tasks. However, most neural predictive models suffer from the lack of explainability of predictions, limiting their practical utility. This paper proposes a neural predictive approach to make a prediction and generate its corresponding explanation simultaneously. It leverages the knowledge entailed in explanations as an additional distillation signal for more efficient learning. We conduct a preliminary study on Chinese medical multiple-choice question answering, English natural language inference, and commonsense question answering tasks. The experimental results show that the proposed approach can generate reasonable explanations for its predictions even with a small-scale training corpus. The proposed method also achieves improved prediction accuracy on three datasets, which indicates that making predictions can benefit from generating the explanation in the decision process.
Direct scattering transform of nonlinear wave fields with solitons may lead to anomalous numerical errors of soliton phase and position parameters. With the focusing one-dimensional nonlinear Schrodinger equation serving as a model, we investigate this fundamental issue theoretically. Using the dressing method we find the landscape of soliton scattering coefficients in the plane of the complex spectral parameter for multi-soliton wave fields truncated within a finite domain, allowing us to capture the nature of particular numerical errors. They depend on the size of the computational domain $L$, leading to a counterintuitive exponential divergence when increasing $L$ in the presence of a small uncertainty in soliton eigenvalues. In contrast to classical textbooks, we reveal how one of the scattering coefficients loses its analytical properties due to the lack of compact support of the wave field in the case $L \to \infty$. Finally, we demonstrate that despite this inherent direct scattering transform feature, wave fields of arbitrary complexity can be reliably analysed.
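For context, the model referenced above, the focusing one-dimensional nonlinear Schrodinger equation, is commonly written (normalization conventions vary between references) as

```latex
i\,\frac{\partial \psi}{\partial t}
  + \frac{1}{2}\,\frac{\partial^2 \psi}{\partial x^2}
  + |\psi|^2 \psi = 0,
```

where $\psi(x,t)$ is the complex wave field; its soliton content is extracted by the direct scattering transform over the complex spectral parameter discussed in the abstract.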
