
The Four Linear Spans Intersecting in Four Straight Lines (الأغلفة الخطية الأربعة المتقاطعة بأربعة مستقيمات)

Publication date: 2016
Field: Mathematics
Language: Arabic
Created by Shamra Editor





No English abstract

References used
Mohammad Al-Bardouni, The four linear spans of the orbits of the symmetry directions of algebraic surfaces in space (in Arabic).
Related research

This research was carried out during the two seasons 2014-2015 to study the effect of seed coats on the germination of endocarps, seeds, and embryos of two wild genotypes (M1, M2) of Mahaleb rootstock (Prunus mahaleb L.) prevailing in the Alhaffa area, Lattakia. The experiments were conducted at the Lattakia Scientific Agricultural Research Center and the Faculty of Agriculture at Tishreen University. For seeds planted on 0.7% agar in the dark at 15 °C, the M1 endocarps showed no germination, whereas the M2 endocarps germinated late and at a low rate (10%), starting after 98 days. For seeds, germination was 66.66% and 53.33% for M1 and M2, respectively. Removing the woody coat (endocarp) and the seed coat made germination faster: embryo germination was 80% and 60% for M1 and M2, respectively.
The absence of word delimiters or inflection that could indicate segment boundaries or word semantics increases the difficulty of Chinese text understanding, and also intensifies the demand for word-level semantic knowledge to accomplish the tagging goal in Chinese segmenting and labeling tasks. However, in unsupervised cross-domain Chinese segmenting and labeling tasks, a model trained on the source domain frequently suffers from deficient word-level semantic knowledge of the target domain. To address this issue, we propose a novel paradigm based on attention augmentation that introduces crucial cross-domain knowledge via a translation system. The proposed paradigm enables the model's attention to draw on cross-domain knowledge indicated by the implicit word-level cross-lingual alignment between the input and its corresponding translation. Besides the model that requires cross-lingual input, we also establish an off-the-shelf model that eludes the dependency on cross-lingual translations. Experiments demonstrate that our proposal significantly advances the state-of-the-art results of cross-domain Chinese segmenting and labeling tasks.
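To make the attention-augmentation idea concrete, the following is a hypothetical sketch, not the authors' implementation: the function name, the mixing weight alpha, and the use of precomputed token embeddings are all assumptions. It adds a cross-lingual alignment bias, derived from the translation, to ordinary self-attention scores.

# Hypothetical sketch: bias self-attention with soft alignment to a translation.
import torch
import torch.nn.functional as F

def augmented_attention(q, k, v, src_emb, trans_emb, alpha=0.5):
    # q, k, v: (seq, d) self-attention inputs for the Chinese sentence.
    # src_emb: (seq, d) Chinese token embeddings; trans_emb: (trans_len, d) translation embeddings.
    # alpha: assumed mixing weight for the cross-lingual bias.
    d = q.size(-1)
    self_scores = q @ k.transpose(-2, -1) / d ** 0.5                  # (seq, seq)
    # Soft word-level alignment between source tokens and translation tokens.
    align = F.softmax(src_emb @ trans_emb.transpose(-2, -1) / d ** 0.5, dim=-1)
    # Tokens aligned to similar translation words receive an extra attention bias.
    cross_bias = align @ align.transpose(-2, -1)                      # (seq, seq)
    weights = F.softmax(self_scores + alpha * cross_bias, dim=-1)
    return weights @ v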
Probes are models devised to investigate the encoding of knowledge (e.g., syntactic structure) in contextual representations. Probes are often designed for simplicity, which has led to restrictions on probe design that may not allow for the full exploitation of the structure of the encoded information; one such restriction is linearity. We examine the case of a structural probe (Hewitt and Manning, 2019), which aims to investigate the encoding of syntactic structure in contextual representations through learning only linear transformations. By observing that the structural probe learns a metric, we are able to kernelize it and develop a novel non-linear variant with an identical number of parameters. We test on six languages and find that the radial-basis function (RBF) kernel, in conjunction with regularization, achieves a statistically significant improvement over the baseline in all languages, implying that at least part of the syntactic knowledge is encoded non-linearly. We conclude by discussing how the RBF kernel resembles BERT's self-attention layers and speculate that this resemblance leads to the RBF-based probe's stronger performance.
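As a rough illustration of the kernelization step (a sketch under assumptions, not the paper's code; the class name, the low-rank projection B, and the bandwidth gamma are illustrative), the linear probe's squared distance ||B(h_i - h_j)||^2 can be generalized to the kernel-induced distance k(h_i, h_i) - 2 k(h_i, h_j) + k(h_j, h_j) with an RBF kernel:

# Sketch of an RBF-kernelized structural probe (illustrative, PyTorch).
import torch

class RBFStructuralProbe(torch.nn.Module):
    def __init__(self, hidden_dim, probe_rank, gamma=1.0):
        super().__init__()
        self.B = torch.nn.Parameter(torch.randn(hidden_dim, probe_rank) * 0.01)
        self.gamma = gamma                       # assumed RBF bandwidth

    def forward(self, h):                        # h: (seq, hidden_dim) contextual vectors
        z = h @ self.B                           # low-rank projection, as in the linear probe
        sq = torch.cdist(z, z) ** 2              # pairwise squared distances in probe space
        k = torch.exp(-self.gamma * sq)          # RBF (Gaussian) kernel matrix
        diag = torch.diagonal(k)
        # Kernel-induced squared distances, trained to match syntactic tree distances.
        return diag.unsqueeze(1) - 2 * k + diag.unsqueeze(0)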
In this paper, a spline collocation method is considered for solving two forms of problems. The first form is a general linear sixth-order boundary-value problem (BVP), and the second form is a nonlinear sixth-order initial value problem (IVP). The existence, uniqueness, error estimation, and convergence analysis of the proposed methods are investigated. The study shows that the proposed spline method with three collocation points can find the spline solutions and their derivatives up to sixth order for both the BVP and the IVP, and is thus a very effective tool for numerically solving such problems. Several examples are given to verify the reliability and efficiency of the proposed method, and comparisons are made to reconfirm the efficiency and accuracy of the suggested techniques.
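As a point of reference only (this is a generic collocation solver, not the spline scheme proposed in the paper; the test equation and its boundary conditions are a standard example chosen here for illustration), a sixth-order linear BVP can be solved numerically by rewriting it as a first-order system and handing it to SciPy's collocation-based solve_bvp:

# Illustrative sketch: y'''''' = y - 6 e^x on [0, 1], exact solution y = (1 - x) e^x.
import numpy as np
from scipy.integrate import solve_bvp

def rhs(x, y):
    # y[0..5] = y, y', ..., y^(5); return the derivative of each component.
    return np.vstack([y[1], y[2], y[3], y[4], y[5], y[0] - 6 * np.exp(x)])

def bc(ya, yb):
    e = np.e
    # Three conditions at each end: y, y', y'' at x = 0 and x = 1.
    return np.array([ya[0] - 1, ya[1], ya[2] + 1,
                     yb[0], yb[1] + e, yb[2] + 2 * e])

x = np.linspace(0, 1, 11)
y0 = np.zeros((6, x.size))
sol = solve_bvp(rhs, bc, x, y0)
err = np.max(np.abs(sol.sol(x)[0] - (1 - x) * np.exp(x)))
print(f"max abs error: {err:.2e}")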
Recently, it has been argued that encoder-decoder models can be made more interpretable by replacing the softmax function in the attention with its sparse variants. In this work, we introduce a novel, simple method for achieving sparsity in attention: we replace the softmax activation with a ReLU, and show that sparsity naturally emerges from such a formulation. Training stability is achieved with layer normalization combined with either a specialized initialization or an additional gating function. Our model, which we call Rectified Linear Attention (ReLA), is easy to implement and more efficient than previously proposed sparse attention mechanisms. We apply ReLA to the Transformer and conduct experiments on five machine translation tasks. ReLA achieves translation performance comparable to several strong baselines, with training and decoding speed similar to that of vanilla attention. Our analysis shows that ReLA delivers a high sparsity rate and head diversity, and the induced cross attention achieves better accuracy with respect to source-target word alignment than recent sparsified softmax-based models. Intriguingly, ReLA heads also learn to attend to nothing (i.e., 'switch off') for some queries, which is not possible with sparsified softmax alternatives.
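A minimal sketch of the core idea (illustrative only; the specialized initialization and the optional gating function mentioned in the abstract are omitted, and the class layout is an assumption) shows how replacing softmax with ReLU yields exact zeros in the attention weights:

# Sketch of rectified linear attention: ReLU instead of softmax, LayerNorm for stability.
import torch
import torch.nn.functional as F

class RectifiedLinearAttention(torch.nn.Module):
    def __init__(self, d_model):
        super().__init__()
        self.norm = torch.nn.LayerNorm(d_model)

    def forward(self, q, k, v):                  # (batch, seq, d_model)
        scores = q @ k.transpose(-2, -1) / q.size(-1) ** 0.5
        weights = F.relu(scores)                 # sparse: negative scores become exactly 0
        # A row of all zeros means that query attends to nothing ("switches off").
        out = weights @ v
        return self.norm(out)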
