
Quality Spectrum of Yemeni Honey

Quality Attributes of Yemeni Honey

Publication date: 2001
Field: Plant Protection
Research language: Arabic





Floral honey types produced locally were tested for their quality attributes, using the standard procedures of the Association of Official Analytical Chemists (AOAC).

References used
AOAC. 1990. Official Methods of Analysis, 15th ed. Association of Official Analytical Chemists, Arlington, VA, USA.
Abu-Tarboush, H. M., H. A. Al-Kahtani, and M. S. El-Sarrage. 1993. Floral type identification and quality evaluation of some honey types. Food Chemistry.
Alkathiri, M. A., and M. S. Khanbash. 1996. Organoleptic characteristics for some local and imported honeys. Dirasat.

Related research

In quality estimation (QE), the quality of a translation can be predicted by referencing the source sentence and the machine translation (MT) output, without access to a reference sentence. However, there is a paradox: constructing a dataset for building a QE model requires non-trivial human labor and time, and may even require more effort than constructing a parallel corpus. In this study, to address this paradox and enable the various applications of QE even in low-resource languages (LRLs), we propose a method for automatically constructing a pseudo-QE dataset without human labor. We perform a comparative analysis on the pseudo-QE dataset using multilingual pre-trained language models. As we generate the pseudo dataset, we conduct experiments using various external machine translators as test sets to verify the accuracy of the results objectively. The experimental results show that multilingual BART demonstrates the best performance, and we confirm the applicability of QE in LRLs using pseudo-QE dataset construction methods.
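The pseudo-labelling idea above can be sketched in a few lines. This is a minimal illustration, not the paper's method: the names `pseudo_qe_pairs` and `toy_mt` are invented here, and a simple difflib surface similarity stands in for whatever quality signal (e.g. TER against a reference) a real pipeline would compute.

```python
from difflib import SequenceMatcher

def pseudo_qe_pairs(sources, references, translate):
    """Build (source, MT output, pseudo quality label) triples with no
    human annotation. `translate` is any MT callable; the pseudo label
    here is a surface similarity between the MT output and the reference,
    a stand-in for metrics such as TER/HTER used in real pipelines."""
    data = []
    for src, ref in zip(sources, references):
        hyp = translate(src)
        label = SequenceMatcher(None, hyp.split(), ref.split()).ratio()
        data.append((src, hyp, label))
    return data

# toy "MT system" (drops the last word) purely for illustration
toy_mt = lambda s: " ".join(s.split()[:-1]) if len(s.split()) > 1 else s

pairs = pseudo_qe_pairs(["the cat sat on the mat"],
                        ["the cat sat on the mat"], toy_mt)
```

A QE model (e.g. multilingual BART, as in the abstract) would then be trained to predict the label from the (source, MT output) pair alone.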
Quality is not a creation of the modern era; its roots go back to the early Arab scholars, who studied it in depth in their scientific works. Poets competed with one another in the markets of Makkah to choose the best poetry, judged by a committee of quality experts. Meanings, the subject of composition and its rules, received a great deal of attention to quality: Al-Jurjani studied this topic exhaustively when he criticized Al-Jahiz, who was interested in the literal word, which has no value unless it carries a noble meaning. In addition, the Arabic script holds great merit in the language; Allah Subhanahu Wa Ta'ala (S.W.T.) said: ﴾Nun. By the pen and by the record which (men) write﴿. Allah S.W.T. swore by the pen because of its great honor, and one who writes well occupies a high place in the nation; the trusted scribe is close to the Sultan in most matters, if not all of them. Likewise, punctuation marks are of great importance in the language for marking the division and connection of speech, and its beginning and end. The standard of quality was taken into consideration in all of the above matters and received elaborate study among the Arabs of old.
Current Machine Translation (MT) systems achieve very good results on a growing variety of language pairs and datasets. However, they are known to produce fluent translation outputs that can contain important meaning errors, thus undermining their reliability in practice. Quality Estimation (QE) is the task of automatically assessing the performance of MT systems at test time. Thus, in order to be useful, QE systems should be able to detect such errors. However, this ability is yet to be tested in the current evaluation practices, where QE systems are assessed only in terms of their correlation with human judgements. In this work, we bridge this gap by proposing a general methodology for adversarial testing of QE for MT. First, we show that despite a high correlation with human judgements achieved by the recent SOTA, certain types of meaning errors are still problematic for QE to detect. Second, we show that on average, the ability of a given model to discriminate between meaning-preserving and meaning-altering perturbations is predictive of its overall performance, thus potentially allowing for comparing QE systems without relying on manual quality annotation.
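A minimal version of such an adversarial check: score the original MT output and a meaning-altering perturbation of it, and measure the gap. All names here (`adversarial_gap`, the negation perturbation, the toy length-ratio scorer) are illustrative assumptions, not the paper's implementation.

```python
def adversarial_gap(qe_score, src, mt_out, perturb):
    """score(original) - score(perturbed); a positive gap means the QE
    model penalises the meaning-altering edit, as desired."""
    return qe_score(src, mt_out) - qe_score(src, perturb(mt_out))

# meaning-altering perturbation: insert a negation
negate = lambda t: t.replace(" is ", " is not ", 1)

# toy QE scorer based only on a length ratio -- deliberately blind to meaning
toy_qe = lambda s, t: 1.0 - abs(len(s.split()) - len(t.split())) / max(len(s.split()), 1)

gap = adversarial_gap(toy_qe, "das ist gut", "this is good", negate)
```

Note the toy scorer only reacts because the negation changes the length; a real test suite would pair such meaning-altering edits with meaning-preserving ones of comparable surface change, so that only genuinely error-sensitive QE models score well.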
We report the results of the WMT 2021 shared task on Quality Estimation, where the challenge is to predict the quality of the output of neural machine translation systems at the word and sentence levels. This edition focused on two main novel additions: (i) prediction for unseen languages, i.e. zero-shot settings, and (ii) prediction of sentences with catastrophic errors. In addition, new data was released for a number of languages, especially post-edited data. Participating teams from 19 institutions submitted altogether 1263 systems to different task variants and language pairs.
Quality estimation (QE) of machine translation (MT) aims to evaluate the quality of machine-translated sentences without references and is important in practical applications of MT. Training QE models requires massive parallel data with hand-crafted quality annotations, which are time-consuming and labor-intensive to obtain. To address the absence of annotated training data, previous studies have attempted to develop unsupervised QE methods. However, very few of them can be applied to both sentence- and word-level QE tasks, and they may suffer from noise in the synthetic data. To reduce the negative impact of noise, we propose a self-supervised method for both sentence- and word-level QE, which performs quality estimation by recovering masked target words. Experimental results show that our method outperforms previous unsupervised methods on several QE tasks in different language pairs and domains.
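The masked-recovery scoring idea can be sketched as follows. `masked_recovery_score` and the degenerate fill-in "model" are hypothetical stand-ins: a real system would condition a pre-trained masked language model on the source sentence and the unmasked target context.

```python
def masked_recovery_score(target_words, predict_masked):
    """Sentence-level pseudo quality: the fraction of target words a
    fill-in model recovers when each word is masked in turn."""
    if not target_words:
        return 0.0
    hits = 0
    for i, word in enumerate(target_words):
        masked = target_words[:i] + ["[MASK]"] + target_words[i + 1:]
        if predict_masked(masked, i) == word:
            hits += 1
    return hits / len(target_words)

# degenerate fill-in "model" that always guesses "the", for illustration only
toy_predict = lambda masked, i: "the"

score = masked_recovery_score(["the", "cat", "the"], toy_predict)
```

The per-position hit/miss decisions double as word-level quality labels, which is how one method can serve both sentence- and word-level QE.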