
Domain adaptation assumes that samples from both the source and target domains are freely accessible during the training phase. However, this assumption is rarely plausible in the real world and may cause data-privacy issues, especially when labels in the source domain are sensitive attributes that can serve as identifiers. SemEval-2021 Task 10 focuses on these issues. We participated in the task and propose novel frameworks based on self-training. In our systems, two different frameworks are designed to handle text classification and sequence labeling, respectively. These approaches proved effective, ranking third among all systems in subtask A and first among all systems in subtask B.
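The self-training idea described above can be sketched as follows: a model trained on the (inaccessible) source data pseudo-labels unlabeled target data, only confident pseudo-labels are kept, and the model is refit on them. This is a minimal sketch with a toy nearest-centroid classifier; the margin threshold, round count, and all function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def centroid_fit(X, y):
    """Nearest-centroid 'model': one mean vector per class (0 and 1)."""
    return np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def centroid_predict(centroids, X):
    """Assign each point to the nearest class centroid."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

def self_train(centroids, X_target, margin=0.5, rounds=3):
    """Self-training sketch: pseudo-label the unlabeled target data,
    keep only confident examples (large distance margin between the
    two class centroids), and refit the model on them."""
    for _ in range(rounds):
        d = np.linalg.norm(X_target[:, None, :] - centroids[None, :, :], axis=2)
        pseudo = d.argmin(axis=1)
        conf = np.abs(d[:, 0] - d[:, 1])   # margin used as confidence
        mask = conf >= margin
        if mask.sum() < 2 or len(np.unique(pseudo[mask])) < 2:
            break                          # not enough confident labels
        centroids = centroid_fit(X_target[mask], pseudo[mask])
    return centroids

# Toy demo: a source model adapts to a covariate-shifted target domain.
rng = np.random.default_rng(0)
X_src = rng.normal(0, 1, (200, 2))
y_src = (X_src[:, 0] > 0).astype(int)
X_tgt = X_src + np.array([0.8, 0.8])       # shifted target domain
src_centroids = centroid_fit(X_src, y_src)
adapted = centroid_fit(X_src, y_src)
adapted = self_train(src_centroids, X_tgt)
```

Note that only the source *model* (the centroids) crosses the domain boundary; the source samples themselves are never needed, which is what makes the setting source-free.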
This paper presents the Source-Free Domain Adaptation shared task held within SemEval-2021. The aim of the task was to explore adaptation of machine-learning models in the face of data sharing constraints. Specifically, we consider the scenario where annotations exist for a domain but cannot be shared. Instead, participants are provided with models trained on that (source) data. Participants also receive some labeled data from a new (development) domain on which to explore domain adaptation algorithms. Participants are then tested on data representing a new (target) domain. We explored this scenario with two different semantic tasks: negation detection (a text classification task) and time expression recognition (a sequence tagging task).
Existing approaches for machine translation (MT) mostly translate given text in the source language into the target language without explicitly referring to information indispensable for producing a proper translation. This includes not only information in textual elements and modalities other than the text itself within the same document, but also extra-document and non-linguistic information, such as norms and skopos. To design better translation production workflows, we need to distinguish translation issues that could be resolved by existing text-to-text approaches from those beyond them. To this end, we conducted an analytic assessment of MT outputs, taking an English-to-Japanese news translation task as a case study. First, examples of translation issues and their revisions were collected by a two-stage post-editing (PE) method: performing minimal PE to obtain a translation attainable from the given textual information, then performing full PE to obtain a truly acceptable translation, referring to any information if necessary. The collected revision examples were then analyzed manually. We identified the dominant issues and the information indispensable for resolving them, such as fine-grained style specifications, terminology, domain-specific knowledge, and reference documents, delineating a clear distinction between translation and what text-to-text MT can ultimately attain.
Probing neural models for the ability to perform downstream tasks using their activation patterns is often used to localize which parts of the network specialize in performing which tasks. However, little work has addressed potential mediating factors in such comparisons. As a test-case mediating factor, we consider the prediction's context length, namely the length of the span whose processing is minimally required to perform the prediction. We show that failing to control for context length may lead to contradictory conclusions about the localization patterns of the network, depending on the distribution of the probing dataset. Indeed, when probing BERT with seven tasks, we find that it is possible to obtain 196 different rankings between them by manipulating the distribution of context lengths in the probing dataset. We conclude by presenting best practices for conducting such comparisons in the future.
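Controlling for context length amounts to fixing its distribution in the probing dataset, e.g. by resampling so that every length bin contributes equally. The sketch below illustrates this idea; the bin edges, per-bin count, and function name are illustrative assumptions, not the paper's exact protocol.

```python
import numpy as np

def resample_by_context_length(lengths, bins, per_bin, seed=0):
    """Return indices of a probing subset in which every context-length
    bin contributes the same number of examples, so that context length
    no longer confounds comparisons between probing tasks."""
    rng = np.random.default_rng(seed)
    bin_ids = np.digitize(np.asarray(lengths), bins)  # assign bins
    chosen = []
    for b in np.unique(bin_ids):
        members = np.flatnonzero(bin_ids == b)
        take = min(per_bin, members.size)             # cap at bin size
        chosen.extend(rng.choice(members, size=take, replace=False))
    return np.sort(np.array(chosen))

# Example: spans with short, medium, and long prediction contexts.
lengths = [1, 2, 2, 5, 6, 9, 10, 12]
idx = resample_by_context_length(lengths, bins=[4, 8], per_bin=2)
```

Probing accuracies computed on such a balanced subset are comparable across tasks whose raw datasets have very different context-length distributions.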
Effective filters for automatic, real-time processing of harmonics in three-phase networks: this work proposes a practical and effective method for reducing harmonics in the electrical grid.
Wastewater treatment plants suffer greatly from the problem of managing and treating the sludge produced during treatment, as sludge treatment and disposal account for roughly 30-40% of the capital cost of a treatment plant.
Larynx cancer is the most common cancer of the head and neck apart from skin cancer, accounting for 2% of all cancer diagnoses. Its genesis is directly associated with alcohol drinking and smoking, and squamous cell carcinoma (SCC) is the most common histological type (95%) of larynx cancer. Aim: The purpose of this study was to evaluate accelerated 3D conformal radiotherapy for early larynx cancer, to estimate the acute and late toxicity arising from irradiation of the normal tissues around the tumor (thyroid gland, spinal cord, etc.), and to evaluate the rates of recurrence and survival. Materials and methods: The study included 44 patients with stage T1/T2 laryngeal SCC who underwent RT between 2015 and 2017; 84% had glottic cancer, and the median age was 63 years. All patients were treated with 3D conformal RT, with a total dose of 60-66 Gy at 2 Gy/fraction (5 fractions per week). Our analysis evaluated acute and late toxicity during and after radiotherapy, as well as the rates of recurrence and survival. Results: The most common toxicity was dysphagia (42 patients, 96%), followed by radiodermatitis (30 patients, 70%); the least common was tooth damage. There was no evidence of late toxicity. The recurrence rate was 25% (11 patients), metastases occurred in 1 patient, and 6 of 41 patients (14.6%) died. Conclusion: Radiotherapy plays an important role in controlling early larynx cancer, and 3D conformal radiotherapy delivers a large dose to the tumor while sparing the surrounding normal tissues from radiation, which explains the absence of late toxicity.
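The fractionation schedule above implies a concrete treatment duration, which a short calculation makes explicit (the schedule values come from the abstract; the script itself is only illustrative):

```python
# Fractionation arithmetic for the protocol above: 60-66 Gy total,
# 2 Gy per fraction, 5 fractions per week.
dose_per_fraction_gy = 2.0
fractions_per_week = 5

schedules = {}
for total_gy in (60, 66):
    fractions = total_gy / dose_per_fraction_gy   # number of sessions
    weeks = fractions / fractions_per_week        # overall duration
    schedules[total_gy] = (fractions, weeks)
    print(f"{total_gy} Gy -> {fractions:.0f} fractions over {weeks:.1f} weeks")
```

So patients on this protocol receive 30-33 fractions over roughly 6 to 6.6 weeks.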
The aim of this research was to clarify the pre-processing steps required for satellite images before analyzing and extracting data from them using the ENVI program. Radiometric and topographic corrections were applied to a 2017 Landsat image, and the NDVI index was then calculated for this image before and after pre-processing. The results showed a difference in the spectral values of the image before and after radiometric correction, especially in the near-infrared band: reflection values ranged between 40 and 50 in the original image versus 300-3500 in the corrected image. The difference in reflection values after topographic correction was also visible in the near-infrared and infrared bands, especially at points shadowed by the terrain. Differences in the 2017 NDVI values were observed before and after applying pre-processing to the image, especially at points of good and very good vegetation coverage with high index values. The study concluded that it is important to follow the minimum number of required pre-processing steps in order to avoid unnecessary ones, and recommends well-tested, readily available, and adequately documented approaches and data products.
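The NDVI computed above follows the standard definition, NDVI = (NIR − Red) / (NIR + Red), so corrected reflectance values in the two bands directly change the index. A minimal sketch with toy band values (the pixel numbers are illustrative, not from the study's image):

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """NDVI = (NIR - Red) / (NIR + Red); eps avoids division by zero
    over dark pixels (water, deep shadow)."""
    nir = np.asarray(nir, dtype=np.float64)
    red = np.asarray(red, dtype=np.float64)
    return (nir - red) / (nir + red + eps)

# Toy 2x2 bands: vegetation pixels have NIR much higher than red,
# so their NDVI approaches 1; bare or dark pixels sit near 0.
nir = np.array([[3000.0, 3500.0], [400.0, 50.0]])
red = np.array([[500.0, 700.0], [350.0, 45.0]])
values = ndvi(nir, red)
```

Because radiometric correction rescales the NIR band (40-50 raw versus 300-3500 corrected, per the results above), running this on uncorrected versus corrected bands yields different index values, which is exactly the effect the study reports.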
The study was conducted in 2016. Samples of rice (Oryza sativa L.) straw were collected from Dewania governorate, Iraq, and brought to the laboratory of the Directorate of Environment and Water in the Ministry of Science and Technology. The samples were cleaned, milled, and stored in sterile containers. A local cellulolytic bacterial isolate was cultivated and isolated on mineral-cellulose medium at 37 ºC for 24 ± 2 hours; it was identified as Bacillus sp. based on the phenotype of its colonies on solid medium, microscopic characteristics, and some biochemical tests. The milled rice straw was chemically treated with 1% sodium hydroxide, then biologically treated with the Bacillus sp. isolate cultivated in mineral medium containing the alkali-treated rice straw as a carbon source, and compared with a standard cellulose medium. Bacterial growth, measured at 600 nm, reached 0.974 in the rice-straw medium versus 0.853 in the cellulose medium. The glucose concentration reached 250 μg/ml in the rice-straw medium versus 210 μg/ml in the cellulose medium. The results demonstrate the possibility of disposing of rice straw, an environmental contaminant, by using it for glucose production.
This study aims to investigate the prevalence of sensory processing dysfunctions in a sample of children with autism spectrum disorder and its relationship with some variables (age, severity of autism). The researcher adopted a descriptive approach to achieve the aim of the study. The sample consisted of 30 children aged 3-10 years, selected randomly. The researcher used the Sensory Profile to explore sensory processing dysfunctions; it consists of 65 items distributed over 6 domains (Auditory Processing, Visual Processing, Vestibular Processing, Touch Processing, Multisensory Processing, Oral Sensory Processing). The results indicated that 66.67% of the autistic children in this sample had sensory processing dysfunctions. There were statistically significant differences at the 0.05 level between the children in the study sample on the Sensory Profile attributed to autism severity, on the scale as a whole and on four of the sub-domains, while there were no statistically significant differences at the 0.05 level attributed to age, either on the scale as a whole or on the sub-domains. Based on these findings, the researcher pointed out the need for comparative studies of the prevalence of sensory processing dysfunctions across children, teenagers, and adults, as well as studies on the relationship between sensory processing dysfunctions and adaptive behavior dysfunctions.