This study aims to evaluate the effects of bowel preparation on the outcomes of scheduled colorectal surgery. The study included 83 patients: 37 without bowel preparation and 46 with bowel preparation. Perioperative outcomes were evaluated, including surgical site infection (SSI) rates, postoperative complications, and length of hospital stay. The results indicated that bowel preparation before scheduled colorectal surgery offered no advantage in reducing SSI rates or postoperative complications (anastomotic leakage, abdominal or pelvic abscesses), nor in shortening the length of hospital stay, and showed no clear benefit over patients who did not undergo mechanical bowel preparation.
The study aims to shed light on the lean management method and its role in reducing waste in its various forms, to show the extent to which this method is applied in industrial companies in reducing supply chain risks before, during, and after their occurrence, and to demonstrate its role in the sustainability of supply chains. The researcher relied on the descriptive analytical approach. The theoretical section of the research examined the concept of lean management, its principles and objectives, as well as the concepts of the supply chain, supply chain risks, and the types of these risks. In the practical section, a questionnaire was designed comprising a set of statements related to the research topic, and appropriate statistical methods were applied using SPSS 23 to analyze the data and test the hypotheses. The research population consists of the administrative cadres of the industrial companies in the industrial city of Hasya, from which a purposive sample of 64 individuals was selected. The study found a significant positive relationship between the lean management dimensions (organizing the work site, continuous improvement, standard work, multi-functional workers, Six Sigma) and reducing supply chain risks before they occur, a significant positive relationship between these dimensions and reducing supply chain risks during their occurrence, and a significant positive relationship between these dimensions and reducing supply chain risks after their occurrence.
We aim to automatically identify human action reasons in online videos. We focus on the widespread genre of lifestyle vlogs, in which people perform actions while verbally describing them. We introduce and make publicly available the WhyAct dataset, consisting of 1,077 visual actions manually annotated with their reasons. We describe a multimodal model that leverages visual and textual information to automatically infer the reasons corresponding to an action presented in the video.
Recent work has shown that monolingual masked language models learn to represent data-driven notions of language variation which can be used for domain-targeted training data selection. Dataset genre labels are already frequently available, yet remain largely unexplored in cross-lingual setups. We harness this genre metadata as a weak supervision signal for targeted data selection in zero-shot dependency parsing. Specifically, we project treebank-level genre information to the finer-grained sentence level, with the goal to amplify information implicitly stored in unsupervised contextualized representations. We demonstrate that genre is recoverable from multilingual contextual embeddings and that it provides an effective signal for training data selection in cross-lingual, zero-shot scenarios. For 12 low-resource language treebanks, six of which are test-only, our genre-specific methods significantly outperform competitive baselines as well as recent embedding-based methods for data selection. Moreover, genre-based data selection provides new state-of-the-art results for three of these target languages.
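For illustration only, the following minimal Python sketch shows one way treebank-level genre labels could drive sentence-level data selection with multilingual embeddings: embed candidate training sentences, build a centroid from sentences assumed to share the target treebank's genre, and keep the closest candidates. The encoder name, genre labels, and example sentences are placeholders, not the setup used in the paper.

import numpy as np
from sentence_transformers import SentenceTransformer

# Multilingual sentence encoder (placeholder model choice).
encoder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

# Candidate training sentences, each carrying its treebank-level genre label.
candidates = [
    ("The committee approved the budget for next year.", "news"),
    ("Stir the mixture until it thickens, then let it cool.", "howto"),
    ("He laughed, closed the book, and stared out the window.", "fiction"),
]

# Sentences assumed to match the genre of the target treebank.
target_genre_examples = [
    "Preheat the oven and grease the baking tray.",
    "Attach the bracket to the wall using the supplied screws.",
]

cand_vecs = encoder.encode([s for s, _ in candidates])
centroid = encoder.encode(target_genre_examples).mean(axis=0)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Rank candidates by similarity to the target-genre centroid and keep the top k.
scores = [cosine(v, centroid) for v in cand_vecs]
top_k = 2
selected = sorted(zip(scores, candidates), reverse=True)[:top_k]
for score, (sentence, genre) in selected:
    print(f"{score:.3f}  [{genre}]  {sentence}")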
Workplace communication (e.g. email, chat, etc.) is a central part of enterprise productivity. Healthy conversations are crucial for creating an inclusive environment and maintaining harmony in an organization. Toxic communications at the workplace can negatively impact overall job satisfaction and are often subtle, hidden, or demonstrate human biases. The linguistic subtlety of mild yet hurtful conversations has made it difficult for researchers to quantify and extract toxic conversations automatically. While offensive language or hate speech has been extensively studied in social communities, there has been little work studying toxic communication in emails. Specifically, the lack of a corpus, the sparsity of toxicity in enterprise emails, and the absence of well-defined criteria for annotating toxic conversations have prevented researchers from addressing the problem at scale. We take the first step towards studying toxicity in workplace emails by providing (1) a general and computationally viable taxonomy to study toxic language at the workplace, (2) a dataset to study toxic language at the workplace based on the taxonomy, and (3) an analysis of why offensive language and hate-speech datasets are not suitable to detect workplace toxicity.
This analytical perspective explores the effects of the spread of artificial intelligence in two main policy-related areas: security and employment. We focus here on the vulnerabilities and inequities that the use of AI may impose on these two dimensions of society. A team of colleagues at the RAND Corporation with diverse expertise and experience identified these two areas, among others, as deserving careful attention in the age of AI. Other areas noted include the impact of AI on health, decision-making, dispute resolution, and cybersecurity. The interdisciplinary nature of the problems we identified demonstrates the need to continue engaging researchers and analysts with a diverse range of expertise and experience in order to inform policymakers about the positions and steps to be taken regarding artificial tools and AI more broadly. This research outlines the overarching themes of AI's effects: (1) artificial tools are, in effect, attention multipliers capable of producing unexpected and serious systemic effects; (2) reliance on artificial tools increases the risk of reduced resilience; (3) AI has the potential to cause rapid and unprecedented socioeconomic disruption; (4) the migration and hiring preferences of talented individuals in AI research and development around the world are an important geopolitical concern.
This work introduces Itihasa, a large-scale translation dataset containing 93,000 pairs of Sanskrit shlokas and their English translations. The shlokas are extracted from two Indian epics viz., The Ramayana and The Mahabharata. We first describe the motivation behind the curation of such a dataset and follow up with empirical analysis to bring out its nuances. We then benchmark the performance of standard translation models on this corpus and show that even state-of-the-art transformer architectures perform poorly, emphasizing the complexity of the dataset.
This work describes an analysis of the nature and causes of MT errors observed by different evaluators under the guidance of different quality criteria: adequacy, comprehension, and a not-specified generic mixture of adequacy and fluency. We report results for three language pairs, two domains, and eleven MT systems. Our findings indicate that, despite the fact that some of the identified phenomena depend on domain and/or language, the following set of phenomena can be considered as generally challenging for modern MT systems: rephrasing groups of words, translation of ambiguous source words, translating noun phrases, and mistranslations. Furthermore, we show that the quality criterion also has an impact on error perception. Our findings indicate that comprehension and adequacy can be assessed simultaneously by different evaluators, so that comprehension, as an important quality criterion, can be included more often in human evaluations.
The research aims to determine the role of career path dimensions (training, promotion, incentives) in reducing job burnout (stress, limited work powers) among workers at Tishreen University. The researcher relied on the deductive approach as a method of thinking in formulating the research hypotheses and selecting the hypothesized relationships between the research variables, and on the descriptive analytical approach, by surveying Arab and foreign books, periodicals, and other publications related to the research topic and reviewing, studying, and analyzing them in order to address the research objectives and discuss its hypotheses. As for the research methods, the researcher relied on the questionnaire as a tool for collecting primary data; the questionnaire was distributed to a sample of 167 workers at Tishreen University, of which 164 were recovered and 161 were suitable for analysis. The researcher then analyzed the data on the dependent and independent variables and tested the hypotheses using the statistical program SPSS version 20 to accept or reject them. Among the most important results of the study: the content of the training program is not commensurate with the needs of workers at work, the training content is not determined on the basis of compatibility with the differing abilities of the trainees, workers do not participate in choosing the appropriate work method and style, and workers do not feel bored or fed up because of their work.
Language technology is already largely adopted by most Language Service Providers (LSPs) and integrated into their traditional translation processes. In this context, there are many different approaches to applying Post-Editing (PE) to a machine-translated text, involving different workflow processes and steps that can be more or less effective and favorable. In the present paper, we propose a 3-step Post-Editing Workflow (PEW). Drawing on industry insight, this paper aims to provide a basic framework for LSPs and Post-Editors on how to streamline Post-Editing workflows in order to improve quality, achieve higher profitability and a better return on investment, and standardize and facilitate internal processes in terms of management and linguist effort when it comes to PE services. We argue that a comprehensive PEW consists of three essential tasks: Pre-Editing, Post-Editing, and Annotation/Machine Translation (MT) evaluation processes (Guerrero, 2018), supported by three essential roles: Pre-Editor, Post-Editor, and Annotator (Gene, 2020). Furthermore, the present paper demonstrates the training challenges arising from this PEW, supported by empirical research results, as reflected in a digital survey among language industry professionals (Gene, 2020), which was conducted in the context of a Post-Editing Webinar. Its sample comprised 51 representatives of LSPs and 12 representatives of SLVs (Single Language Vendors).