
A checkpoint/recovery model based on work stealing for grid applications

Arabic title: نموذج تخزين / استرجاع لتطبيقات الحوسبة الشبكية بالاعتماد على سرقة العمل (A checkpoint/recovery model for grid computing applications based on work stealing)

Publication date: 2016
Research language: Arabic





This study investigates fault tolerance in large distributed environments such as computing grids and clusters, with the goal of finding effective ways to handle faults caused by the crash of a node or by network disconnection, so that the application can keep running in the presence of failures. We first model the distributed environment and the parallel applications that run on it. We then present a checkpoint mechanism that guarantees continuity of the work by relying on an abstract representation of the application (a macro dataflow graph); the mechanism is suited to applications that use a work-stealing algorithm to distribute tasks over heterogeneous and dynamic environments. It adds only a modest overhead to the parallel execution time, since part of the work state is saved during fault-free execution. The study also provides a mathematical model of the time complexity, i.e. the cost, of the proposed mechanism.
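The paper gives only a high-level description and a cost model; no implementation is included here. Purely as an illustration of the general idea in the abstract, the sketch below models a few workers that execute tasks with work stealing while periodically saving the frontier of unexecuted tasks (a stand-in for the macro-dataflow state); after a crash, that frontier is reloaded and only the lost work is redone. All names (Task, Worker, checkpoint, recover) and the checkpoint period are hypothetical, not taken from the paper.

```python
# Minimal, illustrative sketch (not the paper's implementation).
import collections
import pickle
import random

Task = collections.namedtuple("Task", ["tid", "payload"])

class Worker:
    def __init__(self, wid):
        self.wid = wid
        self.deque = collections.deque()   # local ready tasks

    def push(self, task):
        self.deque.append(task)

    def pop_local(self):
        return self.deque.pop() if self.deque else None      # LIFO for the owner

    def steal(self):
        return self.deque.popleft() if self.deque else None  # FIFO for thieves

def checkpoint(workers, path):
    """Save the frontier of not-yet-executed tasks (the recoverable state)."""
    frontier = {w.wid: list(w.deque) for w in workers}
    with open(path, "wb") as f:
        pickle.dump(frontier, f)

def recover(path):
    """Rebuild the workers' deques from the last checkpoint after a crash."""
    with open(path, "rb") as f:
        frontier = pickle.load(f)
    workers = []
    for wid, tasks in frontier.items():
        w = Worker(wid)
        w.deque.extend(tasks)
        workers.append(w)
    return workers

# Tiny usage example: 2 workers, 6 tasks, checkpoint every 4 completions.
workers = [Worker(0), Worker(1)]
for i in range(6):
    workers[i % 2].push(Task(i, f"work-{i}"))

done = 0
while any(w.deque for w in workers):
    for w in workers:
        task = w.pop_local()
        if task is None:                        # empty deque: try to steal
            victim = random.choice([v for v in workers if v is not w])
            task = victim.steal()
        if task is not None:
            done += 1                           # "execute" the task
            if done % 4 == 0:
                checkpoint(workers, "ckpt.bin") # periodic checkpoint

workers = recover("ckpt.bin")                   # after a simulated crash
print("tasks left to redo:", sum(len(w.deque) for w in workers))
```

Only the unfinished part of the work has to be re-executed after recovery, which is the source of the modest fault-free overhead mentioned in the abstract.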

Related research

Due to the development of deep learning, natural language processing tasks have made great progress by leveraging bidirectional encoder representations from Transformers (BERT). The goal of information retrieval is to find the results most relevant to the user's query in a large set of documents. Although BERT-based retrieval models have shown excellent results in many studies, they usually suffer from the need for large amounts of computation and/or additional storage space. In view of these flaws, a BERT-based Siamese-structured retrieval model (BESS) is proposed in this paper. BESS not only inherits the merits of pre-trained language models, but can also automatically generate extra information to compensate the original query. Besides, a reinforcement learning strategy is introduced to make the model more robust. Accordingly, we evaluate BESS on three publicly available corpora, and the experimental results demonstrate the efficiency of the proposed retrieval model.
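BESS itself is not described in enough detail here to reproduce; as a hedged sketch, the snippet below only shows the generic bi-encoder (Siamese) retrieval pattern the abstract refers to: one shared encoder maps queries and documents into the same vector space, and documents are ranked by cosine similarity. The toy hashing encoder is a placeholder for a BERT encoder.

```python
import numpy as np

def encode(text, dim=64):
    """Toy shared encoder (hashed bag-of-words), standing in for BERT."""
    vec = np.zeros(dim)
    for token in text.lower().split():
        vec[hash(token) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def rank(query, documents):
    """Score every document against the query with cosine similarity."""
    q = encode(query)
    scores = [(float(encode(d) @ q), d) for d in documents]
    return sorted(scores, reverse=True)

docs = ["checkpointing for grid applications",
        "work stealing schedulers",
        "image retrieval with color histograms"]
print(rank("fault tolerant grid checkpoint", docs))
```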
In this study, carried out as the fourth-year semester project, our goal was to shed light on retrieving images from a large collection based on the content of a target image. We supported the study with an application, built in the MATLAB environment, that searches for images similar to an input image. Our work focused on two important features that almost every content-based image retrieval system relies on: the color histogram and the image texture. We explained the steps of the retrieval process, starting from analyzing the image and extracting its descriptor vector, then matching it against the feature vectors of the images stored in the database so that the images can be ranked by their similarity to the target image. The study also addressed using the HMMD color space as an alternative to the RGB color space for extracting color-structure descriptors, on the grounds that it is a user-oriented color model and should therefore yield better results that satisfy the user. We supported the study with a number of figures, examples, and diagrams that illustrate the theoretical content and what we implemented in MATLAB.
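The original project was implemented in MATLAB; as a minimal sketch of the histogram part of the pipeline only (texture and the HMMD color space are omitted), the following assumes images are already loaded as HxWx3 uint8 arrays and ranks a small database by histogram intersection with the target image.

```python
import numpy as np

def color_histogram(img, bins=8):
    """Quantize each RGB channel into `bins` levels and build a joint,
    normalized 3D histogram, flattened into a descriptor vector."""
    q = (img // (256 // bins)).reshape(-1, 3)
    hist, _ = np.histogramdd(q, bins=(bins, bins, bins),
                             range=((0, bins), (0, bins), (0, bins)))
    hist = hist.flatten()
    return hist / hist.sum()

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1 means identical color distributions."""
    return float(np.minimum(h1, h2).sum())

# Toy usage with random "images" standing in for a real database.
rng = np.random.default_rng(0)
target = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
database = [rng.integers(0, 256, (64, 64, 3), dtype=np.uint8) for _ in range(3)]

target_h = color_histogram(target)
ranked = sorted(database,
                key=lambda im: histogram_intersection(color_histogram(im), target_h),
                reverse=True)
```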
Sentence-level quality estimation (QE) of machine translation is traditionally formulated as a regression task, and the performance of QE models is typically measured by Pearson correlation with human labels. Recent QE models have achieved previously unseen levels of correlation with human judgments, but they rely on large multilingual contextualized language models that are computationally expensive, making them infeasible for real-world applications. In this work, we evaluate several model compression techniques for QE and find that, despite their popularity in other NLP tasks, they lead to poor performance in this regression setting. We observe that a full model parameterization is required to achieve SoTA results in a regression task. However, we argue that this level of expressiveness over a continuous range is unnecessary given the downstream applications of QE, and show that reframing QE as a classification problem and evaluating QE models with classification metrics would better reflect their actual performance in real-world applications.
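As a sketch of the reframing this abstract argues for, the snippet below bins continuous quality scores into discrete classes and evaluates with a classification metric instead of Pearson correlation. The thresholds and class labels are illustrative assumptions, not values from the paper.

```python
import numpy as np

def to_classes(scores, thresholds=(0.33, 0.66)):
    """Map continuous quality scores in [0, 1] to {0: bad, 1: ok, 2: good}."""
    return np.digitize(scores, thresholds)

def class_accuracy(pred_scores, gold_scores):
    return float(np.mean(to_classes(pred_scores) == to_classes(gold_scores)))

gold = np.array([0.10, 0.40, 0.55, 0.90])
pred = np.array([0.20, 0.35, 0.70, 0.85])

pearson = float(np.corrcoef(pred, gold)[0, 1])   # traditional QE metric
print(f"Pearson: {pearson:.3f}, class accuracy: {class_accuracy(pred, gold):.3f}")
```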
Dialogue systems like chatbots, and tasks like question answering (QA), have gained traction in recent years; yet evaluating such systems remains difficult. Reasons include the great variety of contexts and use cases for these systems as well as the high cost of human evaluation. In this paper, we focus on a specific type of dialogue system: Time-Offset Interaction Applications (TOIAs), intelligent conversational software that simulates face-to-face conversations between humans and pre-recorded human avatars. Under the constraint that a TOIA is a single-output system interacting with users with different expectations, we identify two challenges: first, how do we define a 'good' answer? and second, what is an appropriate metric to use? We explore both challenges through the creation of a novel dataset that identifies multiple good answers to specific TOIA questions with the help of Amazon Mechanical Turk workers. This 'view from the crowd' allows us to study the variations in how TOIA interrogators perceive its answers. Our contributions include the annotated dataset, which we make publicly available, and the proposal of Success Rate @k as an evaluation metric that is more appropriate than traditional QA and information retrieval metrics.
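A minimal sketch of a Success Rate @k style metric is given below, assuming it counts a question as successful when at least one of the system's top-k candidate answers is among the answers annotated as good; this definition is an assumption for illustration, not taken verbatim from the paper.

```python
def success_rate_at_k(ranked_answers_per_question, good_answers_per_question, k):
    """Fraction of questions whose top-k candidates contain a good answer."""
    successes = 0
    for ranked, good in zip(ranked_answers_per_question, good_answers_per_question):
        if any(ans in good for ans in ranked[:k]):
            successes += 1
    return successes / len(ranked_answers_per_question)

# Two questions, each with a ranked candidate list and a set of
# crowd-annotated good answers.
ranked = [["a1", "a2", "a3"], ["b3", "b1", "b2"]]
good = [{"a2"}, {"b2"}]
print(success_rate_at_k(ranked, good, k=2))   # 0.5
```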
In this paper, we introduce a continuous mathematical model to optimize the trade-off between the overhead of the fault-tolerance mechanism and the impact of faults in the execution environment. The fault-tolerance mechanism considered in this research is a coordinated checkpoint/recovery mechanism, and the study is based on a stochastic model of different performance criteria of a parallel application running on a parallel and distributed environment.
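This abstract does not give the model's equations; purely to illustrate the kind of trade-off it describes, the classic first-order approximation attributed to Young balances checkpoint overhead against expected re-execution after a failure (this is a well-known related result, not the model from the paper).

```latex
% Illustration only (Young's classic approximation), not the paper's model.
% C: cost of one checkpoint, M: mean time between failures, T: checkpoint interval.
% Waste rate ~ C/T (checkpoint overhead) + T/(2M) (expected re-execution);
% minimizing over T gives the well-known optimum:
\[
  T_{\mathrm{opt}} \approx \sqrt{2\,C\,M}
\]
```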
