
Building a simulator for superscalar and vector processor architectures and comparing their performance in handling data-level parallelism

Original title (Arabic): بناء محاكي لبنيتي المعالجات فائقة التدرج و المعالجات الشعاعية و مقارنة أدائهما في معالجة التفرع على مستوى البيانات

Publication date: 2015
Language of the research: Arabic





This paper presents parallel computer architectures, in particular superscalar processors and vector processors. A simulator was built around the basic characteristics of each architecture; it models their mechanisms of operation in software, with the aim of comparing the performance of the two architectures in executing Data-Level Parallelism (DLP) and Instruction-Level Parallelism (ILP). The results show that the effectiveness of executing instructions in parallel depends significantly on choosing the architecture appropriate to the type of parallelism the instructions admit, and that the vector features of the vector architecture achieve a remarkable, non-negligible performance improvement when executing DLP, while also simplifying the code and reducing the number of instructions. The simulator provides a good core that can be developed and extended, especially for teaching Computer Science and Engineering students and for research.
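The reduction in instruction count that the abstract attributes to vector features can be sketched with a toy dynamic-instruction-count model. This is a hypothetical illustration, not the paper's simulator: the per-iteration and per-chunk operation counts and the vector length are assumed values.

```python
def scalar_instruction_count(n, ops_per_iteration=4):
    # Scalar loop for C[i] = A[i] + B[i]: assume 2 loads, 1 add,
    # and 1 store per element (loop-control overhead ignored).
    return n * ops_per_iteration

def vector_instruction_count(n, vector_length=64, ops_per_chunk=4):
    # Vector loop: assume 2 vector loads, 1 vector add, and 1 vector
    # store per chunk of `vector_length` elements.
    chunks = -(-n // vector_length)  # ceiling division
    return chunks * ops_per_chunk

print(scalar_instruction_count(1024))  # 4096
print(vector_instruction_count(1024))  # 64
```

With these assumed counts, a 1024-element loop shrinks from 4096 dynamic instructions to 64, which is the "simplify the code and reduce the number of instructions" effect the paper reports for DLP workloads.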


Artificial intelligence review:
Research summary
This paper discusses parallel processor architectures, focusing on two main architectures: the superscalar processor and the vector processor. A software simulator was built for each of these architectures in order to compare their performance in handling data-level parallelism (DLP) and instruction-level parallelism (ILP). The results showed that the effectiveness of executing instructions depends heavily on choosing the processor architecture appropriate to the required type of parallelism. They also showed that the vector features of the vector architecture achieve a noticeable performance improvement when executing DLP operations, which helps simplify the code and reduce the number of instructions. The simulator presented is a good core that can be developed and extended, especially for teaching Computer Science and Engineering students and for research.
Critical review
This paper is an important step toward a better understanding of parallel processor architectures and their performance. Nevertheless, a few points could be improved. First, it would be useful to include more detail on how the simulator could be extended to cover other types of parallel processors. Second, the study would be more comprehensive if it included additional practical experiments or applied case studies. Finally, a deeper analysis of the effects of cache memory in vector processors, and of how this aspect could be improved to achieve better performance, would be valuable.
Questions related to the research
  1. Which two main architectures does the paper focus on?

    The paper focuses on the superscalar processor architecture and the vector processor architecture.

  2. What is the main goal of building the simulator in this study?

    The main goal is to compare the performance of the two processors in handling data-level parallelism (DLP) and instruction-level parallelism (ILP).

  3. What are the study's main findings about the performance of vector processors?

    The study showed that vector processors achieve a noticeable performance improvement when executing DLP operations, which helps simplify the code and reduce the number of instructions.

  4. What recommendations does the study make for developing the simulation environment?

    The study recommends extending the simulation environment to cover as many processor architectures as possible, so that it helps researchers and developers understand them, since implementing these architectures in real hardware is costly and not readily available.
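The kind of comparison such a simulation environment performs can be approximated with a minimal cycle-count model. This is only a sketch based on the textbook convoy/chime approximation for vector machines and an idealized issue-width model for superscalar machines, not the paper's actual simulator; the issue width, convoy count, and per-element instruction count below are assumed values.

```python
import math

def superscalar_cycles(num_instructions, issue_width=2):
    # Ideal superscalar: up to `issue_width` independent instructions
    # issue per cycle (real machines issue fewer when instructions
    # depend on one another).
    return math.ceil(num_instructions / issue_width)

def vector_cycles(num_elements, num_convoys=2, startup=0):
    # Chime approximation: each convoy of vector instructions takes
    # roughly one cycle per element, plus optional pipeline start-up.
    return num_convoys * num_elements + startup

n = 64                      # elements in the data-parallel loop
scalar_ops_per_element = 6  # assumed scalar instructions per iteration
print(superscalar_cycles(scalar_ops_per_element * n))  # 192
print(vector_cycles(n))                                # 128
```

Even this crude model reproduces the study's qualitative finding: which architecture wins depends entirely on the parameters (issue width, convoy count, loop length), i.e. on matching the architecture to the available parallelism.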

