
Extended Abstract of Performance Analysis and Prediction of Model Transformation

Published by: Vijayshree Vijayshree
Publication date: 2020
Research field: Informatics Engineering
Language: English





Model transformation is increasingly integrated into the software development process. However, systems developed with model transformation sometimes grow in size and become complex, and the performance of the transformation tends to decrease accordingly. Performance is therefore an important quality attribute of model transformation. Current research on model transformation performance focuses on optimising the engines internally; no research activities exist that support the transformation engineer in identifying performance bottlenecks in the transformation rules and, hence, in predicting the overall performance. In this paper we present our vision of a monitoring and profiling approach that identifies the root cause of performance issues in the transformation rules and predicts the performance of a model transformation. This will enable software engineers to systematically identify performance issues as well as predict the performance of model transformations.
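To make the idea of rule-level profiling concrete, below is a minimal Python sketch. The `Rule` class and `profile_rules` helper are hypothetical stand-ins for the instrumentation hooks a real transformation engine (e.g. ATL or Henshin) would provide; per-rule wall-clock time and match counts are the kind of data a transformation engineer needs to locate a bottleneck rule.

```python
import time
from collections import defaultdict

class Rule:
    """A transformation rule: a guard that matches source elements
    and an action that produces target elements. (Hypothetical API.)"""
    def __init__(self, name, match, apply):
        self.name, self.match, self.apply = name, match, apply

def profile_rules(rules, model):
    """Apply each rule to every model element, recording per-rule
    execution time and match counts."""
    stats = defaultdict(lambda: {"time": 0.0, "matches": 0})
    for rule in rules:
        start = time.perf_counter()
        for element in model:
            if rule.match(element):
                stats[rule.name]["matches"] += 1
                rule.apply(element)
        stats[rule.name]["time"] = time.perf_counter() - start
    # Sorting by total time surfaces the likely bottleneck rules.
    return sorted(stats.items(), key=lambda kv: kv[1]["time"], reverse=True)

# Toy usage: one deliberately slow rule and one cheap rule.
slow = Rule("Class2Table", lambda e: True,
            lambda e: sum(i * i for i in range(20_000)))
fast = Rule("Attr2Column", lambda e: True, lambda e: None)
for name, s in profile_rules([slow, fast], model=range(100)):
    print(f"{name}: {s['matches']} matches, {s['time']:.3f}s")
```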


Read also

Successful HPC software applications are long-lived. When ported across machines and their compilers, these applications often produce different numerical results, many of which are unacceptable. Such variability is also a concern when optimizing the code more aggressively to gain performance. Efficient tools that help locate the program units (files and functions) within which most of the variability occurs are badly needed, both to plan for code ports and to root-cause errors due to variability when they happen in the field. In this work, we offer an enhanced version of the open-source testing framework FLiT to serve these roles. Key new features of FLiT include a suite of bisection algorithms that help locate the root causes of variability. Another added feature allows an analysis of the tradeoffs between performance and the degree of variability. Our new contributions also include a collection of case studies. Results on the MFEM finite-element library include variability/performance tradeoffs and the identification of a (hitherto unknown) abnormal level of result variability even under mild compiler optimizations. Results from studying the Laghos proxy application include identifying a significantly divergent floating-point result variability and successful root-causing down to the problematic function in as few as 14 program executions. Finally, in an evaluation of 4,376 controlled injections of floating-point perturbations into the LULESH proxy application, we showed that the FLiT framework has 100% precision and recall in discovering the file and function locations of the injections, all within an average of only 15 program executions.
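The bisection idea is simple to sketch: repeatedly rebuild with half of the suspect files compiled under the aggressive flags and test whether the divergence survives. The sketch below is an illustration of that idea, not FLiT's actual implementation (which drives the real build system and handles multiple culprits); `differs` is a hypothetical predicate standing in for a rebuild-and-compare cycle.

```python
def bisect_files(files, differs):
    """Narrow `files` down to one file whose optimized compilation changes
    the program's result. `differs(subset)` answers: does building `subset`
    with the aggressive flags (and the rest at baseline) still reproduce
    the divergent output? Assumes at least one culprit exists in `files`."""
    while len(files) > 1:
        half = files[:len(files) // 2]
        # Keep whichever half still reproduces the divergence.
        files = half if differs(half) else files[len(files) // 2:]
    return files[0]

# Toy usage: pretend solver.cc is the lone culprit.
print(bisect_files(["a.cc", "b.cc", "solver.cc", "d.cc"],
                   differs=lambda fs: "solver.cc" in fs))  # -> solver.cc
```

Each call to `differs` costs one rebuild and run, so the search is logarithmic in the number of files, consistent with the small execution counts reported above.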
We present a new probabilistic model checker Storm. Using state-of-the-art libraries, we aim for both high performance and versatility. This extended abstract gives a brief overview of the features of Storm.
Diego Latella (2016)
In this extended abstract a view on the role of Formal Methods in System Engineering is briefly presented. Two examples of useful analysis techniques based on solid mathematical theories are then discussed, as well as the software tools that have been built to support them. The first technique is Scalable Approximated Population DTMC Model-checking; the second is Spatial Model-checking for Closure Spaces. Both techniques have been developed in the context of the EU-funded project QUANTICOL.
Big data applications and analytics are employed in many sectors for a variety of goals: improving customer satisfaction, predicting market behavior, or improving processes in public health. These applications consist of complex software stacks that are often run on cloud systems. Predicting execution times is important for estimating the cost of cloud services and for effectively managing the underlying resources at runtime. Machine Learning (ML), which provides black-box models of the relationship between application performance and system configuration without requiring detailed knowledge of the system, has become a popular way of predicting the performance of big data applications. We investigate the cost-benefits of using supervised ML models for predicting the performance of applications on Spark, one of today's most widely used frameworks for big data analysis. We compare our approach with Ernest (an ML-based technique proposed in the literature by the Spark inventors) on a range of scenarios, application workloads, and cloud system configurations. Our experiments show that Ernest can accurately estimate the performance of very regular applications, but it fails when applications exhibit more irregular patterns and/or when extrapolating to bigger data set sizes. Results show that our models match or exceed Ernest's performance, sometimes enabling us to reduce the prediction error from 126-187% to only 5-19%.
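For context, Ernest's model (as described by its authors) predicts runtime as a non-negative linear combination of a few analytic scaling features, fit with non-negative least squares on a handful of small training runs. The sketch below is an illustrative reconstruction of that scheme; the training measurements are made up for demonstration.

```python
import numpy as np
from scipy.optimize import nnls

def features(data_size, machines):
    """Ernest-style feature map for a run on `machines` machines over a
    `data_size` fraction of the input."""
    m = np.asarray(machines, dtype=float)
    s = np.asarray(data_size, dtype=float)
    # [serial term, parallel work per machine, tree-reduce term, per-machine overhead]
    return np.column_stack([np.ones_like(m), s / m, np.log2(m), m])

# Hypothetical training runs: (data fraction, #machines) -> measured seconds.
X = features([0.125, 0.25, 0.5, 0.5], [2, 4, 4, 8])
y = np.array([30.0, 35.0, 60.0, 40.0])
theta, _ = nnls(X, y)  # non-negative least squares keeps coefficients physical

# Extrapolate to the full data set on 16 machines.
pred = features([1.0], [16]) @ theta
print(f"predicted runtime: {pred[0]:.1f} s")
```

A model this rigid captures regular scaling behavior well, which is exactly why it struggles on the irregular workloads described above.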
In this paper, an analytical approach to the nonlinearly distorted bit error rate performance of optical orthogonal frequency division multiplexing (O-OFDM) with single-photon avalanche diode (SPAD) receivers is presented. The major distortion effects of passive quenching (PQ) and active quenching (AQ) SPAD receivers are analysed in this study. The performance of DC-biased O-OFDM and asymmetrically clipped O-OFDM with PQ and AQ SPADs is derived. The comparison results show the maximum optical irradiance imposed by the nonlinear distortion, which limits the transmission power and bit rate. The theoretical maximum bit rate of SPAD-based OFDM is found to be up to 1 Gbit/s. This approach supplies a closed-form analytical solution for designing an optimal SPAD-based system.
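The paper's closed-form BER expressions are not reproduced here, but the standard dead-time models that underlie PQ/AQ count-rate distortion can be stated (textbook detector theory, not the paper's specific derivation). With incident photon-detection rate lambda and dead time tau_d:

```latex
% Passive quenching behaves as a paralyzable detector
% (each arrival restarts the dead time):
C_{\mathrm{PQ}}(\lambda) = \lambda \, e^{-\lambda \tau_d}
% Active quenching behaves as a non-paralyzable detector
% (fixed recovery after each registered count):
C_{\mathrm{AQ}}(\lambda) = \frac{\lambda}{1 + \lambda \tau_d}
```

Both measured count rates saturate as lambda grows (the PQ rate even decreases beyond lambda = 1/tau_d), which is the nonlinearity that distorts the OFDM waveform and caps the usable transmission power.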