
Testing with Jupyter notebooks: NoteBook VALidation (nbval) plug-in for pytest

Posted by Marijan Beg
Publication date: 2020
Research field: Informatics engineering
Language: English





The notebook validation tool nbval loads and executes Python code from a Jupyter notebook file. As the outputs of the notebook's cells are computed, they are compared with the outputs saved in the notebook file, treating each cell as a test. Deviations are reported as test failures, and various configuration options are available to control this behaviour. Application use cases include validating notebook-based documentation, tutorials and textbooks, as well as using notebooks as additional unit, integration and system tests for the libraries used in the notebook. Nbval is implemented as a plugin for the pytest testing software.
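As a rough sketch of this workflow (the notebook file name below is a hypothetical example, and the flags reflect nbval's commonly documented command-line usage):

```python
# Contents of a code cell in example.ipynb (hypothetical notebook). When the
# notebook was last saved, the stored output of this cell was "4".
print(2 + 2)

# nbval re-executes the notebook and compares the freshly computed outputs
# with the outputs stored in the .ipynb file; any mismatch is reported as a
# pytest test failure. Typical invocations:
#
#   pip install nbval
#   pytest --nbval example.ipynb       # compare outputs of all cells with stored output
#   pytest --nbval-lax example.ipynb   # compare only cells marked with the
#                                      # "# NBVAL_CHECK_OUTPUT" comment
```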


Read also

Xuye Liu, Dakuo Wang, April Wang (2021)
Jupyter notebooks allow data scientists to write machine learning code together with its documentation in cells. In this paper, we propose a new task of code documentation generation (CDG) for computational notebooks. In contrast to previous CDG tasks, which focus on generating documentation for single code snippets, in a computational notebook a single documentation entry in a markdown cell often corresponds to multiple code cells, and these code cells have an inherent structure. We propose a new model (HAConvGNN) that uses a hierarchical attention mechanism to consider the relevant code cells and the relevant code-token information when generating the documentation. Tested on a new corpus constructed from well-documented Kaggle notebooks, our model outperforms other baseline models.
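For illustration only (this toy example is not taken from the paper's corpus), the notebook structure the CDG task targets looks like a single markdown cell whose documentation covers several related code cells:

```python
# --- Markdown cell (the documentation a CDG model would generate) ---
# "Build a small table, drop rows with missing values, and summarise it."

# --- Code cell 1 (one markdown cell above documents all three code cells) ---
import pandas as pd
df = pd.DataFrame({"x": [1.0, 2.0, None, 4.0], "y": [10, 20, 30, 40]})

# --- Code cell 2 ---
df = df.dropna()

# --- Code cell 3 ---
print(df.describe())
```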
Reproducibility of computational studies is a hallmark of scientific methodology. It enables researchers to build with confidence on the methods and findings of others, reuse and extend computational pipelines, and thereby drive scientific progress. Since many experimental studies rely on computational analyses, biologists need guidance on how to set up and document reproducible data analyses or simulations. In this paper, we address several questions about reproducibility. For example, what are the technical and non-technical barriers to reproducible computational studies? What opportunities and challenges do computational notebooks offer to overcome some of these barriers? What tools are available and how can they be used effectively? We have developed a set of rules to serve as a guide to scientists with a specific focus on computational notebook systems, such as Jupyter Notebooks, which have become a tool of choice for many applications. Notebooks combine detailed workflows with narrative text and visualization of results. Combined with software repositories and open source licensing, notebooks are powerful tools for transparent, collaborative, reproducible, and reusable data analyses.
Computational notebooks have emerged as the platform of choice for data science and analytical workflows, enabling rapid iteration and exploration. By keeping intermediate program state in memory and segmenting units of execution into so-called cells, notebooks allow users to execute their workflows interactively and enjoy particularly tight feedback. However, as cells are added, removed, reordered, and rerun, this hidden intermediate state accumulates in a way that is not necessarily correlated with the notebook's visible code, making execution behavior difficult to reason about and leading to errors and a lack of reproducibility. We present NBSafety, a custom Jupyter kernel that uses runtime tracing and static analysis to automatically manage lineage associated with cell execution and global notebook state. NBSafety detects and prevents errors that users make during unaided notebook interactions, all while preserving the flexibility of existing notebook semantics. We evaluate NBSafety's ability to prevent erroneous interactions by replaying and analyzing 666 real notebook sessions. Of these, NBSafety identified 117 sessions with potential safety errors, and in the remaining 549 sessions, the cells that NBSafety identified as resolving safety issues were more than $7\times$ more likely to be selected by users for re-execution compared to a random baseline, even though the users were not using NBSafety and were therefore not influenced by its suggestions.
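As a plain illustration of the hidden-state problem described above (this shows ordinary notebook semantics only, not NBSafety's tracing or analysis machinery):

```python
# Each block of statements stands in for a notebook cell, executed in the
# order shown to mimic an out-of-order re-execution session.

# Cell 1 (first run)
data = [1, 2, 3]

# Cell 2
total = sum(data)          # total == 6

# Cell 1 is edited and re-run later in the session
data = [1, 2, 3, 4]

# Cell 2 is NOT re-run, so `total` is now stale: the visible code implies
# total == sum(data), but the in-memory state disagrees. This is the kind of
# staleness a lineage-aware kernel such as NBSafety aims to detect.
print(total == sum(data))  # prints False
```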
This paper proposes configuration testing: evaluating configuration values (to be deployed) by exercising the code that uses the values and assessing the corresponding program behavior. We advocate that configuration values should be systematically tested like software code and that configuration testing should be a key reliability engineering practice for preventing misconfigurations from reaching production deployment. The essential advantage of configuration testing is that it puts the configuration values (to be deployed) in the context of the target software program under test. In this way, the dynamic effects of configuration values and the impact of configuration changes can be observed during testing. Configuration testing overcomes the fundamental limitations of the de facto approaches to combatting misconfigurations, namely configuration validation and software testing: the former is disconnected from code logic and semantics, while the latter can hardly cover all possible configuration values and their combinations. Our preliminary results show the effectiveness of configuration testing in capturing real-world misconfigurations. We present the principles of writing new configuration tests and the promise of retrofitting existing software tests to be configuration tests. We discuss new adequacy and quality metrics for configuration testing. We also explore regression testing techniques to enable incremental configuration testing during continuous integration and deployment in modern software systems.
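A minimal sketch of the configuration-testing idea, written as an ordinary pytest test (the function and parameter names here are hypothetical and do not come from the paper's framework): the value to be deployed is exercised through the code that consumes it, so its dynamic effect is observed rather than checked by a code-agnostic validator.

```python
import pytest


def load_config(values):
    """Hypothetical stand-in for parsing a configuration file to be deployed."""
    return dict(values)


def start_server(config):
    """Hypothetical code path that actually uses the configuration value, so
    its effect on program behaviour is observed during the test."""
    timeout = config["request_timeout_s"]
    if timeout <= 0:
        raise ValueError("request_timeout_s must be positive")
    return {"timeout": timeout}


def test_deployed_timeout_is_usable():
    # Exercise the real code with the configuration value to be deployed.
    config = load_config({"request_timeout_s": 30})
    assert start_server(config)["timeout"] == 30


def test_misconfigured_timeout_is_caught_before_deployment():
    # A bad value is caught by running the consuming code, not by a
    # standalone validator disconnected from the code's semantics.
    with pytest.raises(ValueError):
        start_server(load_config({"request_timeout_s": 0}))
```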
As a part of the digital transformation, we interact with more and more intelligent gadgets. Today, these gadgets are often mobile devices, but with the advent of smart cities, more and more infrastructure in our surroundings, such as traffic and buildings, becomes intelligent. The intelligence, however, does not emerge by itself. Instead, we need both design techniques to create intelligent systems and approaches to validate their correct behavior. An example of intelligent systems that could benefit smart cities is self-driving vehicles. Self-driving vehicles are steadily becoming both commercially available and common on roads. Accidents involving self-driving vehicles, however, have raised concerns about their reliability. Due to these concerns, the safety of self-driving vehicles should be thoroughly tested before they can be released into traffic. To ensure that self-driving vehicles encounter all possible scenarios, several million hours of testing must be carried out; therefore, testing self-driving vehicles in the real world is impractical. There is also the issue that testing self-driving vehicles directly in traffic poses a potential safety hazard to human drivers. To tackle this challenge, validation frameworks for testing self-driving vehicles in simulated scenarios are being developed by academia and industry. In this chapter, we briefly introduce self-driving vehicles and give an overview of validation frameworks for testing them in a simulated environment. We conclude by discussing what an ideal state-of-the-art validation framework should be and what could benefit validation frameworks for self-driving vehicles in the future.