
Enabling collaborative data science development with the Ballet framework

Published by: Micah Smith
Publication date: 2020
Research field: Informatics engineering
Language: English





While the open-source software development model has led to successful large-scale collaborations in building software systems, data science projects are frequently developed by individuals or small teams. We describe challenges to scaling data science collaborations and present a conceptual framework and ML programming model to address them. We instantiate these ideas in Ballet, a lightweight framework for collaborative, open-source data science through a focus on feature engineering, and an accompanying cloud-based development environment. Using our framework, collaborators incrementally propose feature definitions to a repository; each definition is subjected to an ML performance evaluation and can be automatically merged into an executable feature engineering pipeline. We leverage Ballet to conduct a case study analysis of an income prediction problem with 27 collaborators, and discuss implications for future designers of collaborative projects.
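To make the programming model concrete, the sketch below shows roughly what a contributed feature definition looks like: a named input column paired with a scikit-learn-style transformer. The `Feature` import path and argument names follow Ballet's public documentation as best recalled here and should be treated as assumptions, not verified API; the "hours-per-week" column is a hypothetical input for an income prediction task.

```python
# Hedged sketch of a contributed feature definition: one input column plus a
# transformer. The ballet import path and Feature signature are assumptions
# based on Ballet's public docs, not verified API.
from sklearn.impute import SimpleImputer
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

from ballet import Feature  # assumed import path

input = "hours-per-week"  # hypothetical column in an income prediction dataset
transformer = make_pipeline(
    SimpleImputer(strategy="median"),  # fill missing values first
    StandardScaler(),                  # then standardize
)
feature = Feature(input=input, transformer=transformer)
```

In the collaboration model the abstract describes, a small file like this is proposed to the shared repository, evaluated for its effect on pipeline performance, and merged automatically if it passes.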


Read also

The field of exoplanetary science has emerged over the past two decades, rising up alongside traditional solar system planetary science. Both fields focus on understanding the processes which form and sculpt planets through time, yet there has been less scientific exchange between the two communities than is ideal. This white paper explores some of the institutional and cultural barriers which impede cross-discipline collaborations and suggests solutions that would foster greater collaboration. Some solutions require structural or policy changes within NASA itself, while others are directed towards other institutions, including academic publishers, that can also facilitate greater interdisciplinarity.
In recent years, program verifiers and interactive theorem provers have become more powerful and more suitable for verifying large programs or proofs. This has demonstrated the need for improving the user experience of these tools to increase productivity and to make them more accessible to non-experts. This paper presents an integrated development environment for Dafny (a programming language, verifier, and proof assistant) that addresses issues present in most state-of-the-art verifiers: low responsiveness and lack of support for understanding non-obvious verification failures. The paper demonstrates several new features that move the state-of-the-art closer towards a verification environment that can provide verification feedback as the user types and can present more helpful information about the program or failed verifications in a demand-driven and unobtrusive way.
Continuous, ubiquitous monitoring through wearable sensors has the potential to collect useful information about users' context. Heart rate is an important physiologic measure used in a wide variety of applications, such as fitness tracking and health monitoring. However, wearable sensors that monitor heart rate, such as smartwatches and electrocardiogram (ECG) patches, can have gaps in their data streams because of technical issues (e.g., bad wireless channels, battery depletion, etc.) or user-related reasons (e.g., motion artifacts, user compliance, etc.). The ability to use other available sensor data (e.g., smartphone data) to estimate missing heart rate readings is useful to cope with any such gaps, thus improving data quality and continuity. In this paper, we test the feasibility of estimating raw heart rate using smartphone sensor data. Using data generated by 12 participants in a one-week study period, we were able to build both personalized and generalized models using regression, SVM, and random forest algorithms. All three algorithms outperformed the baseline moving-average interpolation method for both personalized and generalized settings. Moreover, our findings suggest that personalized models outperformed the generalized models, which speaks to the importance of considering personal physiology, behavior, and lifestyle in the estimation of heart rate. The promising results provide preliminary evidence of the feasibility of combining smartphone sensor data with wearable sensor data for continuous heart rate monitoring.
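To make the modeling setup concrete, here is a hedged sketch of the approach this abstract describes: fit a regressor on smartphone-derived features where heart rate is observed, and compare it against a moving-average interpolation baseline. The file name, column names, window size, and random split are illustrative assumptions; the study's actual features and its personalized-versus-generalized protocol are not reproduced here.

```python
# Sketch only: estimate heart rate from smartphone features and compare to a
# moving-average baseline. All names below are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

df = pd.read_csv("sensor_log.csv")  # hypothetical merged smartphone + HR log
features = ["accel_magnitude", "step_count", "screen_on", "hour_of_day"]
labeled = df.dropna(subset=["heart_rate"])

# Generalized model: pool all participants' labeled readings. Fitting one
# model per participant would correspond to the study's personalized models.
X_train, X_test, y_train, y_test = train_test_split(
    labeled[features], labeled["heart_rate"], test_size=0.2, random_state=0
)
rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(X_train, y_train)
print("random forest MAE:", mean_absolute_error(y_test, rf.predict(X_test)))

# Baseline: each reading predicted as the moving average of the previous five
# readings, ignoring smartphone features entirely.
baseline = labeled["heart_rate"].shift(1).rolling(window=5, min_periods=1).mean()
mask = baseline.notna()
print("moving-average MAE:",
      mean_absolute_error(labeled.loc[mask, "heart_rate"], baseline[mask]))
```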
Simplifying machine learning (ML) application development, including distributed computation, programming interfaces, resource management, and model selection, has attracted intense interest recently. These research efforts have significantly improved the efficiency and the degree of automation of developing ML models. In this paper, we take a first step in an orthogonal direction towards automated quality management for human-in-the-loop ML application development. We build ease.ml/meter, a system that can automatically detect and measure the degree of overfitting during the whole lifecycle of ML application development. ease.ml/meter returns overfitting signals with strong probabilistic guarantees, based on which developers can take appropriate actions. In particular, ease.ml/meter provides principled guidelines for simple yet nontrivial questions regarding desired validation and test data sizes, which are among the most common questions raised by developers. The fact that ML application development is typically a continuous procedure further worsens the situation: the validation and test data sets can lose their statistical power quickly due to multiple accesses, especially in the presence of adaptive analysis. ease.ml/meter addresses these challenges by leveraging a collection of novel techniques and optimizations, resulting in practically tractable data sizes without compromising the probabilistic guarantees. We present the design and implementation details of ease.ml/meter, as well as detailed theoretical analysis and empirical evaluation of its effectiveness.
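The sample-size question this abstract raises has a classical baseline answer worth stating: for a bounded metric such as 0/1 accuracy, a Hoeffding bound gives the test set size needed for a fixed-precision estimate. The sketch below shows only that baseline calculation; ease.ml/meter's actual techniques, especially under repeated adaptive access to the test set, are more involved.

```python
# Baseline Hoeffding calculation (not ease.ml/meter's method): for 0/1 loss,
# |empirical - true accuracy| <= epsilon holds with probability >= 1 - delta
# once n >= ln(2/delta) / (2 * epsilon**2).
import math

def test_set_size(epsilon: float, delta: float) -> int:
    """Smallest test set size for a +/-epsilon accuracy estimate at
    confidence 1 - delta, by Hoeffding's inequality."""
    return math.ceil(math.log(2 / delta) / (2 * epsilon ** 2))

print(test_set_size(0.01, 0.05))  # 18445 examples for +/-1% at 95% confidence
```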
We propose a simulation framework for generating realistic instance-dependent noisy labels via a pseudo-labeling paradigm. We show that the distribution of the synthetic noisy labels generated with our framework is closer to that of human labels than independent and class-conditional random flipping. Equipped with controllable label noise, we study the negative impact of noisy labels across a few realistic settings to understand when label noise is more problematic. We also benchmark several existing algorithms for learning with noisy labels and compare their behavior on our synthetic datasets and on datasets with independent random label noise. Additionally, with the availability of annotator information from our simulation framework, we propose a new technique, Label Quality Model (LQM), that leverages annotator features to predict and correct noisy labels. We show that by adding LQM as a label correction step before applying existing noisy label techniques, we can further improve model performance.
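A minimal sketch of the pseudo-labeling idea, as this abstract describes it: a weak "annotator" model is trained on a small clean subset, and noisy labels are sampled from its predicted class distribution, so mistakes depend on the instance rather than on a fixed flip rate. The dataset, model choice, and split here are assumptions; the paper's simulator and Label Quality Model are more elaborate.

```python
# Sketch of instance-dependent label noise via pseudo-labeling: sample each
# noisy label from a weak annotator model's predicted class distribution.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_classes=3, n_informative=6,
                           random_state=0)
X_anno, X_rest, y_anno, _ = train_test_split(X, y, train_size=0.1,
                                             random_state=0)

# Weak "annotator" trained on a small clean subset.
annotator = LogisticRegression(max_iter=1000).fit(X_anno, y_anno)

# Sample one noisy label per instance from the annotator's predicted class
# distribution; harder instances therefore receive noisier labels.
rng = np.random.default_rng(0)
probs = annotator.predict_proba(X_rest)
noisy = np.array([rng.choice(probs.shape[1], p=p / p.sum()) for p in probs])
```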
