
THE CAVES Project - Collaborative Analysis Versioning Environment System; THE CODESH Project - Collaborative Development Shell

Published by: Dimitri Bourilkov
Publication date: 2004
Research language: English
Author: Dimitri Bourilkov





A key feature of collaboration in science and software development is keeping a log of what is being done and how - for private use and reuse, and for sharing selected parts with collaborators, who today are most often distributed geographically on an ever larger scale. It is even better if this log is automatic, created on the fly while a scientist or software developer works in the habitual way, without the need for extra effort. The CAVES and CODESH projects address this problem in a novel way, building on the concepts of virtual state and virtual transition to provide an automatic persistent logbook for sessions of data analysis or software development in a collaborating group. A repository of sessions can be configured dynamically to record and make available the knowledge accumulated in the course of a scientific or software endeavor. Access can be controlled to define logbooks of private sessions and of sessions shared within or between collaborating groups.
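As an illustration of the idea only, here is a minimal Python sketch of such an automatic logbook: every executed command is recorded as a "virtual transition", the ordered list of transitions defines the "virtual state" of the session, and a tagged snapshot is stored in a shared repository so a collaborator can replay it. All names (SessionLog, replay, the on-disk layout) are hypothetical and are not the CAVES/CODESH implementation.

import hashlib
import json
import os
import subprocess
import time

class SessionLog:
    """Hypothetical automatic logbook: each executed command is one
    'virtual transition'; the ordered list of transitions defines the
    'virtual state' reached at the end of the session."""

    def __init__(self, repo_dir, user):
        self.repo_dir = repo_dir
        self.user = user
        self.transitions = []
        os.makedirs(repo_dir, exist_ok=True)

    def run(self, command):
        # Execute the command as the user normally would, but log it on the fly.
        result = subprocess.run(command, shell=True, capture_output=True, text=True)
        self.transitions.append({
            "time": time.time(),
            "command": command,
            "returncode": result.returncode,
        })
        return result.stdout

    def commit(self, comment=""):
        # Tag the session (the virtual state) and store it in the repository.
        payload = json.dumps(self.transitions, sort_keys=True).encode()
        tag = hashlib.sha1(payload).hexdigest()[:12]
        record = {"user": self.user, "comment": comment, "log": self.transitions}
        with open(os.path.join(self.repo_dir, tag + ".json"), "w") as f:
            json.dump(record, f, indent=2)
        return tag

def replay(repo_dir, tag):
    # Reproduce a colleague's session by re-running its logged transitions.
    with open(os.path.join(repo_dir, tag + ".json")) as f:
        record = json.load(f)
    for step in record["log"]:
        subprocess.run(step["command"], shell=True)

# Example use: work as usual, then publish the tag to the group.
log = SessionLog("shared_repo", user="alice")
log.run("python fit_histogram.py")
tag = log.commit("first pass of the fit")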


Read also

109 - Dimitri Bourilkov 2004
The Collaborative Analysis Versioning Environment System (CAVES) project concentrates on the interactions between users performing data- and/or computing-intensive analyses on large data sets, as encountered in many contemporary scientific disciplines. In modern science, increasingly larger groups of researchers collaborate on a given topic over extended periods of time. The logging and sharing of knowledge about how analyses are performed or how results are obtained is important throughout the lifetime of a project. This is where virtual data concepts play a major role. The ability to seamlessly log, exchange and reproduce results, and the methods, algorithms and computer programs used in obtaining them, enhances in a qualitative way the level of collaboration in a group or between groups in larger organizations. The CAVES project takes a pragmatic approach in assessing the needs of a community of scientists by building a series of prototypes of increasing sophistication. By extending the functionality of existing data analysis packages with virtual data capabilities, these prototypes provide an easy and habitual entry point for researchers to explore virtual data concepts in real-life applications and to provide valuable feedback for refining the system design. The architecture is modular, based on Web, Grid and other services which can be plugged in as desired. As a proof of principle we build a first system by extending the very popular data analysis framework ROOT, widely used in high energy physics and other fields, making it virtual data enabled.
RooStats is a project to create advanced statistical tools required for the analysis of LHC data, with emphasis on discoveries, confidence intervals, and combined measurements. The idea is to provide the major statistical techniques as a set of C++ classes with coherent interfaces, so that they can be used on arbitrary models and datasets in a common way. The classes are built on top of the RooFit package, which provides functionality for easily creating probability models, for analysis combinations and for digital publication of the results. We will present in detail the design and the implementation of the different statistical methods of RooStats. We will describe the various classes for interval estimation and for hypothesis testing based on different statistical techniques, such as those based on the likelihood function, or on frequentist or Bayesian statistics. These methods can be applied to complex problems, including cases with multiple parameters of interest and various nuisance parameters.
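To make the kind of calculation concrete, here is a minimal, hand-rolled sketch of a profile-likelihood confidence interval for a Poisson counting experiment with a Gaussian-constrained background nuisance parameter. It illustrates one technique of the sort RooStats packages; it does not use the RooStats API, and the observed count and uncertainties are invented for the example.

import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm, poisson, chi2

# Observed count, background estimate b0 with Gaussian uncertainty sigma_b (toy numbers).
n_obs, b0, sigma_b = 12, 4.0, 1.0

def nll(s, b):
    # Negative log-likelihood: Poisson(n | s + b) times a Gaussian constraint on b.
    return -poisson.logpmf(n_obs, s + b) - norm.logpdf(b, b0, sigma_b)

def profiled_nll(s):
    # Profile out the nuisance parameter b at fixed signal strength s.
    res = minimize_scalar(lambda b: nll(s, b), bounds=(1e-6, 50), method="bounded")
    return res.fun

# Scan s, find the global minimum, and keep the region with delta NLL below the 68% threshold.
s_grid = np.linspace(0.0, 30.0, 601)
nll_grid = np.array([profiled_nll(s) for s in s_grid])
nll_min = nll_grid.min()
inside = s_grid[nll_grid - nll_min <= chi2.ppf(0.68, df=1) / 2.0]
print(f"68% CL interval for s: [{inside.min():.2f}, {inside.max():.2f}]")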
In this paper, we propose a novel method to compute the similarity between congeneric nodes in bipartite networks. Differently from the standard Pearson correlation, we take into account the influence of node degree. Substituting this new definition of similarity for the standard Pearson correlation, we propose a modified collaborative filtering (MCF). Based on a benchmark database, we demonstrate a great improvement of algorithmic accuracy for both user-based MCF and object-based MCF.
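A minimal sketch of how a degree-aware similarity could be plugged into user-based collaborative filtering on a bipartite user-item network follows. The specific weighting used here (a common item contributes less when it is very popular, normalized by the two users' degrees) is an assumption made for illustration; it is not the paper's exact modification of the Pearson correlation.

import numpy as np

# Toy binary user-item matrix: rows are users, columns are items (1 = selected).
R = np.array([
    [1, 1, 0, 1, 0],
    [1, 0, 1, 1, 0],
    [0, 1, 1, 0, 1],
    [1, 1, 1, 0, 0],
], dtype=float)

item_degree = R.sum(axis=0)   # popularity of each item
user_degree = R.sum(axis=1)   # activity of each user

def degree_weighted_similarity(u, v):
    # Assumed form: popular common items count less; normalize by user degrees.
    common = R[u] * R[v]
    weighted = (common / np.where(item_degree > 0, item_degree, 1)).sum()
    return weighted / np.sqrt(user_degree[u] * user_degree[v])

def predict_scores(u):
    # User-based collaborative filtering: weight other users' selections
    # by their degree-adjusted similarity to user u.
    sims = np.array([degree_weighted_similarity(u, v) if v != u else 0.0
                     for v in range(R.shape[0])])
    scores = sims @ R
    scores[R[u] > 0] = -np.inf    # do not re-recommend items already selected
    return scores

print("recommendation scores for user 0:", predict_scores(0))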
142 - Ian Fisk 2010
This presentation describes the experiences of the LHC experiments with grid computing, focusing on distributed analysis. After many years of development, preparation, exercises, and validation, the LHC (Large Hadron Collider) experiments are in operation. The computing infrastructure has been heavily utilized in the first 6 months of data collection. The general experience of exploiting the grid infrastructure for organized processing and preparation is described, as well as the successes in employing the infrastructure for distributed analysis. At the end, the expected evolution and future plans are outlined.
This paper describes the solution method taken by the LeBuSiShu team for track 1 of the ACM KDD Cup 2011 contest (resulting in 5th place). We identified two main challenges: the unique item taxonomy characteristics and the large data set size. To handle the item taxonomy, we present a novel method called Matrix Factorization Item Taxonomy Regularization (MFITR). MFITR obtained the second-best prediction result out of more than ten implemented algorithms. For rapidly computing multiple solutions of various algorithms, we implemented an open-source parallel collaborative filtering library on top of the GraphLab machine learning framework. We report some preliminary performance results obtained using the BlackLight supercomputer.
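For orientation only, here is a minimal sketch of matrix factorization in which each item's latent vector is additionally regularized toward the latent vector of its taxonomy parent (for example, a track toward its album). It is a generic illustration of the taxonomy-regularization idea under assumed loss terms and learning rates; it is not the MFITR algorithm or the GraphLab implementation from the paper.

import numpy as np

rng = np.random.default_rng(0)

# Toy ratings: (user, item, rating). Items 0-3 belong to taxonomy parents 4-5.
ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 1, 4.0), (1, 2, 2.0), (2, 3, 4.0)]
parent = {0: 4, 1: 4, 2: 5, 3: 5}            # item -> taxonomy parent node
n_users, n_items, k = 3, 6, 8                # parents get latent vectors too

P = 0.1 * rng.standard_normal((n_users, k))  # user factors
Q = 0.1 * rng.standard_normal((n_items, k))  # item + parent factors

lr, lam, tax = 0.05, 0.02, 0.1               # learning rate, L2 weight, taxonomy weight

for epoch in range(200):
    for u, i, r in ratings:
        err = r - P[u] @ Q[i]
        # Standard MF gradients with L2 regularization ...
        P[u] += lr * (err * Q[i] - lam * P[u])
        grad_q = err * P[u] - lam * Q[i]
        # ... plus a pull of the item vector toward its taxonomy parent.
        grad_q -= tax * (Q[i] - Q[parent[i]])
        Q[i] += lr * grad_q
        # The parent vector is pulled toward its children in turn.
        Q[parent[i]] += lr * tax * (Q[i] - Q[parent[i]])

print("predicted rating for user 0, item 2:", P[0] @ Q[2])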