
Task-Oriented API Usage Examples Prompting Powered By Programming Task Knowledge Graph

Added by Jiamou Sun
Publication date: 2020
Research language: English





Programming tutorials are often created to demonstrate programming tasks with code examples. However, our study of Stack Overflow questions reveals low utilization of high-quality programming tutorials, which is caused by task description mismatch and code information overload. Document search can find relevant tutorial documents, but it often cannot find the specific programming actions and code solutions relevant to a developer's task needs. The recently proposed activity-centric search over a knowledge graph supports direct search of programming actions, but it has limitations in action coverage, natural language based task search, and coarse-grained code example recommendation. In this work, we enhance action coverage in the knowledge graph with actions extracted from comments in code examples and from more forms of activity sentences. To overcome the task description mismatch problem, we develop a code matching based task search method that finds programming actions and code examples relevant to the code under development. We integrate our knowledge graph and task search method in the IDE and develop an observe-push based tool that prompts developers with task-oriented API usage examples. To alleviate the code information overload problem, our tool highlights programming action and API information in the prompted tutorial task excerpts and code examples based on the underlying knowledge graph. Our evaluation confirms the high quality of the constructed knowledge graph and shows that our code matching based task search can recommend effective code solutions to programming issues asked on Stack Overflow. A small-scale user study demonstrates that our tool is useful for assisting developers in finding and using relevant programming tutorials in their programming tasks.
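To make the idea concrete, here is a minimal, illustrative sketch (not the authors' implementation) of code matching based task search: programming actions in a toy knowledge graph are indexed by the APIs their code examples use, and the action whose APIs best overlap those in the code under development is recommended. The class names, the API-extraction heuristic, and the Jaccard scoring are all assumptions made for illustration.

import re
from dataclasses import dataclass, field

@dataclass
class ProgrammingAction:
    description: str                        # natural-language action, e.g. a tutorial step
    apis: set = field(default_factory=set)  # APIs used in its code example
    code_example: str = ""

def extract_api_calls(code: str) -> set:
    """Crude proxy for API extraction: the called name of every call expression."""
    return {m.split(".")[-1] for m in re.findall(r"[A-Za-z_][\w.]*(?=\()", code)}

def search_actions(code_under_development: str, knowledge_graph: list, top_k: int = 3):
    """Rank actions by Jaccard overlap between their APIs and the developer's code."""
    query_apis = extract_api_calls(code_under_development)
    scored = []
    for action in knowledge_graph:
        union = query_apis | action.apis
        score = len(query_apis & action.apis) / len(union) if union else 0.0
        scored.append((score, action))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [(score, action.description) for score, action in scored[:top_k] if score > 0]

if __name__ == "__main__":
    kg = [
        ProgrammingAction("read a text file line by line",
                          {"open", "readlines"}, "with open(p) as f: f.readlines()"),
        ProgrammingAction("parse JSON from a string",
                          {"loads"}, "data = json.loads(text)"),
    ]
    snippet = "f = open(path)\nlines = f.readlines()"
    print(search_actions(snippet, kg))  # best match: the file-reading action

In the real system the knowledge graph and the code-matching signal are far richer; the sketch only shows how API overlap can bridge a task description mismatch by matching on code rather than on words.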




Read More

Qi Shen, Shijun Wu, Yanzhen Zou (2021)
Nowadays, developers often reuse existing APIs to implement their programming tasks. Many API usage patterns have been mined to help developers learn API usage rules. However, there are still many missing variables to be synthesized when developers integrate the patterns into their programming context. To deal with this issue, we propose a comprehensive approach to integrating API usage patterns in this paper. We first perform an empirical study by analyzing how API usage patterns are integrated in real-world projects. We find that the expressions for variable synthesis are often non-trivial and can be divided into five syntax types. Based on this observation, we propose an approach to help developers interactively complete API usage patterns. Compared to existing code completion techniques, our approach can recommend infrequent expressions accompanied by their real-world usage examples according to the user intent. The evaluation shows that our approach helps users integrate APIs more efficiently and complete programming tasks faster than existing works.
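As a rough illustration of this idea (not the paper's tool), the sketch below completes an API usage pattern by synthesizing its missing variables from the enclosing context: each hole declares an expected type, and an in-scope variable of that type is proposed as the candidate expression. The pattern format, hole representation, and all names are assumptions.

from dataclasses import dataclass

@dataclass
class Hole:
    name: str
    expected_type: str

# An API usage pattern with typed holes, e.g. mined from client code.
PATTERN = "BufferedReader {reader} = new BufferedReader(new FileReader({path}));"
HOLES = [Hole("reader", "BufferedReader"), Hole("path", "String")]

def complete_pattern(pattern: str, holes, context_vars: dict) -> str:
    """Fill each hole with a context variable of the expected type, or keep a placeholder."""
    filled = pattern
    for hole in holes:
        candidates = [v for v, t in context_vars.items() if t == hole.expected_type]
        filled = filled.replace("{" + hole.name + "}",
                                candidates[0] if candidates else hole.name)
    return filled

if __name__ == "__main__":
    # Variables visible at the developer's cursor, with their static types.
    context = {"configPath": "String", "log": "Logger"}
    print(complete_pattern(PATTERN, HOLES, context))
    # -> BufferedReader reader = new BufferedReader(new FileReader(configPath));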
Due to the deprecation of APIs in the Android operating system, developers have to update usages of the APIs to ensure that their applications work for both the past and current versions of Android.
The dominant paradigm for semantic parsing in recent years is to formulate parsing as a sequence-to-sequence task, generating predictions with auto-regressive sequence decoders. In this work, we explore an alternative paradigm. We formulate semantic parsing as a dependency parsing task, applying graph-based decoding techniques developed for syntactic parsing. We compare various decoding techniques given the same pre-trained Transformer encoder on the TOP dataset, including settings where training data is limited or contains only partially-annotated examples. We find that our graph-based approach is competitive with sequence decoders on the standard setting, and offers significant improvements in data efficiency and settings where partially-annotated data is available.
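For intuition, the sketch below shows the graph-based decoding view in its simplest form: an encoder (omitted here) scores every candidate head-to-dependent arc, and decoding picks one head per token. Real parsers use MST or Eisner decoding rather than the greedy head selection shown here, and the scores below are random placeholders.

import numpy as np

def greedy_decode(arc_scores: np.ndarray):
    """arc_scores[h, d] scores head h for dependent d; index 0 is the ROOT node.
    Returns the predicted head index for each real token d >= 1."""
    n = arc_scores.shape[0]
    heads = []
    for d in range(1, n):
        scores = arc_scores[:, d].copy()
        scores[d] = -np.inf          # a token cannot head itself
        heads.append(int(np.argmax(scores)))
    return heads

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    scores = rng.normal(size=(5, 5))  # ROOT + 4 tokens, e.g. a TOP-style query
    print(greedy_decode(scores))      # head index predicted for each token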
Jun Lin, Han Yu, Zhiqi Shen (2014)
The Agile Software Development (ASD) methodology has become widely used in industry. Understanding the challenges facing software engineering students is important for designing effective training methods that equip students with the skills required to use ASD techniques effectively. Existing empirical research has mostly focused on eXtreme Programming (XP) based ASD methodologies; there is a lack of empirical studies of Scrum-based ASD programming, which has become the most popular agile methodology among industry practitioners. In this paper, we present empirical findings on task allocation decision-making, collaboration, and team morale in the Scrum ASD process, aspects which have not yet been well studied by existing research. We draw our findings from a 12-week-long coursework project in 2014 involving 125 undergraduate software engineering students from a renowned university working in 21 Scrum teams. Instead of traditional survey- or interview-based methods, which suffer from limitations in scale and level of detail, we obtain fine-grained data by logging students' activities in our online agile project management (APM) platform, HASE. During this study, the platform logged over 10,000 ASD activities. Deviating from existing preconceptions, our results suggest negative correlations between collaboration and team performance as well as team morale.
Knowledge distillation (KD) has recently emerged as an efficacious scheme for learning compact deep neural networks (DNNs). Despite the promising results achieved, the rationale that explains the behavior of KD has remained largely understudied. In this paper, we introduce a novel task-oriented attention model, termed KDExplainer, to shed light on the working mechanism underlying vanilla KD. At the heart of KDExplainer is a Hierarchical Mixture of Experts (HME), in which multi-class classification is reformulated as a multi-task binary one. By distilling knowledge from a free-form pre-trained DNN into KDExplainer, we observe that KD implicitly modulates the knowledge conflicts between different subtasks and in reality has much more to offer than label smoothing. Based on these findings, we further introduce a portable tool, dubbed the virtual attention module (VAM), that can be seamlessly integrated with various DNNs to enhance their performance under KD. Experimental results demonstrate that, with negligible additional cost, student models equipped with VAM consistently outperform their non-VAM counterparts across different benchmarks. Furthermore, when combined with other KD methods, VAM remains competent in promoting results, even though it is only motivated by vanilla KD. The code is available at https://github.com/zju-vipa/KDExplainer.
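As a toy illustration of the multi-task binary reformulation mentioned above (one independent "is it class c?" head per class), the sketch below predicts the most confident binary head. It is not the KDExplainer implementation, and the expert and gating structure of HME is omitted; plain NumPy is used for brevity.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def multi_task_binary_predict(features: np.ndarray, head_weights: np.ndarray) -> int:
    """Each column of head_weights is a binary 'is it class c?' head; pick the most confident."""
    per_class_prob = sigmoid(features @ head_weights)  # shape: (num_classes,)
    return int(np.argmax(per_class_prob))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    features = rng.normal(size=16)             # a pooled feature vector
    head_weights = rng.normal(size=(16, 10))   # 10 classes -> 10 binary heads
    print(multi_task_binary_predict(features, head_weights))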
