83 - David Lowry-Duda 2021
We study sign changes in the sequence $\{ A(n) : n = c^2 + d^2 \}$, where $A(n)$ are the coefficients of a holomorphic cuspidal Hecke eigenform. After proving a variant of an axiomatization for detecting and quantifying sign changes introduced by Meher and Murty, we show that there are at least $X^{\frac{1}{4} - \epsilon}$ sign changes in each interval $[X, 2X]$ for $X \gg 1$. This improves to $X^{\frac{1}{2} - \epsilon}$ many sign changes assuming the Generalized Lindelöf Hypothesis.
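As a schematic restatement (assuming the eigenform has the Fourier expansion $f(z) = \sum_{n \ge 1} A(n) e^{2\pi i n z}$ with real coefficients, which the abstract does not spell out), the result counts

$$ \#\{\,\text{sign changes of } A(n) \text{ for } n \in [X, 2X],\ n = c^2 + d^2 \,\} \;\gg_{\epsilon}\; X^{\frac{1}{4} - \epsilon}, $$

with the exponent improving to $\tfrac{1}{2} - \epsilon$ under the Generalized Lindelöf Hypothesis.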
148 - Zhipeng Gao , Xin Xia , David Lo 2021
TODO comments are widely used by software developers to describe their pending tasks during software development. However, after performing the task, developers sometimes neglect or simply forget to remove the TODO comment, resulting in obsolete TODO comments. These obsolete TODO comments can confuse development teams and may cause the introduction of bugs in the future, decreasing the software's quality and maintainability. In this work, we propose a novel model, named TDCleaner (TODO comment Cleaner), to identify obsolete TODO comments in software projects. TDCleaner can assist developers in just-in-time checking of TODO comment status and avoid leaving obsolete TODO comments. Our approach has two main stages: offline learning and online prediction. During offline learning, we first automatically establish <code_change, todo_comment, commit_msg> training samples and leverage three neural encoders to capture the semantic features of the TODO comment, code change, and commit message, respectively. TDCleaner then automatically learns the correlations and interactions between the different encoders to estimate the final status of the TODO comment. For online prediction, we check a TODO comment's status by leveraging the offline-trained model to judge the likelihood that the TODO comment is obsolete. We built our dataset by collecting TODO comments from the top-10,000 Python and Java GitHub repositories and evaluated TDCleaner on them. Extensive experimental results show the promising performance of our model over a set of benchmarks. We also performed an in-the-wild evaluation with real-world software projects: we reported 18 obsolete TODO comments identified by TDCleaner to GitHub developers, and 9 of them have already been confirmed and removed by the developers, demonstrating the practical usefulness of our approach.
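As an illustration of the three-encoder design described in this abstract, here is a minimal, hypothetical PyTorch sketch. The encoder type (GRU), hidden sizes, and fusion classifier are assumptions for illustration only; the paper's actual architecture and training pipeline are not specified in the abstract.

    import torch
    import torch.nn as nn

    class ThreeEncoderSketch(nn.Module):
        """Hypothetical sketch: one encoder each for code change, TODO comment, commit message."""
        def __init__(self, vocab_size=30000, emb_dim=128, hidden=128):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
            # One recurrent encoder per input source (assumed encoder type).
            self.enc_code = nn.GRU(emb_dim, hidden, batch_first=True)
            self.enc_todo = nn.GRU(emb_dim, hidden, batch_first=True)
            self.enc_msg = nn.GRU(emb_dim, hidden, batch_first=True)
            # Fusion layer models interactions between the three encodings and
            # predicts the probability that the TODO comment is obsolete.
            self.classifier = nn.Sequential(
                nn.Linear(3 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1))

        def forward(self, code_ids, todo_ids, msg_ids):
            _, h_code = self.enc_code(self.embed(code_ids))
            _, h_todo = self.enc_todo(self.embed(todo_ids))
            _, h_msg = self.enc_msg(self.embed(msg_ids))
            fused = torch.cat([h_code[-1], h_todo[-1], h_msg[-1]], dim=-1)
            return torch.sigmoid(self.classifier(fused)).squeeze(-1)

    # Online prediction on token-id inputs: a score near 1 flags the comment as likely obsolete.
    model = ThreeEncoderSketch()
    score = model(torch.randint(1, 30000, (1, 50)),   # code change tokens
                  torch.randint(1, 30000, (1, 20)),   # TODO comment tokens
                  torch.randint(1, 30000, (1, 15)))   # commit message tokens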
Research in anomaly detection lacks a unified definition of what constitutes an anomalous instance. Discrepancies in the very nature of an anomaly lead to multiple paradigms of algorithm design and experimentation. Predictive maintenance is a special case, where the anomaly represents a failure that must be prevented. Related time-series research, such as outlier and novelty detection or time-series classification, does not apply to the concept of an anomaly in this field, because anomalies are not single points that have never been seen before, and they may not be precisely annotated. Moreover, due to the lack of annotated anomalous data, many benchmarks are adapted from supervised scenarios. To address these issues, we generalise the concept of positive and negative instances to intervals, making it possible to evaluate unsupervised anomaly detection algorithms. We also preserve the imbalance scheme for evaluation by proposing the Preceding Window ROC, a generalisation of the calculation of ROC curves for time-series scenarios. In addition, we adapt the mechanism of an established time-series anomaly detection benchmark to the proposed generalisations in order to reward early detection. The proposal therefore represents a flexible evaluation framework for the different scenarios. To show the usefulness of this definition, we include a case study of Big Data algorithms on a real-world time-series problem provided by the company ArcelorMittal, and compare the proposal with an existing evaluation method.
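A minimal sketch of the preceding-window idea described above: timestamps in a window before each failure are treated as positives, and an ordinary ROC curve is then computed over anomaly scores. The window length, labelling rule, and function names are assumptions for illustration, not the paper's exact definition of the Preceding Window ROC.

    import numpy as np
    from sklearn.metrics import roc_curve, auc

    def preceding_window_labels(n_samples, failure_starts, window):
        """Label as positive the `window` timestamps preceding each failure start."""
        labels = np.zeros(n_samples, dtype=int)
        for start in failure_starts:
            labels[max(0, start - window):start] = 1
        return labels

    # Toy example: anomaly scores over 1000 timestamps, failures starting at t=400 and t=800.
    rng = np.random.default_rng(0)
    scores = rng.random(1000)
    scores[350:400] += 0.5   # the detector raises its score shortly before the first failure
    labels = preceding_window_labels(len(scores), failure_starts=[400, 800], window=50)
    fpr, tpr, _ = roc_curve(labels, scores)
    print("AUC over preceding windows:", auc(fpr, tpr))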
Modular forms are highly self-symmetric functions studied in number theory, with connections to several areas of mathematics. But they are rarely visualized. We discuss ongoing work to compute and visualize modular forms as 3D surfaces and to use these techniques to make videos flying around the peaks and canyons of these modular terrains. Our goal is to make beautiful visualizations exposing the symmetries of these functions.
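As a toy illustration of this kind of "modular terrain" (using the discriminant form Δ computed from a truncated q-expansion; the actual tooling and rendering pipeline of the ongoing work are not specified in the abstract):

    import numpy as np
    import matplotlib.pyplot as plt

    def delta(z, terms=60):
        """Ramanujan's Delta function via the truncated product q * prod_{n>=1} (1 - q^n)^24."""
        q = np.exp(2j * np.pi * z)
        prod = np.ones_like(q)
        for n in range(1, terms + 1):
            prod *= (1 - q**n) ** 24
        return q * prod

    # Sample the upper half-plane and plot log|Delta| as a 3D surface.
    x, y = np.meshgrid(np.linspace(-1.0, 1.0, 300), np.linspace(0.05, 1.2, 300))
    height = np.log(np.abs(delta(x + 1j * y)) + 1e-16)

    ax = plt.figure(figsize=(8, 5)).add_subplot(projection="3d")
    ax.plot_surface(x, y, height, cmap="viridis", linewidth=0)
    ax.set_xlabel("Re(z)"); ax.set_ylabel("Im(z)"); ax.set_zlabel("log |Delta(z)|")
    plt.show()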
192 - David London 2021
Some models of leptogenesis involve a nearly-degenerate pair of heavy Majorana neutrinos $N_{1,2}$ whose masses can be small, $O({\rm GeV})$. There can be heavy-light neutrino mixing parametrized by $|B_{\ell N}|^2 = 10^{-5}$, which leads to the rare lepton-number-violating decay $W^\pm \to \ell_1^\pm \ell_2^\pm (q\bar{q})^\mp$. With contributions to this decay from both $N_1$ and $N_2$, a CP-violating rate difference between the decay and its CP-conjugate can be generated. In this talk, I describe the prospects for measuring such a CP asymmetry $A_{\rm CP}$ at the LHC. I consider thre
92 - Tingting Bi , Xin Xia , David Lo 2021
Being able to access software in daily life is vital for everyone, and thus accessibility is a fundamental challenge for software development. However, given the number of accessibility issues reported by many users, e.g., in app reviews, it is not clear whether accessibility is widely integrated into current software projects and how software projects address accessibility issues. In this paper, we report a study of the critical challenges and benefits of incorporating accessibility into software development and design. We applied a mixed qualitative and quantitative approach, gathering data from 15 interviews and 365 survey respondents from 26 countries across five continents, to understand how practitioners perceive accessibility development and design in practice. We derived 44 statements, grouped into eight accessibility topics, from practitioners' viewpoints and different software development stages. Our statistical analysis reveals substantial gaps between groups, e.g., between practitioners with direct vs. indirect accessibility-related work experience, when they reviewed the summarized statements. These gaps might hinder the quality of accessibility development and design, and we use our findings to establish a set of guidelines to help practitioners become aware of accessibility challenges and benefit factors. We also propose remedies to resolve the gaps and highlight key future research directions.
Cellular networks have changed the world we are living in, and the fifth generation (5G) of radio technology is expected to further revolutionise our everyday lives by enabling a high degree of automation through its larger capacity, massive connectivity, and ultra-reliable low-latency communications. In addition, the third generation partnership project (3GPP) new radio (NR) specification also provides tools to significantly decrease the energy consumption and the greenhouse emissions of next-generation networks, thus contributing towards information and communication technology (ICT) sustainability targets. In this survey paper, we thoroughly review the state of the art of current energy efficiency research. We first categorise and carefully analyse the different power consumption models and energy efficiency metrics, which have helped to make progress on the understanding of green networks. Then, as a main contribution, we survey in detail -- from both a theoretical and a practical viewpoint -- the main energy efficiency enabling features that 3GPP NR provides, together with their main benefits and challenges. Special attention is paid to four key technology features, i.e., massive multiple-input multiple-output (MIMO), lean carrier design, and advanced idle modes, together with the role of artificial intelligence capabilities. We dive into their implementation and operational details, and thoroughly discuss their optimal operation points and theoretical trade-offs from an energy consumption perspective. This will help the reader grasp the fundamentals of -- and the status of -- green networking. Finally, the areas of research where more effort is needed to make future networks greener are also discussed.
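For context, a commonly used definition of network energy efficiency, together with a simple linear base-station power consumption model of the kind such surveys analyse (the notation here is illustrative and not taken from the paper):

$$ \mathrm{EE} = \frac{\text{delivered bits}}{\text{consumed energy}} \;\; [\text{bit/J}], \qquad P_{\rm BS} = P_0 + \Delta_p \, P_{\rm out}, \quad 0 \le P_{\rm out} \le P_{\max}, $$

where $P_0$ is the static power drawn by an active but unloaded base station, $\Delta_p$ is the slope of the load-dependent term, and $P_{\rm out}$ is the radiated power. Features such as lean carrier design and advanced idle modes mainly target the static term $P_0$, while massive MIMO trades a larger $P_0$ for higher delivered throughput.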
120 - Yanming Yang , Xin Xia , David Lo 2020
In 2006, Geoffrey Hinton proposed the concept of training Deep Neural Networks (DNNs) and an improved model training method to break the bottleneck of neural network development. More recently, the introduction of AlphaGo in 2016 demonstrated the powerful learning ability of deep learning and its enormous potential. Deep learning has been increasingly used to develop state-of-the-art software engineering (SE) research tools due to its ability to boost performance for various SE tasks. There are many factors, e.g., deep learning model selection, internal structure differences, and model optimization techniques, that may have an impact on the performance of DNNs applied in SE. Few works to date focus on summarizing, classifying, and analyzing the application of deep learning techniques in SE. To fill this gap, we performed a survey to analyse the relevant studies published since 2006. We first provide an example to illustrate how deep learning techniques are used in SE. We then summarize and classify the different deep learning techniques used in SE, analyze the key optimization technologies used in these deep learning models, and finally describe a range of key research topics using DNNs in SE. Based on our findings, we present a set of current challenges remaining to be investigated and outline a proposed research roadmap highlighting key opportunities for future work.
Some models of leptogenesis involve a quasi-degenerate pair of heavy neutrinos $N_{1,2}$ whose masses can be small, $O({\rm GeV})$. Such neutrinos can contribute to the rare lepton-number-violating (LNV) decay $W^\pm \to \ell_1^\pm \ell_2^\pm (q\bar{q})^\mp$. If both $N_1$ and $N_2$ contribute, there can be a CP-violating rate difference between the LNV decay of a $W^-$ and its CP-conjugate decay. In this paper, we examine the prospects for measuring such a CP asymmetry $A_{\rm CP}$ at the LHC. We assume a value for the heavy-light neutrino mixing parameter $|B_{\ell N}|^2 = 10^{-5}$, which is allowed by the present experimental constraints, and consider $5~{\rm GeV} \le M_N \le 80~{\rm GeV}$. We consider thr
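The CP asymmetry $A_{\rm CP}$ referred to in the two leptogenesis abstracts above can be taken, in a standard convention, as the normalized rate difference between the LNV decay and its CP conjugate (an illustrative definition; the paper's exact convention may differ):

$$ A_{\rm CP} = \frac{\Gamma\!\left(W^- \to \ell_1^- \ell_2^- (q\bar{q})^+\right) - \Gamma\!\left(W^+ \to \ell_1^+ \ell_2^+ (q\bar{q})^-\right)}{\Gamma\!\left(W^- \to \ell_1^- \ell_2^- (q\bar{q})^+\right) + \Gamma\!\left(W^+ \to \ell_1^+ \ell_2^+ (q\bar{q})^-\right)}. $$

A nonzero $A_{\rm CP}$ requires both $N_1$ and $N_2$ contributions, since the interference of amplitudes with different weak and CP-violating phases generates the rate difference.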
236 - Chao Liu , Xin Xia , David Lo 2020
Code search is a core software engineering task. Effective code search tools can help developers substantially improve their software development efficiency and effectiveness. In recent years, many code search studies have leveraged different techniques, such as deep learning and information retrieval approaches, to retrieve expected code from a large-scale codebase. However, there is a lack of a comprehensive comparative summary of existing code search approaches. To understand the research trends in existing code search studies, we systematically reviewed 81 relevant studies. We investigated the publication trends of code search studies, analyzed key components, such as the codebase, query, and modeling technique used to build code search tools, and classified existing tools according to the seven different search tasks they support. Based on our findings, we identified a set of outstanding challenges in existing studies and outlined a research roadmap for future code search research.