
Literature review on vulnerability detection using NLP technology

Posted by Jiajie Wu
Publication date: 2021
Research field: Informatics Engineering
Research language: English
Author: Jiajie Wu





Vulnerability detection has always been one of the most important tasks in software security. As technology has developed and code bases have grown massive, automated analysis and detection of vulnerabilities has become a current research hotspot. Because source code is a special kind of text, building models with some of the most popular NLP technologies to automatically analyze source code and detect vulnerabilities has become one of the most anticipated lines of study in the field. This article briefly surveys some recent papers and technologies, such as CodeBERT, and summarizes earlier techniques.
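To make the "source code as text" idea concrete, the sketch below trains a toy bag-of-tokens Naive Bayes classifier on a handful of labeled C snippets. The snippets, labels, and tokenizer are illustrative assumptions, not drawn from any surveyed system; a real pipeline would fine-tune a pretrained code model such as CodeBERT instead.

```python
import math
import re
from collections import Counter

def tokenize(code):
    """Treat source code as plain text: identifiers and single symbols."""
    return re.findall(r"[A-Za-z_]\w*|\S", code)

class TokenNaiveBayes:
    """Tiny add-one-smoothed Naive Bayes over code tokens."""

    def fit(self, samples):
        self.counts = {0: Counter(), 1: Counter()}
        self.docs = Counter()
        for code, label in samples:
            self.counts[label].update(tokenize(code))
            self.docs[label] += 1
        self.vocab = len(set(self.counts[0]) | set(self.counts[1]))
        return self

    def predict(self, code):
        def score(label):
            total = sum(self.counts[label].values())
            s = math.log(self.docs[label] / sum(self.docs.values()))
            for tok in tokenize(code):
                s += math.log((self.counts[label][tok] + 1) / (total + self.vocab))
            return s
        return max((0, 1), key=score)

# Toy training set: 1 = unbounded (vulnerable) pattern, 0 = bounded counterpart.
SAMPLES = [
    ("strcpy(dst, src);", 1),
    ("gets(buf);", 1),
    ("sprintf(out, fmt, a);", 1),
    ("strncpy(dst, src, n);", 0),
    ("fgets(buf, n, stdin);", 0),
    ("snprintf(out, n, fmt, a);", 0),
]
```

On this toy data, `TokenNaiveBayes().fit(SAMPLES).predict("gets(line);")` flags the unbounded call; the point is only that generic text-classification machinery applies to code with no code-specific features at all.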




Read also

Cloud computing has become a powerful and indispensable technology for complex, high performance and scalable computation. The exponential expansion in the deployment of cloud technology has produced a massive amount of data from a variety of applications, resources and platforms. In turn, the rapid rate and volume of data creation has begun to pose significant challenges for data management and security. The design and deployment of intrusion detection systems (IDS) in the big data setting has, therefore, become a topic of importance. In this paper, we conduct a systematic literature review (SLR) of data mining techniques (DMT) used in IDS-based solutions through the period 2013-2018. We employed criterion-based, purposive sampling identifying 32 articles, which constitute the primary source of the present survey. After a careful investigation of these articles, we identified 17 separate DMTs deployed in an IDS context. This paper also presents the merits and disadvantages of the various works of current research that implemented DMTs and distributed streaming frameworks (DSF) to detect and/or prevent malicious attacks in a big data environment.
In this work we propose Dynamit, a monitoring framework to detect reentrancy vulnerabilities in Ethereum smart contracts. The novelty of our framework is that it relies only on transaction metadata and balance data from the blockchain system; our approach requires no domain knowledge, code instrumentation, or special execution environment. Dynamit extracts features from transaction data and uses a machine learning model to classify transactions as benign or harmful. Therefore, not only can we find the contracts that are vulnerable to reentrancy attacks, but we also get an execution trace that reproduces the attack.
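A minimal sketch of the metadata-only idea, under stated assumptions: the transaction field names below are hypothetical (not Dynamit's actual schema), and a hand-written threshold rule stands in for the learned classifier the paper describes.

```python
from collections import Counter

def extract_features(tx):
    """Features from transaction metadata and balances only, no contract code.
    The field names here are illustrative, not Dynamit's actual schema."""
    return {
        "balance_delta": tx["balance_after"] - tx["balance_before"],
        "repeated_callee": max(Counter(tx["internal_calls"]).values(), default=0),
    }

def classify(tx):
    """Stand-in for the learned model: many repeated internal calls into the
    same contract plus a balance drain is the classic reentrancy signature."""
    f = extract_features(tx)
    if f["repeated_callee"] >= 3 and f["balance_delta"] < 0:
        return "harmful"
    return "benign"
```

In practice the features would feed a trained model rather than a fixed rule, but the sketch shows why balances and call traces alone can carry the signal.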
Wearable devices generate different types of physiological data about individuals. These data can provide valuable insights for medical researchers and clinicians that cannot be obtained through traditional measures; researchers have historically relied on survey responses or observed behavior. Interestingly, physiological data can reveal more about user cognition than any other source, including the user himself. Inexpensive consumer-grade wearable devices have therefore become a point of interest for health researchers; they are also used in continuous remote health monitoring and sometimes by insurance companies. However, the biggest concern for such use cases is the privacy of the individuals. A few privacy mechanisms, such as abstraction and k-anonymity, are widely used in information systems. Recently, Differential Privacy (DP) has emerged as a proficient technique for publishing privacy-sensitive data, including data from wearable devices. In this paper, we have conducted a Systematic Literature Review (SLR) to identify, select, and critically appraise research on DP, and to understand the different techniques and existing uses of DP in wearable data publishing. Based on our study, we have identified the limitations of the proposed solutions and provided future directions.
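As a concrete illustration of DP for wearable data, the sketch below releases an epsilon-differentially-private mean of bounded sensor readings via the Laplace mechanism. The heart-rate bounds and epsilon values are illustrative assumptions, not taken from the surveyed work.

```python
import math
import random

def laplace_noise(scale, rng):
    """Sample Laplace(0, scale) as the difference of two exponentials."""
    e1 = -scale * math.log(1.0 - rng.random())
    e2 = -scale * math.log(1.0 - rng.random())
    return e1 - e2

def dp_mean(values, lower, upper, epsilon, rng=random):
    """Release an epsilon-DP mean of bounded readings (e.g. heart rates).
    Clipping bounds the sensitivity: changing one of n readings moves the
    mean by at most (upper - lower) / n, so Laplace noise with scale
    sensitivity / epsilon gives epsilon-DP."""
    n = len(values)
    clipped = [min(max(v, lower), upper) for v in values]
    sensitivity = (upper - lower) / n
    return sum(clipped) / n + laplace_noise(sensitivity / epsilon, rng)
```

Smaller epsilon means more noise and stronger privacy; the clipping step is what makes the sensitivity, and hence the noise scale, finite.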
Vulnerability detection is an important issue in software security. Although various data-driven vulnerability detection methods have been proposed, the task remains challenging: the diversity and complexity of real-world vulnerable code in syntax and semantics make it difficult to extract vulnerable features with regular deep learning models, especially when analyzing a large program. Moreover, real-world vulnerable code contains a lot of redundant information unrelated to vulnerabilities, which further aggravates the problem. To mitigate these challenges, we define a novel code representation named Slice Property Graph (SPG), and then propose VulSPG, a new vulnerability detection approach that uses an improved R-GCN model with a triple attention mechanism to identify potential vulnerabilities in the SPG. Our approach has at least two advantages over other methods. First, the proposed SPG can capture the rich semantics and explicit structural information that may be relevant to vulnerabilities, while eliminating as much irrelevant information as possible to reduce the complexity of the graph. Second, VulSPG incorporates a triple attention mechanism in R-GCNs to learn vulnerability patterns from the SPG more effectively. We have extensively evaluated VulSPG on two large-scale datasets with programs from SARD and real-world projects. Experimental results demonstrate the effectiveness and efficiency of VulSPG.
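The slicing idea behind SPG, keeping only statements that a vulnerability-relevant criterion depends on, can be sketched as backward reachability over dependence edges. The graph, edge kinds, and criterion below are hypothetical illustrations, not the paper's actual SPG construction, which combines richer program semantics.

```python
from collections import defaultdict, deque

def backward_slice(edges, criterion):
    """Keep only nodes the criterion transitively depends on.
    `edges` are (src, dst, kind) dependence edges; kinds are illustrative."""
    preds = defaultdict(list)
    for src, dst, _kind in edges:
        preds[dst].append(src)
    kept, work = {criterion}, deque([criterion])
    while work:
        node = work.popleft()
        for p in preds[node]:
            if p not in kept:
                kept.add(p)
                work.append(p)
    return kept

# Toy dependence graph over statement numbers: the memcpy at node 7 is the
# slicing criterion; nodes 4-5 are an unrelated logging chain.
EDGES = [
    (1, 3, "data"),     # buf = alloc(64)    feeds the memcpy destination
    (2, 7, "data"),     # len = read_input() feeds the memcpy size
    (3, 7, "data"),
    (4, 5, "data"),     # unrelated logging chain
    (2, 6, "control"),
    (6, 7, "control"),
]
```

Slicing on node 7 discards the logging chain entirely, which is the redundancy-elimination effect the abstract describes, before any graph neural network sees the code.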
As a new programming paradigm, deep learning has expanded its application to many real-world problems. At the same time, deep-learning-based software has been found to be vulnerable to adversarial attacks. Although various defense mechanisms have been proposed to improve the robustness of deep learning software, many of them are ineffective against adaptive attacks. In this work, we propose a novel characterization to distinguish adversarial examples from benign ones, based on the observation that adversarial examples are significantly less robust than benign ones. Since existing robustness measurements do not scale to large networks, we propose a novel defense framework, named attack as defense (A2D), that detects adversarial examples by efficiently evaluating an example's robustness. A2D uses the cost of attacking an input as its robustness measure and flags less robust examples as adversarial, since they are easier to attack. Extensive experimental results on MNIST, CIFAR10, and ImageNet show that A2D is more effective than recent promising approaches. We also evaluate our defense against potential adaptive attacks and show that A2D is effective against carefully designed adaptive attacks; for example, the attack success rate drops to 0% on CIFAR10.
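The attack-cost intuition can be sketched on a toy one-dimensional model: count how many fixed-size perturbation steps it takes to flip a prediction, and treat cheap-to-flip inputs as suspicious. The model, step size, and threshold below are illustrative assumptions; the actual A2D framework runs gradient-based attacks against deep networks.

```python
def attack_cost(model, x, step=0.01, max_steps=1000):
    """Smallest number of fixed-size perturbation steps that flips the
    model's prediction: a cheap stand-in for real adversarial attack cost."""
    original = model(x)
    for i in range(1, max_steps + 1):
        for sign in (1.0, -1.0):
            if model(x + sign * i * step) != original:
                return i
    return max_steps

def looks_adversarial(model, x, threshold):
    """A2D intuition: inputs that are cheap to attack are likely adversarial,
    because adversarial examples sit close to the decision boundary."""
    return attack_cost(model, x) < threshold
```

With a threshold model `lambda v: int(v > 0.0)`, an input deep inside a class (say 0.5) costs many steps to flip, while one hugging the boundary (say 0.01) flips in a single step, so the threshold test separates them.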
