
Survey on Incremental Approaches for Network Anomaly Detection

Added by Monowar H. Bhuyan
Publication date: 2012
Research language: English





As the communication industry has connected distant corners of the globe through advances in network technology, intruders and attackers have increased their attacks on networking infrastructure commensurately. System administrators can attempt to prevent such attacks using intrusion detection tools and systems. There are many commercially available signature-based Intrusion Detection Systems (IDSs), but most of them lack the capability to detect novel or previously unknown attacks. A special type of IDS, called an Anomaly Detection System, develops models based on normal system or network behavior, with the goal of detecting both known and unknown attacks. Anomaly detection systems face many problems, including a high false alarm rate, the need to work in online mode, and scalability. This paper presents a selective survey of incremental approaches for detecting anomalies in system or network traffic. Technological trends, open problems, and challenges in anomaly detection using incremental approaches are also discussed.
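To make the idea of an incremental approach concrete, the following is a minimal sketch, not taken from any of the surveyed methods, of an online detector that learns a profile of normal traffic one observation at a time. It maintains a running mean and variance of a single traffic feature with Welford's algorithm and flags observations that deviate strongly from the profile; the feature, the warm-up length, and the 3-sigma threshold are illustrative assumptions.

```python
# Minimal sketch of an incremental (online) anomaly detector.
# It maintains a running mean/variance of one traffic feature
# (e.g., packets per second) with Welford's algorithm and flags
# observations far from the learned normal profile.
# Feature choice and the 3-sigma threshold are illustrative assumptions.
import math


class IncrementalAnomalyDetector:
    def __init__(self, z_threshold: float = 3.0, warmup: int = 30):
        self.n = 0              # observations seen so far
        self.mean = 0.0         # running mean
        self.m2 = 0.0           # running sum of squared deviations
        self.z_threshold = z_threshold
        self.warmup = warmup    # observations to learn before flagging

    def update(self, x: float) -> bool:
        """Incorporate one observation; return True if it looks anomalous."""
        anomalous = False
        if self.n >= self.warmup:
            std = math.sqrt(self.m2 / (self.n - 1)) if self.n > 1 else 0.0
            if std > 0 and abs(x - self.mean) / std > self.z_threshold:
                anomalous = True
        # Welford's online update keeps memory and per-step cost constant,
        # which is the appeal of incremental approaches for streaming traffic.
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        return anomalous


# Example: feed a stream of per-second packet counts.
detector = IncrementalAnomalyDetector()
for rate in [100, 105, 98, 102, 101] * 10 + [950]:
    if detector.update(rate):
        print(f"anomalous rate: {rate}")
```

Because the update cost and memory are constant per observation, a detector of this shape can run in online mode on a traffic stream, which is the setting the surveyed incremental approaches target.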

Related research

The Internet plays a vital role in the modern world, and the possibilities and opportunities it offers are limitless. Despite all the hype, Internet services are liable to intrusion attacks that could tamper with the confidentiality and integrity of important information. An attack starts with gathering information about the target, and this information-gathering activity can be carried out as either a fast or a slow attack. The defensive measure a network administrator can take to overcome this liability is to introduce Intrusion Detection Systems (IDSs) into the network. An IDS has the capability to analyze network traffic and recognize incoming and ongoing intrusions. Unfortunately, combining both modules on real-time network traffic slows down the detection process. In a real-time network, early detection of a fast attack can prevent further attacks and reduce unauthorized access to the targeted machine. A suitable set of selected features and a correct threshold value give an IDS an extra advantage in detecting anomalies in the network. This paper therefore discusses a new technique for selecting a static threshold value from a minimal set of standard features for detecting fast attacks from the victim's perspective. To increase confidence in the threshold value, the result is verified using Statistical Process Control (SPC). The implementation of this approach shows that the selected threshold is suitable for identifying fast attacks in real time.
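As a rough illustration of how SPC can be used to derive and justify a static threshold, the sketch below computes a Shewhart-style upper control limit (mean plus three standard deviations) over a baseline of per-host connection rates and uses it to flag fast-attack-like bursts. The feature, the baseline values, and the 3-sigma limit are assumptions for illustration, not the authors' exact procedure.

```python
# Sketch: deriving a static threshold for fast-attack detection
# from baseline traffic using Statistical Process Control (SPC).
# The feature (connection attempts per second toward a victim host)
# and the 3-sigma control limit are illustrative assumptions.
import statistics

# Baseline observations collected during known-benign operation (hypothetical values).
baseline = [3, 5, 4, 6, 2, 5, 4, 3, 5, 4, 6, 5, 3, 4, 5]

mean = statistics.mean(baseline)
std = statistics.stdev(baseline)

# Shewhart-style upper control limit: observations above it are treated
# as out of control, i.e., candidate fast-attack activity.
upper_control_limit = mean + 3 * std
print(f"static threshold (UCL) = {upper_control_limit:.2f} connections/s")


def is_fast_attack(connections_per_second: float) -> bool:
    """Flag a host whose connection rate exceeds the SPC-derived threshold."""
    return connections_per_second > upper_control_limit


# Example: a scanning host opening many connections in one second.
print(is_fast_attack(42))   # True for this baseline
print(is_fast_attack(5))    # False
```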
While variable selection is essential to optimize learning complexity by prioritizing features, automating the selection process is preferable, since doing it manually requires laborious effort and intensive analysis. However, enabling this automation is not an easy task, for several reasons. First, selection techniques often need a condition to terminate the reduction process, for example a threshold or a target number of features, and finding an adequate stopping condition is highly challenging. Second, it is uncertain whether the reduced variable set will work well; our preliminary experimental results show that well-known selection techniques produce different sets of variables (even with the same termination condition), and it is hard to estimate which of them will work best in future testing. In this paper, we demonstrate the potential power of our approach to automating the selection process, which incorporates well-known selection methods to identify important variables. Our experimental results with two public network traffic datasets (UNSW-NB15 and IDS2017) show that the proposed method identifies a small number of core variables with which it is possible to approximate the performance obtained with the entire variable set.
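A minimal sketch of this kind of ensemble selection, under simplified assumptions: three common ranking criteria each nominate their top-k features, and variables nominated by a majority of the criteria form the core set. The specific methods, the value of k, and the voting rule are illustrative choices, not the paper's algorithm.

```python
# Sketch: combining several well-known selection techniques and keeping
# the variables they agree on. The choice of methods, the top-k cutoff,
# and the majority-vote rule are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import f_classif, mutual_info_classif

# Stand-in for a labeled flow dataset such as UNSW-NB15 or IDS2017.
X, y = make_classification(n_samples=2000, n_features=30, n_informative=6,
                           random_state=0)
k = 10  # how many top-ranked variables each method contributes


def top_k(scores: np.ndarray, k: int) -> set:
    """Indices of the k highest-scoring features."""
    return set(np.argsort(scores)[-k:])


# Rank features with three different criteria.
anova_scores, _ = f_classif(X, y)
mi_scores = mutual_info_classif(X, y, random_state=0)
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
rf_scores = rf.feature_importances_

rankings = [top_k(s, k) for s in (anova_scores, mi_scores, rf_scores)]

# Keep variables selected by at least two of the three methods, yielding a
# small "core" set without a hand-tuned stopping condition.
votes = {}
for selected in rankings:
    for idx in selected:
        votes[idx] = votes.get(idx, 0) + 1
core_variables = sorted(i for i, v in votes.items() if v >= 2)
print("core variables:", core_variables)
```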
This paper introduces GraphPrints, a novel graph-analytic approach for detecting anomalies in network flow data. Building on foundational network-mining techniques, our method represents each time slice of traffic as a graph and then counts graphlets, small induced subgraphs that describe local topology. By performing outlier detection on the sequence of graphlet counts, anomalous intervals of traffic are identified, and furthermore, individual IPs exhibiting abnormal behavior are singled out. Initial testing of GraphPrints is performed on real network data with an implanted anomaly. Evaluation shows false positive rates bounded by 2.84% at the time-interval level and 0.05% at the IP level, with 100% true positive rates at both.
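The sketch below captures the gist of such a pipeline under simplified assumptions: each time slice of flows becomes a graph, a few small local structures are counted per slice as a stand-in for full graphlet counting, and slices whose counts deviate from the median are flagged. The feature set and the median/MAD outlier rule are not the authors' implementation.

```python
# Sketch of a GraphPrints-style pipeline: per time slice, build a graph of
# who-talks-to-whom, count a few small local structures (a stand-in for full
# graphlet counting), and flag slices whose counts are outliers.
import networkx as nx
import numpy as np


def slice_features(edges) -> np.ndarray:
    """Per-slice counts: edges, triangles, and open wedges (2-paths)."""
    g = nx.Graph()
    g.add_edges_from(edges)
    triangles = sum(nx.triangles(g).values()) // 3
    wedges = sum(d * (d - 1) // 2 for _, d in g.degree()) - 3 * triangles
    return np.array([g.number_of_edges(), triangles, wedges], dtype=float)


def outlier_slices(feature_rows: np.ndarray, threshold: float = 3.5):
    """Flag slices whose features deviate strongly from the median (MAD rule)."""
    median = np.median(feature_rows, axis=0)
    mad = np.median(np.abs(feature_rows - median), axis=0) + 1e-9
    scores = np.max(np.abs(feature_rows - median) / mad, axis=1)
    return np.where(scores > threshold)[0]


# Toy traffic: mostly sparse slices, one slice with a dense scanning pattern.
slices = [[(f"10.0.0.{i}", f"10.0.1.{i}") for i in range(5)] for _ in range(9)]
slices.append([("attacker", f"10.0.1.{i}") for i in range(50)])  # fan-out burst

features = np.vstack([slice_features(s) for s in slices])
print("anomalous time slices:", outlier_slices(features))
```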
Most peers accessing services in a P2P network assume that those services are fully secure. Prevailing hard security mechanisms resolve security goals such as authentication, authorization, privacy, non-repudiation of services, and other hard security issues, but they fail to provide soft security. An exhaustive survey of existing trust and reputation models for service provisioning in P2P networks is presented, and the challenges are listed. Trust issues such as trust bootstrapping, trust evidence procurement, trust assessment, trust interaction outcome evaluation, and the trust-based classification of peer behaviour into trusted, inconsistent, untrusted, malicious, betraying, and redemptive are discussed.
Sixth-generation (6G) mobile networks will have to cope with diverse threats in a space-air-ground integrated network environment, novel technologies, and an explosion of accessible user information. For now, however, security and privacy issues for 6G remain largely conceptual. This survey provides a systematic overview of security and privacy issues based on prospective technologies for 6G in the physical, connection, and service layers, as well as lessons learned from the failures of existing security architectures and from state-of-the-art defenses. Two key lessons emerge. First, beyond inheriting vulnerabilities from previous generations, 6G faces new threat vectors from new radio technologies, such as the exposed location of radio stripes in ultra-massive MIMO systems at Terahertz bands and attacks against pervasive intelligence. Second, physical-layer protection, deep network slicing, quantum-safe communications, artificial intelligence (AI) security, platform-agnostic security, real-time adaptive security, and novel data protection mechanisms such as distributed ledgers and differential privacy are the most promising techniques for substantially mitigating attack magnitude and personal data breaches.