
Internet Anomaly Detection based on Complex Network Path

Published by: Jinfa Wang
Publication date: 2017
Research field: Informatics Engineering
Paper language: English





Detecting anomalous behaviors such as network failures or intentional attacks in the large-scale Internet is a vital but challenging task. While numerous techniques based on Internet traffic have been developed over the past years, anomaly detection for structured datasets using complex networks has only recently come into focus. In this paper, an anomaly detection method for large-scale Internet topology is proposed that considers the changes caused by network crashes. To quantify the dynamic changes of the Internet topology, the network path changes coefficient (NPCC) is put forward, which highlights the abnormal state of the Internet after it is attacked continuously. Furthermore, we propose a decision function, inspired by the Fibonacci sequence, to determine whether the Internet is abnormal: the current Internet state is abnormal if its NPCC lies beyond the normal domain structured by the previous k NPCCs of the Internet topology. Finally, the new Internet anomaly detection method was tested over the topology data of three Internet anomaly events. The results show that the detection accuracy for all events is over 97%, and the detection precision for the three events is 90.24%, 83.33% and 66.67%, respectively, when k = 36. According to the experimental values of the index F_1, we found that the larger k is, the better the detection performance, and that our method performs better on anomalies caused by network failure than on those caused by intentional attack. Compared with traditional anomaly detection, our approach may be simpler and more powerful for governments or organizations in terms of detecting large-scale abnormal events.
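The decision rule described above can be sketched in a few lines. The paper's exact Fibonacci-inspired construction of the normal domain is not reproduced here; this hypothetical sketch substitutes a mean ± width·std band over the previous k NPCC values, and the function and parameter names (`is_anomalous`, `width`) are illustrative assumptions, not the authors' implementation.

```python
def is_anomalous(npcc_history, current_npcc, k=36, width=2.0):
    """Flag the current NPCC as anomalous when it falls outside a
    normal domain built from the previous k NPCC values.

    Stand-in normal domain: mean +/- width * std over the last k
    values (the paper uses a Fibonacci-inspired construction)."""
    window = list(npcc_history)[-k:]
    if len(window) < k:
        return False  # not enough history to form a normal domain
    mean = sum(window) / k
    var = sum((x - mean) ** 2 for x in window) / k
    std = var ** 0.5
    # Abnormal when the current value leaves the normal domain
    return abs(current_npcc - mean) > width * std
```

In use, one NPCC value would be computed per topology snapshot and appended to the history, so the normal domain slides forward in time, matching the paper's use of the previous k NPCCs.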


Read also

The problem of detecting anomalies in time series from network measurements has been widely studied and is a topic of fundamental importance. Many anomaly detection methods are based on packet inspection collected at the network core routers, with consequent disadvantages in terms of computational cost and privacy. We propose an alternative method in which packet header inspection is not needed. The method is based on the extraction of a normal subspace obtained by the tensor decomposition technique, considering the correlation between different metrics. We propose a new approach for online tensor decomposition in which changes in the normal subspace can be tracked efficiently. Another advantage of our proposal is the interpretability of the obtained models. The flexibility of the method is illustrated by applying it to two distinct examples, both using actual data collected on residential routers.
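The normal-subspace idea above can be illustrated with a minimal sketch. A plain SVD on a matrix of historical metric vectors stands in for the paper's online tensor decomposition, and the `rank` parameter (the normal-subspace dimension) is an illustrative assumption: a measurement with large residual energy outside the learned subspace is a candidate anomaly.

```python
import numpy as np

def residual_energy(X_normal, x_new, rank=2):
    """Project a new measurement vector onto the normal subspace
    learned from historical traffic metrics and return the energy of
    the residual (large energy suggests an anomaly).

    A batch SVD stands in here for the paper's online tensor
    decomposition; `rank` is an illustrative choice."""
    mu = X_normal.mean(axis=0)                 # center on history
    _, _, Vt = np.linalg.svd(X_normal - mu, full_matrices=False)
    V = Vt[:rank].T                            # normal-subspace basis
    d = x_new - mu
    r = d - V @ (V.T @ d)                      # residual component
    return float(r @ r)
```

An online variant would update `V` incrementally as new normal measurements arrive instead of recomputing the SVD, which is the efficiency gain the abstract highlights.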
Jinfa Wang, Hai Zhao, Xiao Liu (2016)
From biosystems to complex systems, the study of life has always been an important area. Inspired by hyper-cycle theory on the evolution of non-life systems, we study the metabolism, self-replication and mutation behavior of the Internet based on node entities, connection relationships and functional subgraphs (motifs) of the network topology. First, a framework of complex network evolution is proposed to analyze the birth and death phenomena of the Internet topology from January 1998 to August 2013. We then find the Internet metabolism behavior at the level of nodes, motifs and the global topology: a newly born node is first simply added to the Internet and subsequently takes part in local reconstruction activities, while other nodes and motifs die. In the process of local reconstruction, although the Internet system replicates motifs repeatedly through adding or removing actions, the system characteristics and global structure are not destroyed. Statistics on the motif M3, a fully connected subgraph, show that the process of its metabolism is a fluctuation that causes mutation of the Internet. Furthermore, we find that mutation is an instinctive reaction of the Internet when it is influenced by its inside or outside environment, such as the Internet bubble, the rise of social networks and the finance crisis. The metabolism, self-replication and mutation behaviors of the Internet indicate its life characteristics as a complex artificial life. Our work may inspire the study of life-like phenomena in other complex systems from the angle of topology structure.
Internet routing can often be sub-optimal, with the chosen routes providing worse performance than other available policy-compliant routes. This stems from the lack of visibility into route performance at the network layer. While this is an old problem, we argue that recent advances in programmable hardware finally open up the possibility of performance-aware routing in a deployable, BGP-compatible manner. We introduce ROUTESCOUT, a hybrid hardware/software system supporting performance-based routing at ISP scale. In the data plane, ROUTESCOUT leverages P4-enabled hardware to monitor performance across policy-compliant route choices for each destination, at line rate and with a small memory footprint. ROUTESCOUT's control plane then asynchronously pulls aggregated performance metrics to synthesize a performance-aware forwarding policy. We show that ROUTESCOUT can monitor performance across most of an ISP's traffic using only 4 MB of memory. Further, its control plane can flexibly satisfy a variety of operator objectives, with sub-second operating times.
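The control-plane step described above, pulling aggregated metrics and synthesizing a forwarding policy, can be sketched as a per-destination selection over next hops. The cost function, metric names and data layout below are illustrative assumptions, not ROUTESCOUT's actual policy-synthesis logic.

```python
def pick_next_hops(perf, objective=lambda m: m["loss"] * 100 + m["rtt_ms"]):
    """Given aggregated per-(destination, next-hop) metrics pulled
    from the data plane, pick the lowest-cost policy-compliant next
    hop for each destination (illustrative cost function)."""
    best = {}  # destination -> (next_hop, cost)
    for (dst, nh), metrics in perf.items():
        cost = objective(metrics)
        if dst not in best or cost < best[dst][1]:
            best[dst] = (nh, cost)
    return {dst: nh for dst, (nh, _) in best.items()}
```

Because the metrics arrive asynchronously, such a function would be re-run periodically and its output pushed to the forwarding tables, consistent with the sub-second operating times the abstract reports.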
This paper proposes to develop a network phenotyping mechanism based on network resource usage analysis to identify abnormal network traffic. The network phenotyping may use different metrics in the cyber-physical system (CPS), including resource and network usage monitoring and physical state estimation. The set of devices collectively decides a holistic view of the entire system through advanced image processing and machine learning methods. In this paper, we choose the network traffic pattern as a study case to demonstrate the effectiveness of the proposed method, while the methodology may similarly apply to classification and anomaly detection based on other resource metrics. We apply image processing and machine learning to the network resource usage to extract and recognize communication patterns. The phenotype method is experimented on four real-world decentralized applications. With a proper length of sampled continuous network resource usage, the overall recognition accuracy is about 99%. Additionally, the recognition error is used to detect anomalous network traffic. We simulate anomalous network resource usage equal to 10%, 20% and 30% of the normal network resource usage. The experiment results show the proposed anomaly detection method is efficient in detecting each intensity of anomalous network resource usage.
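The last step above, turning recognition error into an anomaly decision, can be sketched independently of the classifier. The quantile-based threshold calibration below is an illustrative assumption; the paper does not specify this particular rule, and the function names are hypothetical.

```python
def calibrate_threshold(normal_errors, quantile=0.99):
    """Pick an error threshold from recognition errors measured on
    known-normal traffic windows (illustrative: empirical quantile)."""
    ordered = sorted(normal_errors)
    idx = min(len(ordered) - 1, int(quantile * len(ordered)))
    return ordered[idx]

def is_abnormal_traffic(error, threshold):
    """A window whose recognition error exceeds the calibrated
    threshold is flagged as anomalous network resource usage."""
    return error > threshold
```

Calibrating on normal windows only means the detector needs no labeled anomalies, which matches the abstract's setup of simulating anomalous usage on top of a model trained on normal patterns.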
Link dimensioning is used by ISPs to properly provision the capacity of their network links. Operators have to make provisions for sudden traffic bursts and network failures to assure uninterrupted operations. In practice, traffic averages are used to roughly estimate the required capacity. More accurate solutions often require traffic statistics easily obtained from packet captures, e.g. variance. Our investigations on real Internet traffic have emphasized that the traffic shows high variations at small aggregation times, which indicates that the traffic is self-similar and has heavy-tailed characteristics. Self-similarity and heavy-tailedness are of great importance for network capacity planning purposes. The traffic modeling process should consider all Internet traffic characteristics, so that the quality of service (QoS) of the network is not affected by any mismatch between the real traffic properties and the reference statistical model. This paper proposes a new class of traffic profiles that is better suited for metering bursty Internet traffic streams. We employ bandwidth provisioning to determine the lowest required bandwidth capacity level for a network link such that, for a given traffic load, a desired performance target is met. We validate our approach using packet captures from real IP-based networks. The proposed link dimensioning approach starts by measuring the statistical parameters of the available traces and then measures the degree of fluctuation in the traffic. This is followed by choosing a proper model to fit the traffic, such as the lognormal and generalized extreme value distributions. Finally, the optimal capacity for the link can be estimated by deploying the bandwidth provisioning approach. It is shown that the heavy-tailed distributions give more precise values for the link capacity than the Gaussian model.
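The fit-then-provision pipeline above can be illustrated for the lognormal case: fit the model to traffic samples, then pick the capacity that traffic exceeds only with a small target probability. Log-moment matching and the parameter name `epsilon` (the tolerated exceedance probability) are illustrative assumptions standing in for the paper's fitting and provisioning procedure.

```python
import math
from statistics import NormalDist

def provision_capacity(samples, epsilon=0.01):
    """Estimate a link capacity C such that, under a lognormal traffic
    model fitted to `samples`, traffic exceeds C only with probability
    epsilon (illustrative log-moment fit)."""
    logs = [math.log(x) for x in samples]
    mu = sum(logs) / len(logs)
    var = sum((v - mu) ** 2 for v in logs) / len(logs)
    z = NormalDist().inv_cdf(1.0 - epsilon)   # upper-tail quantile
    # Quantile of a lognormal: exp(mu + z * sigma)
    return math.exp(mu + z * math.sqrt(var))
```

Swapping in a generalized-extreme-value quantile in place of the lognormal one would follow the same pattern, which is why the choice of fitted distribution matters so much for the resulting capacity estimate.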