
Towards a CAN IDS based on a neural-network data field predictor

Published by: Krzysztof Pawelec
Publication date: 2018
Research field: Informatics Engineering
Paper language: English





Modern vehicles contain a few controller area networks (CANs), which allow scores of on-board electronic control units (ECUs) to communicate messages critical to vehicle functions and driver safety. CAN provides a lightweight and reliable broadcast protocol but is bereft of security features. As evidenced by many recent research works, CAN exploits are possible both remotely and with direct access, fueling a growing body of research on CAN intrusion detection systems (IDSs). A challenge for pioneering vehicle-agnostic IDSs is that passenger vehicles' CAN message encodings are proprietary, defined and held secret by original equipment manufacturers (OEMs). Targeting detection of next-generation attacks, in which messages are sent from the expected ECU at the expected time but with malicious content, researchers are now seeking to leverage CAN data models, which predict future CAN message contents and use prediction error to identify anomalous, hopefully malicious, CAN messages. Yet, current works model CAN signals post-translation, i.e., after applying OEM-donated or reverse-engineered translations from raw data. In this work, we present initial IDS results testing deep neural networks used to predict CAN data at the bit level, thereby providing IDS capabilities while avoiding reverse engineering of proprietary encodings. Our results suggest the method is promising for continuous signals in CAN data but struggles with discrete, e.g., binary, signals.
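As a rough illustration of the approach, the sketch below is a minimal, hypothetical setup (the paper's actual architecture, window size, features, and thresholds are not reproduced here): it predicts the next frame's 64 payload bits from a window of recent frames and uses the mean per-bit prediction error as an anomaly score.

```python
# Minimal sketch of a bit-level CAN payload predictor used for anomaly scoring.
# Hypothetical architecture and hyperparameters; the paper's exact model may differ.
import torch
import torch.nn as nn

PAYLOAD_BITS = 64   # a CAN 2.0 data field is at most 8 bytes = 64 bits
WINDOW = 16         # number of past frames (per arbitration ID) fed to the model

class BitPredictor(nn.Module):
    """Predicts the next frame's 64 payload bits from a window of past frames."""
    def __init__(self, hidden=128):
        super().__init__()
        self.rnn = nn.LSTM(PAYLOAD_BITS, hidden, batch_first=True)
        self.head = nn.Linear(hidden, PAYLOAD_BITS)

    def forward(self, x):              # x: (batch, WINDOW, 64) of 0/1 floats
        _, (h, _) = self.rnn(x)
        return self.head(h[-1])        # logits for each of the 64 next-frame bits

def anomaly_score(model, window, next_frame):
    """Mean per-bit prediction error; large values suggest anomalous content."""
    with torch.no_grad():
        probs = torch.sigmoid(model(window.unsqueeze(0)))[0]
    return (probs - next_frame).abs().mean().item()

# Toy usage with random bits standing in for decoded CAN payloads.
model = BitPredictor()
loss_fn = nn.BCEWithLogitsLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

frames = torch.randint(0, 2, (1000, PAYLOAD_BITS)).float()   # placeholder traffic
for step in range(100):
    i = torch.randint(0, len(frames) - WINDOW - 1, (1,)).item()
    window, target = frames[i:i + WINDOW], frames[i + WINDOW]
    opt.zero_grad()
    loss = loss_fn(model(window.unsqueeze(0))[0], target)
    loss.backward()
    opt.step()

score = anomaly_score(model, frames[:WINDOW], frames[WINDOW])
print(f"anomaly score: {score:.3f}")   # compare against a threshold fit on benign data
```

In practice the threshold would be calibrated on attack-free traffic for each arbitration ID, so that only frames whose bit-level prediction error is unusually large are flagged.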




Read also

The Controller Area Network (CAN) protocol is ubiquitous in modern vehicles, but the protocol lacks many important security properties, such as message authentication. To address these insecurities, a rapidly growing field of research has emerged that seeks to detect tampering, anomalies, or attacks on these networks; this field has developed a wide variety of novel approaches and algorithms to address these problems. One major impediment to the progression of this CAN anomaly detection and intrusion detection system (IDS) research area is the lack of high-fidelity datasets with realistic labeled attacks, without which it is difficult to evaluate, compare, and validate these proposed approaches. In this work we present the first comprehensive survey of publicly available CAN intrusion datasets. Based on a thorough analysis of the data and documentation, for each dataset we provide a detailed description and enumerate the drawbacks, benefits, and suggested use cases. Our analysis is aimed at guiding researchers in finding appropriate datasets for testing a CAN IDS. We present the Real ORNL Automotive Dynamometer (ROAD) CAN Intrusion Dataset, adding the first dataset with real, advanced attacks to the existing collection of open datasets.
J.B. Satinover, 2008
Using an artificial neural network (ANN), a fixed universe of approximately 1500 equities from the Value Line index are rank-ordered by their predicted price changes over the next quarter. Inputs to the network consist only of the ten prior quarterly percentage changes in price and in earnings for each equity (by quarter, not accumulated), converted to a relative rank scaled around zero. Thirty simulated portfolios are constructed respectively of the 10, 20,..., and 100 top-ranking equities (long portfolios), the 10, 20,..., 100 bottom-ranking equities (short portfolios), and their hedged sets (long-short portfolios). In a 29-quarter simulation from the end of the third quarter of 1994 through the fourth quarter of 2001 that duplicates real-world trading of the same method employed during 2002, all portfolios are held fixed for one quarter. Results are compared to the S&P 500, the Value Line universe itself, trading the universe of equities using the proprietary "Value Line Ranking System" (to which this method is in some ways similar), and to a Martingale method of ranking the same equities. The cumulative returns generated by the network predictor significantly exceed those generated by the S&P 500, the overall universe, the Martingale and Value Line prediction methods, and are not eroded by trading costs. The ANN shows significantly positive Jensen's alpha, i.e., anomalous risk-adjusted expected return. A time series of its global performance shows a clear antipersistence. However, its performance is significantly better than a simple one-step Martingale predictor, than the Value Line system itself, and than a simple buy-and-hold strategy, even when transaction costs are accounted for.
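A minimal sketch of the described pipeline, using placeholder random data in place of the Value Line inputs and an off-the-shelf regressor rather than the author's actual network, might look as follows.

```python
# Illustrative sketch of the ranking pipeline described above (not Satinover's code):
# rank-normalize trailing quarterly changes, predict next-quarter price change with a
# small neural network, then form long, short, and hedged portfolios from the ranking.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_equities, n_lags = 1500, 10

# Placeholder data: 10 prior quarterly % changes in price and in earnings per equity.
price_chg = rng.normal(0, 0.1, (n_equities, n_lags))
earn_chg = rng.normal(0, 0.2, (n_equities, n_lags))
next_q_return = rng.normal(0, 0.15, n_equities)   # stand-in target; in practice the
                                                  # model is fit on past quarters only

def rank_scale(x):
    """Convert each column to a cross-sectional rank scaled to roughly [-1, 1]."""
    ranks = x.argsort(axis=0).argsort(axis=0)
    return 2.0 * ranks / (len(x) - 1) - 1.0

X = np.hstack([rank_scale(price_chg), rank_scale(earn_chg)])
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
model.fit(X, next_q_return)

predicted = model.predict(X)
order = np.argsort(predicted)                # ascending: worst first, best last
for k in (10, 20, 50, 100):
    long_idx, short_idx = order[-k:], order[:k]
    hedged = next_q_return[long_idx].mean() - next_q_return[short_idx].mean()
    print(f"top/bottom {k}: hedged return {hedged:+.3f}")
```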
Statistical characteristics of network traffic have attracted a significant amount of research for automated network intrusion detection, some of which looked at applications of natural statistical laws such as Zipf's law, Benford's law, and the Pareto distribution. In this paper, we present the application of Benford's law to a new network flow metric, flow size difference, which has not been studied before by other researchers, to build an unsupervised flow-based intrusion detection system (IDS). The method was inspired by our observation, on a large number of TCP flow datasets, that normal flows tend to follow Benford's law closely while malicious flows tend to deviate significantly from it. The proposed IDS is unsupervised, so it can be easily deployed without any training. It has two simple operational parameters with a clear semantic meaning, allowing the IDS operator to set and adapt their values intuitively to adjust the overall performance of the IDS. We tested the proposed IDS on two (one closed and one public) datasets, and demonstrated its efficiency in terms of AUC (area under the ROC curve). Our work shows that flow size difference has great potential to improve the performance of any flow-based network IDS.
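The core test can be sketched in a few lines: compute the leading-digit distribution of flow size differences and score its deviation from Benford's expected frequencies. This is a simplification; the paper's exact metric, windowing, and thresholds are not reproduced here.

```python
# Minimal sketch of a Benford's-law test on flow size differences (illustrative only).
import numpy as np

BENFORD = np.log10(1 + 1 / np.arange(1, 10))   # expected leading-digit frequencies

def leading_digit(values):
    """First significant digit of each strictly positive value."""
    values = np.asarray(values, dtype=float)
    values = values[values > 0]
    exponent = np.floor(np.log10(values))
    return (values / 10 ** exponent).astype(int)

def benford_divergence(flow_size_diffs):
    """Deviation of the observed leading-digit distribution from Benford's law."""
    digits = leading_digit(np.abs(flow_size_diffs))
    observed = np.bincount(digits, minlength=10)[1:10] / max(len(digits), 1)
    return np.abs(observed - BENFORD).sum()    # simple L1 distance as a score

# Toy usage: benign-looking differences vs. a constant-size (e.g., flooding) pattern.
benign = np.diff(np.random.default_rng(1).lognormal(8, 2, 5000))
attack = np.full(5000, 512.0)                  # identical flow sizes -> digit "5" only
print("benign score:", round(benford_divergence(benign), 3))
print("attack score:", round(benford_divergence(attack), 3))
```

An operator would then raise an alert whenever the divergence over a window of recent flows exceeds a chosen cutoff, which corresponds to the kind of intuitive parameter the abstract describes.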
Neural network applications have become popular in both enterprise and personal settings. Network solutions are tuned meticulously for each task, and designs that can robustly resolve queries end up in high demand. As the commercial value of accurate and performant machine learning models increases, so too does the demand to protect neural architectures as confidential investments. We explore the vulnerability of neural networks deployed as black boxes across accelerated hardware through electromagnetic side channels. We examine the magnetic flux emanating from a graphics processing unit's power cable, as acquired by a cheap $3 induction sensor, and find that this signal betrays the detailed topology and hyperparameters of a black-box neural network model. The attack acquires the magnetic signal for one query with unknown input values, but known input dimensions. The network reconstruction is possible due to the modular layer sequence in which deep neural networks are evaluated. We find that each layer component's evaluation produces an identifiable magnetic signal signature, from which layer topology, width, function type, and sequence order can be inferred using a suitably trained classifier and a joint consistency optimization based on integer programming. We study the extent to which network specifications can be recovered, and consider metrics for comparing network similarity. We demonstrate the potential accuracy of this side channel attack in recovering the details for a broad range of network architectures, including random designs. We consider applications that may exploit this novel side channel exposure, such as adversarial transfer attacks. In response, we discuss countermeasures to protect against our method and other similar snooping techniques.
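Illustratively, and without reflecting the actual pipeline, features, or data used by the authors, the per-layer signature-classification step might be sketched as follows on synthetic signal windows; the integer-programming consistency step is omitted.

```python
# Toy sketch of the "per-layer signature" classification idea (illustrative only):
# segment a 1-D magnetic trace into fixed windows, extract simple features, and
# train a classifier to label each window with a layer type.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
LAYER_TYPES = ["conv", "dense", "pool"]

def synth_window(layer):
    """Placeholder magnetic-signal window; each layer type gets a distinct spectrum."""
    base = {"conv": 5, "dense": 15, "pool": 30}[layer]
    t = np.linspace(0, 1, 256)
    return np.sin(2 * np.pi * base * t) + 0.3 * rng.normal(size=t.size)

def features(window):
    """Crude features: a few low-frequency FFT magnitudes plus signal spread."""
    spec = np.abs(np.fft.rfft(window))[:40]
    return np.concatenate([spec, [window.std()]])

X, y = [], []
for _ in range(300):
    layer = rng.choice(LAYER_TYPES)
    X.append(features(synth_window(layer)))
    y.append(layer)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict([features(synth_window("dense"))]))   # expect ['dense']
```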
Modern processors have suffered a deluge of dangerous side channel and speculative execution attacks that exploit vulnerabilities rooted in branch predictor units (BPUs). Many such attacks exploit the shared use of the BPU between unrelated processes, which allows malicious processes to retrieve sensitive data or enable speculative execution attacks. Attacks that exploit collisions between different branch instructions inside the BPU are among the most dangerous. Various protections and mitigations have been proposed, such as CPU microcode updates, secured cache designs, fencing mechanisms, and invisible speculation. While some effectively mitigate speculative execution attacks, they overlook the BPU as an attack vector, leaving it prone to malicious collisions and the resulting critical penalties, such as advanced micro-op cache attacks. Furthermore, some mitigations severely hamper the accuracy of the BPU, resulting in increased CPU performance overhead. To address these issues, we present the secret token branch predictor unit (STBPU), a branch predictor design that mitigates collision-based speculative execution attacks and BPU side channels while incurring little to no performance overhead. STBPU achieves this by customizing internal data representations for each software entity requiring isolation. To prevent more advanced attacks, STBPU monitors hardware events and preemptively changes how STBPU data is stored and interpreted.
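The central idea, mixing a per-entity secret token into the predictor's internal indexing so that unrelated processes cannot engineer collisions on the same entry, can be sketched conceptually as below; this is an illustrative software analogy, not the actual hardware design, hash function, or token-management mechanism.

```python
# Conceptual sketch of token-keyed branch predictor indexing in the spirit of STBPU
# (illustrative only; the real design's hashing and re-keying details differ).
BPU_ENTRIES = 4096          # hypothetical predictor table size (power of two)

def bpu_index(branch_pc, history, secret_token):
    """Mix a per-process secret token into the index so that two processes
    looking up the same (pc, history) pair typically land in different entries,
    breaking cross-process collisions in the shared predictor table."""
    mixed = (branch_pc ^ history ^ secret_token) * 0x9E3779B1   # simple mix hash
    return (mixed >> 16) % BPU_ENTRIES

# Two processes with the same branch address and history generally no longer collide:
pc, hist = 0x400ABC, 0b1011001110
victim_idx = bpu_index(pc, hist, secret_token=0x5F3A91C2)
attacker_idx = bpu_index(pc, hist, secret_token=0x1D44E07B)
print(victim_idx, attacker_idx, victim_idx == attacker_idx)
```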