
Effective Email Spam Detection System using Extreme Gradient Boosting

Posted by: Ismail Mustapha
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





The popularity, cost-effectiveness and ease of information exchange that electronic mail offers to electronic device users have been plagued by the rising number of unsolicited or spam emails. Driven by the need to protect email users from this growing menace, research in spam email filtering/detection systems has been increasingly active in the last decade. However, the adaptive nature of spam emails has often rendered most of these systems ineffective. While several spam detection models have been reported in the literature, their reported performance on out-of-sample test data shows room for further improvement. Presented in this research is an improved spam detection model based on Extreme Gradient Boosting (XGBoost), which, to the best of our knowledge, has received little attention in spam email detection problems. Experimental results show that the proposed model outperforms earlier approaches across a wide range of evaluation metrics. A thorough analysis of the model's results in comparison with those of earlier works is also presented.
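As a rough illustration of the kind of pipeline described above, the sketch below trains an XGBoost classifier on email text. The TF-IDF features, the hypothetical emails.csv file and its text/label columns are assumptions for the example only, not the paper's actual feature set or data.

```python
# Minimal sketch of an XGBoost-based spam classifier.
# Assumptions: a labelled CSV "emails.csv" with columns "text" and
# "label" (1 = spam, 0 = ham), and TF-IDF features over raw email text.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from xgboost import XGBClassifier

df = pd.read_csv("emails.csv")  # hypothetical dataset path
X_text, y = df["text"], df["label"]

# Hold out a test split so performance is measured out of sample.
X_train, X_test, y_train, y_test = train_test_split(
    X_text, y, test_size=0.2, stratify=y, random_state=42
)

vectorizer = TfidfVectorizer(max_features=5000, stop_words="english")
X_train_vec = vectorizer.fit_transform(X_train)
X_test_vec = vectorizer.transform(X_test)

model = XGBClassifier(
    n_estimators=300, max_depth=6, learning_rate=0.1, eval_metric="logloss"
)
model.fit(X_train_vec, y_train)

print(classification_report(y_test, model.predict(X_test_vec)))
```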




Read also

Yang Liu, Zhuo Ma, Ximeng Liu (2019)
Recently, Google and 24 other institutions proposed a series of open challenges towards federated learning (FL), which include application expansion and homomorphic encryption (HE). The former aims to expand the applicable machine learning models of FL. The latter focuses on who holds the secret key when applying HE to FL. In the naive HE scheme, the server holds the secret key. Such a setting causes a serious problem: if the server does not conduct aggregation before decryption, it is left with a chance to access the users' updates. Inspired by these two challenges, we propose FedXGB, a federated extreme gradient boosting (XGBoost) scheme supporting forced aggregation. FedXGB achieves the following two breakthroughs. First, FedXGB involves a new HE-based secure aggregation scheme for FL. By combining the advantages of secret sharing and homomorphic encryption, the algorithm solves the second challenge mentioned above and is robust to user dropout. Then, FedXGB extends FL to a new machine learning model by applying the secure aggregation scheme to the classification and regression tree building of XGBoost. Moreover, we conduct a comprehensive theoretical analysis and extensive experiments to evaluate the security, effectiveness, and efficiency of FedXGB. The results indicate that FedXGB incurs less than 1% accuracy loss compared with the original XGBoost, and can provide about 23.9% runtime and 33.3% communication reduction for HE-based model update aggregation in FL.
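The toy sketch below shows only the additively homomorphic aggregation that such schemes build on, using the python-paillier (`phe`) package; FedXGB's secret sharing of the key, forced aggregation and dropout handling are not reproduced here.

```python
# Toy illustration of additively homomorphic aggregation of user updates
# with the Paillier cryptosystem (python-paillier's `phe` package).
# This shows only the HE building block, not FedXGB itself.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# Each user encrypts a (scalar) model update locally.
user_updates = [0.12, -0.05, 0.30]
ciphertexts = [public_key.encrypt(u) for u in user_updates]

# The server sums ciphertexts without seeing any individual plaintext.
encrypted_sum = ciphertexts[0]
for c in ciphertexts[1:]:
    encrypted_sum = encrypted_sum + c

# Only the secret-key holder (not the server, in FedXGB's setting)
# can decrypt the aggregate.
aggregate = private_key.decrypt(encrypted_sum)
print(aggregate)  # approximately 0.37
```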
Nowadays, with the rise of Internet access and mobile devices around the globe, more people are using social networks for collaboration and receiving real-time information. Twitter, a microblogging service that is becoming a critical source of communication and news propagation, has grabbed the attention of spammers who aim to distract users. So far, researchers have introduced various defense techniques to detect spam and combat spammer activities on Twitter. To overcome this problem, many novel techniques have been offered by researchers in recent years, which have greatly enhanced spam detection performance. This raises a motivation to conduct a systematic review of different approaches to spam detection on Twitter. This review focuses on systematically comparing the existing research techniques on Twitter spam detection. The literature review analysis reveals that most of the existing methods rely on machine learning based algorithms. Among these machine learning algorithms, the major differences are related to the feature selection methods used. Hence, we propose a taxonomy based on different feature selection methods and analyses, namely content analysis, user analysis, tweet analysis, network analysis, and hybrid analysis. Then, we present numerical analyses and comparative studies of current approaches, identifying open challenges that help researchers develop solutions in this area.
T. Cinto (2020)
Space weather events may cause damage to several fields, including aviation, satellites, oil and gas industries, and electrical systems, leading to economic and commercial losses. Solar flares are among the most significant of these events, and refer to sudden radiation releases that can affect the Earth's atmosphere within a few hours or minutes. Therefore, it is worth designing high-performance systems for forecasting such events. Although there are many approaches to flare forecasting in the literature, there is still a lack of consensus concerning the techniques used for designing these systems. Seeking to establish some standardization in the design of flare predictors, in this study we propose a novel methodology for designing such predictors, further validated with extreme gradient boosting tree classifiers and time series. This methodology relies on the following well-defined machine learning based pipeline: (i) univariate feature selection; (ii) randomized hyper-parameter optimization; (iii) imbalanced data treatment; (iv) adjustment of the classifiers' cut-off point; and (v) evaluation under operational settings, as sketched below. To verify the methodology's effectiveness, we designed and evaluated three proof-of-concept models for forecasting $\geq C$ class flares up to 72 hours ahead. Compared to baseline models, those models were able to significantly increase their true skill statistic (TSS) scores under operational forecasting scenarios by 0.37 (predicting flares in the next 24 hours), 0.13 (predicting flares within 24-48 hours), and 0.36 (predicting flares within 48-72 hours). Besides increasing TSS, the methodology also led to significant increases in the area under the ROC curve, corroborating that we improved the positive and negative recalls of the classifiers while decreasing the number of false alarms.
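A minimal sketch of steps (i)-(iv) of the pipeline named above, assuming a pre-built feature matrix X and binary flare labels y; the parameter grids and the 0.3 cut-off are illustrative placeholders, not values from the study.

```python
# Sketch of the described pipeline: (i) univariate feature selection,
# (ii) randomized hyper-parameter search, (iii) class-imbalance weighting,
# (iv) cut-off adjustment, scored with the true skill statistic (TSS).
# Assumes a feature matrix X and binary labels y (1 = flare) already exist.
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import RandomizedSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.metrics import confusion_matrix
from xgboost import XGBClassifier

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

pipe = Pipeline([
    ("select", SelectKBest(f_classif)),                                  # (i)
    ("clf", XGBClassifier(
        scale_pos_weight=(y_train == 0).sum() / (y_train == 1).sum(),    # (iii)
        eval_metric="logloss",
    )),
])
search = RandomizedSearchCV(                                             # (ii)
    pipe,
    param_distributions={
        "select__k": [10, 20, 30],
        "clf__max_depth": [3, 5, 7],
        "clf__n_estimators": [100, 300, 500],
    },
    n_iter=10, scoring="roc_auc", cv=3, random_state=0,
)
search.fit(X_train, y_train)

# (iv) Adjust the decision cut-off instead of using the default 0.5.
threshold = 0.3  # illustrative value; in practice chosen on validation data
y_pred = (search.predict_proba(X_test)[:, 1] >= threshold).astype(int)

tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()
tss = tp / (tp + fn) - fp / (fp + tn)  # TSS = sensitivity - false alarm rate
print(f"TSS = {tss:.2f}")
```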
Cybersecurity is a domain where the data distribution is constantly changing, with attackers exploring newer patterns to attack cyber infrastructure. The intrusion detection system is one of the important layers of cyber safety in today's world. Machine learning based network intrusion detection systems have started showing effective results in recent years. With deep learning models, the detection rates of network intrusion detection systems are improved. However, the more accurate the model, the greater its complexity and hence the lower its interpretability. Deep neural networks are complex and hard to interpret, which makes it difficult to use them in production, as the reasons behind their decisions are unknown. In this paper, we use a deep neural network for network intrusion detection and also propose an explainable AI framework to add transparency at every stage of the machine learning pipeline. This is done by leveraging explainable AI algorithms, which focus on making ML models less of a black box by providing explanations as to why a prediction is made. Explanations give us measurable factors as to which features influence the prediction of a cyberattack and to what degree. These explanations are generated with SHAP, LIME, the Contrastive Explanations Method, ProtoDash, and Boolean Decision Rules via Column Generation. We apply these approaches to the NSL-KDD intrusion detection dataset and demonstrate the results.
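A minimal sketch of attaching SHAP explanations to an intrusion classifier; a gradient-boosted model stands in for the paper's deep neural network purely to keep the example short, and X and y are assumed to be preprocessed NSL-KDD style tabular features and labels.

```python
# Sketch: post-hoc explanations with SHAP for an intrusion detection model.
# A tree model replaces the paper's deep network here for brevity; X and y
# are assumed to be a preprocessed feature matrix and attack/normal labels.
import shap
from xgboost import XGBClassifier

model = XGBClassifier(eval_metric="logloss").fit(X, y)

# TreeExplainer yields per-feature SHAP values: how much each feature pushed
# a given prediction toward "attack" or "normal", and by how much.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Global view: which features most influence attack predictions overall.
shap.summary_plot(shap_values, X)
```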
Cyber attacks pose crucial threats to computer system security and put digital treasuries at excessive risk. This leads to an urgent call for an effective intrusion detection system that can identify intrusion attacks with high accuracy. It is challenging to classify intrusion events due to the wide variety of attacks. Furthermore, in a normal network environment, a majority of the connections are initiated by benign behaviors. The class imbalance issue in intrusion detection forces the classifier to be biased toward the majority/benign class, thus leaving many attack incidents undetected. Spurred by the success of deep neural networks in computer vision and natural language processing, in this paper we design a new system named DeepIDEA that takes full advantage of deep learning to enable intrusion detection and classification. To achieve high detection accuracy on imbalanced data, we design a novel attack-sharing loss function that effectively moves the decision boundary towards the attack classes and eliminates the bias towards the majority/benign class. By using this loss function, DeepIDEA respects the fact that intrusion mis-classification should receive a higher penalty than attack mis-classification. Extensive experimental results on three benchmark datasets demonstrate the high detection accuracy of DeepIDEA. In particular, compared with eight state-of-the-art approaches, DeepIDEA always provides the best class-balanced accuracy.
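The exact attack-sharing loss is not given in the abstract, so the sketch below only illustrates the general idea with a class-weighted cross-entropy in PyTorch that down-weights the benign majority class; the class count and weights are illustrative placeholders.

```python
# Illustrative stand-in for a loss that penalizes missing an attack more
# heavily than mis-classifying among attack types. This is NOT DeepIDEA's
# attack-sharing loss, only a weighted cross-entropy sketch of the idea
# (class 0 = benign, classes 1..K-1 = attack types).
import torch
import torch.nn as nn

num_classes = 5
# Down-weight the (majority) benign class so the decision boundary
# shifts toward the attack classes.
class_weights = torch.tensor([0.2, 1.0, 1.0, 1.0, 1.0])
criterion = nn.CrossEntropyLoss(weight=class_weights)

logits = torch.randn(8, num_classes, requires_grad=True)  # batch of 8 outputs
labels = torch.randint(0, num_classes, (8,))               # ground-truth classes
loss = criterion(logits, labels)
loss.backward()  # gradients flow as usual; only the class weighting differs
print(loss.item())
```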

