
A Novel Approach for Protection of Accounts Names against Hackers Combining Cluster Analysis and Chaotic Theory

Posted by Ina Taralova
Publication date: 2019
Research field: Informatics Engineering
Paper language: English





The final years of the 20th century and the beginning of the 21st have made everyday life easier thanks to the rapid development of computers and other intelligent devices. Algorithms based on artificial intelligence are now an integral part of software. The volume of information transmitted over the Internet and local networks grows continuously, and users expect that this data is properly protected. The aim of the present paper is to reveal false user account names created as a result of hacker attacks. The probability that a given account is false or genuine is calculated using a novel approach that combines machine learning (in particular, cluster analysis) with chaos theory. The suspected account name is used as a pattern; classification techniques then form clusters, each associated with a probability that the name is false. The investigation has two main purposes: first, to determine whether similar usernames tend to appear in recognizable patterns during the creation of new accounts; second, to detect false usernames and distinguish them from real ones, regardless of whether the two types of accounts are generated at the same rate. Such security systems apply wherever the data in user accounts must be strictly protected, for example in on-line voting, in opinion polls and surveys, and in protecting the information held in the user accounts of a given system.
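
The paper does not publish an implementation, and the chaotic component is not specified in the abstract. As a minimal sketch of the cluster-analysis step only, the snippet below groups usernames by character n-gram similarity and scores a suspected name by the share of known false names in its cluster. The usernames, the scikit-learn feature and cluster choices, and the scoring rule are all assumptions for illustration, not the authors' method.

```python
# Illustrative sketch only: one plausible clustering step for scoring a
# suspected username. All data, features, and parameters are assumptions;
# the chaos-theory component of the paper is omitted.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
import numpy as np

known_real = ["anna.petrova", "j.smith77", "mariya.ivanova"]      # hypothetical data
known_fake = ["xk3q9z_bot", "qwrtzp01", "zzqx_88a"]               # hypothetical data
suspected = "qwrtzq02"                                            # account under test

names = known_real + known_fake
labels = np.array([0] * len(known_real) + [1] * len(known_fake))  # 0 = real, 1 = fake

# Character 2-3-gram TF-IDF captures the "shape" of a username.
vec = TfidfVectorizer(analyzer="char", ngram_range=(2, 3))
X = vec.fit_transform(names)

# Cluster the labelled names; the suspected name is assigned to the nearest
# cluster and scored by the share of known fakes in that cluster.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
cluster_of = km.predict(vec.transform([suspected]))[0]
in_cluster = labels[km.labels_ == cluster_of]
p_fake = in_cluster.mean() if len(in_cluster) else 0.5

print(f"Estimated probability that '{suspected}' is a false account: {p_fake:.2f}")
```

In the system the paper describes, such a cluster-membership probability would be combined with the chaos-theoretic analysis of how new usernames are generated; that part is not sketched here.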




Read also

The rise in the adoption of blockchain technology has led to increased illegal activity by cyber-criminals, costing billions of dollars. Many machine learning algorithms have been applied to detect such illegal behavior. These algorithms are often trained on transaction behavior and, in some cases, on the vulnerabilities that exist in the system. In our approach, we study the feasibility of using metadata, such as the Domain Name (DN) associated with an account in the blockchain, to identify whether the account should be tagged as malicious. Here, we leverage the temporal aspects attached to the DNs. Our results identify 144,930 DNs that show malicious behavior, and of these, 54,114 DNs show persistent malicious behavior over time. Nonetheless, none of the DNs we identify as malicious appear among newly and officially tagged malicious blockchain DNs.
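
On the temporal angle mentioned above, the following minimal sketch (not the authors' pipeline; the observations, field names, and persistence threshold are hypothetical) shows one way to separate DNs that are flagged at least once from DNs whose flagged behavior persists across several time windows.

```python
# Illustrative sketch: split flagged domain names (DNs) into "malicious"
# (flagged at least once) and "persistently malicious" (flagged in several
# time windows). All inputs and the threshold are assumptions.
from collections import defaultdict

# (dn, epoch_day, flagged_in_window) tuples; hypothetical observations.
observations = [
    ("wallet-giveaway.example", 1, True),
    ("wallet-giveaway.example", 2, True),
    ("wallet-giveaway.example", 3, True),
    ("shop.example", 1, False),
    ("shop.example", 2, True),
]

windows_flagged = defaultdict(set)
for dn, day, flagged in observations:
    if flagged:
        windows_flagged[dn].add(day)

PERSISTENCE_WINDOWS = 3  # assumed threshold for "persistent" behaviour
malicious = set(windows_flagged)
persistent = {dn for dn, days in windows_flagged.items() if len(days) >= PERSISTENCE_WINDOWS}

print("malicious DNs:", malicious)
print("persistently malicious DNs:", persistent)
```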
We present a large-scale characterization of attacker activity across 111 real-world enterprise organizations. We develop a novel forensic technique for distinguishing between attacker activity and benign activity in compromised enterprise accounts that yields few false positives and enables us to perform fine-grained analysis of attacker behavior. Applying our methods to a set of 159 compromised enterprise accounts, we quantify how long attackers remain active in accounts and examine thematic patterns in how attackers access and leverage these hijacked accounts. We find that attackers frequently dwell in accounts for multiple days to weeks, suggesting that delayed (non-real-time) detection can still provide significant value. Based on an analysis of the attackers' timing patterns, we observe two distinct modalities in how attackers access compromised accounts, which could be explained by the existence of a specialized market for hijacked enterprise accounts: one class of attackers focuses on compromising accounts and selling access to a second class of attackers, who then exploit the access those hijacked accounts provide. Ultimately, our analysis sheds light on the state of enterprise account hijacking and highlights fruitful directions for a broader space of detection methods, ranging from new features that home in on malicious account behavior to non-real-time detection methods that leverage malicious activity after an attack's initial point of compromise to identify attacks more accurately.
We present a detailed discussion of our novel diagrammatic coupled cluster Monte Carlo (diagCCMC) [Scott et al., J. Phys. Chem. Lett. 2019, 10, 925]. The diagCCMC algorithm performs an imaginary-time propagation of the similarity-transformed coupled cluster Schrödinger equation. Imaginary-time updates are computed by stochastic sampling of the coupled cluster vector function: each term is evaluated as a randomly realised diagram in the connected expansion of the similarity-transformed Hamiltonian. We highlight similarities and differences between deterministic and stochastic linked coupled cluster theory when the latter is re-expressed as a sampling of the diagrammatic expansion, and discuss details of our implementation that allow for a walker-less realisation of the stochastic sampling. Finally, we demonstrate that in the presence of locality, our algorithm can obtain a fixed error bar per electron while only requiring an asymptotic computational effort that scales quartically with system size, independently of the truncation level in coupled cluster theory. The algorithm only requires an asymptotic memory cost that scales linearly, as demonstrated previously. These scaling reductions require no ad hoc modifications to the approach.
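
The abstract does not reproduce the working equation. Written schematically in standard coupled cluster Monte Carlo notation (an assumed form, not a quotation from the paper), the imaginary-time amplitude update being sampled looks as follows, with each projection estimated from randomly realised connected diagrams of the similarity-transformed Hamiltonian.

```latex
% Schematic imaginary-time update for the cluster amplitudes t_mu
% (CCMC-style notation; assumed form, not taken verbatim from the paper).
t_\mu(\tau + \delta\tau)
  = t_\mu(\tau)
  - \delta\tau \, \langle \Phi_\mu | e^{-\hat{T}} \hat{H} e^{\hat{T}} | \Phi_0 \rangle ,
\qquad
\hat{T} = \sum_\mu t_\mu \hat{\tau}_\mu .
```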
In this work, we show how to jointly exploit adversarial perturbation and model poisoning vulnerabilities to practically launch a new stealthy attack, dubbed AdvTrojan. AdvTrojan is stealthy because it is activated only when: 1) a carefully crafted adversarial perturbation is injected into the input examples during inference, and 2) a Trojan backdoor is implanted during the training process of the model. We leverage adversarial noise in the input space to move Trojan-infected examples across the model's decision boundary, making the attack difficult to detect. This stealthiness fools users into mistakenly trusting the infected model as a classifier that is robust against adversarial examples. AdvTrojan can be implemented by only poisoning the training data, similar to conventional Trojan backdoor attacks. Our thorough analysis and extensive experiments on several benchmark datasets show that AdvTrojan can bypass existing defenses with a success rate close to 100% in most of our experimental scenarios, and that it can be extended to attack federated learning tasks as well.
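
For concreteness, the snippet below sketches only the conventional data-poisoning half described above: stamping a trigger patch on a small fraction of training samples and re-labelling them with the attacker's target class. The inference-time adversarial perturbation that completes AdvTrojan, the model itself, and all parameters are omitted or assumed here.

```python
# Minimal, illustrative sketch of trigger-based data poisoning (the
# conventional Trojan step the abstract refers to). All data and
# parameters are assumptions; the adversarial-perturbation half of
# AdvTrojan is not shown.
import numpy as np

rng = np.random.default_rng(0)
images = rng.random((100, 28, 28))          # hypothetical training images
labels = rng.integers(0, 10, size=100)      # hypothetical labels

POISON_RATE = 0.05                          # fraction of samples to poison
TARGET_CLASS = 7                            # attacker-chosen target label

def stamp_trigger(img, size=3, value=1.0):
    """Place a small bright square in the bottom-right corner as the trigger."""
    img = img.copy()
    img[-size:, -size:] = value
    return img

poison_idx = rng.choice(len(images), size=int(POISON_RATE * len(images)), replace=False)
for i in poison_idx:
    images[i] = stamp_trigger(images[i])
    labels[i] = TARGET_CLASS

print(f"Poisoned {len(poison_idx)} of {len(images)} samples with the trigger.")
```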
Jianyu Xu, Bin Liu, Huadong Mo, 2021
The cybersecurity of smart grids has become one of the key problems in developing reliable modern power and energy systems. This paper introduces a non-stationary adversarial cost with a variation constraint for smart grids, which enables us to investigate the problem of optimal smart grid protection against cyber attacks in a relatively practical scenario. In particular, a Bayesian multi-node bandit (MNB) model with adversarial costs is constructed and a new regret function is defined for this model. An algorithm called the Thompson-Hedge algorithm is presented to solve the problem, and the superior performance of the proposed algorithm is proven in terms of the convergence rate of the regret function. The applicability of the algorithm to real smart grid scenarios is verified, and its performance is also demonstrated by numerical examples.
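
The Thompson-Hedge algorithm itself is not spelled out in the abstract. As a hedged illustration of the adversarial-cost side only, the sketch below runs a plain Hedge (multiplicative-weights) update over which node to protect each round; the costs, learning rate, and problem size are assumptions, and the Bayesian (Thompson sampling) component of the actual algorithm is omitted.

```python
# Illustrative sketch of the Hedge (multiplicative-weights) component only:
# pick one node to protect each round and down-weight nodes that incur high
# (possibly adversarial) cost. Not the paper's Thompson-Hedge algorithm.
import numpy as np

rng = np.random.default_rng(1)
N_NODES, ROUNDS, ETA = 5, 200, 0.1   # assumed problem size and learning rate

weights = np.ones(N_NODES)
total_cost = 0.0
for t in range(ROUNDS):
    probs = weights / weights.sum()
    node = rng.choice(N_NODES, p=probs)      # node chosen for protection
    costs = rng.random(N_NODES)              # stand-in for adversarial costs in [0, 1]
    total_cost += costs[node]
    weights *= np.exp(-ETA * costs)          # full-information Hedge update

print(f"average per-round cost: {total_cost / ROUNDS:.3f}")
```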