
Unveiling the I2P web structure: a connectivity analysis

Published by: Dr. Roberto Magán-Carrión
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





The Web is a primary and essential service for sharing information among users and organizations all over the world. Despite the current significance of such traffic on the Internet, the so-called Surface Web has been estimated at only about 5% of the total. The remaining volume corresponds to the portion of the Web known as the Deep Web. These contents are not accessible to search engines, either because they are protected by authentication or because they are pages reachable only through the so-called darknets. Browsing darknet websites requires special authorization or specific software and configurations. Although TOR is the most widely used darknet nowadays, there are alternatives such as I2P or Freenet, which offer different features to end users. In this work, we analyze the connectivity of websites in the I2P network (named eepsites) in order to discover whether the patterns and relationships found there differ from those of the legacy web, and to gain insights into the network's dimension and structure. For that purpose, a novel tool was specifically developed by the authors and deployed in a distributed scenario. The main results confirm the decentralized nature of the I2P network: there is a structural core of interconnected eepsites, while several other nodes remain isolated, probably due to their intermittent presence in the network.
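As a rough illustration of the kind of connectivity analysis the abstract describes, the following minimal sketch builds a directed link graph from crawled eepsite links and inspects its connected components. The edge list, the eepsite names, and the use of the networkx library are illustrative assumptions, not the authors' actual tool.

# Minimal sketch: connectivity analysis over a hypothetical eepsite link graph.
import networkx as nx

# Hypothetical (source, target) links extracted from crawled eepsite pages.
edges = [
    ("stats.i2p", "i2p-projekt.i2p"),
    ("i2p-projekt.i2p", "forum.i2p"),
    ("forum.i2p", "stats.i2p"),
    ("lonely-eepsite.i2p", "lonely-eepsite.i2p"),  # self-link only, effectively isolated
]

g = nx.DiGraph()
g.add_edges_from(edges)

# Weakly connected components separate the interconnected "core" from
# isolated eepsites; in/out-degree gives a first view of linking patterns.
components = sorted(nx.weakly_connected_components(g), key=len, reverse=True)
print("number of components:", len(components))
print("largest component size:", len(components[0]))
for site in g.nodes:
    print(site, "in:", g.in_degree(site), "out:", g.out_degree(site))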


Read also

In late 2017, a sudden proliferation of malicious JavaScript was reported on the Web: browser-based mining exploited the CPU time of website visitors to mine the cryptocurrency Monero. Several studies measured the deployment of such code and developed defenses. However, previous work did not establish how many users were really exposed to the identified mining sites and whether there was a real risk given common user browsing behavior. In this paper, we present a retroactive analysis to close this research gap. We pool large-scale, longitudinal data from several vantage points, gathered during the prime time of illicit cryptomining, to measure the impact on web users. We leverage data from passive traffic monitoring of university networks and a large European ISP, with suspected mining sites identified in previous active scans. We corroborate our results with data from a browser extension with a large user base that tracks site visits. We also monitor open HTTP proxies and the Tor network for malicious injection of code. We find that the risk for most Web users was always very low, much lower than what deployment scans suggested. Any exposure period was also very brief. However, we also identify a previously unknown and exploited attack vector on mobile devices.
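The core of the exposure measurement is cross-referencing passively observed site visits against a list of suspected mining domains. The sketch below shows that join in its simplest form; the file name, column names, and domains are hypothetical, and the real study worked on much richer vantage-point data.

# Minimal sketch: count users who visited any suspected mining site.
import csv
from collections import Counter

suspected_miners = {"coinhive-like.example", "cryptoloot-like.example"}  # assumed blocklist

exposure = Counter()
with open("visit_log.csv", newline="") as f:  # assumed columns: user_id, domain, timestamp
    for row in csv.DictReader(f):
        if row["domain"] in suspected_miners:
            exposure[row["user_id"]] += 1

print("users with at least one visit to a suspected mining site:", len(exposure))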
During disasters, crises, and emergencies, the public relies on online services provided by official authorities to receive timely alerts, trustworthy information, and access to relief programs. It is therefore crucial for the authorities to reduce risks when accessing their online services. This includes catering to secure identification of the service, secure resolution of names to network services, and content security and privacy as a minimum base for trustworthy communication. In this paper, we take a first look at Alerting Authorities (AA) in the US and investigate security measures related to trustworthy and secure communication. We study the domain namespace structure, DNSSEC penetration, and web certificates. We introduce an integrative threat model to better understand whether and how the online presence and services of AAs are harmed. As an illustrative example, we investigate 1,388 Alerting Authorities. We observe partially heightened security relative to global Internet trends, yet find cause for concern, as about 78% of service providers fail to deploy measures of trustworthy service provision. Our analysis shows two major shortcomings. First, how the DNS ecosystem is leveraged: about 50% of organizations do not own their dedicated domain names and are dependent on others, 55% opt for unrestricted-use namespaces, which simplifies phishing, and less than 4% of unique AA domain names are secured by DNSSEC, which can lead to DNS poisoning and possibly to certificate misissuance. Second, how Web PKI certificates are utilized: 15% of all hosts provide no or invalid certificates and thus cannot cater to confidentiality and data integrity, 64% of the hosts provide domain validation certificates that lack any identity information, and shared certificates have gained in popularity, which leads to fate-sharing and can be a cause of instability.
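The two checks at the heart of this measurement, DNSSEC validation status and Web PKI certificate inspection, can be probed for a single domain roughly as sketched below. The domain is hypothetical, the code relies on dnspython and the standard library, and the AD-flag check only reflects what the upstream resolver validates.

# Minimal sketch: DNSSEC validation status and certificate subject for one domain.
import socket
import ssl

import dns.flags
import dns.resolver

domain = "alerts.example.gov"  # hypothetical Alerting Authority domain

# Ask the resolver with the DO bit set and check the AD (authenticated data) flag;
# this only works if the configured upstream resolver performs DNSSEC validation.
resolver = dns.resolver.Resolver()
resolver.use_edns(0, dns.flags.DO, 4096)
answer = resolver.resolve(domain, "A")
print("DNSSEC-validated answer:", bool(answer.response.flags & dns.flags.AD))

# Fetch the TLS certificate and inspect the subject; domain-validated
# certificates carry no organizational identity information there.
ctx = ssl.create_default_context()
with socket.create_connection((domain, 443), timeout=5) as sock:
    with ctx.wrap_socket(sock, server_hostname=domain) as tls:
        cert = tls.getpeercert()
print("certificate subject:", cert.get("subject"))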
The Trusted Platform Module (TPM) version 2.0 provides a two-phase key exchange primitive which can be used to implement three widely standardized authenticated key exchange protocols: the Full Unified Model, the Full MQV, and the SM2 key exchange protocols. However, vulnerabilities have been found in all of these protocols. Fortunately, it seems that the protections offered by TPM chips can mitigate these vulnerabilities. In this paper, we present a security model which captures the TPM's protections on keys and on the protocols' computation environments, and in which multiple protocols can be analyzed in a unified way. Based on this unified security model, we give the first formal security analysis of the key exchange primitive of TPM 2.0. The analysis shows that, with the help of the hardware protections of TPM chips, the key exchange primitive indeed satisfies the well-defined security property of our security model, but unfortunately only under some impractical limiting conditions, which would prevent the application of the key exchange primitive in real-world networks. To make TPM 2.0 applicable to real-world networks, we present a revision of the key exchange primitive of TPM 2.0 which can be secure without the limiting conditions. We give a rigorous analysis of our revision, and the results show that it achieves not only the basic security property of modern AKE security models but also some further security properties.
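For orientation, the sketch below shows the general shape of the Full Unified Model mentioned in the abstract: both parties combine an ephemeral-ephemeral and a static-static Diffie-Hellman secret through a KDF. X25519 and HKDF are used purely for illustration; TPM 2.0 realizes this primitive over other curves, behind its two-phase command interface, with the private keys hardware-protected, none of which is modeled here.

# Minimal sketch: Full Unified Model pattern (two DH secrets fed into a KDF).
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Static (long-term) and ephemeral key pairs for parties A and B.
a_static, a_eph = X25519PrivateKey.generate(), X25519PrivateKey.generate()
b_static, b_eph = X25519PrivateKey.generate(), X25519PrivateKey.generate()

def unified_model_key(my_static, my_eph, peer_static_pub, peer_eph_pub):
    # Z = Z_e || Z_s: ephemeral-ephemeral secret followed by static-static secret.
    z = my_eph.exchange(peer_eph_pub) + my_static.exchange(peer_static_pub)
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"full-unified-model-demo").derive(z)

k_a = unified_model_key(a_static, a_eph, b_static.public_key(), b_eph.public_key())
k_b = unified_model_key(b_static, b_eph, a_static.public_key(), a_eph.public_key())
assert k_a == k_b  # both sides derive the same session key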
We study the detection and delay performance impacts of a feature-based physical layer authentication (PLA) protocol in mission-critical machine-type communication (MTC) networks. The PLA protocol uses generalized likelihood-ratio testing based on the line-of-sight (LOS), single-input multiple-output channel-state information in order to mitigate impersonation attempts from an adversary node. We study the detection performance, develop a queueing model that captures the delay impacts of erroneous decisions in the PLA (i.e., false alarms and missed detections), and model three different adversary strategies: data injection, disassociation, and Sybil attacks. Our main contribution is the derivation of analytical delay performance bounds that allow us to quantify the delay introduced by PLA, which can potentially degrade performance in mission-critical MTC networks. For the delay analysis, we utilize tools from stochastic network calculus. Our results show that with a sufficient number of receive antennas (approx. 4-8) and sufficiently strong LOS components from legitimate devices, PLA is a viable option for securing mission-critical MTC systems, despite the low latency requirements associated with the corresponding use cases. Furthermore, we find that PLA can be very effective in detecting the considered attacks; in particular, it can significantly reduce the delay impacts of disassociation and Sybil attacks.
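The following sketch illustrates the basic PLA idea: compare newly observed SIMO channel-state information against a stored reference for the legitimate device and reject transmissions that deviate too much. It uses a simple normalized-distance test as a stand-in for the paper's generalized likelihood-ratio test, and all numbers are made up.

# Minimal sketch: feature-based PLA as a channel-deviation test.
import numpy as np

rng = np.random.default_rng(0)
n_antennas = 8

# Reference (LOS-dominated) channel vector learned for the legitimate device.
h_ref = rng.normal(size=n_antennas) + 1j * rng.normal(size=n_antennas)

def authenticate(h_obs, threshold=0.5):
    # Accept if the normalized deviation from the reference channel is small.
    deviation = np.linalg.norm(h_obs - h_ref) / np.linalg.norm(h_ref)
    return deviation < threshold

# Legitimate transmission: reference channel plus small estimation noise.
h_legit = h_ref + 0.1 * (rng.normal(size=n_antennas) + 1j * rng.normal(size=n_antennas))
# Impersonation attempt: an unrelated channel realization.
h_attack = rng.normal(size=n_antennas) + 1j * rng.normal(size=n_antennas)

print("legitimate accepted:", authenticate(h_legit))
print("attacker accepted:  ", authenticate(h_attack))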
Modern browsers give access to several attributes that can be collected to form a browser fingerprint. Although browser fingerprints have primarily been studied as a web tracking tool, they can help improve the current state of web security by augmenting web authentication mechanisms. In this paper, we investigate the adequacy of browser fingerprints for web authentication. We draw the link between the digital fingerprints that distinguish browsers and the biological fingerprints that distinguish humans, and evaluate browser fingerprints according to properties inspired by biometric authentication factors. These properties include their distinctiveness, their stability through time, their collection time, their size, and the accuracy of a simple verification mechanism. We assess these properties on a large-scale dataset of 4,145,408 fingerprints composed of 216 attributes and collected from 1,989,365 browsers. We show that, by time-partitioning our dataset, more than 81.3% of our fingerprints are shared by a single browser. Although browser fingerprints are known to evolve, an average of 91% of the attributes of our fingerprints stay identical between two observations, even when separated by nearly 6 months. Regarding their performance, we show that our fingerprints weigh about a dozen kilobytes and take a few seconds to collect. Finally, by running a simple verification mechanism, we show that it achieves an equal error rate of 0.61%. We enrich our results with an analysis of the correlation between the attributes and of their contribution to the evaluated properties. We conclude that our browser fingerprints carry the promise of strengthening web authentication mechanisms.
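A simple verification mechanism of the kind evaluated above can be sketched as an attribute-matching check between the fingerprint stored at enrollment and the one presented at login. The attribute names and the 90% acceptance threshold below are illustrative assumptions, not the paper's actual mechanism.

# Minimal sketch: accept a returning browser if enough fingerprint attributes match.
def matches(stored: dict, presented: dict, threshold: float = 0.9) -> bool:
    shared = set(stored) & set(presented)
    if not shared:
        return False
    identical = sum(stored[a] == presented[a] for a in shared)
    return identical / len(shared) >= threshold

enrolled = {"userAgent": "Mozilla/5.0 ...", "timezone": "UTC+1",
            "canvasHash": "ab12", "fonts": "Arial,Verdana"}
returning = dict(enrolled, canvasHash="cd34")  # one attribute drifted since enrollment

print("accepted:", matches(enrolled, returning))  # 3/4 attributes match -> rejected at 0.9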