
An Empirical Study of the Cost of DNS-over-HTTPS

Added by Gareth Tyson
Publication date: 2019
Language: English





DNS is a vital component for almost every networked application. Originally it was designed as an unencrypted protocol, making user security a concern. DNS-over-HTTPS (DoH) is the latest proposal to make name resolution more secure. In this paper we study the current DNS-over-HTTPS ecosystem, especially the cost of the additional security. We start by surveying the current DoH landscape by assessing standard compliance and supported features of public DoH servers. We then compare different transports for secure DNS, to highlight the improvements DoH makes over its predecessor, DNS-over-TLS (DoT). These improvements explain in part the significantly larger take-up of DoH in comparison to DoT. Finally, we quantify the overhead incurred by the additional layers of the DoH transport and their impact on web page load times. We find that these overheads only have limited impact on page load times, suggesting that it is possible to obtain the improved security of DoH with only marginal performance impact.
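To make the transport concrete, here is a minimal sketch of a single timed DoH lookup. It uses Cloudflare's public JSON API as an example endpoint; the server choice, the JSON wire format (rather than the binary application/dns-message format the standard defines), and the timing method are illustrative choices, not the paper's measurement methodology.

```python
# Minimal sketch: time one DNS-over-HTTPS lookup via a public JSON API.
# The endpoint and record type are example choices, not the paper's setup.
import time
import requests

DOH_URL = "https://cloudflare-dns.com/dns-query"  # any JSON-capable DoH server

def doh_lookup(name, rtype="A"):
    start = time.perf_counter()
    resp = requests.get(
        DOH_URL,
        params={"name": name, "type": rtype},
        headers={"accept": "application/dns-json"},
        timeout=5,
    )
    resp.raise_for_status()
    elapsed_ms = (time.perf_counter() - start) * 1000
    answers = [a["data"] for a in resp.json().get("Answer", [])]
    return answers, elapsed_ms

if __name__ == "__main__":
    answers, ms = doh_lookup("example.com")
    print(f"answers={answers} latency={ms:.1f} ms")
```

Note that the first request on a fresh session also pays for TCP and TLS connection setup; reusing the underlying connection amortises those layers, which is one of the transport-level effects the paper quantifies.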



Related research

We quantify, over inter-continental paths, the ageing of TCP packets, throughput, and delay for a mix of loss-based, delay-based, and hybrid TCP congestion control algorithms. By comparing these TCP variants to ACP+, an improvement over ACP, we shed better light on the ability of ACP+ to deliver timely updates over fat pipes and long paths. ACP+ estimates the network conditions on the end-to-end path and adapts the rate of status updates to minimize age. It achieves an average age similar to that of the best-performing (age-wise) TCP algorithm, but at end-to-end throughputs that are two orders of magnitude smaller. We also quantify the significant improvements that ACP+ brings to age control over a shared multi-access channel.
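For readers unfamiliar with the age metric underlying this comparison, the sketch below computes the time-average age of information from a log of (generation, delivery) timestamps using the standard sawtooth/trapezoid construction; the sample log and the assumption of in-order delivery with a fresh update at t=0 are invented for illustration.

```python
# Hedged sketch: time-average age of information from (generation, delivery)
# timestamps of status updates, assuming in-order delivery. The sample log
# is invented for illustration.

def average_age(updates, horizon):
    """updates: sorted list of (gen_time, delivery_time); horizon: end of window."""
    area = 0.0
    last_delivery, last_gen = 0.0, 0.0  # assume a fresh update at t=0
    for gen, dly in updates:
        # Age grows linearly from (last_delivery - last_gen) until this delivery.
        a0 = last_delivery - last_gen
        a1 = dly - last_gen
        area += 0.5 * (a0 + a1) * (dly - last_delivery)  # trapezoid area
        last_delivery, last_gen = dly, gen
    # Tail segment from the final delivery to the end of the window.
    a0 = last_delivery - last_gen
    a1 = horizon - last_gen
    area += 0.5 * (a0 + a1) * (horizon - last_delivery)
    return area / horizon

print(average_age([(0.1, 0.4), (0.5, 0.9), (1.2, 1.6)], horizon=2.0))
```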
Virtually every Internet communication involves a Domain Name System (DNS) lookup for the destination server that the client wants to communicate with. Operators of DNS recursive resolvers (the machines that receive a client's query for a domain name and resolve it to a corresponding IP address) can learn significant information about client activity. Past work, for example, indicates that DNS queries reveal information ranging from web browsing activity to the types of devices that a user has in their home. Recognizing the privacy vulnerabilities associated with DNS queries, various third parties have created alternate DNS services that obscure a user's DNS queries from his or her Internet service provider. Yet these systems merely transfer trust to a different third party. We argue that no single party ought to be able to associate DNS queries with the client IP address that issues those queries. To this end, we present Oblivious DNS (ODNS), which introduces an additional layer of obfuscation between clients and their queries. To do so, ODNS uses its own authoritative namespace; the authoritative servers for the ODNS namespace act as recursive resolvers for the DNS queries that they receive, but they never see the IP addresses of the clients that initiated those queries. We present an initial deployment of ODNS; our experiments show that ODNS introduces minimal performance overhead, both for individual queries and for web page loads. We designed ODNS to be compatible with existing DNS protocols and infrastructure, and we are actively working on an open standard with the IETF.
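A rough picture of the obfuscation step, under stated assumptions: the client hides the real query name inside an opaque label beneath the ODNS namespace, so a recursive resolver sees only the blob while the ODNS authoritative server can recover the name. The sketch below uses symmetric Fernet encryption (from the third-party cryptography package) and a hypothetical odns.example zone purely for illustration; the actual ODNS design encrypts a session key under the ODNS server's public key.

```python
# Hedged sketch of an ODNS-style name transformation: the recursive resolver
# only ever sees an opaque label under the ODNS zone. Fernet stands in for
# ODNS's hybrid public-key scheme purely for illustration.
import base64
from cryptography.fernet import Fernet  # third-party: pip install cryptography

ODNS_SUFFIX = "odns.example"          # hypothetical ODNS authoritative zone
session_key = Fernet.generate_key()   # in real ODNS, sent encrypted under the
fernet = Fernet(session_key)          # ODNS server's public key

def obfuscate(qname: str) -> str:
    blob = fernet.encrypt(qname.encode())
    label = base64.b32encode(blob).decode().rstrip("=").lower()
    # DNS labels are limited to 63 bytes, so long blobs are split across labels.
    labels = [label[i:i + 63] for i in range(0, len(label), 63)]
    return ".".join(labels) + "." + ODNS_SUFFIX

def deobfuscate(obfuscated: str) -> str:
    label = "".join(obfuscated[: -len(ODNS_SUFFIX) - 1].split("."))
    pad = "=" * (-len(label) % 8)
    blob = base64.b32decode(label.upper() + pad)
    return fernet.decrypt(blob).decode()

wire_name = obfuscate("www.example.com")
print(wire_name)               # what the recursive resolver sees
print(deobfuscate(wire_name))  # what the ODNS authoritative server recovers
```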
A Direct Numerical Simulation (DNS) of the incompressible flow around a rectangular cylinder with chord-to-thickness ratio 5:1 (also known as the BARC benchmark) is presented. The work replicates the first DNS of this kind, recently presented by Cimarelli et al. (2018), and intends to contribute to a solid numerical benchmark, albeit at a relatively low value of the Reynolds number. The study differentiates itself from previous work by using an in-house finite-difference solver instead of the finite-volume toolbox OpenFOAM, and by employing a finer spatial discretization and a longer temporal average. The main features of the flow are described, and quantitative differences with the existing results are highlighted. The complete set of terms appearing in the budget equation for the components of the Reynolds stress tensor is provided for the first time. The different regions of the flow where production, redistribution, and dissipation of each component take place are identified, and the anisotropic and inhomogeneous nature of the flow is discussed. Such information is valuable for the verification and fine-tuning of turbulence models in this complex separating and reattaching flow.
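For reference, the budget mentioned above is the transport equation for the Reynolds stresses. In standard incompressible notation (a textbook form, not reproduced from the paper) it reads:

```latex
% Transport (budget) equation for the Reynolds stress tensor
% \overline{u_i' u_j'} in incompressible flow (standard textbook form):
\frac{\partial \overline{u_i' u_j'}}{\partial t}
+ \overline{u}_k \frac{\partial \overline{u_i' u_j'}}{\partial x_k}
= \underbrace{-\,\overline{u_i' u_k'}\,\frac{\partial \overline{u}_j}{\partial x_k}
              -\,\overline{u_j' u_k'}\,\frac{\partial \overline{u}_i}{\partial x_k}}_{\text{production } P_{ij}}
+ \underbrace{\overline{\frac{p'}{\rho}\left(
      \frac{\partial u_i'}{\partial x_j}
    + \frac{\partial u_j'}{\partial x_i}\right)}}_{\text{redistribution } \Pi_{ij}}
- \underbrace{2\nu\,\overline{\frac{\partial u_i'}{\partial x_k}
      \frac{\partial u_j'}{\partial x_k}}}_{\text{dissipation } \varepsilon_{ij}}
+ \underbrace{\frac{\partial}{\partial x_k}\left(
      \nu \frac{\partial \overline{u_i' u_j'}}{\partial x_k}
    - \overline{u_i' u_j' u_k'}
    - \frac{\overline{p' u_i'}}{\rho}\,\delta_{jk}
    - \frac{\overline{p' u_j'}}{\rho}\,\delta_{ik}\right)}_{\text{viscous, turbulent and pressure transport}}
```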
Because of its important role in shaping health policy, population health monitoring (PHM) is considered a fundamental building block for public health services. However, traditional public health data collection approaches, such as clinic-visit-based data integration or health surveys, can be very costly and time-consuming. To address this challenge, this paper proposes a cost-effective approach called Compressive Population Health (CPH), in which a subset of regions within a given area is selected for data collection in the traditional way, while the inherent spatial correlations of neighboring regions are leveraged to infer the data for the rest of the area. By alternating the selected regions longitudinally, this approach can validate and correct previously assessed spatial correlations. To verify whether the idea of CPH is feasible, we conduct an in-depth study based on spatiotemporal morbidity rates of chronic diseases in more than 500 regions around London over more than ten years. We introduce our CPH approach and present three extensive analytical studies. The first confirms that significant spatiotemporal correlations do exist. In the second study, by deploying multiple state-of-the-art data recovery algorithms, we verify that these spatiotemporal correlations can be leveraged to infer data accurately using only a small number of samples. Finally, we compare different methods of region selection for traditional data collection and show how such methods can further reduce the overall cost while maintaining high PHM quality.
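The inference step at the heart of CPH can be pictured with a simple stand-in: observe morbidity rates in a sampled subset of regions and estimate the remainder from spatial neighbours. The k-nearest-neighbour interpolation below is only an illustrative substitute for the state-of-the-art recovery algorithms the study actually deploys, and the region coordinates and rates are synthetic.

```python
# Hedged sketch of CPH-style inference: survey a subset of regions, infer
# the rest from spatially nearby observed regions. kNN interpolation is a
# stand-in for the recovery algorithms used in the study; data is synthetic.
import numpy as np

rng = np.random.default_rng(0)
coords = rng.uniform(0, 10, size=(500, 2))  # synthetic region centroids
true_rates = np.sin(coords[:, 0]) + 0.1 * rng.normal(size=500)  # spatially smooth

sampled = rng.choice(500, size=100, replace=False)  # regions surveyed directly
mask = np.zeros(500, dtype=bool)
mask[sampled] = True

def knn_infer(coords, rates, mask, k=5):
    est = rates.copy()
    obs_idx = np.flatnonzero(mask)
    for i in np.flatnonzero(~mask):
        d = np.linalg.norm(coords[obs_idx] - coords[i], axis=1)
        nearest = obs_idx[np.argsort(d)[:k]]
        est[i] = rates[nearest].mean()  # average of k nearest observed regions
    return est

est = knn_infer(coords, true_rates, mask)
err = np.abs(est[~mask] - true_rates[~mask]).mean()
print(f"mean absolute error on unobserved regions: {err:.3f}")
```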
Sociological studies of transnational migration are often based on surveys or interviews, an expensive and time-consuming approach. On the other hand, the pervasiveness of mobile phones and location-aware social networks has introduced new ways to understand human mobility patterns at a national or global scale. In this work, we leverage geo-located information obtained from Twitter to understand transnational migration patterns between two border cities (San Diego, USA and Tijuana, Mexico). We obtained 10.9 million geo-located tweets from December 2013 to January 2015. Our method infers human mobility by inspecting tweet submissions and users' home locations. Our results depict a transnational community structure that exhibits the formation of a functional metropolitan area physically transcending international borders. These results show the potential for re-analysing sociological phenomena from a technology-based empirical perspective.
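A drastically simplified version of this inference might look as follows: take each user's home to be the city they tweet from most often, and count a cross-border transition whenever consecutive tweets fall in different cities. The tweet records and rules below are invented for illustration; the paper's actual method is more involved.

```python
# Hedged sketch of mobility inference from geo-located tweets: infer each
# user's home as their modal city, then count cross-city transitions
# between consecutive tweets. Records are invented for illustration.
from collections import Counter

tweets = [  # (user, timestamp, city) -- toy data
    ("u1", 1, "San Diego"), ("u1", 2, "Tijuana"), ("u1", 3, "San Diego"),
    ("u2", 1, "Tijuana"), ("u2", 2, "Tijuana"), ("u2", 3, "San Diego"),
]

def analyse(tweets):
    by_user = {}
    for user, ts, city in sorted(tweets, key=lambda t: (t[0], t[1])):
        by_user.setdefault(user, []).append(city)
    homes, crossings = {}, Counter()
    for user, cities in by_user.items():
        homes[user] = Counter(cities).most_common(1)[0][0]  # modal city = home
        for a, b in zip(cities, cities[1:]):
            if a != b:
                crossings[(a, b)] += 1  # consecutive tweets in different cities
    return homes, crossings

homes, crossings = analyse(tweets)
print(homes)      # inferred home city per user
print(crossings)  # directed cross-city transition counts
```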
